Lecture Notes in Computer Science: Edited by G. Goos, J. Hartmanis and J. Van Leeuwen
Structure versus Automata
Springer
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands
Volume Editors
Faron Moller
Department of Teleinformatics, Kungl Tekniska Högskolan
Electrum 204, S-164 40 Kista, Sweden
Graham Birtwistle
School of Computer Studies, University of Leeds
Woodhouse Road, Leeds LS2 9JT, United Kingdom
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are
liable for prosecution under the German Copyright Law.
© Springer-Verlag Berlin Heidelberg 1996
Printed in Germany
Typesetting: Camera-ready by author
SPIN 10512588 06/3142 - 5 4 3 2 1 0 Printed on acid-free paper
Preface
This volume is a result of the VIIIth BANFF HIGHER ORDER WORKSHOP held
from August 27th to September 3rd, 1994, at the Banff Centre in Banff, Canada.
The aim of this annual workshop (of which the VIIIth was the final) was to
gather together researchers studying a specific, well-focussed topic, to present
and contrast various approaches to the problems in their area. The workshop
has been locally organised and hosted in Banff by Graham Birtwistle, at that
time Professor of Computer Science at the University of Calgary, but currently
Professor of Formal Methods at the University of Leeds.
Originally the topics were chosen to reflect some aspect of higher-order rea-
soning, thus justifying the name of the workshop series, but the topics were
allowed to diversify more and more as the years passed, so that the higher-order
aspect became less and less adhered to. Thus for example, the previous three
workshops were subtitled Functional Programming Research (1991, chaired by
John Hughes, Glasgow); Advanced Tutorials on Process Algebra (1992, chaired
by Faron Moller, Edinburgh); and Advanced Tutorials on Asynchronous Hard-
ware Design (1993, chaired by Al Davis, HP Labs, Palo Alto).
The final workshop, held in 1994, was subtitled Logics for Concurrency:
Structure versus Automata and was chaired by Faron Moller, Stockholm. The
basic motivation for the workshop was to explore the apparent dichotomy which
exists in the area of process logics, particularly in the study of various temporal
logics. On the one hand, the traditional approach has exploited automata-
theoretic techniques which have been studied for decades; this approach is dom-
inant in research carried out in North America. On the other hand, the "Eu-
rotheory" approach is based more on exploiting structural properties involving
aspects such as congruence and decomposability. The relaxed workshop format
of having a set of three lectures from each of five speakers spread over eight days
allowed this dichotomy to be revealed, dissected, and discussed in detail, provid-
ing for a great deal of friendly debate between proponents of each school. The
five series of lectures were presented by Samson Abramsky (London); E. Allen
Emerson (Austin); Yoram Hirshfeld (Tel Aviv) and Faron Moller (Stockholm);
Colin Stirling (Edinburgh); and Moshe Vardi (Rice).
The proceedings of the workshop series have generally only been available
informally; indeed they have usually only been informal documents, sometimes
nothing more than photocopies of slides. However, as the final workshop so
successfully met its goal of creating an environment for contrasting approaches,
it was deemed a worthy exercise by all of the lecturers to provide post-workshop
tutorial-style lecture notes, which explored the individual presentations with the
benefit of hindsight, so as to provide a record of the presentations and discussions
carried out at the workshop.
Introduction ............................................................. 1
An Automata-Theoretic Approach to Linear Temporal Logic
MOSHE Y. VARDI ....................................................... 238
1 Introduction ...................................................... 238
2 Automata Theory .................................................. 239
2.1 Automata on Finite Words - Closure .......................... 239
2.2 Automata on Infinite Words - Closure ........................ 241
2.3 Automata on Finite Words - Algorithms ....................... 245
2.4 Automata on Infinite Words - Algorithms ..................... 247
2.5 Automata on Finite Words - Alternation ...................... 248
2.6 Automata on Infinite Words - Alternation .................... 251
3 Linear Temporal Logic and Automata on Infinite Words ............ 253
4 Applications ..................................................... 256
4.1 Satisfiability ............................................... 256
4.2 Verification ................................................. 256
4.3 Synthesis .................................................... 258
References ........................................................... 263
Introduction
There has been a great deal of effort spent on developing methodologies for
specifying and reasoning about the logical properties of systems, be they hard-
ware or software. Of particular concern are issues involving the verification of
safety and liveness conditions, which may be of mammoth importance to the safe
functioning of a system. While there is a large body of techniques in existence
for sequential program verification, current technologies and programming lan-
guages permit for ever greater degrees of distributed computation, which make
for a substantial increase in both the conceptual complexity of systems, as well
as the complexity of the techniques for their analysis.
One fruitful line of research has involved the development of process logics.
Although Hoare-style proof systems have been successful for reasoning about
sequential systems, they have had only minor impact when primitives for ex-
pressing concurrent computation are introduced. Of greater importance in this
instance are, for example, dynamic, modal, and temporal logics. Such logics
express capabilities available to a computation at various points of time during
its execution. Two major varieties of such logics exist: linear-time logics express
properties of complete runs, whilst branching-time logics permit a finer consid-
eration of when choices within computation paths are made. The study of such
logics is now quite broad and detailed.
There is, however, a perceived dichotomy which exists in the area of tem-
poral logic research; a divide which appears to be defined to a large extent by
geography (though, as with all things, this division is far from clear-cut). On
the one hand, the North American approach has exploited automata-theoretic
techniques which have been studied for decades. The benefit of this is the abil-
ity to adapt theoretical techniques from traditional automata theory. Hence, for
example, to verify that a particular system satisfies a particular property, one
would construct an automaton which represents in essence the cartesian prod-
uct of the system in question with the negation of the property in question, and
thus reduce the problem to one of emptiness of the language generated by an
automaton. One obvious drawback to this approach is the overhead incurred
in constructing such an automaton: even if the system being analysed can be
represented by a finite-state automaton, this automaton will typically be of a
size which is exponential in its description (due to the so-called state space ex-
plosion problem), and hence this approach to the verification problem will be (at
least) exponential in the worst case. Of course this approach is inapplicable to
infinite-state systems unless symbolic techniques are first introduced for encoding
infinite sets of states efficiently.
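The product-and-emptiness reduction just described can be sketched concretely for safety properties, where automata on finite words suffice. The following is a minimal illustration, not taken from any of the contributions (the representation and all names are our own): transitions are dictionaries, and emptiness of the product language reduces to a breadth-first reachability check.

```python
from collections import deque

def product_accepting(sys_trans, sys_init, bad_trans, bad_init, bad_accept):
    """Explore the synchronous product of a system with an automaton for
    the NEGATED property; return the reachable product states that are
    accepting for the negated property.  The property holds of the
    system iff the returned list is empty (language emptiness)."""
    init = (sys_init, bad_init)
    seen, frontier = {init}, deque([init])
    while frontier:
        s, b = frontier.popleft()
        for action, s2 in sys_trans.get(s, {}).items():
            if action in bad_trans.get(b, {}):
                nxt = (s2, bad_trans[b][action])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return [state for state in seen if state[1] in bad_accept]

# Toy system: a light that may be switched on repeatedly.
sys_t = {'off': {'on': 'on'}, 'on': {'off': 'off', 'on': 'on'}}
# Automaton for the BAD behaviour "two 'on' actions in a row".
bad_t = {0: {'on': 1, 'off': 0}, 1: {'on': 2, 'off': 0}, 2: {}}
witnesses = product_accepting(sys_t, 'off', bad_t, 0, {2})
```

Here `witnesses` is non-empty, exhibiting the exponential product at work on a tiny scale; a system forced to interleave 'off' between 'on's yields an empty product and hence satisfies the property. For properties of infinite computations one would use automata on infinite words instead, as in Vardi's contribution.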
On the other hand, the "Eurotheory" approach to the verification problem
is based more on exploiting structural properties. The essence of this approach
is to work as far as possible with the syntactic system and property descriptions
rather than their semantic interpretations as automata. Typically this involves
the development of tableau-based proof systems which exploit congruence and
decomposition properties. This allows more of a so-called "local" model checking
technique which feasibly could handle even infinite-state systems, as the overhead
of constructing the relevant semantic automata is avoided.
Whichever approach is considered, there are common goals which are stressed.
Firstly and of utmost importance, the methodology must be faithful to the sys-
tem descriptions being modelled, as well as to the intended properties being
defined. The methodology must be sound with respect to its intended pur-
pose. Of great importance as well is the ability to automate the techniques;
as economically- and safety-critical systems being developed become more and
more complex, the need for automated support for their verification becomes an
imperative of ever greater importance. Mechanical procedures for carrying out
the verification of such properties are required due to the sheer magnitude of the
systems to be verified. Beyond this, such algorithms must be tractable. There
will clearly be trade-offs between the expressivity of a formalism employed to
define system properties, and the complexity of the verification of these proper-
ties. The verification procedures, beyond being sound, must be effective, as well
as of an acceptable level of time- and space-complexity.
We finish this introduction with a brief synopsis of each of the five contributions
appearing in this volume.
The paper by Samson Abramsky, Simon Gay, and Rajagopal Nagarajan,
Specification structures and propositions-as-types for concurrency, introduces the
reader via "Hoare triples" to specification structures, a framework for combin-
ing general notions of logical properties with general methods for verification.
These are then employed within the authors' earlier framework of interaction
categories for synchronous and asynchronous processes to provide a mechanism
for expressing properties. Interaction categories were introduced to unify trans-
formational and reactive programming paradigms, which respectively represent
general frameworks underlying functional programming and concurrent compu-
tation. The mechanism developed resembles the provision of a type system for
process algebras which can be used to support verification. As examples of
the application of the approach, the paper details how a specification structure
can be provided to permit the expression of deadlock-freedom. In the end, the
problem of the dining philosophers is used to great effect to demonstrate the
application of the techniques in practice.
The paper by E. Allen Emerson, Automated temporal reasoning about re-
active systems, presents a wide range of complexity and expressiveness results
for a number of linear- and branching-time temporal logics. The paper stresses
the trade-offs between the complexity of various algorithms for satisfiability and
model-checking, presenting essentially optimal automated algorithms for a num-
ber of such problems, and the expressibility of the different temporal logics. The
two varieties of algorithms, automata-theoretic and tableau-based, are explored
and interestingly contrasted. This in-depth overview is presented with an em-
phasis on underlying intuition, and includes an extensive bibliography for the
reader to follow up the topics discussed in far greater detail.
The paper by Yoram Hirshfeld and Faron Moller, Decidability results for au-
tomata and process theory, invites the reader to investigate the theory of context-
free grammars in a process-theoretic framework, where composition (juxtaposi-
tion of symbols) represents either sequential composition or parallel composition.
It is demonstrated how each of these interpretations leads to a familiar process
algebra. Many standard results are encountered, for example that regular gram-
mars give rise to finite-state automata. However, surprising results are
also encountered, particularly regarding the decidability of process
equivalence between automata denoted by general context-free grammars; these
results contrast strikingly with the well-known undecidability results in language
theory for context-free languages. Beyond this, non-trivial sub-classes are iden-
tified where tractable algorithms are proven to exist for the process equivalence
problem.
The paper by Colin Stirling, Modal and temporal logics for processes, moti-
vates and presents the modal mu-calculus, a very expressive temporal logic, as
a simple modal logic (Hennessy-Milner logic) equipped with fixed points for ex-
pressing safety and liveness properties. The presentation opens with a discussion
of processes and transition graphs, the model of concurrent computation over
which temporal properties are defined. Modal properties are then introduced
in the guise of expressing capabilities, and fixed points are then motivated for
defining perpetual properties. A novel feature of the presentation is the use of
games as an intuitive framework for explaining satisfiability, as well as for intro-
ducing tableau-based algorithms for the model-checking problem. One benefit
of this "local" model checking approach is its applicability to the verification of
properties of infinite-state processes, a theme which is explored in detail in the
presentation.
Finally, the paper by Moshe Vardi, An automata-theoretic approach to lin-
ear temporal logic, presents the reader with an introduction to the theory of
automata on infinite words by way of the more familiar theory of automata on
finite words, and demonstrates its use for analysing linear-time temporal prop-
erties of infinite computations. Its application to each of program specification,
verification, and synthesis is carefully motivated and demonstrated.
Specification Structures and
Propositions-as-Types for Concurrency*
Samson Abramsky, Simon Gay, and Rajagopal Nagarajan
Department of Computing,
Imperial College of Science, Technology and Medicine, London, UK.
email: {sa,sjg3,rn4}@doc.ic.ac.uk
1 Introduction
Type Inference and Verification are two main paradigms for constraining
the behaviour of programs in such a way as to guarantee some desirable
properties. Although they are generally perceived as rather distinct, on
closer inspection it is hard to make any very definite demarcation be-
tween them; type inference rules shade into compositional proof rules for
a program logic. Indeed, type inference systems, even for the basic case
of functional programming languages, span a broad spectrum in terms of
expressive power. Thus, ML-style types [31] are relatively weak as regards
expressing behavioural constraints, but correspondingly tractable as re-
gards efficient algorithms for "type checking". System F types [21] are
considerably more expressive of polymorphic behaviour, and System F
typing guarantees Strong Normalization. However, System F cannot ex-
press the fact that a program of type list[nat] → list[nat] is actually a sort-
ing function. Martin-Löf type theory, with dependent types and equality
types, can express complete total correctness specifications. In the richer
theories, type checking is undecidable [35].
* This research was supported by EPSRC project "Foundational Structures in Com-
puter Science", and EU projects "CONFER" (ESPRIT BRA 6454) and "COORDI-
NATION" (ESPRIT BRA 9102).
One might try to make a methodological distinction: post-hoc verifi-
cation vs. constructions with intrinsic properties. However, this is more
a distinction between ways in which Type Inference/Verification can be
deployed than between these two formal paradigms.
We suggest that it is the rule rather than the exception that there are
many different notions of "properties of interest" for a given computa-
tional setting. Some examples:
The idea behind the picture is that we have a semantic universe (category
with structure) C₀, suitable for modelling some computational situation,
but possibly carrying only a very rudimentary notion of "type" or "be-
havioural specification". The tower arises by refining C₀ with richer kinds
of property, so that we obtain a progressively richer setting for performing
specification and verification.¹
We will now proceed to formalize this idea of enriching a semantic
universe with a refined notion of property in terms of Specification Struc-
tures.
¹ Of course, non-linear patterns of refinement (trees or dags rather than sequences)
can also be considered, but the tower suffices to establish the main ideas.
2 Specification Structures
A specification structure S on a category C assigns to each object A of C
a set PA of properties, and to each morphism f : A → B and pair of
properties φ ∈ PA, ψ ∈ PB a "Hoare triple" judgement φ{f}ψ, subject to:

  φ{id_A}φ                              (s1)
  φ{f}ψ ∧ ψ{g}θ  ⟹  φ{f ; g}θ          (s2)

The axioms (s1) and (s2) are typed versions of the standard Hoare logic
axioms for "sequential composition" and "skip" [16].
Given C and S as above, we can define a new category Cs. The objects
are pairs (A, φ) with A ∈ Ob(C) and φ ∈ PA. A morphism f : (A, φ) →
(B, ψ) is a morphism f : A → B in C such that φ{f}ψ.
Composition and identities are inherited from C; the axioms (s1) and
(s2) ensure that Cs is a category. Moreover, there is an evident faithful
functor

  Cs → C

given by

  (A, φ) ↦ A.
In fact, the notion of "specification structure on C" is coextensive with
that of "faithful functor into C". Indeed, given such a functor F : D → C,
we can define a specification structure by:

  PA = {α ∈ Ob(D) | F(α) = A}
  φ{f}ψ  ⟺  ∃g : φ → ψ. F(g) = f

(by faithfulness, g is unique if it exists). It is easily seen that this passage
from faithful functors to specification structures is (up to equivalence)
inverse to that taking S to the functor Cs → C.
A more revealing connection with standard notions is yielded by the
observation that specification structures on C correspond exactly to lax
functors from C to Rel, the category of sets and relations. Indeed, given
a specification structure S on C, the object part of the corresponding
functor R : C → Rel is given by P, while for the arrow part we define

  R(f) = {(φ, ψ) | φ{f}ψ}.

Then (s1) and (s2) become precisely the statement that R is a lax functor
with respect to the usual order-enrichment of Rel by inclusion of relations:

  id_{R(A)} ⊆ R(id_A)
  R(f) ; R(g) ⊆ R(f ; g).

Moreover, the functor Cs → C is the lax fibration arising from the
Grothendieck construction applied to R.
The notion of specification structure acquires more substance when
there is additional structure on C which should be lifted to Cs. Suppose
for example that C is a monoidal category, i.e. there is a bifunctor ⊗ :
C × C → C, an object I, and natural isomorphisms

  assoc_{A,B,C} : (A ⊗ B) ⊗ C ≅ A ⊗ (B ⊗ C)
  unitl_A : I ⊗ A ≅ A
  unitr_A : A ⊗ I ≅ A.

Then we require actions

  ⊗_{A,B} : PA × PB → P(A ⊗ B)

and an element u ∈ PI satisfying, for f : A → B, f′ : A′ → B′ and
properties φ, φ′, ψ, ψ′, θ over suitable objects:

  φ{f}ψ ∧ φ′{f′}ψ′  ⟹  φ ⊗ φ′{f ⊗ f′}ψ ⊗ ψ′.

Suppose further that C is monoidal closed, with

  C(A ⊗ B, C) ≅ C(A, B ⊸ C).

Then we require an action

  ⊸_{A,B} : PA × PB → P(A ⊸ B)

and axioms

  (φ ⊸ ψ) ⊗ φ{ev_{A,B}}ψ
  φ ⊗ ψ{f}θ  ⟹  φ{Λ(f)}ψ ⊸ θ.
Going one step further, suppose that C is a ∗-autonomous category, i.e.
a model for the multiplicative fragment of classical linear logic [11], with
linear negation (−)^⊥, where for simplicity we assume that A^⊥⊥ = A.
Then we require an action

  (−)^⊥_A : PA → P(A^⊥)

satisfying

  φ^⊥⊥ = φ
  φ ⊸ ψ = (φ ⊗ ψ^⊥)^⊥.

Under these circumstances all this structure on C lifts to Cs. For example,
we define

1. C = Set, PX = X, a{f}b ≡ f(a) = b.
   In this case, Cs is the category of pointed sets.
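This first example is small enough to run. A sketch (our illustration, with hypothetical function names) of the Hoare-triple relation a{f}b ≡ f(a) = b, together with checks of axioms (s1) and (s2) on finite data:

```python
def holds(a, f, b):
    """The Hoare-triple relation of the example: a{f}b iff f(a) == b."""
    return f(a) == b

# (s1): a{id}a for every point a.
ident = lambda x: x
assert all(holds(a, ident, a) for a in range(5))

# (s2): a{f}b and b{g}c imply a{f;g}c, where f;g is composition
# in diagrammatic order.
f = lambda x: x + 1
g = lambda x: 2 * x
f_then_g = lambda x: g(f(x))          # f ; g
a, b, c = 3, f(3), g(f(3))
assert holds(a, f, b) and holds(b, g, c) and holds(a, f_then_g, c)
```

A morphism of Cs from (X, a) to (Y, b) is then exactly a function f with f(a) = b, i.e. a point-preserving map, which is why Cs is the category of pointed sets.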
2. C = Rel, PX = 𝒫(X), S{R}T ≡ ∀x ∈ S. {y | xRy} ⊆ T.
   This is essentially a typed version of dynamic logic [25], with the
   "Hoare triple relation" specialized to its original setting. If we take

     S ⊗ T = S × T
     S^⊥ = X \ S

   then Cs becomes a model of Classical Linear Logic.
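The dynamic-logic example can likewise be executed on finite relations. In this sketch (the representation is ours) a relation is a set of pairs, and S{R}T demands that every R-successor of a state in S lies in T; axiom (s2) then becomes the familiar sequencing rule for Hoare triples:

```python
def post(R, S):
    """Relational image: the set of y with x R y for some x in S."""
    return {y for (x, y) in R if x in S}

def triple(S, R, T):
    """S{R}T iff every R-successor of a state in S lies in T."""
    return post(R, S) <= T

X = {0, 1, 2, 3}
R = {(0, 1), (1, 2)}               # a small "program" as a relation on X
Q = {(2, 3)}
assert triple({0}, R, {1})         # from 0, R reaches only 1
assert not triple({0, 1}, R, {1})  # from 1 we can also reach 2

# Sequential composition R;Q validates the (s2) axiom.
RQ = {(x, z) for (x, y) in R for (y2, z) in Q if y == y2}
assert triple({1}, R, {2}) and triple({2}, Q, {3}) and triple({1}, RQ, {3})
```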
3. C = Rel, PX = {C ⊆ X² | C = C^op, C ∩ id_X = ∅},
   C{R}D ≡ (xCx′ ∧ xRy ∧ x′Ry′ ⟹ yDy′).

     C ⊗ D = {((x, x′), (y, y′)) | xCy ∧ x′Dy′}
     C^⊥ = X² \ (C ∪ id_X).
4. C = Set, PX = {s : ω → X | ∀x ∈ X. ∃n ∈ ω. s(n) = x},
   s{f}t ≡ ∃α : ω → ω. f ∘ s = t ∘ α.
5. C = the category of SFP domains,
   PD = KΩ(D) (the compact-open subsets of D),
   U{f}V ≡ U ⊆ f⁻¹(V).
This yields (part of) Domain Theory in Logical Form [2], the other
part arising from the local lattice-theoretic structure of the sets PD
and its interaction with the global type structure.
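On a finite carrier the SFP example's triple U{f}V ≡ U ⊆ f⁻¹(V) is just a weakest-precondition check. A throwaway sketch (names and carrier are ours, purely illustrative):

```python
def preimage(f, V, domain):
    """f^{-1}(V), computed over an explicit finite carrier."""
    return {x for x in domain if f(x) in V}

def triple(U, f, V, domain):
    """The SFP example's Hoare triple: U{f}V iff U is contained in f^{-1}(V)."""
    return U <= preimage(f, V, domain)

D = set(range(6))
f = lambda x: x // 2
assert triple({0, 1, 2, 3}, f, {0, 1}, D)   # f maps 0..3 into {0, 1}
assert not triple({4}, f, {0, 1}, D)        # f(4) = 2
```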
These examples show the scope and versatility of these notions. Let us
return to our picture of the tower of categories:
C₀ ↪ C₁ ↪ C₂ ↪ ··· ↪ C_k.
C_{i+1} = (C_i)_{S_{i+1}}.
Once this has been done, by whatever means (model checking, theo-
rem proving, manual verification, etc.), the morphism is now available
in Ck to participate in typing judgements there. In this way, a coher-
ent framework for combining methods, including both compositional and
non-compositional approaches, begins to open up.
We now turn to the specific applications of this framework which in
fact originally suggested it, in the setting of the first author's interaction
categories.
3 Interaction Categories
3.1 The Interaction Category SProc
In this section we briefly review the definition of SProc, the category of
synchronous processes. Because the present paper mainly concerns the use
of specification structures for deadlock-freedom, we omit the features of
SProc which will not be needed in later sections. More complete definitions
can be found elsewhere [1, 6, 18].
An object of SProc is a pair A = (Σ_A, S_A) in which Σ_A is an alphabet
(sort) of actions (labels) and S_A ⊆ Σ_A* is a safety specification, i.e. a
non-empty prefix-closed subset of Σ_A*. If A is an object of SProc, a process
of type A is a process P with sort Σ_A such that traces(P) ⊆ S_A. Our
notion of process is a labelled transition system, with strong bisimulation
  Σ_{A⊗B} = Σ_A × Σ_B
  S_{A⊗B} ≝ {σ ∈ Σ_{A⊗B}* | fst*(σ) ∈ S_A, snd*(σ) ∈ S_B}.
  p --(a,b)--> p′    q --(b,c)--> q′
  ----------------------------------
        p ; q --(a,c)--> p′ ; q′
At each step, the actions in the common type B have to match. The
processes being composed constrain each other's behaviour, selecting the
possibilities which agree in B.
Identities are given by

  a ∈ S_A
  ----------------
  id --(a,a)--> id
Proposition 2. SProc is a category.
Proof. The proof that composition is associative and that identities work
correctly uses a coinductive argument to show that suitable processes are
bisimilar. Full details can be found elsewhere [1, 6]. []
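On finite transition systems, the composition rule above is straightforward to implement. In the sketch below (the representation is ours, not the paper's) a process is a dictionary from states to a map {(a, b): next_state}, and a composite transition exists exactly when the middle components match:

```python
def compose(p_trans, q_trans, p0, q0):
    """Synchronous composition of two finite labelled transition systems,
    p over A x B and q over B x C.  A composite step (a, c) exists only
    when the B-components agree, as in the rule
    p --(a,b)--> p'   q --(b,c)--> q'   =>   p;q --(a,c)--> p';q'."""
    trans, frontier, seen = {}, [(p0, q0)], {(p0, q0)}
    while frontier:
        p, q = frontier.pop()
        steps = {}
        for (a, b), p2 in p_trans.get(p, {}).items():
            for (b2, c), q2 in q_trans.get(q, {}).items():
                if b == b2:                       # actions match in B
                    steps[(a, c)] = (p2, q2)
                    if (p2, q2) not in seen:
                        seen.add((p2, q2))
                        frontier.append((p2, q2))
        trans[(p, q)] = steps
    return trans

# p relays x as y; q relays y as z: the composite relays x as z.
p = {'p': {('x', 'y'): 'p'}}
q = {'q': {('y', 'z'): 'q'}}
composed = compose(p, q, 'p', 'q')
```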
  P --a--> Q    a ∈ dom(f)
  ------------------------
    P[f] --f(a)--> Q[f]

For example, (a, a) ↦ ((∗, a), a) denotes the partial function which has the
indicated effect when its arguments are equal.
  unitl_A ≝ id_A[a ↦ ((∗, a), a)]
  unitr_A ≝ id_A[a ↦ ((a, ∗), a)]
  assoc_{A,B,C} ≝ id[(a, (b, c)) ↦ ((a, b), c)]
  symm_{A,B} ≝ id[(a, b) ↦ (b, a)]

If f : A ⊗ B → C then Λ(f) : A → (B ⊸ C) is defined by

  Λ(f) ≝ f[((a, b), c) ↦ (a, (b, c))].
a very useful typing rule which we call the multi-cut. (This is actually
Gentzen's MIX rule [19] but we avoid the use of this term since Girard
has used it for quite a different rule in the context of Linear Logic.)
The usual Cut Rule

  ⊢ Γ, A    ⊢ Δ, A^⊥
  ------------------
       ⊢ Γ, Δ

allows us to plug two modules together by an interface consisting of a
single "port" [5].
such as the Scheduler described in [30]. The problem with building a cycle
is at the last step where we have already connected
To connect
  ⊢ Γ, Δ    ⊢ Γ′, Δ^⊥
  -------------------
       ⊢ Γ, Γ′
Writing Γ^⊗ = C₁ ⊗ ··· ⊗ C_n for Γ = C₁, …, C_n, the rule is interpreted
by the composite morphism

  I  ≅  I ⊗ I                        (unit)
     →  (Γ^⊗ ⊗ A) ⊗ (Γ′^⊗ ⊗ A^⊥)    (f ⊗ g)
     ≅  (Γ^⊗ ⊗ Γ′^⊗) ⊗ (A ⊗ A^⊥)   (canonical isos)
     →  (Γ^⊗ ⊗ Γ′^⊗) ⊗ I           (evaluation)
     ≅  Γ^⊗ ⊗ Γ′^⊗                 (unit)
(Note that in a compact closed category I = ⊥ so A^⊥ = A ⊸ I.)
In the case where k = 1 this construction is the internalization of com-
position in the category (using the autonomous structure) so it properly
generalizes the standard interpretation of Cut. For some related notions
which have arisen in work on coherence in compact closed categories, see
[13, 24].
  p --(a,b)--> p′               q --(a,b)--> q′
  ---------------------        ---------------------
  p + q --(a,b)--> p′           p + q --(a,b)--> q′
For any objects A and B, there is a morphism nil : A → B which has no
transitions.
The non-deterministic sum and the nil morphisms exist for quite gen-
eral reasons: SProc has biproducts, and it is standard that this yields a
commutative monoid structure on every homset [26]. In the present pa-
per, we have defined + directly as we will not make any other use of the
products and coproducts.
We will not go into the applications of this property in the
present paper, except to mention that it supports guarded recursive def-
initions [1, 6, 18] and is an important part of a proposed axiomatisation
of interaction categories [18].
Apart from ○ there are two other delay functors: the initial delay δ
and the propagated delay Δ. These are the same as the operators used
by Milner [29, 30] to construct CCS from SCCS, and they can also be
used to construct asynchronous processes in the synchronous framework
of SProc. However, when analysing asynchronous problems it is much
more convenient to work in a different category, ASProc, which we will
define shortly. For this reason, we will give only the basic definitions of
the delay functors here, and not dwell on their properties.
The functors δ and Δ are defined on objects by

  Σ_{δA} ≝ 1 + Σ_A
  S_{δA} ≝ {τⁿσ | (n < ω) ∧ (σ ∈ S_A)}
  Σ_{ΔA} ≝ 1 + Σ_A
  S_{ΔA} ≝ {ε} ∪ {a₁ τ^{n₁} a₂ τ^{n₂} a₃ ⋯ | (nᵢ < ω) ∧ (a₁a₂a₃⋯ ∈ S_A)}
  δf --(τ,τ)--> δf        f --(a,b)--> f′ ⟹ δf --(a,b)--> f′
  f --(a,b)--> f′ ⟹ Δf --(a,b)--> δΔf′
Both of these functors are monads. Full details can be found elsewhere
[1, 6, 18].
3.3 ASProc as a Category
Just as in SProc, the morphisms are defined via the object part of the
∗-autonomous structure. Given objects A and B, the object A ⊗ B has

  p --(a,τ_B)--> p′                  q --(τ_B,c)--> q′
  ---------------------------       ---------------------------
  p ; q --(a,τ_C)--> p′ ; q          p ; q --(τ_A,c)--> p ; q′

  p --(a,b)--> p′    q --(b,c)--> q′
  ----------------------------------
        p ; q --(a,c)--> p′ ; q′
The first two rules allow either process to make a transition independently,
if no communication is required. The third rule allows the processes to
3.5 Time
In AVProc, t h e delay monads $ and A are less meaningful than in SProc,
since delay is built into all the definitions. But the unit delay functor 9
is still important. On objects it is defined by
def
7- 9 = rA
So A dJ V [ SA}.
4.1 The Synchronous Case
We shall now describe a specification structure D for SProc such that
SProc_D will be a category of deadlock-free processes, closed under all
the type constructions described above (and several more omitted in this
introductory account). This specification structure has a number of re-
markable features:
  p --a--> p′    q --a--> q′
  --------------------------
     p ⊓ q --a--> p′ ⊓ q′

We read p⇓ as "p is deadlock-free", i.e. it can never evolve into the nil
process.
An important point is that we need to restrict attention to those ob-
jects of SProc whose safety specifications do not force processes to dead-
lock. The object A is progressive if every trace in S_A can be properly
extended within S_A. By considering just the progressive objects, we can
be sure that there are deadlock-free processes of every type.
Finally, the key definition:

  p ⊥ q ≝ (p ⊓ q)⇓.

We can think of p ⊥ q as expressing the fact that "p passes the test q";
but note that ⊥ is symmetric, so the rôles of p and q, tester and testee,
can be interchanged freely.
Now we lift this symmetric relation to a self-adjoint Galois connection
on sets of processes in a standard fashion [14]:

  p ⊥ U ≝ ∀q ∈ U. p ⊥ q
  U^⊥ ≝ {p | p ⊥ U}
  U ⊆ U^⊥⊥
  U^⊥ = U^⊥⊥⊥
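The (−)^⊥ construction makes sense for any symmetric relation, and the standard Galois-connection laws U ⊆ U^⊥⊥ and U^⊥ = U^⊥⊥⊥ can be checked on a toy instance. In this sketch the orthogonality between processes is replaced by a stand-in relation on integers; nothing here is specific to the paper's definitions:

```python
def perp(U, universe, orth):
    """U-perp: the elements of the universe orthogonal to all of U."""
    return {p for p in universe if all(orth(p, q) for q in U)}

# Toy symmetric "orthogonality": two integers are orthogonal when
# their sum is even (a stand-in for (p || q) being deadlock-free).
universe = set(range(6))
orth = lambda p, q: (p + q) % 2 == 0

U = {2}
Up = perp(U, universe, orth)       # all the even numbers
Upp = perp(Up, universe, orth)
Uppp = perp(Upp, universe, orth)
assert U <= Upp                    # U is contained in its double perp
assert Up == Uppp                  # three perps collapse to one
```

The ⊥-closed sets (those with U = U^⊥⊥) are the candidates for "properties" in the deadlock-freedom specification structure.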
The Category FProc. The category FProc (fair processes) has objects
A = (Σ_A, T_A, S_A, F_A). The first three components of an object are exactly
as in ASProc. The fourth, F_A, is a subset of ObAct(A)^ω such that all finite
prefixes of any trace in F_A are in S_A. The interaction category operations
on objects are defined as in ASProc, with the addition that

  F_{A^⊥} ≝ F_A
  F_{A⊗B} ≝ {s ∈ ObAct(A ⊗ B)^ω | s↾A ∈ F_A, s↾B ∈ F_B}
  F_{○A} ≝ {τs | s ∈ F_A}.
A process in FProc is almost the same as a process in ASProc, except that
there now has to be a way of specifying which of the infinite traces of a
synchronisation tree are to be considered as actual infinite behaviours of
the process. This is done by working with pairs (P, T_P) in which P is an
ASProc process and ∅ ≠ T_P ⊆ infobtraces(P). Only the infinite traces
in T_P are viewed as behaviours of P, even though the tree P may have
many other infinite traces. There is a condition for this specification of
valid infinite traces to be compatible with transitions: if P --a--> Q then

  T_P ⊇ {as | s ∈ T_Q}.
A process of type A in FProc is a pair (P, T_P) as above, in which P
is a process of type (Σ_A, T_A, S_A) in ASProc, and T_P ⊆ F_A. Equivalence of
processes is defined by
  T_{f⊗g} ≝ {s ∈ infobtraces(f ⊗ g) |
              s↾(A, C) ∈ T_f, s↾(B, D) ∈ T_g, s ∈ F_{(A⊸B)⊗(C⊸D)}}.
  P --a--> P′    Q --a--> Q′
  --------------------------
     P ⊓ Q --a--> P′ ⊓ Q′

If P and Q have type A in FProc and T_P ∩ T_Q ≠ ∅, then P ⊓ Q can be
converted into an FProc process of type A by defining T_{P⊓Q} ≝ T_P ∩ T_Q.
Orthogonality is now defined by
  a ∈ ObAct(A)
  ------------------------
  max_A --a--> max_{A/a}

with T_{max_A} = F_A. Note that a process max_A could be defined in this way
for any FProc object A; max_A is simply the process which exhibits every
behaviour permitted by the safety specification S_A. In general max_A might
have deadlocking behaviours, but because we are working with progressive
objects, every safe trace can be extended indefinitely and so max_A never
terminates.
The process max_A is orthogonal to every convergent process of type
A: writing Proc(A) for the set of all convergent processes of type A, we
have max_A ⊥ Proc(A). In fact, Proc(A)^⊥ = {max_A}. Proc(A) is a valid
property over A, as is {max_A}, and they are mutually related by (−)^⊥.
The deadlock-free type (A, {max_A}) specifies an input port, because it
forces all possible actions to be accepted. The type (A, Proc(A)) specifies
an output, because any selection of actions is allowed. From now on,
we denote Proc(A) and {max_A} by out_A and in_A respectively, so that
in_A^⊥ = out_A and out_A^⊥ = in_A. It is not hard to prove
Proposition 10. out_A ⊗ out_B = out_{A⊗B}.
This result is very useful for applications, as we shall see in the next
section. Another useful fact is that if the safety specification of A is such
that in every state there is a unique allowable next action, then in_A =
out_A.
The deadlock-free categories SProc_D and FProc_D are not compact closed,
which means that the categorical structure no longer supports the con-
struction of arbitrary process networks. Any non-cyclic structure can be
constructed, using the fact that the category is ∗-autonomous, but addi-
tional proof rules are needed to form cycles.
Suppose that P : (Γ, U) ⅋ (X, V) ⅋ (X^⊥, V^⊥) in FProc_D. There is an
obvious condition that forming the cycle by connecting the X and X^⊥
ports should not cause a deadlock: that every trace s of P with s↾X = s↾X^⊥
can be extended by an action (ā, x, x) of P. The action x could be τ_X,
as it is permissible for the sequence of communications between the X
and X^⊥ ports to pause, or the action tuple ā could be τ̄_Γ, but not both.
Again, to obtain P : (F, U) in ~rocD it is also necessary to ensure that
the specification U can still be satisfied while the communication is taking
place.
The possibility of divergence does not have to be considered separately.
It is conceivable that P could have a non-deadlocking infinite behaviour in
which no observable actions occur in Γ, but the corresponding behaviour
of P would be unfair because it would neglect the ports in Γ. Thus it is
sufficient to state a condition which guarantees that forcing X and X⊥
to communicate does not affect the actions available in the other ports.
This condition can be expressed in terms of ready pairs. The condition
cycle(P), for an ASProc process P of type A as above, is:
- For every (s, A) ∈ readies(P) such that s↾X = s↾X⊥, and every action
(ā, x, y) ∈ A, there is z ∈ Σ_X such that (ā, z, z) ∈ A.
P : (Γ, U) ⅋ (X, V) ⅋ (X⊥, V⊥)    cycle(P)
-------------------------------------------
P : (Γ, U)
This rule illustrates one of the main features of our approach--the combi-
nation of type-theoretic and traditional verification techniques. Typically,
the construction of a process will be carried out up to a certain point by
means of the linear combinators, and its correctness will be guaranteed
by the properties of the type system. This phase of the verification pro-
cedure is completely compositional. However, if cyclic connections are to
be formed, some additional reasoning about the behaviour of the pro-
cess is needed. The nature of this reasoning is embodied in the above
proof rule. The rule is not compositional, in the sense that the internal
structure of P must be examined to some extent in order to validate the
condition cycle(P), but the departure from compositionality is only tem-
porary. Once the hypotheses of the proof rule have been established, the
result is that P has a type, and can be combined with other processes
purely on the basis of that type.
2. If there is Pi whose right fork is up and whose left fork is down, it can
either put down the right fork (if it has just put down the left fork)
or pick up the left fork (if its neighbour has the right fork).
3. If all forks are up and some Pi has both its forks, it can put down the
left fork.
4. If all forks are up and every Pi has just one fork, they all have their
left forks, and there is a deadlock.
The last case is the classic situation in which the dining philosophers may
deadlock--each philosopher in turn picks up the left fork, and then they
are stuck. In terms of ready sets, there is a state in which every possible
next action has non-matching projections in the two X ports.
In Hoare's formulation of the dining philosophers problem [22] the
philosophers are not normally seated, but have to sit down before attempting
to pick up their forks. This means that the possibility of deadlock
can be removed by adding a footman, who controls when the philosophers
sit down. The footman ensures that at most four philosophers are
seated at any one time, which means that there is always a philosopher
with an available fork on both sides; in this way, the deadlocked situ-
ation is avoided. However, implementing this solution involves a major
change to the system: there is a new process representing the footman,
the philosopher processes have extra ports on which they interact with the
footman, and consequently their types need to be re-examined. It is more
convenient to use an alternative approach, which will now be described.
One of the philosophers is replaced by a variant, P′, which picks up
the forks in the opposite order. So P′ = ru.lu.e.rd.ld.P′ in CCS notation.
Intuitively, this prevents the deadlocking case from arising, because even
if the four Ps each pick up their left fork, P′ is still trying to pick up its
right fork (which is already in use) and so one of the Ps has a chance to
pick up its right fork as well. The check that there are no deadlocks takes
the form of a case analysis, as before.
1. If all the forks are up and some philosopher has both its forks, it can
put one of them down, whether it is P or P′.
2. If all the forks are up and every philosopher has just one, either they
all have their left fork or all the right. If they all have their left fork,
then P′ can put down its left fork. If they all have their right fork,
then any P can put down its right fork.
3. If two adjacent forks are down, then the philosopher in between them
can pick one of them up, whether it is P or P′.
   - If phil 2 is P and doesn't have its right fork, it can pick up the
     left fork.
   - If phil 2 is P′ and has its right fork, it can pick up the left fork.
   - If phil 2 is P′ and doesn't have its right fork, then phil 3 must
     be P and has its left fork. Then if phil 3's right fork is down,
     phil 3 can pick it up. If the right fork is up and phil 3 has it, it
     can put down the left fork. Otherwise, phil 4 is P and has its left
     fork. Continuing this argument for each phil i with i ≥ 4 leads
     eventually to either a possible action, or cyclically back to i = 1
     and the deduction that phil 1 has its left fork. In the latter case,
     since phil 1 is P, it can pick up its right fork.
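The case analyses above can also be checked mechanically by exhaustive exploration of the global state space. The following is a minimal sketch, not the ready-pair formalism of the text: each philosopher is modelled as a four-phase automaton (acquire, acquire, release, release, with eating elided), the encoding and the names `steps` and `has_deadlock` are our own, and a deadlock is a reachable global state in which no action is enabled.

```python
from collections import deque

def steps(kind, phase, i, n):
    """The (fork, action) pair philosopher i attempts at this phase.
    Fork i is the left fork of philosopher i.  "P" is left-first;
    the variant "P'" picks up its right fork first, as in the text."""
    left, right = i, (i + 1) % n
    if kind == "P":
        order = [(left, "up"), (right, "up"), (left, "down"), (right, "down")]
    else:  # the variant P'
        order = [(right, "up"), (left, "up"), (right, "down"), (left, "down")]
    return order[phase]

def has_deadlock(kinds):
    """Breadth-first search of reachable global states; True iff some
    reachable state has no enabled action."""
    n = len(kinds)
    start = (0,) * n
    seen, queue = {start}, deque([start])
    while queue:
        phases = queue.popleft()
        # Recompute which forks are held: "up" steps already taken,
        # not yet matched by the corresponding "down".
        held = set()
        for i, ph in enumerate(phases):
            for p in range(ph):
                f, a = steps(kinds[i], p, i, n)
                held.add(f) if a == "up" else held.discard(f)
        moves = []
        for i, ph in enumerate(phases):
            f, a = steps(kinds[i], ph, i, n)
            if a == "down" or f not in held:   # releases are always enabled
                nxt = list(phases)
                nxt[i] = (ph + 1) % 4
                moves.append(tuple(nxt))
        if not moves:
            return True
        for s in moves:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return False
```

On this model, `has_deadlock(["P"] * 5)` finds the all-left-forks deadlock of case 4 in the first analysis, while `has_deadlock(["P"] * 4 + ["P'"])` reports none, matching the argument above: the variant acquires its forks in increasing numerical order, so no circular wait can form.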
Acknowledgements
We would like to thank Rick Blute, Robin Cockett, Phil Scott and David
Spooner for their detailed reviews of this paper.
References
17. W. P. de Roever. The quest for compositionality--a survey of assertion based proof
systems for concurrent programs, Part I: Concurrency based on shared variables.
In Proceedings of the IFIP Working Conference, 1985.
18. S. J. Gay. Linear Types for Communicating Processes. PhD thesis, University of
London, 1995. Available as theory/papers/Gay/thesis.ps.gz via anonymous ftp
to theory.doc.ic.ac.uk.
19. G. Gentzen. Investigations into logical deduction. In M. E. Szabo, editor, The
Collected Papers of Gerhard Gentzen. North-Holland, 1969.
20. J.-Y. Girard. Linear Logic. Theoretical Computer Science, 50(1):1-102, 1987.
21. J.-Y. Girard, Y. Lafont, and P. Taylor. Proofs and Types, volume 7 of Cambridge
Tracts in Theoretical Computer Science. Cambridge University Press, 1989.
22. C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall, 1985.
23. N. D. Jones and F. Nielson. Abstract interpretation. In S. Abramsky, D. Gabbay,
and T. Maibaum, editors, Handbook of Logic in Computer Science, volume 4. Ox-
ford University Press, 1995. To appear.
24. G. M. Kelly and M. L. Laplaza. Coherence for compact closed categories. Journal
of Pure and Applied Algebra, 19:193-213, 1980.
25. D. C. Kozen and J. Tiuryn. Logics of programs. In van Leeuwen, editor, Handbook
of Theoretical Computer Science, volume B, pages 789-840. North Holland, 1990.
26. S. Mac Lane. Categories for the Working Mathematician. Springer-Verlag, Berlin,
1971.
27. Z. Manna and A. Pnueli. The Temporal Logic of Reactive and Concurrent Systems.
Springer-Verlag, 1992.
28. J. McKinna and R. Burstall. Deliverables: A categorical approach to program de-
velopment in type theory. In Proceedings of Mathematical Foundation of Computer
Science, 1993.
29. R. Milner. Calculi for synchrony and asynchrony. Theoretical Computer Science,
25:267-310, 1983.
30. R. Milner. Communication and Concurrency. Prentice Hall, 1989.
31. R. Milner, M. Tofte, and R. Harper. The Definition of Standard ML. MIT Press,
1990.
32. P. W. O'Hearn and R. D. Tennent. Relational parametricity and local variables.
In Proceedings, 20th ACM Symposium on Principles of Programming Languages.
ACM Press, 1993.
33. A. M. Pitts. Relational properties of recursively defined domains. In 8th Annual
Symposium on Logic in Computer Science, pages 86-97. IEEE Computer Society
Press, Washington, 1993.
34. R. Soare. Recursively Enumerable Sets and Degrees. Perspectives in Mathematical
Logic. Springer-Verlag, Berlin, 1987.
35. J. B. Wells. Typability and type checking in the second-order λ-calculus are equiv-
alent and undecidable. In Proceedings, Ninth Annual IEEE Symposium on Logic
in Computer Science. IEEE Computer Society Press, 1994.
Automated Temporal Reasoning
about
Reactive Systems
E. Allen Emerson
1 Introduction
There is a growing need for reliable methods of designing correct reactive sys-
tems. These systems are characterized by ongoing, typically nonterminating and
highly nondeterministic behavior. Examples include operating systems, network
protocols, and air traffic control systems. There is widespread agreement that
some type of temporal logic, or related formalism such as a u t o m a t a on infinite
objects, provides an extremely useful framework for reasoning about reactive
programs.
The "classical" approach to the use of temporal logic for reasoning about
reactive programs is a manual one, where one is obliged to construct by hand
a proof of program correctness using axioms and inference rules in a deductive
system. A desirable aspect of some such proof systems is that they may be
formulated so as to be "compositional", which facilitates development of a program
hand in hand with its proof of correctness by systematically composing together
proofs of constituent subprograms. Even so, manual proof construction can be
extremely tedious and error prone, due to the large number of details that must
be attended to. Hence, correct proofs for large programs are often very difficult
to construct and to organize in an intellectually manageable fashion. It is not en-
tirely clear that it is realistic to expect manual proof construction to be feasible
for large-scale reactive systems.
1 It seems that nowadays there is widespread agreement that some type of automation
is helpful, although this opinion is not unanimous.
2 Preliminaries
2.1 Reactive Systems
The ultimate focus of our concern is the development of effective methods for
designing reactive systems (cf. [Pn86]). These are computer hardware and/or
computer software systems that usually exhibit concurrent or parallel execution,
where many individual processes and subcomponents are running at the same
time, perhaps competing for shared resources, yet coordinating their activities
to achieve a common goal. These processes and subcomponents may be geographically
dispersed so that the computation is then distributed. The cardinal
characteristic of reactive systems, however, is the ongoing nature of their
computation. Ideally, reactive systems exhibit nonterminating behavior.
There are many important practical examples of reactive systems. These in-
clude: computer operating systems; network communication protocols; computer
hardware circuits; automated banking teller networks; and air traffic control sys-
tems.
The semantics of reactive systems can thus be given in terms of infinite
sequences of computation states. The computation sequences may in turn be
organized into infinite computation trees. The branching behavior of these reactive
systems is typically highly nondeterministic, owing to a variety of factors
including the varying speeds at which processes execute, uncertainty over the
time required for messages to be transmitted between communicating processes,
and "random" factors in the environment. Because of this high degree of non-
determinism, the behavior of reactive systems is to a high degree unpredictable,
and certainly irreproducible in practice. For this reason, the use of testing as
a means of ascertaining correctness of a reactive system is even more infeasible
than it is for sequential programs. Accordingly, the use of appropriate formal
methods for precisely specifying and rigorously verifying the correct behavior of
reactive systems becomes even more crucial.
2.2 Temporal Logic
One obvious difficulty is that formalisms originally developed for use with
sequential programs that are intended to terminate, and are thus based on initial
state - final state semantics, are of little value when trying to reason about reactive
systems, since there is in general no final state. Pnueli [Pn77] was the first
to recognize the importance of ongoing reactive systems and the need for a for-
malism suitable for describing nonterminating behavior. Pnueli proposed the use
of temporal logic as a language for specifying and reasoning about change over
time. Temporal logic in its most basic form corresponds to a type of modal tense
logic originally developed by philosophers (cf. [RU71]). It provides such sim-
ple but basic temporal operators as Fp (sometime p) and Gp (always p), that,
Pnueli argued, can be combined to readily express many important correctness
properties of interest for reactive systems.
Subsequent to the appearance of [Pn77], hundreds, perhaps thousands, of
papers developing the theory and applications of temporal logic to reasoning
about reactive systems were written. Dozens, if not hundreds, of systems of
temporal logic have been investigated, both from the standpoint of basic theory
and from the standpoint of applicability to practical problems. To a large extent
the trend was to enrich (and "elaborate") Pnueli's original logic thereby yielding
logics of increasingly greater expressive power. The (obvious) advantage is that
more expressive logics permit handling of a wider range of correctness properties
within the same formalism. More recently, there has been a counter-trend toward
"simplified" logics tailored for more narrowly construed applications.
In any event, there is now a widespread consensus that some type of temporal
logic constitutes a superior way to specify and reason about reactive systems.
There is no universal agreement on just which logics are best, but we can make
some general comments. Temporal logics can be classified along a number of
dimensions:
1. point-based, where temporal assertions are true or false of moments in time,
versus interval-based, where temporal assertions are true or false of intervals
of time.
It is our sense that the majority of the work on the use of temporal logic to
reason about reactive systems has focussed on propositional, point-based, future-
tense systems. There are a large number of users of both the linear time and the
branching time frameworks. We ourselves have some preference for branching
time, since in its full generality it subsumes the linear time framework. We
shall thus focus on the systems CTL, CTL* discussed below along with the
"ultimate branching time logic", the Mu-calculus, and PLTL, ordinary linear
temporal logic. It is to be re-emphasized we are always restricting our attention
to propositional logics, which turn out to be adequate for the bulk of our needs.
2 This is more along the line of the working mathematician's notion of rigorous, but
not strictly formal. To us, a strictly formal proof is conducted by simply performing
symbol manipulations. There should exist an algorithm which, given a text that is
alleged to constitute a strictly formal proof of an assertion, mechanically checks that
each step in the proof is legitimate, say, an instance of an axiom or results from
previous steps by application of an inference rule.
and it seems to us that complete formalization is, given the current state of
knowledge about formal systems and notations, not likely to be practical in the
manual setting.
We point out that there are additional drawbacks to manual proof construc-
tion of program correctness. Just as for any proof, ingenuity and insight are
required to develop the proof. The problem is complicated by the vast amount
of tedious detail that must be coped with, which must in general be organized in
subtle ways in formulating "loop invariants", and so forth. There may be so much
detail that it is difficult to organize the proof in an intellectually manageable
fashion.
The upshot is that the whole task of manual proof construction becomes
extremely error prone. One source of errors is the strong temptation to replace
truly formal reasoning by quasi-formal reasoning. This is seen throughout the
literature on manual program verification, and it frequently leads to errors.
We feel compelled to assert the following (perhaps controversial):
Claim. Manual verification will not work for large-scale reactive systems.
Our justification is that the task is error-prone to an overwhelming degree.
Even if strictly formal reasoning were used throughout, the plethora of technical
detail would be overwhelming. By analogy, consider the task of a human adding
100000 decimal numbers of 1000 digits each. This is rudimentary in principle
but likely impossible in practice for any human to perform accurately. Similarly,
the verification of 1000000 or even 100000 line programs by hand will not be
feasible. The transcription errors alone will be prohibitive.
For these reasons plus the convenience of automation, we therefore believe
that it is important to focus on mechanical reasoning about program correctness
using temporal logic and related formalisms. There are at least four approaches
to explore:
We note that approach 0, while less ambitious than approach 2, relies on the
technical machinery of approach 2, a decision procedure for satisfiability/validity.
Actually, it can be argued that approaches 1 and 3 also rely heavily on approach
2. In any event, in the sequel, we shall focus on approach 1, model checking, and
approach 2, decision procedures for satisfiability.
2.4 CTL*, CTL, and PLTL
In this section we provide the formal syntax and semantics for three repre-
sentative systems of propositional temporal logic. Two of these are branching
time temporal logics: CTL and its extension CTL*. The simpler branching time
logic, CTL (Computational Tree Logic), allows basic temporal operators of the
form: a path quantifier--either A ("for all futures") or E ("for some future")--
followed by a single one of the usual linear temporal operators G ("always"),
F ("sometime"), X ("nexttime"), or U ("until"). It corresponds to what one
might naturally first think of as a branching time logic. CTL is closely related
to branching time logics proposed in [La80], [EC80], [QS82], [BPM83], and was
itself proposed in [CE81]. However, its syntactic restrictions limit its expressive
power so that, for example, correctness under fair scheduling assumptions can-
not be expressed. We therefore also consider the much richer language CTL*,
which is sometimes referred to informally as full branching time logic. The logic
CTL* extends CTL by allowing basic temporal operators where the path quan-
tifier (A or E) is followed by an arbitrary linear time formula, allowing boolean
combinations and nestings, over F, G, X, and U. It was proposed as a unifying
framework in [EH86], subsuming a number of other systems, including both CTL
and PLTL.
The system PLTL (Propositional Linear Temporal Logic) is the "standard"
linear time temporal logic widely used in applications (cf. [Pn77], [MP92]).
Syntax
We now give a formal definition of the syntax of CTL*. We inductively define
a class of state formulae (true or false of states) using rules S1-3 below and a
class of path formulae (true or false of paths) using rules P1-3 below:
S1 Each atomic proposition P is a state formula.
S2 If p, q are state formulae then so are p ∧ q and ¬p.
S3 If p is a path formula then Ep and Ap are state formulae.
P1 Each state formula is also a path formula.
P2 If p, q are path formulae then so are p ∧ q and ¬p.
P3 If p, q are path formulae then so are pUq and Xp.
The set of state formulae generated by the above rules forms the language
CTL*. The other connectives can then be introduced as abbreviations in the
usual way.
The set of state formulae generated by rules S1-3 and P0, where P0 states
that if p, q are state formulae then pUq and Xp are path formulae, forms the
language CTL. The other boolean connectives are introduced as above while the other
temporal operators are defined as abbreviations as follows: EFp abbreviates
E(true U p), AGp abbreviates ¬EF¬p, AFp abbreviates A(true U p), and EGp
abbreviates ¬AF¬p. (Note: this definition can be seen to be consistent with
that of CTL*.)
Finally, the set of path formulae generated by rules S1, P1-3 defines the syntax
of the linear time logic PLTL.
Semantics
The semantics of a formula is defined with respect to a structure M = (S, R, L),
where S is a set of states, R is a total binary transition relation on S, and L is a
labeling which associates with each state the set of atomic propositions true in it.
We may view M as a labeled, directed graph with node set S, arc set R, and
node labels given by L.
A fullpath of M is an infinite sequence s0, s1, s2, ... of states such that ∀i
(si, si+1) ∈ R. We use the convention that x = (s0, s1, s2, ...) denotes a fullpath,
and that x^i denotes the suffix path (si, si+1, si+2, ...). We write M, s0 ⊨ p
(respectively, M, x ⊨ p) to mean that state formula p (respectively, path formula
p) is true in structure M at state s0 (respectively, of fullpath x). We define ⊨
inductively as follows:
S1 M, s0 ⊨ P iff P ∈ L(s0)
S2 M, s0 ⊨ p ∧ q iff M, s0 ⊨ p and M, s0 ⊨ q
   M, s0 ⊨ ¬p iff it is not the case that M, s0 ⊨ p
S3 M, s0 ⊨ Ep iff ∃ fullpath x = (s0, s1, s2, ...) in M with M, x ⊨ p
   M, s0 ⊨ Ap iff ∀ fullpaths x = (s0, s1, s2, ...) in M, M, x ⊨ p
P1 M, x ⊨ p iff M, s0 ⊨ p
P2 M, x ⊨ p ∧ q iff M, x ⊨ p and M, x ⊨ q
   M, x ⊨ ¬p iff it is not the case that M, x ⊨ p
P3 M, x ⊨ pUq iff ∃i [M, x^i ⊨ q and ∀j (j < i implies M, x^j ⊨ p)]
   M, x ⊨ Xp iff M, x^1 ⊨ p
A formula of CTL is also interpreted using the CTL* semantics, using rule
P3 for path formulae generated by rule P0.
Similarly, a formula of PLTL, which is a "pure path formula" of CTL*, is
interpreted using the above CTL* semantics.
We say that a state formula p (resp., path formula p) is valid provided that for
every structure M and every state s (resp., fullpath x) in M we have M, s ⊨ p
(resp., M, x ⊨ p). A state formula p (resp., path formula p) is satisfiable provided
that for some structure M and some state s (resp., fullpath x) in M we have
M, s ⊨ p (resp., M, x ⊨ p).
We can define CTL* and other logics over various generalized notions of struc-
ture. For example, we could consider more general structures M = (S, X, L)
where S is a set of states and L a labeling of states as usual, while X ⊆ S^ω
is a family of infinite computation sequences (fullpaths) over S. The definition
of CTL* semantics carries over directly, with path quantification restricted to
paths in X, provided that "a fullpath x in M" is understood to refer to a fullpath
x in X. Usually, we want X to be the set of paths generated by a binary relation
R (cf. [Em83]).
Another generalization is to define a multiprocess structure, which is a refine-
ment of the above notion of a "monolithic" structure that distinguishes between
different processes. Formally, a multiprocess structure is M = (S, R, L) where
S is a set of states,
R is a finite family {R1, ..., Rk} of binary relations Ri on S (intuitively, Ri
represents the transitions of process i) such that R = ∪i Ri is total (i.e. ∀s ∈ S
∃t ∈ S (s, t) ∈ R),
L associates with each state an interpretation of the proposition symbols at
the state.
A computation of M is an infinite sequence of states si alternating with relation
indices di+1,

s0 -d1-> s1 -d2-> s2 -d3-> s3 ...

such that (si, si+1) ∈ R_di+1, indicating that process di+1 caused the transition from
si to si+1. We also assume that there are distinguished propositions en1, ...,
enk, ex1, ..., exk, where intuitively enj is true of a state exactly when process
j is enabled, i.e., when a transition by process j is possible, and exj is true of a
transition when it is performed by process j. Technically, each enj is an atomic
proposition--and hence a state formula--true of exactly those states in the domain
of Rj, while the exj form an additional set of arc assertion symbols that are interpreted over
transitions (s, t) ∈ R. Typically we think of L((s, t)) as the set of indices (or
names) of processes which could have performed the transition (s, t). A (gener-
alized) fullpath is now a sequence of states si alternating with arc assertions di
as depicted above.
Now we define a general structure M = (S, R, X, L) where
S is a set of states,
R is a total binary relation ⊆ S × S,
X is a set of fullpaths over R, and
L is a mapping associating with each state s an interpretation L(s) of all state
symbols at s, and with each transition (s, t) ∈ R an interpretation of each
arc assertion at (s, t).
There is no loss of generality due to including R in the definition: for any set
of fullpaths X, let R = {(s, t) ∈ S × S : there is a fullpath of the form ystz in X,
where y is a finite sequence of states and z an infinite sequence of states in S};
then all consecutive pairs of states along paths in X are related by R.
The extensions needed to define CTL* over such a general structure M are
straightforward. The semantics of path quantification as specified in rule $3
carries over directly to the general M, provided that a "full path in M" refers to
one in X. If d is an arc assertion we have that:
M, x ⊨ d iff d ∈ L((s0, s1))
Alternative Syntax
Here the essential idea is that of a basic modality. A formula of CTL* is a basic
modality provided that it is of the form Ap or Ep where p itself contains no A's or
E's, i.e., p is an arbitrary formula of PLTL. Similarly, a basic modality of CTL is
of the form Aq or Eq where q is one of the single linear temporal operators F, G,
X, or U applied to pure propositional arguments. A CTL* (respectively, CTL)
formula can now be thought of as being built up out of boolean combinations
and nestings of basic modalities (and atomic propositions).
2.5 The Mu-Calculus
CTL* provides one way of extending CTL. In this section we describe another
way of extending CTL. We can view CTL as a sublanguage of the propositional
Mu-Calculus Lμ (cf. [Ko83], [EC80]). The propositional Mu-Calculus provides
a least fixpoint operator (μ) and a greatest fixpoint operator (ν) which make
it possible to give fixpoint characterizations of the branching time modalities.
Intuitively, the Mu-Calculus makes it possible to characterize the modalities in
terms of recursively defined tree-like patterns. For example, the CTL assertion
EFp (along some computation path p will become true eventually) can be char-
acterized as μZ.p ∨ EXZ, the least fixpoint of the functional p ∨ EXZ where
Z is an atomic proposition variable (intuitively ranging over sets of states) and
EX denotes the existential nexttime operator.
We first give the formal definition of the Mu-Calculus.
Syntax
The formulae of the propositional Mu-Calculus Lμ are those generated by rules
(1)-(6):
(1) atomic propositions P
(2) propositional variables Y
(3) ¬p, where p is a formula
(4) p ∧ q, where p and q are formulae
(5) EXp, where p is a formula
(6) μY.p(Y), where p(Y) is a formula syntactically monotone in the propositional
variable Y, i.e., all free occurrences of Y in p(Y) fall under an even number of
negations
The set of formulae generated by the above rules forms the language Lμ. The
other connectives are introduced as abbreviations in the usual way: p ∨ q abbre-
viates ¬(¬p ∧ ¬q), AXp abbreviates ¬EX¬p, νY.p(Y) abbreviates ¬μY.¬p(¬Y),
etc. Intuitively, μY.p(Y) (νY.p(Y)) stands for the least (greatest, resp.) fixpoint
of p(Y), EXp (AXp) means p is true at some (every) successor state reachable
from the current state, ∧ means "and", etc. We use |p| to denote the length (i.e.,
number of symbols) of p.4
We say that a formula q is a subformula of a formula p provided that q, when
viewed as a sequence of symbols, is a substring of p. A subformula q of p is said
to be proper provided that q is not p itself. A top-level (or immediate) subformula
is a maximal proper subformula. We use SF(p) to denote the set of subformulae
of p.
The fixpoint operators μ and ν are somewhat analogous to the quantifiers ∃
and ∀. Each occurrence of a propositional variable Y in a subformula μY.p(Y)
(or νY.p(Y)) of a formula is said to be bound. All other occurrences are free. By
renaming variables if necessary we can assume that the expression μY.p(Y) (or
νY.p(Y)) occurs at most once for each Y.
A sentence (or closed formula) is a formula that contains no free propositional
variables, i.e., every variable is bound by either μ or ν. A formula is said to be
in positive normal form (PNF) provided that no variable is quantified twice
3 Defined below.
4 Alternatively, we can define ]p] as the size of the syntax diagram for p.
and all the negations are applied to atomic propositions only. Note that every
formula can be put in PNF by driving the negations in as deep as possible
using DeMorgan's Laws and the dualities ¬μY.p(Y) = νY.¬p(¬Y), ¬νY.p(Y) =
μY.¬p(¬Y). (This can at most double the length of the formula.) Subsentences
and proper subsentences are defined in the same way as subformulae and proper
subformulae.
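Driving the negations inward in this way is purely mechanical. Below is a minimal sketch in our own tuple encoding of formulae (not notation from the text), assuming closed formulae so that every variable is fixpoint-bound; note that under the duality ¬μY.p(Y) = νY.¬p(¬Y), the outer negation and the substituted ¬Y cancel on each occurrence of Y, so bound variables simply stay positive.

```python
# Formulae as nested tuples: ('prop', P), ('var', Y), ('not', p),
# ('and', p, q), ('or', p, q), ('EX', p), ('AX', p), ('mu', Y, p), ('nu', Y, p).

def pnf(f, neg=False):
    """Return an equivalent formula in positive normal form."""
    op = f[0]
    if op == 'prop':
        return ('not', f) if neg else f
    if op == 'var':
        # bound occurrences stay positive: the two negations cancel
        return f
    if op == 'not':
        return pnf(f[1], not neg)
    if op in ('and', 'or'):
        dual = {'and': 'or', 'or': 'and'}
        return (dual[op] if neg else op, pnf(f[1], neg), pnf(f[2], neg))
    if op in ('EX', 'AX'):
        dual = {'EX': 'AX', 'AX': 'EX'}
        return (dual[op] if neg else op, pnf(f[1], neg))
    if op in ('mu', 'nu'):
        dual = {'mu': 'nu', 'nu': 'mu'}
        return (dual[op] if neg else op, f[1], pnf(f[2], neg))
    raise ValueError(op)
```

For example, negating μY.(P ∨ EXY) (which is EFP) yields νY.(¬P ∧ AXY) (which is AG¬P), and the length at most doubles, as noted above.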
Let σ denote either μ or ν. If Y is a bound variable of formula p, there is
a unique μ or ν subformula σY.p(Y) of p in which Y is quantified. Denote this
subformula by σY. Y is called a μ-variable if σY = μY; otherwise, Y is called a
ν-variable. A σ-subformula (σ-subsentence, resp.) is a subformula (subsentence)
whose main connective is either μ or ν. We say that q is a top-level σ-subformula
of p provided q is a proper σ-subformula of p but not a proper σ-subformula of
any other σ-subformula of p. Finally, a basic modality is a σ-sentence that has
no proper σ-subsentences.
Semantics
The semantics is given with respect to a structure M = (S, R, L) as before. We
write p(Y1, ..., Yn) to denote that all free variables of p are among Y1, ..., Yn. A val-
uation V, denoted (V1, ..., Vn), is an assignment of the subsets of S, V1, ..., Vn,
to the free variables Y1, ..., Yn, respectively. We use p^M(V) to denote the value of p
on the (actual) arguments V1, ..., Vn (cf. [EC80], [Ko83]). The operator p^M is
defined inductively as follows:
Note that our syntactic restrictions on monotonicity ensure that least (as
well as greatest) fixpoints are well-defined.
Usually we write M, s ⊨ p (respectively, M, s ⊨ p(Y)) instead of s ∈ p^M
(respectively, s ∈ p^M(V)) to mean that sentence (respectively, formula) p is true
in structure M at state s (under valuation V). When M is understood, we write
simply s ⊨ p.
Extensions
Just as for CTL and CTL*, we have the multiprocess versions of the Mu-calculus.
One possible formulation is to use EXip for "there exists a successor state sat-
isfying p, reached by some step of process i". Dually, we then also have AXip.
The classical notation, going back to PDL [FL79], would write < i > p and [i]p,
respectively.
Discussion
We can get some intuition for the Mu-Calculus by noting the following
extremal fixpoint characterizations for CTL properties:
EFP ≡ μZ.P ∨ EXZ
AGP ≡ νZ.P ∧ AXZ
AFP ≡ μZ.P ∨ AXZ
EGP ≡ νZ.P ∧ EXZ
A(P U Q) ≡ μZ.Q ∨ (P ∧ AXZ)
E(P U Q) ≡ μZ.Q ∨ (P ∧ EXZ)
For these properties, as we see, the fixpoint characterizations are simple and
plausible. It is not too difficult to give rigorous proofs of their correctness [EC80],
[EL86]. However, it turns out that it is possible to write down highly inscrutable
Mu-calculus formulae for which there is no readily apparent intuition regarding
their intended meaning. As discussed subsequently, the Mu-calculus is a very
rich and powerful formalism so perhaps this should come as no surprise. We
will comment here that Mu-calculus formulas are really representations of al-
ternating finite state automata on infinite trees (see section 6.5). Since even
such basic automata as deterministic finite state automata on finite strings can
be quite complex "cans of worms", we should again not be so surprised at po-
tential inscrutability. On the other hand, many Mu-calculus characterizations of
correctness properties are elegant, and the formalism seems to have found in-
creasing favor, especially in Europe, owing to its simple and elegant underlying
mathematical structure.
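The fixpoint characterizations above can be evaluated directly by iteration over a finite structure. The following is a minimal sketch on a hypothetical four-state structure of our own devising (the names `EX`, `AX`, `lfp`, `gfp` are ours, not the text's):

```python
# A toy structure: states 0..3, transition relation R, proposition P true in {3}.
R = {0: {1, 2}, 1: {0}, 2: {3}, 3: {3}}
S = set(R)
P = {3}

def EX(T):
    """Existential pre-image: states with some R-successor in T."""
    return {s for s in S if R[s] & T}

def AX(T):
    """Universal pre-image: states all of whose R-successors lie in T."""
    return {s for s in S if R[s] <= T}

def lfp(tau):
    """Least fixpoint of a monotone tau, iterated up from the empty set."""
    Z = set()
    while tau(Z) != Z:
        Z = tau(Z)
    return Z

def gfp(tau):
    """Greatest fixpoint, iterated down from the full state set."""
    Z = set(S)
    while tau(Z) != Z:
        Z = tau(Z)
    return Z

EF_P = lfp(lambda Z: P | EX(Z))   # EFP = mu Z. P v EX Z
AG_P = gfp(lambda Z: P & AX(Z))   # AGP = nu Z. P ^ AX Z
```

Here every state can reach state 3, so EF_P is all of S, while only the self-looping state 3 satisfies AGP.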
One interesting measure of the "structural complexity" of a Mu-calculus
formula is its alternation depth. Intuitively, the alternation depth refers to the
depth of nesting of alternating μ's and ν's. The alternation must be "significant",
entailing a subformula of the form

(*) μY. ... νZ. ... Y ... (or dually, with μ and ν exchanged)

where, for example, a μ'd Y occurs within the scope of a νZ, or a ν'd Y occurs within
the scope of a μZ.
All the basic modalities AFq, AGq, EFq, etc. of CTL can be expressed in the
Mu-Calculus with alternation depth 1 as illustrated above. So can all CTL for-
mulae. For example, EFAGq has the Mu-Calculus characterization μY.(EXY ∨
νZ.(q ∧ AXZ)), which is still of alternation depth 1 since, while νZ appears inside
μY, the "alternation" does not have Y inside νZ and does not match the above
form (*). A property such as E(P*Q)*R, meaning there exists a path matching
the regular expression (P*Q)*R, can be expressed by μY.μZ.((P ∧ EXZ) ∨ (Q ∧
EXY) ∨ R), which is still of alternation depth 1.
On the other hand, properties associated with fairness require alternation depth 2. For example, E F∞ P (along some path, P occurs infinitely often) can be characterized by νY.μZ.EX((P ∧ Y) ∨ Z). It can be shown that E F∞ P is not expressible by any alternation depth 1 formula (cf. [EC80], [EL86]).
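As an illustration, the nested fixpoint νY.μZ.EX((P ∧ Y) ∨ Z) can be evaluated over a small explicit-state structure by running the inner least fixpoint to convergence inside each outer greatest-fixpoint iteration. This is our own minimal sketch; the set-based representation and function names are not from the text:

```python
def EX(states, R):
    """Preimage: the states with some R-successor inside `states`."""
    return {s for (s, t) in R if t in states}

def ef_inf(S, R, P):
    """E F-infinity P, i.e. nuY.muZ.EX((P and Y) or Z), over states S,
    transition relation R (a set of pairs) and proposition P (a set)."""
    Y = set(S)                       # greatest fixpoint: start from the top
    while True:
        Z = set()                    # least fixpoint: start from the bottom
        while True:
            Z2 = EX((P & Y) | Z, R)  # one application of the inner functional
            if Z2 == Z:
                break
            Z = Z2
        if Z == Y:                   # outer iteration converged
            return Y
        Y = Z
```

For instance, on the structure with edges 0→1, 1→0, 2→2 and P = {0}, the states 0 and 1 lie on a path visiting P infinitely often, while 2 does not.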
Let Lμk denote the Mu-Calculus Lμ restricted to formulas of alternation depth at most k. It turns out that almost all modal or temporal logics of programs can be translated into Lμ1 or Lμ2, often succinctly (cf. [EL86]). Interestingly, it is not known whether higher alternation depths form a true hierarchy of expressive power. The question has some bearing on the complexity of model checking in the overall Mu-calculus, as discussed in section 7.
3 Model Checking
for the least k ≤ |S| such that τ^k(false) = τ^{k+1}(false). The intuition here is just that each τ^i(false) corresponds to the set of states which can reach P within at most distance i; thus, P is reachable from state s iff P is reachable within i steps from s for some i less than the size of M iff s ∈ τ^i(false) for some such i less than the size of M. This idea can be easily generalized to provide a straightforward model checking algorithm for all of CTL and even the entire Mu-calculus.5 The Tarski-Knaster theorem handles the basic modalities. Compound formulae built up by nesting and boolean combinations are handled by recursive descent. A naive implementation runs in time complexity O((|M||p|)^{k+1}) for input structure M and input formula p with μ, ν formulas nested k deep. Some improvements are possible as described in section 6.3.
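The iteration just described can be sketched for the basic modality EF P, where the functional is τ(Z) = P ∨ EX Z; the explicit-state representation and function names here are our own:

```python
def EX(Z, R):
    """States with at least one R-successor in Z."""
    return {s for (s, t) in R if t in Z}

def ef(S, R, P):
    """EF P = muZ.(P or EX Z): iterate tau from false (the empty set).
    Convergence is guaranteed within |S| iterations."""
    Z = set()
    while True:
        Z2 = P | EX(Z, R)    # tau(Z); tau^i(false) = states reaching P in < i steps
        if Z2 == Z:
            return Z
        Z = Z2
```

On the chain 0→1→2 with P = {2}, successive iterates grow {2}, {1,2}, {0,1,2} and then stabilize, exactly as in the distance-i intuition above.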
The above model checking algorithm has been dubbed a global algorithm because it computes, for the (closed) formula f over structure M, the set f^M of all states of M where f is true. A technical characteristic of global model checking algorithms is that with each subformula g of the specification f there is calculated the associated set of states g^M. A potential practical drawback is that all the states of the structure M are examined.
5 Model checking for PLTL and CTL* can be performed as discussed in section 6.3.
abstraction by state graphs of this size, and extensional model checking can be a helpful tool. On the other hand, it can quickly become infeasible to represent the global state graph for large n. Even a banking network with 100 automatic teller machines, each having just 10 local states, could yield a global state graph of astronomical size, amounting to about 10^100 states.
A major advance has been the introduction of symbolic model checking techniques (cf. [McM92], [Pi90], [CM90]) which are - in practice - often able to succinctly represent and model check over state graphs of size 10^100 states and even considerably larger. The key idea is to represent the state graph in terms of a boolean characteristic function which is in turn represented by a Binary Decision Diagram (BDD) (cf. [Br86]). BDD-based model checkers have been remarkably effective and useful for debugging and verification of hardware circuits. For reasons not well understood, BDDs are able to exploit the regularity that is readily apparent, even to the human eye, in many hardware designs. Because software typically lacks this regularity, BDD-based model checking seems less helpful for software verification. We refer the reader to [McM92] for an extended account of the utility of BDDs in hardware verification.
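The characteristic-function idea can be shown in miniature. The following plain-Python sketch is not a real BDD (a BDD additionally shares subgraphs for canonicity and efficiency); it only illustrates that a set of states over n boolean variables can be manipulated as a predicate, never enumerated. All names here are ours:

```python
# A set of 2^19 states over 20 boolean variables, held in O(1) memory
# as a predicate on bit tuples rather than as an explicit set.
n = 20
chi = lambda s: s[0] == 0                 # "first bit is zero": 2^19 states

def pre_image(chi_set, step):
    """EX for a deterministic transition function `step`, again as a predicate:
    s is in the preimage iff step(s) is in the set."""
    return lambda s: chi_set(step(s))

step = lambda s: s[1:] + (1 - s[0],)      # toy transition: rotate bits, flip first
pre = pre_image(chi, step)                # computed without visiting any state
```

A symbolic model checker iterates such preimage (or image) operations to a fixpoint, with BDDs keeping each intermediate predicate compact.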
There has thus been great interest in model checking methods that avoid
explicit construction of the global state graph. In some cases, it is possible to
give methods that work directly on the program text itself as an implicit rep-
resentation of the state graph. Some approaches use process algebras such as
CCS or formalisms such as Petri-nets to represent programs. This facilitates
succinct representation of (possibly infinite) families of states and exploitation
of the compositional structure of programs (cf. [BS92]) in performing a more
general type of local model checking. The drawback is that the general method
can no longer be fully automated, for such basic reasons as the unsolvability of
the halting problem over infinite state spaces. Still, such partially automated
approaches are intriguing.
4.1 Overview
Basic Idea 2 subsumes Basic Idea 1. For logics to which the tableau method is applicable, the tableau can be viewed as defining an automaton. For other logics, where it is not possible to build a tableau, we can still build an automaton, because we can appeal to certain difficult combinatorial constructions from the theory of automata on infinite objects. The most important of these is determinization of automata.7 The power of automata theory seems to lie in a reservoir of deep combinatorial results that show how to construct an automaton corresponding to a formula, even though that correspondence is by no means apparent. Of course, this opacity could be viewed as a drawback, but it seems to be inherent in the problem.
{ P, ..., ¬Q, AXg1, ..., AXgk, EXh1, ..., EXhl }.
1. Delete any node C which contains both a proposition P and its negation ¬P.
2. Delete any node C one of whose original successors Di has been deleted, or which has no successors to begin with.
3. Delete any node D all of whose original successors have been deleted.
4. Delete any node C containing an eventuality which is not "fulfillable" within the current version of the tableau (as described in detail below).
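The deletion rules above can be sketched as a fixpoint loop. This is our own encoding (rules 1-3 only; the eventuality rule 4 is merely indicated): node labels are sets of formula strings with negation written '~', `succ[n]` lists a node's original successors, and `kind[n]` is 'AND' or 'OR':

```python
def prune(nodes, succ, kind, label):
    """Delete nodes until no rule applies; return the surviving node ids."""
    # rule 1: a node containing both P and ~P is inconsistent
    alive = {n for n in nodes
             if not any(('~' + p) in label[n] for p in label[n])}
    changed = True
    while changed:
        changed = False
        for n in list(alive):
            live_succ = [d for d in succ[n] if d in alive]
            if kind[n] == 'AND' and len(live_succ) < len(succ[n]):
                alive.discard(n); changed = True   # rule 2: a successor was deleted
            elif not live_succ:
                alive.discard(n); changed = True   # rules 2/3: no successors remain
            # rule 4 (unfulfillable eventualities) would be checked here as well
    return alive
```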
systematic way as shown in Figure 1. Here S1, ..., SN are all the AND-nodes, i.e. the "states" of the tableau, and e1, ..., em are all the eventualities. We have a matrix whose entries are all the DAG[Si, ej]'s spliced together as shown.
[Figure 1: the matrix of DAG[Si, ej]'s, with the states S1, S2, ..., SN repeated at each of the levels 1 through m and the DAGs spliced together level by level.]
Thus we see that in this systematically constructed graph, call it M1, for any node C we have M1, C ⊨ ∧C. If there is a C containing f0 then f0 is satisfiable. This establishes "soundness" of the decision procedure.

Completeness of the algorithm may be established as follows. If f0 is satisfiable then there is a model M and state s0 such that M, s0 ⊨ f0. Without loss of generality, we may assume M has been unwound into a tree-like model. Let M′ be the quotient structure obtained from M by identifying all states of M that satisfy exactly the same AND-node label. It follows that M′ defines a pseudo-model of f0 that is contained within the tableau throughout its pruning. The essential point is that if an eventuality such as AFq holds at a state s in M, there is a subtree of M rooted at s whose frontier nodes contain q. This subtree can be collapsed to yield DAG[Cs, AFq] in the tableau, where Cs is the AND-node for s. Hence, Cs will never be eliminated on account of its eventuality AFq being unfulfilled. □
The size of the tableau T0 is exponential in n = |f0| since there are at most O(n) different subformulas that can appear in AND-nodes and OR-nodes. The pruning procedure can be implemented to run in time polynomial in |T0|, yielding an exponential time upper bound. A matching lower bound can be established by simulating alternating polynomial space Turing machines (cf. [FL79]). Thus, we have (cf. [EH85], [EC82])

Theorem 4.2. CTL satisfiability is deterministic exponential time complete.
We first review the basics of the automata-theoretic approach for linear time. (See [Va9?] for a more comprehensive account.) The tableau construction for CTL can be specialized, essentially by dropping the path quantifiers, to define a tableau construction for PLTL, remembering that in a linear structure, Ep0 ≡ Ap0 ≡ p0. The extended closure of a PLTL formula p0, ecl(p0), is defined to be {q, ¬q : q appears in some AND-node C}. The (initial) tableau for p0 can then be simplified to be a structure T = (S, R, L) where S is the set of AND-nodes, i.e., states, R ⊆ S × S consists of the transitions (s, t) defined by the rule (s, t) ∈ R exactly when ∀ formula Xp ∈ ecl(p0), Xp ∈ s iff p ∈ t, and L(s) = s, for each s ∈ S.
We may view the tableau for PLTL formula p0 as defining the transition diagram of a nondeterministic finite state automaton 𝒜 which accepts the set of infinite strings over alphabet Σ = 2^AP that are models of p0, by letting the arc (u, v) be labeled with AtomicPropositions(v), i.e., the set of atomic propositions in v (cf. [ES83]). Technically, 𝒜 is a tuple of the form (Q, Σ, δ, s0, Φ) where Q = S ∪ {s0} is the state set, s0 ∉ S is a unique start state, and δ is defined so that δ(s0, a) = { states s ∈ S : p0 ∈ s and AtomicPropositions(s) = a } for each a ∈ Σ, and δ(s, a) = { states t ∈ S : (s, t) ∈ R and AtomicPropositions(t) = a }. The acceptance condition Φ is described below. A run r of 𝒜 on input x = a1a2a3... ∈ Σ^ω is an infinite sequence of states s0s1s2... such that ∀i ≥ 0, δ(si, ai+1) ⊇ {si+1}. Note that ∀i ≥ 1, AtomicPropositions(si) = ai. The automaton 𝒜 accepts input x iff there is a run r on x that satisfies the acceptance condition Φ.
Several different types of acceptance conditions Φ may be used. For Muller acceptance, we are given a family ℱ of sets of states. Letting In(r) denote the set of states in Q that appear infinitely often along r, we say that run r meets the Muller condition provided that In(r) ∈ ℱ.
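For a finite automaton, any accepting run can be assumed eventually periodic, a "lasso" r = prefix · loop^ω, and then In(r) is exactly the set of loop states, so each acceptance condition reduces to a finite check. The following encoding is our own sketch (states as hashable values, the condition data as Python sets):

```python
def in_r(prefix, loop):
    """In(r) for the lasso run prefix . loop^omega: the loop states."""
    return set(loop)

def muller(prefix, loop, family):
    """Muller: In(r) must be one of the designated sets."""
    return in_r(prefix, loop) in family

def pairs(prefix, loop, plist):
    """Pairs (Rabin): some (RED, GREEN) with RED finitely, GREEN infinitely often."""
    inf = in_r(prefix, loop)
    return any(not (inf & red) and bool(inf & green) for red, green in plist)

def complemented_pairs(prefix, loop, plist):
    """Complemented pairs (Streett): every infinitely flashing GREEN forces its RED."""
    inf = in_r(prefix, loop)
    return all(not (inf & green) or bool(inf & red) for red, green in plist)

def buchi(prefix, loop, green):
    """Buchi: the single GREEN set is visited infinitely often."""
    return bool(in_r(prefix, loop) & green)
```

Note that on the same run a pairs condition and its complemented-pairs counterpart give opposite answers, as the definitions below require.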
For a pairs automaton (cf. [McN66], [Ra69]) acceptance is defined in terms of a finite list ((RED1, GREEN1), ..., (REDk, GREENk)) of pairs of sets of automaton states (which may be thought of as pairs of colored lights, where 𝒜 flashes the red light of the first pair upon entering any state of the set RED1, etc.): r satisfies the pairs condition iff there exists a pair i ∈ [1..k] such that REDi flashes finitely often and GREENi flashes infinitely often. It is often convenient to assume the pairs acceptance condition is given formally by a temporal logic formula Φ = ⋁_{i∈[1..k]} (F∞ GREENi ∧ ¬F∞ REDi). Similarly, a complemented pairs (cf. [St81]) automaton has the negation of the pairs condition as its acceptance condition; i.e., for all pairs i ∈ [1..k], infinitely often GREENi flashes implies that REDi flashes infinitely often too. The complemented pairs acceptance condition can be given formally by a temporal logic formula Φ = ⋀_{i∈[1..k]} (F∞ GREENi ⇒ F∞ REDi). A special case of both the pairs and complemented pairs conditions is the Büchi [Bu62] acceptance condition. Here there is a single GREEN light and Φ = F∞ GREEN.
A final acceptance condition that we mention is the parity acceptance condition [Mo84] (cf. [EJ91]). Here we are given a finite list (C1, ..., Ck) of sets of states which we think of as colored lights. The condition is that the highest index color Ci which flashes infinitely often should be of even parity.
Any run of 𝒜 would correspond to a model of p0, in that ∀i ≥ 1, x^i ⊨ ⋀{ formulae p : p ∈ si }, except that eventualities might not be fulfilled. To check fulfillment, we can easily define acceptance in terms of complemented pairs. If ecl(p0) has m eventualities (p1Uq1), ..., (pmUqm), we let 𝒜 have m pairs (REDi, GREENi) of lights. Each time a state containing (piUqi) is entered, flash REDi; each time a state containing qi is entered, flash GREENi. A run r is accepted iff for each i ∈ [1:m], there are infinitely many REDi flashes implies there are infinitely many GREENi flashes, iff every eventuality is fulfilled, iff the input string x is a model of p0.
We can convert 𝒜 into an equivalent nondeterministic Büchi automaton 𝒜1, where acceptance is defined simply in terms of a single GREEN light flashing infinitely often. We need some terminology. We say that the eventuality (pUq) is pending at state s of run r provided that (pUq) ∈ s and q ∉ s. Observe that run r of 𝒜 on input x corresponds to a model of p0 iff not(∃ eventuality (pUq) ∈ ecl(p0) : (pUq) is pending almost everywhere along r) iff ∀ eventuality (pUq) ∈ ecl(p0), (pUq) is not pending infinitely often along r. The Büchi automaton 𝒜1 is then obtained from 𝒜 by augmenting the state with an m + 1 valued counter. The counter is incremented from i to i + 1 mod (m + 1) when the ith eventuality (piUqi) is next seen to be not pending along the run r. When the counter is reset to 0, flash GREEN and set the counter to 1. (If m = 0, flash GREEN in every state.) Now observe that there are infinitely many GREEN flashes iff ∀i ∈ [1:m], (piUqi) is not pending infinitely often, iff every pending eventuality is eventually fulfilled, iff the input string x defines a model of p0. Moreover, 𝒜1 still has exp(|p0|) · O(|p0|) = exp(|p0|) states.
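The counter's behavior can be sketched in isolation. In this sketch of ours, an until formula (p U q) is represented as the tuple ('p', 'q'), a state label is a set containing such tuples and atomic propositions, and the counter ranges over 1..m (0 only as the transient wrap value):

```python
def step_counter(counter, state_label, eventualities):
    """Advance the degeneralization counter at one state of the run; return
    (new_counter, green_flash). `eventualities` is the list of (p, q) pairs."""
    m = len(eventualities)
    if m == 0:
        return 0, True                      # no eventualities: flash in every state
    p, q = eventualities[counter - 1]       # counter i waits on the i-th eventuality
    pending = (p, q) in state_label and q not in state_label
    if pending:
        return counter, False               # keep waiting on eventuality i
    counter = (counter + 1) % (m + 1)
    if counter == 0:                        # wrapped: all m seen non-pending in turn
        return 1, True                      # flash GREEN, restart at 1
    return counter, False
```

Infinitely many GREEN flashes then mean each eventuality is non-pending infinitely often, matching the equivalence stated above.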
Similarly, the tableau construction for a branching time logic with relatively simple modalities, such as CTL, can be viewed as defining a Büchi tree automaton that, in essence, accepts all models of a candidate formula p0. (More precisely, every tree accepted by the automaton is a model of p0, and if p0 is satisfiable there is some tree accepted by the automaton.) General automata-theoretic techniques for reasoning about a number of relatively simple logics, including CTL, using Büchi tree automata have been described by Vardi and Wolper [VW84]. However, it is for richer logics such as CTL* that the use of tree automata becomes essential.
Tree Automata
Tree Automata Running on Graphs
8 CTL* and the other logics we study have the property that their models can be unwound into an infinite tree. In particular, in [ESi84] it was shown that a CTL* formula of length k is satisfiable iff it has an infinite tree model with finite branching bounded by k, i.e. iff it is satisfiable over a k-ary tree. Our exposition of tree automata can be easily generalized to k-ary trees. We consider only binary trees to simplify the exposition, and for consistency with the classical theory of tree automata.
A binary structure M = (S, R0, R1, L) consists of a state set S and labeling L as before, plus a transition relation R0 ∪ R1 decomposed into two functions: R0 : S → S, where R0(s) specifies the 0-successor of s, and R1 : S → S, where R1(s) specifies the 1-successor of s.

A run of automaton 𝒜 on binary structure M = (S, R0, R1, L) starting at s0 ∈ S is a mapping ρ : S → Q such that ∀s ∈ S, (ρ(R0(s)), ρ(R1(s))) ∈ δ(ρ(s), L(s)) and ρ(s0) = q0. Intuitively, a run is a labeling of M with states of 𝒜 consistent with the local structure of 𝒜's "transition diagram".
The Transition Diagram of a Tree Automaton

[Figure: the transition diagram of a tree automaton; • = OR-nodes (states of 𝒜), □ = AND-nodes (transitions on a).]
Henceforth, we therefore assume that we are dealing with tree automata over a
one symbol alphabet.
Linear Size Model Theorem
The following theorem is from [Em85] (cf. [VS85]). Its significance is that it
provides the basis for our method of testing nonemptiness of pairs automata;
9 The technical fine point is that cycles satisfying the one pair condition are closed
under union. This is not so for multiple pairs. However, a slightly more subtle con-
struction can be used to get M1 in the case of multiple pairs (cf. [HR72]).
model of the pairs condition. It successively calculates the set of states where τ^i(false) is "satisfiable" in the transition diagram using Tarski-Knaster approximation. The effective size of the above fixpoint characterization is exponential in the number of pairs. The pseudo-model checking algorithm runs in time proportional to the size of the fixpoint characterization and polynomial in the size of the transition diagram. It is shown in [EJ88] that this yields a complexity of (mn)^{O(n)} for an automaton with transition diagram of size m and n pairs. This bound is thus polynomial in the number of automaton states, with the degree of the polynomial proportional to the number of pairs. This polynomial complexity in the size of the state diagram turns out to be significant in applications to testing satisfiability, as explained below.
Related results on the complexity of testing nonemptiness of tree automata
may be found in [EJ88], [PR89], [SJ91].
For branching time logics with richer modalities such as CTL*, the tableau construction is not directly applicable. Instead, the problem reduces to constructing a tree automaton that accepts some tree iff the formula is satisfiable. This tree automaton will in general involve a more complicated acceptance condition, such as pairs or complemented pairs, rather than the simple Büchi condition. Somewhat surprisingly, the only known way to build the tree automaton involves difficult combinatorial arguments and/or appeals to delicate automata-theoretic results such as McNaughton's construction ([McN66]) for determinizing automata on infinite strings, or subsequent improvements [ES83], [Sa88], [EJ89].
The original CTL* formula f0 can be converted, by the introduction of auxiliary propositions, into a normal form f1 that is satisfiable iff the original formula is, but where path quantifiers are nested to depth at most 2. For example,
EFAFEGP
≈ EFAFQ1 ∧ AG(Q1 ≡ EGP)
≈ EFQ2 ∧ AG(Q2 ≡ AFQ1) ∧ AG(Q1 ≡ EGP)
branching time modalities of the form Apo, in terms of the w-string automaton
for the corresponding linear time formula p.
We explain the difficulty that manifests itself with just the simple modality Ap0. The naive approach to get a tree automaton for Ap0 would be to simply build the ω-string automaton for p0 and then run it down all paths of the input tree. However, while this seems very natural, it does not, in fact, work. To see this, consider two infinite paths xy and xz in the input tree which start off with the same common finite prefix x but eventually separate to follow two different infinite suffixes y or z. It is possible that p0 holds along both paths xy and xz, but in order for the nondeterministic automaton to accept, it might have to "guess" while reading a particular symbol of the finite prefix x whether it will eventually read the suffix y or the suffix z. The state the string automaton guesses for y is in general different from the state it guesses for z. Consequently, no single run of a tree automaton based on a nondeterministic string automaton can lead to acceptance along all paths.
Of course, if the string automaton is deterministic the above difficulty vanishes. We should therefore ensure that the string automaton for p0 is determinized before constructing the tree automaton. The drawback is that determinization is an expensive operation. However, it appears to be unavoidable. For a linear temporal logic formula p0 of length n we can construct an equivalent Büchi nondeterministic finite state automaton on ω-strings of size exp(n). We can then get tree automata for Ep0 and AGEp0 of size exp(n). However, for Ap0, use of classical automata-theoretic results yields a tree automaton of size triple exponential in n. (Note: by triple exponential we mean exp(exp(exp(n))), etc.) The large size reflects the exponential cost to build the string automaton as described above for a linear time formula p0, plus the double exponential cost of McNaughton's construction to determinize it. For a CTL* formula of length n, nonemptiness of the composite tree automaton can be tested in exponential time to give a decision procedure of deterministic time complexity quadruple exponential in n.
An improvement in the determinization process makes an exponential improvement possible. In [ES83] it was shown that, due to the special structure of the string automata derived from linear temporal logic formulae, such string automata could be determinized with only single exponential blowup. This reduced the complexity of the CTL* decision procedure to triple exponential. Further improvement is possible as described below.
The size of a tree automaton is measured in terms of two parameters: the
number of states and the number of pairs in the acceptance condition. A careful
analysis of the tree automaton constructions in temporal decision procedures
shows that the number of pairs is logarithmic in the number of states, and for
CTL* we get an automaton with a double exponential number of states and
it is quite possible that it is too succinct. Certainly, it is known that the complexity of mechanical reasoning in it would be nonelementary. PLTL, on the other hand, seems to provide a good combination of expressive power, succinctness, and complexity of mechanical reasoning (as does CTL in the branching time framework).12
6.1 Tradeoffs
In general, the goals of work in this area are (a) to formulate the most expressive logic possible with the lowest complexity decision problem relevant to the application at hand; and (b) to understand the tradeoffs between complexity and expressiveness. In connection with point (b), it is worth noting that there is some relationship between the syntactic complexity of temporal logic formulas and the computational complexity of their decision procedures. This appears related to the size and structure of the automaton (that would be) constructed for the formula. However, the relationship is somewhat intricate.
The first thing to note is that (finite state, pairs) tree automata coincide in expressive power with the Mu-calculus. Since virtually all mechanical reasoning operations can be performed in terms of tree automata, and all branching time logics can, it turns out, be translated into tree automata, it is reasonable to take tree automata as a reference standard for branching time expressibility.13
12 Actually, rather little research effort has gone into work on succinctness. Particularly valuable topics might include: identification of tractable and useful fragments of FOLLO (or equivalently S1S), use of ω-regular expressions as a specification language, and general efforts to gain a deeper understanding of the relation between syntactic complexity of a formula and the cost of mechanical reasoning w.r.t. the formula.
13 In this connection, there is one minor technical caveat about comparing apples and oranges in the context of expressiveness: tree automata as originally defined run on infinite binary trees and can distinguish "left" from "right". In contrast, logics such as CTL* (or the Mu-calculus with AX, EX as opposed to <0>, [0], <1>, [1]) are interpreted over models of arbitrary arity but cannot distinguish "left" from "right". There are a variety of ways to formulate a uniform, compatible framework permitting
We next note that the logic PDL-Δ is strictly subsumed in expressive power by the Mu-calculus. PDL-Δ is the Propositional Dynamic Logic with infinite repetition operator; in essence, it permits assertions whose basic modalities are of the form Eα where α is an ω-regular expression (cf. [St81]). PDL-Δ can be translated into the Mu-calculus essentially because ω-regular expressions can be translated into the "linear time" Mu-calculus (cf. [EL86]). For example, P*Q ≡ μZ.Q ∨ (P ∧ XZ) and P^ω ≡ νY.P ∧ XY. Similarly, EP*Q ≡ μZ.Q ∨ (P ∧ EXZ) and EP^ω ≡ νY.P ∧ EXY. The general translation can be conducted along these lines. It can be shown, however, that νY.<1>Y ∧ <2>Y is not expressible in PDL-Δ, over ternary structures with directions/arc labels 0, 1, 2 (cf. [Niw84]).
We also have that CTL* is strictly subsumed in expressive power by PDL-Δ. CTL* can be translated into PDL-Δ through use of the following main idea: each linear time formula h defines, using the tableau construction, an equivalent Büchi automaton (cf. [ES83]) which can be translated into an equivalent ω-regular expression α. Thus, the basic modality Eh of CTL* maps to Eα in PDL-Δ. Because ω-regular expressions are strictly more expressive than PLTL (cf. [MP71], [Wo83]), there are properties expressible in PDL-Δ that cannot be captured by any CTL* formula. E(P; true)^ω is perhaps the classic example of such a property.
It is worth noting that CTL* syntax can be described in a sort of shorthand: B(F, G, X, U, ∧, ¬, ∘). This means that the basic modalities of CTL* are of the form A or E (for a Branching time logic) followed by a pure linear time formula built up from the linear time operators F, G, X, U, the boolean connectives ∧, ¬, with nestings/compositions allowed as indicated by ∘. Then we have the expressiveness results below (cf. [EH86]).
We next compare CTL* with CTF, which is the precursor to CTL and CTL*, going back to [EC80]. CTF may be described as the logic B(F, G, X, U, F∞, ∧, ¬). Plainly, any CTF formula is a CTL* formula. The difference, syntactically, is that CTF does not permit arbitrary nesting of linear time formulas in its basic modalities, although it does permit the special infinitary operator(s) F∞ (and in effect its dual G∞) to support reasoning about fairness. However, the CTL* basic modality A(F(P ∧ XP)) is not a CTF formula and, moreover, can be shown to be inequivalent to any CTF formula. Thus, CTL* is strictly more expressive than CTF.
The logic CTL+ is given as B(F, G, X, U, ∧, ¬), permitting basic modalities with linear time components that are boolean combinations of the linear time operators F, G, X, U. Thus, CTL+ is a sublanguage of CTF, omitting the infinitary operator(s).
6.3 Complexity Summary
The table below summarizes key complexity results for automated temporal reasoning. The left column indicates the logic under consideration, the associated entry in the middle column characterizes the complexity of the logic's model checking problem, while the associated right column entry describes the complexity of the satisfiability problem. Each row describes a particular logic: PLTL, CTL, CTL*, and the Mu-calculus.

The first row deals with PLTL, whose complexity was first analyzed in [SC85] (cf. [Va9?]). PLTL model checking can be polynomially transformed to PLTL satisfiability testing. The essential point is that the "structure" of a structure M can be described by a PLTL formula where the nexttime operator and extra propositions are used to characterize what states are present and what the successors of each state are.
The satisfiability problem of PLTL is PSPACE-complete. In practice, this bound amounts to a decision procedure of complexity exp(n) for an input formula h of length n. The decision procedure is a specialization of that for CTL: build the exponential sized tableau for the formula, which may be viewed as a Büchi nfa on infinite strings and tested for nonemptiness in time polynomial in the size of the automaton. It is possible, in fact, to build the automaton on-the-fly, keeping track of only an individual node and a successor node at any given time, guessing an accepting path in nondeterministic polynomial space.14 This serves to show membership in PSPACE for satisfiability testing of PLTL and for model checking of PLTL by virtue of the above-mentioned reduction. By a generic reduction from PSPACE-bounded Turing machines, PLTL model checking can be shown to be PSPACE-hard; it then follows that PLTL satisfiability testing is also PSPACE-hard.
An important multi-parameter analysis of PLTL model checking was performed by Lichtenstein and Pnueli [LP85], yielding a bound of O(|M| · exp(|h|)) for an input structure M and input formula h. The associated algorithm is simple and elegant. We wish to check whether there is a path starting at a given state s0 in M satisfying the PLTL formula h. (We clarify below why we have formulated the PLTL model checking problem in just this way.) To do that, first build the tableau 𝒯 for h. Then form essentially the product graph M × 𝒯, view it as a tableau, and test it for satisfiability. This amounts to looking for a path through the product graph whose projection onto the second coordinate defines a model of h that, by virtue of the projection onto its first coordinate, must also be a path in M. Vardi and Wolper [VW86] made the important recognition that this construction could be described still more cleanly and uniformly in purely automata-theoretic terms. Use 𝒯 to define an associated Büchi nfa 𝒜. Then define a Büchi automaton ℬ that is the product of M and 𝒜, and simply test ℬ for nonemptiness.
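The final emptiness test can be sketched directly on the product graph. A Büchi automaton is nonempty iff some accepting (GREEN) state is reachable from the start state and lies on a cycle, i.e. there is a "lasso". The naive reachability-based check below is our own sketch (practical checkers use linear-time algorithms such as nested DFS instead):

```python
from collections import deque

def succs(u, edges):
    """Successors of u in an edge set of pairs."""
    return {b for (a, b) in edges if a == u}

def reach(sources, edges):
    """All states reachable from `sources` by BFS."""
    seen, frontier = set(sources), deque(sources)
    while frontier:
        u = frontier.popleft()
        for v in succs(u, edges):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def buchi_nonempty(start, edges, green):
    """Is some GREEN state reachable from start and on a cycle?"""
    reachable_green = reach([start], edges) & green
    # g is on a cycle iff g is reachable from its own successors
    return any(g in reach(succs(g, edges), edges) for g in reachable_green)
```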
Along with the above algorithm and its complexity analysis, the following "Lichtenstein-Pnueli thesis" was formulated: despite the potentially daunting exponential growth of the complexity of PLTL model checking in the size of the specification formula h, it is the linear complexity in the size of the input structure M which matters most for applications, since specifications tend to

14 See the very interesting work by Barringer et al. [BFGGO89] on executable temporal logics extending this idea.
be quite short while structures tend to be very large. Thus, the argument goes, the exponential growth is tolerable for small specifications, and we are fortunate that the cost grows linearly in the structure size. Our main point of concern thus should be simply the structure size.

There appears to be a good deal of empirical, anecdotal evidence that the Lichtenstein-Pnueli thesis is often valid in actual applications. As further noted in a forthcoming section, very simple assertions expressible in a fragment of CTL are often useful. On the other hand, it is also possible to find instances where the Lichtenstein-Pnueli thesis is less applicable.
We remark that we have formulated the PLTL model checking problem to test, in effect, M, s0 ⊨ Eh. However, in applications using the linear time framework, we want to know whether all computations of a program satisfy a specification h′. This amounts to checking M, s0 ⊨ Ah′. It is, of course, enough to check M, s0 ⊭ E(¬h′), which the Lichtenstein-Pnueli formulation handles. Since PLTL is trivially closed under complementation, we thus have a workable, efficient solution to the "all paths" problem in terms of the Lichtenstein-Pnueli formulation (cf. [EL87]).
The next row concerns CTL. CTL model checking is P-complete. Membership in P was established in [CE81] by a simple algorithm based on the Tarski-Knaster theorem. This was improved to the bound O(|M| · |f|) for input structure M and CTL formula f in [CES86]. Satisfiability testing for CTL is complete for deterministic exponential time [EH85]. The upper bound established using the tableau method was discussed previously. The lower bound follows by a generic reduction from alternating polynomial space Tm's (cf. [FL79]).
We next consider CTL*. Its model checking problem is of the same complexity as for PLTL. It is PSPACE-complete with a multi-parameter bound of O(|M| · exp(|f|)). The lower bound follows because the PLTL model checking problem is a special case of the CTL* model checking problem. The upper bound follows because, as noted above, Ah ≡ ¬E¬h, and by using recursive descent to handle boolean connectives and nested path quantifiers. In particular, to check the formula E(FAGP ∧ GAFQ), first check AGP and label all states where it holds with auxiliary proposition P′; next check AFQ and label all states where it holds with auxiliary proposition Q′; finally, check E(FP′ ∧ GQ′). Of course, in practice it is not really necessary to introduce the auxiliary propositions. It is simply enough to observe that subformulas AGP and AFQ are state formulas that can be used to first label the states where they are found to hold before evaluating the top level formula. CTL* satisfiability can be tested in deterministic double exponential time by building a tree automaton of essentially double exponential size and then testing it for nonemptiness as discussed previously. The lower bound follows by a generic reduction from alternating exponential space Tm's [VS85].
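The recursive-descent labeling idea for CTL* can be sketched abstractly. In the sketch below, which is entirely our own encoding, formulas are nested tuples like ('E', ('F', ('A', ('G', 'P')))), and `check_state_formula` stands for an assumed basic checker for a single E/A modality (e.g. the PLTL-based algorithm); it is not defined here:

```python
import itertools

fresh = (f"_Q{i}" for i in itertools.count())  # auxiliary proposition names

def label_states(f, structure, check_state_formula, labels):
    """Bottom-up pass: evaluate maximal proper state subformulas first,
    record where each holds in `labels` under a fresh proposition, and
    substitute that proposition before evaluating the enclosing formula."""
    if isinstance(f, str):
        return f                                  # atomic proposition: leave as-is
    op, *args = f
    args = [label_states(a, structure, check_state_formula, labels)
            for a in args]                        # recurse into subformulas first
    if op in ('E', 'A'):                          # a state formula: check and rename
        q = next(fresh)
        labels[q] = check_state_formula((op, *args), structure)
        return q                                  # q now stands for this subformula
    return (op, *args)                            # path operator: rebuild node
```

Calling this on EFAGP, say, first labels the states satisfying AGP with _Q0 and then reduces the top level to the single modality E F _Q0, exactly the strategy described above.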
6.4 Automaton Ineffable Properties

While finite state automata on infinite trees seem a good reference standard for logics of the sort we have been considering, it is worth noting that there are some types of reasonable correctness properties which are not expressible by any finite state tree automaton. One such property we refer to as "uniform inevitability".
¹⁵ or by succinctly encoding CTL.
6.5 Mu-calculus is Equivalent to Tree Automata
where ata denotes the class of alternating tree automata (which are defined
technically below). All of these translations from left to right are direct. None
involves the Complementation Lemma. We can then accomplish the simplified
proof (b) of the Complementation Lemma based on the following simple idea:
given a tree automaton A there is an equivalent Lμ formula f. The negation of
f, ¬f, is certainly also a formula of Lμ since Lμ is trivially closed under syntactic
negation. Therefore, ¬f can then be translated into an equivalent tree automaton
which recognizes precisely the complement of the set of trees recognized by
A.
Remark: The restricted Mu-calculus, Rμ, of [Niw88] consists of formulas
f, g, ... built up from constructs of these forms: atomic proposition constants and
their negations P, Q, ¬P, ¬Q, ..., atomic proposition variables Y, Z, ..., restricted
conjunctions of the form P ∧ EX₀Y ∧ EX₁Z, disjunctions f ∨ g, and least and
greatest fixpoint operators μY.f(Y), νY.f(Y). Since it is not syntactically closed
under complementation, nor obviously semantically closed (as the general ∧ is
missing)¹⁷, we cannot use it directly to establish the Complementation Lemma.
The idea underlying the translation of Lμ into ata is simple: a Mu-calculus
formula is an alternating tree automaton. In more detail, the syntax diagram of a
Mu-calculus formula may be viewed as (a particular formulation of) the transition
diagram of an alternating tree automaton that checks the local structure of
the input tree to ensure that it has the organization required by the formula. As
the alternating automaton runs down the input tree, "threads" from the syntax
diagram are unwound down each path; in general, there may be multiple threads
going down the same path of the tree due to conjunctions in the formula. We
remark that it is these conjunctions and associated multiple threads which make
the automaton an alternating one.
For example, the syntax diagram of μY.(P ∨ (⟨0⟩Y ∧ ⟨1⟩Y)) is shown in
Figure 3. As indicated, there is a node in the syntax diagram for each subformula
and an edge from each formula to its immediate subformulae. In addition, there
is an edge from each occurrence of Y back to μY.
The transition diagram consists of AND-nodes and OR-nodes. All nodes are
OR-nodes except those corresponding to the connective ∧.¹⁸ Each node has an input
symbol, usually an implicit ε indicating, as usual, that no input is consumed.
The automaton starts in state μY, from which it does an ε move into state ∨.
In state ∨ it makes a nondeterministic choice. The automaton may enter state
¹⁷ It is semantically closed, but the proof requires an appeal to the Complementation
Lemma.
¹⁸ Matters are simplified by noting that over a binary tree 0, 1 are functions, not just
relations.
[Figure 3: syntax diagram of μY.(P ∨ (⟨0⟩Y ∧ ⟨1⟩Y)), with a node for each
subformula and a back-edge from each occurrence of Y to μY]
P. In this case it checks the input symbol labeling the current node to see if
it matches P, in which case it accepts; otherwise, it rejects. Alternatively, the
automaton may enter state ∧, which is an AND-node and from which it will
exercise universal nondeterminism. From ∧ the automaton launches two new
threads of control: ⟨0⟩ down the left branch and ⟨1⟩ down the right branch.
Then state ⟨0⟩ does an ε move to ensure that at the left successor node the
automaton is in state Y, from which it does an ε move into μY. Similarly, state
⟨1⟩ does an ε move to ensure that at the right successor node the automaton
is in state Y and then μY, etc.
Acceptance is handled by colored lights placed to ensure that μ-regeneration¹⁹
is well-founded. One way to do this is to associate a pair of lights (REDᵢ, GREENᵢ)
with each eventuality μYᵢ.f. Whenever the eventuality μYᵢ.f is regenerated
along a thread, indicated in the syntax diagram by traversing the edge re-entering
the node for μYᵢ from within the scope of μYᵢ, flash GREENᵢ. Whenever the scope
of μYᵢ is exited, flash REDᵢ. Thus, μYᵢ.f is regenerated infinitely often along a
thread iff GREENᵢ flashes infinitely often while REDᵢ flashes only finitely often.
Nondeterminization
current state and position in the tree (and not the thread of which it is a tip).
Thus, such a history-free run will not necessarily be a tree superimposed on
the input tree, but a dag, with, intuitively, threads intertwined down each tree
branch, such that there is only a single copy of each automaton state at each
tree node. Along a path through the input tree, the coSafra construction (cf.
[EJ89], [Sa92]) can be used to bundle together the collection of infinitely many
threads along the path, which are choked through a finite number of states at
each node.
Finally, it turns out that the alternating tree automaton corresponding to
a Mu-calculus formula is history-free. The intuitive justification is that at each
existential choice node, one can take the choice of least rank, ensuring that μ's
are fulfilled as soon as possible.
Remark: An alternative method of nondeterminization is essentially to construct
the tree automaton for testing satisfiability of a Mu-calculus formula as
in [SE84]. A sharpening of this construction builds a tree automaton equivalent
to the Mu-calculus formula. Basically, perform a tableau construction to get a
local automaton checking the invariance properties inherent in the formula. Conjoin
it with a global automaton that checks well-foundedness of μ-regenerations.
The global automaton is obtained as follows: build an ω-string automaton that
guesses a bad thread through the tableau along which some μ-formula is regenerated
infinitely often, in violation of the requirement that it should be well-founded.
Then use the coSafra construction to simultaneously determinize and
complement that string automaton. The tree automaton that runs the resulting
string automaton down all paths of the tree is the desired global automaton that
checks the liveness properties associated with μ-formulas.
- The limited logics exhibit greater specificity; they are tailored for particular
applications.
- The limited logics are of restricted expressive power. The restrictions may
limit both raw expressive power and economy of descriptive power.
- They are intended to support efficient, polynomial time decision procedures.
We will focus on the restricted logics from [ESS89] and [EES90]. It can be
quite delicate to obtain a logic that is restricted, efficiently decidable, and at the
same time useful. Some of the implications of these requirements are:
- We must give up propositional logic in its full generality, since obviously any
logic subsuming propositional logic must be at least NP-hard.
- The atomic propositions should be disjoint and exhaustive. Otherwise, if
we allow overlapping propositions, there can be as many as 2ⁿ subsets of
n propositions, yielding an immediate combinatorial explosion. (In practice,
this restriction may not be that onerous in our applications. For example,
we may wish to describe a program which may be in any of n locations. This
may be described using propositions at-loc1, ..., at-locn.)
- The overall syntax should be simplified. One simplification is to restrict
formulas to be of the form ∧ assertionᵢ; that is, to be a conjunction of simpler
assertions. Note that for purposes of program specification ∧ is more
fundamental than ∨. One typically wants a program that meets a conjunction
of criteria. Another simplification is to limit the depth of nesting of temporal
operators. Deeply nested temporal modalities are rarely used in practice
anyway.
Simplified CTL
We first consider Simplified CTL (SCTL). It turns out that SCTL corresponds
precisely to the fragment of CTL actually used in program synthesis in [EC82].
The formulae of SCTL are conjunctions of assertions of the following forms:
- P ∨ ... ∨ P′ - initial assertions
- AG(Q ∨ ... ∨ Q′) - invariance assertions
- AG(P ⇒ AF(R ∨ ... ∨ R′)) - leads-to assertions
- AG(P ⇒ A((Q ∨ ... ∨ Q′) U (R ∨ ... ∨ R′))) - assurance assertions
- AG(P ⇒ AX(Q ∨ ... ∨ Q′) ∧ - successor assertions
EX(R ∨ ... ∨ R′) ∧ ... ∧ EX(S ∨ ... ∨ S′))
over an alphabet of disjoint, exhaustive propositions P, P′, Q, Q′, R, R′, ..., etc.,
subject to the following Euclidean Syntactic Constraint (ESC):
If an SCTL formula has conjuncts of the form AG(P ⇒ AFQ) and
AG(P ⇒ AX(... R ∨ ...) ...), then AG(R ⇒ AFQ) must also be a
conjunct of the formula.
The ESC ensures that eventualities are recoverable from propositions alone.
The significance in practice of the ESC is that, while it is a restriction, it
permits the specification of "history-free" processes. This means that the
eventualities pending at each state S of a process are the same irrespective of the
path taken by the process from its initial state to S.
The restricted syntax of SCTL permits the decision procedure of CTL to
be simplified, yielding a polynomial time decision procedure for formulas of
SCTL. To understand in broad terms why this is possible it is helpful to think
automata-theoretically [VW84]. The automaton/tableau for a CTL formula may
be thought of as the product of the local automaton (the nexttime tableau) with
the global automaton, which is itself the product of an eventuality automaton
for each eventuality such as AFP. The size of the whole automaton is thus the
size of the nexttime tableau times the product of the sizes of each eventuality
automaton. An eventuality automaton for, say, AFP has 2 states: one for
AFP being pending, one for it being fulfilled (or not required to be satisfied).
Thus, the size of the whole automaton is exponential in the number of
eventualities. However, if the set of pending eventualities can be determined from the
set of atomic propositions, as is the case with SCTL owing to the ESC, then the
local automaton can serve as the entire automaton.
The SCTL decision procedure then amounts to:
- Construct the nexttime tableau for the input formula f0 using the nexttime
assertions. Each atomic proposition P is an AND-node of the tableau. Each
such AND-node P gets sets of successor AND-nodes, intermediated by OR-nodes,
based on the nexttime assertion²⁰ associated with P. For example,
AG(P ⇒ AX(Q1 ∨ Q2 ∨ R1 ∨ R2) ∧ EXQ1 ∧ EX(R1 ∨ R2)) would have two
OR-node successors, one with AND-node Q1 as a successor, the other with
AND-nodes R1, R2 as successors. This is the local automaton A_f0. By virtue
of the ESC it is also the entire automaton. The initial assertions determine
the "start state".
- Check A_f0 for "nonemptiness". Repeatedly delete "bad" nodes from its
transition diagram. Associated with every node is a set of eventualities e that
must be fulfillable. The key step is to ensure that each such e is fulfillable
by finding the appropriate DAG[Q,e]'s as for ordinary CTL.
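The pruning loop of the second step can be sketched concretely on the example specification that follows. The encoding is hypothetical and deliberately simplified: each AND-node keeps only the successors forced by its EX-conjuncts (so T, whose EX-conjunct is a disjunction, forces none), and only the single eventuality AFT is tracked. Under these assumptions the sketch reproduces the deletion of both P and S.

```python
# EX-forced successors of each AND-node, read off the successor assertions:
forced = {'P': {'S'}, 'R': {'T'}, 'S': {'P', 'R'}, 'T': set()}
needs_AFT = {'P', 'S'}           # the conjuncts AG(P => AF T), AG(S => AF T)

def fulfills_AFT(nodes, forced):
    """Least fixpoint: a node fulfils AF T if it is T itself, or if its
    (nonempty) set of forced successors already fulfils AF T."""
    f = {'T'} & nodes
    changed = True
    while changed:
        changed = False
        for n in nodes - f:
            if forced[n] and forced[n] <= f:
                f.add(n)
                changed = True
    return f

nodes = set(forced)
while True:
    ok = fulfills_AFT(nodes, forced)
    bad = {n for n in nodes if n in needs_AFT and n not in ok}
    if not bad:
        break
    nodes -= bad                 # delete nodes with unfulfillable eventualities
print(sorted(nodes))             # ['R', 'T']
```

The surviving nodes R and T carry the model induced by the pruned tableau.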
(P ∨ R)
AG(P ∨ R ∨ S ∨ T)
AG(P ⇒ AXS ∧ EXS)
AG(R ⇒ AX(R ∨ T) ∧ EXT)
AG(S ⇒ AX(P ∨ R) ∧ EXP ∧ EXR)
AG(T ⇒ AX(P ∨ T) ∧ EX(P ∨ T))
AG(P ⇒ AFT)
AG(S ⇒ AFT)
We get the initial tableau shown in Figure 4 (i). Since AFT is not fulfillable
at node P, delete P. Propagate the deletion to any incident edges as well as to
OR-nodes whose only successor was P.
The resulting tableau is shown in Figure 4 (ii). Node S violates its successor
assertion because it no longer has a P successor. Thus, S is deleted along with its
now spurious successor OR-node.
In Figure 4 (iii) the final pruned tableau is shown. It induces the model shown
in Figure 4 (iv).
[Figure 4: (i) the initial tableau; (ii) the tableau after deleting P; (iii) the final
pruned tableau; (iv) the induced model]
subgraph C. The latter means that if any AF(Q ∨ ... ∨ Q′) appears in C, then
one of Q, ..., Q′ also appears in C. Thus it suffices to build the initial tableau,
split it into SCCs, and delete non-self-fulfilling SCCs, until stabilization.
AG(P ∨ Q ∨ R ∨ S ∨ T)
AG(P ⇒ AXQ)
AG(R ⇒ AXP)
AG(T ⇒ AX(P ∨ R))
AG(Q ⇒ AX(R ∨ S))
AG(S ⇒ AX(R ∨ S))
AG(Q ⇒ AFP)
AG(S ⇒ AFT)
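Run on the specification just given, the SCC-based procedure can be sketched as follows. The dictionary encoding of the assertions is invented, and the deletion rule is my simplification: within each cyclic SCC, delete the nodes whose own AF-obligations cannot be met inside that SCC, and repeat until nothing changes.

```python
# Successor graph from the AX-assertions; af[n] lists the goal-sets of
# n's AF-conjuncts (AG(n => AF(q1 v ... v qk)) becomes the set {q1,...,qk}).

def tarjan_sccs(succ):
    """Standard Tarjan strongly-connected-components algorithm."""
    index, low, stack, on, out, counter = {}, {}, [], set(), [], [0]
    def dfs(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on.add(v)
        for w in succ[v]:
            if w not in index:
                dfs(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            c = set()
            while True:
                w = stack.pop(); on.discard(w); c.add(w)
                if w == v: break
            out.append(c)
    for v in succ:
        if v not in index: dfs(v)
    return out

def prune(succ, af):
    nodes = set(succ)
    while True:
        graph = {n: succ[n] & nodes for n in nodes}
        bad = set()
        for c in tarjan_sccs(graph):
            if len(c) > 1 or any(v in graph[v] for v in c):   # cyclic SCC only
                bad |= {v for v in c
                        for goals in af.get(v, []) if not goals & c}
        if not bad:
            return nodes
        nodes -= bad

succ = {'P': {'Q'}, 'Q': {'R', 'S'}, 'R': {'P'},
        'S': {'R', 'S'}, 'T': {'P', 'R'}}
af = {'Q': [{'P'}], 'S': [{'T'}]}
print(sorted(prune(succ, af)))   # S cannot fulfil AF T inside its SCC
```

Here S is deleted (AFT is unsatisfiable on any path that cycles through S), after which the remaining graph stabilizes; the specification is satisfiable, e.g. by the cycle P, Q, R.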
We now illustrate that the boundary between efficient decidability and worst
case intractability can be quite delicate. Certainly, RLTL is highly restricted.
While useful specifications can be formulated in RLTL, many properties cannot
be expressed. One very basic type of assertion that was omitted was an initial
assertion. Let us define Restricted Initialized Linear Temporal Logic (RILTL) to
be the logic permitting formulas which are conjunctions of assertions of the form:
where once again propositions are disjoint and exhaustive, and there is no
Euclidean Syntactic Constraint or related restriction. In other words, RILTL equals
RLTL plus an initial assertion.
Most surprisingly, this small change increases the complexity:
Theorem. The satisfiability problem for RILTL is NP-hard.
The idea behind the proof is as follows. Given a state graph M, such as that
shown in Figure 6, we can capture its structure by a formula fM that is a simple
[Figure 6: a state graph M]
7 Conclusion
References
[AK86] Apt, K. and Kozen, D., Limits for Automatic Verification of Finite State
Systems, IPL vol. 22, no. 6., pp. 307-309, 1986.
[BFGGO89] Barringer, H., Fisher, M., Gabbay, D., Gough, G., and Owens, R.,
Metatem: A Framework for Programming in Temporal Logic. In Proc. of
the REX Workshop on Stepwise Refinement of Distributed Systems: Mod-
els, Formalisms, Correctness, Mook, The Netherlands, Springer LNCS,
no. 430, June 1989.
[BKP84] Barringer, H., Kuiper, R., and Pnueli, A., Now You May Compose Tem-
poral Logic Specifications, STOC84.
[BKP86] Barringer, H., Kuiper, R., and Pnueli, A., A Really Abstract Concurrent
Model and its Temporal Logic, pp. 173-183, POPL86.
[BPM83] Ben-Ari, M., Pnueli, A. and Manna, Z. The Temporal Logic of Branching
Time. Acta Informatica vol. 20, pp. 207-226, 1983.
[BG93] Bernholtz, O., and Grumberg, O., Branching Time Temporal Logic
and Amorphous Automata, Proc. 4th Conf. on Concurrency Theory,
Hildesheim, Springer LNCS no. 715, pp. 262-277, August 1993.
[BS92] Bradfield, J., and Stirling, C., "Local Model Checking for Infinite State
Spaces", Theor. Comp. Sci., vol. 96, pp. 157-174, 1992.
[BCD85] Browne, M., Clarke, E. M., and Dill, D., Checking the Correctness of
Sequential Circuits, Proc. 1985 IEEE Int. Conf. Comput. Design, Port
Chester, NY, pp. 545-548.
[BCDM86a] Browne, M., Clarke, E. M., Dill, D., and Mishra, B., Automatic ver-
ification of sequential circuits using Temporal Logic, IEEE Trans. Comp.
C-35(12), pp. 1035-1044, 1986
[Br86] Bryant, R., Graph-based algorithms for boolean function manipulation,
IEEE Trans. on Computers, C-35(8), 1986.
[Bu62] Büchi, J. R., On a Decision Method in Restricted Second Order Arith-
metic, Proc. 1960 Inter. Congress on Logic, Methodology, and Philosophy
of Science, pp. 1-11.
[CE81] Clarke, E. M., and Emerson, E. A., Design and Verification of Synchro-
nization Skeletons using Branching Time Temporal Logic, Logics of Pro-
grams Workshop, IBM Yorktown Heights, New York, Springer LNCS no.
131., pp. 52-71, May 1981.
[CES86] Clarke, E. M., Emerson, E. A., and Sistla, A. P., Automatic Verification
of Finite State Concurrent System Using Temporal Logic, 10th ACM
Symp. on Principles of Prog. Lang., Jan. 83; journal version appears in
A C M Trans. on Prog. Lang. and Sys., vol. 8, no. 2, pp. 244-263, April
1986.
[CFJ93] Clarke, E. M., Filkorn, T., Jha, S. Exploiting Symmetry in .Temporal
Logic Model Checking, 5th International Conference on Computer Aided
Verification, Crete, Greece, June 1993.
[CGBS8] Clarke, E. M., Grumberg, O., and Brown, M., Characterizing Kripke
Structures in Temporal Logic, Theor. Comp. Sci., 1988
[CG86] Clarke, E. M., Grumberg, O. and Browne, M.C., Reasoning about Net-
works with Many Identical Finite State Processes, Proc. 5th ACM PODC,
pp. 240-248, 1986.
94
[CG87] Clarke, E. M. and Grumberg, O., Avoiding the State Explosion Problem
In Temporal Model Checking, PODC87.
[CG87b] Clarke, E. M. and Grumberg, O., Research on Automatic Verification of
Finite State Concurrent Systems, Annual Reviews in Computer Science,
2, pp. 269-290, 1987
[CM83] Clarke, E. M., Mishra, B., Automatic Verification of Asynchronous Cir-
cuits, CMU Logics of Programs Workshop, Springer LNCS no. 164, pp.
101-115, May 1983.
[CGB89] Clarke, E. M., Grumberg, O., and Brown, M., Reasoning about Many
Identical Processes, Inform. and Comp., 1989
[CS93] Cleaveland, R. and Steffen, B., A Linear-Time Model-Checking Algorithm
for the Alternation-Free Modal Mu-calculus, Formal Methods in System
Design, vol. 2, no. 2, pp. 121-148, April 1993.
[C193] Cleaveland, R., Analyzing Concurrent Systems using the Concurrency
Workbench, Functional Programming, Concurrency, Simulation, and Au-
tomated Reasoning Springer LNCS no. 693, pp. 129-144, 1993.
[CVW86] Courcoubetis, C., Vardi, M. Y., and Wolper, P. L., Reasoning about Fair
Concurrent Programs, Proc. 18th STOC, Berkeley, Cal., pp. 283-294, May
86.
[CM90] Coudert, O., and Madre, J. C., Verifying Temporal Properties of Sequen-
tial Machines without building their State Diagrams, Computer Aided
Verification '90, E. M. Clarke and R. P. Kurshan, eds., DIMACS, Series,
pp. 75-84, June 1990.
[DGG93] Dams, D., Grumberg, O., and Gerth, R., Generation of Reduced Models
for checking fragments of CTL, CAV93, Springer LNCS no. 697, 1993.
[Di76] Dijkstra, E. W. , A Discipline of Programming, Prentice-Hall, 1976.
[DC86] Dill, D. and Clarke, E.M., Automatic Verification of Asynchronous Cir-
cuits using Temporal Logic, IEEE Proc. 133, Pt. E 5, pp. 276-282, 1986.
[Em81] Emerson, E. A., Branching Time Temporal Logics and the Design of
Correct Concurrent Programs, Ph.D. Dissertation, Division of Applied
Sciences, Harvard University, August 1981.
[Em83] Emerson, E. A., Alternative Semantics for Temporal Logics, Theor.
Comp. Sci., v. 26, pp. 121-130, 1983.
[Em85] E.A. Emerson, "Automata, Tableaux, and Temporal Logics", Proc. Work-
shop on Logics of Programs, Brooklyn College, pp. 79-87, Springer LNCS
no. 193, June 1985.
[EC80] Emerson, E. A., and Clarke, E. M., Characterizing Correctness Prop-
erties of Parallel Programs as Fixpoints. Proc. 7th Int. Colloquium on
Automata, Languages, and Programming, Lecture Notes in Computer
Science #85, Springer-Verlag, 1981.
[EC82] Emerson, E. A., and Clarke, E. M., Using Branching Time Temporal
Logic to Synthesize Synchronization Skeletons, Science of Computer Pro-
gramming, vol. 2, pp. 241-266, Dec. 1982.
[EES90] Emerson, E. A., Evangelist, M., and Srinivasan, J., On the Limits of
Efficient Temporal Satisfiability, Proc. of the 5th Annual IEEE Symp. on
Logic in Computer Science, Philadelphia, pp. 464-477, June 1990.
[EH85] Emerson, E. A., and Halpern, J. Y., Decision Procedures and Expressive-
ness in the Temporal Logic of Branching Time, Journal of Computer and
System Sciences, vol. 30, no. 1, pp. 1-24, Feb. 85.
[EH86] Emerson, E. A., and Halpern, J. Y., 'Sometimes' and 'Not Never' Revis-
ited: On Branching versus Linear Time Temporal Logic, JACM, vol. 33,
no. 1, pp. 151-178, Jan. 86.
[EJ88] Emerson, E. A. and Jutla, C. S., "Complexity of Tree Automata and
Modal Logics of Programs", Proc. 29th IEEE Foundations of Computer
Sci., 1988
[EJ89] Emerson, E. A., and Jutla, C. S., On Simultaneously Determinizing and
Complementing ω-automata, Proceedings of the 4th IEEE Symp. on Logic
in Computer Science (LICS), pp. 333-342, 1989.
[EJ91] Emerson, E. A., and Jutla, C. S., "Tree Automata, Mu-Calculus, and
Determinacy", Proc. 32nd IEEE Symp. on Found. of Comp. Sci., 1991.
[EJS93] Emerson, E. A., Jutla, C. S., and Sistla, A. P., On Model Checking for
Fragments of the Mu-calculus, Proc. of 5th Inter. Conf. on Computer
Aided Verification, Elounda, Greece, Springer LNCS no. 697, pp. 385-
396, 1993.
[EL86] Emerson, E. A., and Lei, C.-L., Efficient Model Checking in Fragments
of the Mu-calculus, IEEE Symp. on Logic in Computer Science (LICS),
Cambridge, Mass., June, 1986.
[EL87] Emerson, E. A., and Lei, C.-L., Modalities for Model Checking: Branch-
ing Time Strikes Back, pp. 84-96, ACM POPL85; journal version appears
in Sci. Comp. Prog., vol. 8, pp. 275-306, 1987.
[ES93] Emerson, E. A., and Sistla, A. P., Symmetry and Model Checking, 5th
International Conference on Computer Aided Verification, Crete, Greece,
June 1993; full version to appear in Formal Methods in System Design.
[ES83] Emerson, E. A., and Sistla, A. P., Deciding Full Branching Time Logic,
Proc. of the Workshop on Logics of Programs, Carnegie-Mellon Univer-
sity, Springer LNCS no. 164, pp. 176-192, June 6-8, 1983; journal version
appears in Information & Control, vol. 61, no. 3, pp. 175-201, June 1984.
[ESS89] Emerson, E. A., Sadler, T. H., and Srinivasan, J., Efficient Temporal
Reasoning, pp. 166-178, 16th ACM POPL, 1989.
[Em87] Emerson, E. A., Uniform Inevitability is Finite Automaton Ineffable, In-
formation Processing Letters, v. 24, pp. 77-79, 30 January 1987.
[Em90] Emerson, E. A., Temporal and Modal Logic, in Handbook of Theoretical
Computer Science, vol. B, (J. van Leeuwen, ed.), Elsevier/North-Holland,
1991.
[FL79] Fischer, M. J., and Ladner, R. E., Propositional Dynamic Logic of Regular
Programs, JCSS vol. 18, pp. 194-211, 1979.
[Fr86] Francez, N., Fairness, Springer-Verlag, New York, 1986
[GPSS80] Gabbay, D., Pnueli A., Shelah, S., Stavi, J., On The Temporal Analy-
sis of Fairness, 7th Annual ACM Symp. on Principles of Programming
Languages, 1980, pp. 163-173.
[GS92] German, S. M. and Sistla, A. P. Reasoning about Systems with many
Processes, Journal of the ACM, July 1992, Vol 39, No 3, pp 675-735.
[GH82] Gurevich, Y., and Harrington, L., "Trees, Automata, and Games", 14th
ACM STOC, 1982.
[HT87] Hafer, T., and Thomas, W., Computation Tree Logic CTL* and Path
Quantifiers in the Monadic Theory of the Binary Tree, ICALP87.
[HM84] Halpern, J. Y., and Moses, Y., Knowledge and Common Knowledge in a
Distributed Environment, Proc. 3rd ACM Symp. PODC, pp. 50-61, 1984.
[HS86] Halpern, J. Y. and Shoham, Y., A Propositional Modal Logic of Time
Intervals, IEEE LICS, pp. 279-292, 1986.
[Ha79] Harel, D., Dynamic Logic: Axiomatics and Expressive Power, PhD Thesis,
MIT, 1979; also available in Springer LNCS Series no. 68, 1979.
[Ha84] Harel, D., Dynamic Logic, in Handbook of Philosophical Logic vol. II:
Extensions of Classical Logic, ed. D. Gabbay and F. Guenthner, D. Reidel
Press, Boston, 1984, pp. 497-604.
[HS84] Hart, S., and Sharir, M., Probabilistic Temporal Logics for Finite and
Bounded Models, 16th STOC, pp. 1-13, 1984.
[Ho78] Hoare, C. A. R., Communicating Sequential Processes, CACM, vol. 21,
no. 8, pp. 666-676, 1978.
[Ha82] Hailpern, B., Verifying Concurrent Processes Using Temporal Logic,
Springer-Verlag LNCS no. 129, 1982.
[HO80] Hailpern, B. T., and Owicki, S. S., Verifying Network Protocols Using
Temporal Logic, In Proceedings Trends and Applications 1980: Computer
Network Protocols, IEEE Computer Society, 1980, pp. 18-28.
[HR72] Hossley, R., and Rackoff, C., The Emptiness Problem for Automata on
Infinite Trees, Proc. 13th IEEE Symp. Switching and Automata Theory,
pp. 121-124, 1972.
[ID93] Ip, C-W. N., Dill, D. L., Better Verification through Symmetry, CHDL,
April 1993.
[Je94] Jensen, K., Colored Petri Nets: Basic Concepts, Analysis Methods, and
Practical Use, vol. 2: Analysis Methods, EATCS Monographs, Springer-
Verlag, 1994.
[JR91] Jensen, K., and Rozenberg, G. (eds.), High-level Petri Nets: Theory and
Application, Springer-Verlag, 1991.
[Ka68] Kamp, Hans, Tense Logic and the Theory of Linear Order, PhD Disser-
tation, UCLA, 1968.
[Ko87] Koymans, R., Specifying Message Buffers Requires Extending Temporal
Logic, PODC87.
[OL82] Owicki, S. S., and Lamport, L., Proving Liveness Properties of Concurrent
Programs, ACM Trans. on Programming Languages and Syst., Vol. 4, No.
3, July 1982, pp. 455-495.
[PW84] Pinter, S., and Wolper, P. L., A Temporal Logic for Reasoning about
Partially Ordered Computations, Proc. 3rd ACM PODC, pp. 28-37, Van-
couver, Aug. 84
[Pi90] Pixley, C., A Computational Theory and Implementation of Sequen-
tial Hardware Equivalence, CAV'90 DIMACS series, vol.3 (also DIMACS
Tech. Report 90-3 1), eds. R. Kurshan and E. Clarke, June 1990.
[Pi92] Pixley, C., A Theory and Implementation of Sequential Hardware Equiv-
alence, IEEE Transactions on Computer-Aided Design, pp. 1469-1478,
vol. 11, no. 12, 1992.
[Pe81] Peterson, G. L., Myths about the Mutual Exclusion Problem, Inform.
Process. Letters, vol. 12, no. 3, pp. 115-116, 1981.
[Pn77] Pnueli, A., The Temporal Logic of Programs, 18th Annual IEEE-CS Symp.
on Foundations of Computer Science, pp. 46-57, 1977.
[Pn81] Pnueli, A., The Temporal Semantics of Concurrent Programs, Theor.
Comp. Sci., vol. 13, pp 45-60, 1981.
[Pn83] Pnueli, A., On The Extremely Fair Termination of Probabilistic Algo-
rithms, 15th Annual ACM Symp. on Theory of Computing, 1983, pp. 278-290.
[Pn84] Pnueli, A., In Transition from Global to Modular Reasoning about Con-
current Programs, in Logics and Models of Concurrent Systems, ed. K.
R. Apt, Springer, 1984.
[Pn85] Pnueli, A., Linear and Branching Structures in the Semantics and Logics
of Reactive Systems, Proceedings of the 12th ICALP, pp. 15-32, 1985.
[Pn86] Pnueli, A., Applications of Temporal Logic to the Specification and Ver-
ification of Reactive Systems: A Survey of Current Trends, in Current
Trends in Concurrency: Overviews and Tutorials, ed. J. W. de Bakker,
W. P. de Roever, and G. Rozenberg, Springer LNCS no. 224, 1986.
[PR89] A. Pnueli and R. Rosner, On the Synthesis of a Reactive Module, 16th
Annual ACM Symp. on Principles of Programming Languages, pp. 179-190,
Jan. 1989.
[PR89b] A. Pnueli and R. Rosner, On the Synthesis of an Asynchronous Reactive
Module, Proc. 16th Int'l Colloq. on Automata, Languages, and Program-
ming, Stresa, Italy, July 1989, pp. 652-671, Springer-Verlag LNCS no.
372.
[Pr81] Pratt, V., A Decidable Mu-Calculus, 22nd FOCS, pp. 421-427, 1981.
[Pr67] Prior, A., Past, Present, and Future, Oxford Press, 1967.
[RU71] Rescher, N., and Urquhart, A., Temporal Logic, Springer-Verlag, 1971.
[QS82] Queille, J. P., and Sifakis, J., Specification and verification of concurrent
programs in CESAR, Proc. 5th Int. Symp. Prog., Springer LNCS no. 137,
pp. 195-220, 1982.
[QS83] Queille, J. P., and Sifakis, J., Fairness and Related Properties in Transi-
tion Systems, Acta Informatica, vol. 19, pp. 195-220, 1983.
Preface
The study of Process Algebra has received a great deal of attention since the
pioneering work in the 1970s of the likes of R. Milner and C.A.R. Hoare. This
attention has been merited as the formalism provides a natural framework for
describing and analysing systems: concurrent systems are described naturally
using constructs which have intuitive interpretations, such as notions of abstrac-
tions and sequential and parallel composition.
The goal of such a formalism is to provide techniques for verifying the cor-
rectness of a system. Typically this verification takes the form of demonstrat-
ing the equivalence of two systems expressed within the formalism, respectively
representing an abstract specification of the system in question and its imple-
mentation. However, any reasonable process algebra allows the description of
any computable function, and the equivalence problem--regardless of what rea-
sonable notion of equivalence you consider--is readily seen to be undecidable in
general. Much can be accomplished by restricting attention to (communicating)
finite-state systems where the equivalence problem is just as quickly seen to be
decidable. However, realistic applications, which typically involve infinite enti-
ties such as counters or timing aspects, can only be approximated by finite-state
systems. Much interest therefore lies in the problem of identifying classes of
infinite-state systems in which the equivalence problem is decidable.
Such questions are not new in the field of theoretical computer science. Since
the proof by Moore [50] in 1956 of the decidability of language equivalence for
finite-state automata, language theorists have been studying the decidability
problem over classes of automata which express languages which are more ex-
pressive than the class of regular languages generated by finite-state automata.
Bar-Hillel, Perles and Shamir [3] were the first to demonstrate in 1961 that the
class of languages defined by context-free grammars was too wide to permit a
decidable theory for language equivalence. The search for a more precise divid-
ing line is still active, with the most outstanding open problem concerning the
decidability of language equivalence between deterministic push-down automata.
When exploring the decidability of the equivalence checking problem, the
first point to settle is the notion of equivalence which you wish to consider.
In these notes we shall be particularly interested not in language equivalence
but in bisimulation equivalence as defined by Park and used to great effect by
Milner. Apart from being the fundamental notion of equivalence for several
process algebraic formalisms, this behavioural equivalence has several pleasing
mathematical properties, not least of which being that--as we shall discover--it
is decidable over process classes for which all other common equivalences remain
undecidable, in particular over the class of processes defined by context-free
grammars. Furthermore in a particularly interesting class of processes--namely
the normed deterministic processes--all of the standard equivalences coincide,
so it is sensible to concentrate on the most mathematically tractable equivalence
when analysing properties of another equivalence. In particular, by studying
bisimulation equivalence we shall rediscover old theorems about the decidability
of language equivalence, as well as provide more efficient algorithms for these
decidability results than have previously been presented. We expect that the
techniques which can be exploited in the study of bisimulation equivalence will
prove to be useful in tackling other language theoretic problems, notably the
problem of deterministic push-down automata.
The production rules are extended to be defined over the domain (V ∪ T)* by
allowing γXβ → γαβ for each γ, β ∈ (V ∪ T)* whenever X → α is a production
rule of the grammar. A word w ∈ T* (that is, a string of terminals) is generated
by a string α ∈ (V ∪ T)* iff α →* w. The (context-free) language defined by the
grammar, denoted L(G), is the set of words which can be generated from the
start symbol S. More generally, the language L(α) generated by a string α is
the set of words which it can generate, and hence L(G) = L(S).
The norm of a string of symbols α ∈ (V ∪ T)*, written norm(α), is the
length of a shortest word which can be generated from α via productions in P.
In particular, the norm of the empty string ε is 0; the norm of a terminal symbol
a ∈ T is 1; and the norm is additive, that is, norm(αβ) = norm(α) + norm(β).
A grammar is normed iff all of its variables have finite norm. Notice that the
language defined by a grammar is nonempty exactly when its start symbol has
finite norm.
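Norms can be computed by a least-fixpoint iteration: start with every variable's norm at infinity and repeatedly relax it over all productions, using additivity. A small sketch, with an invented grammar encoding (upper-case symbols are variables, lower-case are terminals):

```python
import math

# productions[X] = list of right-hand sides, each a string over
# variables (upper case) and terminals (lower case).

def norms(productions):
    """Compute norm(X) for every variable X by fixpoint iteration."""
    n = {x: math.inf for x in productions}

    def norm_of(rhs):
        # Additivity: a terminal contributes 1, a variable its current norm.
        return sum(1 if s.islower() else n[s] for s in rhs)

    changed = True
    while changed:
        changed = False
        for x, rhss in productions.items():
            best = min(norm_of(r) for r in rhss)
            if best < n[x]:
                n[x] = best
                changed = True
    return n

# X -> aXb | c  and  Y -> dXY | e : shortest words are "c" and "e".
print(norms({'X': ['aXb', 'c'], 'Y': ['dXY', 'e']}))   # {'X': 1, 'Y': 1}
```

A variable whose norm stays infinite generates no word at all, which matches the remark above: the grammar's language is nonempty exactly when the start symbol's norm is finite.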
A grammar is guarded iff each of its production rules is of the form X → aα
where a ∈ T. If moreover each α ∈ V* then the grammar is in Greibach normal
form (GNF). If furthermore each such α is of length at most k, then it is in
k-Greibach normal form (k-GNF). A 1-GNF grammar is called a regular grammar,
as such grammars generate precisely the regular languages which do not contain
the empty string ε. Finally, if within a guarded grammar we have that α = β
whenever X → aα and X → aβ are both production rules of the grammar for
Example 1 Consider the grammar G = ({X, Y}, {a, b}, P, X) where P con-
1.2 Processes
• S is a set of states;
A process is image-finite if for each α ∈ S and each a ∈ A the set
{β : α --a--> β} is finite. We also refer to states of a process as being image-finite if the
process itself is image-finite. Finally, if we have that β = γ whenever α --a--> β
and α --a--> γ are both transitions of the process for some α ∈ S and some a ∈ A,
then the process is deterministic. We also refer to states of a process as being
deterministic if the process itself is deterministic.
We may abstract away from the behaviour of a process P and define the
language L(α) which is defined by a state α of the process as the set of strings
s ∈ A* such that α -s-> β where β is a terminated state, that is, where there
are no transitions evolving from β. The language generated by the process P is
then given as L(P) = L(α₀).

[Figure: the transition graph of an example process, omitted.]
From this example it is easy to see that the definition of norm in the process
sense is consistent with the definition of norm in the grammar sense. In par-
ticular, a normed grammar will give rise to a normed process. Furthermore the
language defined by the process associated with a grammar is the same as the
language defined by the grammar itself.
Notice that in the case of a CFG in Greibach normal form the state set of the
associated context-free process need only be V*, as any sequence of transitions
from any element of V* (in particular from the start state S) must lead to a
state given by another element of V*. For the same reason the state set of
the process associated with a regular grammar need only be the finite set V,
which coincides with the expectation that the regular processes (those given by
grammars generating regular languages) are finite-state processes.
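The transition relation of such a process can be read straight off a GNF grammar: the state Xγ performs action a and evolves into αγ for each rule X → aα. A minimal sketch, under an assumed encoding of each production X → aα as a pair (a, α):

```python
def cf_transitions(productions, state):
    """One-step transitions of the context-free process of a GNF grammar.

    A state is a tuple of variables.  For each production X -> a.alpha,
    encoded as the pair (a, alpha) listed under X, the state X.gamma
    performs a and evolves into alpha.gamma.  The empty tuple is the
    terminated state, with no transitions.
    """
    if not state:
        return []
    X, gamma = state[0], state[1:]
    return [(a, tuple(alpha) + gamma) for (a, alpha) in productions[X]]
```

For instance, with rules X → aXY and Y → b, the state XY performs a and becomes XYY, reflecting that only the leftmost variable may be rewritten.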
    X → aXb        X → cXd

This grammar defines the following concurrent context-free process (modulo com-
mutativity of symbols).

[Figure: transition graph omitted.]
BPA: Basic Process Algebra
A process algebra is defined by some term algebra along with a particular tran-
sitional semantic interpretation assigned to the constructs of the algebra. A
process is then given by a finite set of process equations

    Xi = Ei    (1 ≤ i ≤ n)

where each Ei is an expression over the particular term algebra with free variables
taken from the collection of the Xi. In the case of the Basic Process Algebra (BPA)
of Bergstra and Klop [4] the form of the expressions Ei is given by the following
syntax equation

    E ::= a | Xi | E + F | EF

where a is taken from some finite set of atomic actions A. Informally, each a ∈ A
represents an atomic process, Xi represents the process expression Ei, E + F
represents a choice of behaving as process E or process F, and EF represents the
sequential composition of processes E and F. Formally the semantic interpreta-
tion of these constructs is given by the least relation -> ⊆ P × A × (P ∪ {ε})
satisfying the following rules.
For the reverse direction a bit of care is needed to handle summation. Given a
BPA process defined instead by the restricted syntax

    E ::= a | Xi | E + F | aE

in which general sequential composition is replaced by action-prefixing, the
effect of this modification would be to restrict ourselves to an algebra cor-
responding to regular grammars, and hence generating the class of finite-state
processes.
Basic Parallel Processes (BPP) are defined by a process algebra given by includ-
ing a parallel combinator || within the algebra of regular processes. Hence the
term algebra is given by the following syntax equation.
    E ::= a | Xi | E + F | aE | E || F

The semantic interpretation of the new construct is given by the following rules:
from E -a-> G infer E || F -a-> G || F, and from F -a-> G infer E || F -a-> E || G.
(Note that we also absorb the symbol ε into parallel terms, so as to read E || ε
and ε || E as E.)
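In the concurrent (BPP) interpretation a state is thus a multiset of variables, and any occurrence of a variable may be rewritten. A sketch under the same assumed production encoding as before, using Python's Counter for multisets:

```python
from collections import Counter

def bpp_transitions(productions, state):
    """One-step transitions of a concurrent context-free (BPP) process.

    A state is a multiset (Counter) of variables; any occurrence of X
    may fire a rule X -> a.alpha, replacing one X by the variables of
    alpha, with all symbol order forgotten (commutativity).
    """
    moves = []
    for X in sorted(state):
        for (a, alpha) in productions[X]:
            successor = Counter(state)
            successor[X] -= 1
            successor += Counter(alpha)   # += drops non-positive counts
            moves.append((a, successor))
    return moves
```

With the rules X → aY and Y → bXX, the state {Y} performs b and becomes the multiset {X, X}, which may then fire either copy of X.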
Again it is straightforward to recognise the correspondence between concur-
rent context-free processes and BPP processes. For example, the context-free
process of Example 3, given by the grammar

    X → aY        Y → bXX

can be given as the BPP process

    { X def= aY,  Y def= b(X || X) }

This grammar defines the following concurrent context-free process.

[Figure: transition graph omitted.]
2 Bisimulation Equivalence
We can straightforwardly define language equivalence between CFGs by saying
that two grammars G1 and G2 are language equivalent, denoted G1 ∼L G2,
if L(G1) = L(G2). This definition applies equally well to processes. However
in these notes we shall be concentrating on a much stricter notion of process
equivalence. A binary relation R over states is a bisimulation iff whenever
(α, β) ∈ R, for each a ∈ A:

• if α -a-> α' then β -a-> β' for some β' with (α', β') ∈ R; and
• if β -a-> β' then α -a-> α' for some α' with (α', β') ∈ R.

Two states α and β are bisimulation equivalent or bisimilar, written α ∼ β,
iff (α, β) ∈ R for some bisimulation R.

    X → ab    X → ac    Y → aZ    Z → b    Z → c

[Figure: the transition graphs of X and Y, omitted.]
Lemma 9 If α ∼ β and α -s-> α' for s ∈ A*, then β -s-> β' such that α' ∼ β'.

Corollary 11 If α ∼ β then norm(α) = norm(β).
Proof It suffices to demonstrate that the relation { (α, β) : α ∼L β and α, β
are normed and deterministic } is a bisimulation. □
The states X and Y are clearly not bisimilar, as the state Z cannot be bisimilar
to a^k for any k ≥ 0. However X ∼k Y for each k ≥ 0, as Z ∼k a^k.
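The approximants ∼k used in this example are directly computable on a finite-state process by iterating the two matching clauses k times, starting from the full relation. A minimal sketch (the dict encoding of the transition relation is an assumption):

```python
def bisim_approximant(trans, k):
    """The stratified approximants: ~0 relates all states, and s ~(k+1) t
    iff every transition of each of s and t can be matched by the other
    into ~k.  trans maps a state to its list of (action, successor) pairs.
    """
    states = list(trans)
    R = {(s, t) for s in states for t in states}
    for _ in range(k):
        # each round refines the previous approximant by one matching step
        R = {(s, t) for (s, t) in R
             if all(any(a == b and (s2, t2) in R for (b, t2) in trans[t])
                    for (a, s2) in trans[s])
             and all(any(a == b and (s2, t2) in R for (b, s2) in trans[s])
                     for (a, t2) in trans[t])}
    return R
```

On the (hypothetical) pair s = a.0 and t = a.a.0 this correctly reports s ∼1 t but s ≁2 t, mirroring how the approximants separate states only at sufficient depth.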
    { (α, β) : there exists γ such that norm(γ) < ∞ and αγ ∼ βγ }
Notice that this theorem fails for unnormed processes. The reason for failure
is immediately apparent from the observation that α ∼ αβ for any unnormed α
and any β.
Case I. If kj > 0 for some j < i, then we may let α perform some norm-
reducing transition via process Pj. Process β cannot match this transition,
as it cannot increase the exponent li without decreasing the exponent of
some prime with norm greater than that of Pi.

Case II. If kj > 0 for some j > i, then we may let α perform a norm-reducing
transition via process Pj that maximises (after reduction into primes) the
increase in the exponent ki. Again the process β is unable to match this
transition.
Proof Immediate. □
and S' = Y_S. The production rules of P' are defined by Y_a → a for each a ∈ T,
and Y_X → aE_α for each X ∈ V and each rule X → aα in P, where E_ε = ε and
E_σα = Y_σ E_α for σ ∈ V ∪ T.
Lemma 22 C(G) ∼ C(G').

Proof As for the previous case we can demonstrate that R = { (α, E_α) : α ∈
(V ∪ T)* } is a bisimulation relation by demonstrating that α -a-> β if and only
if E_α -a-> E_β, for all α ∈ (V ∪ T)*. Suppose then that α -a-> β. This can
occur in one of two ways: either α = γaδ and β = γδ, in which case we have
E_α = E_γ Y_a E_δ -a-> E_γ E_δ = E_β; or else α = γXδ and β = γηδ where X → aη is
a rule of P, in which case we have E_α = E_γ Y_X E_δ -a-> E_γ E_η E_δ = E_β. Similarly
we can show that if E_α -a-> E_β then α -a-> β. □
then valid, though the resulting grammars may be exponentially larger than the
original grammars. We leave it to the reader to verify these facts, in particular
by considering the transformation of the weakly guarded grammar given by
X1 → a, X1 → b, and for each 0 < k < n, X_{k+1} → X_k a and X_{k+1} → X_k b.
A (bisimulation) equivalent grammar in Greibach normal form is necessarily
exponentially larger than this original grammar.
3 Decidability Results
If we consider the class of regular (finite-state) processes, then bisimulation
equivalence is readily seen to be decidable: to check if α ∼ β we simply need
to enumerate all binary relations over the finite state space of α and β which
include the pair (α, β), and check if any of these is by definition a bisimulation
relation. Language equivalence is similarly known to be decidable for finite-state
automata. This can be seen by noting that regular grammars can be transformed
easily into normed deterministic grammars; the decidability of language equiv-
alence then follows from the decidability of bisimilarity along with Corollary 10
and Lemma 12.
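Enumerating all binary relations is of course exponential; the same decidability is more usually realised by computing the largest bisimulation as a greatest fixpoint, deleting pairs that violate the matching clauses until the relation stabilises. A minimal sketch (this is not the enumeration procedure just described, nor the far more efficient partition-refinement algorithms of Paige and Tarjan [52] or Kanellakis and Smolka [41]):

```python
def bisimilar(trans, p, q):
    """Decide p ~ q on a finite-state process.  Start from the full
    relation on states and repeatedly delete pairs failing the two
    bisimulation clauses; the stable result is the largest bisimulation.
    trans maps each state to its list of (action, successor) pairs.
    """
    R = {(s, t) for s in trans for t in trans}
    changed = True
    while changed:
        changed = False
        for (s, t) in sorted(R):
            matched = (all(any(a == b and (s2, t2) in R
                               for (b, t2) in trans[t])
                           for (a, s2) in trans[s])
                       and all(any(a == b and (s2, t2) in R
                                   for (b, s2) in trans[s])
                               for (a, t2) in trans[t]))
            if not matched:
                R.discard((s, t))
                changed = True
    return (p, q) in R
```

The loop removes a pair per failed check and re-examines the survivors, so it terminates after at most |S|² deletions.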
As soon as we move to a class of infinite-state processes, the decision problem
for bisimulation equivalence becomes nontrivial. For image-finite processes, the
nonequivalence problem is quickly seen to be semi-decidable (given the com-
putability of the transition relation) using Lemma 14, noting that the relations
∼k are all trivially computable. However there would appear to be potentially
infinitely many pairs of states to check individually in order to verify the bisim-
ilarity of a pair of states. We may also be led to believe that the equivalence
problem for bisimulation checking is undecidable, given the classic result con-
cerning the undecidability of language equivalence.
However it turns out that bisimulation equivalence is in fact decidable for
both context-free processes and concurrent context-free processes. In this section
we shall concentrate on presenting these two results. The result concerning
context-free processes is due to Christensen, Hüttel and Stirling [18], while that
concerning concurrent context-free processes is due to Christensen, Hirshfeld
and Moller [15]. One immediate corollary of the former result is the result of
Korenjak and Hopcroft [44] that language equivalence is decidable for simple
grammars.
We shall henceforth simplify our study, using the results of the previous
section, by assuming that our grammars are in Greibach normal form. Let us
then fix some Greibach normal form grammar G = (V, T, P, S). Our problem is
to decide α ∼ β for α, β ∈ V*, first in the case where we interpret the grammar
as a context-free process, and then in the case where we interpret the grammar
as a concurrent context-free process. Notice that such processes are image-finite,
Hence the definition of a Caucal base differs from that of a bisimulation only in
how the derivative states α' and β' are related; in defining R to be a bisimulation,
we would need these derivative states to be related by R itself and not just by
the (typically much larger) congruence ≡_R. A Caucal base then is in some sense
a basis for a bisimulation. The importance of this idea is encompassed in the
following theorem.

Theorem 24 (Caucal) If R is a Caucal base, then ≡_R is a bisimulation. In
particular, ≡_R ⊆ ∼.
Proof Immediate. □
Proof We need simply check that each pair (α, β) of the finite relation R
satisfies the two clauses of the definition of a Caucal base, which requires testing
(in parallel) if each transition of one of α and β has a matching transition
from the other. This matching test, that is, checking if the derivative states
are related by ≡_R, is itself semi-decidable, as the relation ≡_R is semi-decidable.
□
Proof Let R = { (γα, δβ) : α ∼ γα and β ∼ δβ for some γ, δ ≠ ε }. We may
demonstrate straightforwardly that ∼ ∪ R is a bisimulation, from which we may
deduce our desired result. □
Proof We shall show that R = { (α, β) : αγ ∼ βγ for infinitely many non-
bisimilar γ } is a bisimulation, from which our result will follow.
Let (α, β) ∈ R, and suppose that α -a-> α'. Since α ≠ ε, by Lemma 27 there
can only be one γ (up to bisimilarity) such that αγ ∼ γ, so we must have that
We may split the set of variables into two disjoint sets V = N ∪ U, with the
variables in N being normed and those in U being unnormed. Our motive in
this is based on the following lemma.

bisimulation. □

Hence we need only ever consider states α ∈ N* ∪ N*U, the others being im-
mediately rewritable into such a bisimilar state by erasing all symbols following
the first unnormed variable.
The argument relies on recognising when a term may be broken down into a
composition of somehow simpler terms. To formalise this concept we start with
the following definition.

• Y ∼ Xγ and γβ ∼ α.
The situation would be clear if all bisimilar pairs were decomposable; indeed
we shall exploit this very property of normed processes, which follows from
our unique decomposability result, in Section 4. However we can demonstrate
that there is in some sense only a finite number of ways that decomposability can
fail. This crucial point in our argument is formalised in the following lemma. We
consider two pairs (Xα, Yβ) and (Xα', Yβ') to be distinct if α ≁ α' or β ≁ β'.

Proof If X, Y ∈ U then clearly R can contain at most the single pair (X, Y).
Proof Firstly, R0 and R1 must both be finite. Also we must have ≡_R ⊆ ∼.
We will demonstrate by induction on ⊑ that Xα ∼ Yβ implies Xα ≡_R Yβ.
If (Xα, Yβ) is decomposable, then X, Y ∈ N and (without loss of generality)
we may assume that X ∼ Yγ and γα ∼ β. Then s(γα) < s(Yγα) = s(Xα) and
s(β) < s(Yβ), so (γα, β) ⊏ (Xα, Yβ). Hence by induction γα ≡_R β. Then from
(X, Yγ) ∈ R0 we get Xα ≡_R Yβ.
Suppose then that (Xα, Yβ) is not decomposable. Then (Xα', Yβ') ∈ R1
for some α' ∼ α and β' ∼ β with (α', β') ⊑ (α, β).
• If X, Y ∈ N, then (α, β), (α', β') ⊏ (Xα, Yβ), so by induction α ≡_R α' and
β ≡_R β'; hence Xα ≡_R Xα' ≡_R Yβ' ≡_R Yβ.
• If X ∈ N and Y ∈ U, then β = β' = ε, and by induction α ≡_R α', so
Xα ≡_R Xα' ≡_R Y. A symmetric argument applies for the case when X ∈ U
and Y ∈ N.
• If X, Y ∈ U, then α = α' = β = β' = ε and (X, Y) ∈ R1, so Xα ≡_R Yβ.
□
Definition 35 X1^k1 ··· Xn^kn ⊏ X1^l1 ··· Xn^ln iff there exists j such that kj < lj
and for all i < j we have that ki = li.

    unf(X1^k1 ··· Xn^kn) = Σ_{1≤i≤n, ki>0}  Σ_{Xi → a X1^l1 ··· Xn^ln}  a X1^(k1+l1) ··· Xi^(ki+li−1) ··· Xn^(kn+ln)
REC
    α = β
    ─────────────────
    unf(α) = unf(β)

SUM
    Σ_{i=1..p} ai αi = Σ_{j=1..q} bj βj
    ─────────────────────────────────────────────────────────────
    { ai αi = b_{f(i)} β_{f(i)} }_{i=1..p}    { bj βj = a_{g(j)} α_{g(j)} }_{j=1..q}

    where f : {1,...,p} → {1,...,q} and g : {1,...,q} → {1,...,p}

PREFIX
    aα = aβ
    ─────────
    α = β

Table 1: Rules of the tableau system.
    X1 = X2
    │ REC
    aX1X4 = aX3
    │ PREFIX
    X1X4 = X3
    │ SUBL
    X2X4 = X3
    │ REC
    aX3X4 + bX2 = aX3X4 + bX2
    │ SUM
    aX3X4 = aX3X4            bX2 = bX2
    │ PREFIX                 │ PREFIX
    X3X4 = X3X4              X2 = X2
Proof Suppose that we have a tableau with root labelled α = β. It can only
be infinite if there exists an infinite path through it, as every node has finite
We now proceed to show the soundness and completeness of the tableau system.
The proof of soundness of the tableau system relies on the alternative stratified
characterisation of bisimulation equivalence.

• If PREFIX is applied to n_i, then the consequent is n_{i+1} and m_{i+1} = m_i − 1.
4 Algorithms for Normed Processes
In the previous section we demonstrated the decidability of bisimulation equiv-
alence over both the class of context-free processes and the class of concurrent
context-free processes. However, the computational complexity of the algorithms
which we presented shows them to be of little practical value. To overcome this
deficiency we concentrate in this section on developing efficient algorithms for
deciding bisimilarity within these classes of processes. What we demonstrate
in fact are polynomial algorithms for the problem of deciding equivalences over
the subclasses of normed processes. These algorithms will both be based on
an exploitation of the decomposition properties enjoyed by normed processes;
however, despite the apparent similarity of the two problems, different methods
appear to be required.
For our algorithms, we fix a normed context-free grammar G = (V, T, P, S)
in Greibach normal form. Our problem then is to determine efficiently, that
is, in time which is polynomial in the size n (the number of symbols in the
production rules) of the grammar, whether or not α ∼ β for α, β ∈ V*, where
we interpret these first as context-free processes and then as concurrent context-
free processes.
Our basic idea is to exploit the unique prime decomposition theorem by de-
composing process terms sufficiently far to be able to establish or refute the
In general, the refinement step makes progress, i.e., the new base B' is strictly
contained in the base B from which it was derived. If, however, no progress
occurs, so that B' = B, then an important deduction may be made: the base B
must be full.
Proof Suppose Y ∼ Xβ with X ≤ Y, and let v = norm(X); then (Y, X[Y]_v) ∈
B0 for some [Y]_v such that Y -s-> [Y]_v in v norm-reducing steps, where s ∈ A^v.
But the norm-reducing sequence Y -s-> [Y]_v can only be matched by Xβ -s-> β.
Hence [Y]_v ∼ β, and B0 must be full. □
The basic structure of our procedure for deciding bisimilarity between normed
processes α and β is now clear: simply iterate the refinement procedure B := B'
from the initial base B = B0 until it stabilises at the desired base B̄, and then
test α ≡_B̄ β. By the preceding four lemmas, this test is equivalent to α ∼ β.
So far, we have not been specific about which process [X]_v is to be selected
among those reachable from X via a sequence of v norm-reducing transitions.
A suitable choice is provided by the following recursive definition. For each
variable X ∈ V, let α_X ∈ V* be some process reachable from X by a single
norm-reducing transition X -a-> α_X. Then

    [α]_0 = α
    [Xα]_v = [α]_{v − norm(X)}    if v ≥ norm(X)
    [Xα]_v = [α_X α]_{v − 1}      if 0 < v < norm(X)
Lemma 45 With this definition for [·]_v, the base B0 introduced in Lemma 44
may be explicitly constructed in polynomial time; in particular, every pair in B0
has a compact representation as an element of V × V*.
Recall that the only condition we impose on the relation ≡_B is that it
satisfies the inclusions ∼ ⊆ ≡_B ⊆ ≡ whenever B is full. This flexibility in the
specification of ≡_B is crucial to us, and it is only by carefully exploiting this
flexibility that a polynomial-time decision procedure for ≡_B can be achieved.
The definition and computation of ≡_B is the subject of the following section.
Central to the definition of the relation ≡_B is the idea of a decomposing func-
tion. A function g : V → V* is a decomposing function of order q if either
g(X) = X or g(X) = X1 X2 ··· Xp with 1 ≤ p ≤ q and Xi < X for each
1 ≤ i ≤ p. Such a function g can be extended to the domain V* in the obvious
fashion by defining g(ε) = ε and g(Xα) = g(X)g(α). We then define g*(ξ) for
ξ ∈ V* to be the limit of g^t(ξ) as t → ∞; owing to the restricted form of g we
know that it must be eventually idempotent, that is, that this limit must exist.
The notation g[X ↦ α] will be used to denote the function that agrees with g
at all points in V except X, where its value is α.
The definition of the relation ≡_B may now be given. For base B and de-
composing function g, the relation α ≡_B^g β is defined by the following decision
procedure:

– if (Y, Xγ) ∈ B then the result is given by α ≡_B^{g'} β, where g' = g[Y ↦ Xγ];
Proof The first inclusion is easily confirmed, since for any g constructed by the
algorithm for computing ≡_B, it is the case that X ≡_B g(X) for each X ∈ V.
For the second inclusion, suppose that α ∼ β, and that at some point in our
procedure for deciding α ≡_B β we have that g*(α) ≠ g*(β), having only ever
updated g with mappings X ↦ γ satisfying X ∼ γ. Let X and Y
(with X < Y) be the leftmost mismatching pair. Then Y ∼ Xγ must hold
for some γ, and so, by fullness, (Y, Xγ) ∈ B for some γ with Y ∼ Xγ. So the
procedure does not terminate with a false result, but instead updates g with this
new semantically sound mapping and continues. □
Finally, we are left with the problem of deciding g*(α) = g*(β), all other
elements in the definition of ≡_B being algorithmically undemanding. Note
that the words g*(α) and g*(β) will in general be of exponential (in n) length,
so we cannot afford to compute them explicitly.
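Facts about g*(α) can nevertheless be computed in polynomial time by exploiting the well-founded order on variables. The following sketch computes the length of g*(X) by memoised recursion, assuming g is encoded as a dict mapping each variable to a tuple, with g(X) = (X,) marking a fixpoint:

```python
from functools import lru_cache

def gstar_length(g):
    """Return a function giving the length of g*(X) without expanding
    the word.  Since g(X) mentions only variables strictly smaller
    than X (or is X itself), the recursion is well-founded, and
    memoisation makes the computation polynomial even though g*(X)
    may be exponentially long.
    """
    @lru_cache(maxsize=None)
    def length(X):
        if g[X] == (X,):
            return 1
        return sum(length(Y) for Y in g[X])
    return length
```

With the order-2 function g(A) = A, g(B) = AA, g(C) = BB, g(D) = CC, the word g*(D) has length 8, doubling at each level, which illustrates the exponential blow-up noted above.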
We shall begin by assuming that the function g is of order 2, that is, maps
a single variable to at most two variables; this simplification may be achieved
using a standard polynomial-time reduction to Chomsky normal form. In the
sequel, let n denote the total number of variables after this reduction to what
is essentially Chomsky normal form, and let V refer to this extended set of
variables. We say that the positive integer r is a period of the word α ∈ V* if
1 ≤ r ≤ length(α), and the symbol at position p in α is equal to the symbol at
position p + r in α, for all p in the range 1 ≤ p ≤ length(α) − r. Our argument
will be easier to follow if the following lemma is borne in mind; we state it in
the form given by Knuth, Morris and Pratt.
Proof See [43, Lemma 1]; alternatively the lemma is easily proved from first
principles. □
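The connection between periods and borders exploited by Knuth, Morris and Pratt also gives an easy way to enumerate all periods of a word: r is a period of w exactly when w has a border (a prefix that is also a suffix) of length length(w) − r. A sketch, with a Python string standing in for a word over V:

```python
def periods(w):
    """All periods of the word w, via the KMP failure function:
    r is a period of w iff w has a border of length len(w) - r."""
    n = len(w)
    fail = [0] * (n + 1)      # fail[i] = longest proper border of w[:i]
    k = 0
    for i in range(1, n):
        while k and w[i] != w[k]:
            k = fail[k]
        if w[i] == w[k]:
            k += 1
        fail[i + 1] = k
    result, b = [], fail[n]
    while b > 0:              # walk the chain of borders of w itself
        result.append(n - b)
        b = fail[b]
    result.append(n)          # n is trivially a period (empty condition)
    return result
```

For instance "abab" has periods 2 and 4, while "aaa" has periods 1, 2 and 3; the failure-function chain delivers them all in linear time.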
[Figure 1: a spanning alignment of α = g*(X) against β = g*(Y) and γ = g*(Z).]

Such alignments, which we call spanning, may be specified by giving the index
i of the symbol in α that is matched against the first symbol in γ. It happens
that the sequence of all indices i that correspond to valid alignments forms an
arithmetic progression. This fact opens the way to computing all alignments
by dynamic programming: first with the smallest variable X and Y, Z ranging
over V, then with the next smallest X and Y, Z ranging over V, and so on.
The necessary machinery is now in place, and it only remains to show how
spanning alignments of the form depicted in Figure 1 may be computed by
[Figure 2: the word α split as α'α'' about position p', aligned against βγ.]
CASE I. The alignment of α' against βγ includes position p'. These alignments
can be viewed as conjunctions of spanning alignments of α'' (which are precom-
puted) with inclusive alignments of α' (which can be computed on demand using
Lemma 50). The valid alignments in this case are thus an intersection of two
arithmetic progressions, which is again an arithmetic progression.

CASE II. The alignment of α' against βγ does not include position p', i.e., lies
entirely to the right of p'. If there are just one or two spanning alignments of α''
against β and γ, then we simply check exhaustively, using Lemma 50, which, if
any, extend to alignments of α against βγ. Otherwise, we know that α'' has the
form ρ^k σ with k ≥ 2, and σ a strict initial segment of ρ; choose ρ to minimise
length(ρ). A match of α'' will extend to a match of α only if α' = σ'ρ^m, where
σ' is a strict final segment of ρ. (Informally, α' is a smooth continuation of the
periodic word α'' to the left.) Thus either every alignment of α'' extends to one
of α = α'α'', or none does, and it is easy to determine which is the case. As in
Case I, the result is an arithmetic progression.
The above arguments were all for the situation in which it is the word α''
the first polynomial algorithm for the (language) equivalence problem for simple
grammars.

Theorem 52 There is a polynomial-time algorithm for deciding equivalence of
simple grammars.
Proof To obtain a polynomial-time decision procedure for deciding language
equivalence of simple context-free grammars, we merely recall from Corollary 10
and Lemma 12 that in the case of normed simple grammars, language equiva-
lence and bisimulation equivalence coincide. We can restrict attention to normed
grammars, as any unnormed grammar can be transformed into a language-
equivalent normed grammar by removing productions containing unnormed non-
terminals. (Note that this transformation does not preserve bisimulation equiva-
lence, which makes it inapplicable for reducing the unnormed case to the normed
case in checking bisimilarity.) Thus language equivalence of simple grammars
may be checked in polynomial time by the procedure presented in the previous
two sections. □
and

(iii) the relation ≡_D is a bisimulation provided every pair in F(D) satisfies
expansion within ≡_D; this condition may be checked by a polynomial-time
algorithm;

(iv) the maximal bisimulation ∼ coincides with the congruence ≡_D, where D
represents the unique decomposition in ∼.

(i) the relation α ≡ β is decidable in polynomial time (in the sum of the sizes
of α and β);
With the three preceding lemmas in place, the procedure for deciding bisim-
ulation equivalence writes itself; in outline it goes as follows.

(1) Let the congruence ≡ be defined by α ≡ β iff norm(α) = norm(β).
We complete the section by providing the missing proofs of the various lem-
mas.

Proof of Lemma 54

Proof of Lemma 55 Define the relation = as follows: for all α, β ∈ V*, the re-
lationship α = β holds iff α ≡_D β and the pair (α, β) satisfies expansion in ≡_D.
We must demonstrate that = satisfies conditions (i)-(iv) in the statement of the
lemma.
• if Xj ≡_{D_i} P1^k1 ··· Ps^ks is the decomposition of Xj, for some j < i, then the
pair (Xj, P1^k1 ··· Ps^ks) satisfies norm-reducing expansion in ≡_{D_i};

²Again, note that Q1, ..., Qt, although primes with respect to ∼, are not in general primes
with respect to ≡_{D_i}.
CASE II. If ak > 0 for some k > j, then let α perform a norm-reducing transition
via process Pk that maximises the increase in the exponent aj. Again the
process β is unable to match this transition.
References
[1] P. Aczel. Non-well-founded Sets. CSLI Lecture Notes 14, Stanford Univer-
sity, 1988.
[2] J.C.M. Baeten, J.A. Bergstra and J.W. Klop. Decidability of bisimulation
equivalence for processes generating context-free languages. Journal of the
ACM 40, pp653-682, 1993.
[4] J.A. Bergstra and J.W. Klop. Algebra of Communicating Processes with
Abstraction. Theoretical Computer Science 37, pp77-121, 1985.
[8] S.D. Brookes, C.A.R. Hoare and A.W. Roscoe. A theory of Communicating
Sequential Processes. Journal of the ACM 31, pp560-599, 1984.
[19] J. Esparza and M. Nielsen. Decidability issues for Petri nets - a survey.
Bulletin of the EATCS 52, pp245-262, February 1994.
[20] E.P. Friedman. The inclusion problem for simple languages. Theoretical
Computer Science 1, pp297-316, 1976.
[21] R.J. van Glabbeek. The linear time-branching time spectrum. In Proceed-
ings of CONCUR 90, J. Baeten, J.W. Klop (eds), Lecture Notes in Computer
Science 458, pp278-297. Springer-Verlag, 1990.
[22] J.F. Groote. A short proof of the decidability of bisimulation for normed
BPA processes. Information Processing Letters 42, pp167-171, 1991.
[23] J.F. Groote and H. Hüttel. Undecidable equivalences for basic process al-
gebra. Information and Computation, 1994.
[24] J.F. Groote and F. Moller. Verification of parallel systems via decomposi-
tion. In Proceedings of CONCUR 92, W.R. Cleaveland (ed), Lecture Notes
in Computer Science 630, pp62-76. Springer-Verlag, 1992.
[39] D.T. Huynh and L. Tian. On deciding some equivalences for concurrent
processes. Informatique Théorique et Applications (RAIRO) 28(1), pp51-
71, 1994.
[40] P. Jančar. Decidability questions for bisimilarity of Petri nets and some
related problems. In Proceedings of STACS'94, P. Enjalbert, E.W. Mayr
and K.W. Wagner (eds), Lecture Notes in Computer Science 775, pp581-
592, Springer-Verlag, 1994.
[41] P.C. Kanellakis and S.A. Smolka. CCS expressions, finite-state processes
and three problems of equivalence. Information and Computation (86),
pp43-68, 1990.
[42] S.C. Kleene. Representation of events in nerve nets and finite automata. In
Automata Studies, pp3-42, Princeton University Press, Princeton, 1956.
[43] D.E. Knuth, J.H. Morris and V.R. Pratt. Fast pattern matching in strings.
SIAM Journal on Computing 6, pp323-350, 1977.
[51] D. Muller and P. Schupp. The theory of ends, pushdown automata and
second order logic. Theoretical Computer Science 37, pp51-75, 1985.
[52] R. Paige and R.E. Tarjan. Three partition refinement algorithms. SIAM
Journal on Computing 16, pp937-989, 1987.
[55] L.J. Stockmeyer. The polynomial time hierarchy. Theoretical Computer Sci-
ence 3, pp1-22, 1977.
Modal and Temporal Logics for Processes

Colin Stirling

Preface
1 Processes
Its meaning is that process a.E may perform (respond to, participate in, or
accept) the action a and evolve into the process E. An instance is the transition
tick.Cl -tick-> Cl. Next is the transition rule for def=, which is presented as a goal-
directed inference rule:
Provided that E may become F by performing a and that the side condition
P def= E is fulfilled, it follows that P -a-> F.
Using these two rules we can show Cl -tick-> Cl:

    Cl -tick-> Cl
    ──────────────────  (Cl def= tick.Cl)
    tick.Cl -tick-> Cl
directed labelled arcs between them. Each vertex is a process term, and one of
them is Cl, which can be thought of as the root. All the possible transitions from
each vertex, those that are provable from the rules for transitions, are represen-
ted.
A second example, a very simple vending machine, is defined in figure 2.¹
Here + (which has wider scope than .) is the choice operator from Milner's CCS:

    Ven def= 2p.big.collect_b.Ven + 1p.little.collect_l.Ven
R(+)
    E1 + E2 -a-> F        E1 + E2 -a-> F
    ──────────────        ──────────────
    E1 -a-> F             E2 -a-> F

The final transition is an axiom instance; an application of the first R(+) rule
to it yields the intermediate transition; and the goal therefore follows using the
rule R(def). The transition graph for Ven is presented in figure 3.
¹ In proofs of transitions we usually omit explicit mention of side conditions in the
application of a rule such as R(def).
    Ct_0 def= up.Ct_1 + round.Ct_0
    Ct_{i+1} def= up.Ct_{i+2} + down.Ct_i
R(Σ)
    Σ{Ei : i ∈ I} -a-> F
    ─────────────────────  (j ∈ I)
    Ej -a-> F

A special case is when the indexing set I is empty. By the rule R(Σ) this process
has no transitions, as the subgoal can never be fulfilled. In CCS this nil process
is abbreviated to 0 (and to STOP in Hoare's CSP, Communicating Sequential
Processes, [40]). Thus tick.0 can only do a single tick before terminating.
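The goal-directed rules seen so far (the prefix axiom, R(+) and its generalisation R(Σ), and the unfolding of definitions) are easy to animate. A minimal sketch under an assumed term encoding; it illustrates the rules above rather than transcribing any implementation from the text:

```python
def ccs_transitions(defs, term):
    """One-step transitions for the prefix/choice/constant fragment.

    Terms (an assumed encoding): ('pref', a, E) for a.E,
    ('sum', [E1, E2, ...]) for summation (the empty sum is the nil
    process 0), and a string naming a constant defined in defs.
    """
    if isinstance(term, str):                  # R(def): unfold P def= E
        return ccs_transitions(defs, defs[term])
    tag = term[0]
    if tag == 'pref':                          # axiom: a.E -a-> E
        return [(term[1], term[2])]
    if tag == 'sum':                           # R(+)/R(sum): any summand moves
        return [t for E in term[1] for t in ccs_transitions(defs, E)]
    raise ValueError('unknown term')
```

With Cl def= tick.Cl this reproduces the derivation Cl -tick-> Cl shown earlier, and the empty sum correctly has no transitions at all.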
Actions can be viewed as ports or channels, means by which processes can
interact. It is then also important to consider the passage of data between pro-
cesses along these channels or through these ports. In CCS input of data at a
port named a is represented by the prefix a(x).E, where a(x) binds free occur-
rences of x in E. (In CSP a(x) is written a?x.) Now a no longer names a single
action but instead represents the set {a(v) : v ∈ D} where D is the appropriate
family of data values. The transition axiom for this prefix input form is:

R(in)    a(x).E -a(v)-> E{v/x}    (v ∈ D)

where E{v/x} is the process term which is the result of replacing all free occur-
rences of x in E with v. Output at a port named a is represented in CCS by
the prefix ā(e).E where e is a data expression. The overbar symbolizes output
at the named port. (In CSP ā(e) is written a!e.) The transition rule for output
depends on extra machinery for expression evaluation. Assume that Val(e) is the
data value in D (if there is one) to which e evaluates:
    Cop def= in(x).ōut(x).Cop

    Cop -in(v)-> ōut(v).Cop

The subgoal is an instance of R(in), as (ōut(x).Cop){v/x} is ōut(v).Cop. This
latter process has only one possible transition, ōut(v).Cop -ōut(v)-> Cop, an instance
of R(out), as we assume that Val(v) is v. Whenever Cop inputs a value at in it
immediately disgorges it through out. The size of the transition graph for Cop
depends on the size of the data domain D, and is finite when D is a finite set.
Input actions and indexing can be mingled, as in the following description of
a family of registers where both i and x have type ℕ:
R(if1)
    if b then E1 else E2 -a-> E'
    ─────────────────────────────  (Val(b) = true)
    E1 -a-> E'

R(if2)
    if b then E1 else E2 -a-> E'
    ─────────────────────────────  (Val(b) = false)
    E2 -a-> E'
T(5) performs the transition sequence T(5) -ōut(5)-> T(8) -ōut(8)-> T(4) -ōut(4)-> T(2),
and then cycles through the transitions T(2) -ōut(2)-> T(1) -ōut(1)-> T(2). □
1.2 Concurrent interaction
A compelling feature of process theory is its modelling of concurrent interaction.
A prevalent approach is to appeal to handshake communication as primitive.
At any one time only two processes may communicate at a port or along a
channel. In CCS the resultant communication is a completed internal action.
Each incomplete, or observable, action a has a partner ā, its co-action. Moreover
the co-action of ā is a, which means that a is also the co-action of ā. The partner of
a parametrized action in(v) is īn(v). Simultaneously performing an action and
its co-action produces the internal action τ, which is a complete action and so
does not have a partner.
Concurrent composition of E and F is expressed as the process E | F. The
transition rules for | allow either component to act alone, from E -a-> E' inferring
E | F -a-> E' | F (and symmetrically for F); the crucial rule, which conveys
communication, infers E | F -τ-> E' | F' from E -a-> E' and F -ā-> F'.
In the first of these rules the process F does not contribute to the action a which
E performs. An example derivation is:
[Figure: flow graphs of Cop, User and Cop | User, omitted.]

A flow graph depicts the potential movement of information flowing into and out of ports, and also ex-
hibits the ports through which a process is in principle willing to communicate.
In the case of User the incoming arrow to the port labelled write represents
input, whereas the outgoing arrow from īn symbolizes output. The flow graph
for Cop | User has the crucial feature that there is a potential linkage between
the output port in of User and its input in Cop, permitting information to cir-
culate from User to Cop when communication takes place. However this port
is still available for other users: both users in Cop | User | User are able to
communicate, at different times, with Cop.
Consider now the situation where a user has private access to a copier. This is
modelled using an abstraction or encapsulation operator which conceals ports or
channels. In CCS there is a restriction operator \J where J ranges over families
of incomplete actions (thereby excluding τ). Let K be the set {in(v) : v ∈ D}
where D is the space of values that could be accessed through in. In the process
(Cop | User)\K the port in is inaccessible from the outside. Its flow graph is
pictured in figure 6, where the linkage without names at the ports represents that
they are concealed from other users. This flow graph can therefore be simplified
as in the second diagram of figure 6.
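The effect of restriction on transitions can be sketched as a filter, in the spirit of rule R(\): τ always passes, while actions in J and their co-actions are blocked. This Python fragment is illustrative only; the string encoding of actions and the trailing-apostrophe co-action convention are assumptions.

```python
TAU = "tau"  # the silent action is never restricted

def co(a):
    """Co-action under the assumed trailing-apostrophe encoding."""
    return a[:-1] if a.endswith("'") else a + "'"

def restrict(steps, J):
    """Transitions of E\\J given the transitions of E as (action, successor) pairs."""
    blocked = set(J) | {co(a) for a in J}
    return {(a, s) for a, s in steps if a == TAU or a not in blocked}

# A state of Cop | User with in restricted: the bare in step disappears,
# while the handshake (already a tau) and the outer ports survive.
steps = {("in", "s1"), ("tau", "s2"), ("out'", "s3"), ("write", "s4")}
```

Applying `restrict(steps, {"in"})` removes only the exposed in transition, which is exactly how \K enforces communication between Cop and User.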
The visual effect of \J on flow graphs is underpinned by its transition rule,
where the set ¯J is {¯a : a ∈ J}.
4 Section 2.4 provides further justification for this.
R(\)   E -a-> F  ⟹  E\J -a-> F\J,   a ∉ J ∪ ¯J
The behaviour of E\J is part of that of E. The presence of \K prevents Cop
in (Cop | User)\K from ever doing an in transition except in the context of a
communication with User. This therefore enforces communication between these
two components. The only available transition after an initial write transition,
(Cop | User)\K -write(v)-> (Cop | User')\K, is the communication whose proof is:
Road def= car.up.¯ccross.down.Road
Rail def= train.green.¯tcross.red.Rail
Signal def= ¯green.red.Signal + ¯up.down.Signal
approaches and tries to cross. The flow graphs of the components and the overall
system are depicted in figure 8, as is its transition graph. []
Fig. 8. Flow graphs of the crossing and its components, and its transition graph
Sender def= in(x).¯sm(x).Send1(x)
Send1(x) def= ms.¯sm(x).Send1(x) + ok.Sender
Medium def= sm(y).Med1(y)
Med1(y) def= ¯mr(y).Medium + τ.¯ms.Medium
Receiver def= mr(z).¯out(z).¯ok.Receiver
IO def= slot.¯bank.(lost.¯loss.IO + release(y).¯win(y).IO)
Bn def= bank.¯max(n+1).left(y).By
D def= max(z).(¯lost.¯left(z).D + Σ{¯release(y).¯left(z−y).D : 1 ≤ y ≤ z})
The trace transitions E =w=> F, for w a sequence of actions, are generated by the rules:

R(tr1)   E =ε=> E

R(tr2)   E -a-> E'  and  E' =w=> F  ⟹  E =aw=> F

For instance, the crossing of figure 7 performs the cycle Crossing =w=> Crossing
when w is train τ tcross τ.
There is an important difference between the completed internal action τ and
incomplete actions. An incomplete action is observable in the sense that it can
be interacted with in a parallel context. Assume that E may at some time perform
the action ok, and that F is a resource. Within the process (E | ¯ok.F)\{ok}
accessibility to this resource is triggered only when E performs ok. Here observation
of ok is the same as the release of the resource. The silent action τ cannot
be observed in this fashion.
Consequently an important abstraction of the behaviour of processes is away
from silent activity. Consider a slightly different copier C def= in(x).¯out(x).¯ok.C,
R(Tr)   E =w=> F  ⟹  E ==u==> F,   when u is the subsequence of observable actions of w
Observable traces can also be built from their observable components. The
extended observable transition Crossing ==train tcross==> Crossing is the result of gluing
together the two transitions Crossing ==train==> E and E ==tcross==> Crossing when
the intermediate state is either E2 or E3 of figure 8. Following [52] we therefore
define a new family of transition relations which underpin observable traces. A
label in this family is either the empty sequence ε or an observable action a:
arrow transitions. However the abundance of arcs may result in redundant ver-
tices, for when minimized with respect to notions of observable equivalence, the
graphs may be dramatically simplified as their vertices are fused. In this way
the observable transitions of a process offer an abstraction from its behaviour.
The processes Cop and User are essentially one place buffers, taking in a value
and later expelling it. Assume that B is a canonical buffer, B def= i(x).¯o(x).B.
Cop is the process B when port i is in and o is out, and User is B when i is
write and o is in. Relabelling of ports or actions can be made explicit, as in
CCS with a renaming combinator.
The crux of renaming is a function mapping actions into actions. To ensure
pleasant algebraic properties (see section 2.7) a renaming function f is subject
to a few restrictions. First it should respect complements: for any observable a
the actions f(a) and f(¯a) are co-actions. Second f should also preserve value
passing between ports: f(a(v)) is an action b(v) with the same value, and for any
other value w the action f(a(w)) is b(w). Finally it should conserve the silent
action, f(τ) = τ. Associated with any function f obeying these conditions is
the renaming operator [f], which when applied to a process E is written as E[f],
and is the process E whose actions are relabelled according to f. Following [52]
a renaming function f is usually abbreviated to its essential part: when the ai
are distinct observable actions b1/a1, ..., bn/an represents the function f which
renames ai to bi, and leaves any other action c unchanged. For instance Cop
abbreviates the process B[in/i, out/o], as we maintain the convention that in
stands for the family {in(v) : v ∈ D} and i for {i(v) : v ∈ D}, and so in/i
symbolizes the function which maps i(v) to in(v) for each v.
The transition rule for renaming is:

R([f])   E -a-> E'  ⟹  E[f] -f(a)-> E'[f]
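A renaming function of the kind just described can be sketched in Python. This is illustrative only: the dictionary encoding of the essential part b/a and the trailing-apostrophe co-action convention are assumptions, not notation from the text.

```python
TAU = "tau"

def co(a):
    """Co-action under the assumed trailing-apostrophe encoding."""
    return a[:-1] if a.endswith("'") else a + "'"

def extend(f):
    """Extend b/a pairs to co-actions, fix tau, leave other actions unchanged."""
    g = dict(f)
    g.update({co(a): co(b) for a, b in f.items()})  # respect complements
    return lambda a: TAU if a == TAU else g.get(a, a)

def rename(steps, f):
    """Transitions of E[f]: each E -a-> E' becomes E[f] -f(a)-> E'[f]."""
    g = extend(f)
    return {(g(a), s) for a, s in steps}

# Cop as B[in/i, out/o]: relabel the canonical buffer's ports
b_steps = {("i", "B1"), ("o'", "B"), ("tau", "B")}
```

Note how the extension automatically sends the co-action of o to the co-action of out, as the complement condition requires.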
The flow graph of B1 | ... | Bn is also depicted in figure 11, and contains
the intended links. The n-place buffer is the result of internalizing these links,
(B1 | ... | Bn)\{o1, ..., on−1}.
A more involved example from [52] is the construction of a scheduler from
small cycling components. Assume n tasks when n ≥ 1, and that action ai
initiates the ith task whereas bi signals its completion. The scheduler timetables
the order of task initiation, ensuring that the sequence of actions a1 ... an is
performed cyclically starting with a1. The tasks may terminate in any order,
but a task cannot be restarted until its previous performance has finished. So
the scheduler must guarantee that the actions ai and bi happen alternately for
each i.
Let Cy′ be a cycler of length four, Cy′ def= a.c.b.d.Cy′, whose flow graph is
illustrated in figure 12. In this case the flow graph is very close to the transition
graph, and so we have circled the a label to indicate that it is initially active. A
first attempt at building the required scheduler is as a ring of Cy′ cyclers.
Fig. 12. Flow graph of Cy′ and Cy′1 | Cy′2 | Cy′3 | Cy′4
E -a-> E'  and  F -b-> F'  ⟹  E × F -ab-> E' × F'   (...)

Here ab is the concurrent product of the component actions a and b, and ... may
be filled in with a side condition: in the case of | of section 1.2 the actions a and
b must be co-actions, and their concurrent product is the silent action. Other
rules permit components to act alone.
E -a-> E'  ⟹  E ||K F -a-> E' ||K F,   a ∉ K

E -a-> E'  and  F -a-> F'  ⟹  E ||K F -a-> E' ||K F',   a ∈ K
[24, 21], and the description of stochastic behaviour using probabilistic instead
of non-deterministic choice [46]. These extensions are useful for modelling hybrid
systems which involve a mixture of discrete and continuous behaviour, and can be
found in chemical plants and manufacturing.
Processes can also be used to capture foundational models of computation
such as Turing machines, counter machines, or parallel random-access machines.
This remains true for the following restricted process language where P ranges
over process names, a over actions, and I over finite sets of indices:

E ::= P  |  Σ{ai.Ei : i ∈ I}  |  E1|E2  |  E\{a}

A process is given as a finite family of definitions {Pi def= Ei : 1 ≤ i ≤ n} where
all the process names in each Ei belong to the set {P1, ..., Pn}. Although process
expressions such as the counter Ct0 (figure 4), the register Reg0 (section 1.1) and
the slot machine SM0 (figure 10) are excluded because their definitions appeal
to value passing or infinite sets of indices, their observable behaviour can be
described within this restricted process language. Consider, for example, the
following finite reformulation [68] of the counter Ct0, the process Count:
2.1 Hennessy-Milner logic
E ⊨ tt
E ⊭ ff
E ⊨ Φ ∧ Ψ   iff   E ⊨ Φ and E ⊨ Ψ
E ⊨ Φ ∨ Ψ   iff   E ⊨ Φ or E ⊨ Ψ
E ⊨ [K]Φ   iff   ∀F ∈ {E' : E -a-> E' and a ∈ K}. F ⊨ Φ
E ⊨ ⟨K⟩Φ   iff   ∃F ∈ {E' : E -a-> E' and a ∈ K}. F ⊨ Φ
Verifying that Ven has these properties is undemanding. The proofs merely
appeal to the inductive definition of the satisfaction relation between processes
and formulas. For instance Ven ⊨ [1p, 2p][1p, 2p]ff iff Venb ⊨ [1p, 2p]ff and
Venl ⊨ [1p, 2p]ff, and clearly both of these hold. Similarly establishing that
Ven lacks a property, such as ⟨1p⟩⟨1p, big⟩tt, is equally routine. Notice that
it is not necessary to construct the transition graph of a process when showing
that it has, or fails to have, a property.
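The inductive satisfaction relation translates directly into a checker. The sketch below uses a hypothetical encoding (formulas as nested tuples, transitions as a dictionary); the concrete vending-machine states Venl and Venb are assumed names, chosen to be consistent with the properties discussed, not definitions from the text.

```python
def sat(e, phi, lts):
    """Satisfaction E |= phi by recursion on the formula's structure."""
    tag = phi[0]
    if tag == "tt":
        return True
    if tag == "ff":
        return False
    if tag == "and":
        return sat(e, phi[1], lts) and sat(e, phi[2], lts)
    if tag == "or":
        return sat(e, phi[1], lts) or sat(e, phi[2], lts)
    succs = [f for a, f in lts.get(e, set()) if a in phi[1]]  # K-derivatives
    if tag == "box":                                          # [K]phi
        return all(sat(f, phi[2], lts) for f in succs)
    if tag == "dia":                                          # <K>phi
        return any(sat(f, phi[2], lts) for f in succs)
    raise ValueError(tag)

# a toy vending machine consistent with the discussion (state names assumed)
vm = {"Ven": {("1p", "Venl"), ("2p", "Venb")},
      "Venl": {("little", "Ven")}, "Venb": {("big", "Ven")}}
acts = {"1p", "2p", "little", "big"}
```

For instance the inevitability formula [2p](⟨−⟩tt ∧ [−big]ff) is encoded as nested "box"/"dia" tuples over `acts`, and the checker explores only the transitions it needs, never the whole graph.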
Actions in the modalities may contain values. For instance the register Reg5
from section 1.1 has the property ⟨¯read(5)⟩tt ∧ [{¯read(k) : k ≠ 5}]ff.
Assume that A is a universal set of actions including τ: so A = O ∪ {τ}
where O is described in section 1.3. We let −K abbreviate the set A − K, and
within modalities we write −a1, ..., an for −{a1, ..., an}. Moreover we assume
that − abbreviates the set −∅ (which is just A). Consequently a process E has
the property [−]Φ when each F in the set {E' : E -a-> E' and a ∈ A} has the
feature Φ. The modal formula [−]ff expresses deadlock or termination.
Within this modal logic we can also express immediate necessity or inevitability.
The property that only a can be performed, that it must be the next action,
is given by the formula ⟨−⟩tt ∧ [−a]ff. The conjunct ⟨−⟩tt affirms that some
action is possible while [−a]ff states that every action except a is impossible. After
2p is deposited Ven must perform big, and so Ven ⊨ [2p](⟨−⟩tt ∧ [−big]ff).
2.2 More modal logics
Process activity is delineated by the two kinds of transition relation distinguished
by the thickness of their arrows -a-> and =a=>. The latter captures the performance
of observable transitions, as =a=> permits silent activity before and after a
happens: the relation =a=> was defined (see section 1.3) in terms of -a-> and the
relation ==> indicating zero or more silent actions.
7 We assume that ∧ and ∨ have wider scope than the modalities [K], ⟨K⟩, and that
brackets are introduced to resolve any further ambiguities as to the structure of a
formula: consequently, ∧ is the main connective of the subformula ⟨tick⟩tt ∧ [tock]ff.
The modal logic of the previous section does not express observable capabilities
of processes as silent actions are not accorded a special status. To overcome
this it suffices to introduce two new modalities [[ ]] and ⟨⟨ ⟩⟩:

E ⊨ [[ ]]Φ   iff   ∀F ∈ {E' : E ==> E'}. F ⊨ Φ
E ⊨ ⟨⟨ ⟩⟩Φ   iff   ∃F ∈ {E' : E ==> E'}. F ⊨ Φ

These operators are not definable within the modal logic of the previous section.
Using them, supplementary modalities [[K]] and ⟨⟨K⟩⟩ are definable when K is a
subset of observable actions O.

[[K]]Φ def= [[ ]][K][[ ]]Φ        ⟨⟨K⟩⟩Φ def= ⟨⟨ ⟩⟩⟨K⟩⟨⟨ ⟩⟩Φ
The derived meanings of these modalities appeal to the observable transition
relations =a=> in the same way that their counterparts [K] and ⟨K⟩ appeal
to -a->. We write [[a1, ..., an]] and ⟨⟨a1, ..., an⟩⟩ instead of [[{a1, ..., an}]] and
⟨⟨{a1, ..., an}⟩⟩.
The simple modal formula ⟨⟨tick⟩⟩tt expresses the observable capability for
performing the action tick while [[tick]]ff expresses an inability to tick after
any amount of internal activity. Both clocks Cl' def= tick.Cl' + τ.0 and Cl def=
tick.Cl have the property ⟨⟨tick⟩⟩tt but Cl' may at any time silently stop
ticking, and therefore it also has the property ⟨⟨tick⟩⟩[[tick]]ff.
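The thick arrows can be computed by saturating with silent steps: =a=> is τ* a τ*. A sketch, under the same assumed dictionary encoding as before (Cl' is rendered as the state name "Cl'", its broken-down state as "0"):

```python
TAU = "tau"

def tau_closure(states, lts):
    """All processes reachable from `states` by zero or more tau steps."""
    seen, stack = set(states), list(states)
    while stack:
        e = stack.pop()
        for a, f in lts.get(e, set()):
            if a == TAU and f not in seen:
                seen.add(f)
                stack.append(f)
    return seen

def weak_step(states, a, lts):
    """{F : E ==a==> F for some E in states}, i.e. tau* a tau* (a observable)."""
    pre = tau_closure(states, lts)
    mid = {f for e in pre for b, f in lts.get(e, set()) if b == a}
    return tau_closure(mid, lts)

# Cl' = tick.Cl' + tau.0: observably it can tick, but may silently stop
clock = {"Cl'": {("tick", "Cl'"), ("tau", "0")}}
```

Here `weak_step({"Cl'"}, "tick", clock)` contains the stopped state "0", which is exactly why Cl' satisfies ⟨⟨tick⟩⟩[[tick]]ff.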
Example 1 Crossing satisfies [[car]][[train]](⟨⟨ccross⟩⟩tt ∨ ⟨⟨tcross⟩⟩tt), but
it fails to satisfy [[car]][[train]](⟨⟨tcross⟩⟩tt ∧ ⟨⟨ccross⟩⟩tt). □
Modal formulas can also be used to express notions that are basic to the
theory of CSP [40]. A process may perform the observable trace a1 ... an provided
that it has the property ⟨⟨a1⟩⟩...⟨⟨an⟩⟩tt. The formula [[K]]ff expresses that the
observable set of actions K is a refusal. The pair (a1 ... an, K) is an observable
failure for a process if it has the property ⟨⟨a1⟩⟩...⟨⟨an⟩⟩([τ]ff ∧ [[K]]ff): a process
satisfies this formula if it can perform the observable trace a1 ... an and become
stable (unable to perform τ) and also be unable to perform any observable action
in K.
Recall that O is a universal set of observable actions (which does not contain
τ). Let [[−K]] abbreviate [[O − K]] and similarly for ⟨⟨−K⟩⟩. Moreover let [[−]]
and ⟨⟨−⟩⟩ be abbreviations for [[O]] and ⟨⟨O⟩⟩. This means that the modal formula
[[−]]ff expresses an inability to perform an observable action: so the process Div
has this property when Div def= τ.Div.
The modal logic of the previous section permits the expression of immediate
necessity or inevitability. The property that a process must perform a next is
given by the formula ⟨−⟩tt ∧ [−a]ff. However the similarly structured formula
⟨⟨−⟩⟩tt ∧ [[−a]]ff does not preclude the possibility that the observable action a
becomes excluded through silent activity. For instance both clocks Cl and Cl'
earlier have the feature ⟨⟨−⟩⟩tt ∧ [[−tick]]ff: Cl' has this property because it is
able to perform an observable transition (so satisfies ⟨⟨−⟩⟩tt) and is unable to
perform any observable action other than tick (and so satisfies [[−tick]]ff). But
Cl' may silently break down and therefore be unable to tick. This shortcoming
can be surmounted with the strengthened formula [[ ]]⟨⟨−⟩⟩tt ∧ [[−tick]]ff. Now
Cl' ⊭ [[ ]]⟨⟨−⟩⟩tt because of the silent transition Cl' -τ-> 0. But there is still a
question mark over this formula as an expression of necessity. Let Cl1 be a further
clock, Cl1 def= tick.Cl1 + τ.Cl1, which satisfies [[−tick]]ff as its only observable
transition is that of tick. However it also has the property [[ ]]⟨⟨−⟩⟩tt as the
set {F : Cl1 ==> F} contains the sole element Cl1 which obeys ⟨⟨−⟩⟩tt. But
interpreting this as the inevitability that tick must be performed fails to take
into account the possibility that Cl1 perpetually engages in internal activity and
therefore never ticks.
A process diverges if it is able to perform internal actions for ever: we write
E↑ if E diverges following [36], and E↓ if E converges (fails to diverge). So Cl1↑
whereas Cl'↓. Convergence and divergence are not definable in the modal logics
introduced so far. Consequently we introduce another modality [↓], similar to
[[ ]], except it contains information about convergence:

E ⊨ [↓]Φ   iff   E↓ and ∀F ∈ {E' : E ==> E'}. F ⊨ Φ

Other mixtures are also definable: [[K↓]] as [[ ]][K][↓]; and [↓K↓] as [↓][K][↓].
Thus the stronger necessity [↓]⟨⟨−⟩⟩tt ∧ [[−tick]]ff excludes divergence.
Features of processes concerning divergence, appealed to in definitions of
behavioural refinement [36], can also be expressed as modal formulas in this
extended modal logic. For instance the strong property that there cannot be
divergence throughout the observable trace a1 ... an is given as [↓a1↓]...[↓an↓]tt.
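For a finite-state process, divergence E↑ amounts to a τ-cycle being reachable by τ steps, which a simple depth-first search finds. A sketch (assumed encodings as before; the simple DFS is adequate for small examples, not an optimised algorithm):

```python
TAU = "tau"

def diverges(e, lts):
    """E diverges (E up-arrow) iff an infinite tau path exists; in a
    finite-state system that is a tau cycle reachable by tau steps."""
    path = []
    def dfs(s):
        if s in path:
            return True                      # tau cycle: an infinite tau run
        path.append(s)
        found = any(a == TAU and dfs(t) for a, t in lts.get(s, set()))
        path.pop()
        return found
    return dfs(e)

# Cl1 = tick.Cl1 + tau.Cl1 diverges; Cl' = tick.Cl' + tau.0 converges
clocks = {"Cl1": {("tick", "Cl1"), ("tau", "Cl1")},
          "Cl'": {("tick", "Cl'"), ("tau", "0")}}
```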
Interesting work has been done on this topic, mostly however with respect
to the behaviour of processes as determined by the single thin transitions -a->.
Candidates for basic distinguishable features include traces and completed traces
(given respectively by formulas of the form ⟨a1⟩...⟨an⟩tt and ⟨a1⟩...⟨an⟩[−]ff).
Elegant results are contained within [13, 35, 34] which isolate congruences for
traces and completed traces. These cover very general families of process operators
whose behavioural meaning is governed by the permissible format of
their transition rules. The resulting congruences are independently definable as
equivalences9. Results for observable behaviour include those for the failures
model [40] which takes the notion of observable failure as basic. Related results
are contained in the testing framework of [36] where processes are tested as to
what they may and must do.
Example 1 Consider the following three vending machines:

Ven1 def= 1p.1p.(tea.Ven1 + coffee.Ven1)
Ven2 def= 1p.(1p.tea.Ven2 + 1p.coffee.Ven2)
Ven3 def= 1p.1p.tea.Ven3 + 1p.1p.coffee.Ven3

which have the same (observable) traces. Assume a user, Use def= 1p.1p.tea.¯ok.0,
who only wishes to drink a single tea by offering coins and having done so
expresses visible satisfaction as the action ¯ok. For each of the three vending
machines we can build the process (Veni | Use)\K where K is {1p, tea, coffee}.
When i = 1 there is just one completed trace τ τ τ ¯ok.
The two processes (Rep(Veni | Use))\K, i = 2 and i = 3, have different completed
traces. When i = 2 it has the completed trace τ τ τ τ ¯ok
2.4 Interactive games and bisimulations
Equivalences for CCS processes begin with the simple idea that an observer can
repeatedly interact with a process by choosing an available transition from it.
Equivalence of processes is then defined in terms of the ability for these observers
to match their selections so that they can proceed with further corresponding
choices. The crucial difference with the approach of the previous section is that
an observer can choose a particular transition. Such choices cannot be directly
simulated in terms of process activity10. These equivalences are defined in terms
of bisimulation relations which capture precisely what it is for observers to match
their selections. However we proceed with an alternative exposition using games
which offer a powerful image for interaction.
A pair of processes (E0, F0) is an interactive game to be played by two participants,
players I and II, who are the observers who make choices of transitions.
A play of the game (E0, F0) is a finite or infinite length sequence of the form
(E0, F0) ... (Ei, Fi) .... Player I attempts to show that a conspicuous difference
between the initial processes is detectable whereas player II wishes to establish
that they cannot be distinguished. Suppose an initial part of a play is
(E0, F0) ... (Ej, Fj). The next pair (Ej+1, Fj+1) is determined by one of the
following two moves:

⟨ ⟩ move: player I chooses a transition Ej -a-> Ej+1 and then player II chooses a transition with the same label Fj -a-> Fj+1.
[ ] move: player I chooses a transition Fj -a-> Fj+1 and then player II chooses a transition with the same label Ej -a-> Ej+1.
Player II knows which transition player I chose, and more generally both players
know all the previous moves. The play then may continue with further moves.
The next move in a game play is therefore very straightforward. The important
issue is when a player is said to win a play of a game. A game is played
until one of the players wins, where the winning circumstances are described in
figure 13. If a player is unable to make a move then the other player wins that
play of the game. Player II loses when condition 1' holds, when no corresponding
transition is available in response to a move from player I, which happens in the
10 For instance if E -a-> E1 and E -a-> E2 then the observer is able to choose either of
these, but there is not a "testing" process ¯a.F which can guarantee this choice in the
context (¯a.F | E)\{a}: the two results (F | E1)\{a} and (F | E2)\{a} are equally
likely after the synchronization on a.
1. The play is (E0, F0) ... (En, Fn) and there are no available transitions from En or from Fn.
1'. The play is (E0, F0) ... (En, Fn) and for some a either En -a-> E' and not(∃F'. Fn -a-> F') or Fn -a-> F' and not(∃E'. En -a-> E').
2. The play is (E0, F0) ... (En, Fn) and for some i < n, Ei = En and Fi = Fn.
3. The play has infinite length.

Fig. 13. Winning conditions: player II wins in circumstances 1, 2 and 3, and player I wins in circumstance 1'.
configuration (En, Fn) when one of these processes is able to perform an initial
action which the other cannot, and so a manifest difference is detectable. Player
I loses in the configuration (En, Fn) when both these processes have terminated
or are deadlocked as described in 1. Player II also wins in the other two circumstances,
first when there is a repeat configuration, when (En, Fn) has already
occurred earlier in the play, as in 2, and second when the play has infinite length.
In both these cases player I has been unable to expose a difference between the
initial processes. Condition 2 is not necessary, as it is subsumed by 3: however
we include it because then an infinite length play is only possible when at least
one of the initial processes is infinite state.
Example 1 Each play of the game (Cl, Cl2), when Cl def= tick.Cl and Cl2 def=
tick.tick.Cl2, has the form (Cl, Cl2)(Cl, tick.Cl2)(Cl, Cl2), which ensures that
player I loses because of the repeat configuration. It does not matter whether
player I chooses the ⟨ ⟩ or the [ ] move, as player II is guaranteed to win. In the
case of the game (Cl, Cl5) when Cl5 def= tick.Cl5 + tick.0 there are plays that
player I wins and plays that player II wins. If player I initially moves Cl5 -tick-> 0
then after her opponent makes the move Cl -tick-> Cl, the resulting configuration
(Cl, 0) obeys condition 1' of figure 13, and so player I wins. Instead if player I
chooses the other [ ] move Cl5 -tick-> Cl5 then player II wins immediately with
the transition Cl -tick-> Cl. However player I has the power to win any play of
(Cl, Cl5) by initially choosing the transition Cl5 -tick-> 0. Similarly player II is
able to win any play of (Cl5, Cl5) just by copying any move that player I makes
in the other process. □
A strategy for a player is a set of rules which tells her how to move depending
on what has happened previously in the play. A player uses the strategy σ in a
play if all her moves in the play obey the rules in σ. The strategy σ is a winning
strategy if the player wins every play in which she uses σ. In example 1 above,
player I's winning strategy for the game (Cl, Cl5) consists of the single rule: if the
current game configuration is (Cl, Cl5) choose the transition Cl5 -tick-> 0. In the
example game (Cl5, Cl5), player II's winning strategy contains the two rules: if
the current game configuration is (Cl5, Cl5) and player I has chosen Cl5 -tick-> Cl5
then choose Cl5 -tick-> Cl5, and if it is (Cl5, Cl5) and player I has chosen Cl5 -tick-> 0
then choose Cl5 -tick-> 0. It turns out that for any pair of processes one of the
players has a winning strategy, and that this strategy is history free in the sense
that the rules do not need to appeal to moves that occurred before the current
game configuration.
Example 2 The different choice points in the vending machines Ven2 and Ven3
of the previous section ensure that player I has a winning strategy for the game
(Ven2, Ven3). □
Example 3 Player II has a winning strategy for the game (B, B') where these
processes are B def= in.B | out.B and B' def= in.B' + out.B'. In this case any play
has to be of infinite length, and player II's strategy resides with the fact that
she is always able to respond to any move by player I. □
When player II has a winning strategy for the game (E, F) we say that
process E is game equivalent to process F. In this circumstance player II is able
to win any play irrespective of the moves her opponent makes, and so player I
is unable to distinguish between E and F.
Player II can always match player I's moves when two processes E and F
are game equivalent: by the ⟨ ⟩ move if E -a-> E' then there is a corresponding
transition F -a-> F' and E' and F' are also game equivalent, and by the [ ]
move if F -a-> F' then there is also a corresponding transition E -a-> E' with E'
and F' game equivalent. This is precisely the criterion for being a bisimulation
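For finite-state processes this matching criterion can be computed as a greatest fixed point: start from all pairs of states and delete those where some move cannot be matched. The sketch below is illustrative (quartic-time, not the efficient partition-refinement algorithms used by verification tools; the state names are assumptions):

```python
def bisimilar(e, f, lts):
    """Greatest fixed point: repeatedly delete pairs where some transition
    of one side has no matching transition on the other side."""
    states = set(lts) | {t for moves in lts.values() for _, t in moves}
    rel = {(p, q) for p in states for q in states}
    def matched(p, q, r):
        ps, qs = lts.get(p, set()), lts.get(q, set())
        return (all(any(b == a and (p2, q2) in r for b, q2 in qs)
                    for a, p2 in ps) and
                all(any(b == a and (p2, q2) in r for b, p2 in ps)
                    for a, q2 in qs))
    changed = True
    while changed:
        keep = {pair for pair in rel if matched(*pair, rel)}
        changed = keep != rel
        rel = keep
    return (e, f) in rel

# the clocks of example 1: Cl ~ Cl2, but Cl is not bisimilar to Cl5
clocks = {"Cl": {("tick", "Cl")},
          "Cl2": {("tick", "Cl2b")}, "Cl2b": {("tick", "Cl2")},
          "Cl5": {("tick", "Cl5"), ("tick", "STOP")}}
```

The deletion of the pair (Cl, STOP) in the first round forces the deletion of (Cl, Cl5) in the next, mirroring player I's winning strategy of moving Cl5 -tick-> 0.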
C0 = {Cnt | 0^j : j ≥ 0}
Ci+1 = {E | 0^j | down.0 | 0^k : E ∈ Ci and j ≥ 0 and k ≥ 0}
Example 2 Consider the family of clocks Cli+1 def= tick.Cli for i ≥ 0 and the
clock Cl def= tick.Cl. Let E be the process Σ{Cli : i > 0}, and let F be E + Cl.
The processes E and F are not bisimilar because the transition F -tick-> Cl would
have to be matched by E -tick-> Clj for some j ≥ 0, and clearly Cl ≁ Clj. On the
other hand E and F satisfy the same modal properties. □
Proposition 3 E ∼ F iff E =M∞ F.
1. The play is (E0, F0) ... (En, Fn) and for some i < n, Ei = En and Fi = Fn.
1'. The play is (E0, F0) ... (En, Fn) and for some a either En =a=> E' and not(∃F'. Fn =a=> F') or Fn =a=> F' and not(∃E'. En =a=> E').
2. The play has infinite length.
{(Protocol, Cop)} ∪
{((Send1(m) | Medium | ¯ok.Receiver)\J, Cop) : m ∈ D} ∪
{((¯sm(m).Send1(m) | Medium | Receiver)\J, ¯out(m).Cop) : m ∈ D} ∪
{((Send1(m) | Med1(m) | Receiver)\J, ¯out(m).Cop) : m ∈ D} ∪
{((Send1(m) | Medium | ¯out(m).¯ok.Receiver)\J, ¯out(m).Cop) : m ∈ D} ∪
{((Send1(m) | ¯ms.Medium | Receiver)\J, ¯out(m).Cop) : m ∈ D}

is an observable bisimulation. □
Definition 2 E ≈c F iff
1. E ≈ F,
2. if E -τ-> E' then F -τ-> F1 ==> F' and E' ≈ F' for some F1 and F', and
3. if F -τ-> F' then E -τ-> E1 ==> E' and E' ≈ F' for some E1 and E'.

When E and F are stable (as is the case with Protocol and Cop of example 1)
E ≈ F implies E ≈c F.
There is a finer observable bisimulation equivalence called branching bisimulation
equivalence which also has a logical characterization [27]: we consider it
briefly in section 3.2. Observable bisimilarity and its congruence are not sensitive
to divergence. So they do not preserve the strong necessity properties discussed
at the end of section 2.2. However it is possible to define equivalences that take
divergence into account [36, 40, 71].
A direct proof that two processes are (observably) bisimilar is to exhibit the
appropriate bisimulation relation which contains them. Example 7 of section 2.4
and example 1 of section 2.6 exemplify this proof technique. In the case that
processes are finite state this can be done automatically. There is a variety of
tools which include this capability including the Edinburgh Concurrency Workbench
[25] which exploits efficient algorithms for checking bisimilarity between
finite state processes, as developed in [41].
Alternatively equivalence proofs can utilize conditional equational reasoning.
There is an assortment of algebraic, and semi-algebraic, theories of processes
depending on the equivalence and the process combinators: for details see [36,
40, 6, 52]. It is essential that the equivalence is a congruence. To give a flavour of
equational reasoning we present a proof in the equational theory for CCS that a
simplified slot machine without data values is equivalent to a streamlined process
description. The congruence is ~c which was defined in the previous section.
The following are important CCS laws which are used in the proof:
14 More generally only case 2 of Proposition 4 of section 2.4 fails for ≈.
The last four are clear from the behavioural meanings of the operators. The first
two are τ-laws, and show that we are dealing with an observable equivalence.
There is also appeal to a rule schema, called an expansion law by Milner [52],
relating concurrency and choice:
Proof rules for recursion are also needed. In the case that E does not contain
any occurrence of |, we say that P is guarded in E if all its occurrences in E
are within the scope of a prefix a. operator where a is an observable action. This
condition guarantees that the equation P = E, when the only process constant
E contains is P, has a unique solution up to ≈c: that is, if F ≈c E{F/P} and
G ≈c E{G/P} then F ≈c G. This justifies the following two conditional rules
for recursion:
- if P def= E then P = E.
- if P = E and P is guarded in E, and Q = F and Q is guarded in F, and
E{Q/P} = F, then P = F.
The slot machine SM without data values, and its succinct description SM',
appear in figure 15. We prove that SM = SM'. The idea of the proof is to first
simplify SM by showing that it is equal to an expression which does not contain
the parallel operator. The proof proceeds on SM using the expansion law and
the laws earlier for \K, 0 and τ (and the first recursion rule):

B def= bank.B1      B1 def= ¯max.left.B

SM3 ≡ (¯loss.IO | left.B | ¯left.D)\K
SM4 ≡ (¯win.IO | left.B | ¯left.D)\K
The τ-laws are used in the following chain of reasoning:

SM3 = (¯loss.IO | left.B | ¯left.D)\K
    = ¯loss.(IO | left.B | ¯left.D)\K + τ.(¯loss.IO | B | D)\K
    = ¯loss.(τ.SM + slot.(IO1 | left.B | ¯left.D)\K) + τ.¯loss.SM
    = ¯loss.(τ.SM + slot.τ.(IO1 | B | D)\K) + τ.¯loss.SM
    = ¯loss.(τ.SM + slot.(IO1 | B | D)\K) + τ.¯loss.SM
    = ¯loss.(τ.SM + SM) + τ.¯loss.SM
    = ¯loss.τ.SM + τ.¯loss.SM
    = τ.¯loss.SM
3 Temporal Properties
Modal logic as introduced in sections 2.1 and 2.2 is able to express local capabilities
and necessities of processes, such as that tick is a possible next (observable)
action or that it must happen next. However it cannot express enduring capabilities
(such as that tick is always possible) or long term inevitabilities (such
as that tick must eventually happen). These features, especially in the guise of
safety or liveness properties, have been found to be very useful when analysing
the behaviour of concurrent systems. Another abstraction from behaviour is a
run of a process, which is a finite or infinite length sequence of transitions. Runs
provide a basis for understanding longer term capabilities. Logics where properties
are primarily ascribed to runs of systems are called temporal logics. An
alternative foundation for temporal logic is to view these enduring features as
extremal solutions to recursive modal equations.
A property separates a set of processes into two disjoint subsets, those with the
property and those without it. For example ⟨tick⟩tt divides {Cl1, tock.Cl1}
into the two subsets {Cl1} and {tock.Cl1} when Cl1 def= tick.tock.Cl1. We let
‖Φ‖ℰ be the subset of the family of processes ℰ having the modal property Φ,
the set {E ∈ ℰ : E ⊨ Φ}. Consequently any modal formula Φ partitions ℰ into
‖Φ‖ℰ and ℰ − ‖Φ‖ℰ.
The set ‖Φ‖ℰ is definable directly by induction on the structure of Φ provided
that ℰ obeys a closure condition described below. First the boolean connectives:

‖tt‖ℰ = ℰ
‖ff‖ℰ = ∅
‖Φ ∧ Ψ‖ℰ = ‖Φ‖ℰ ∩ ‖Ψ‖ℰ
‖Φ ∨ Ψ‖ℰ = ‖Φ‖ℰ ∪ ‖Ψ‖ℰ

When # ∈ {[K], ⟨K⟩, [[ ]], ⟨⟨ ⟩⟩, [↓]} is a modal operator the definition of the
subset of processes with the property #Φ appeals to the process transformer
‖#‖ℰ mapping subsets of ℰ into subsets of ℰ.
The operator ‖#‖ℰ is the semantic analogue of # in the same way that ∩ is the
interpretation of ∧. In the cases of [K] and ⟨K⟩ these transformers are:

‖[K]‖ℰ(S) = {E ∈ ℰ : ∀F. if E -a-> F and a ∈ K then F ∈ S}
‖⟨K⟩‖ℰ(S) = {E ∈ ℰ : ∃F. E -a-> F and a ∈ K and F ∈ S}
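On a finite family these transformers are just predecessor operations, which the following Python sketch mirrors (illustrative encoding; the two-state clock is an assumed example):

```python
def box(K, S, lts, universe):
    """||[K]||(S): processes all of whose K-derivatives lie in S."""
    return {e for e in universe
            if all(f in S for a, f in lts.get(e, set()) if a in K)}

def dia(K, S, lts, universe):
    """||<K>||(S): processes with some K-derivative in S."""
    return {e for e in universe
            if any(a in K and f in S for a, f in lts.get(e, set()))}

# Cl1 = tick.tock.Cl1 rendered as a two-state system
fam = {"Cl1", "T"}
ticker = {"Cl1": {("tick", "T")}, "T": {("tock", "Cl1")}}
```

For instance `dia({"tick"}, {"T"}, ticker, fam)` picks out exactly the state with a tick-derivative in {T}, while `box` holds vacuously of states with no K-transitions at all.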
Proposition 3 of the previous section shows that modal formulas are not very ex-
pressive. Although able to describe immediate capabilities and necessities, they
cannot capture more global or long term features of processes. We can contrast
the local capability for ticking with the enduring capability for ticking forever,
and the urgent inevitability that tick must happen next with the lingering inev-
itability that tick eventually happens.
Another abstraction from behaviour, which throws light on this contrast
between immediate and long term, is that of a run of a process E0, which is a finite
or infinite length sequence of transitions of the form E0 -a1-> E1 -a2-> .... When a
run has finite length its final process is then unable to perform a transition. So a
run from E0 can be viewed as a computation from E0, a maximal performance
of actions.
Game or bisimulation equivalence, as defined in section 2.4, "preserves" runs
as stated by the next Proposition.

1. if E0 -a1-> E1 -a2-> ... -an-> En is a finite length run from E0 then there
is a run F0 -a1-> F1 -a2-> ... -an-> Fn from F0 such that Ei ∼ Fi for all
i : 0 ≤ i ≤ n, and
2. if E0 -a1-> E1 -a2-> ... is an infinite length run from E0 then there is an
infinite length run F0 -a1-> F1 -a2-> ... from F0 such that Ei ∼ Fi for all i.
Because E0 ∼ F0 implies F0 ∼ E0, each run from F0 also has to be matched with a run from E0. A simple consequence is that the clock Cl is not bisimilar to any clock Cl_i because Cl has a run which cannot be emulated by Cl_i.
Clearly observable bisimulation equivalence, ≈, from section 2.6 does not preserve runs in the sense of Proposition 1. A simple example is that the infinite run Div −τ→ Div −τ→ ⋯ has no correlate from τ.0, although Div ≈ τ.0 when Div def= τ.Div. We may try to weaken the matching requirement: for any run from E0 there is a corresponding run from F0 such that there is a finite or infinite partition across these runs containing equivalent processes. However this induces a finer equivalence than observable bisimulation, called branching bisimulation [33]. It should also be compared with "stuttering" equivalence as proposed within [19].
Observable bisimilarity does preserve observable runs, whose transitions are given by the thicker arrows =a⇒ and =⇒. But there is a drawback because of =⇒ transitions. The process Cl def= tick.Cl has the observable inactive run Cl =⇒ Cl =⇒ ⋯, which means that it fails to have the observable property that tick must eventually happen.
Many significant properties of systems can be understood as features of all their runs. Especially important is a classification of properties into safety and liveness, originally due to Lamport [43]. A safety property states that nothing bad ever happens whereas a liveness property expresses that something good does eventually happen. A process has a safety property just in case no run from it contains the bad feature, and it has a liveness property when each of its runs contains the good trait.
Example 2 The level crossing of figure 7 should have the crucial safety property that it is never possible for a train and a car to cross at the same time. In terms of runs this means that no run of Crossing passes through a process that can perform both tcross and ccross as next actions, and so the bad feature is the capability for performing both of these crossing actions.
Liveness and safety properties of a process concern all its runs. We can weaken them to features of some runs. A weak safety property states that something bad does not happen in some run, and a weak liveness property asserts that something good may eventually happen, that it eventually happens in some run.
E0 ⊨ [K]Φ iff for all runs E0 −a1→ E1 −a2→ ⋯ of E0, if a1 ∈ K then E1 ⊨ Φ.
E0 ⊨ ⟨K⟩Φ iff for some run E0 −a1→ E1 −a2→ ⋯ of E0, a1 ∈ K and E1 ⊨ Φ.
In this work we do not base temporal logic upon the notion of a run, although it is of course a very useful abstraction. A run is simply a subset of a transition closed set. Although the properties described in the examples above are not expressible in the modal logic of section 2, we shall find appropriate closure conditions on sets of processes which define them, by appealing to inductive definitions built out of modal logic. The idea is, for instance, that a long term capability is just a particular closure of an immediate capability.
PRE if F ∈ ℰ and E ∈ 𝒫 and E −tick→ F then E ∈ ℰ
POST if E ∈ ℰ then E −tick→ F for some F ∈ ℰ
One solution is the empty set as it trivially fulfills both conditions. When 𝒫 is {Cl}, the other subset {Cl} also obeys both conditions because of the transition Cl −tick→ Cl. In this instance both candidate solutions are successful fixed points, and they can be ordered by subset, ∅ ⊆ {Cl}. In the case that 𝒫 is generated by the more sonorous clock Cl1 def= tick.tock.Cl1 that alternately ticks and tocks, there are more candidates for solutions, but besides ∅ all the rest fail PRE or fail POST.
With respect to any transition closed set the equation Z def= ⟨tick⟩Z has both a least and a greatest solution (which may coincide) relative to the subset ordering. The general result guaranteeing this is due to Tarski and Knaster. It shows that the least solution is the intersection of all prefixed points, of all those subsets obeying PRE, and that the greatest solution is the union of all postfixed points, of all those subsets fulfilling POST. The result applies to arbitrary monotonic functions from subsets of 𝒫 to subsets of 𝒫. The set 2^𝒫 is the set of all subsets of 𝒫, and the function g : 2^𝒫 → 2^𝒫 is monotonic with respect to ⊆ if ℰ ⊆ ℱ implies g(ℰ) ⊆ g(ℱ).
Proposition 1 If g : 2^𝒫 → 2^𝒫 is monotonic with respect to ⊆ then g
i. has a least fixed point with respect to ⊆ given by ∩{ℰ ⊆ 𝒫 : g(ℰ) ⊆ ℰ}, and
ii. has a greatest fixed point with respect to ⊆ given by ∪{ℰ ⊆ 𝒫 : ℰ ⊆ g(ℰ)}.
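Proposition 1 can be checked directly on a small finite powerset. The following sketch is our own; the ⟨tick⟩ transformer and the clock processes are illustrative choices. It computes the least fixed point as the intersection of the prefixed points and the greatest as the union of the postfixed points.

```python
# Knaster-Tarski on a finite powerset: lfp(g) is the intersection of the
# prefixed points (g(S) <= S), gfp(g) the union of the postfixed points
# (S <= g(S)). The function g below is the <tick> transformer for the
# clock Cl = tick.Cl and the finite clock tick.0 (illustrative data).
from itertools import combinations

P = frozenset({"Cl", "tick.0", "0"})
trans = {("Cl", "tick", "Cl"), ("tick.0", "tick", "0")}

def g(S):  # ||<tick>Z|| as a function of the value S of Z
    return frozenset(E for E in P
                     if any(G == E and a == "tick" and F in S
                            for (G, a, F) in trans))

subsets = [frozenset(c) for r in range(len(P) + 1) for c in combinations(P, r)]

lfp = frozenset(P).intersection(*[S for S in subsets if g(S) <= S])
gfp = frozenset().union(*[S for S in subsets if S <= g(S)])

assert lfp == frozenset()        # muZ.<tick>Z is unsatisfiable
assert gfp == frozenset({"Cl"})  # nuZ.<tick>Z: only Cl can tick forever
```

Enumerating all subsets is of course exponential; the iterative approximants of section 3.5 reach the same fixed points far more economically.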
We let μZ.Ψ express the property given by the least solution of Z def= Ψ, and we let νZ.Ψ express the property determined by its largest solution.
For the equation earlier, the least solution μZ.⟨tick⟩Z expresses the same property as ff: irrespective of 𝒫, the empty set obeys condition PRE. Much more stimulating is that νZ.⟨tick⟩Z expresses the long-standing capability for performing the action tick forever. Let ℰ ⊆ 𝒫 consist of all those processes E0 that have an infinite length run of the form E0 −tick→ E1 −tick→ ⋯. It is clear that ℰ obeys POST, and that it is the largest such set. As shown in section 3.1 this capability is not expressible within modal logic. More generally, νZ.⟨K⟩Z expresses a capability for performing K actions forever. Two special cases are striking: νZ.⟨−⟩Z expresses a capacity for never-ending behaviour and νZ.⟨τ⟩Z captures divergence, the ability to engage in infinite internal chatter.
A more composite equation schema is Z def= Φ ∨ ⟨K⟩Z where Φ does not contain Z. Any solution ℰ ⊆ 𝒫 obeys the corresponding closure conditions PRE and POST. Every subset ℰ fulfilling PRE must contain those processes in 𝒫 with the property Φ, and also includes those processes that fail Φ but are able to perform a K action and become a process having Φ, and so on. Therefore a process E0 has the property μZ.Φ ∨ ⟨K⟩Z if it has a finite or infinite length run of the form E0 −a1→ ⋯ −an→ En ⋯ with En ⊨ Φ for some n and where each action aj, j ≤ n, belongs to K: that is, E0 is able to perform K actions until Φ holds. The maximal solution, νZ.Φ ∨ ⟨K⟩Z, also includes the extra possibility of performing K actions forever without Φ ever becoming true.
Two general cases of μZ.Φ ∨ ⟨K⟩Z are worth noting. When K is the complete set of actions it expresses weak liveness, that Φ is eventually true in some run, and when K is the singleton set {τ} it expresses that after some silent activity Φ is true, that is ⟨⟨ ⟩⟩Φ. Recall that the modality ⟨⟨ ⟩⟩ is not definable within the modal logic of section 2.1.
Another useful composite schema (assuming again that Φ does not contain Z) is Z def= Φ ∧ ⟨K⟩Z. The least solution is of no interest as it is expressed by ff. The maximal solution over 𝒫 is the union of all subsets ℰ obeying the following condition POST:
POST if E ∈ ℰ then E ⊨ Φ and E −a→ F for some a ∈ K and F ∈ ℰ
which requires there to be a perpetual run involving only K actions and with Φ true throughout. A slight weakening is that Φ holds throughout a maximal performance of K actions, as expressed by νZ.Φ ∧ (⟨K⟩Z ∨ [K]ff), and when K is the set of all actions it expresses a weak safety property.
The complement of weak liveness is safety. A process fails μZ.Φ ∨ ⟨−⟩Z if Φ never becomes true in any run. Similarly the complement of weak safety is liveness. A process lacks νZ.Φ ∧ (⟨−⟩Z ∨ [−]ff) if in every run eventually Φ is false. Complements are expressible when negation is freely admitted into formulas with its intended meaning: ||¬Φ||𝒫 is the set of processes 𝒫 − ||Φ||𝒫. But then not every modal equation has extremal solutions; a simple instance is Z def= ¬Z, which fails the monotonicity requirement of Proposition 1. However, if we restrict the form of an equation Z def= Φ so that every free occurrence of Z in Φ lies within the scope of an even number of negations then monotonicity is guaranteed.
However, the complement of a formula is also in the logic without the explicit presence of negation. Let Φ^c be the complement of Φ, which is defined inductively as follows (assuming that Z^c = Z):
tt^c = ff                        ff^c = tt
(Φ ∧ Ψ)^c = Φ^c ∨ Ψ^c            (Φ ∨ Ψ)^c = Φ^c ∧ Ψ^c
([K]Φ)^c = ⟨K⟩Φ^c                (⟨K⟩Φ)^c = [K]Φ^c
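The inductive definition of the complement is easy to render as a recursion on a formula syntax tree. A sketch under our own tuple encoding of formulas, extended also to the fixed point binders νZ and μZ of the next section, which dualize into each other:

```python
# A sketch of the inductive complement Phi^c on a tuple-encoded formula AST.
# Encoding (ours): ('box', K, p) for [K]p, ('dia', K, p) for <K>p,
# ('nu', 'Z', p) and ('mu', 'Z', p) for the fixed points, ('var', 'Z') for Z.

def comp(phi):
    if phi == 'tt': return 'ff'
    if phi == 'ff': return 'tt'
    op = phi[0]
    if op == 'var': return phi                       # Z^c = Z
    if op == 'and': return ('or',  comp(phi[1]), comp(phi[2]))
    if op == 'or':  return ('and', comp(phi[1]), comp(phi[2]))
    if op == 'box': return ('dia', phi[1], comp(phi[2]))
    if op == 'dia': return ('box', phi[1], comp(phi[2]))
    if op == 'nu':  return ('mu',  phi[1], comp(phi[2]))
    if op == 'mu':  return ('nu',  phi[1], comp(phi[2]))

# (nuZ.<tick>Z)^c is muZ.[tick]Z, and complementing twice gives the formula back:
phi = ('nu', 'Z', ('dia', {'tick'}, ('var', 'Z')))
assert comp(phi) == ('mu', 'Z', ('box', {'tick'}, ('var', 'Z')))
assert comp(comp(phi)) == phi
```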
3.4 Modal mu-calculus
||Z||𝒱 = 𝒱(Z)
||Φ ∧ Ψ||𝒱 = ||Φ||𝒱 ∩ ||Ψ||𝒱
||Φ ∨ Ψ||𝒱 = ||Φ||𝒱 ∪ ||Ψ||𝒱
||[K]Φ||𝒱 = ||[K]|| (||Φ||𝒱)
||⟨K⟩Φ||𝒱 = ||⟨K⟩|| (||Φ||𝒱)
||νZ.Φ||𝒱 = ∪{ℰ ⊆ 𝒫 : ℰ ⊆ ||Φ||𝒱[ℰ/Z]}
||μZ.Φ||𝒱 = ∩{ℰ ⊆ 𝒫 : ||Φ||𝒱[ℰ/Z] ⊆ ℰ}
The subset of 𝒫 with the property Z is that stipulated by the valuation 𝒱. The semantic clauses for the boolean operators are as in section 3.1, except for the additional valuation component for understanding free variables. The meanings of the modal operators appeal to the transformers ||[K]|| and ||⟨K⟩|| defined in section 3.1. (The derived clauses for the boolean constants are ||tt||𝒱 = 𝒫 and ||ff||𝒱 = ∅.)
It is straightforward to show that any formula Φ determines the function λℰ ⊆ 𝒫. ||Φ||𝒱[ℰ/Z], monotonic with respect to the variable Z, the valuation 𝒱, and the transition closed set 𝒫. Hence the meanings of the fixed point formulas are instances of Proposition 1 of section 3.3: the greatest fixed point is given as the union of all postfixed points whereas the least fixed point is the intersection of all prefixed points. One consequence is that the meaning of σZ.Φ is the same as its unfolding Φ{σZ.Φ/Z}.
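The semantic clauses can be turned into a small evaluator over a finite transition closed set. In this sketch (our own encoding of formulas as tuples) fixed points are computed by the iteration justified in section 3.5, which is valid precisely because the set of processes is finite.

```python
# A sketch of an evaluator for ||phi||V over a finite transition closed set P.
# Formula encoding (ours): ('and', p, q), ('or', p, q), ('box', K, p),
# ('dia', K, p), ('nu', 'Z', p), ('mu', 'Z', p), ('var', 'Z'), 'tt', 'ff'.

def sem(phi, P, trans, V):
    if phi == 'tt': return set(P)
    if phi == 'ff': return set()
    op = phi[0]
    if op == 'var': return set(V[phi[1]])
    if op == 'and': return sem(phi[1], P, trans, V) & sem(phi[2], P, trans, V)
    if op == 'or':  return sem(phi[1], P, trans, V) | sem(phi[2], P, trans, V)
    if op == 'box':
        S = sem(phi[2], P, trans, V)
        return {E for E in P
                if all(F in S for (G, a, F) in trans if G == E and a in phi[1])}
    if op == 'dia':
        S = sem(phi[2], P, trans, V)
        return {E for E in P
                if any(F in S for (G, a, F) in trans if G == E and a in phi[1])}
    if op in ('nu', 'mu'):          # iterate from P (nu) or the empty set (mu)
        Z, body = phi[1], phi[2]
        cur = set(P) if op == 'nu' else set()
        while True:
            nxt = sem(body, P, trans, {**V, Z: cur})
            if nxt == cur:
                return cur
            cur = nxt

# nuZ.<tick>Z holds exactly of the perpetual clock Cl = tick.Cl:
P = {"Cl", "tick.0", "0"}
trans = {("Cl", "tick", "Cl"), ("tick.0", "tick", "0")}
forever = ('nu', 'Z', ('dia', {'tick'}, ('var', 'Z')))
assert sem(forever, P, trans, {}) == {"Cl"}
```

Nested and even mutually dependent fixed points need no special treatment here: each binder simply re-iterates its body under the current values of the outer variables.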
Formulas of the logic without free variables are closed under complement: this follows from the observations in the previous section. In particular (νZ.Φ)^c is μZ.Φ^c and (μZ.Φ)^c is νZ.Φ^c: for instance (νZ.μY.νX.[a]((⟨b⟩X ∧ Z) ∨ [K]Y))^c is the formula μZ.νY.μX.⟨a⟩(([b]X ∨ Z) ∧ ⟨K⟩Y). This is not true for open formulas containing free variables. For example the formula Z does not have an explicit complement. However as we employ valuations we are free to introduce the understanding that a free variable Y has the meaning of the complement of a different free variable Z.
Modal mu-calculus was originally proposed by Kozen [42] (and also see Pratt [60]), but not for its use here^16. Its roots lie with more general program logics employing extremal fixed points, developed by Park, De Bakker and De Roever, especially when formulated as relational calculi [8, 9, 56]. Kozen developed this logic as a natural extension of propositional dynamic logic. Larsen proposed that Hennessy-Milner logic with fixed points is useful for describing properties of processes [44]. Previously Clarke and Emerson used extremal fixed points on top of a temporal logic for expressing properties of concurrent systems [28].
In the case of a closed formula Φ (one without free variables), the subset ||Φ||𝒱 is independent of the particular valuation 𝒱, and so is the same as ||Φ||𝒱′ for any other valuation 𝒱′. Therefore when Φ is closed we let ||Φ|| be the subset of processes of 𝒫 with the temporal property Φ relative to an arbitrary valuation, and we also use the notation E ⊨ Φ to mean that E ∈ ||Φ||. More generally when Φ may contain free variables we write E ⊨𝒱 Φ whenever E ∈ ||Φ||𝒱.
Example 1 Entangled fixed point formulas are the most difficult to understand. Assume that D and D′ are the two processes D def= a.D′ and D′ def= b.0 + a.D, and that 𝒫 is {D, D′, 0}. Let Φ and Ψ be the following similar formulas:
Φ = νZ. μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)
Ψ = μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)
The formula Φ expresses that b is possible infinitely often throughout any infinite length run consisting wholly of a actions, and so all the processes in 𝒫 have this property. The set ∪{ℰ ⊆ 𝒫 : ℰ ⊆ ||μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)||𝒱[ℰ/Z]} is 𝒫. To show this we establish that 𝒫 ⊆ ||μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)||𝒱[𝒫/Z].
⟨⟨ ⟩⟩Φ def= μZ. Φ ∨ ⟨τ⟩Z
[[ ]]Φ def= νZ. Φ ∧ [τ]Z
Therefore the derived modalities ⟨⟨K⟩⟩, [[K]] and [[ ]] are also definable. For instance, [[K]]Φ was defined as [[ ]][K][[ ]]Φ, which is the fixed point formula νZ. [K](νY. Φ ∧ [τ]Y) ∧ [τ]Z. Observable modal mu-calculus is the sublogic when the modalities are restricted to the subset {[[ ]], ⟨⟨ ⟩⟩, [[K]], ⟨⟨K⟩⟩}, when τ ∉ K. This fixed point logic is suited for expressing observable properties of processes.
An important feature of modal mu-calculus is that it has the finite model property: if a closed formula holds of some process then there is a finite state process satisfying it. A proof of this can be found in [67].
3.5 Approximants
At first sight there is a chasm between the meaning of an extremal fixed point and techniques (other than exhaustive analysis) for actually finding the set it defines. There is however a more mechanical method, an iterative technique, due to Tarski and others, for discovering least and greatest fixed points. Let μg be the least and νg the greatest fixed point of the monotonic function g mapping subsets of 𝒫 to subsets of 𝒫.
Suppose we wish to determine the set νg, which is the union of all subsets ℰ that obey ℰ ⊆ g(ℰ). Let ν^0 g be the full set 𝒫, and let ν^{i+1} g be the set g(ν^i g). Clearly g(ν^0 g) ⊆ ν^0 g, that is ν^1 g ⊆ ν^0 g, and by monotonicity of g this implies that g(ν^1 g) ⊆ g(ν^0 g), that is ν^2 g ⊆ ν^1 g. Consequently by repeated application of g it follows that ν^{i+1} g ⊆ ν^i g for each i, and so there is a possibly decreasing sequence of sets ν^0 g ⊇ ν^1 g ⊇ ⋯ ⊇ ν^i g ⊇ ⋯. The required set νg is a subset of each member of this sequence. By definition νg ⊆ ν^0 g and therefore g^n(νg) ⊆ g^n(ν^0 g) for any n, where g^n(x) is the application of g to x n times. As νg is a fixed point g^n(νg) = νg, and g^n(ν^0 g) is the set ν^n g. If ν^i g is equal to ν^{i+1} g then ν^i g is the set νg, and therefore also ν^j g is νg for every j > i.
ν^0 g = 𝒫 = {Cl, tick.0, 0}
ν^1 g = ||⟨tick⟩Z||𝒱[ν^0 g/Z] = {Cl, tick.0}
ν^2 g = ||⟨tick⟩Z||𝒱[ν^1 g/Z] = {Cl}
ν^3 g = ||⟨tick⟩Z||𝒱[ν^2 g/Z] = {Cl}
Stabilization occurs at the stage ν^2 g as this set is the same as ν^3 g, and consequently is the same as ν^n g for all n ≥ 2. Consequently νg is the singleton set {Cl}. □
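The iteration just performed can be replayed mechanically. A minimal sketch, assuming the same three processes and our representation of transitions as triples:

```python
# Compute nu g for g = ||<tick>Z|| over P = {Cl, tick.0, 0} by repeated
# application of g starting from the full set (illustrative encoding).

P = {"Cl", "tick.0", "0"}
trans = {("Cl", "tick", "Cl"), ("tick.0", "tick", "0")}

def g(S):
    return {E for E in P
            if any(G == E and a == "tick" and F in S for (G, a, F) in trans)}

approx = [set(P)]            # nu^0 g = P
while True:
    nxt = g(approx[-1])      # nu^{i+1} g = g(nu^i g)
    if nxt == approx[-1]:
        break
    approx.append(nxt)

assert approx == [{"Cl", "tick.0", "0"}, {"Cl", "tick.0"}, {"Cl"}]
assert approx[-1] == {"Cl"}  # stabilization at nu^2 g
```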
When 𝒫 is not a finite set of processes, we can still guarantee that νg is reachable iteratively by invoking ordinals as indices. Ordinals can be ordered as follows:
0, 1, …, ω, ω + 1, …, ω + ω, ω + ω + 1, …
Here ω is the initial limit ordinal (one without an immediate predecessor) while ω + 1 is its successor^17. Assume that α, β and λ range over ordinals. The set ν^{α+1} g is defined as g(ν^α g), and ν^λ g when λ is a limit ordinal is ∩{ν^α g : α < λ}. Therefore there is the possibly decreasing sequence
ν^0 g ⊇ ⋯ ⊇ ν^α g ⊇ ν^{α+1} g ⊇ ⋯
The situation for the least fixed point μg is dual. The required set is the intersection of all prefixed points, of all subsets ℰ ⊆ 𝒫 with the feature that g(ℰ) ⊆ ℰ. Assume that μ^0 g is the empty set, and that μ^{i+1} g is the set g(μ^i g). Therefore there is the possibly increasing sequence of sets μ^0 g ⊆ μ^1 g ⊆ ⋯ ⊆ μ^i g ⊆ ⋯ and μg is a superset of each of the sets μ^i g. Again if μ^i g is equal to its successor μ^{i+1} g then μ^i g is the required fixed point μg. An iterative method for finding μg is to construct the sets μ^i g starting with μ^0 g until it is the same as its successor. When 𝒫 is finite and consists of n processes this iteration has to terminate at, or before, μ^n g.
Example 3 Let g be λℰ ⊆ 𝒫. ||[tick]ff ∨ ⟨−⟩Z||𝒱[ℰ/Z] when 𝒫 is as in example 1.
μ^0 g = ∅
μ^1 g = ||[tick]ff ∨ ⟨−⟩Z||𝒱[μ^0 g/Z] = {0}
μ^2 g = ||[tick]ff ∨ ⟨−⟩Z||𝒱[μ^1 g/Z] = {tick.0, 0}
μ^3 g = ||[tick]ff ∨ ⟨−⟩Z||𝒱[μ^2 g/Z] = {tick.0, 0}
Stabilization occurs at μ^2 g which is the required fixed point. Notice that if we consider νg instead then we obtain the following different set.
ν^0 g = 𝒫 = {Cl, tick.0, 0}
ν^1 g = ||[tick]ff ∨ ⟨−⟩Z||𝒱[ν^0 g/Z] = 𝒫
This stabilizes at the initial point. □
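Example 3's least fixed point computation, rendered as a sketch with the same illustrative encoding of processes and transitions as before; the helper g mirrors ||[tick]ff ∨ ⟨−⟩Z||:

```python
# The least fixed point of g = ||[tick]ff v <->Z|| over P = {Cl, tick.0, 0},
# iterated from the empty set (illustrative encoding).

P = {"Cl", "tick.0", "0"}
trans = {("Cl", "tick", "Cl"), ("tick.0", "tick", "0")}

def g(S):
    no_tick = {E for E in P
               if not any(G == E and a == "tick" for (G, a, F) in trans)}  # [tick]ff
    some_into = {E for E in P
                 if any(G == E and F in S for (G, a, F) in trans)}         # <->Z
    return no_tick | some_into

approx = [set()]             # mu^0 g = empty set
while True:
    nxt = g(approx[-1])      # mu^{i+1} g = g(mu^i g)
    if nxt == approx[-1]:
        break
    approx.append(nxt)

assert approx == [set(), {"0"}, {"tick.0", "0"}]  # stabilization at mu^2 g
```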
If 𝒫 is not a finite set then again we may need to invoke larger ordinals as indices. The set μ^{α+1} g is g(μ^α g), and μ^λ g is the union set ∪{μ^α g : α < λ} when λ is a limit ordinal. Therefore there is the possibly increasing sequence
μ^0 g ⊆ ⋯ ⊆ μ^α g ⊆ μ^{α+1} g ⊆ ⋯
The set μg is a superset of each member of this sequence, and also occurs within it, and the first such time is not when the ordinal is a limit.
Example 4 Consider the following clock Cl′:
Cl′ def= Σ{Cl_i : i ≥ 0}
Cl_{i+1} def= tick.Cl_i
Cl′ describes an arbitrary clock which will eventually break down. Let 𝒫 be the set {Cl′} ∪ {Cl_i : i ≥ 0}. As all behaviour is finite, each process in 𝒫 has the property μZ.[tick]Z. Let g be the function λℰ ⊆ 𝒫. ||[tick]Z||𝒱[ℰ/Z].
μ^0 g = ∅
μ^1 g = ||[tick]Z||𝒱[μ^0 g/Z] = {Cl_j : j < 1}
μ^{i+1} g = ||[tick]Z||𝒱[μ^i g/Z] = {Cl_j : j < i + 1}
So the initial limit point μ^ω g is ∪{μ^i g : i < ω}, which is {Cl_j : j ≥ 0}. At the next iteration the required fixed point is reached, as μ^{ω+1} g is ||[tick]Z||𝒱[μ^ω g/Z] which is 𝒫, and moreover μ^α g = 𝒫 for all α ≥ ω + 1. □
The sets σ^α g are approximants for σg in that they converge towards it. Each ν^α g approximates νg from above, whereas each μ^α g approximates μg from below. In this way an extremal fixed point is the limit of a sequence of approximants.
We now provide a more syntactic characterization of these fixed point sets in the extended modal logic M∞ of section 2.5. If g is λℰ ⊆ 𝒫. ||Φ||𝒱[ℰ/Z] then νg is ||νZ.Φ||𝒱 and μg is ||μZ.Φ||𝒱 (both with respect to 𝒫). The initial approximant ν^0 g is just ||tt||𝒱 and μ^0 g is ||ff||𝒱. Therefore ν^1 g is ||Φ||𝒱[ν^0 g/Z], which is the set ||Φ{tt/Z}||𝒱; similarly μ^1 g is ||Φ{ff/Z}||𝒱. For each ordinal α we define σZ^α.Φ as a formula of M∞. As before let λ be a limit ordinal:
νZ^0.Φ = tt        μZ^0.Φ = ff
νZ^{α+1}.Φ = Φ{νZ^α.Φ/Z}        μZ^{α+1}.Φ = Φ{μZ^α.Φ/Z}
νZ^λ.Φ = ⋀{νZ^α.Φ : α < λ}        μZ^λ.Φ = ⋁{μZ^α.Φ : α < λ}
For the formulas νZ. Φ ∧ [τ]Z and μZ. Φ ∧ [τ]Z these approximants unfold as follows^19:
νZ^0 = tt        μZ^0 = ff
νZ^1 = Φ ∧ [τ]tt = Φ        μZ^1 = Φ ∧ [τ]ff
νZ^2 = Φ ∧ [τ]Φ        μZ^2 = Φ ∧ [τ](Φ ∧ [τ]ff)
νZ^i = Φ ∧ [τ](Φ ∧ [τ](Φ ∧ ⋯ [τ]Φ ⋯))
μZ^i = Φ ∧ [τ](Φ ∧ [τ](Φ ∧ ⋯ [τ](Φ ∧ [τ]ff) ⋯))
The approximant μZ^i carries the extra demand that there cannot be a sequence of silent actions of length i. Hence μZ. Φ ∧ [τ]Z requires all immediate τ behaviour to eventually peter out. □
^19 It is assumed that Z is not free in Φ.
Example 1 The vending machine Ven has the property νZ. [2p, 1p]Ψ ∧ [−]Z, when Ψ is μY. ⟨−⟩tt ∧ [−{collectb, collectl}]Y. Let 𝒯 be the transition closed set {Ven, Venb, Venl, collectb.Ven, collectl.Ven}. First using approximants we evaluate the embedded fixed point μ, and we abbreviate its ith approximant to μY^i:
μY^0 = ∅
μY^1 = ||⟨−⟩tt ∧ [−{collectb, collectl}]Y||𝒱[μY^0/Y]
     = {collectb.Ven, collectl.Ven}
μY^2 = ||⟨−⟩tt ∧ [−{collectb, collectl}]Y||𝒱[μY^1/Y]
     = {Venb, Venl, collectb.Ven, collectl.Ven}
μY^3 = ||⟨−⟩tt ∧ [−{collectb, collectl}]Y||𝒱[μY^2/Y]
     = 𝒯
Next the outermost fixed point is evaluated, given that the meaning of μ is 𝒯. We abbreviate its ith approximant to νZ^i:
νZ^0 = 𝒯
νZ^1 = ||[2p, 1p]Ψ ∧ [−]Z||𝒱[νZ^0/Z]
     = 𝒯
Here the embedded fixed point can be evaluated independently of the outermost fixed point. □
Example 1 illustrates how the iterative technique works for formulas with multiple fixed points that are independent of each other. In abstract terms, the formula of example 1 has the form νZ. Φ(Z, μY. Ψ(Y)) where the notation makes explicit which variables can be free in subformulas: here Z does not occur free within the subformula μY. Ψ(Y) but may occur within Φ(Z, μY. Ψ(Y)). Consequently when evaluating the outermost fixed point we have that:
νZ^0 = 𝒫
νZ^1 = ||Φ(Z, μY. Ψ(Y))||𝒱[νZ^0/Z]
⋮
Throughout these approximants the meaning of the subformula μY. Ψ(Y) is invariant because it does not contain Z free: consequently ||μY. Ψ(Y)||𝒱[νZ^α/Z] is the same set as ||μY. Ψ(Y)||𝒱[νZ^β/Z] for any ordinals α and β.
Example 2 Consider again the entangled formulas Φ and Ψ of example 1 of section 3.4, when 𝒫 is {D, D′, 0}. First Φ:
νZ^0 = 𝒫
νZ^1 = ||μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)||𝒱[νZ^0/Z]
  μY^00 = ∅
  μY^01 = ||[a]((⟨b⟩tt ∧ Z) ∨ Y)||(𝒱[νZ^0/Z])[μY^00/Y] = {0, D}
  μY^02 = ||[a]((⟨b⟩tt ∧ Z) ∨ Y)||(𝒱[νZ^0/Z])[μY^01/Y] = 𝒫
So νZ^1 = 𝒫
Next Ψ:
μY^0 = ∅
μY^1 = ||νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)||𝒱[μY^0/Y]
  νZ^00 = 𝒫
  νZ^01 = ||[a]((⟨b⟩tt ∨ Y) ∧ Z)||(𝒱[μY^0/Y])[νZ^00/Z] = {0, D}
  νZ^02 = ||[a]((⟨b⟩tt ∨ Y) ∧ Z)||(𝒱[μY^0/Y])[νZ^01/Z] = {0}
  νZ^03 = ||[a]((⟨b⟩tt ∨ Y) ∧ Z)||(𝒱[μY^0/Y])[νZ^02/Z] = {0}
So μY^1 = {0}
μY^2 = ||νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)||𝒱[μY^1/Y]
  νZ^10 = 𝒫
  νZ^11 = ||[a]((⟨b⟩tt ∨ Y) ∧ Z)||(𝒱[μY^1/Y])[νZ^10/Z] = {0, D}
  νZ^12 = ||[a]((⟨b⟩tt ∨ Y) ∧ Z)||(𝒱[μY^1/Y])[νZ^11/Z] = {0}
  νZ^13 = ||[a]((⟨b⟩tt ∨ Y) ∧ Z)||(𝒱[μY^1/Y])[νZ^12/Z] = {0}
So μY^2 = {0}
Here we need to evaluate the innermost fixed point with respect to more than one outermost approximant. □
Example 2 illustrates dependency of fixed points. The first formula has the shape νZ. Φ(Z, μY. Ψ(Z, Y)) where Z is free in the innermost fixed point. Evaluation of such a formula using approximants takes the form:
νZ^0 = 𝒫
νZ^1 = ||Φ(Z, μY. Ψ(Z, Y))||𝒱[νZ^0/Z]
  μY^00 = ∅
  μY^01 = ||Ψ(Z, Y)||(𝒱[νZ^0/Z])[μY^00/Y]
  ⋮
The meaning of the subformula μY. Ψ(Z, Y) may vary according to the interpretation of Z.
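The shape of this dependent evaluation can be made concrete on the entangled formula Φ = νZ.μY.[a]((⟨b⟩tt ∧ Z) ∨ Y) of example 1 of section 3.4. In this sketch (our encoding; the processes D, D′, 0 are as in that example) every outer ν-approximant triggers a complete inner μ-iteration whose value depends on the current Z:

```python
# Nested iteration for Phi = nuZ. muY. [a]((<b>tt and Z) or Y)
# over P = {D, D', 0} with D -a-> D', D' -b-> 0, D' -a-> D.

P = {"D", "D'", "0"}
trans = {("D", "a", "D'"), ("D'", "b", "0"), ("D'", "a", "D")}

def box_a(S):  # [a]S: all a-successors lie in S (vacuously true for 0)
    return {E for E in P if all(F in S for (G, a, F) in trans if G == E and a == "a")}

dia_b_tt = {E for E in P if any(G == E and a == "b" for (G, a, F) in trans)}  # <b>tt

Z = set(P)                               # outer nuZ^0 = P
while True:
    Y = set()                            # inner muY^0 = empty set, recomputed per Z
    while True:
        nxtY = box_a((dia_b_tt & Z) | Y)  # [a]((<b>tt and Z) or Y)
        if nxtY == Y:
            break
        Y = nxtY
    if Y == Z:
        break
    Z = Y

assert Z == {"D", "D'", "0"}  # every process in P satisfies Phi
```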
Approximants for νZ.Φ and μZ.Φ start from the sets νZ^0 = 𝒫 and μZ^0 = ∅. In principle, there will be fewer calculations if the initial approximants are closer to the required fixed points. A set νZ^0 = ℰ where 𝒫 ⊇ ℰ ⊇ ||νZ.Φ|| could be a better initial point than 𝒫. This observation can be utilized when evaluating an embedded fixed point formula whose shape is νZ. Φ(Z, νY. Ψ(Z, Y)):
νZ^0 = 𝒫
νZ^1 = ||Φ(Z, νY. Ψ(Z, Y))||𝒱[νZ^0/Z]
  νY^00 = 𝒫
  νY^01 = ||Ψ(Z, Y)||(𝒱[νZ^0/Z])[νY^00/Y]
  ⋮
logic). We may wonder if the restriction to image finite processes is still necessary for this result, given that fixed points are expressible using infinitary conjunction and disjunction, and that infinitary modal logic M∞ characterizes bisimulation equivalence exactly. Recall the example of the two clocks in section 2.5 that showed that image finiteness is essential in the case of modal logic. The two clocks have the same modal properties but they are not bisimilar. The presence of fixed points allows us to distinguish them because one of the clocks has an infinite tick capability, expressed by the formula νZ.⟨tick⟩Z, which the other lacks. The following more complex example due to Roope Kaivola shows the continuing need for image finiteness (or a weakened version of it). Let {Qi : i ∈ I} be the set of all finite state processes whose actions belong to {a, b}, and assuming n ∈ ℕ let P(n) def= a^n.b.P(n+1), R def= Σ{a.Qi : i ∈ I}, and P def= P(1) + R (where a^0.E is E and a^{n+1}.E is a^n.a.E). The behaviour of P(1) is:
The set ℰ^d is the largest bisimulation closed subset of ℰ, and ℰ^u is the smallest bisimulation closed superset of ℰ, both with respect to 𝒫.
Lemma 2 For any subsets ℰ and ℱ of 𝒫, the sets ℰ^d and ℰ^u are bisimulation closed and ℰ^d ⊆ ℰ ⊆ ℰ^u. Moreover, if ℰ is bisimulation closed then ℰ^d = ℰ = ℰ^u, and if ℰ ⊆ ℱ then ℰ^d ⊆ ℱ^d and ℰ^u ⊆ ℱ^u.
This result tells us more than that bisimilar processes have the same properties when expressed using closed formulas. They also have the same properties when expressed by open formulas, provided that the meanings of the free variables are bisimulation closed. The proof of this result also establishes that closed formulas of observable modal mu-calculus (built using the modal operators [[K]], [[ ]], ⟨⟨K⟩⟩, and ⟨⟨ ⟩⟩) are preserved by observable bisimulation equivalence.
The first is too weak as it merely states that eventually some action in K is possible, without any guarantee that it happens. The second is too strong as it states that eventually only K actions are possible (and therefore must then happen).
Weak liveness and safety properties may also be ascribed to states or actions. However recall that weak liveness is the complement of safety and weak safety is the complement of liveness. That Φ is eventually true in some run is given by (νZ. Φ^c ∧ [−]Z)^c, which is the formula μZ. Φ ∨ ⟨−⟩Z. And that some action in K happens in some run is expressed by (νZ. [K]ff ∧ [−]Z)^c, which is the formula μZ. ⟨K⟩tt ∨ ⟨−⟩Z. Weak state safety, that Φ is true throughout some run, is expressed by (μZ. Φ^c ∨ (⟨−⟩tt ∧ [−]Z))^c, which is the formula νZ. Φ ∧ ([−]ff ∨ ⟨−⟩Z), and that there is a run where no action in K occurs is (μZ. ⟨−⟩tt ∧ [−K]Z)^c.
νY. [car](μX. νY1. (Q ∨ [−](νY2. (R ∨ X) ∧ [−]Y2)) ∧ [−]Y1) ∧ [−]Y
The property expressed here is extensional. In this case we can view Q and R as probes in the sense of [70]. □
Another class of properties is until properties. These are of the form Φ remains true until Ψ becomes true, or in terms of actions, K actions happen until a J action occurs (or a mixture of state and action). Again they can be viewed as holding of all runs, or some runs, or of a particular family of runs which obey a condition. The formula μY. Ψ ∨ (Φ ∧ ⟨−⟩tt ∧ [−]Y) expresses that Φ holds until Ψ in every run. Note here the requirement that Ψ does eventually become true. This commitment can be removed by changing fixed points. The property that in every run Φ remains true unless Ψ holds does not imply that Ψ does become true, and so is expressed as νY. Ψ ∨ (Φ ∧ [−]Y).
Sometimes we are only interested in part of the behaviour of a process. There are many ways to understand what part of a behaviour means. A simple case is when attention is restricted to a subset of the actions that a process can perform. Liveness, safety and until properties can therefore be relativized in this way. An example is the property that Φ is eventually true in any run consisting of K actions.
Cyclic properties can also be described in the logic. A simple example is that each even action is tock: if E0 −a1→ E1 −a2→ ⋯ is a finite or infinite length run then each action a_{2i} is tock. This is expressed as νZ. [−]([−tock]ff ∧ [−]Z). The clock Cl1 def= tick.tock.Cl1 has this property. It also has the tighter cyclic property that every run involves the repeated cycling of tick and tock actions, expressed as νZ. [−tick]ff ∧ [tick]([−tock]ff ∧ [−]Z)^20. These properties can also be weakened to some family of runs. Cyclic properties that allow other actions to intervene within a cycle can also be expressed. Another example is that in each run a can only happen finitely often, μX. νY. [a]X ∧ [−a]Y.
However there are also many counting properties that are not expressible in the logic. A notable case is the following property of a buffer (which is a consequence of [62]): the number of out actions never exceeds the number of in actions.
^20 This formula leaves open the possibility that a run has finite length. To preclude it one adds ⟨−⟩tt at the outer and inner level.
A very rich temporal logic has been introduced which is able to describe useful liveness, safety, cyclic and other properties of processes. The next step is to provide techniques for verification, for showing when processes have, or fail to have, these features.
To show that a process has, or fails to have, a modal property we can appeal to the inductive definition of satisfaction between individual processes and formulas, and a proof of a modal property thereby reduces to proofs of subproperties, as stipulated by the inductive definition of satisfaction. Therefore the transition graph of a process is not needed when proving modal properties.
However in the case of modal mu-calculus the satisfaction relation between processes and formulas is defined indirectly. The primary semantics of a formula is defined in terms of every process in a transition closed set which has the property. One method for determining whether E has Φ is first to present a transition closed set of processes containing E, second to calculate ||Φ||𝒱 with respect to this set, and then finally to check whether E belongs to it. When E has a small transition graph this is a reasonable technique. As a general method it is cumbersome, and not feasible for processes that determine enormous let alone infinite state transition graphs. Moreover, picking out all processes in a transition closed set which have a weak liveness, safety or cyclic property may involve considerable redundancy if the intention is to show that a particular process has it.
An alternative approach to showing that processes satisfy formulas is to appeal to their approximants as described in sections 3.5 and 3.6. A more direct definition of satisfaction is then available. However proofs will now require the use of induction over ordinals, and some care must be taken with limit ordinals. In the presence of embedded fixed points this will require the use of embedded induction. Moreover, we will then lose the simple idea that a proof of a property reduces to proofs of subproperties.
Discovering fixed point sets in general is not easy, and is therefore liable to lead to errors. Instead we would like simpler, and consequently safer, methods for checking whether temporal properties hold. Towards this end we first provide a different characterization of the satisfaction relation between a process and a formula in terms of games. It turns out that a process has a property just in case player II has a winning strategy for the game associated with this pair. Underpinning player II's successful strategy is the notion of a successful tableau. We therefore also present tableau proof systems for property checking, which were originally developed with David Walker [66] and Julian Bradfield [17].
- if Φj = Ψ1 ∧ Ψ2 then player I chooses one of the conjuncts Ψi: the process Ej+1 is Ej and Φj+1 is Ψi.
- if Φj = Ψ1 ∨ Ψ2 then player II chooses one of the disjuncts Ψi: the process Ej+1 is Ej and Φj+1 is Ψi.
- if Φj = [K]Ψ then player I chooses a transition Ej −a→ Ej+1 with a ∈ K and Φj+1 is Ψ.
- if Φj = ⟨K⟩Ψ then player II chooses a transition Ej −a→ Ej+1 with a ∈ K and Φj+1 is Ψ.
- if Φj = νZ.Ψ then player I chooses a new constant U and sets U def= νZ.Ψ: the process Ej+1 is Ej and Φj+1 is U.
- if Φj = μZ.Ψ then player II chooses a new constant U and sets U def= μZ.Ψ: the process Ej+1 is Ej and Φj+1 is U.
- if Φj = U and U def= νZ.Ψ then player I unfolds the fixed point so Φj+1 is Ψ{U/Z} and Ej+1 is Ej.
- if Φj = U and U def= μZ.Ψ then player II unfolds the fixed point so Φj+1 is Ψ{U/Z} and Ej+1 is Ej.
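The move rules can be sketched as a dispatcher that, given a configuration, reports which player chooses and the available successor configurations. The encoding is ours: formulas are tuples, and occurrences of a bound variable are written directly as the constant that will name its fixed point, so that unfolding needs no substitution.

```python
# A sketch of the next-move rules. A configuration is (E, phi); the result is
# (player, successor configurations). 'const' entries play the role of the
# constants U; defs records which fixed point each constant abbreviates.

def moves(config, trans, defs):
    E, phi = config
    op = phi[0]
    if op == 'and': return 'I',  [(E, phi[1]), (E, phi[2])]
    if op == 'or':  return 'II', [(E, phi[1]), (E, phi[2])]
    if op == 'box': return 'I',  [(F, phi[2]) for (G, a, F) in trans
                                  if G == E and a in phi[1]]
    if op == 'dia': return 'II', [(F, phi[2]) for (G, a, F) in trans
                                  if G == E and a in phi[1]]
    if op in ('nu', 'mu'):            # introduce a constant for the fixed point
        defs[phi[1]] = (op, phi[2])
        return ('I' if op == 'nu' else 'II'), [(E, ('const', phi[1]))]
    if op == 'const':                 # unfold the constant to its body
        kind, body = defs[phi[1]]
        return ('I' if kind == 'nu' else 'II'), [(E, body)]

# Play nuZ.<tick>Z against the perpetual clock Cl = tick.Cl:
trans = {("Cl", "tick", "Cl")}
defs = {}
cfg = ("Cl", ('nu', 'Z', ('dia', {'tick'}, ('const', 'Z'))))
player, nxt = moves(cfg, trans, defs)      # player I names the constant
player, nxt = moves(nxt[0], trans, defs)   # player I unfolds it
player, nxt = moves(nxt[0], trans, defs)   # player II picks a tick-transition
assert (player, nxt) == ('II', [("Cl", ('const', 'Z'))])
```

Since the play returns to the configuration (Cl, U) for the greatest fixed point constant, player II can repeat these moves forever, which is exactly her winning circumstance described below.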
[K] and ⟨K⟩, νZ.Φ and μZ.Ψ, as they complement each other. Each time the current game configuration is (E, σZ.Φ) a new constant U is introduced as an abbreviation for σZ.Φ, and at the next step this fixed point is, in effect, unfolded once as the formula becomes Φ{U/Z}^22. The point of constants is to provide a mechanism for understanding when embedded fixed points recur.
The rules for a next move are backwards sound with respect to the intentions of the players. If player I makes the move (Ej+1, Φj+1) from (Ej, Φj) and
^21 It is straightforward to reformulate their definition so that players take turns.
^22 The decision to make player I responsible for introducing and unfolding constants for maximal fixed point formulas and player II responsible for least fixed points is somewhat arbitrary, as these moves never provide real choice for either player. An alternative exposition is to appeal to a third participant, a referee who makes these moves.
Eⱼ₊₁ ⊭_V Φⱼ₊₁ then Eⱼ ⊭_V Φⱼ. In contrast, if player II makes this move and Eⱼ₊₁ ⊨_V Φⱼ₊₁ then Eⱼ ⊨_V Φⱼ. This is clear for the rules which govern boolean and modal operators. In the case of a fixed point formula this follows provided we understand the presence of a constant to be its defined equivalent. Formulas are no longer "pure" as they may contain constants. However we can recover a pure formula from an impure formula by replacing constants with their defined fixed points in reverse order of introduction: assuming that U₁ ≝ Ψ₁ … Uₙ ≝ Ψₙ is the sequence of declarations in order of introduction, the meaning of Φ is just Φ{Ψₙ/Uₙ}…{Ψ₁/U₁}. Consequently the fixed point unfolding principle, that E ⊨_V σZ.Φ iff E ⊨_V Φ{σZ.Φ/Z}, justifies the backwards soundness of the moves determined by constants.
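The recovery of a pure formula by substituting declared fixed points in reverse order of introduction can be sketched as follows. This is a toy rendering over formulas-as-strings; the constant names and declaration bodies are hypothetical, and plain string replacement assumes constant names do not overlap:

```python
def purify(formula, decls):
    """Replace constants by their declared fixed points in reverse order
    of introduction: Phi{Psi_n/U_n}...{Psi_1/U_1}.  Formulas are plain
    strings here, so constant names must not overlap (a toy assumption)."""
    for U, psi in reversed(decls):
        formula = formula.replace(U, "(" + psi + ")")
    return formula

# U1 is introduced first; U2's body may mention U1, but never vice versa.
decls = [("U1", "nuZ.[a]Z"), ("U2", "muY.(<b>Y or U1)")]
print(purify("U2", decls))   # (muY.(<b>Y or (nuZ.[a]Z)))
```

Substituting the most recently introduced constant first guarantees that any earlier constants appearing in its body are themselves replaced by the later substitutions.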
A player wins a play of a game in the circumstances depicted in figure 17. For instance, if the configuration (E, tt), or (E, Z) when E ∈ V(Z), is reached in a play then player II wins²³.

1. The play is (E₀, Φ₀)…(Eₙ, Φₙ)      1′. The play is (E₀, Φ₀)…(Eₙ, Φₙ)
   and either Φₙ = tt or                  and either Φₙ = ff or
   Φₙ = Z and Eₙ ∈ V(Z).                  Φₙ = Z and Eₙ ∉ V(Z).
3. The play is (E₀, Φ₀)…(Eₙ, Φₙ)      3′. The play is (E₀, Φ₀)…(Eₙ, Φₙ)
   and Φₙ = U and U ≝ νZ.Φ and            and Φₙ = U and U ≝ μZ.Φ and
   Eᵢ = Eₙ and Φᵢ = Φₙ for some i < n.    Eᵢ = Eₙ and Φᵢ = Φₙ for some i < n.

Fig. 17. Winning conditions: the unprimed conditions are wins for player II, the primed conditions wins for player I

More generally, as a play can have infinite length, this repeat condition for winning is generalized. Player I wins an infinite length play if there is a least fixed point constant U which is traversed infinitely often, and player II wins if there is a greatest fixed point constant U which occurs infinitely often; only one of these can happen.
Lemma 1 If (E₀, Φ₀)…(Eₙ, Φₙ)… is an infinite length game play then there is exactly one constant U such that Φⱼ = U for infinitely many j.

This lemma shows the role of constants, as they provide an exact account of when the same fixed point subformula is repeated.
As with equivalence games, a strategy for a player is a family of rules which tells her how to move depending on what has happened earlier in the play. A player uses the strategy π in a play if all her moves in the play obey the rules in π. The strategy π is winning if the player wins every play in which she uses π. Every game (E, Φ) relative to V is determined: either player I has a winning strategy or player II has a winning strategy, and this strategy is history free in that the rules do not need to appeal to moves that occurred earlier in the play. So a strategy for player I tells her how to move in the game configurations (E, Φ₁ ∧ Φ₂), (E, [K]Φ), (E, νZ.Φ) and (E, U) when U ≝ νZ.Φ, and a strategy for player II is similar, as it decides the next configuration when the current one is (E, Φ₁ ∨ Φ₂), (E, ⟨K⟩Φ), (E, μZ.Φ) and (E, U) when U ≝ μZ.Φ.
Player II has a winning strategy for (E, Φ) relative to V just in case E has the property Φ relative to V.

Theorem 1 E ⊨_V Φ iff player II has a winning strategy for the game (E, Φ) relative to V.
Theorem 1 yields an alternative account of the satisfaction relation between processes and formulas. Game playing does not require explicit calculation of fixed points, nor does it depend on induction over approximant indices. Moreover it does not require the construction of the transition graph of a process. Game playing also maintains the principle that a proof of a property reduces to subproofs of subproperties, provided that we view the unfolding of a fixed point formula as a subformula. There is another feature which could be exploited: the possibility of more sophisticated game playing where moves may also be guided by the algebraic structure of a process expression.
As an infinite length game play must traverse a particular constant infinitely often, it follows that when E is finite state a play of (E, Φ) has finite length. There is also an exponential upper bound on the number of different plays, up to renaming of constants, of such a game. Property checking of finite state processes via game playing is therefore decidable. However this is not a very time efficient method, as the length of a play may be exponential in the number of fixed point operators in a formula. In section 4.3 we provide less costly techniques based on games. For the remainder of this section we illustrate game playing.
²³ These two conditions 3 and 3′ are in fact redundant. We only include them because they guarantee that any play of (E, Φ) has finite length when E is finite state.
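Game playing for the particular formula νZ.⟨tick⟩Z used in Example 1 below can be sketched directly: player I introduces and unfolds the ν-constant U, player II must choose tick-successors, a repeat at a configuration (P, U) is a ν-repeat and a win for player II, and a stuck ⟨tick⟩U configuration is a win for player I. This is a minimal sketch for this one formula, not a general model checker; the process names and transition encoding are assumptions:

```python
def player_II_wins(proc, trans, seen=frozenset()):
    """Does player II win the game (proc, nuZ.<tick>Z)?"""
    if proc in seen:
        return True   # (proc, U) repeats: a nu-constant recurs, player II wins
    # player I unfolds U to <tick>U; player II must then pick a tick-successor
    succs = [q for (a, q) in trans.get(proc, []) if a == "tick"]
    return any(player_II_wins(q, trans, seen | {proc}) for q in succs)

cl = {"Cl": [("tick", "Cl")]}                             # Cl = tick.Cl
cl5 = {"Cl5": [("tick", "Cl5"), ("tick", "0")], "0": []}  # Cl5 = tick.Cl5 + tick.0
print(player_II_wins("Cl", cl))      # True
print(player_II_wins("Cl5", cl5))    # True: her strategy picks the Cl5 branch
print(player_II_wins("0", cl5))      # False: she is stuck at <tick>U
```

The search over `succs` plays the role of player II's strategy: she wins exactly when some choice of successors leads her back to a repeated configuration.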
Example 1 Player II has a winning strategy for the game (Cl, νZ.⟨tick⟩Z). The only possible play is (Cl, νZ.⟨tick⟩Z) (Cl, U) (Cl, ⟨tick⟩U) (Cl, U) up to renaming the constant: the winning strategy here is, in effect, the empty set of rules, as player II has to make the move (Cl, U) from the configuration (Cl, ⟨tick⟩U), and the choice of fixed point constants does not affect play. Player II also has a winning strategy for (Cl₅, νZ.⟨tick⟩Z) with respect to the slightly different clock Cl₅ ≝ tick.Cl₅ + tick.0. A winning play is almost identical to the previous game play, (Cl₅, νZ.⟨tick⟩Z) (Cl₅, U) (Cl₅, ⟨tick⟩U) (Cl₅, U): the important part of the winning strategy is the rule that if (Cl₅, ⟨tick⟩U) is the current configuration then choose (Cl₅, U) as the next move. □
Example 2 Cnt is a simple infinite state system, Cnt ≝ up.(Cnt | down.0). It has the property that it may do up forever, as the single play of (Cnt, νZ.⟨up⟩Z) is won by player II. However this play has infinite length. □
4.2 Tableaux
with n ≥ 1, and possibly with side conditions. The premise sequent E ⊢ Φ is the goal to be achieved (that E has the temporal property Φ) while the consequents are the subgoals. The rules are presented in figure 18. Again we use constants as abbreviations for fixed point formulas.

∧     E ⊢ Φ ∧ Ψ
      E ⊢ Φ    E ⊢ Ψ

∨     E ⊢ Φ ∨ Ψ        E ⊢ Φ ∨ Ψ
      E ⊢ Φ            E ⊢ Ψ

[K]   E ⊢ [K]Φ         {F : E –a→ F and a ∈ K} = {E₁, …, Eₙ}
      E₁ ⊢ Φ … Eₙ ⊢ Φ

⟨K⟩   E ⊢ ⟨K⟩Φ         E –a→ F and a ∈ K
      F ⊢ Φ

σZ    E ⊢ σZ.Φ         U ≝ σZ.Φ and U is new
      E ⊢ U

U     E ⊢ U            U ≝ σZ.Φ
      E ⊢ Φ{U/Z}

Fig. 18. Tableau rules
It is clear that nodes labelled with sequents which obey 1 or 2 are successful, as then F has the property Φ relative to V, and similarly nodes labelled with sequents fulfilling 1′ or 2′ are not true. The remaining two conditions are analogues of termination of a finite length game play through repetition, and are pictured in figure 19. It is at this point that we distinguish in the proof theory

U ≝ νZ.Φ          U ≝ μZ.Φ
F ⊢ U             F ⊢ U
  ⋮                 ⋮
F ⊢ U             F ⊢ U
Successful        Unsuccessful

Fig. 19. Successful and unsuccessful repeat sequents

between the two kinds of fixed points, as they are not differentiated by the rules earlier.
Cl ⊢ νZ.([-tick]ff ∧ ⟨-⟩tt) ∧ [-]Z
Cl ⊢ U
Cl ⊢ ([-tick]ff ∧ ⟨-⟩tt) ∧ [-]U
Cl ⊢ [-tick]ff ∧ ⟨-⟩tt        Cl ⊢ [-]U
Cl ⊢ [-tick]ff    Cl ⊢ ⟨-⟩tt    Cl ⊢ U
Cl ⊢ tt
Example 2 The vending machine Ven, from section 1.1, has the property that whenever 2p is deposited eventually a big item is collected, which is expressed by the formula Φ = νZ.[2p](μY.⟨-⟩tt ∧ [-collect_b]Y) ∧ [-]Z. A successful tableau is presented in two stages in figure 20, where Ψ abbreviates the subformula μY.⟨-⟩tt ∧ [-collect_b]Y, and cb.Ven and cl.Ven abbreviate collect_b.Ven and collect_l.Ven. Notice how similar the subtableaux T1 and T2 are. Later we examine how to amalgamate their proof trees. □
Example 3 Let D and Φ be as in example 1 of section 3.4. The resulting successful tableau for D ⊢ Φ is presented in figure 21. Notice the important requirement that a new constant is introduced when a fixed point is met. Both V and W abbreviate the same fixed point formula. □

The tableau proof system presented here is complete for finite state processes. This is a consequence of the game characterization of satisfaction.
Ven ⊢ Φ
Ven ⊢ U
Ven ⊢ [2p]Ψ ∧ [-]U
Ven ⊢ [2p]Ψ        Ven ⊢ [-]U
Ven_b ⊢ Ψ          T1    T2
Ven_b ⊢ V
  ⋮
Ven ⊢ tt

T1:
Ven_b ⊢ U
Ven_b ⊢ [2p]Ψ ∧ [-]U
Ven_b ⊢ [2p]Ψ      Ven_b ⊢ [-]U
                   cb.Ven ⊢ U
                   cb.Ven ⊢ [2p]Ψ ∧ [-]U
                   cb.Ven ⊢ [2p]Ψ    cb.Ven ⊢ [-]U
                                     Ven ⊢ U

T2:
Ven_l ⊢ U
  ⋮
Ven ⊢ U

Fig. 20. A successful tableau for Ven
D ⊢ Φ
D ⊢ U
D ⊢ μY.[a]((⟨b⟩tt ∧ U) ∨ Y)
D ⊢ V
D ⊢ [a]((⟨b⟩tt ∧ U) ∨ V)
D′ ⊢ (⟨b⟩tt ∧ U) ∨ V
D′ ⊢ ⟨b⟩tt ∧ U
D′ ⊢ ⟨b⟩tt    D′ ⊢ U
0 ⊢ tt        D′ ⊢ μY.[a]((⟨b⟩tt ∧ U) ∨ Y)
              D′ ⊢ W
              D′ ⊢ [a]((⟨b⟩tt ∧ U) ∨ W)
              D ⊢ (⟨b⟩tt ∧ U) ∨ W
              D ⊢ W
              D ⊢ [a]((⟨b⟩tt ∧ U) ∨ W)
              D′ ⊢ (⟨b⟩tt ∧ U) ∨ W
              D′ ⊢ ⟨b⟩tt ∧ U
              D′ ⊢ ⟨b⟩tt    D′ ⊢ U
              0 ⊢ tt

Fig. 21. A successful tableau for D ⊢ Φ
In this section we refine the definition of game play to provide a more efficient characterization of the satisfaction relation by reintroducing constants. We then show how this refinement affects the construction of tableaux.

Figure 16 contains the rules for the next move in a play whose initial part is (E₀, Φ₀)…(Eⱼ, Φⱼ). We now change the rules for introducing constants for fixed points, and divide each of them into two cases.
- if Φⱼ = νZ.Ψ and player I has not previously introduced a constant V ≝ νZ.Ψ then player I chooses a new constant U and sets U ≝ νZ.Ψ: the process Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is U.
- if Φⱼ = μZ.Ψ and player II has not previously introduced a constant V ≝ μZ.Ψ then player II chooses a new constant U and sets U ≝ μZ.Ψ: the process Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is U.
- if Φⱼ = νZ.Ψ and player I has previously introduced a constant V ≝ νZ.Ψ then Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is V.
- if Φⱼ = μZ.Ψ and player II has previously introduced a constant V ≝ μZ.Ψ then Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is V.

This change means that constants are reintroduced as abbreviations for the same fixed point formula. All the other moves, and who is responsible for them, remain unchanged.
We also need to change the criteria for when a player wins a play. The winning conditions for the earlier games are given in figure 17. We retain the conditions 1, 2, 1′ and 2′: for instance, if the configuration reached in a play is (F, [K]Ψ) and the set {E : F –a→ E and a ∈ K} is empty then player II wins. The other conditions 3, 4, 3′ and 4′ need to be redefined because constants are reintroduced. An infinite length play may now contain more than one constant that recurs infinitely often.
The following definition provides a useful notion that will be used in the reformulated termination conditions.

Definition 1 The constant U is active in Φ iff either U occurs in Φ, or some constant V ≝ σZ.Ψ occurs in Φ and U is active in σZ.Ψ.

The discipline of introducing constants ensures that being active is well defined. If U₁ ≝ σZ₁.Ψ₁ … Uₙ ≝ σZₙ.Ψₙ is the sequence of declarations of distinct constants in order of their initial introduction, then although Uᵢ can be active in Uⱼ when i < j it is not possible for Uⱼ to be active in Uᵢ. Activity of a constant can be extended to finite or infinite length sequences of formulas: we say that U is active throughout Φ₀ … Φₙ … if it is active in each Φᵢ.
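Definition 1 can be transcribed directly once a formula is abstracted to the set of constants occurring in it. In the sketch below the declarations are hypothetical, and each constant's body is abstracted to its set of occurring constants, which by the introduction discipline only mentions earlier constants (this is what makes the recursion terminate):

```python
def active(U, occurs, decls):
    """U is active in a formula whose constants are `occurs`: either U
    occurs there, or U is active in the body of a constant that does."""
    if U in occurs:
        return True
    return any(active(U, decls[V], decls) for V in occurs)

# bodies abstracted to their sets of constants; U1 was introduced first
decls = {"U1": set(), "U2": {"U1"}, "U3": {"U2"}}
print(active("U1", {"U3"}, decls))   # True: U1 is active in U3, transitively
print(active("U3", {"U1"}, decls))   # False: Uj is never active in Ui for i < j
```

The second call illustrates the asymmetry noted above: a later constant cannot be active in an earlier one.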
The following lemma governs the remaining winning conditions.

Lemma 1 i. If (E₀, Φ₀)…(Eₙ, Φₙ) is an initial part of a game play and Φᵢ = Φₙ for some i < n, then there is a unique constant U which is active throughout Φᵢ … Φₙ and which occurs there, Φⱼ = U for some j with i ≤ j ≤ n.
ii. If (E₀, Φ₀)…(Eₙ, Φₙ)… is an infinite length game play then there is a unique constant U which occurs infinitely often and is active throughout Φⱼ … Φₙ … for some j ≥ 0.
A repeat configuration (E, Ψ), where Ψ is any formula and not just a constant, terminates play. Who wins depends on the sequence of formulas between (and including) the identical configurations. There is exactly one constant U which is active throughout this cycle and which occurs within it: if it abbreviates a maximal fixed point formula then player II wins, and otherwise it abbreviates a least fixed point formula and player I wins. This replaces conditions 3 and 3′ of figure 17. In any infinite length play there is a unique constant which is traversed infinitely often and which is active for all but a finite prefix: if this constant abbreviates a maximal fixed point formula player II wins, and otherwise player I wins. This replaces conditions 4 and 4′ of figure 17. These revised termination conditions are pictured in figure 22. Again the conditions 3 and 3′ are redundant, and are only included because they guarantee that any play from a finite state process has finite length.
U ≝ νZ.Φ                        U ≝ μZ.Φ
(E, Ψ)                          (E, Ψ)
  ⋮  U active throughout          ⋮  U active throughout
(F, U)                          (F, U)
  ⋮                               ⋮
(E, Ψ)                          (E, Ψ)

(Eⱼ, U)                         (Eⱼ, U)
  ⋮  U active throughout          ⋮  U active throughout
(Eₖ, U)                         (Eₖ, U)

Fig. 22. Revised winning conditions: in each pair the left case (U ≝ νZ.Φ) is a win for player II and the right case (U ≝ μZ.Φ) a win for player I
A strategy is again a set of rules which dictates how a player should move, and it is winning if the player wins every play in which she uses it. For each game (E, Φ) one of the players has a winning strategy, which is again history free.

Theorem 1 E ⊨_V Φ iff player II has a winning strategy for (E, Φ) relative to V.
When E is a finite state process let |E| be the number of processes in 𝒫(E), and let |Φ| be the size of Φ (the number of symbols within it). There are at most |E| × |Φ| different configurations in any game play (up to renaming of constants). This means that any play of (E, Φ) has length at most 1 + (|E| × |Φ|).
Example 1 A case where game playing is shorter is example 3 of section 4.1. Let D –a→ D′, D′ –a→ D and D′ –b→ 0, and let Φ be μY.νZ.[a]((⟨b⟩tt ∨ Y) ∧ Z). Player I's winning strategy for (D′, Φ) is the same as in that example. The play proceeds:

(D′, Φ) (D′, U) (D′, νZ.[a]((⟨b⟩tt ∨ U) ∧ Z)) (D′, V)
(D′, [a]((⟨b⟩tt ∨ U) ∧ V)) (D, (⟨b⟩tt ∨ U) ∧ V) (D, ⟨b⟩tt ∨ U)

At this configuration player II has a choice. If she chooses the first disjunct she then loses, because there is not a b transition from D. Otherwise the play proceeds:

(D, U) (D, νZ.[a]((⟨b⟩tt ∨ U) ∧ Z)) (D, V)
(D, [a]((⟨b⟩tt ∨ U) ∧ V)) (D′, (⟨b⟩tt ∨ U) ∧ V) (D′, V)
U ≝ νZ.Φ                     U ≝ μZ.Φ
E ⊢ Ψ                        E ⊢ Ψ
  ⋮  U active throughout       ⋮  U active throughout
F ⊢ U                        F ⊢ U
  ⋮                            ⋮
E ⊢ Ψ                        E ⊢ Ψ
Successful                   Unsuccessful
Again it is at this point that we distinguish in the proof theory between the two kinds of fixed points, as they are not differentiated by the rules earlier. As before, a successful tableau for E ⊢ Φ expresses player II's winning strategy for the game (E, Φ) and is therefore a proof that E has the property Φ.

Proposition 1 If E ⊢_V Φ has a successful tableau then E ⊨_V Φ.

Example 2 The tableau proof that D satisfies Φ = νZ.μY.[a]((⟨b⟩tt ∧ Z) ∨ Y) when D –a→ D′, D′ –a→ D and D′ –b→ 0 is given in figure 23. This proof is shorter than the previous proof in figure 21. As U, a maximal fixed point constant, is active throughout the cycle from D ⊢ V to D ⊢ V and occurs within it, the tableau is successful. □

The tableau proof system presented here is again complete for finite state processes. This is again a consequence of the game characterization of satisfaction.
Proposition 2 If E is a finite state process and E ⊨_V Φ then E ⊢_V Φ has a successful tableau.

Assume that E is a finite state process. The game graph for (E, Φ) relative to V is the graph representing all possible plays of the game (E, Φ), of the previous section, modulo a canonical means of choosing constants. The vertices are pairs (F, Ψ), configurations of a possible game play, and there is a directed edge between two vertices v₁ → v₂ if a player can make as her next move v₂ from v₁. Let G(E, Φ) be the game graph for (E, Φ), and let |G(E, Φ)| be its vertex size, which we know, from the previous section, is no more than |E| × |Φ|. The complexity of property checking is in NP ∩ co-NP. This follows from the observation that given a strategy for player II or player I it is straightforward to check in polynomial time whether or not it is winning: the technique in [29] is easily convertible into game playing.
D ⊢ Φ
D ⊢ U
D ⊢ μY.[a]((⟨b⟩tt ∧ U) ∨ Y)
D ⊢ V
D ⊢ [a]((⟨b⟩tt ∧ U) ∨ V)
D′ ⊢ (⟨b⟩tt ∧ U) ∨ V
D′ ⊢ ⟨b⟩tt ∧ U
D′ ⊢ ⟨b⟩tt    D′ ⊢ U

Fig. 23. A shorter successful tableau for D ⊢ Φ
We can easily ensure that game playing must proceed to infinity by adding extra moves when a player is stuck (and removing the redundant repeat termination conditions 3 and 3′ of the previous section). The resulting game graph is then an alternating automaton: the and-vertices are the configurations from which player I proceeds and the or-vertices are those from which player II moves, and the acceptance condition is given in terms of active constants. See [10], which directly uses alternating automata for property checking.

An important open question is whether model checking modal mu-calculus formulas can be done in polynomial time (with respect to |E| × |Φ|). One direction for research is to provide a finer analysis of successful strategies, and to be able to describe optimizations of them. New insights may come from the relationship between the games developed here and other graph games where there are such descriptions.
The property checking game can be easily abstracted into the following graph game. A game is a graph with vertices {1, …, n} where each vertex i has two directed edges i → j₁ and i → j₂. Each vertex is labelled I or II. A play is an infinite path through the graph starting at vertex 1, and player I moves from vertices labelled I and player II from vertices labelled II. The winner of a play is determined by the label of the least vertex i which is traversed infinitely often: if i is labelled I then player I wins, and if it is labelled II then player II wins. A player wins the game if she has a winning strategy (which again is history free). For each game one of the players has a winning strategy. Notice that this graph game is described without mention of property checking.
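Once both players fix history free strategies, the resulting play of this graph game is a lasso (a finite stem followed by a cycle), so its winner is easy to compute. A small sketch, where the graph encoding and strategy representation (each strategy maps a vertex to 0 or 1, selecting one of its two edges) are assumptions:

```python
def play_winner(edges, labels, s1, s2):
    """edges[i] = (j1, j2); labels[i] is 'I' or 'II'; s1/s2 are the two
    players' history free strategies.  Returns the winner of the unique
    play from vertex 1, decided by the least vertex on the play's cycle."""
    path, pos, v = [], {}, 1
    while v not in pos:                   # walk until a vertex repeats
        pos[v] = len(path)
        path.append(v)
        choice = s1[v] if labels[v] == "I" else s2[v]
        v = edges[v][choice]
    least = min(path[pos[v]:])            # least vertex traversed infinitely often
    return "player " + labels[least]

edges = {1: (2, 3), 2: (1, 1), 3: (3, 3)}
labels = {1: "II", 2: "I", 3: "I"}
print(play_winner(edges, labels, {2: 0, 3: 0}, {1: 0}))  # player II: cycle 1,2
print(play_winner(edges, labels, {2: 0, 3: 0}, {1: 1}))  # player I: cycle {3}
```

In the example, player II's choice at vertex 1 decides which cycle recurs, and hence who wins.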
Simple stochastic games [26] are graph games where the vertices are labelled I, II or A (average), and where there are two special vertices I-sink and II-sink (which have no outgoing edges). As above, each I, II (and A) vertex has two outgoing edges. At an average vertex during a game play a coin is tossed to determine which of the two edges is traversed, each having probability ½. More generally one can assume that the two edges are labelled with rational probabilities, as long as their sum is 1. A game play ends when a sink vertex is reached: player II wins if it is the II-sink, and player I otherwise. The decision question is whether the probability that player II wins is greater than ½. It is not known whether this problem can be solved in polynomial time, although in [26] it is shown that it does belong to NP ∩ co-NP. In [48] a "subexponential" algorithm is presented, which works by refining optimal strategies. A polynomial time algorithm for simple stochastic games would imply that extending space bounded alternating Turing machines with randomness does not increase the class of languages that they accept.
Mark Jerrum noted that there is a reduction from the graph game to the simple stochastic game. The idea is to add the two sink vertices, and an average vertex i₁ for each vertex i for which there is an edge j → i with j > i. Each such edge j → i with j > i is changed to j → i₁. The vertex i₁ has an edge to i, and to I-sink if i is labelled I or to II-sink otherwise. With suitable rational probabilities on the edges, player II has a winning strategy for the graph game iff she has one for the simple stochastic game.
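The construction itself is purely a graph transformation, and can be sketched as follows. The vertex encodings and names are assumptions, and the rational probabilities that make the reduction exact are not fixed here:

```python
def to_simple_stochastic(edges, labels):
    """Add the two sinks and an average vertex ('avg', i) for each vertex i
    with a back edge j -> i (j > i); redirect such edges through ('avg', i),
    which points to i and to the sink matching i's label."""
    back = {i for j, succs in edges.items() for i in succs if j > i}
    ssg = {}
    for j, (a, b) in edges.items():
        fix = lambda i: ("avg", i) if j > i and i in back else i
        ssg[j] = (labels[j], (fix(a), fix(b)))
    for i in back:
        sink = "I-sink" if labels[i] == "I" else "II-sink"
        ssg[("avg", i)] = ("A", (i, sink))
    return ssg

ssg = to_simple_stochastic({1: (2, 2), 2: (1, 3), 3: (3, 3)},
                           {1: "I", 2: "II", 3: "I"})
print(ssg[2])           # ('II', (('avg', 1), 3)): back edge 2 -> 1 redirected
print(ssg[("avg", 1)])  # ('A', (1, 'I-sink')): vertex 1 is labelled I
```

Self-loops such as 3 → 3 are untouched, since the condition j > i is strict.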
examples such as the Protocol which pass data items through the system oblivious of their particular values. A number of authors have identified classes of processes which are in this sense data independent. At the other extreme are systems such as T(i) of section 1.1 where future behaviour strongly depends on the value i. In between are systems such as the register where particular values are essential to change of state.
A third class of processes is infinite state independently of parameterization. An instance is the counter Count of section 1.5. Here the system evolves its structure as it performs actions. In certain cases processes that are infinite state, in that they determine an infinite state transition graph, are in fact bisimulation equivalent to a finite state process. A simple example is that C ≝ a.C | b.C is bisimilar to C′ ≝ a.C′ + b.C′. Another interesting subclass of infinite state processes are those for which bisimulation equivalence is decidable. Two examples are the context free processes and the basic parallel processes [23, 22].

A final class of systems is also parameterized. However for each instance of the parameter the system is finite state. Two paradigm examples are the buffer Buffₙ and the scheduler Schedₙ, both from section 1.4. Although the techniques for verification of temporal properties apply to instances, they do not apply to the general families. In such cases we would like to prove properties generally, to show for instance that for each n ≥ 1 Schedₙ is free from deadlock. The proof of this requires exposing structure that is common to this whole family.
In this section we present a simple generalization of satisfaction, and examine how it can be used to provide a tableau proof system. The full story of this proof system (presented with Julian Bradfield in [18]) continues into the next section.

A straightforward generalization of satisfaction is as a relation between a set of processes and a formula. We use the same relation ⊨_V for this extension. If 𝒫 is a transition closed set with 𝓔 ⊆ 𝒫 then

𝓔 ⊨_V Φ₁ ∧ Φ₂ iff 𝓔 ⊨_V Φ₁ and 𝓔 ⊨_V Φ₂
𝓔 ⊨_V Φ₁ ∨ Φ₂ iff there are 𝓔₁, 𝓔₂ with 𝓔₁ ∪ 𝓔₂ = 𝓔 and 𝓔₁ ⊨_V Φ₁ and 𝓔₂ ⊨_V Φ₂
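The two clauses above can be sketched operationally, given an ordinary pointwise satisfaction test for subformulas. The formula encoding and the atomic labelling below are assumptions; note that the empty set satisfies every formula, which is what the canonical split for ∨ relies on:

```python
def sat_set(E, phi, sat):
    """Set-based satisfaction: E |= p1 & p2 iff E |= p1 and E |= p2;
    E |= p1 v p2 iff E splits as E1 u E2 with E1 |= p1 and E2 |= p2."""
    op = phi[0]
    if op == "and":
        return sat_set(E, phi[1], sat) and sat_set(E, phi[2], sat)
    if op == "or":
        E1 = {e for e in E if sat(e, phi[1])}   # canonical split
        return sat_set(E1, phi[1], sat) and sat_set(E - E1, phi[2], sat)
    return all(sat(e, phi) for e in E)          # pointwise; all() is True on {}

props = {"E1": {"p"}, "E2": {"q"}}              # hypothetical atomic labelling
def sat(e, phi):
    if phi[0] == "atom":
        return phi[1] in props[e]
    return sat_set({e}, phi, sat)

print(sat_set({"E1", "E2"}, ("or", ("atom", "p"), ("atom", "q")), sat))  # True
print(sat_set(set(), ("atom", "p"), sat))       # True: {} satisfies anything
```

The ∨ clause splits {E1, E2} into {E1} (satisfying the first disjunct) and {E2} (satisfying the second), even though neither disjunct holds of the whole set pointwise.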
Each rule again has a premise sequent and consequent sequents, possibly with side conditions. As in section 4.2 the premise sequent 𝓔 ⊢ Φ is the goal to be achieved (that every process in 𝓔 has the property Φ) while the consequents are the subgoals. The tableau proof rules are presented in figure 24. As we are generalizing the proof system of section 4.2, new constants are introduced as fixed point formulas are met: this makes termination less complex than if we generalized the proof system of section 4.3, where constants are reintroduced. There is one new kind of rule, a structural rule Thin, which allows the set of processes in a goal sequent to be expanded. Clearly, the rules are backwards sound.

To show that all the processes in 𝓔 have the property Φ relative to V, one tries to achieve the goal 𝓔 ⊢_V Φ by building a successful tableau. As before a successful
²⁴ By definition ∅ ⊨_V Φ for any Φ.
∧     𝓔 ⊢ Φ ∧ Ψ
      𝓔 ⊢ Φ    𝓔 ⊢ Ψ

∨     𝓔 ⊢ Φ ∨ Ψ          𝓔 = 𝓔₁ ∪ 𝓔₂
      𝓔₁ ⊢ Φ   𝓔₂ ⊢ Ψ

[K]   𝓔 ⊢ [K]Φ           K(𝓔) = {F : E –a→ F and E ∈ 𝓔 and a ∈ K}
      K(𝓔) ⊢ Φ

⟨K⟩   𝓔 ⊢ ⟨K⟩Φ           f : 𝓔 → K(𝓔) is a choice function
      f(𝓔) ⊢ Φ

σZ    𝓔 ⊢ σZ.Φ           U ≝ σZ.Φ and U is new
      𝓔 ⊢ U

U     𝓔 ⊢ U              U ≝ σZ.Φ
      𝓔 ⊢ Φ{U/Z}

Thin  𝓔 ⊢ Φ              𝓔 ⊆ 𝓕
      𝓕 ⊢ Φ

Fig. 24. Tableau rules for sets of processes
tableau is a finite proof tree whose root is labelled with this initial sequent, and where all the leaves are labelled by sequents that are successful. Sequents labelling the immediate successors of a node labelled 𝓕 ⊢ Ψ are determined by an application of one of the rules, either by Thin or by the rule for the main connective of Ψ. The crucial missing ingredient is when a node counts as a terminal node.

The definition of a leaf in a tableau is, as we shall see in the next section, underpinned by the game theoretic characterization of satisfaction. A tableau now captures a whole family of games, as each process in the set of processes on the left hand side of a sequent determines a play from it and the property on the right hand side. A node n labelled by the sequent 𝓕 ⊢ Ψ is terminal in the circumstances described in figure 25. Clearly a node labelled with a sequent

1. Ψ = tt, or                    1′. Ψ = ff, or
   Ψ = Z and 𝓕 ⊆ V(Z)                Ψ = Z and 𝓕 ⊈ V(Z)

fulfilling 1 or 2 is successful, and similarly any node labelled with 1′ or 2′ is not
true. The other two conditions are generalizations of those for the proof system

U ≝ νZ.Φ              U ≝ μZ.Φ
𝓔 ⊢ U                 𝓔 ⊢ U
  ⋮  𝓕 ⊆ 𝓔              ⋮  𝓔 ⊆ 𝓕
𝓕 ⊢ U                 𝓕 ⊢ U
Successful            Unsuccessful

Fig. 26. The repeat terminal conditions 3 (successful) and 3′ (unsuccessful)
of section 4.2, and are pictured in figure 26. The justification for the success of condition 3 can be seen by considering any infinite length game play from a process in 𝓔 with respect to the property U which cycles through the leaf 𝓕 ⊢ U. As 𝓕 ⊆ 𝓔, the play continues from this companion node. Such an infinite play must pass through U infinitely often, and is therefore a win for player II.

A successful tableau for 𝓔 ⊢_V Φ is a finite proof tree all of whose leaves are successful. A successful tableau only contains true leaves.

Proposition 1 If 𝓔 ⊢_V Φ has a successful tableau then 𝓔 ⊨_V Φ.

However, as the proof system stands, the converse is not true. A further termination condition is needed for least fixed point constants. However this condition is a little complex, and so we delay its discussion until the next section. Instead we present various examples that can be proved without it. In the following we write E ⊢ Φ instead of {E} ⊢ Φ.
Example 2 It is not possible to show that Cnt has the property νZ.⟨up⟩Z using the tableau proof system of section 4.2, when Cnt is the infinite state process Cnt ≝ up.(Cnt | down.0). There is a very simple proof within this more general proof system. Let Cnt₀ be Cnt and let Cntᵢ₊₁ be Cntᵢ | down.0 for any i ≥ 0.
{SMₙ : n ≥ 0} ⊢ νZ.Q ∧ [-]Z
𝓖 ⊢ νZ.Q ∧ [-]Z
𝓖 ⊢ U
𝓖 ⊢ Q ∧ [-]U
𝓖 ⊢ Q    𝓖 ⊢ [-]U
         𝓖 ⊢ U

Here the Thin rule is used to enlarge the set of slot machines to the set 𝓖, which is 𝒫(SMₙ). □
The proof system is also applicable to finite state examples, providing a much more succinct presentation of player II's winning strategy for a game.

Example 4 Recall the level crossing of figure 7. Its safety property is expressed as νZ.([tcross]ff ∨ [ccross]ff) ∧ [-]Z. Let Φ be this formula. We employ the abbreviations in figure 8, and we let 𝓔 be the full set {E₀, …, E₁₁}. Below is a successful tableau showing that the crossing has this property:

Crossing ⊢ Φ
𝓔 ⊢ Φ
𝓔 ⊢ U
𝓔 ⊢ ([tcross]ff ∨ [ccross]ff) ∧ [-]U
𝓔 ⊢ [tcross]ff ∨ [ccross]ff                          𝓔 ⊢ [-]U
𝓔 − {E₅, E₇} ⊢ [tcross]ff   𝓔 − {E₄, E₆} ⊢ [ccross]ff   𝓔 ⊢ U
∅ ⊢ ff                      ∅ ⊢ ff

Again notice the essential use of the Thin rule at the first step. □
Example 5 In example 2 of section 4.2 we noted how similar the two subtableaux T1 and T2 are. These can be amalgamated as follows (where the same abbreviations are used):

{Ven_b, Ven_l} ⊢ U
{Ven_b, Ven_l} ⊢ [2p]Ψ ∧ [-]U
{Ven_b, Ven_l} ⊢ [2p]Ψ        {Ven_b, Ven_l} ⊢ [-]U
∅ ⊢ Ψ                         {cb.Ven, cl.Ven} ⊢ U
{cb.Ven, cl.Ven} ⊢ [2p]Ψ ∧ [-]U
{cb.Ven, cl.Ven} ⊢ [2p]Ψ      {cb.Ven, cl.Ven} ⊢ [-]U
∅ ⊢ Ψ                         {Ven} ⊢ U

□
4.6 Well foundedness
The proof system of the previous section is not complete. An example for which there is not a successful tableau is given by the following cell, C ≝ in(x).B_x where x : ℕ, and Bₙ₊₁ ≝ down.Bₙ. This cell has the property of eventual termination, μZ.[-]Z. The only possible tableau for C ⊢ μZ.[-]Z, up to renaming the constant and inessential applications of Thin, is:

C ⊢ μZ.[-]Z
C ⊢ U
C ⊢ [-]U
(1) {Bᵢ : i ≥ 0} ⊢ U
(2) {Bᵢ : i ≥ 0} ⊢ [-]U
(3) {Bᵢ : i ≥ 0} ⊢ U

The final node (3) is terminal because of the node labelled (1) above it, and it is unsuccessful because U is a least fixed point constant. However any play of the game (C, μZ.[-]Z) is won by player II. One solution to the problem is to permit induction on top of the current proof system, by showing that each Bᵢ has the property U. However we would like to avoid explicit induction principles. Instead we shall present criteria for success which capture player II's winning strategy. This requires one more condition for termination.
The additional circumstance for being a leaf node of a proof tree concerns least fixed point constants. A node n labelled by the sequent 𝓕 ⊢ U is also a terminal if it obeys the (almost repeat) condition of figure 27. This circumstance is very similar to condition 3 of the previous section except it is with respect

U ≝ μZ.Φ
𝓔 ⊢ U
  ⋮  𝓕 ⊆ 𝓔
𝓕 ⊢ U

Fig. 27. The almost repeat condition

to a least fixed point constant, and it is also similar to 3′ except that the set of processes 𝓕 at the leaf is a subset of 𝓔. Not all nodes that obey this new condition are successful. The definition of success (taken from [18]) is intricate, and requires some notation.
A leaf which obeys condition 3 of being a terminal from the previous section, or the new terminal condition above, is called a σ-terminal, where σ may be instantiated by ν or μ depending on the kind of constant involved. Suppose node n′ is an immediate successor of n, and n is labelled by 𝓔 ⊢ Φ and n′ is labelled 𝓔′ ⊢ Φ′.

A game play proceeding through (E, Φ) where E ∈ 𝓔 can have as its next configuration (E′, Φ′) where E′ ∈ 𝓔′, provided the rule applied at n is not Thin. Which possible processes E′ ∈ 𝓔′ can be in this next configuration depends on the structure of Φ. This motivates the following notion. We say that E′ ∈ 𝓔′ at n′ is a dependant of E ∈ 𝓔 at n if

All the possibilities are covered here. An example is that each Bᵢ at node (2) in the tableau earlier is a dependant of the same Bᵢ at node (1), and each Bᵢ at (1) is a dependant of C at the node directly above it.
Assume that the companion of a σ-terminal is the most recent node above it which makes it a terminal. (There may be more than one node above a σ-terminal which makes it a leaf, hence we take the lowest one.) Next we define the notion of a trail.

Definition 1 Assume that node nₖ is a μ-terminal and node n₁ is its companion. A trail from process E₁ at n₁ to Eₖ at nₖ is a sequence of pairs of nodes and processes (n₁, E₁), …, (nₖ, Eₖ) such that for all i with 1 ≤ i < k either

((1), B₂) ((2), B₂) ((3), B₁) is a simple trail from B₂ at (1) to B₁ at (3) in the tableau earlier. In this case B₂ at (2) is a dependant of B₂ at (1), and B₁ at (3) is a dependant of B₂ at (2). Condition 2 of Definition 1 is needed to take account of the possibility of embedded fixed points, as pictured in figure 28. A trail from (n₁, E₁) to (nₖ, Eₖ) may pass through node nⱼ repeatedly before reaching nₖ.
Success means that the relation ⊲ₙ induced by the companion n of a μ-terminal is well-founded. This precludes the possibility of an infinite game play associated with a tableau which cycles through the node n infinitely often.

A tableau is successful if it is finite and all its leaves obey one of the conditions for being a successful terminal (either 1, 2 or 3 from the previous section, or that of being a successful μ-terminal). The tableau technique is both sound and complete for arbitrary (infinite state) processes. Again the result is proved using
and where Q holds when the crossing is in any state where Rail can by itself perform green, and R holds in any state where Road can by itself perform up. We employ the abbreviations in figure 8, and we let 𝓔 be the full set {E₀, …, E₁₁}. The states at which Q holds are E₂, E₃, E₆ and E₁₀, and R holds at E₁, E₃, E₇ and E₁₁. A proof that the crossing has this property is given in stages in figure 29.

In this tableau there is one μ-terminal, labelled (1), whose companion is (c). The relation ⊳_c is well founded, as we only have E₁ ⊳_c E₄, E₄ ⊳_c E₅ and E₃ ⊳_c E₆. Therefore the tableau is successful. □
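For a finite relation such as the ⊳_c above, well-foundedness is just acyclicity, which a depth first search can check. A small sketch (the pairs below mirror the relation of the example, encoded as strings):

```python
def well_founded(pairs):
    """A finite binary relation is well-founded iff its graph is acyclic."""
    succ = {}
    for a, b in pairs:
        succ.setdefault(a, []).append(b)
    colour = {}                      # missing: unvisited, 1: on stack, 2: done
    def dfs(v):
        colour[v] = 1
        for w in succ.get(v, []):
            c = colour.get(w, 0)
            if c == 1 or (c == 0 and not dfs(w)):
                return False         # back edge: an infinite descending chain
        colour[v] = 2
        return True
    return all(dfs(v) for v in list(succ) if colour.get(v, 0) == 0)

print(well_founded([("E1", "E4"), ("E4", "E5"), ("E3", "E6")]))  # True
print(well_founded([("a", "b"), ("b", "a")]))                    # False
```

The second call shows the failure case: a cycle in the relation yields an infinite descending chain, exactly what the success criterion for μ-terminals forbids.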
Example 3 The verification that the slot machine has the weak liveness property that a million pounds can be won infinitely often is given by a successful tableau, whose vital rules are described below.
Fig. 29. A successful tableau for the crossing liveness property, in stages T1 and T2, where 𝓔₁ = {E₁, E₃, E₄, E₆, E₇, E₁₁}: the companion (c) is the node 𝓔₁ ⊢ V, and T2 ends with the subgoals {E₁, E₃, E₇, E₁₁} ⊢ R, the μ-terminal (1) {E₄, E₆} ⊢ V, and {E₁, E₃, E₄, E₆, E₇, E₁₁} ⊢ U₂
𝓔 is the set of all derivatives. The vital rules in this tableau are the disjunction at node (1), where 𝓔₁ is exactly those processes capable of performing a win(10⁶) action, and 𝓔₂ is the remainder; and the ⟨K⟩ rule at node (3), where f is defined to ensure that 𝓔₁ is eventually reached: for a process with less than 10⁶ in the bank, f chooses events leading towards loss, so as to increase the amount in the bank; and for processes with more than 10⁶, f chooses to release(10⁶). The formal proof requires partitioning 𝓔 into several classes, each parametrized by an integer n, and showing that while n < 10⁶, n is strictly increasing over a cycle through the classes; then when n = 10⁶, f selects a successor that is not in 𝓔₂, and so a chain from E₀ through nodes (1), (2), (3), (4) terminates. □
Example 4 Consider the family of processes T(i) for i ≥ 1 from section 1.1. If T(i) for all i ≥ 1 stabilizes into the cycle T(2) → T(1) → T(2) then the following tableau is successful, and otherwise it is not. But which of these holds is not known!

The problem is that we don't know if the relation induced by the companion of this leaf is well-founded. □
5 Concluding Comments
In previous sections we used modal logic and modal mu-calculus for analysing
properties of processes. We also noted the close relationship between bisimilarity
and having the same properties. Some of the techniques mentioned, especially in
the case of finite state processes, are implemented in the Edinburgh Concurrency
Workbench²⁵. Another tool is the infinite state model checker for Petri nets based
on tableaux, described in [16].
An important topic which we have not discussed is to what extent verification
can be guided by the theory of processes. Game playing and the tableau proof
rules are directed by the logical structure of the property. A simple case of where
the theory of processes m a y impinge is the following proof rule that can be added
to the tableaux proof systems of sections 4.2 and 4.3 when r is a closed formula.
²⁵ Which is freely available by emailing Perdita.Stevens@dcs.ed.ac.uk, or from the
WWW: http://www.dcs.ed.ac.uk/packages/cwb/index.html.
    E ⊢ Φ
    ─────── F ∼ E
    F ⊢ Φ
This is justified because, as we saw in section 3.7, bisimulation equivalence pre-
serves temporal properties. Moreover, if Φ belongs to the weak modal mu-calculus
then we only need the subgoal F ≈ E. Use of this rule could appeal to the
theory of bisimulation, and to techniques for minimizing process descriptions. Al-
ternatively it could appeal to the equational theory of processes.
Process behaviour is chronicled through transitions. But processes also have
structure, defined as they are from combinators. To what extent can process
properties be defined without appealing to transition behaviour, but instead to
this algebraic structure? The ascription of boolean combinations of properties to
processes does not directly depend on their behaviour: for instance, E satisfies
Φ ∨ Ψ provided it satisfies one of the disjuncts. Consequently it is the modal
(and fixed point) operators that we need to concern ourselves with, and how algebraic
structure relates to them. Some simple cases are captured in the following lemma.
References
23. Christensen, S., Hüttel, H., and Stirling, C. (1992). Bisimulation equivalence is
decidable for all context-free processes. Lecture Notes in Computer Science, 630,
138-147.
24. Cleaveland, R., and Hennessy, M. (1988). Priorities in process algebra. Proc. 3rd
IEEE Symposium on Logic in Computer Science, 193-202.
25. Cleaveland, R., Parrow, J., and Steffen, B. (1989). The concurrency workbench.
Lecture Notes in Computer Science, 407, 24-37.
26. Condon, A. (1992). The complexity of stochastic games. Information and Com-
putation, 96, 203-224.
27. De Nicola, R. and Vaandrager, F. (1990). Three logics for branching bisimulation.
Proc. 5th IEEE Symposium on Logic in Computer Science, 118-129.
28. Emerson, E., and Clarke, E. (1980). Characterizing correctness properties of par-
allel programs using fixpoints. Lecture Notes in Computer Science, 85, 169-181.
29. Emerson, E., and Jutla, C. (1991). Tree automata, mu-calculus and determinacy.
In Proc. 32nd IEEE Foundations of Computer Science.
30. Emerson, E., and Lei, C. (1986). Efficient model checking in fragments of the
propositional mu-calculus. In Proc. 1st IEEE Symposium on Logic in Computer
Science, 267-278.
31. Emerson, E., and Srinivasan, J. (1989). Branching time temporal logic. Lecture
Notes in Computer Science, 354, 123-284.
32. van Glabbeek, J. (1990). The linear time-branching time spectrum. Lecture Notes
in Computer Science, 458, 278-297.
33. van Glabbeek, R. J., and Weijland, W. P. (1989). Branching time and abstraction
in bisimulation semantics. Information Processing 89, 613-618.
34. Groote, J. (1993). Transition system specifications with negative premises. The-
oretical Computer Science, 118, 263-299.
35. Groote, J. and Vaandrager, F. (1989). Structured operational semantics and
bisimulation as a congruence. Lecture Notes in Computer Science, 372, 423-438.
36. Hennessy, M. (1988). An Algebraic Theory of Processes. MIT Press.
37. Hennessy, M. and Ingólfsdóttir, A. (1990). A theory of communicating processes with
value-passing. Lecture Notes in Computer Science, 443, 209-220.
38. Hennessy, M. and Milner, R. (1980). On observing nondeterminism and concur-
rency. Lecture Notes in Computer Science, 85, 295-309.
39. Hennessy, M. and Milner, R. (1985). Algebraic laws for nondeterminism and con-
currency. Journal of the Association for Computing Machinery, 32, 137-162.
40. Hoare, C. (1985). Communicating Sequential Processes. Prentice Hall.
41. Kanellakis, P. and Smolka, S. (1990). CCS expressions, finite state processes,
and three problems of equivalence. Information and Computation, 86, 43-68.
42. Kozen, D. (1983). Results on the propositional mu-calculus. Theoretical Computer
Science, 27, 333-354.
43. Lamport, L. (1983). Specifying concurrent program modules. ACM Transactions
on Programming Languages and Systems, 6, 190-222.
44. Larsen, K. (1990). Proof systems for satisfiability in Hennessy-Milner logic with
recursion. Theoretical Computer Science, 72, 265-288.
45. Larsen, K. (1990). Ideal specification formalism. Lecture Notes in Computer
Science, 458, 33-56.
46. Larsen, K. and Skou, A. (1989). Bisimulation through probabilistic testing. In 16th
Annual ACM Symposium on Principles of Programming Languages.
47. Long, D., Browne, A., Clarke, E., Jha, S., and Marrero, W. (1994). An improved
algorithm for the evaluation of fixpoint expressions. Lecture Notes in Computer
Science, 818.
48. Ludwig, W. (1995). A subexponential randomized algorithm for the simple
stochastic game problem. Information and Computation, 117, 151-155.
49. Manna, Z, and Pnueli, A. (1991). The Temporal Logic of Reactive and Concurrent
Systems. Springer.
50. Milner, R. (1980). A Calculus of Communicating Systems. Lecture Notes in Com-
puter Science, 92.
51. Milner, R. (1983). Calculi for synchrony and asynchrony. Theoretical Computer
Science, 25, 267-310.
52. Milner, R. (1989). Communication and Concurrency. Prentice Hall.
53. Milner, R., Parrow, J., and Walker, D. (1992). A calculus of mobile processes,
Parts I and II, Information and Computation, 100, 1-77.
54. Moller, F. and Tofts, C. (1990). A temporal calculus of communicating processes.
Lecture Notes in Computer Science, 458, 401-415.
55. Nicollin, X. and Sifakis, J. (1992). An overview and synthesis on timed process
algebras. Lecture Notes in Computer Science, 575, 376-398.
56. Park, D. (1969). Fixpoint induction and proofs of program properties. Machine
Intelligence, 5, 59-78. Edinburgh University Press.
57. Park, D. (1981). Concurrency and automata on infinite sequences. Lecture Notes
in Computer Science, 154, 561-572.
58. Parrow, J. (1988). Verifying a CSMA/CD-Protocol with CCS. In Protocol Spe-
cification, Testing, and Verification VIII, 373-384. North-Holland.
59. Plotkin, G. (1981). A structural approach to operational semantics. Technical
Report, DAIMI FN-19, Aarhus University.
60. Pratt, V. (1982). A decidable mu-calculus. 22nd IEEE Symposium on Foundations
of Computer Science, 421-427.
61. Simone, R. de (1985). Higher-level synchronizing devices in Meije-SCCS. Theor-
etical Computer Science, 37, 245-267.
62. Sistla, P., Clarke, E., Francez, N. and Meyer, A. (1984). Can message buffers be
axiomatized in linear temporal logic? Information and Control, 68, 88-112.
63. Stirling, C. (1987). Modal logics for communicating systems, Theoretical Com-
puter Science, 49, 311-347.
64. Stirling, C. (1992) Modal and temporal logics. In Handbook of Logic in Computer
Science Vol. 2, ed. Abramsky, S., Gabbay, D., and Maibaum, T., 477-563. Oxford
University Press.
65. Stirling, C. (1995). Local model checking games. Lecture Notes in Computer Sci-
ence, 962, 1-11.
66. Stirling, C. and Walker, D. (1991). Local model checking in the modal mu-
calculus. Theoretical Computer Science, 89, 161-177.
67. Streett, R. and Emerson, E. (1989). An automata theoretic decision procedure
for the propositional mu-calculus. Information and Computation, 81, 249-264.
68. Taubner, D. (1989). Finite Representations of CCS and TCSP Programs by Auto-
mata and Petri Nets. Lecture Notes in Computer Science, 369.
69. Walker, D. (1987). Introduction to a calculus of communicating systems. Technical
Report ECS-LFCS-87-22, Dept. of Computer Science, Edinburgh University.
70. Walker, D. (1989). Automated analysis of mutual exclusion algorithms using CCS.
Formal Aspects of Computing, 1, 273-292.
Moshe Y. Vardi*
Rice University
Department of Computer Science
P.O. Box 1892
Houston, TX 77251-1892, U.S.A.
Email: vardi@cs.rice.edu
1 Introduction
While program verification was always a desirable, but never an easy, task, the advent of
concurrent programming has made it significantly more necessary and difficult. Indeed,
the conceptual complexity of concurrency increases the likelihood of the program con-
taining errors. To quote from [OL82]: "There is a rather large body of sad experience to
indicate that a concurrent program can withstand very careful scrutiny without revealing
its errors."
The first step in program verification is to come up with a formal specification of the
program. One of the more widely used specification languages for concurrent programs
is temporal logic [Pnu77, MP92]. Temporal logic comes in two varieties: linear time and
branching time ([EH86, Lam80]); we concentrate here on linear time. A linear temporal
specification describes the computations of the program, so a program satisfies the
* Part of this work was done at the IBM Almaden Research Center.
specification (is correct) if all its computations satisfy the specification. Of course, a
specification is of interest only if it is satisfiable. An unsatisfiable specification cannot
be satisfied by any program. An often advocated approach to program development is to
avoid the verification step altogether by using the specification to synthesize a program
that is guaranteed to be correct.
Our approach to specification, verification, and synthesis is based on an intimate
connection between linear temporal logic and automata theory, which was discussed
explicitly first in [WVS83] (see also [LPZ85, Pei85, Sis83, SVW87, VW94]). This
connection is based on the fact that a computation is essentially an infinite sequence
of states. In the applications that we consider here, every state is described by a finite
set of atomic propositions, so a computation can be viewed as an infinite word over
the alphabet of truth assignments to the atomic propositions. The basic result in this
area is the fact that temporal logic formulas can be viewed as finite-state acceptors.
More precisely, given any propositional temporal formula, one can construct a finite
automaton on infinite words that accepts precisely the computations satisfying the
formula [VW94]. We will describe the applications of this basic result to satisfiability
testing, verification, and synthesis.
Unlike classical automata theory, which focused on automata on finite words, the
applications to specification, verification, and synthesis use automata on infinite words,
since the computations in which we are interested are typically infinite. Before going
into the applications, we give a basic introduction to the theory of automata on infinite
words. To help the readers build their intuition, we review the theory of automata on
finite words and contrast it with the theory of automata on infinite words. For a more
advanced introduction to the theory of automata on infinite objects, the readers are
referred to [Tho90].
2 Automata Theory
We are given a finite nonempty alphabet Σ. A finite word is an element of Σ*, i.e., a
finite sequence a_0, ..., a_{n-1} of symbols from Σ. An infinite word is an element of Σ^ω,
i.e., an ω-sequence a_0, a_1, ... of symbols from Σ. Automata on finite words define
(finitary) languages, i.e., sets of finite words, while automata on infinite words define
infinitary languages, i.e., sets of infinite words.
edge-labeled directed graph: the states of the automaton are the nodes, the edges are
labeled by symbols in Σ, a certain set of nodes is designated as initial, and a certain set
of nodes is designated as accepting. Thus, t ∈ ρ(s, a) means that there is an edge from
s to t labeled with a. When A is deterministic, the transition function ρ can be viewed
as a partial mapping from S × Σ to S, and can then be extended to a partial mapping
from S × Σ* to S as follows: ρ(s, ε) = s and ρ(s, xw) = ρ(ρ(s, x), w) for x ∈ Σ and
w ∈ Σ*.
A run r of A on a finite word w = a_0, ..., a_{n-1} ∈ Σ* is a sequence s_0, ..., s_n of
n + 1 states in S such that s_0 ∈ S^0 and s_{i+1} ∈ ρ(s_i, a_i) for 0 ≤ i < n. Note that a
nondeterministic automaton can have many runs on a given input word. In contrast, a
deterministic automaton can have at most one run on a given input word. The run r is
accepting if s_n ∈ F. One could picture the automaton as having a green light that is
switched on whenever the automaton is in an accepting state and switched off whenever
the automaton is in a non-accepting state. Thus, the run is accepting if the green light
is on at the end of the run. The word w is accepted by A if A has an accepting run on
w. When A is deterministic, w ∈ L(A) if and only if ρ(s^0, w) ∈ F, where S^0 = {s^0}.
The (finitary) language of A, denoted L(A), is the set of finite words accepted by A.
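These definitions translate directly into code. The following is a small sketch in Python (our own encoding, not from the text) of a deterministic automaton with the extended transition function and the acceptance test w ∈ L(A) iff ρ(s^0, w) ∈ F; the example automaton, accepting words over {a, b} that end in a, is our own illustration.

```python
# Sketch (ours): a deterministic automaton on finite words, with the extended
# transition function rho(s, w) and the acceptance test.

def extend(rho, s, w):
    """rho(s, epsilon) = s and rho(s, xw) = rho(rho(s, x), w)."""
    for x in w:
        if (s, x) not in rho:       # rho is a partial mapping
            return None
        s = rho[(s, x)]
    return s

def accepts(rho, s0, F, w):
    """w is accepted iff rho(s0, w) lands in F."""
    s = extend(rho, s0, w)
    return s is not None and s in F

# Example: words over {a, b} ending in 'a' ('e' = even/reject, 'o' = accept).
rho = {('e', 'a'): 'o', ('e', 'b'): 'e', ('o', 'a'): 'o', ('o', 'b'): 'e'}
print(accepts(rho, 'e', {'o'}, "abba"))   # True
print(accepts(rho, 'e', {'o'}, "ab"))     # False
```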
An important property of automata is their closure under Boolean operations. We
start by considering closure under union and intersection.
Proof: Let A1 = (Σ, S1, S1^0, ρ1, F1) and A2 = (Σ, S2, S2^0, ρ2, F2). Without loss of
generality, we assume that S1 and S2 are disjoint. Intuitively, the automaton A nonde-
terministically chooses A1 or A2 and runs it on the input word.
Let A = (Σ, S, S^0, ρ, F), where S = S1 ∪ S2, S^0 = S1^0 ∪ S2^0, F = F1 ∪ F2, and

    ρ(s, a) = ρ1(s, a) if s ∈ S1,
    ρ(s, a) = ρ2(s, a) if s ∈ S2.

It is easy to see that L(A) = L(A1) ∪ L(A2). ∎
We call A in the proof above the union of A1 and A2, denoted A1 ∪ A2.
Proof: Let A1 = (Σ, S1, S1^0, ρ1, F1) and A2 = (Σ, S2, S2^0, ρ2, F2). Intuitively, the
automaton A runs both A1 and A2 on the input word.
Let A = (Σ, S, S^0, ρ, F), where S = S1 × S2, S^0 = S1^0 × S2^0, F = F1 × F2, and
ρ((s, t), a) = ρ1(s, a) × ρ2(t, a). It is easy to see that L(A) = L(A1) ∩ L(A2). ∎
We call A in the proof above the product of A1 and A2, denoted A1 × A2.
Note that both the union and the product constructions are effective and polynomial
in the size of the constituent automata.
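Both constructions are easy to implement. A hedged sketch, with automata encoded as tuples (states, initial states, transition map, accepting states) of our own choosing; the two example automata are ours:

```python
# Sketch (ours) of the union and product constructions for nondeterministic
# automata on finite words.

def union(A1, A2):
    # S = S1 ∪ S2, S0 = S1^0 ∪ S2^0, F = F1 ∪ F2 (state sets assumed disjoint)
    S1, I1, r1, F1 = A1
    S2, I2, r2, F2 = A2
    return (S1 | S2, I1 | I2, {**r1, **r2}, F1 | F2)

def product(A1, A2, alphabet):
    # pairs of states; rho((s,t), a) = rho1(s,a) x rho2(t,a)
    S1, I1, r1, F1 = A1
    S2, I2, r2, F2 = A2
    S = {(s, t) for s in S1 for t in S2}
    rho = {((s, t), a): {(u, v) for u in r1.get((s, a), set())
                                for v in r2.get((t, a), set())}
           for (s, t) in S for a in alphabet}
    I = {(s, t) for s in I1 for t in I2}
    F = {(s, t) for s in F1 for t in F2}
    return (S, I, rho, F)

def accepts(A, w):
    S, I, rho, F = A
    cur = set(I)
    for a in w:
        cur = {t for s in cur for t in rho.get((s, a), set())}
    return bool(cur & F)

# A1 accepts words containing an 'a'; A2 accepts words containing a 'b'.
A1 = ({0, 1}, {0}, {(0, 'a'): {1}, (0, 'b'): {0},
                    (1, 'a'): {1}, (1, 'b'): {1}}, {1})
A2 = ({2, 3}, {2}, {(2, 'a'): {2}, (2, 'b'): {3},
                    (3, 'a'): {3}, (3, 'b'): {3}}, {3})
print(accepts(union(A1, A2), "aa"))          # True: an 'a' occurs
print(accepts(product(A1, A2, "ab"), "ab"))  # True: both letters occur
print(accepts(product(A1, A2, "ab"), "aa"))  # False: no 'b'
```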
Let us consider now the issue of complementation. Consider first deterministic
automata.
Proof: Let A = (Σ, S, S^0, ρ, F). Then A_d = (Σ, 2^S, {S^0}, ρ_d, F_d). The state set of
A_d consists of all sets of states in S and it has a single initial state, namely S^0. The set
F_d = {T | T ∩ F ≠ ∅} is the collection of sets of states that intersect F nontrivially. Finally,
ρ_d(T, a) = {t | t ∈ ρ(s, a) for some s ∈ T}. ∎
Intuitively, A_d collapses all possible runs of A on a given input word into one run
over a larger state set. This construction is called the subset construction. By combining
Propositions 4 and 3 we can complement a nondeterministic automaton. The construction
is effective, but it involves an exponential blow-up, since determinization involves an
exponential blow-up (i.e., if A has n states, then A_d has 2^n states). As shown in [MF71],
this exponential blow-up for determinization and complementation is unavoidable.
For example, fix some n ≥ 1. The set of all finite words over the alphabet Σ =
{a, b} that have an a at the nth position from the right is accepted by the automaton
A = (Σ, {0, 1, 2, ..., n}, {0}, ρ, {n}), where ρ(0, a) = {0, 1}, ρ(0, b) = {0}, and
ρ(i, a) = ρ(i, b) = {i + 1} for 0 < i < n. Intuitively, A guesses a position in the input
word, checks that it contains a, and then checks that it is at distance n from the right
end of the input.
Suppose that we have a deterministic automaton A_d = (Σ, S, {s^0}, ρ_d, F) with
fewer than 2^n states that accepts this same language. Recall that ρ_d can be viewed as a
partial mapping from S × Σ* to S. Since |S| < 2^n, there must be two words uav1 and
ubv2 of length n for which ρ_d(s^0, uav1) = ρ_d(s^0, ubv2). But then we would have that
ρ_d(s^0, uav1u) = ρ_d(s^0, ubv2u); that is, either both uav1u and ubv2u are members of
L(A_d) or neither are, contradicting the assumption that L(A_d) consists of exactly the
words with an a at the nth position from the right, since |av1u| = |bv2u| = n.
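As a sanity check, the subset construction can be run on this very automaton. The sketch below (our own encoding) builds only the reachable deterministic states and finds exactly 2^n of them for n = 4:

```python
# Sketch (ours) of the subset construction, applied to the "a at the nth
# position from the right" automaton to observe the 2^n blow-up.

def subset_construction(init, rho, F, alphabet):
    # the states of A_d are the reachable sets of states of A
    start = frozenset(init)
    det_states, det_rho, work = {start}, {}, [start]
    while work:
        T = work.pop()
        for a in alphabet:
            U = frozenset(t for s in T for t in rho.get((s, a), set()))
            det_rho[(T, a)] = U
            if U not in det_states:
                det_states.add(U)
                work.append(U)
    det_F = {T for T in det_states if T & F}
    return det_states, start, det_rho, det_F

n = 4
rho = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
for i in range(1, n):
    rho[(i, 'a')] = rho[(i, 'b')] = {i + 1}
det_states, start, det_rho, det_F = subset_construction({0}, rho, {n}, 'ab')
print(len(det_states))  # 16 = 2**n reachable subsets
```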
i.e., lim(r) ∩ F ≠ ∅. If we picture the automaton as having a green light that is switched
on precisely when the automaton is in an accepting state, then the run is accepting if the
green light is switched on infinitely many times. The infinite word w is accepted by A
if there is an accepting run of A on w. The infinitary language of A, denoted L_ω(A), is
the set of infinite words accepted by A.
Thus, A can be viewed both as an automaton on finite words and as an automaton
on infinite words. When viewed as an automaton on infinite words it is called a Büchi
automaton [Büc62].
Do automata on infinite words have closure properties similar to those of automata
on finite words? In most cases the answer is positive, but the proofs may be more
involved. We start by considering closure under union. Here the union construction does
the right thing.
Proposition 5. [Cho74] Let A1, A2 be Büchi automata. Then L_ω(A1 ∪ A2) = L_ω(A1) ∪
L_ω(A2).
One might be tempted to think that similarly we have that L_ω(A1 × A2) = L_ω(A1) ∩
L_ω(A2), but this is not the case. The accepting set of A1 × A2 is the product of the
accepting sets of A1 and A2. Thus, A1 × A2 accepts an infinite word w if there are
accepting runs r1 and r2 of A1 and A2, respectively, on w, where both runs go infinitely
often and simultaneously through accepting states. This requirement is too strong. As a
result, L_ω(A1 × A2) could be a strict subset of L_ω(A1) ∩ L_ω(A2). For example, define the
two Büchi automata A1 = ({a}, {s, t}, {s}, ρ, {s}) and A2 = ({a}, {s, t}, {s}, ρ, {t})
with ρ(s, a) = {t} and ρ(t, a) = {s}. Clearly we have that L_ω(A1) = L_ω(A2) = {a^ω},
but L_ω(A1 × A2) = ∅.
Nevertheless, closure under intersection does hold.
Proposition 6. [Cho74] Let A1, A2 be Büchi automata. Then there is a Büchi automaton
A such that L_ω(A) = L_ω(A1) ∩ L_ω(A2).
Proof: Let A1 = (Σ, S1, S1^0, ρ1, F1) and A2 = (Σ, S2, S2^0, ρ2, F2). Let A = (Σ, S, S^0, ρ, F),
where S = S1 × S2 × {1, 2}, S^0 = S1^0 × S2^0 × {1}, F = F1 × S2 × {1}, and
(s', t', j) ∈ ρ((s, t, i), a) if s' ∈ ρ1(s, a), t' ∈ ρ2(t, a), and i = j, unless i = 1 and
s ∈ F1, in which case j = 2, or i = 2 and t ∈ F2, in which case j = 1.
Intuitively, the automaton A runs both A1 and A2 on the input word. Thus, the
automaton can be viewed as having two "tracks", one for each of A1 and A2. In
addition to remembering the state of each track, A also has a pointer that points to one
of the tracks (1 or 2). Whenever a track goes through an accepting state, the pointer
moves to the other track. The acceptance condition guarantees that both tracks visit
accepting states infinitely often, since a run accepts iff it goes infinitely often through
F1 × S2 × {1}. This means that the first track visits infinitely often an accepting state
with the pointer pointing to the first track. Whenever, however, the first track visits an
accepting state with the pointer pointing to the first track, the pointer is changed to point
to the second track. The pointer returns to point to the first track only if the second
track visits an accepting state. Thus, the second track must also visit an accepting state
infinitely often. ∎
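The track-switching construction can be sketched in code as follows (our own encoding; the two example automata are the ones from the text, both accepting exactly a^ω, whose plain product was empty):

```python
# Sketch (ours) of the track-switching intersection of Proposition 6.

def buchi_intersection(A1, A2, alphabet):
    S1, I1, r1, F1 = A1
    S2, I2, r2, F2 = A2
    rho = {}
    for s in S1:
        for t in S2:
            for i in (1, 2):
                # the pointer moves to the other track whenever the
                # pointed track goes through an accepting state
                if i == 1 and s in F1:
                    j = 2
                elif i == 2 and t in F2:
                    j = 1
                else:
                    j = i
                for a in alphabet:
                    rho[((s, t, i), a)] = {
                        (s2, t2, j)
                        for s2 in r1.get((s, a), set())
                        for t2 in r2.get((t, a), set())}
    I = {(s, t, 1) for s in I1 for t in I2}
    F = {(s, t, 1) for s in F1 for t in S2}
    return I, rho, F

# the two automata from the text: both accept exactly a^omega
A1 = ({'s', 't'}, {'s'}, {('s', 'a'): {'t'}, ('t', 'a'): {'s'}}, {'s'})
A2 = ({'s', 't'}, {'s'}, {('s', 'a'): {'t'}, ('t', 'a'): {'s'}}, {'t'})
I, rho, F = buchi_intersection(A1, A2, 'a')

# follow the (here unique) run on a^omega and watch it revisit F
state = next(iter(I))
visits = []
for _ in range(4):
    visits.append(state in F)
    state = next(iter(rho[(state, 'a')]))
print(visits)  # [True, False, True, False]: F is hit infinitely often
```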
Thus, Büchi automata are closed under both union and intersection, though the con-
struction for intersection is somewhat more involved than a simple product. The situation
is considerably more involved with respect to closure under complementation. First, as
we shall shortly see, Büchi automata are not closed under determinization, i.e., non-
deterministic Büchi automata are more expressive than deterministic Büchi automata.
Second, it is not even obvious how to complement deterministic Büchi automata. Con-
sider the deterministic Büchi automaton A = (Σ, S, S^0, ρ, F). One may think that it
suffices to complement the acceptance condition, i.e., to replace F by S − F and define
Ā = (Σ, S, S^0, ρ, S − F). Not going infinitely often through F, however, is not the
same as going infinitely often through S − F. A run might go through both F and S − F
infinitely often. Thus, L_ω(Ā) may be a strict superset of Σ^ω − L_ω(A). For example,
consider the Büchi automaton A = ({a}, {s, t}, {s}, ρ, {s}) with ρ(s, a) = {t} and
ρ(t, a) = {s}. We have that L_ω(A) = L_ω(Ā) = {a^ω}.
Nevertheless, Büchi automata (deterministic as well as nondeterministic) are closed
under complementation.
Proposition 7. [Büc62] Let A be a Büchi automaton over an alphabet Σ. Then there is
a (possibly nondeterministic) Büchi automaton Ā such that L_ω(Ā) = Σ^ω − L_ω(A).
u10u20...0ui(0...0uj)^ω.
But the latter word has infinitely many occurrences of 0, so it is not in F. ∎
Note that the complementary language Σ^ω − F = ((0 + 1)*0)^ω (the set of infinite words
in which 0 occurs infinitely often) is acceptable by the deterministic Büchi automaton
A = ({0, 1}, {s, t}, {s}, ρ, {s}), where ρ(s, 0) = ρ(t, 0) = {s} and ρ(s, 1) = ρ(t, 1) =
{t}. That is, the automaton starts at the state s and then it simply remembers the
last symbol it read (s corresponds to 0 and t corresponds to 1). Thus, the use of
nondeterminism in Proposition 7 is essential.
To understand why the subset construction does not work for Büchi automata, con-
sider the following two automata over a singleton alphabet: A1 = ({a}, {s, t}, {s}, ρ1, {t})
and A2 = ({a}, {s, t}, {s}, ρ2, {t}), where ρ1(s, a) = {s, t}, ρ1(t, a) = ∅, ρ2(s, a) =
{s, t}, and ρ2(t, a) = {s}. It is easy to see that A1 does not accept any infinite word,
since no infinite run can visit the state t. In contrast, A2 accepts the infinite word a^ω,
since the run (st)^ω is accepting. If we apply the subset construction to both automata,
then in both cases the initial state is {s}, ρ_d({s}, a) = {s, t}, and ρ_d({s, t}, a) = {s, t}.
Thus, the subset construction cannot distinguish between A1 and A2.
To be able to determinize automata on infinite words, we have to consider a more
general acceptance condition. Let S be a finite nonempty set of states. A Rabin condi-
tion is a subset G of 2^S × 2^S, i.e., it is a collection of pairs of sets of states, written
[(L1, U1), ..., (Lk, Uk)] (we drop the external brackets when the condition consists of
a single pair). A Rabin automaton A is an automaton on infinite words where the accep-
tance condition is specified by a Rabin condition, i.e., it is of the form (Σ, S, S^0, ρ, G).
A run r of A is accepting if for some i we have that lim(r) ∩ Li ≠ ∅ and lim(r) ∩ Ui = ∅,
that is, there is a pair in G where the left set is visited infinitely often by r while the
right set is visited only finitely often by r.
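For a run that is ultimately periodic, i.e., a finite prefix followed by a cycle repeated forever, lim(r) is exactly the set of states on the cycle, so a Rabin condition can be checked directly. A small sketch (the states q0–q3 and the pair are our own illustration):

```python
# Sketch (ours): checking a Rabin condition [(L1, U1), ..., (Lk, Uk)] on an
# ultimately periodic run, given as a finite prefix plus a repeating cycle.

def rabin_accepting(prefix, cycle, pairs):
    # lim(r) of the run prefix.cycle^omega is the set of states on the cycle
    lim = set(cycle)
    # accepting iff some pair (L, U) has lim ∩ L nonempty and lim ∩ U empty
    return any(bool(lim & L) and not (lim & U) for L, U in pairs)

print(rabin_accepting(['q0'], ['q1', 'q2'], [({'q1'}, {'q3'})]))  # True
print(rabin_accepting(['q0'], ['q1', 'q3'], [({'q1'}, {'q3'})]))  # False
```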
Rabin automata are not more expressive than Büchi automata.
2.3 Automata on Finite Words - Algorithms
1. A breadth-first-search algorithm can construct in linear time the set of all states
connected to a state in S^0 [CLR90]. A is nonempty iff this set intersects F
nontrivially.
2. Graph reachability can be tested in nondeterministic logarithmic space. The al-
gorithm simply guesses a state s0 ∈ S^0, then guesses a state s1 that is directly
connected to s0, then guesses a state s2 that is directly connected to s1, etc., until it
reaches a state t ∈ F. (Recall that a nondeterministic algorithm accepts if there is a
sequence of guesses that leads to acceptance. We do not care here about sequences
of guesses that do not lead to acceptance [GJ79].) At each step the algorithm needs
to remember only the current state and the next state; thus, if there are n states the
algorithm needs to keep in memory O(log n) bits, since log n bits suffice to describe
one state. On the other hand, graph reachability is also NLOGSPACE-hard [Jon75].
∎
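The breadth-first test of item 1 can be sketched as follows (our encoding of the transition relation as a map from (state, symbol) pairs to successor sets; the example automaton is ours):

```python
# Sketch (ours) of the linear-time nonemptiness test: breadth-first search
# from the initial states, looking for a reachable accepting state.
from collections import deque

def nonempty(init, rho, F):
    seen, queue = set(init), deque(init)
    while queue:
        s = queue.popleft()
        if s in F:
            return True
        for (u, _a), succs in rho.items():
            if u == s:
                for t in succs - seen:
                    seen.add(t)
                    queue.append(t)
    return False

rho = {(0, 'a'): {1}, (1, 'b'): {2}}
print(nonempty({0}, rho, {2}))  # True: 0 -a-> 1 -b-> 2 reaches F
print(nonempty({0}, rho, {3}))  # False: 3 is unreachable
```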
Proposition 13.
1. [EL85b, EL85a] The nonemptiness problem for Büchi automata is decidable in
linear time.
2. [VW94] The nonemptiness problem for Büchi automata is NLOGSPACE-complete.
Nondeterminism gives a computing device the power of existential choice. Its dual gives
a computing device the power of universal choice. (Compare this to the complexity
classes NP and co-NP [GJ79].) It is therefore natural to consider computing devices that
have the power of both existential choice and universal choice. Such devices are called
alternating. Alternation was studied in [CKS81] in the context of Turing machines
and in [BL80, CKS81] for finite automata. The alternation formalisms in [BL80] and
[CKS81] are different, though equivalent. We follow here the formalism of [BL80].
For a given set X, let B⁺(X) be the set of positive Boolean formulas over X (i.e.,
Boolean formulas built from elements in X using ∧ and ∨), where we also allow the
formulas true and false. Let Y ⊆ X. We say that Y satisfies a formula θ ∈ B⁺(X)
if the truth assignment that assigns true to the members of Y and assigns false to the
members of X − Y satisfies θ. For example, the sets {s1, s3} and {s1, s4} both satisfy
the formula (s1 ∨ s2) ∧ (s3 ∨ s4).
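The satisfaction relation is a straightforward recursion on the formula. A sketch, with formulas encoded as nested tuples of our own devising:

```python
# Sketch (ours): positive Boolean formulas over X as nested tuples, and the
# satisfaction test "Y satisfies theta".

def satisfies(Y, theta):
    if theta is True or theta is False:
        return theta
    op = theta[0]
    if op == 'var':
        return theta[1] in Y
    if op == 'and':
        return all(satisfies(Y, t) for t in theta[1:])
    if op == 'or':
        return any(satisfies(Y, t) for t in theta[1:])
    raise ValueError(op)

# (s1 ∨ s2) ∧ (s3 ∨ s4) from the text:
theta = ('and', ('or', ('var', 's1'), ('var', 's2')),
                ('or', ('var', 's3'), ('var', 's4')))
print(satisfies({'s1', 's3'}, theta))   # True
print(satisfies({'s1', 's2'}, theta))   # False: neither s3 nor s4 is assigned true
```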
Consider a nondeterministic automaton A = (Σ, S, S^0, ρ, F). The transition func-
tion ρ maps a state s ∈ S and an input symbol a ∈ Σ to a set of states. Each element
in this set is a possible nondeterministic choice for the automaton's next state. We
can represent ρ using B⁺(S); for example, ρ(s, a) = {s1, s2, s3} can be written as
ρ(s, a) = s1 ∨ s2 ∨ s3. In alternating automata, ρ(s, a) can be an arbitrary formula from
B⁺(S). We can have, for instance, a transition

    ρ(s, a) = (s1 ∧ s2) ∨ (s3 ∧ s4),

meaning that the automaton accepts the word aw, where a is a symbol and w is a word,
when it is in the state s if it accepts the word w from both s1 and s2 or from both s3 and
s4. Thus, such a transition combines the features of existential choice (the disjunction
in the formula) and universal choice (the conjunctions in the formula).
Formally, an alternating automaton is a tuple A = (Σ, S, s^0, ρ, F), where Σ is
a finite nonempty alphabet, S is a finite nonempty set of states, s^0 ∈ S is the initial
state (notice that we have a unique initial state), F is a set of accepting states, and
ρ : S × Σ → B⁺(S) is a transition function.
Because of the universal choice in alternating transitions, a run of an alternating
automaton is a tree rather than a sequence. A tree is a (finite or infinite) connected
directed graph, with one node designated as the root and denoted by ε, and in which
every non-root node has a unique parent (s is the parent of t and t is a child of s if there
is an edge from s to t) and the root ε has no parent. The level of a node x, denoted |x|,
is its distance from the root ε; in particular, |ε| = 0. A branch β = x_0, x_1, ... of a tree
is a maximal sequence of nodes such that x_0 is the root ε and x_i is the parent of x_{i+1}
for all i ≥ 0. Note that β can be finite or infinite. A Σ-labeled tree, for a finite alphabet
Σ, is a pair (τ, T), where τ is a tree and T is a mapping from nodes(τ) to Σ that
assigns to every node of τ a label in Σ. We often refer to T as the labeled tree. A branch
β = x_0, x_1, ... of T defines an infinite word T(β) = T(x_0), T(x_1), ... consisting of
the sequence of labels along the branch.
Formally, a run of A on a finite word w = a_0, a_1, ..., a_{n-1} is a finite S-labeled tree
r such that r(ε) = s^0 and the following holds:

    if |x| = i < n, r(x) = s, and ρ(s, a_i) = θ, then x has k children x_1, ..., x_k,
    for some k ≤ |S|, and {r(x_1), ..., r(x_k)} satisfies θ.
For example, if ρ(s^0, a_0) is (s1 ∨ s2) ∧ (s3 ∨ s4), then the nodes of the run tree at level 1
include the label s1 or the label s2 and also include the label s3 or the label s4. Note that
the depth of r (i.e., the maximal level of a node in r) is at most n, but not all branches
need to reach such depth, since if ρ(r(x), a_i) = true, then x does not need to have any
children. Note that if |x| = i < n and r(x) = s, then we cannot have ρ(s, a_i) = false,
since false is not satisfiable.
The run tree r is accepting if all nodes at depth n are labeled by states in F. Thus,
a branch in an accepting run has to hit the true transition or hit an accepting state after
reading the entire input word.
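Instead of building run trees explicitly, acceptance for alternating automata on finite words can be evaluated recursively: a state accepts a word if its transition formula holds when each state s' is read as "s' accepts the rest of the word". A sketch in our own tuple encoding, using the example transition (s1 ∧ s2) ∨ (s3 ∧ s4) above:

```python
# Sketch (ours): recursive acceptance for an alternating automaton on finite
# words; transition formulas are nested tuples over ('state', s), 'and', 'or'.

def accepts(rho, F, s, w):
    if not w:                    # end of word: the branch must be accepting
        return s in F
    theta = rho.get((s, w[0]), False)
    return evaluate(theta, rho, F, w[1:])

def evaluate(theta, rho, F, w):
    if theta is True or theta is False:
        return theta
    if theta[0] == 'state':
        return accepts(rho, F, theta[1], w)
    kids = (evaluate(t, rho, F, w) for t in theta[1:])
    return all(kids) if theta[0] == 'and' else any(kids)

# rho(s, a) = (s1 ∧ s2) ∨ (s3 ∧ s4); s1..s4 have no further transitions
rho = {('s', 'a'): ('or', ('and', ('state', 's1'), ('state', 's2')),
                          ('and', ('state', 's3'), ('state', 's4')))}
print(accepts(rho, {'s1', 's2'}, 's', 'a'))   # True: both s1 and s2 accept
print(accepts(rho, {'s1', 's3'}, 's', 'a'))   # False: neither conjunct holds
```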
What is the relationship between alternating automata and nondeterministic au-
tomata? It turns out that just as nondeterministic automata have the same expressive
power as deterministic automata but are exponentially more succinct, alternating
automata have the same expressive power as nondeterministic automata but are
exponentially more succinct.
We first show that alternating automata are at least as expressive and as succinct as
nondeterministic automata.
- ρ_a(s, b) = ⋁_{t ∈ ρ(s, b)} t.
Note that A_a has essentially the same size as A; that is, the descriptions of A_a and A
have the same length.
We now show that alternating automata are not more expressive than nondetermin-
istic automata.
Proposition 16. [BL80, CKS81, Lei81] Let A be an alternating automaton. Then there
is a nondeterministic automaton A_n such that L(A_n) = L(A).
Proof: Let A = (Σ, S, s^0, ρ, F). Then A_n = (Σ, S_n, {{s^0}}, ρ_n, F_n), where S_n = 2^S,
F_n = 2^F, and

    ρ_n(T, a) = {T' | T' satisfies ⋀_{t ∈ T} ρ(t, a)}.

(We take an empty conjunction in the definition of ρ_n to be equivalent to true; thus,
∅ ∈ ρ_n(∅, a).)
Intuitively, A_n guesses a run tree of A. At a given point of a run of A_n, it keeps in its
memory a whole level of the run tree of A. As it reads the next input symbol, it guesses
the next level of the run tree of A. ∎
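The transition function ρ_n can be sketched by brute-force enumeration of the candidate sets T' (exponential, as expected; the encoding of formulas as nested tuples is ours):

```python
# Sketch (ours) of the translation in Proposition 16: the successors of a set
# T of states are exactly the sets T' satisfying the conjunction of rho(t, a).
from itertools import chain, combinations

def successors(S, rho, T, a):
    def sat(Tp, theta):
        if theta is True or theta is False:
            return theta
        if theta[0] == 'state':
            return theta[1] in Tp
        kids = (sat(Tp, t) for t in theta[1:])
        return all(kids) if theta[0] == 'and' else any(kids)
    subsets = chain.from_iterable(combinations(sorted(S), r)
                                  for r in range(len(S) + 1))
    # an empty conjunction (T empty) is true, so every subset qualifies
    return [set(Tp) for Tp in subsets
            if all(sat(set(Tp), rho.get((t, a), False)) for t in T)]

# rho(s, a) = s1 ∧ s2: successors of {s} are exactly the supersets of {s1, s2}
rho = {('s', 'a'): ('and', ('state', 's1'), ('state', 's2'))}
print(successors({'s', 's1', 's2'}, rho, {'s'}, 'a'))
```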
The dual of a formula θ is obtained from θ by switching ∨ and ∧, and by switching true
and false. For example, the dual of x ∨ (y ∧ z) is x ∧ (y ∨ z). (Note that we are
considering formulas in B⁺(X), so we cannot simply apply negation to these formulas.)
Formally, we define the dual operation as follows:
- the dual of x is x itself, for x ∈ X,
- the dual of true is false,
- the dual of false is true,
- the dual of α ∧ β is (dual of α) ∨ (dual of β), and
- the dual of α ∨ β is (dual of α) ∧ (dual of β).
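The dual operation is a one-line recursion. A sketch over our tuple encoding of B⁺(X):

```python
# Sketch (ours): the dual of a positive Boolean formula, switching ∨/∧ and
# true/false while leaving the variables themselves unchanged.

def dual(theta):
    if theta is True:
        return False
    if theta is False:
        return True
    if theta[0] == 'var':
        return theta
    flipped = 'or' if theta[0] == 'and' else 'and'
    return (flipped,) + tuple(dual(t) for t in theta[1:])

# the dual of x ∨ (y ∧ z) is x ∧ (y ∨ z):
theta = ('or', ('var', 'x'), ('and', ('var', 'y'), ('var', 'z')))
print(dual(theta))
# ('and', ('var', 'x'), ('or', ('var', 'y'), ('var', 'z')))
```

Note that the operation is an involution: applying it twice gives the original formula back.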
By combining Propositions 11 and 16, we can obtain a nonemptiness test for alter-
nating automata.
The run r is accepting if every infinite branch in r includes infinitely many labels in F.
Note that the run can also have finite branches; if |x| = i, r(x) = s, and ρ(s, a_i) = true,
then x does not need to have any children.
As with alternating automata, alternating Büchi automata are as expressive as non-
deterministic Büchi automata. We first show that alternating Büchi automata are at least as
expressive and as succinct as nondeterministic Büchi automata. The proof of the following
proposition is identical to the proof of Proposition 15.
- for U ≠ ∅,

    ρ_n((U, V), a) = {(U', V') | there exist X, Y ⊆ S such that
                      X satisfies ⋀_{t ∈ U} ρ(t, a),
                      Y satisfies ⋀_{t ∈ V} ρ(t, a),
                      U' = X − F, and V' = Y ∪ (X ∩ F)}.

The proof that this construction is correct requires a careful analysis of accepting
runs of A. ∎
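One nondeterministic step of this construction can be sketched by enumerating the sets X and Y (our encoding; exponential, but faithful to the definition above):

```python
# Sketch (ours): one transition step of the construction above, enumerating
# the successor pairs (U', V') with U' = X − F and V' = Y ∪ (X ∩ F).
from itertools import chain, combinations

def sat(Tp, theta):
    # does the set Tp satisfy the positive Boolean formula theta?
    if theta is True or theta is False:
        return theta
    if theta[0] == 'state':
        return theta[1] in Tp
    kids = (sat(Tp, t) for t in theta[1:])
    return all(kids) if theta[0] == 'and' else any(kids)

def step(S, F, rho, U, V, a):
    subsets = [set(c) for c in chain.from_iterable(
        combinations(sorted(S), r) for r in range(len(S) + 1))]
    out = set()
    for X in subsets:
        if not all(sat(X, rho.get((t, a), False)) for t in U):
            continue
        for Y in subsets:
            if all(sat(Y, rho.get((t, a), False)) for t in V):
                out.add((frozenset(X - F), frozenset(Y | (X & F))))
    return out

# two-state automaton: p -> q -> p, with F = {q}
rho = {('p', 'a'): ('state', 'q'), ('q', 'a'): ('state', 'p')}
out = step({'p', 'q'}, {'q'}, rho, {'p'}, {'p'}, 'a')
print((frozenset(), frozenset({'q'})) in out)  # True
```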
While complementation of alternating automata is easy (Proposition 17), this is not
the case for alternating Büchi automata. Here we run into the same difficulty that we ran
into in Section 2.2: not going infinitely often through accepting states is not the same as
going infinitely often through non-accepting states. From Propositions 7, 19 and 20
it follows that alternating Büchi automata are closed under complement, but the precise
complexity of complementation in this case is not known.
Finally, by combining Propositions 13 and 20, we can obtain a nonemptiness test
for alternating Büchi automata.
Proposition 21.
1. The nonemptiness problem for alternating Büchi automata is decidable in exponen-
tial time.
2. The nonemptiness problem for alternating Büchi automata is PSPACE-complete.
by some finite automaton on infinite words. This fact was proven first in [SPH84]. The
proof there is by induction on the structure of formulas. Unfortunately, certain inductive
steps involve an exponential blow-up (e.g., negation corresponds to complementation,
which we have seen to be exponential). As a result, the complexity of that translation
is nonelementary, i.e., it may involve an unbounded stack of exponentials (that is, the
complexity bound is of the form 2^(2^(···^n)), a tower of exponentials of unbounded
height).
Theorem 22. [MSS88, Var94] Given an LTL formula φ, one can build an alternating
Büchi automaton A_φ = (Σ, S, s⁰, ρ, F), where Σ = 2^Prop and |S| is in O(|φ|), such
that L_ω(A_φ) is exactly the set of computations satisfying the formula φ.
Proof: The set S of states consists of all subformulas of φ and their negations (we identify
the formula ¬¬ψ with ψ). The initial state s⁰ is φ itself. The set F of accepting states
consists of all formulas in S of the form ¬(ξUψ). It remains to define the transition
function ρ.
In this construction, we use a variation of the notion of dual that we used in Sec-
tion 2.5. Here, the dual θ‾ of a formula θ is obtained from θ by switching ∨ and ∧,
by switching true and false, and, in addition, by negating subformulas in S, e.g., the dual of
¬p ∨ (q ∧ Xq) is p ∧ (¬q ∨ ¬Xq). More formally,
- ψ‾ = ¬ψ, for ψ ∈ S,
- true‾ = false,
- false‾ = true,
- (α ∧ β)‾ = (α‾ ∨ β‾), and
- (α ∨ β)‾ = (α‾ ∧ β‾).
branches labeled by ¬(ξUψ), we should not allow infinite branches labeled by ξUψ.
This is why we defined F to consist of all formulas in S of the form ¬(ξUψ). ∎
In the state φ, if q does not hold in the present state, then A_φ requires both X¬p to
be satisfied in the present state (that is, ¬p has to be satisfied in the next state), and φ to be
satisfied in the next state. As φ ∉ F, A_φ should eventually reach a state that satisfies q.
Note that many of the states, e.g., the subformulas X¬p and q, are not reachable; i.e.,
they do not appear in any run of A_φ. ∎
Corollary 23. [VW94] Given an LTL formula φ, one can build a Büchi automaton
A_φ = (Σ, S, S⁰, ρ, F), where Σ = 2^Prop and |S| is in 2^O(|φ|), such that L_ω(A_φ) is
exactly the set of computations satisfying the formula φ.
The proof of Corollary 23 in [VW94] is direct and does not go through alternating
Büchi automata. The advantage of the proof here is that it separates the logic from
the combinatorics. Theorem 22 handles the logic, while Proposition 20 handles the
combinatorics.
Example 2. Consider the formula φ = FGp, which requires p to hold from some point
on. The Büchi automaton associated with φ is A_φ = (2^{p}, {0, 1}, {0}, ρ, {1}), where
ρ is described in the following table:

    ρ | ∅    | {p}
    0 | {0}  | {0, 1}
    1 | ∅    | {1}
The automaton A_φ can stay forever in the state 0. Upon reading p, however, A_φ can
choose to go to the state 1. Once A_φ has made that transition, it has to keep reading p,
otherwise it rejects. Note that A_φ has to make the transition to the state 1 at some point,
since the state 0 is not accepting. Thus, A_φ accepts precisely when p holds from some
point on. ∎
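The acceptance behaviour described in this example can be checked mechanically on ultimately periodic words of the form u·v^ω, since a nondeterministic Büchi automaton accepts such a word iff, after the prefix, it can reach an accepting configuration lying on a cycle over the period. The following is an illustrative sketch (names and encodings are assumptions; the period must be nonempty):

```python
def accepts_lasso(delta, init, acc, prefix, period):
    """Does the nondeterministic Buchi automaton accept prefix . period^omega?
    delta maps (state, letter) to a set of successor states."""
    # states reachable after reading the prefix
    cur = set(init)
    for a in prefix:
        cur = {t for s in cur for t in delta.get((s, a), set())}
    # explore the graph over (state, index into period); a step reads period[i]
    nodes = {(s, 0) for s in cur}
    stack, seen, edges = list(nodes), set(nodes), {}
    while stack:
        s, i = stack.pop()
        succs = {(t, (i + 1) % len(period))
                 for t in delta.get((s, period[i]), set())}
        edges[(s, i)] = succs
        for n in succs:
            if n not in seen:
                seen.add(n)
                stack.append(n)
    # accepting iff some reachable accepting node lies on a cycle
    for f in (n for n in seen if n[0] in acc):
        todo, vis = list(edges.get(f, ())), set()
        while todo:
            n = todo.pop()
            if n == f:
                return True
            if n not in vis:
                vis.add(n)
                todo.extend(edges.get(n, ()))
    return False

# The automaton for FGp, with '-' standing for the empty letter:
delta = {(0, 'p'): {0, 1}, (0, '-'): {0}, (1, 'p'): {1}}
print(accepts_lasso(delta, {0}, {1}, ['-'], ['p']))  # True: p from some point on
```

With this encoding, the word (p·¬p)^ω is rejected, matching the discussion above: the automaton can never commit to state 1.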
4 Applications
4.1 Satisfiability
An LTL formula φ is satisfiable if there is some computation π such that π ⊨ φ. An
unsatisfiable formula is uninteresting as a specification, so unsatisfiability most likely
indicates an erroneous specification. The satisfiability problem for LTL is to decide,
given an LTL formula φ, whether φ is satisfiable.
Proof: By Corollary 23, given an LTL formula φ, we can construct a Büchi automaton
A_φ, whose size is exponential in the length of φ, that accepts precisely the computations
that satisfy φ. Thus, φ is satisfiable iff A_φ is nonempty. This reduces the satisfiability
problem to the nonemptiness problem. Since nonemptiness of Büchi automata can
be tested in nondeterministic logarithmic space (Proposition 13) and since A_φ is of
exponential size, we get a polynomial-space algorithm (again, the algorithm constructs
A_φ "on-the-fly").
To prove PSPACE-hardness, it can be shown that any PSPACE-hard problem can be
reduced to the satisfiability problem. That is, there is a logarithmic-space algorithm that,
given a polynomial-space-bounded Turing machine M and a word w, outputs an LTL
formula φ_{M,w} such that M accepts w iff φ_{M,w} is satisfiable. ∎
An LTL formula φ is valid if for every computation π we have that π ⊨ φ. A
valid formula is also uninteresting as a specification. The validity problem for LTL is to
decide, given an LTL formula φ, whether φ is valid. It is easy to see that φ is valid iff
¬φ is not satisfiable. Thus, the validity problem for LTL is also PSPACE-complete.
4.2 Verification
We focus here on finite-state programs, i.e., programs in which the variables range over
finite domains. The significance of this class follows from the fact that a significant
number of the communication and synchronization protocols studied in the literature
are in essence finite-state programs [Liu89, Rud87]. Since each state is characterized
by a finite amount of information, this information can be described by certain atomic
propositions. This means that a finite-state program can be specified using propositional
temporal logic. Thus, we assume that we are given a finite-state program and an LTL
formula that specifies the legal computations of the program. The problem is to check
whether all computations of the program are legal. Before going further, let us define
these notions more precisely.
A finite-state program over a set Prop of atomic propositions is a structure of the
form P = (W, w0, R, V), where W is a finite set of states, w0 ∈ W is the initial state,
R ⊆ W² is a total accessibility relation, and V : W → 2^Prop assigns truth values
to propositions in Prop for each state in W. The intuition is that W describes all the
states that the program could be in (where a state includes the content of the memory,
registers, buffers, location counter, etc.), R describes all the possible transitions between
states (allowing for nondeterminism), and V relates the states to the propositions (e.g.,
it tells us in which states the proposition request is true). The assumption that R is total
(i.e., that every state has a child) is for technical convenience. We can view a terminated
execution as repeating forever its last state.
Let u be an infinite sequence u0, u1, ... of states in W such that u0 = w0, and
u_i R u_{i+1} for all i ≥ 0. Then the sequence V(u0), V(u1), ... is a computation of P. We
say that P satisfies an LTL formula φ if all computations of P satisfy φ. The verification
problem is to check whether P satisfies φ.
The complexity of the verification problem can be measured in three different
ways. First, one can fix the specification φ and measure the complexity with respect to
the size of the program. We call this measure the program-complexity measure. More
precisely, the program complexity of the verification problem is the complexity of the
sets {P | P satisfies φ} for a fixed φ. Secondly, one can fix the program P and measure
the complexity with respect to the size of the specification. We call this measure the
specification-complexity measure. More precisely, the specification complexity of the
verification problem is the complexity of the sets {φ | P satisfies φ} for a fixed P.
Finally, the complexity in the combined size of the program and the specification is the
combined complexity.
Let C be a complexity class. We say that the program complexity of the verification
problem is in C if {P | P satisfies φ} ∈ C for every formula φ. We say that the program
complexity of the verification problem is hard for C if {P | P satisfies φ} is hard for
C for some formula φ. We say that the program complexity of the verification problem
is complete for C if it is in C and is hard for C. Similarly, we say that the specification
complexity of the verification problem is in C if {φ | P satisfies φ} ∈ C for every program
P, we say that the specification complexity of the verification problem is hard for C if
{φ | P satisfies φ} is hard for C for some program P, and we say that the specification
complexity of the verification problem is complete for C if it is in C and is hard for C.
We now describe the automata-theoretic approach to the verification problem. A
finite-state program P = (W, w0, R, V) can be viewed as a Büchi automaton A_P =
(Σ, W, {w0}, ρ, W), where Σ = 2^Prop and s' ∈ ρ(s, a) iff (s, s') ∈ R and a = V(s).
As this automaton has a set of accepting states equal to the whole set of states, any
infinite run of the automaton is accepting. Thus, L_ω(A_P) is the set of computations of
P.
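The construction of A_P from P is a direct reading of the definition. The sketch below is illustrative only (names and encodings are assumptions): transitions are grouped by the letter V(s) emitted at the source state, and the accepting set is all of W:

```python
def program_to_buchi(w, w0, r, v):
    """View P = (W, w0, R, V) as the Buchi automaton
    A_P = (2^Prop, W, {w0}, rho, W): s' in rho(s, a) iff (s, s') in R
    and a = V(s). Since F = W, every infinite run is accepting."""
    rho = {}
    for (s, t) in r:
        # the letter read while leaving s is the set of propositions true at s
        rho.setdefault((s, frozenset(v[s])), set()).add(t)
    return rho, {w0}, set(w)          # transitions, initial set, accepting set

# A two-state program in which the proposition 'req' holds in state 1 only.
rho, init, acc = program_to_buchi(
    w={0, 1}, w0=0, r={(0, 1), (1, 0), (1, 1)}, v={0: set(), 1: {'req'}})
```

Since every state is accepting, the automaton's language is exactly the set of computations of P, as stated above.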
Hence, for a finite-state program P and an LTL formula φ, the verification problem
is to verify that all infinite words accepted by the automaton A_P satisfy the formula φ.
By Corollary 23, we know that we can build a Büchi automaton A_φ that accepts exactly
the computations satisfying the formula φ. The verification problem thus reduces to the
automata-theoretic problem of checking that all computations accepted by the automaton
A_P are also accepted by the automaton A_φ, that is, L_ω(A_P) ⊆ L_ω(A_φ). Equivalently,
we need to check that the automaton that accepts L_ω(A_P) ∩ L_ω(A_φ)‾ is empty, where
L_ω(A_φ)‾ = Σ^ω − L_ω(A_φ).
First, note that, by Corollary 23, L_ω(A_φ)‾ = L_ω(A_¬φ) and the automaton A_¬φ has
2^O(|φ|) states. (A straightforward approach, starting with the automaton A_φ and then
using Proposition 7 to complement it, would result in a doubly exponential blow-up.)
To get the intersection of the two automata, we use Proposition 6. Consequently, we
can build an automaton for L_ω(A_P) ∩ L_ω(A_¬φ) having |W| · 2^O(|φ|) states. We need to
check this automaton for emptiness. Using Proposition 13, we get the following results.
We note that a time upper bound that is polynomial in the size of the program and
exponential in the size of the specification is considered here to be reasonable, since the
specification is usually rather short [LP85]. For a practical verification algorithm that is
based on the automata-theoretic approach, see [CVWY92].
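The emptiness check that this approach reduces to is, at bottom, a search for a reachable accepting cycle (an "accepting lasso") in the product automaton. The sketch below is a simplified illustration of that idea on an explicit successor map; it is not the memory-efficient nested depth-first search of [CVWY92], and all names are assumptions:

```python
def has_accepting_lasso(succ, init, acc):
    """Nonemptiness of a Buchi automaton given as a successor map:
    is there an accepting state, reachable from init, that lies on a cycle?"""
    def reaches(target, frontier):
        # inner search: can we come back to target from its successors?
        seen, stack = set(), list(frontier)
        while stack:
            n = stack.pop()
            if n == target:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(succ.get(n, ()))
        return False

    visited, stack = set(), list(init)
    while stack:                        # outer search over reachable states
        n = stack.pop()
        if n in visited:
            continue
        visited.add(n)
        if n in acc and reaches(n, succ.get(n, ())):
            return True
        stack.extend(succ.get(n, ()))
    return False
```

On the product of A_P with A_¬φ, a returned lasso is exactly a counterexample computation, which is what makes the approach attractive in practice.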
4.3 Synthesis
In the previous section we dealt with verification: we are given a finite-state program
and an LTL specification and we have to verify that the program meets the specification.
A frequent criticism against this approach, however, is that verification is done after sig-
nificant resources have already been invested in the development of the program. Since
programs invariably contain errors, verification simply becomes part of the debugging
process. The critics argue that the desired goal is to use the specification in the program
development process in order to guarantee the design of correct programs. This is called
program synthesis. It turns out that to solve the program-synthesis problem we need to
use automata on infinite trees.
Rabin Tree Automata. Rabin tree automata run on infinite labeled trees with a uniform
branching degree (recall the definition of labeled trees in Section 2.5). The (infinite) k-
ary tree τ_k is the set {1, ..., k}*, i.e., the set of all finite sequences over {1, ..., k}.
The elements of τ_k are the nodes of the tree. If x and xi are nodes of τ_k, then there is an
edge from x to xi, i.e., x is the parent of xi and xi is the child of x. The empty sequence
ε is the root of τ_k. A branch β = x0, x1, ... of τ_k is an infinite sequence of nodes such
that x0 = ε, and x_i is the parent of x_{i+1} for all i ≥ 0. A Σ-labeled k-ary tree T, for a
finite alphabet Σ, is a mapping T : τ_k → Σ that assigns to every node a label. We often
refer to labeled trees as trees; the intention will be clear from the context. A branch
β = x0, x1, ... of T defines an infinite word T(β) = T(x0), T(x1), ... consisting of
the sequence of labels along the branch.
A k-ary Rabin tree automaton A is a tuple (Σ, S, S⁰, ρ, G), where Σ is a finite
alphabet, S is a finite set of states, S⁰ ⊆ S is a set of initial states, G ⊆ 2^S × 2^S
is a Rabin condition, and ρ : S × Σ → 2^(S^k) is a transition function. The automaton
A takes as input Σ-labeled k-ary trees. Note that ρ(s, a) is a set of k-tuples for each
state s and symbol a. Intuitively, when the automaton is in state s and it is reading a
node x, it nondeterministically chooses a k-tuple (s1, ..., sk) in ρ(s, T(x)) and then
makes k copies of itself and moves to the node xi in the state si for i = 1, ..., k. A
run r : τ_k → S of A on a Σ-labeled k-ary tree T is an S-labeled k-ary tree such that
the root is labeled by an initial state and the transitions obey the transition function ρ;
that is, r(ε) ∈ S⁰, and for each node x we have (r(x1), ..., r(xk)) ∈ ρ(r(x), T(x)).
The run is accepting if r(β) satisfies G for every branch β = x0, x1, ... of τ_k. That is,
for every branch β = x0, x1, ..., there is some pair (L, U) ∈ G such that r(x_i) ∈ L
for infinitely many i's, but r(x_i) ∈ U for only finitely many i's. Note that different
branches might be satisfied by different pairs in G. The language of A, denoted L_ω(A),
is the set of trees accepted by A. It is easy to see that Rabin automata on infinite words
are essentially 1-ary Rabin tree automata.
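For a single branch whose run is ultimately periodic, the Rabin condition reduces to a check on the set of states visited infinitely often. A minimal illustrative sketch (names are assumptions):

```python
def rabin_accepts(inf_states, pairs):
    """A branch satisfies a Rabin condition G iff for some pair (L, U) the
    states occurring infinitely often along it meet L and avoid U.
    inf_states: the set of states visited infinitely often."""
    return any(bool(inf_states & l) and not (inf_states & u)
               for l, u in pairs)

# One pair ({'good'}, {'bad'}): 'good' recurs forever while 'bad' dies out.
print(rabin_accepts({'good', 'idle'}, [({'good'}, {'bad'})]))  # True
print(rabin_accepts({'good', 'bad'}, [({'good'}, {'bad'})]))   # False
```

The two calls illustrate the remark above: which pair (if any) is satisfied depends only on the infinitely recurring states of the branch.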
The nonemptiness problem for Rabin tree automata is to decide, given a Rabin
tree automaton A, whether L_ω(A) is nonempty. Unlike the nonemptiness problem for
automata on finite and infinite words, the nonemptiness problem for tree automata is
highly nontrivial. It was shown to be decidable in [Rab69], but the algorithm there had
nonelementary time complexity; i.e., its time complexity could not be bounded by any
fixed stack of exponential functions. Later on, elementary algorithms were described
in [HR72, Rab72]. The algorithm in [HR72] runs in doubly exponential time and the
algorithm in [Rab72] runs in exponential time. Several years later, in [Eme85, VS85],
it was shown that the nonemptiness problem for Rabin tree automata is in NP. Finally,
in [EJ88], it was shown that the problem is NP-complete.
There are two relevant size parameters for Rabin tree automata. The first is the
transition size, which is the size of the transition function (i.e., the sum of the sizes
|ρ(s, a)| for s ∈ S and a ∈ Σ); the transition size clearly takes into account the
number of states in S. The second is the number of pairs in the acceptance condition G.
For our application here we need a complexity analysis of the nonemptiness problem
that takes into account separately the two parameters.
Proposition 26. [EJ88, PR89] For Rabin tree automata with transition size m and n
pairs, the nonemptiness problem can be solved in time (mn)^O(n).
In other words, the nonemptiness problem for Rabin tree automata can be solved in time
that is exponential in the number of pairs but polynomial in the transition size. As we
will see, this distinction is quite significant.
the program makes the first move (into the first state), the environment responds with the
second move, the program counters with the third move, and so on. We associate with
r the computation V(r) = V(w0), V(w1), ..., and say that r satisfies an LTL formula φ
if V(r) satisfies φ. The goal of the program is to satisfy the specification φ in the
face of every possible move by the environment. The program has no control over the
environment moves; it only controls its own moves. Thus, the situation can be viewed
as an infinite game between the environment and the program, where the goal of the
program is to satisfy the specification φ. Infinite games were introduced in [GS53] and
they are of fundamental importance in descriptive set theory [Mos80].
Histories are finite words in W*. The history of a run r = w0, w1, ... at the even
point i ≥ 0, denoted hist(r, i), is the finite word w1, w3, ..., w_{i−1} consisting of all
states moved to by the environment; the history is the empty sequence ε for i = 0.
A program is a function f : W* → W from histories to states. The idea is that if
the program is scheduled at a point at which the history is h, then the program will
cause a change into the state f(h). This captures the intuition that the program acts in
reaction to the environment's actions. A behavior r over W is a run of the program f if
w_i = f(hist(r, i)) for all even i. That is, all the state transitions caused by the program
are consistent with the program f. A program f satisfies the specification φ if every
run of f over W satisfies φ. Thus, a correct program can then be viewed as a winning
strategy in the game against the environment. We say that φ is realizable with respect
to W and V if there is a program f that satisfies φ, in which case we say that f realizes
φ. (In the sequel, we often omit explicit mention of W and V when they are clear from the
context.) It turns out that satisfiability of φ is not sufficient to guarantee realizability of
φ.
Example 3. Consider the case where Prop = {p}, W = {0, 1}, V(0) = ∅, and
V(1) = {p}. Consider the formula Gp. This formula requires that p always be true,
and it is clearly satisfiable. There is no way, however, for the program to enforce this
requirement, since the environment can always move to the state 0, making p false.
Thus, Gp is not realizable. On the other hand, the formula GFp, which requires p to
hold infinitely often, is realizable; in fact, it is realized by the simple program that maps
every history to the state 1. This shows that realizability is a stronger requirement than
satisfiability. ∎
Consider now the specification φ. By Corollary 23, we can build a Büchi automaton
A_φ = (Σ, S, S⁰, ρ, F), where Σ = 2^Prop and |S| is in 2^O(|φ|), such that L_ω(A_φ) is
exactly the set of computations satisfying the formula φ. Thus, given a state set W
and a valuation V : W → 2^Prop, we can also construct a Büchi automaton A_φ' =
(W, S, S⁰, ρ', F) such that L_ω(A_φ') is exactly the set of behaviors satisfying the formula
φ, by simply taking ρ'(s, w) = ρ(s, V(w)). It follows that we can assume without loss
of generality that the winning condition for the game between the environment and the
program is expressed by a Büchi automaton A: the program f wins the game if every
run of f is accepted by A. We thus say that the program f realizes a Büchi automaton
A if all its runs are accepted by A. We also say then that A is realizable.
It turns out that the realizability problem for Büchi automata is essentially the solv-
ability problem described in [Chu63]. (The winning condition in [Chu63] is expressed
in S1S, the monadic second-order theory of one successor function, but it is known
[Büc62] that S1S sentences can be translated to Büchi automata.) The solvability prob-
lem was studied in [BL69, Rab72]. It is shown in [Rab72] that this problem can be
solved by using Rabin tree automata.
Consider a program f : W* → W. Suppose without loss of generality that W =
{1, ..., k}, for some k > 0. The program f can be represented by a W-labeled k-ary
tree T_f. Consider a node x = i0 i1 ... im, where 1 ≤ i_j ≤ k for j = 0, ..., m. We note
that x is a history in W*, and define T_f(x) = f(x). Conversely, a W-labeled k-ary
tree T defines a program f_T. Consider a history h = i0 i1 ... im, where 1 ≤ i_j ≤ k
for j = 0, ..., m. We note that h is a node of τ_k, and define f_T(h) = T(h). Thus,
W-labeled k-ary trees can be viewed as programs.
It is not hard to see that the runs of f correspond to the branches of T_f. Let
β = x0, x1, ... be a branch, where x0 = ε and x_j = x_{j−1} i_{j−1} for j > 0. Then
r = T(x0), i0, T(x1), i1, T(x2), ... is a run of f, denoted r(β). Conversely, if r =
i0, i1, ... is a run of f, then T_f contains a branch β(r) = x0, x1, ..., where x0 = ε,
x_j = x_{j−1} i_{2j−1}, and T(x_j) = i_{2j} for j ≥ 0. One way to visualize this is to think of the
edge from the parent x to its child xi as labeled by i. Thus, the run r(β) is the sequence
of edge and node labels along β.
We thus refer to the behaviors r(β) for branches β of a W-labeled k-ary tree T as
the runs of T, and we say that T realizes a Büchi automaton A if all the runs of T are
accepted by A. We have thus obtained the following:
Proposition 27. A program f realizes a Büchi automaton A iff the tree T_f realizes A.
We have thus reduced the realizability problem for LTL specifications to an automata-
theoretic problem: given a Büchi automaton A, decide if there is a tree T that realizes
A. Our next step is to reduce this problem to the nonemptiness problem for Rabin tree
automata. We will construct a Rabin automaton B that accepts precisely the trees that
realize A. Thus, L_ω(B) ≠ ∅ iff there is a tree that realizes A.
Proof: Consider an input tree T. The Rabin tree automaton B needs to verify that for
every branch β of T we have that r(β) ∈ L_ω(A). Thus, B needs to "run A in parallel"
on all branches of T. We first need to deal with the fact that the labels in T contain
information only about the actions of f (while the information on the actions of the
environment is implicit in the edges). Suppose that A = (W, S, S⁰, ρ, F). We first define
a Büchi automaton A' that emulates A by reading pairs of input symbols at a time. Let
A' = (W², S × {0, 1}, S⁰ × {0}, ρ', S × {1}).
Intuitively, ρ' applies two transitions of A while remembering whether either transition
visited F. Note that this construction doubles the number of states. It is easy to prove
the following claim:
Claim: A' accepts the infinite word (w0, w1), (w2, w3), ... over the alphabet W² iff A
accepts the infinite word w0, w1, w2, w3, ... over W.
In order to be able to run A' in parallel on all branches, we apply Proposition 10
to A' and obtain a deterministic Rabin automaton A_d such that L_ω(A_d) = L_ω(A').
As commented in Section 2.2, A_d has 2^O(n log n) states and O(n) pairs. Let A_d =
(W², Q, {q0}, δ, G).
We can now construct a Rabin tree automaton that "runs A_d in parallel" on all
branches of T. Let B = (W, Q, {q0}, δ', G), where δ' is defined as follows:
Corollary 30. [PR89] The realizability problem for LTL can be solved in doubly expo-
nential time.
Proof: By Corollary 23, given an LTL formula φ, one can build a Büchi automaton A_φ
with 2^O(|φ|) states such that L_ω(A_φ) is exactly the set of computations satisfying the
formula φ. By combining this with the bound of Theorem 29, we get a time bound of
k^(2^O(|φ|)). ∎
In [PR89], it is shown that the doubly exponential time bound of Corollary 30 is essen-
tially optimal. Thus, while the realizability problem for LTL is decidable, in the worst
case it can be highly intractable.
Example 4. Consider again the situation where Prop = {p}, W = {0, 1}, V(0) = ∅,
and V(1) = {p}. Let φ be the formula Gp. We have A_φ = (W, {1}, {1}, ρ, {1}), where
ρ(1, 1) = {1}, and all other transitions are empty (e.g., ρ(1, 0) = ∅, etc.). Note that
A_φ is deterministic. We can emulate A_φ by an automaton that reads pairs of symbols:
A'_φ = (W², {1} × {0, 1}, {(1, 0)}, ρ', {1} × {1}), where ρ'((1, 0), (1, 1)) = {(1, 1)},
and all other transitions are empty. Finally, we construct the Rabin tree automaton
B = (W, {1} × {0, 1}, {(1, 0)}, δ', {(L, U)}), where δ'(s, a) is empty for all states s and
symbols a. Clearly, L_ω(B) = ∅, which implies that Gp is not realizable. ∎
We note that Corollary 30 only tells us how to decide whether an LTL formula is
realizable or not. It is shown in [PR89], however, that the algorithm of Proposition 26 can
provide more than just a "yes/no" answer. When the Rabin automaton B is nonempty,
the algorithm returns a finite representation of an infinite tree accepted by B. It turns out
that this representation can be converted into a program f that realizes the specification.
It even turns out that this program is a finite-state program. This means that there are
a finite set N, a function g : W* → N, a function α1 : N × W → N, and a function
α2 : N → W such that for all h ∈ W* and w ∈ W we have:
- f(h) = α2(g(h))
- g(hw) = α1(g(h), w)
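This finite-state representation is, in effect, a transducer: g is computed by iterating α1 over the history from some initial memory value, and α2 extracts the program's move. The sketch below is illustrative (the names and the initial memory value n0 are assumptions, since the text leaves the initialization of g implicit):

```python
def make_program(n0, alpha1, alpha2):
    """Finite-state program: g(h) is obtained by folding alpha1 over the
    history h starting from the memory value n0, and f(h) = alpha2(g(h))."""
    def f(history):
        n = n0
        for w in history:               # g(hw) = alpha1(g(h), w)
            n = alpha1(n, w)
        return alpha2(n)                # f(h) = alpha2(g(h))
    return f

# The program from Example 3 that realizes GFp: always move to state 1.
f = make_program(0, lambda n, w: 0, lambda n: 1)
print(f([]), f([0, 1, 0]))  # 1 1
```

Because the memory N is finite, such a program can be implemented directly in hardware or software, which is what makes the synthesized object usable in practice.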
Acknowledgements
I am grateful to Øystein Haugen, Orna Kupferman, and Faron Moller for their many
comments on earlier drafts of this paper.
References
[ALW89] M. Abadi, L. Lamport, and P. Wolper. Realizable and unrealizable concurrent pro-
gram specifications. In Proc. 16th Int. Colloquium on Automata, Languages and Pro-
gramming, volume 372 of Lecture Notes in Computer Science, pages 1-17. Springer-
Verlag, July 1989.
[BL69] J.R. Büchi and L.H. Landweber. Solving sequential conditions by finite-state strate-
gies. Trans. AMS, 138:295-311, 1969.
[BL80] J.A. Brzozowski and E. Leiss. Finite automata and sequential networks. Theoretical
Computer Science, 10:19-35, 1980.
[Büc62] J.R. Büchi. On a decision method in restricted second order arithmetic. In Proc.
Internat. Congr. Logic, Method and Philos. Sci. 1960, pages 1-12, Stanford, 1962.
Stanford University Press.
[Cho74] Y. Choueka. Theories of automata on w-tapes: A simplified approach. J. Computer
and System Sciences, 8:117-141, 1974.
[Chu63] A. Church. Logic, arithmetics, and automata. In Proc. International Congress of
Mathematicians, 1962, pages 23-35. Institut Mittag-Leffler, 1963.
[CKS81] A.K. Chandra, D.C. Kozen, and L.J. Stockmeyer. Alternation. Journal of the Asso-
ciation for Computing Machinery, 28(1):114-133, January 1981.
[CLR90] T.H. Cormen, C.E. Leiserson, and R.L. Rivest. Introduction to Algorithms. MIT
Press, 1990.
[LPZ85] O. Lichtenstein, A. Pnueli, and L. Zuck. The glory of the past. In Logics of Pro-
grams, volume 193 of Lecture Notes in Computer Science, pages 196-218, Brooklyn,
June 1985. Springer-Verlag.
[McN66] R. McNaughton. Testing and generating infinite sequences by a finite automaton.
Information and Control, 9:521-530, 1966.
[MF71] A.R. Meyer and M.J. Fischer. Economy of description by automata, grammars, and
formal systems. In Proc. 12th IEEE Symp. on Switching and Automata Theory, pages
188-191, 1971.
[MH84] S. Miyano and T. Hayashi. Alternating finite automata on w-words. Theoretical
Computer Science, 32:321-330, 1984.
[Mic88] M. Michel. Complementation is more difficult with automata on infinite words.
CNET, Paris, 1988.
[Mos80] Y.N. Moschovakis. Descriptive Set Theory. North Holland, 1980.
[MP92] Z. Manna and A. Pnueli. The Temporal Logic of Reactive and Concurrent Systems:
Specification. Springer-Verlag, Berlin, January 1992.
[MS72] A.R. Meyer and L.J. Stockmeyer. The equivalence problem for regular expressions
with squaring requires exponential time. In Proc. 13th IEEE Symp. on Switching and
Automata Theory, pages 125-129, 1972.
[MS87] D.E. Muller and P.E. Schupp. Alternating automata on infinite trees. Theoretical
Computer Science, 54:267-276, 1987.
[MSS88] D.E. Muller, A. Saoudi, and P.E. Schupp. Weak alternating automata give a simple
explanation of why most temporal and dynamic logics are decidable in exponential
time. In Proceedings 3rd IEEE Symposium on Logic in Computer Science, pages
422-427, Edinburgh, July 1988.
[MW84] Z. Manna and P. Wolper. Synthesis of communicating processes from temporal
logic specifications. ACM Transactions on Programming Languages and Systems,
6(1):68-93, January 1984.
[OL82] S. Owicki and L. Lamport. Proving liveness properties of concurrent programs. ACM
Transactions on Programming Languages and Systems, 4(3):455-495, July 1982.
[Pei85] R. Peikert. ω-regular languages and propositional temporal logic. Technical Report
85-01, ETH, 1985.
[Pnu77] A. Pnueli. The temporal logic of programs. In Proc. 18th IEEE Symposium on
Foundation of Computer Science, pages 46-57, 1977.
[PR89] A. Pnueli and R. Rosner. On the synthesis of a reactive module. In Proceedings of
the Sixteenth ACM Symposium on Principles of Programming Languages, Austin,
January 1989.
[Rab69] M.O. Rabin. Decidability of second order theories and automata on infinite trees.
Transaction of the AMS, 141:1-35, 1969.
[Rab72] M.O. Rabin. Automata on infinite objects and Church's problem. In Regional Conf.
Ser. Math., 13, Providence, Rhode Island, 1972. AMS.
[RS59] M.O. Rabin and D. Scott. Finite automata and their decision problems. IBM J. of
Research and Development, 3:115-125, 1959.
[Rud87] H. Rudin. Network protocols and tools to help produce them. Annual Review of
Computer Science, 2:291-316, 1987.
[Saf88] S. Safra. On the complexity of omega-automata. In Proceedings of the 29th IEEE
Symposium on Foundations of Computer Science, White Plains, October 1988.
[Sav70] W.J. Savitch. Relationship between nondeterministic and deterministic tape com-
plexities. J. on Computer and System Sciences, 4:177-192, 1970.
[SC85] A.P. Sistla and E.M. Clarke. The complexity of propositional linear temporal logic.
J. ACM, 32:733-749, 1985.
[Sis83] A.P. Sistla. Theoretical issues in the design and analysis of distributed systems. PhD
thesis, Harvard University, 1983.
[SPH84] R. Sherman, A. Pnueli, and D. Harel. Is the interesting part of process logic unin-
teresting: a translation from PL to PDL. SIAM J. on Computing, 13(4):825-839,
1984.
[SVW87] A.P. Sistla, M.Y. Vardi, and P. Wolper. The complementation problem for Büchi
automata with applications to temporal logic. Theoretical Computer Science, 49:217-
237, 1987.
[Tho90] W. Thomas. Automata on infinite objects. In Handbook of Theoretical Computer
Science, pages 165-191, 1990.
[Var94] M.Y. Vardi. Nontraditional applications of automata theory. In Proc. Int'l Symp.
on Theoretical Aspects of Computer Software, volume 789 of Lecture Notes in
Computer Science, pages 575-597. Springer-Verlag, 1994.
[VS85] M.Y. Vardi and L. Stockmeyer. Improved upper and lower bounds for modal logics
of programs. In Proc. 17th ACM Symp. on Theory of Computing, pages 240-251,
1985.
[VW86] M.Y. Vardi and P. Wolper. An automata-theoretic approach to automatic program
verification. In Proceedings of the First Symposium on Logic in Computer Science,
pages 322-331, Cambridge, June 1986.
[VW94] M.Y. Vardi and P. Wolper. Reasoning about infinite computations. Information and
Computation, 115(1):1-37, November 1994.
[Wol82] P. Wolper. Synthesis of Communicating Processes from Temporal Logic Specifica-
tions. PhD thesis, Stanford University, 1982.
[WVS83] P. Wolper, M.Y. Vardi, and A.P. Sistla. Reasoning about infinite computation paths.
In Proc. 24th IEEE Symposium on Foundations of Computer Science, pages 185-194,
Tucson, 1983.
Artificial Intelligence. Proceedings, 1995. XII, 342 Methods, Languages, and Toolsfor the Construction
pages. 1995. (Subseries LNAI). of Correct Software. X, 449 pages. 1995. Vol.
Vol. 992: M. Gori, G. Soda (Eds.), Topics in Artificial Vol. 1010: M. Veloso, A. Aamodt (Eds.), Case-Based
Intelligence. Proceedings, 1995. XII, 451 pages. 1995. Reasoning Research and Development. Proceedings,
(Subseries LNAI). 1995. X, 576 pages. 1995. (Subseries LNAI).
Vol. 1011: T. Furuhashi (Ed.), Advances in Fuzzy Vol. 1032: P. Godefroid, Partial-Order Methods for the
Logic, Neural Networks and Genetic Algorithms. Pro- Verification of Concurrent Systems. IV, 143 pages. 1996.
ceedings, 1994. (Subseries LNAI). Vol. 1033: C.-H. Huang, P. Sadayappan, U. Banerjee, D.
Vol. 1012: M. Bartogek, J. Staudek, J. Wiedermann Gelernter, A. Nicolau, D. Padua (Eds.), Languages and
(Eds.), SOFSEM '95: Theory and Practice of Compilers for Parallel Computing. Proceedings, 1995.
Informatics. Proceedings, 1995. XI, 499 pages. 1995. XIII, 597 pages. 1996.
Vol. 1013: T.W. Ling, A.O. Mendelzon, L. Vieille Vol. 1034: G. Kuper, M. Wallace (Eds.), Constraint
(Eds.), Deductive and Object-Oriented Databases. Databases and Applications. Proceedings, 1995. VII, 185
Proceedings, 1995. XIV, 557 pages. 1995. pages. 1996.
Vol. 1014: A.P. d e l Pobil, M.A. Serna, Spatial Vol. 1035: S.Z. Li, D.P. Mital, E.K. Teoh, H. Wang (Eds.),
Representation and Motion Planning. XII, 242 pages. Recent Developments in Computer Vision. Proceedings,
1995. 1995. XI, 604 pages. 1996.
Vol. 1015: B. Blumenthal, J. Gornostaev, C. Unger Vol. 1036: G. Adorni, M. Zock (Eds.), Trends in Natural
(Eds.), Human-Computer Interaction. Proceedings, Language Generation - An Artificial Intelligence
1995. VIII, 203 pages. 1995. Perspective. Proceedings, 1993. IX, 382 pages. 1996.
(Subseries LNAI).
VOL. 1016: R. Cipolla, Active Visual Inference of Surface
Shape. XII, 194 pages. 1995. Vol. 1037: M. Wooldridge, J.P. MiJller, M. Tambe (Eds.),
Vol. 1017: M. Nagl (Ed.), Graph-Theoretic Concepts IntelligentAgents II. Proceedings, 1995. XVI, 437 pages.
1996. (Subseries LNAI).
in Computer Science. Proceedings, 1995. XI, 406 pages.
1995. Vol. 1038: W: Van de Velde, J.W. Perram (Eds.), Agents
Vol. 1018: T.D.C. Little, R. Gusella {Eds.), Network and Breaking Away. Proceedings, 1996. XIV, 232 pages.
Operating Systems Support for Digital Audio and Video. 1996. (Subseries LNAI).
Proceedings, 1995. XI, 357 pages. 1995. Vol. 1039: D. Gollmann (Ed.), Fast Software Encryption.
Proceedings, 1996. X, 219 pages. 1996.
Vol. 1019: E. Brinksma, W.R. Cleaveland, K.G. Larsen,
T. Margaria, B. Steffen (Eds.), Tools and Algorithms Vol. 1040: S. Wermter, E. Riloff, G. Scheler (Eds.),
for the Construction and Analysis of Systems. Selected Connectionist, Statistical, and Symbolic Approaches to
Papers, 1995. VII, 291 pages. 1995. Learning for Natural Language Processing. Proceedings,
1995. IX, 468 pages. 1996. (Subseries LNAI).
Vol. 1020: I.D. Watson (Ed.), Progress in Case-Based
Reasoning. Proceedings, 1995. VIII, 209 pages. 1995. Vol. 1041: J. Dongarra, K. Madsen, J. Wa~niewski (Eds.),
(Subseries LNAI). Applied Parallel Computing. Proceedings, 1995. XII, 562
pages. 1996.
Vol. 1021: M.P. Papazoglou (Ed.), OOER '95: Object-
Oriented and Entity-Relationship Modeling. Proceedings, Vol. 1042: G. WeiB, S. Sen (Eds.), Adaption and Learning
1995. XVII, 451 pages. 1995. in Multi-Agent Systems. Proceedings, 1995. X, 238 pages.
1996. (Subseries LNAI).
Vol. 1022: P.H. Hartel, R. Plasmeijer (Eds.), Functional
Programming Languages in Education. Proceedings, 1995. Vol. 1043: F. Muller, G. Birtwistle (Eds.), Logics for
X, 309 pages. 1995. Copcurrency. XI, 266 pages. 1996.
Vol. 1023: K. Kanchanasut, J.-J. L6vy (Eds.), Algorithms, Vol. 1044: B. Plattner (Ed.), Broadband Communications.
Concurrency and Knowlwdge. Proceedings, 1995. X, 410 Proceedings, 1996. XIV, 359 pages. 1996.
pages. 1995. Vol. 1045: B. Butscher, E. Moeller, H. Pusch (Eds.),
Vol. 1024: R.T. Chin, H.H.S. Ip, A.C. Naiman, T.-C. Pong Interactive Distributed Multimedia Systems and Services.
(Eds.), Image Analysis Applications and Computer Proceedings, 1996. XI, 333 pages. 1996.
Graphics. Proceedings, 1995. XVI, 533 pages. 1995. Vol. 1046: C. Puech, R. Reischuk (Eds.), STACS 96.
Vol. 1025: C. Boyd (Ed.), Cryptography and Coding. Proceedings, 1996. XII, 690 pages. 1996.
Proceedings, 1995. IX, 291 pages. 1995. Vol. 1047: E. Hajnicz, Time Structures. IX, 244 pages.
Vol. 1026: P.S. Thiagarajan (Ed.), Foundations of 1996. (Subseries LNAI).
Software Technology and Theoretical Computer Science. .Vol. 1048: M. Proietti (Ed.), Logic Program Syynthesis
Proceedings, 1995. XII, 515 pages. 1995. and Transformation. Proceedings, 1995. X, 267 pages.
Vol. 1027: F.J. Brandenburg (Ed.), Graph Drawing. 1996.
Proceedings, 1995. XII, 526 pages. 1996. Vol. 1049: K. Futatsugi, S. Matsuoka (Eds.), Object
Vol. 1028: N.R. Adam, Y. Yesha (Eds.), Electronic Technologies for Advanced Software. Proceedings, 1996.
Commerce. X, 155 pages. 1996. X, 309 pages. 1996.
Vol. 1029: E. Dawson, J. Goli6 (Eds.), Cryptography: Vol. 1050: R. Dyckhoff, H. Herre, P. Schroeder-Heister
Policy and Algorithms. Proceedings, 1995. XI, 327 pages. (Eds.), Extensions of Logic Programming. Proceedings,
1996. 1996. VII, 318 pages. 1996. (Subseries LNAI).
Vol. 1030: F. Pichler, R. Moreno-Dfaz, R. Albrecht (Eds.), Vol. 1051: M.-C. Gaudel, J. Woodcock (Eds.), FME '96:
Computer Aided Systems Theory - EUROCAST '95. Industrial Benefit of Formal Methods. Proceedings, 1996.
Proceedings, 1995. XII, 539 pages. 1996. XII, 704 pages. 1996.
Vol.1031: M. Toussaint (Ed.), Ada in Europe.
Proceedings, 1995. XI, 455 pages. 1996.