CS4003 All Slides 4up
I Blackboard (mymodule.tcd.ie)
I Exam: 80%
I Coursework: 20%
  I Project: 10%
  I Class (Mini-)Exercises: 10%
I Mini-exercises handed out at end of Fri 2pm class.
I Mini-solutions due in at start of following Friday 2pm class.
I These deadlines are hard, as solutions will be given out at the start of the 2pm Friday class.
I Main Reference Text:
  Unifying Theories of Programming, C.A.R. Hoare and Jifeng He, Prentice Hall, 1998.
  (out of print, will be uploaded to Blackboard)
I Secondary Texts:
  I A Logical Approach to Discrete Math, D. Gries & F. B. Schneider, Springer, 1993 (your JF math text!).
  I Using Z, J. Woodcock & J. Davies, Prentice Hall, 1996. (out of print, will be uploaded to Blackboard)
I Specified collection of Symbols (lexicon)
I Specified ways of putting them together (well-formedness)
I Specific ways of manipulating symbol-sequences (inference rules)
I Typically the goal is to transform a starting sequence to a final one having desired properties.

I Symbols: ~ z =
I Well-Formed Sequences: H-Things and I-Things, where:
    H-Thing : a ~ followed by Crosses
    Crosses : zero or more z
    I-Thing : an = followed by two H-Things
I Manipulations (let f1 and f2 stand for arbitrary Crosses):
    hhI-absorbii       =~~f1       becomes  ~f1
    hhswap-Cross-Hii   =~f1 z~f2   becomes  =~f1 ~zf2
Example, with Interpretation: ~ is 0, z is +1 (succ), = is + (plus)

      =~zz~zzz                 (0+1+1) + (0+1+1+1)       2+3
    = “ hhswap-Cross-Hii ”
      =~z~zzzz                 (0+1) + (0+1+1+1+1)       1+4
    = “ hhswap-Cross-Hii ”
      =~~zzzzz                 0 + (0+1+1+1+1+1)         0+5
    = “ hhI-absorbii ”
      ~zzzzz                   0+1+1+1+1+1               5

Under this interpretation the rules are familiar arithmetic laws:

    hhswap-Cross-Hii   =~f1 z~f2 99K =~f1 ~zf2       (n + 1) + m = n + (m + 1)
    hhI-absorbii       =~~f1 99K ~f1                 0 + m = m
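The rewrite rules can be animated in a few lines of Python (a sketch added here, not part of the notes): represent an H-Thing by its number of crosses, so hhswap-Cross-Hii is (n + 1) + m = n + (m + 1) and hhI-absorbii is 0 + m = m.

```python
# An H-Thing "~zz...z" is represented by its number of crosses.
def swap_cross_h(f1, f2):
    # hhswap-Cross-Hii : (n + 1) + m  becomes  n + (m + 1)
    return f1 - 1, f2 + 1

def evaluate(f1, f2):
    # Reduce the I-Thing "=~f1 ~f2" to a single H-Thing.
    while f1 > 0:
        f1, f2 = swap_cross_h(f1, f2)
    return f2          # hhI-absorbii : 0 + m  becomes  m

print(evaluate(2, 3))  # mirrors the =~zz~zzz example: 2 + 3 reduces to ~zzzzz, i.e. 5
```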
Types
I The meaning function, given an expression, returns a partial function from environment to values.
I Remember that “type” Value contains values of all types, including B, so B ⊆ Value.

Predicates
I P ∈ Pred ::= . . .
I Predicates are evaluated w.r.t. an environment:
    [[P ]] : Env → B
I Unlike expressions, where evaluation w.r.t. an environment may be undefined, we insist that predicates always evaluate to either True or False.
I Example, with ρ = {list 7→ h1, 1, 2, 3, 5, 8, 13i, ix 7→ 4, offset 7→ 2}:
    [[ix + offset 6 length list ]]ρ = True
We define the domain of an environment (dom ρ) as the set of all variables mentioned in ρ.

We can use one environment (ρ0 ) to override part or all of another (ρ), indicating this by ρ ⊕ ρ0 .
The bindings in the second map extend, and take precedence over, those in the first map — e.g.:

    {a 7→ 1, b 7→ 2, c 7→ 3} ⊕ {c 7→ 33, d 7→ 44}
    = {a 7→ 1, b 7→ 2, c 7→ 33, d 7→ 44}

We can now give the meaning for the main two quantifiers as:

    [[∀ x : T • P ]]ρ =b for all values k in T we have [[P ]]ρ⊕{x 7→k } = True

“∀ x : T • P is true if P is true for all x : T ”

    [[∃ x : T • P ]]ρ =b for at least one value k in T we have [[P ]]ρ⊕{x 7→k } = True

“∃ x : T • P is true if P is true for at least one x : T ”
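As an aside (not from the notes), Python dictionaries give a direct model of environments and ⊕: the `|` union operator (Python 3.9+) makes right-hand bindings take precedence, exactly as ⊕ requires.

```python
# Environment override ρ ⊕ ρ0 as dict union: the right operand wins.
rho  = {"a": 1, "b": 2, "c": 3}
rho0 = {"c": 33, "d": 44}

assert rho | rho0 == {"a": 1, "b": 2, "c": 33, "d": 44}
assert set(rho) == {"a", "b", "c"}      # dom ρ: the variables it mentions
```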
We can now give the meaning for the other two quantifiers as:

    [[∃1 x : T • P ]]ρ =b for exactly one k in T we have [[P ]]ρ⊕{x 7→k } = True

“∃1 x : T • P is true if P is true for exactly one x : T ”

    [[[P ]]]ρ =b for any ρ0 such that dom ρ0 = dom ρ we have [[P ]]ρ0 = True

“[P ] is true if P is true for all values of all its variables”

Example: let T = {1, 3, 5} and evaluate [[∀ x : T • x < y ]]{x 7→2,y 7→5} .
For all values k in {1, 3, 5}:
    k = 1: we have [[x < y ]]{x 7→1,y 7→5} = 1 < 5 = True
    k = 3: we have [[x < y ]]{x 7→3,y 7→5} = 3 < 5 = True
    k = 5: we have [[x < y ]]{x 7→5,y 7→5} = 5 < 5 = False
Overall result: False.
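For finite types this semantics is directly executable; here is a hedged Python sketch (the helper names are mine) of the ∀ clause, replayed on the example above.

```python
def forall(T, x, rho, P):
    # [[∀ x : T • P]]ρ : P must hold in ρ ⊕ {x ↦ k} for every k in T
    return all(P(rho | {x: k}) for k in T)

T = {1, 3, 5}
lt = lambda env: env["x"] < env["y"]

# fails at k = 5, since 5 < 5 is False — overall result False, as above
assert forall(T, "x", {"x": 2, "y": 5}, lt) is False
assert forall(T, "x", {"x": 2, "y": 9}, lt) is True
```

Note that the outer binding x 7→ 2 is simply overridden, never consulted: the quantifier is a statement about the bound x only.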
Evaluating Quantifiers
I To evaluate a quantifier, we may need to generate all possible environments ρ0 involving the bound variable — tricky if the type has an infinite number of values.
I In general, in order to reason about quantifiers we need to use Axioms and Inference Rules.
I We can view quantifiers as (approximate) shorthand for repeated logical-and/or:
    ∀ n : N • P (n) = P (0) ∧ P (1) ∧ P (2) ∧ P (3) ∧ . . .
    ∃ n : N • P (n) = P (0) ∨ P (1) ∨ P (2) ∨ P (3) ∨ . . .
I The equality is exact if the type quantified over is finite:
    ∀ b : B • P (b) = P (False) ∧ P (True)
    ∃ b : B • P (b) = P (False) ∨ P (True)

I Consider — True, or False?
    (x < 3 ∧ x > 5) ⇒ x = x + 1
I Assuming all variables are natural numbers, is the statement
    ∀x • ∃x • x < y
  equivalent to ∀ x • x < y , or ∃ z • z < y , or neither?
  (What precisely do the three statements actually mean?)
I What about ∃ z • z < x ?
I This is a statement about x , not y , or z .
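The finite-type expansions above can be checked mechanically; a small sketch (helper names are mine), with predicates as boolean-valued Python functions:

```python
def forall_bool(P):
    # ∀ b : B • P(b)  =  P(False) ∧ P(True)
    return P(False) and P(True)

def exists_bool(P):
    # ∃ b : B • P(b)  =  P(False) ∨ P(True)
    return P(False) or P(True)

assert forall_bool(lambda b: b or not b)     # excluded middle: a tautology
assert exists_bool(lambda b: b)              # witnessed by b = True
assert not forall_bool(lambda b: b)          # fails at b = False
```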
I Definition:
    ∃1 x • P (x )
    =b (∃ x • P (x ))
       ∧ (∀ x • ∀ y • (P (x ) ∧ P (y )) ⇒ x = y )
I There exists only one x satisfying P , iff
  there is at least one x satisfying P ,
  and for all x and y ,
  if both x and y satisfy P then it must be the case that x = y .
I Exercise — simplify the following predicate:
    ∃x • x = y + z ∧ x < 2 ∗ y
Manipulating Free and Bound Variables
I The key operation we use to manipulate free and bound variables is that of substitution.
I This takes two forms:
  I α-Substitution, or Renaming of Bound Variables
  I Replacing all Free Occurrences of a Variable by an Expression.

Replacing Free Variables by Expressions (I)
I We often want to replace free occurrences of variables by expressions (of the same type).
I We denote the replacement by F of all occurrences of free variable a in expression E by the notation E [F /a].
I It is defined if none of the free variables in F have binding occurrences in E .
I We change our substitution notation: instead of E [x := F ], we write
    E [F /x ]
  to denote the substitution of F for every free occurrence of x in E .
I This notation extends to cover multiple simultaneous substitutions:
    E [F1 , F2 , . . . , Fn /x1 , x2 , . . . , xn ]
I Example:
    (∀ x • x ≥ 3 ⇒ y = 0 ∧ ∃ y • y > 4 ∧ z = 10) [a + 5/y ]
    = ∀ x • x ≥ 3 ⇒ a + 5 = 0 ∧ ∃ y • y > 4 ∧ z = 10
  Note that only the free occurrences of y — those outside the inner ∃ y — were replaced.
I We see that x becomes bound !
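Capture-avoidance is the delicate part of E [F /x ]. A toy Python sketch over a tuple-encoded syntax (the encoding is mine, and it assumes the no-capture side condition rather than renaming):

```python
# Expressions: ("var", v) | ("op", name, e1, e2) | ("forall", v, body)
def subst(E, F, x):
    """E[F/x]: replace free occurrences of x in E by F.
    Assumes the bound variable v is not free in F (no capture)."""
    kind = E[0]
    if kind == "var":
        return F if E[1] == x else E
    if kind == "op":
        return ("op", E[1], subst(E[2], F, x), subst(E[3], F, x))
    if kind == "forall":
        v, body = E[1], E[2]
        if v == x:
            return E                      # x is bound here: no free x inside
        return ("forall", v, subst(body, F, x))
    return E                              # constants

E = ("op", "+", ("var", "y"), ("forall", "y", ("var", "y")))
# only the free y is replaced; the y bound under the ∀ is untouched
assert subst(E, ("var", "a"), "y") == \
       ("op", "+", ("var", "a"), ("forall", "y", ("var", "y")))
```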
∀x : T • P s:T
P [s/x ]
I Transitivity of equality:
    P = Q,  Q = R
    -------------
        P = R
  If P equals Q and Q equals R, then P equals R.
  This key property of equality allows us to “linearise” our proofs.
I A related rule:
    P ,  P = Q
    ----------
        Q
  If P is true, and we know that P = Q then, we can deduce that Q must be true also. Simple, really !
  Not actually mentioned in Gries & Schneider (1993), but used in that text!
I Assume we know that A = B, B = C , C = D and D = E , and we want to prove that A = E .
I We get branching — we can use the first two equalities to give us A = C , and the second two to get C = E , and then combine these.
I A single linear chain A = B = C = D = E is much better !
I The Substitution inference rule says: if predicate P is true, then so is P with all instances of any boolean variable (r ) replaced by an arbitrary predicate Q (provided r does not occur inside any quantifier binding any free variable of Q — no capture).
I Consider the consequent of Substitution: P [r := Q]
I We do not expect P to appear with an explicit substitution (remember, [r := Q] is not in the language).
I Instead P [r := Q] denotes the case where:
  I P contains an occurrence of Q within it.
  I We use variable r to refer to that occurrence of Q.
I Example:
    p ∨ ¬ p                              [p a boolean variable]
    (p ∨ ¬ p)[p := (x 6 y + z )]
I a.k.a. “Substitution of equals for equals”:
    F = G
    ------------------------        [v a variable]
    E [v := F ] = E [v := G]
I If F = G then, in any predicate/expression E mentioning F , we can replace that mention by G.
I Here, E , F and G stand for arbitrary predicates and/or expressions.
I Example:
    x + x = 2 ∗ x                                        x + x = 2 ∗ x
    -----------------------------------------    or     ---------------------------
    (y −v )[v :=x +x ] = (y −v )[v :=2∗x ]               y −(x +x ) = y −(2∗x )

I Theorems are predicates that are logical consequences of the axioms and inference rules.
I A predicate is shown to be a theorem by providing a proof .
I A proof is either:
  I an axiom
  I or a consequence of an inference rule whose premises are axioms or pre-existing theorems.
I In practice, we chain these together using the linear form of the Transitivity inference rule.
I Contradiction: Prove P by proving ¬ P ⇒ false
                   ¬ P ⇒ false
    Contradiction: -----------
                        P
I Contrapositive: Prove P ⇒ Q by proving ¬ Q ⇒ ¬ P
                    ¬ Q ⇒ ¬ P
    Contrapositive: ----------
                      P ⇒ Q
I We can mix these in Reduction proofs (here that P ⇒ R):
      P
    ⇒ “ why P ⇒ Q ”
      Q
    ≡ “ why Q ≡ R ”
      R
I We call this ⇒-Reduction
I Also possible is <-Reduction, using =, < and 6.
CS4003 Formal Methods
Example: x + y > 2 ⇒ (x > 1 ∨ y > 1) (G&S)
I We apply Contrapositive, assuming x < 1 and y < 1, to get:
      x + y
    < “ assumption: x < 1, y < 1, arithmetic ”
      1 + 1
    = “ arithmetic ”
      2
I Quantifier laws:
    hh∀-1ptii        (∀ x • x = E ⇒ P ) ≡ P [E /x ],       x 6∈ E
    hh∃-1ptii        (∃ x • x = E ∧ P ) ≡ P [E /x ],       x 6∈ E
    hh∀-distrii      (∀ x • P ) ∧ (∀ x • Q) ≡ (∀ x • P ∧ Q)
    hh∃-distrii      (∃ x • P ) ∨ (∃ x • Q) ≡ (∃ x • P ∨ Q)
    hh∀-swapii       (∀ x • ∀ y • P ) ≡ (∀ y • ∀ x • P )
    hh∃-swapii       (∃ x • ∃ y • P ) ≡ (∃ y • ∃ x • P )
    hh∀-nestii       (∀ x , y • P ) ≡ (∀ x • ∀ y • P )
    hh∃-nestii       (∃ x , y • P ) ≡ (∃ x • ∃ y • P )
    hh∀-renameii     (∀ x • P ) ≡ (∀ y • P [y /x ]),       y 6∈ P
    hh∃-renameii     (∃ x • P ) ≡ (∃ y • P [y /x ]),       y 6∈ P
    hh∨-∀-distrii    P ∨ (∀ x • Q) ≡ (∀ x • P ∨ Q),       x 6∈ P
    hhgen-deMorganii (∃ x • P ) ≡ ¬ (∀ x • ¬ P )
I We have already covered the rename laws:
    hh∀-renameii     (∀ x • P ) ≡ (∀ y • P [y /x ]),       y 6∈ P
    hh∃-renameii     (∃ x • P ) ≡ (∃ y • P [y /x ]),       y 6∈ P
I We can distribute a quantifier through the “other” binary propositional operator, under certain circumstances:
    hh∨-∀-distrii    P ∨ (∀ x • Q) ≡ (∀ x • P ∨ Q),       x 6∈ P
    hh∧-∃-distrii    P ∧ (∃ x • Q) ≡ (∃ x • P ∧ Q),       x 6∈ P
I de Morgan’s laws apply to quantifiers:
    hhgen-deMorganii (∃ x • P ) ≡ ¬ (∀ x • ¬ P )
                     ¬ (∃ x • ¬ P ) ≡ (∀ x • P )
                     ¬ (∃ x • P ) ≡ (∀ x • ¬ P )
                     (∃ x • ¬ P ) ≡ ¬ (∀ x • P )
I The first law above is an axiom; the rest are theorems.
I Proving the three theorems above involves the axiom, and the law of involution (voluntary exercises !):
    hh¬-involii      ¬ ¬ p ≡ p

I Goal: P ∧ (∃ x • Q) ≡ (∃ x • P ∧ Q), x 6∈ P
I Strategy: Reduce left-hand side (lhs) to right-hand side (rhs)
I Proof:
      P ∧ (∃ x • Q)
    = “ hhgen-deMorganii ”
      P ∧ ¬ (∀ x • ¬ Q)
    = “ hh¬-involii ”
      ¬ ¬ P ∧ ¬ (∀ x • ¬ Q)
    = “ hhdeMorganii ”
      ¬ (¬ P ∨ (∀ x • ¬ Q))
    = “ hh∨-∀-distrii, noting that if x 6∈ P , then x 6∈ ¬ P ”
      ¬ (∀ x • ¬ P ∨ ¬ Q)
I Continued; repeat last line above:
      ¬ (∀ x • ¬ P ∨ ¬ Q)
    = “ hhdeMorganii ”
      ¬ (∀ x • ¬ (P ∧ Q))
    = “ hhgen-deMorganii ”
      ∃x • P ∧ Q
I Proof complete — from here on we shall adopt this pattern of stating:
  I Goal — what is to be proved
  I Strategy — our top-level approach
  I Proof (Sections) — the gory details

I The “one-point” laws are very important:
    hh∀-1ptii (∀ x • x = E ⇒ P ) ≡ P [E /x ],       x 6∈ E
    hh∃-1ptii (∃ x • x = E ∧ P ) ≡ P [E /x ],       x 6∈ E
I These allow us to get rid of quantifiers.
I The key intuition behind them is that having the condition x = E in the quantifier allows us to focus on that specific value of x , rather than considering them all.
I Remember “Logic Example V” in Class 2 ?
I Law hh∃-1ptii will play a key role in the UTP theories we will explore later on.
Proof using 1-pt (many, many times …)
I Goal:
    (∃ t1 , t2 , x1 , x2 , y1 , y2 •
       t1 = x ∧ x1 = x ∧ y1 = y ∧
       t2 = t1 ∧ x2 = y1 ∧ y2 = y1 ∧
       x 0 = x2 ∧ y 0 = t2 )
    ≡ x0 = y ∧ y0 = x
I Where did this strange example come from ?
  Remember { int t; t=x; x=y; y=t } ?
I Strategy: reduce lhs to rhs
I Proof:
      ∃ t1 , t2 , x1 , x2 , y1 , y2 •
        t1 = x ∧ x1 = x ∧ y1 = y ∧
        t2 = t1 ∧ x2 = y1 ∧ y2 = y1 ∧
        x 0 = x2 ∧ y 0 = t2
    = “ hh∃-1ptii, t1 , x1 , y1 6∈ x , y , t2 , x2 , y2 , substitute ”
      ∃ t2 , x2 , y2 •
        t2 = x ∧ x2 = y ∧ y2 = y ∧
        x 0 = x2 ∧ y 0 = t2
    = “ hh∃-1ptii, t2 , x2 , y2 6∈ x , y , substitute ”
      x0 = y ∧ y0 = x
I Example: skip  a > b  ( t := a ; a := b ; b := t )
  The after-value of a is the maximum of the before-values of both a and b, whilst the after-value of b is the minimum.
I Using environments:
    [[skip  a > b  ( t := a ; a := b ; b := t )]]ρ
    = ρ,                                            ρ(a) > ρ(b)
    = ρ ⊕ {t 7→ ρ(a), a 7→ ρ(b), b 7→ ρ(a)},        ρ(a) 6 ρ(b)
I Example: if the before-value of n is negative, then the after-values of f and x are 1 and n respectively; otherwise:
  I The after-value of f is the factorial of the before-value of n
  I The after-value of n is the same as before
  I The after-value of x is 1, except if the before-value of n was 0, in which case so is x ’s after-value.
I With environments:
    [[f := 1 ; x := n ; (x > 1) ~ ( f := f ∗ x ; x := x − 1 )]]ρ
    = ρ ⊕ {f 7→ 1, x 7→ ρ(n)},       ρ(n) ∈ {0, 1}
    = ρ ⊕ {f 7→ ρ(n)!, x 7→ 1},      ρ(n) > 1
    = ρ ⊕ {f 7→ 1, x 7→ ρ(n)},       ρ(n) < 0
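This environment semantics is directly executable for concrete ρ; a Python sketch (not part of the notes; dicts as environments, `|` as ⊕):

```python
def fact_prog(rho):
    # [[f := 1 ; x := n ; (x > 1) ~ (f := f*x ; x := x-1)]]ρ
    rho = rho | {"f": 1, "x": rho["n"]}          # f := 1 ; x := n
    while rho["x"] > 1:                          # (x > 1) ~ (...)
        rho = rho | {"f": rho["f"] * rho["x"], "x": rho["x"] - 1}
    return rho

assert fact_prog({"n": 5}) == {"n": 5, "f": 120, "x": 1}   # ρ(n) > 1 case
assert fact_prog({"n": 0}) == {"n": 0, "f": 1, "x": 0}     # ρ(n) ∈ {0,1} case
assert fact_prog({"n": -2}) == {"n": -2, "f": 1, "x": -2}  # ρ(n) < 0 case
```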
Too much ρ ! Programs are Predicates
I Given A = {S, S 0 }:
    hhskip-def ii   skipA =b S 0 = S
    hh:=-def ii     x :=A e =b x 0 = e ∧ S 0\x 0 = S\x
I Alphabet constraints:
    inα(P  c  Q) = αc = inαP = inαQ
    out α(P  c  Q) = out αP = out αQ
I Program c ~ P checks c, and if true, executes P , and then repeats the whole process.
I We won’t define it yet; instead we give a law that describes how a while-loop can be “unrolled” once:
    hh~-unrollii    c ~ P = (P ; c ~ P )  c  skip
  If c is False, we skip; otherwise we do P followed by the whole loop (c ~ P ) once more.
I Alphabet constraints:
    inα(c ~ P ) = αc = inαP
    out α(c ~ P ) = out αP = (inαP )0
I Program: skip  a > b  ( t := a ; a := b ; b := t ), with alphabet A = {a, a 0 , b, b 0 , t , t 0 }
I Meaning:
      skipA  a > b  ( t :=A a ; a :=A b ; b :=A t )
    = “ hh-def ii ”
      a > b ∧ skipA
      ∨ ¬ (a > b) ∧ ( t :=A a ; a :=A b ; b :=A t )
    = “ hhskip-def ii, hh:=-def ii, hh; -def ii, and hh∃-1ptii ”
      a > b ∧ a 0 = a ∧ b 0 = b ∧ t 0 = t
      ∨ a 6 b ∧ a 0 = b ∧ b 0 = a ∧ t 0 = a
I We see that the net effect is like the simultaneous assignment t , x := x , y :
  t now has the value of x , and x has the value of y .
      f := 1 ; x := n ; (x > 1) ~ (f := f ∗ x ; x := x − 1)
    = “ hh:=-def ii ”
      n 0 = n ∧ f 0 = 1 ∧ x 0 = x
      ;
      x := n ; (x > 1) ~ (f := f ∗ x ; x := x − 1)
x := x + y ; y := x − y ; x := x − y
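This three-assignment sequence swaps x and y without a temporary; in Python (where simultaneous assignment is also native) both versions can be checked:

```python
x, y = 7, 3
x = x + y          # x := x + y
y = x - y          # y now holds the original x
x = x - y          # x now holds the original y
assert (x, y) == (3, 7)

# simultaneous assignment x, y := y, x — the whole rhs is evaluated first
x, y = y, x
assert (x, y) == (7, 3)
```

Python integers are unbounded, so the additive trick is safe here; in fixed-width languages the intermediate x + y can overflow.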
I We have used some “laws of substitution” in our proofs.
I These too have a rigorous basis.
I First, we promote substitution to be part of our predicate language:
    Pred ::= . . . | P [e1 , . . . , en /v1 , . . . , vn ]
  We assume that substitution has highest precedence (binds tighter) than all other constructs.
I Next we define its effect on predicates:
    k [e/x ] =b k
    v [e/x ] =b v ,                                   v 6= x
    x [e/x ] =b e
    (e1 + e2 )[e/x ] =b e1 [e/x ] + e2 [e/x ]
    (¬ P )[e/x ] =b ¬ P [e/x ]
    (P ∧ Q)[e/x ] =b P [e/x ] ∧ Q[e/x ]
    (∀ x • P )[e/x ] =b ∀ x • P
    (∀ v • P )[e/x ] =b ∀ v • P [e/x ],               v 6= x , v 6∈ e
    (∀ v • P )[e/x ] =b ∀ w • (P [w /v ])[e/x ],      v 6= x , v ∈ e, w 6∈ P , e, x
  The constructs not mentioned above follow the same pattern.
I We shall now look at the last four lines in more detail.
    (∀ x • P )[e/x ] =b ∀ x • P
  The simplest case: x is simply not free in ∀ x • P , so nothing changes.
    (∀ v • P )[e/x ] =b ∀ v • P [e/x ],               v 6= x , v 6∈ e
  We are not substituting for v , and v does not occur in e, so there is no possibility of name capture. We simply recurse to the body predicate P .
(provided y 6 e1 , . . . , en )
      P ; (Q ; R)
    = “ hh; -def ii, {S, S 0 } = out αQ ∪ inαR ”
      P ; (∃ Sm • Q[Sm /S 0 ] ∧ R[Sm /S])
    = “ hh; -def ii, {S, S 0 } = out αP ∪ inαQ ”
      ∃ Sn • P [Sn /S 0 ] ∧ (∃ Sm • Q[Sm /S 0 ] ∧ R[Sm /S])[Sn /S]
    = “ defn. of substitution, S 6= Sm ”
      ∃ Sn • P [Sn /S 0 ] ∧ ∃ Sm • Q[Sm /S 0 ][Sn /S] ∧ R[Sm /S][Sn /S]
    = “ hh∧-∃-distrii, Sm 6∈ P [Sn /S 0 ] ”
      ∃ Sn , Sm • P [Sn /S 0 ] ∧ Q[Sm /S 0 ][Sn /S] ∧ R[Sm /S][Sn /S]
    = “ hhsubst-seqii, S 6∈ Sm ”
      ∃ Sn , Sm • P [Sn /S 0 ] ∧ Q[Sm , Sn /S 0 , S] ∧ R[Sm /S][Sn /S]
    = “ hhsubst-compii, Sm [Sn /S] = Sm ”
      ∃ Sn , Sm • P [Sn /S 0 ] ∧ Q[Sm , Sn /S 0 , S] ∧ R[Sm /S]

      (P ; Q) ; R
    = “ hh; -def ii, {S, S 0 } = out αP ∪ inαQ ”
      (∃ Sn • P [Sn /S 0 ] ∧ Q[Sn /S]) ; R
    = “ hh; -def ii, {S, S 0 } = out αQ ∪ inαR ”
      ∃ Sm • (∃ Sn • P [Sn /S 0 ] ∧ Q[Sn /S])[Sm /S 0 ] ∧ R[Sm /S]
    = “ defn. of substitution, S 0 6= Sn ”
      ∃ Sm • (∃ Sn • P [Sn /S 0 ][Sm /S 0 ] ∧ Q[Sn /S][Sm /S 0 ]) ∧ R[Sm /S]
    = “ hh∧-∃-distrii, Sn 6∈ R[Sm /S] ”
      ∃ Sm , Sn • P [Sn /S 0 ][Sm /S 0 ] ∧ Q[Sn /S][Sm /S 0 ] ∧ R[Sm /S]
    = “ hhsubst-seqii, S 0 6∈ Sn ”
      ∃ Sm , Sn • P [Sn /S 0 ][Sm /S 0 ] ∧ Q[Sn , Sm /S, S 0 ] ∧ R[Sm /S]
    = “ hhsubst-compii, Sn [Sm /S 0 ] = Sn ”
      ∃ Sm , Sn • P [Sn /S 0 ] ∧ Q[Sn , Sm /S, S 0 ] ∧ R[Sm /S]

“ lhs=rhs, with minor re-orderings ”
I We might have been tempted to apply hh; -def ii first, “do” the substitution, and then use hh:=-def ii:
      x := e ; x := f
    = “ hh; -def ii ”
      ∃ Sm • (x := e)[Sm /S 0 ] ∧ (x := f )[Sm /S]
    = “ doing substitution into assignment ”
      ∃ Sm • x := e ∧ xm := f [Sm /S]
    = “ hh:=-def ii, twice ”
      ∃ Sm • x 0 = e ∧ S 0\x 0 = S\x ∧ xm0 = f [Sm /S] ∧ S 0\x 0 = S\x
I Sequential composition is designed to work with predicates using x , S for before-variables, and x 0 and S 0 for after-variables.
I In x := e, the variable x stands for the program variable, and not its initial value.
I We cannot do substitutions safely until we have expanded its definition.
I In fact, we cannot determine what its free variables are until it has been expanded.
Non-substitutable Predicates
    skip        P ; Q        c ∗ P        x := x + y ; y := x − y ; x := x − y
I Is simultaneous assignment a programming language construct ?
I Depends on the language:
  I in languages like C, Java, it is not allowed
  I in Handel-C it is allowed, as it targets hardware and so we have real parallelism
I It does not matter !
  I It is a predicate with a sensible meaning
  I It is convenient for certain proofs
  I It can describe outcomes concisely that are not possible using only single assignments, e.g. x , y := y , x .
I Not all predicate language extensions have to be “code”.

I If programs are predicates, then we can join them up using predicate notation.
  I e.g. prog1 ∧ prog2
  I e.g. prog1 ∨ prog2
  I e.g. prog1 ⇒ prog2
I We can also mix them with non-program predicates
  I e.g. prog ∧ pred
  I e.g. prog ∨ pred
  I e.g. prog ⇒ pred
I Do these make sense? If so, how? Are any of these useful?
I Consider the following:
    (x := 2) ∧ (x := 3)
    (x := 2) ∧ (y := 3)
    (x := 2) ∧ x = 2
    (x := 2) ∧ y = 3
I Examining (x := 2) ∧ (x := 3):
      (x := 2) ∧ (x := 3)
    = “ hh:=-def ii, twice ”
      x 0 = 2 ∧ S 0\x 0 = S\x ∧ x 0 = 3 ∧ S 0\x 0 = S\x
    = “ 2 6= 3 ”
      false
Examining (x := 2) ∧ (y := 3) Examining (x := 2) ∧ x = 2
I
(x := 2) ∧ (y := 3)
= “ hh:=-def ii, twice ” I
x = 2 ∧ y 0 = y ∧ S 0\x 0 ,y 0 = S\x ,y
0 (x := 2) ∧ x = 2
∧ x 0 = x ∧ y 0 = 3 ∧ S 0\x 0 ,y 0 = S\x ,y = “ hh:=-def ii ”
= “ = is transitive, re-arrange ” x = 2 ∧ S 0\x 0 = S\x ∧ x = 2
0
x = x ∧ y 0 = y ∧ S 0\x 0 ,y 0 = S\x ,y
0
I
(x := 2) ∨ (x := 3)
= “ hh:=-def ii, twice ”
I Consider the following: x = 2 ∧ S 0\x 0 = S\x
0
∨ x 0 = 3 ∧ S 0\x 0 = S\x
(x := 2) ∨ (x := 3)
= “ hh∧-∨-distr ii ”
(x := 2) ∨ (y := 3)
(x = 2 ∨ x 0 = 3) ∧ S 0\x 0 = S\x
0
(x := 2) ∨ x = 2
I Variable x ends up having either value 2 or 3, and all other
variables are unchanged.
I The choice between 2 or 3 is arbitrary — nothing here
states how that choice is made.
I
I
(x := 2) ∨ (y := 3) (x := 2) ∨ x = 2)
= “ hh:=-def ii, twice ” = “ hh:=-def ii, twice ”
x = 2 ∧ y 0 = y ∧ S 0\x 0 ,y 0 = S\x ,y
0
x = 2 ∧ S 0\x 0 = S\x ∨ x = 2
0
∨ x 0 = x ∧ y 0 = 3 ∧ S 0\x 0 ,y 0 = S\x ,y
= “ hh∧-∨-distr ii ” I Either x becomes 2, or we started with x equal to 2 and
(x = 2 ∧ y 0 = y ∨ x 0 = x ∧ y 0 = 3) ∧ S 0\x 0 ,y 0 = S\x ,y
0
then anything could have happened !
I if x is not equal to 2 to start, then it equals 2 at the end and
no other variable changes
I Either x ends up having value 2, or y ends up equal to 3, I if x equals 2 at the beginning, then the final values of all
and all other variables are unchanged. variables are arbitrary.
I The choice between changing x or y is arbitrary — nothing
here states how that choice is made.
¬ (x := e)
I Disjoining two programs (prog1 ∨ prog2 ) denotes an
arbitrary choice between the two behaviours.
I Predicate I Let’s calculate:
prog ∨ ~x = ~e ¬ (x := e)
tells us that we get: = “ hh:=-def ii ”
I the behaviour of prog, ¬ (x 0 = e ∧ S 0\x 0 = S\x )
if initial values of ~x are not all equal to ~e ; = “ hhdeMorganii ”
I arbitrary behaviour, if they are all equal. x 6= e ∨ S 0\x 0 6= S\x
0
(x := 2) ⇒ (x := 3)
= “ hh:=-def ii, twice ”
I Consider the following: x 0 = 2 ∧ S 0\x 0 = S\x ⇒ x 0 = 3 ∧ S 0\x 0 = S\x
= “ hh⇒-def ii ”
(x := 2) ⇒ (x := 3) ¬ (x 0 = 2 ∧ S 0\x 0 = S\x ) ∨ x 0 = 3 ∧ S 0\x 0 = S\x
(x := 2) ⇒ (y := 3) = “ hhdeMorganii ”
(x := 2) ⇒ x 0 = 2 x 6= 2 ∨ S 0 6= S ∨ x 0 = 3 ∧ S 0\x 0 = S\x
0
(x := 2) ⇒ x 0 ∈ {1, . . . , 10} = “ hh∨-∧-absorbii, with p matched to S 0 6= S ”
x 6= 2 ∨ S 0 6= S ∨ x 0 = 3
0
Examining (x := 2) ⇒ (y := 3) Examining (x := 2) ⇒ x 0 = 2
I
I
(x := 2) ⇒ (y := 3) (x := 2) ⇒ x 0 = 2)
= “ hh:=-def ii, twice ” = “ hh:=-def ii ”
x = 2 ∧ y 0 = y ∧ S 0\x 0 ,y 0 = S\x ,y
0 x = 2 ∧ S 0\x 0 = S\x ⇒ x 0 = 2
0
= “ hhdeMorganii ” = “ hhexcluded-middleii ”
x 6= 2 ∨ y 0 6= y ∨ S 0 6= S
0 true ∨ S 0\x 0 6= S\x
∨ x 0 = x ∧ y 0 = 3 ∧ S 0\x 0 ,y 0 = S\x ,y = “ hh∨-zeroii ”
= “ hh∨-∧-absorbii, with p matched to S 0\x 0 ,y 0 6= S\x ,y ” true
x 6= 2 ∨ y 0 6= y ∨ S 0\x 0 ,y 0 6= S\x ,y ∨ x 0 = x ∧ y 0 = 3
0
I This is always true: if we assign 2 to x , then the final value
of x is 2.
I Complicated !
Examining (x := 2) ⇒ x 0 ∈ {1, . . . , 10} Programs and Implication—Comment
I
(x := 2) ⇒ x 0 ∈ {1, . . . , 10})
= “ hh:=-def ii ” I Predicate prog1 ⇒ prog2 is true if
x = 2 ∧ S 0\x 0 = S\x ⇒ x 0 ∈ {1, . . . , 10}
0 I the before-after relationship described by prog1 does not
= “ hh⇒-def ii, hhdeMorganii ” hold, or …
x 0 6= 2 ∨ S 0\x 0 6= S\x ∨ x 0 ∈ {1, . . . , 10} I prog1 holds, and the behaviour described by prog2
somehow includes the behaviour of prog1
= “ i ∈ {1, . . . , k , . . . , n} ≡ i = k ∨ i ∈ {1, . . . , k , . . . , n} ”
x 0 6= 2 ∨ S 0\x 0 6= S\x ∨ x 0 = 2 ∨ x 0 ∈ {1, . . . , 10}
I Predicate prog ⇒ pred is true if
= “ hhexcluded-middleii ”
I the before-after relationship described by prog does not
hold, or …
true ∨ S 0\x 0 6= S\x ∨ x 0 ∈ {1, . . . , 10} I prog holds, and the situation described by pred is covered
= “ hh∨-zeroii ” by the behaviour of prog
true
Mini-Exercise 3
Q3.1 Prove
(∀ x • P ) ≡ ¬ (∃ x • ¬ P )
using only the following axioms:
hh gen-deMorganii (∃ x • P ) ≡ ¬ (∀ x • ¬ P )
hh¬ -involii ¬¬p≡p
Q3.2 Prove
hh ; -skip-unitii P ; skip = P
[[∃ x • P ]]ρ =
b ...
F =G
E [v := F ] = E [v := G]
I The notation x :[. . .] is called a (specification) “frame”. The outer x : says, “modifying x only.”
I Not to be confused with [P ], the universal closure of P , or x : T , asserting that x has type T , or P [e/x ] where e replaces free x in P .
I Seems OK.
I Reminder: notation [P ] asserts that P is true for all values of its variables:
    [P ] ≡ ∀ x1 , . . . , xn • P ,       FV P ⊆ {x1 , . . . , xn }
I It satisfies the following laws:
    hh[]-trueii       [true] ≡ true
    hh[]-falseii      [false] ≡ false
    hh[]-∧-distrii    [P ∧ Q] ≡ [P ] ∧ [Q]
    hh[]-idemii       [[P ]] ≡ [P ]
    hh[]-∀-absorbii   [∀ x1 , . . . , xn • P ] ≡ [P ]
    hh[]-1ptii        [x = e ⇒ P ] ≡ [P [e/x ]]
    hh[]-splitii      [P ] ≡ [P ] ∧ P [~e /~x ],       αP = {~x }
I It does not distribute over logical-or: [P ∨ Q] 6≡ [P ] ∨ [Q]
I We make refinement part of the predicate language:
    Pred ::= . . . | S v P
    hhv-def ii        S v P =b [P ⇒ S]
I Consider x :[P ] v x := e, assuming x and y in scope:
      [x := e ⇒ x :[P ]]
    = “ hh:=-def ii, hhframe-def ii ”
      [x 0 = e ∧ y 0 = y ⇒ P ∧ y 0 = y ]
    = “ hhshuntingii ”
      [x 0 = e ⇒ (y 0 = y ⇒ P ∧ y 0 = y )]
    = “ (A ⇒ B ∧ A) ≡ (A ⇒ B) ”
      [x 0 = e ⇒ (y 0 = y ⇒ P )]
    = “ hh[]-1ptii ”
      [y 0 = y ⇒ P [e/x 0 ]]
    = “ hh[]-1ptii ”
      [ P [e/x 0 ][y /y 0 ] ]
I We also have:
    hh=-∧-substii     x = e ∧ P ≡ x = e ∧ P [e/x ]
[x ∈ {1, . . . , 10} ∧ 99 = y ]
= “ hh[]-splitii ”
[x ∈ {1, . . . , 10} ∧ 99 = y ] I Proof is by induction over the structure of P , so we omit it
∧ (x ∈ {1, . . . , 10} ∧ 99 = y )[0, 0/x , y ] (consider it an axiom).
= “ substitution ” I The law still holds if we only substitute e for x in part of P ,
[x ∈ {1, . . . , 10} ∧ 99 = y ] ∧ 0 ∈ {1, . . . , 10} ∧ 99 = 0
rather than all of it. We also have the following theorem:
= “ set-theory, arithmetic, logic, hand-waving ”
false =-⇒-substii
hh (x = e ∧ P ⇒ Q) ≡ (x = e ∧ P ⇒ Q[e/x ])
= “ hh[]-∧-distr ii ” = “ hhshuntingii(2) ”
x0= x ∧ y0 = y ⇒ x ≥ y
x ≥ y ∧ skip
⇒ x , y :[x , y := max (x , y ), min(x , y )] ⇒ x 0 = max (x , y ) ∧ y 0 = min(x , y )
∧ ∧
x0 = y ∧ y0 = x ⇒ x < y
x < y ∧ x , y := y , x
⇒ x , y :[x , y := max (x , y ), min(x , y )] ⇒ x 0 = max (x , y ) ∧ y 0 = min(x , y )
= “ hhskip-def ii, hhframe-def ii(2), hhsim-:=-defii ” = “ hh[]-1pt ii(2), replacing x 0 and y 0 ”
x ≥ y ∧ x0 = x ∧ y0 = y x0 = x ∧ y0 = y ⇒
x , v 0 are regular observation variables, P is a predicate variable, Here, the intended meaning of P (b) is that P is a predicate that
e is an expression variable, and v is a program variable. (possibly) mentions b, as a free variable.
In general these match anything of the corresponding type. Then P (True) is P with all such free b replaced by True.
The exception is when such a variable has been defined It is a convenient shorthand, more formally written as
elsewhere to mean something specific. In such a case a
variable can only match itself. (∀ b : B • P ) ≡ P [False/b] ∧ P [True/b]
So far the only example we have seen of this is when we have
defined a specific alphabet. S, S 0 are list-variables that match
themselves, or the alphabet, if known.
Meet and Join (a.k.a. Min and Max)
    (x u y = x ) ≡ (x 4 y ) ≡ (x t y = y )
I Meets and joins exist for all our examples so far:
  I (N, 6), meet is minimum, join is maximum
  I (P T , ⊆), meet is intersection, join is union.
  I (Pred , ⇒), meet is logical-and, join is logical-or.
I A p.o. in which meets and joins exist for all pairs of elements is called a Lattice.

Meet and Join in Refinement
I From these we can deduce the following laws:
    hhv-joinii (P v Q) ≡ (P t Q ≡ Q)
    hhv-meet ii (P v Q) ≡ (P u Q ≡ P )
x0 = 3 x0 = 6 x := 3 x := 6
x0 = 3 ∨ x0 = 6 x := 3 u x := 6 v
x 0 ∈ {1 . . . 10} x : [x 0 ∈ {1 . . . 10}]
hhv-def ii SvP =
b [S ⇐ P ]
x := 3 u x := 6 v
SvP ≡ [P ⇒ S]
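Since S v P =b [P ⇒ S], refinement over a small finite state space can be checked by brute force. A hedged Python sketch (all names are mine; the state is a single variable ranging over a toy domain):

```python
states = range(8)

def refines(S, P):
    # S ⊑ P  iff  [P ⇒ S]: every before/after pair allowed by P is allowed by S
    return all(S(x, x2) for x in states for x2 in states if P(x, x2))

spec   = lambda x, x2: x2 in {3, 6}               # x' ∈ {3, 6}
prog1  = lambda x, x2: x2 == 3                    # x := 3
prog2  = lambda x, x2: x2 == 6                    # x := 6
choice = lambda x, x2: prog1(x, x2) or prog2(x, x2)  # x := 3 ⊓ x := 6

assert refines(spec, prog1) and refines(spec, prog2)
assert refines(spec, choice)          # arbitrary choice still refines the spec
assert not refines(prog1, spec)       # the spec is more non-deterministic
```

This matches the picture that refinement is about reducing non-determinism: each program allows a subset of the behaviours the specification permits.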
Chaos spec.
hhv-reflii P vP
hh ⇒-reflii P ⇒ P ≡ true
hh⇒-transii (P ⇒ Q) ∧ (Q ⇒ R) ⇒ (P ⇒ R) hhv-transii (S v Q) ∧ (Q v P ) ⇒ (S v P )
hhv-antiii (P = Q) ≡ (P v Q) ∧ (Q v P )
hh≡-⇒-transii (P ≡ Q) ∧ (Q ⇒ R) ⇒ (P ⇒ R)
hh≡-v-transii (S = Q) ∧ (Q v P ) ⇒ (S v P )
hh⇒-≡-transii (P ⇒ Q) ∧ (Q ≡ R) ⇒ (P ⇒ R)
hhv-≡-transii (S v Q) ∧ (Q = P ) ⇒ (S v P )
hhstrengthen-anteii P ∧Q⇒P
hhv-prog-alt ii S v P ∨ Q ≡ (S v P ) ∧ (S v Q)
hhweaken-cnsq ii P ⇒P ∨Q
hhv-spec-split ii S ∧ T v P ≡ (S v P ) ∧ (T v P )
hhante-∨-distr ii P ∨ Q ⇒ R ≡ (P ⇒ R) ∧ (Q ⇒ R)
hh≡-imp-vii (P = Q) ⇒ (P v Q)
hhcnsq-∧-distr ii P ⇒ Q ∧ R ≡ (P ⇒ Q) ∧ (P ⇒ R)
hhspec-weakeningii S∨T vT
hhprog-strengthenii P vP ∧Q
I As refinement is defined as “universally-closed reverse
implication”, we expect to derive many refinement laws
from these, and similar laws.
Exploring the Refinement Laws (I) Exploring the Refinement Laws (II)
I P vP hh v-reflii
Anything refines itself (trivially). I S v P ∨ Q ≡ (S v P ) ∧ (S v Q) hhv-prog-alt ii
I (P = Q) ⇒ (P v Q) ≡-imp-vii
hh Refining by an arbitrary choice of alternatives is the same
Equality is a (fairly trivial) refinement. as refining by both options individually
I (P = Q) ≡ (P v Q) ∧ (Q v P ) hhv-antiii I S ∧ T v P ≡ (S v P ) ∧ (T v P ) hhv-spec-split ii
Two programs/specifications are equal if they refine each Satisfying a specification with two mandatory parts is the
other. same as satisfying both parts individually.
I (S v Q) ∧ (Q v P ) ⇒ (S v P ) hhv-transii I S∨T vT hhspec-weakeningii
If Q refines S, and P refines Q, then P also refines S. A specification offering alternatives can be satisfied by
We can do refinement in several steps. choosing one of those alternatives
I (S = Q) ∧ (Q v P ) ⇒ (S v P ) hh≡-v-transii I P vP ∧Q hhprog-strengthenii
I Goal: (S = Q) ∧ (Q v P ) ⇒ (S v P ).
I Strategy: reduce to law hh⇒-≡-transii.
I Proof:
I Select and do in class.
I Note the following meta-theorem: (S = Q) ∧ (Q v P ) ⇒ (S v P )
I If P is a theorem, then so is [P ] = “ hhv-def ii ”
I Why: because if P is a theorem, then it is true for all S = Q ∧ [P ⇒ Q] ⇒ [P ⇒ S]
instantiations of variables it contains. So, it remains true if = “ equality substitution, S = Q ”
we quantify over the expression variables. S = Q ∧ [P ⇒ Q] ⇒ [P ⇒ Q]
= “ hhstrengthen-anteii ”
true
Mini-Exercise 4
I We see that both programs refine the specification: I The programs are deterministic
spec v progi , i ∈ 1, 2 prog1 search left-to-right
prog2 search right-to-left
I Both programs, however, have different outcomes. I The choice between them has some non-determinism
prog1 6= prog2 prog1 ∨ prog2 search either l-to-r, or r-to-l
(p 0 = 3 vs. p 0 = 5).
I The specification has lots of non-determinism
I An arbitrary choice between the programs also refines the
spec: spec search any which-way
spec v prog1 ∨ prog2
(p 0 = 3 ∨ p 0 = 5) I Refinement is essentially about reducing non-determinism.
I / P and x 0 ∈
The conditions x ∈ / P exclude a lot, e.g. all
assignment statements (why?)
1. Formal System Invariant better understood, so more care was taken by resulting initialisation code.
2. Not a big issue as the system is meant to stay up and running.

1. The informal system had to have a last-minute fix, so the code speed got worse.
2. If code is formally verified, then you don’t need so many run-time checks (array bounds, etc.)
    W = (P ; W )  c  skip
I Goal decomposition:
    SInit v PInit
    SLoop v PLoop
I Here, we have:
    PInit =b s := 0 ; i := n
    PLoop =b (i > 0) ~ (s := s + i ; i := i − 1)
I We then use the following law of refinement (hhv-; ii) to complete:
    (S1 v P1 ) ∧ (S2 v P2 ) ⇒ (S1 ; S2 ) v (P1 ; P2 )
I The question now is, what are SInit and SLoop ?
I We take a simple approach here, setting SInit = PInit :
      PInit
    = “ defn. ”
      s := 0 ; i := n
    = “ hhsim-:=-mergeii ”
      s, i := 0, n
    = “ hhsim-:=-def ii ”
      s 0 = 0 ∧ i 0 = n ∧ n 0 = n
    = “ by design ”
      SInit
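The development can be sanity-checked by running PInit ; PLoop and asserting the invariant s + S(i ) = S(n) every time round — a Python sketch (not part of the notes), assuming S(k ) is the sum 0 + 1 + · · · + k :

```python
def S(k):
    return k * (k + 1) // 2                  # S(k) = 0 + 1 + ... + k

def sum_prog(n):
    s, i = 0, n                              # PInit: s := 0 ; i := n
    assert s + S(i) == S(n)                  # inv holds at loop start
    while i > 0:                             # PLoop
        s, i = s + i, i - 1
        assert s + S(i) == S(n)              # inv preserved each iteration
    return s                                 # at the end i = 0, so s = S(n)

assert sum_prog(5) == 15
assert sum_prog(0) == 0
```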
i >0∧W ∨i =0∧W ⇒
= “ hh∧-∨-distr ii ” s, i :[s 0 = s + S(i )] ]
(i > 0 ∨ i = 0) ∧ W = “ hhframe-def ii ”
= “ hhexcluded-middleii, noting i : N ” [ i = 0 ∧ s 0 = s + S(i ) ∧ n 0 = n
0
true ∧ W ⇒
= “ hh∧-unitii ” s 0 = s + S(i ) ∧ n 0 = n ]
W = “ prop. law [A ∧ B ⇒ A] ”
true
= “ arithmetic, hhframe-def ii ”
s, i :[s 0 = S(n)]
I Whilst the loop is continuing, we see that W is in some sense doing the same thing each time round.
I Somehow W must embody a property true before and after each iteration: the loop invariant (inv ).
I However, using invariants requires that inv holds at loop start.
  We need to ensure that initialization brings this about.
I Another refinement rule hhrel-init-loop-vii allows us to couple both initialisation and the loop:
    ~v :[pre ⇒ inv 0 ∧ ¬ c 0 ]
    v ~v :[pre ⇒ inv 0 ] ; c ~ ~v :[c ∧ inv ⇒ inv 0 ]
I Given refinement check S v c ~ P , we no longer look for an appropriate W .
  Instead we have to find inv , which is simpler.
I Reminder: we want to show
    s, i :[s 0 = S(n)]
    v s, i := 0, n ; (i > 0) ~ (s, i := s + i , i − 1)
  using inv = s + S(i ) = S(n).
I At the start, i = n ∧ s = 0, so inv holds.
  At the end, i = 0 ∧ inv , so s = S(n) holds.
I We need to get the specification into the correct form, which requires the very useful, very general refinement inference rule hhstrengthen-post ii:
    ~v :[Post1 ] v ~v :[Post2 ]
    ---------------------------------------
    ~v :[pre ⇒ Post1 ] v ~v :[pre ⇒ Post2 ]
  (The rationale for this law will follow shortly)
s, i :[s 0 = S(n)]
I We have the law hh=-∧-substii:
= “ hh⇒-l-unitii ” x = e ∧ P ≡ x = e ∧ P [e/x ]
s, i :[true ⇒ s 0 = S(n)] I From this we can derive many useful laws, e.g.
v “ hhstrengthen-post ii, using ~x :[A] v ~x :[A ∧ B] ”
hh=-⇒-substii (x = e ∧ P ⇒ Q) ≡ (x = e ∧ P ⇒ Q[e/x ])
s, i :[true ⇒ s 0 = S(n) ∧ i 0 = 0]
hh=-v-subst ii (S v P ∧ x = e) ≡ (S[e/x ] v P ∧ x = e)
= “ 0 = S(0) = S(i 0 ) and frame requires n 0 = n ”
s, i :[true ⇒ s 0 + S(i 0 ) = S(n 0 ) ∧ i 0 = 0]
v “ hhrel-init-loop-vii ” I Remember also, that the laws above also apply even if we
s, i :[true ⇒ s 0 + S(i 0 ) = S(n 0 )] don’t replace all free x with e, but instead only change
; (i > 0) ~ s, i :[i > 0 ∧ s + S(i ) = S(n) ⇒ s 0 + S(i 0 ) = S(n 0 )] some.
= “ hh=-⇒-substii ”
[ s 0 = 0 ∧ i 0 = n ∧ n 0 = n ⇒ 0 + S(n) = S(n) ∧ n = n ]
= “ arithmetic ”
[ s 0 = 0 ∧ i 0 = n ∧ n 0 = n ⇒ true ]
= “ hh⇒-r-zeroii ”
true

I Identities propagate through logical-and (∧).
I Identities propagate leftwards through implication (⇒), i.e. following the arrow.
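The refinement just proved can be animated: a minimal Python sketch (the names `S` and `summation` are ours, not from the notes) runs s, i := 0, n ; (i > 0) ~ (s, i := s + i , i − 1) and asserts the invariant s + S(i ) = S(n) before and after every iteration.

```python
# Sketch: animate the refined summation program and check the invariant
# inv = s + S(i) = S(n), where S(k) = 0 + 1 + ... + k.

def S(k):
    return k * (k + 1) // 2

def summation(n):
    s, i = 0, n                      # initialisation establishes the invariant
    assert s + S(i) == S(n)
    while i > 0:
        s, i = s + i, i - 1          # loop body preserves the invariant
        assert s + S(i) == S(n)
    # at exit: i = 0 and inv, hence s = S(n)
    return s

print(summation(10))  # 55
```

Running it proves nothing in general, but it catches a mistaken invariant immediately.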
P ⇒ Q
I The specification pre ⇒ post 0
I is interpreted as
“if pre holds at the start, then we must ensure that post holds at the end”.
I relates a “pre-snapshot” to a “post-snapshot”
I The specification pre ⇒ Rel
I is interpreted as
“if pre holds at the start, then we must ensure that start and end states are related by Rel ”.
I relates a “pre-snapshot” to a relationship between before and after states

I Given specification pre ⇒ Rel , what should our refinement do in situations where pre is False?
I Guiding principle: guarantees and commitments
I The pre-condition pre is a guarantee that some other agent will set things up appropriately.
I Given that guarantee has been met, then Rel is the commitment that any refinement must make.
I But what if the guarantee isn’t met ?
I If the pre-condition is False we can do anything.
I in general we cannot offer anything better than that:
consider x , r : R ∧ x ≥ 0 ⇒ (r 0 )2 = x — what can we do if x < 0?
CS4003 Formal Methods

Loop Refinement Example 1 (Floyd ’67)
Given
a : array 1 . . . n of R
A(0) = 0
A(x ) = ax + A(x − 1), x > 0
ASpec =b s, i :[s 0 = A(n)]
(precondition is true, so we ignore it)
αAProg = {n, i , s, n 0 , i 0 , s 0 }
AProg =b i , s := 1, 0 ; (i ≤ n) ∗ (s, i := s + ai , i + 1)
Prove
ASpec v AProg

Refining Floyd’67: process
We need to
1. find appropriate invariant ainv
2. show ASpec v s, i :[ainv 0 ∧ i 0 > n 0 ]
3. show s, i :[ainv 0 ] v i , s := 1, 0
4. show s, i :[i ≤ n ∧ ainv ⇒ ainv 0 ] v s, i := s + ai , i + 1
s, i :[i ≤ n ∧ s + A(i , n) = A(n) ⇒ s 0 + A(i 0 , n 0 ) = A(n 0 )]
v s, i := s + ai , i + 1
= “ hhsim-:=-def ii ”
s, i :[i ≤ n ∧ s + A(i , n) = A(n) ⇒ s 0 + A(i 0 , n 0 ) = A(n 0 )]
v s 0 = s + ai ∧ i 0 = i + 1 ∧ n 0 = n
= “ hh=-v-substii ”
s, i :[i ≤ n ∧ s + A(i , n) = A(n) ⇒ s + ai + A(i + 1, n) = A(n)]
v s 0 = s + ai ∧ i 0 = i + 1 ∧ n 0 = n
= “ law of A( , ) ”
s, i :[i ≤ n ∧ s + A(i , n) = A(n) ⇒ s + A(i , n) = A(n)]
v s 0 = s + ai ∧ i 0 = i + 1 ∧ n 0 = n
= “ A ∧ B ⇒ B ”
s, i :[true]
v s 0 = s + ai ∧ i 0 = i + 1 ∧ n 0 = n
= “ hhframe-def ii ”
true ∧ n 0 = n
v s 0 = s + ai ∧ i 0 = i + 1 ∧ n 0 = n
= “ hhprog-strengthenii ”
true
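The Floyd ’67 example can likewise be animated; a sketch in Python, where `A_seg(a, i, n)` (our name, for illustration) stands for the segment sum a_i + … + a_n, so the invariant is s + A(i , n) = A(n).

```python
# Sketch of AProg = i, s := 1, 0 ; (i <= n) * (s, i := s + a_i, i + 1)
# with invariant  s + A(i, n) = A(n)  checked at every step.

def A_seg(a, i, n):
    # a_i + ... + a_n (empty sum when i > n); a is 1-indexed in the slides
    return sum(a[k - 1] for k in range(i, n + 1))

def array_sum(a):
    n = len(a)
    i, s = 1, 0
    assert s + A_seg(a, i, n) == A_seg(a, 1, n)      # invariant established
    while i <= n:
        s, i = s + a[i - 1], i + 1
        assert s + A_seg(a, i, n) == A_seg(a, 1, n)  # invariant preserved
    return s                                         # i > n and inv give s = A(n)

print(array_sum([3, 1, 4, 1, 5]))  # 14
```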
Given
HSpec =b r , q :[y 0 > r 0 ∧ x 0 = r 0 + y 0 ∗ q 0 ]
HProg =b r , q := x , 0 ; (y ≤ r ) ∗ (r , q := r − y , 1 + q)
(precondition is true, so we ignore it)
Prove
HSpec v HProg

We need to
1. find appropriate invariant hinv
2. show HSpec v r , q :[hinv 0 ∧ y 0 > r 0 ]
3. show r , q :[hinv 0 ] v r , q := x , 0
4. show r , q :[y ≤ r ∧ hinv ⇒ hinv 0 ] v r , q := r − y , 1 + q

(y ≤ r ∧ x = r + q ∗ y
⇒ x = r − y + (q + 1) ∗ y ) ∧ x 0 = x ∧ y 0 = y
v r 0 = r − y ∧ q 0 = 1 + q ∧ x 0 = x ∧ y 0 = y
= “ arithmetic ”
(y ≤ r ∧ x = r + q ∗ y ⇒ x = r + q ∗ y ) ∧ x 0 = x ∧ y 0 = y
v r 0 = r − y ∧ q 0 = 1 + q ∧ x 0 = x ∧ y 0 = y
= “ A ∧ B ⇒ B ”
true ∧ x 0 = x ∧ y 0 = y
v r 0 = r − y ∧ q 0 = 1 + q ∧ x 0 = x ∧ y 0 = y
= “ hhprog-strengthenii ”
true

I We shall finish off with one more different loop example
I A loop whose termination is not pre-determined
I e.g., searching an array
I p :[ap 0 = w  w ∈ a  p 0 = 0]
a : array 1 . . . N of T
w : T
p : N
ASSpec =b p :[ap 0 = w  w ∈ a  p 0 = 0]
ASProg =b p := N ; (ap 6= w ∧ p > 0) ∗ p := p − 1
Prove
ASSpec v ASProg

1. find an appropriate invariant asinv
2. Prove ASSpec v p :[asinv 0 ∧ ¬ (ap 0 6= w ∧ p 0 > 0)]
3. Prove p :[asinv 0 ] v p := N
4. Prove p :[ap 6= w ∧ p > 0 ∧ asinv ⇒ asinv 0 ] v p := p − 1
5. For simplicity, we shall assume w 0 = w and a 0 = a throughout, so we can ignore the frames.
6. Left as exercise for the very keen !

CS4003 Formal Methods

I This doesn’t reduce to true as expected.
I We get an execution of P in an arbitrary starting state !
Wrong Choice ?
I Maybe we chose wrong earlier — should Forever be miracle ?
miracle ; P
= “ defn. miracle ”
false ; P
= “ hh; -def ii ”
∃ Sm • false[Sm /S 0 ] ∧ P [Sm /S]
= “ substitution ”
∃ Sm • false ∧ P [Sm /S]
= “ hh∧-zeroii ”
∃ Sm • false
= “ hh∃-freeii, defn. miracle ”
miracle
Spec v Prog
I We shall extend our theory by allowing divergence/non-divergence to be an observable notion.
I We introduce a new variable : ok
ok = True — program is non-divergent (a.k.a. “stable”)
ok = False — program is diverging (a.k.a. “unstable”)
I Variable ok is not a program variable
I it is an auxiliary or model variable.
I We assume no program has such a variable
I ok ∈/ S
I As with program variables, we distinguish before- and after-execution:
I ok — stability/non-divergence at start (i.e. of “previous” program)
I ok 0 — stability/non-divergence at end

I Consider the specification: pre ⇒ Post
I i.e. if pre holds at start, then Post relates start and end (provided the program stops)
I We shall now replace the above by:
ok ∧ pre ⇒ ok 0 ∧ Post
I if started in a stable state ok
I and pre holds at start
I then, we end in a stable state ok 0 ,
I and the relation Post holds of starting and ending states.
I For sequential imperative programs, stability is termination
I Important: We assume that neither pre nor Post mention ok or ok 0 .
I Also, variables ok and ok 0 are added to the alphabet of every language construct.
∃ xs • (A ⇒ B) ∧ (C ⇒ D)
= “ hh⇒-def ii ”
∃ xs • (¬ A ∨ B) ∧ (¬ C ∨ D)
= “ Distributivity laws ”
∃ xs • ¬ A ∧ ¬ C ∨ ¬ A ∧ D ∨ B ∧ ¬ C ∨ B ∧ D
= “ hh∃-distr ii, several times ”
(∃ xs • ¬ A ∧ ¬ C) ∨ (∃ xs • ¬ A ∧ D) ∨ (∃ xs • B ∧ ¬ C) ∨ (∃ xs • B ∧ D)

If we apply this to our proof we get

(∃ Sm , okm • ¬ ok ∧ ¬ (okm ∧ pre[Sm /S])) ∨
(∃ Sm , okm • ¬ ok ∧ ok 0 ∧ Post [Sm /S]) ∨
(∃ Sm , okm • okm ∧ Sm = S ∧ ¬ (okm ∧ pre[Sm /S])) ∨
(∃ Sm , okm • okm ∧ Sm = S ∧ ok 0 ∧ Post [Sm /S])

I We shall now simplify each of the four components obtained above
I For convenience we give them labels (using “S;U” as short for hhskip-; -unit ii):
(S;U.1) ∃ Sm , okm • ¬ ok ∧ ¬ (okm ∧ pre[Sm /S])
(S;U.2) ∃ Sm , okm • ¬ ok ∧ ok 0 ∧ Post [Sm /S]
(S;U.3) ∃ Sm , okm • okm ∧ Sm = S ∧ ¬ (okm ∧ pre[Sm /S])
(S;U.4) ∃ Sm , okm • okm ∧ Sm = S ∧ ok 0 ∧ Post [Sm /S]
Proof of hhskip-; -unitii (IV, simplifying S;U.1)
Proof of hhskip-; -unitii (V, simplifying S;U.2)
~x :[S] =b S ∧ S 0\x~ 0 = S\~x
I How do we fit in ok , ok 0 ?
I We also need to note that if ok = False at the outset, then we can modify any variables, not just those in the frame.
I The definition we get is therefore:
hhframe-def ii ~x :[pre , Post ] =b ok ∧ pre ⇒ ok 0 ∧ Post ∧ S 0\x~ 0 = S\~x

hhv-:=ii : x 0 = e v x := e
I The specification x 0 = e is no longer valid
— it does not mention ok and ok 0 in a valid way
I We get useful variants if we change the specification to fit our new theory:
hhv-:=ii w~ :[pre , x 0 = e] v x := e, x ∈ w~

Given
HSpec =b r , q :[true , y 0 > r 0 ∧ x 0 = r 0 + y 0 ∗ q 0 ]
HProg =b r , q := x , 0 ; (y ≤ r ) ∗ (r , q := r − y , 1 + q)
Prove
HSpec v HProg
using new version of our theory, with ok and ok 0
ok ∧ y ≤ r ∧ x = r + q ∗ y
⇒ ok 0 ∧ x = r − y + (q + 1) ∗ y ∧ x 0 = x ∧ y 0 = y
v ok ⇒ ok 0 ∧ r 0 = r − y ∧ q 0 = 1 + q ∧ x 0 = x ∧ y 0 = y
= “ arithmetic, and e = e ≡ true ”
ok ∧ y ≤ r ∧ x = r + q ∗ y ⇒ ok 0 ∧ x = r + q ∗ y
v ok ⇒ ok 0 ∧ r 0 = r − y ∧ q 0 = 1 + q ∧ x 0 = x ∧ y 0 = y
= “ (A ∧ B ⇒ C ∧ B) ≡ (A ∧ B ⇒ C) ”
ok ∧ y ≤ r ∧ x = r + q ∗ y ⇒ ok 0
v ok ⇒ ok 0 ∧ r 0 = r − y ∧ q 0 = 1 + q ∧ x 0 = x ∧ y 0 = y
= “ (A ∧ B ⇒ C) v (A ⇒ C ∧ D) ”
true

I We have proved that if the Hoare’69 program starts ok, then it terminates, establishing y 0 > r 0 ∧ x 0 = y 0 ∗ q 0 + r 0
I The proof of correctness was very similar to before
I We had to handle the extra complexity of ok ∧ . . . ⇒ ok 0 ∧ . . .
I What have we gained ?
I in fact, things aren’t just pointless, they are worse than that !
f (x , y , r − y , q + 1) < f (x , y , r , q)

4. show ~v :[c ∧ inv , inv 0 ∧ 0 6 V 0 < V ] v P
This is the key difference: we now have to show the variant is reduced, but never below zero.
I intuitively, the variant measures the “distance to go” towards termination.
I The proof obligation involving the variant is now
r , q :[y 6 r ∧ x = q ∗ y + r , x 0 = q 0 ∗ y 0 + r 0 ∧ 0 6 r 0 ⊖ y 0 < r ⊖ y ]
v r , q := r − y , q + 1
where a ⊖ b = a − b, if b 6 a, and equals zero otherwise.
I We expand the lhs using hhframe-def ii:
ok ∧ y 6 r ⇒ ok 0 ∧ 0 6 r 0 ⊖ y 0 < r ⊖ y ∧ x 0 = x ∧ y 0 = y
I We expand the rhs using hhsim-:=-def ii:
ok ⇒ ok 0 ∧ r 0 = r − y ∧ q 0 = q + 1 ∧ x 0 = x ∧ y 0 = y
I Again we ignore the variables ok , x , and their dashed variants and focus on the key variant part:
y 6 r ⇒ 0 6 (r − y ) ⊖ y < r ⊖ y

HProg =b r , q := x , 0 ; (y ≤ r ) ∗ (r , q := r − y , 1 + q)
We can now prove
HSpec v HProg
by choosing invariant
x = q ∗ y + r
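The total-correctness argument can be checked concretely: a Python sketch of the division loop (our own check, using r itself as the variant rather than the ⊖-calculation) asserts that each iteration strictly decreases the variant without going below zero, while maintaining the invariant.

```python
# Sketch: r, q := x, 0 ; while y <= r do r, q := r - y, q + 1  (requires y > 0).
# r acts as a variant: strictly decreasing, never negative, so the loop terminates.

def divide(x, y):
    assert y > 0
    r, q = x, 0
    while y <= r:
        old = r
        r, q = r - y, q + 1
        assert 0 <= r < old          # variant decreased, still non-negative
        assert x == r + q * y        # invariant maintained
    assert x == r + q * y and r < y  # postcondition: quotient and remainder
    return q, r

print(divide(17, 5))  # (3, 2)
```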
hh0-+-def ii 0 + m = m
hhsucc-+-def ii (succ n) + m = succ(n + m)
hh0-∗-def ii 0 ∗ m = 0
hhsucc-∗-def ii (succ n) ∗ m = m + (n ∗ m)
hhˆ-0-def ii m^0 = 1
hhˆ-succ-def ii m^(succ n) = m ∗ m^n
hh1-def ii 1 = succ 0
hh2-def ii 2 = succ 1
hh3-def ii 3 = succ 2
…
(Strictly speaking we need to model digit strings and their valuations in order to do the last three definitions properly)

hh+-0-unit ii n + 0 = n
hh+-succ-commii (succ n) + m = n + (succ m)
hh+-comm ii n + m = m + n
hh+-associi ` + (m + n) = (` + m) + n
hh∗-comm ii n ∗ m = m ∗ n
hh∗-0-def ii n ∗ 0 = 0
hh∗-associi ` ∗ (m ∗ n) = (` ∗ m) ∗ n
hh∗-+-distr ii ` ∗ (m + n) = ` ∗ m + ` ∗ n
hh1+1-equals-2ii 1 + 1 = 2
hh2+2-equals-4ii 2 + 2 = 4
Goal: 1 + 1 = 2
Strategy: reduce to true

1+1=2
= “ hh1-def ii, hh2-def ii ”
(succ 0) + (succ 0) = succ 1
= “ hhsucc-+-def ii, hh1-def ii ”
succ(0 + succ 0) = succ(succ 0)
= “ hh0-+-def ii ”
succ(succ 0) = succ(succ 0)
= “ hh=-reflii ”
true

Goal: n + 0 = n
Strategy: ???
I A traditional proof of this uses induction, on n.
I We show it is true for n = 0, i.e. that 0 + 0 = 0
I We then assume it (i.e. n + 0 = n) and show it is true for succ n, i.e.
succ n + 0 = succ n
I We have an induction principle (law hhN-inductionii), but how do we use it in our proof system ?
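The Peano definitions above can be transcribed directly; a small Python sketch (representation as nested tuples is our choice) computes with hh0-+-def ii and hhsucc-+-def ii, so 1 + 1 = 2 falls out by evaluation, while n + 0 = n can only be spot-checked — induction is still needed for all n.

```python
# Peano numerals as nested tuples; plus follows the axioms literally.

ZERO = ('zero',)

def succ(n):
    return ('succ', n)

def plus(n, m):
    if n == ZERO:                  # hh0-+-def ii : 0 + m = m
        return m
    return succ(plus(n[1], m))     # hhsucc-+-def ii : (succ n) + m = succ(n + m)

ONE = succ(ZERO)
TWO = succ(ONE)

print(plus(ONE, ONE) == TWO)   # True: 1 + 1 = 2 by computation
print(all(plus(n, ZERO) == n   # n + 0 = n, checked for a few n only
          for n in [ZERO, ONE, TWO, succ(TWO)]))
```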
Class 19

Goal: (m + 0) − 0 = m
Strategy: reduce lhs to rhs
(m + 0) − 0
= “ hh+-0-unitii ”
m − 0
= “ hh−-0-def ii ”
m

I Goal: (m + n) − n = m
I Strategy: Induction on n
I Base-Case: (m + 0) − 0 = m
I Inductive Step:
((m + n) − n = m) ⇒ ((m + (succ n)) − (succ n) = m)
(n > 1) ~ (n := n/2  even n  n := (3 · n + 1)/2)

I Prove it refines the following (total-correctness) specification: n > 0 ` n 0 = 1
I Done yet ?
I If so, well done, take a bow, leave the class, go to MIT, Stanford, Microsoft Research, wherever …
I It is the so-called Collatz Conjecture (1937): it is known to terminate for all n 6 2^60 (May 2011), but no proof that it terminates for all n has been found.

I In the computer science domain, undefinedness is a fact of life
I However, there is no universal agreement on how to handle it in formal logics.
I Some approaches:
I Lots of definedness side-conditions
I Giving “meanings” to undefined expressions
I Triple-valued logics
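The loop itself is trivial to run; a Python sketch of the (compressed) Collatz iteration shows it reaching 1 for every small starting value, which of course proves nothing about termination in general — that is precisely the point of the conjecture.

```python
# (n > 1) * (n := n/2 if n even, else n := (3n + 1)/2)  — compressed Collatz step.
# Empirical check only: no amount of running this settles termination for all n.

def collatz(n):
    steps = 0
    while n > 1:
        n = n // 2 if n % 2 == 0 else (3 * n + 1) // 2
        steps += 1
    return steps

print(all(collatz(n) >= 0 for n in range(1, 1000)))  # True: terminated for all tried
print(collatz(7))  # 11 compressed steps: 7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1
```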
I The use of a partial function/operator implies its definedness side-condition
e.g. the term (x + y ) − z implies a condition z 6 x + y .
I When replacing a term by one that is equal, we shall explicitly add in definedness conditions if the term that required it disappears.
I e.g. consider predicate v = ((x + y ) − z ) − ((x + y ) − z ) and law e − e = 0
I The term has an implicit side-condition: z 6 x + y
I if we replace the term by 0 we get v = 0, but we have lost information about the side-condition.
I We shall re-introduce the side-condition explicitly, so getting v = 0 ∧ z 6 x + y .

We use σ, τ for lists (a.k.a. sequences)

σ, τ ∈ T ∗ ::= . . .
| hi empty list
| x oo σ list construction
| hx1 , . . . xn i list enumeration
| head (σ) | tail (σ) list destruction (1)
| front (σ) | last (σ) list destruction (2)
| σ a τ list concatenation
| #σ list length
| σ 6 τ list prefix relation

Some axioms for lists / Some definitions for lists (!) (??)
Note that head , tail , front and last are partial , defined only for non-empty lists.
Some theorems for lists
hha-r-unit ii σ a hi = σ
hh#-a-morphism ii #(σ a τ ) = #σ + #τ
Proofs in class

Proof of hha-r-unitii (I)
I Goal: σ a hi = σ
I Strategy: induction over σ
Property P (σ) =b σ a hi = σ
Base Case: Show P (hi)
Inductive Step: Show P (τ ) ⇒ P (x oo τ )

Base Case P (hi) ≡ (hi a hi = hi)
Strategy: reduce LHS to RHS
hi a hi
= “ hha-hi-def ii ”
hi

Inductive Step P (τ ) ⇒ P (x oo τ )
Goal: (τ a hi = τ ) ⇒ ((x oo τ ) a hi = x oo τ )
Strategy: assume LHS to show RHS
hhind-hypii τ a hi = τ
(x oo τ ) a hi = x oo τ
= “ hha-oo-def ii ”
x oo (τ a hi) = x oo τ
= “ hhind-hypii ”
x oo τ = x oo τ
= “ hh=-reflii ”
true
hhfront -single-def ii front hx i = hi
hhfront -oo-def ii front (x oo σ) = x oo front (σ)
hhlast -single-def ii last hx i = x
hhlast -oo-def ii last (x oo σ) = last (σ)
Explain in Class
front and last are problematical, but not because they are partial.
Note that head , tail , front and last are partial , defined only for non-empty lists.
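The list operators above are easy to mirror in Python; a sketch (our own transcription) that makes head, tail, front and last explicitly partial, raising on hi in the spirit of the definedness side-conditions discussed earlier.

```python
# List destructors made explicitly partial: undefined (raise) on the empty list.

def head(s):
    if not s: raise ValueError("head: undefined on <>")
    return s[0]

def tail(s):
    if not s: raise ValueError("tail: undefined on <>")
    return s[1:]

def front(s):
    if not s: raise ValueError("front: undefined on <>")
    return s[:-1]

def last(s):
    if not s: raise ValueError("last: undefined on <>")
    return s[-1]

s = [1, 2, 3]
print(head(s), tail(s), front(s), last(s))  # 1 [2, 3] [1, 2] 3
print([head(s)] + tail(s) == s)             # True: destructors invert x oo sigma
print(s + [] == s)                          # True: the hha-r-unitii theorem
```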
A Theory of Sets
Axioms:
hh¬-in-{}ii x ∈/ {}
hhin-singletonii x ∈ {y } ≡ x = y
hhset-extensionalityii S = T ≡ ∀ x • x ∈ S ≡ x ∈ T
Class 20

n, f , x : N
fac(0) = 1
fac(n) = n ∗ fac(n − 1), n > 0
FSpec =b f , x :[f 0 = fac(n)]
FProg =b f , x := 1, 2 ; (x 6 n) ~ f , x := f ∗ x , x + 1
Using the partial correctness theory only:
5.1 State clearly the proofs that need to be done.
5.2 Determine a suitable invariant.
5.3 Prove the statement that states that FSpec is refined appropriately (Hint: one of the proofs in the answer to 5.1).
(due in next Friday week, 2pm, in class)

I Formal Methods sounds like a good idea in theory
I What about in practice ?
I Real-world problems result in lots of small, non-trivial proofs, each not quite the same as any other.
I Real-world use of formal methods requires tool support to be practical.
I U ·(TP )2 is a theorem proving assistant designed for UTP
I It is under development here at TCD
I A recent release suitable for CS4003 is now available
I U ·(TP )2 0.97α8
I Open-source, has been built on Linux/Mac OS X; binaries for Windows
I Here we give examples on the Windows version;
the Un*x or OS X version will be much the same.

I Follow link from Blackboard (“Tool Support” area)
I (Windows) Download the ZIP file
I Copy everything into a new folder
Binaries: UTP2.exe, wxc-msw2.8.10-0.11.1.2.dll, Saoithin-*.wav (keep together)
Documentation: *.txt (read !)
I To run, simply double-click on the .exe file
I A MS-DOS console window appears, followed shortly by the top-level window.
I If using Unix/Mac OS X, you can build from source
http://www.scss.tcd.ie/Andrew.Butterfield/Saoithin/
I requires Haskell, wxHaskell (good luck!)
I the screenshots shown here are from an earlier year’s version, but essentials are the same.

Workspaces / Warnings
U ·(TP )2 Top Level Window (Laws) / (Conjectures)
Class 21
Theology of Computing
(figure: a refinement ordering of designs, with Chaos (true) at the bottom,
ok ⇒ ok 0 ∧ x 0 ∈ {1 . . . 10} (i.e. true ` x 0 ∈ {1 . . . 10}) above it,
refined in turn by ok ⇒ ok 0 ∧ x 0 = 3 and ok ⇒ ok 0 ∧ x 0 = 6,
with Miracle (¬ ok ) at the top)
I Our theory of total correctness is based on the notion of designs,
i.e. predicates of the form: ok ∧ P ⇒ ok 0 ∧ Q (or P ` Q)
I However, given an arbitrary predicate, how do we determine if it is a design ?
I Clearly one way is to transform it into the above form.
I This can be quite difficult to do in some cases.
I Is there an easier way?
I Also, just what is it that characterises designs in any case?

I We view designs as “healthy” predicates,
i.e., those that describe real programs and specifications.
I We shall introduce “healthiness conditions” that test a predicate to see if it is healthy.
I These conditions capture key intuitions about how real programs behave
I Some will be mandatory — all programs and specifications will satisfy these.
I Some will be optional — these characterise particularly well-behaved programs
H1 P = (ok ⇒ P )
I P only ‘takes effect’ when ok is true — i.e. program has started
I When ¬ ok , then P says nothing
I H1 above is equivalent to satisfying both of the following laws:
hhskip-; -unit ii skip ; P = P
hh; -L-Zeroii Forever ; D = Forever

What about ¬ ok ∧ ok 0 ∧ x 0 = x + 1 ∧ S 0\x 0 = S\x ?
ok ⇒ ¬ ok ∧ ok 0 ∧ x 0 = x + 1 ∧ S 0\x 0 = S\x
= “ hh⇒-def ii ”
¬ ok ∨ ¬ ok ∧ ok 0 ∧ x 0 = x + 1 ∧ S 0\x 0 = S\x
= “ hh∨-∧-absorbii ”
¬ ok
So it is not H1-healthy
H4 P ; true = true
I A H4-healthy program cannot force termination of what runs afterward.
I All real programs satisfy this property.
I H4 states that a predicate is Feasible:
there is a program that has this behaviour.

¬ ok ; skip
= “ hhskip-def ii, hh⇒-def ii ”
¬ ok ; (¬ ok ∨ ok 0 ∧ S 0 = S)
= “ hh; -def ii, substitution ”
∃ okm , Sm • ¬ ok ∧ (¬ okm ∨ ok 0 ∧ S 0 = Sm )
= “ hh∧-∨-distr ii, hh∃-distr ii ”
(∃ okm , Sm • ¬ ok ∧ ¬ okm ) ∨ (∃ okm , Sm • ¬ ok ∧ ok 0 ∧ S 0 = Sm )
= “ simplify ”
¬ ok ∨ ¬ ok ∧ ok 0
= “ hh∨-∧-absorbii ”
¬ ok
So miracle is H3-healthy
hhForever -def ii Forever =b true

I Under what conditions does design D (= P ` Q) satisfy the above law ?
hh; -L-Zeroii Forever ; D = Forever
hh; -R-Zeroii D ; Forever = Forever
Here D are designs
I However, now we restrict ourselves to designs only, and use the design definition for skip
I Does it give us the desired laws ?
I All designs satisfy it !
It is just law hhskip-; -unitii, already proven.

H1 P = ok ⇒ P
H2 P = P ; J
H3 P = P ; skip
H4 P ; true = true
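The healthiness conditions can be checked by brute force on a toy alphabet; a sketch assuming predicates are modelled as Python boolean functions of (ok , ok 0 , x , x 0 ) — the names `mkH1`/`isH1` follow the slides, while `DOM`, `skip`, and `bad` are ours.

```python
from itertools import product

# Toy state: a single variable x over a small domain; a predicate is a
# boolean function of (ok, ok', x, x').
DOM = range(4)

def mkH1(P):
    """The H1 healthifier: ok => P."""
    return lambda ok, ok_, x, x_: (not ok) or P(ok, ok_, x, x_)

def isH1(P):
    """P is H1-healthy iff P = mkH1(P), checked over all valuations."""
    return all(P(ok, ok_, x, x_) == mkH1(P)(ok, ok_, x, x_)
               for ok, ok_ in product([False, True], repeat=2)
               for x, x_ in product(DOM, repeat=2))

skip = lambda ok, ok_, x, x_: (not ok) or (ok_ and x_ == x)      # ok => ok' & S'=S
bad  = lambda ok, ok_, x, x_: (not ok) and ok_ and x_ == x + 1   # acts while unstarted

print(isH1(skip))  # True
print(isH1(bad))   # False — matches the ¬ok ∧ ok' ∧ x' = x+1 counterexample above
```

H1-healthy predicates are exactly the fixpoints of `mkH1`, which is the "healthy predicates are fixpoints of healthifiers" view developed below.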
I We shall define a function that takes a predicate as parameter, and returns True if the predicate is H1-healthy:
isH1(P ) =b P = (ok ⇒ P )
I We refine this further by introducing another function that, given a predicate, returns an appropriately modified predicate:
mkH1(P ) =b ok ⇒ P
isH1(P ) =b P = mkH1(P )
I What we have done is to start a new formal game altogether !
I We add a new category to our language: that of a higher-order definition
P , Q, R, S ∈ PVar — predicate (meta-)variables
H, F ∈ HOF — higher order functions
HOFDef ::= H(P ) =b body — HOF definition
I We extend our notion of predicate to allow the application of a HOF to a predicate argument
P ∈ Pred ::= . . . | H(P )

I Up to now, we have been using “First-Order predicate calculus”
I we have built predicates from basic parts using fixed operators.
I Any functions have only existed inside the expression (sub-)language
I All quantification variables have been limited to expression variables only
I Now we are moving towards “Higher-Order Logic”
I We are introducing predicates about predicates (e.g. isH1)
I We are introducing functions that transform predicates (e.g. mkH1)
I UTP also allows quantifier variables to range over predicates
I Given H(P ) =b body , we have a new law:
H (MyPred ) = body [MyPred /P ]
where we now have substitution notation extended to allow predicates to replace predicate variables.
I The addition of higher-order functions (i.e. those that take predicates as arguments) gives “monadic 2nd-order logic”
I If we also allow quantification over predicates:
Pred ::= . . . | ∀ P • H(P )
Ha(P ) or Ha(Hb(P )), but not by Hb(P ).
I Strategy: reduce both sides to the same predicate

Proof that H1 and H2 commute (LHS)
H2(H1(P ))
= “ defns. H1, H2 ”
(ok ⇒ P ); J
= “ hh∨-; -distr ii (see below) ”
(¬ ok ; J ) ∨ (P ; J )
= “ defn. H2 ”
H2(¬ ok ) ∨ (P ; J )
= “ ¬ ok (a.k.a. miracle) is a design and hence H2 ”
¬ ok ∨ (P ; J )

Proof that H1 and H2 commute (RHS)
H1(H2(P ))
= “ defns. H1, H2 ”
ok ⇒ (P ; J )
= “ hh⇒-def ii ”
¬ ok ∨ (P ; J )

We have used hh∨-; -distr ii: (P ∨ Q); R ≡ (P ; R) ∨ (Q; R)
whose proof is left as a (voluntary) exercise.
(P v Q) ⇒ (F(P ) v F(Q))
∀ P , Q • (P v Q) ⇒ (F(P ) v F(Q))

P v Q
⇒ “ G is monotonic ”
G(P ) v G(Q)
⇒ “ F is monotonic ”
F(G(P )) v F(G(Q))

I An easier (indirect) way:
consider the definition of F, which will look something like:

Is F(P ) =b P ∧ (∃ x • Q ⇒ P ) monotonic in P ?
P ∧ (∃ x • Q ⇒ P )
“ mark occurrence polarity ”
P + ∧ (∃ x • Q − ⇒ P + )+
“ both P are labelled with + ”
F confirmed monotonic

Is F(Q) =b P ∧ (∃ x • Q ⇒ P ) monotonic in Q ?
P ∧ (∃ x • Q ⇒ P )
“ mark occurrence polarity ”
P + ∧ (∃ x • Q − ⇒ P + )+
“ the sole Q is labelled with − ”
F confirmed non-monotonic

Is F(P ) =b P ∧ (P ≡ Q) monotonic?
P ∧ (P ≡ Q)
“ mark occurrence polarity ”
P + ∧ (P ± ≡ Q ± )+
To resolve the ± markings we can expand P ≡ Q as
(P ⇒ Q) ∧ (Q ⇒ P )
or
(P ∧ Q) ∨ (¬ P ∧ ¬ Q)
I Then do further simplification, down to the “or-ing of the and-ing of possibly negated atomic predicates”
I However this is getting almost as complicated as doing a direct proof of monotonicity.

Is F(P ) =b P ∧ ¬ (∃ x • P ⇒ Q) monotonic?
P ∧ ¬ (∃ x • P ⇒ Q)

I Fixpoints crop up everywhere in CS theory
I any time iteration or recursion is involved
I There are at least two uses of fixpoints in UTP:
1. defining the meaning of recursion/iteration
2. healthy predicates are fixpoints of “healthifiers”
lub{F n (⊥) | n ∈ 0 . . .}
F 0 (x ) =b x
F n+1 (x ) =b F(F n (x ))
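The iteration lub{F n (⊥)} can be computed directly when the lattice is finite; a sketch (the lattice, seed and operator `F` are our toy choices, not from the notes) takes subsets of {0..9} ordered by inclusion, ⊥ = ∅, and iterates until F(x ) = x.

```python
# Least fixpoint by Kleene iteration on a finite lattice of sets:
# start at bottom and apply F until it stabilises.

def lfp(F, bottom=frozenset()):
    x = bottom
    while True:
        nxt = F(x)
        if nxt == x:       # F(x) = x : a fixpoint, and the least one,
            return x       # since we started from bottom and F is monotonic
        x = nxt

def F(s):
    # toy reachability operator: seed 0, step +2 (mod 10); monotonic w.r.t. subset
    return frozenset({0}) | frozenset((i + 2) % 10 for i in s)

print(sorted(lfp(F)))  # [0, 2, 4, 6, 8]
```

The successive iterates ∅, {0}, {0,2}, {0,2,4}, … form the chain F n (⊥) whose lub is the answer; monotonicity of F is what guarantees the chain is increasing.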
(D v P ) ⇒ F(D) v F(P )

I Having specification and programming constructs that are monotonic is very important in allowing practical refinement.
I We have seen that the program constructs (SEQ, CONDc , WLOOPc ) are monotonic
I What about specification frames (w :[P , Q]), or designs (P ` Q) ?
I Let’s define HOFs for these:
SPECw (P , Q) =b w :[P , Q]
DSGN (P , Q) =b P ` Q
Are they monotonic ?

I Expand our definitions of SPECw and DSGN :
SPECw (P , Q) = ok ∧ P ⇒ ok 0 ∧ Q ∧ S 0\w 0 = S\w
DSGN (P , Q) = ok ∧ P ⇒ ok 0 ∧ Q
I We find that both are monotonic in their second argument, but anti-monotonic in their first.
I This explains why in refinement laws, in order to show that
(P ` Q) v (R ` S)
it is necessary to show R v P , rather than vice-versa.
I In other words, preconditions appear in a negative position.
I VCs are generated for all the program statements
I We shall define VC generation recursively over program structure
genVC : Program → P VC
I Given that the last statement is an assignment:
genVC(p⊥ ; P1 ; . . . ; Pn−1 ; v := e; q⊥ )
= genVC(p⊥ ; P1 ; . . . ; Pn−1 ; q[e/v ]⊥ )
We drop the assignment and replace all free occurrences of v in the last assertion by e.
I Given that the last statement is not an assignment:
genVC(p⊥ ; P1 ; . . . ; Pn−1 ; r⊥ ; Pn ; q⊥ )
= genVC(r⊥ ; Pn ; q⊥ ) ∪ genVC(p⊥ ; P1 ; . . . ; Pn−1 ; r⊥ )
I For a loop:
genVC(p⊥ ; c ∗ (i⊥ ; P ); q⊥ )
= {p ⇒ i , i ∧ ¬ c ⇒ q} ∪ genVC((i ∧ c)⊥ ; P ; i⊥ )
I These correspond closely to our proof technique for loops:
I p ⇒ i — pre-condition sets up invariant
I i ∧ ¬ c ⇒ q — invariant and termination satisfies postcondition
I (i ∧ c)⊥ ; P ; i⊥ — invariant and loop-condition preserve the invariant

I Do in class
I Solution
true ⇒ x = x ∧ 0 = 0
r = x ∧ q = 0 ⇒ x = r + (y ∗ q)
x = r + y ∗ q ∧ ¬ (y ≤ r ) ⇒ x = r + (y ∗ q) ∧ r < y
x = r + y ∗ q ∧ y ≤ r ⇒ x = r − y + (y ∗ (q + 1))
I These are simple enough to do by hand (or with U ·(TP )2 ).
I Automated provers with good arithmetic facilities should also handle these.
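Before handing such VCs to a prover, they can be sanity-checked by brute force; a Python sketch (the `vc1`–`vc4` encodings and the small domain are ours) evaluates the four division VCs above over all quadruples in 0..7.

```python
# Brute-force sanity check of the division VCs (not a proof - the general
# case over all naturals still needs a prover or hand proof).
from itertools import product

def vc1(x, y, r, q): return True  # true => x = x and 0 = 0
def vc2(x, y, r, q): return not (r == x and q == 0) or x == r + y * q
def vc3(x, y, r, q): return (not (x == r + y * q and not y <= r)
                             or (x == r + y * q and r < y))
def vc4(x, y, r, q): return (not (x == r + y * q and y <= r)
                             or x == (r - y) + y * (q + 1))

dom = range(8)
print(all(vc(x, y, r, q)
          for vc in (vc1, vc2, vc3, vc4)
          for x, y, r, q in product(dom, repeat=4)))  # True
```

A counterexample search like this catches a mis-stated invariant in milliseconds, which is cheaper than discovering it halfway through a proof.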
pre ` post 0 v prog
= [prog ⇒ pre ` post 0 ]
= [prog ⇒ (ok ∧ pre ⇒ ok 0 ∧ post 0 )]
= [ok ∧ pre ⇒ (prog ⇒ ok 0 ∧ post 0 )]
= [ok ∧ pre ⇒ ∀ okm , Sm • prog ⇒ ok 0 ∧ post 0 ]
= [ok ∧ pre ⇒ ¬ ∃ okm , Sm • prog ∧ (ok 0 ⇒ ¬ post 0 )]
= [ok ∧ pre ⇒ ¬ (prog; ¬ post 0 )]

I ¬ (prog; ¬ post 0 ) is the weakest condition under which prog is guaranteed to achieve post 0
I It means the weakest pre-condition under which running prog will guarantee outcome (condition) post .
Look familiar ?
I In UTP we can define it as
Q wp r =b ¬ (Q; ¬ r 0 )
I It can be shown to obey the following laws:
x := e wp r ≡ r [e/x ]
P ; Q wp r ≡ P wp (Q wp r )
(P c Q) wp r ≡ (P wp r ) c (Q wp r )
(P u Q) wp r ≡ (P wp r ) ∧ (Q wp r )
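The first wp law can be checked semantically on a toy state space; a sketch (state = value of x in 0..7, program = set of before/after pairs; the names `wp`, `assign`, `e`, `r` are ours) computes Q wp r as "no Q-step can end in ¬r" and compares it with the substitution r [e/x ].

```python
# Semantic sketch of  Q wp r = not(Q ; not r')  over a toy state space,
# checking the law  (x := e) wp r == r[e/x]  for one concrete e and r.

STATES = range(8)

def wp(Q, r):
    # s is in the weakest precondition iff every Q-step from s ends in r
    return {s for s in STATES if all(r(s2) for (s1, s2) in Q if s1 == s)}

e = lambda x: (x + 1) % 8                 # the expression in x := e
assign = {(s, e(s)) for s in STATES}      # the relation for x := e
r = lambda x: x % 2 == 0                  # postcondition: x is even

lhs = wp(assign, r)
rhs = {s for s in STATES if r(e(s))}      # r[e/x], computed by substitution
print(lhs == rhs)  # True
```

The same harness extends to sequencing (compose the relations) to check P ; Q wp r ≡ P wp (Q wp r ).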
Using WP
I We then show that the precondition pre implies the overall weakest precondition:
pre ⇒ P1 ; . . . ; Pn−2 ; Pn−1 ; Pn wp post

The problem with WP
(c ∗ P ) wp r =b inv ∧ (c ∧ inv ⇒ P wp inv ) ∧ (¬ c ∧ inv ⇒ r )
I Calculating (c ∗ P ) wp r involves some form of a search for such an invariant inv .
I Automating it successfully is equivalent to solving the Halting Problem (impossible).
This class was live proofs of the wp laws from the previous class.

I The formalisms and tools used varied a lot in terms of power and expressiveness.
I Surprisingly, almost all of the groups got very similar results, finding similar errors.
I Key conclusions:
I Using tool-support to formally analyse a real-world system like Mondex is quite feasible.
I The precise choice of tool is often not that important.
I A special issue (Vol. 20, No. 1, Jan 2008) of the Formal Aspects of Computing journal has papers detailing these results.

I Want to establish mutual authentication between initiator A and responder B.
I Using public-key protocol:
I Agents have public keys known to all (Ka , Kb )
I Agents have secret keys, known only to themselves (Ka−1 , Kb−1 )
I Agents can generate nonces (random numbers) (Na , Nb )
I The Needham-Schroeder Public Key Protocol (NS-PKP)
I published in 1978
I uses a 7-message sequence to ensure A and B know they are talking to each other.
Recent Research at TCD: “slotted Circus”
I We are using UTP to model hardware languages like Handel-C
I program-like syntax (C with parallelism)
I synchronous clocks — assignments synchronise with clock
I true parallelism — x , y := y , x works directly
I inter-thread channel communication
I The language Circus describes concurrent/shared-variable programs
I has a UTP semantics
I key auxiliary observation:
Traces : sequences of Events (tr , and tr 0 )
I Developed by Jim Woodcock and Ana Cavalcanti at York
I In TCD we have slotted-Circus
I Language has a Wait (for clock tick) primitive.
I Traces are chopped into discrete time-slots.

Recent Research at TCD: “slotted Circus++”
I One project looked at giving a semantics to priority
I Concept of event priority is non-trivial and involves time in an essential way
I PhD, Paweł Gancarski (2011)
I Another project looked at giving a UTP semantics to probability
I Done already, relating starting state to sets of final distributions
I The trick was to get a homogeneous relation (same start/finish type)
I We relate start distributions to final ones (harder than it sounds)
I PhD, Riccardo Bresciani (2012)
I Both projects looked at extensions to slotted Circus.
I One of the grand challenge pilot studies was proposed by NASA JPL
I verify a POSIX filesystem built on Flash Memory
I We looked at modelling the Flash Memory devices themselves (in Z)
I We also looked at modelling the hardware/software interface and verifying Flash Memory internal control FSMs against that interface
I We found errors in the Nand Flash Specification
I Open Nand Flash interface (ONFi)

I Current spacecraft have many special purpose computers
I ESA wants to merge these into one general purpose computing platform
I to save weight
I known as Integrated Modular Avionics (IMA)
I Aircraft did this 20 years ago
I IMA requires an operating system that securely partitions applications
I each app has its own partition
I all inter-app communication and access to hardware resources is via o/s kernel