
Run-Time Verification*
*Thanks to Gian-Luigi Ferrari for the slides
Verification
•  Static: based on complete analysis of code/models
of code
o  static analysis / abstract interpretation
o  theorem proving
o  model checking

•  Dynamic: based on single executions of program/


system testing
o  runtime verification
RV  Main  Ideas
•  Main idea of RV:
o Execute program to analyze
•  Using instrumentation or in a special
runtime environment
o Observe execution trace
o Build model from execution trace
o Analyze model
Steps above may be combined with other
techniques
Static Analysis (in a nutshell)
[Diagram: Code -> Extract -> Model -> Analyze -> Bug 1, Bug 2]
Advantages:
+  good code coverage
+  early in development
+  mature field
Limitations:
-  undecidable problem, so false positives/negatives or does not scale
Runtime Verification
[Diagram: Code -> Execute -> Event Trace -> Model (highly customized for the property of interest) -> Analyze -> Bug 1, Bug 2]
Advantages:
+  precise (no false alarms)
+  good scalability and rigor
+  recovery possible
Limitations:
-  code must be executable
-  less code coverage
How to address RV Limitations
•  Code must be executable
o  Use complementary static analysis earlier in the process
o  Use symbolic execution via abstract interpretation
•  Less code coverage
o  Integrate the RV technique with the existing testing infrastructure: the unit
tests should already provide good code coverage; invoke RV toolkits on each test
o  Systematic re-execution: cover new code each time
o  Symbolic execution covers many inputs at once
RV Definition
•  Runtime Verification is the discipline of computer science dedicated to the
analysis of system executions (possibly leveraged by static analysis) by studying
specification languages and logics, dynamic analysis algorithms, programming-
language techniques, and system instrumentation.
•  Runtime Verification is the study of how to get as much out of your runs as
possible.
One  field  –  many  names
•  Runtime verification
•  Runtime monitoring
•  Runtime checking
•  Runtime analysis
•  Dynamic analysis
•  Trace/log analysis
•  Fault protection
•  Runtime enforcement
Technique               Automated   Scalable   Coverage
Model checking          yes         no         finite
Theorem proving         partially   no         complete
Static analysis         yes         yes        complete
Run-time verification   yes         yes        low

scalable: applies to realistic systems without too much pain
coverage: to what extent all possible executions are explored
Combining static and dynamic techniques
•  From static to dynamic:
o  prove as much as possible with static techniques
o  leave the rest for dynamic techniques
•  From dynamic to static (a dual view):
o  decide the set of program locations to instrument to drive monitors
o  use static analysis to reduce that set
RV: STANDARD TERMINOLOGY
•  Events record runtime behavior: snapshots of state or actions performed
•  A finite sequence of events is a trace τ
•  A property φ denotes a language L(φ) (a set of traces)
•  τ satisfies φ iff τ ∈ L(φ)
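To make the terminology concrete, here is a minimal sketch (hypothetical Java, not from the slides): events are plain strings, a trace is a finite list of events, and a property is a predicate deciding membership in L(φ).

import java.util.List;
import java.util.function.Predicate;

// Terminology sketch: an event is a string, a trace tau is a finite sequence of
// events, a property phi denotes a set of traces L(phi), and tau satisfies phi
// iff tau is a member of L(phi).
class Terminology {
    static boolean satisfies(List<String> trace, Predicate<List<String>> property) {
        return property.test(trace);   // tau in L(phi)
    }

    public static void main(String[] args) {
        // Example phi: the trace contains equally many open and close events.
        Predicate<List<String>> phi = t ->
            t.stream().filter("open"::equals).count()
                == t.stream().filter("close"::equals).count();

        System.out.println(satisfies(List.of("open", "close"), phi));          // true
        System.out.println(satisfies(List.of("open", "open", "close"), phi));  // false
    }
}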
TRACES
•  A trace σ is a formal view of a discretized execution:
o  a sequence of the program's states,
o  a sequence of the program's events,
o  a mix of states and events
TRACES
During execution:
[Diagram: timeline with the known past, the present moment "now", and many possible futures]
•  we are in the present moment now
•  the past is known
•  the future is unknown - many possible futures
Verdicts
Giving verdicts along the way
•  Should detect success/failure as soon as possible
•  Standard approach is to use a four-valued verdict domain
•  Consider all possible extensions of a trace

   current trace τ   all extensions of τ     Action                 Verdict
1  τ ∈ L(φ)          all stay in L(φ)        stop with Success      ⊤
2  τ ∈ L(φ)          unknown                 carry on monitoring    ⊤ (still)
3  τ ∉ L(φ)          all stay outside L(φ)   stop with Failure      ⊥
4  τ ∉ L(φ)          unknown                 carry on monitoring    ⊥ (still)
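A small sketch of the four-valued verdict domain from the table above (hypothetical Java; the names are mine): two final verdicts for the cases where all extensions agree, and two provisional ones that keep the monitor running.

// Four-valued verdict domain (rows 1-4 of the table above).
enum Verdict {
    TRUE,         // tau in L(phi), and every extension stays in L(phi): stop with Success
    FALSE,        // tau not in L(phi), and no extension is in L(phi): stop with Failure
    STILL_TRUE,   // tau in L(phi), but a future extension may leave L(phi): carry on monitoring
    STILL_FALSE   // tau not in L(phi), but a future extension may enter L(phi): carry on monitoring
}

class VerdictTable {
    // inLanguage: does the current trace tau satisfy phi?
    // decided:    do all possible extensions of tau agree with the current status?
    static Verdict verdict(boolean inLanguage, boolean decided) {
        if (inLanguage) return decided ? Verdict.TRUE : Verdict.STILL_TRUE;
        return decided ? Verdict.FALSE : Verdict.STILL_FALSE;
    }
}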
RV Pictures
Runtime verification in practice. We might possibly have synthesized the monitor from a property.
[Diagram: property -> monitor; system -(instrumentation)-> observe -> monitor -> verdict; monitor -(feedback)-> system]
•  Instrument the system to record relevant events
•  Dispatch each received event to the monitor
•  Compute a verdict for the trace received
•  Possibly generate feedback to the system
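A minimal sketch of this picture (hypothetical Java interfaces, mine; it reuses the Verdict enum from the earlier sketch): the instrumentation inserted into the system dispatches each recorded event to the monitor, which computes a verdict and possibly generates feedback.

// Hypothetical interfaces mirroring the picture above.
interface Monitor {
    Verdict step(String event);       // update the monitor with one event, return the current verdict
}

interface Feedback {
    void onViolation(String event);   // feedback possibly sent back to the system
}

class InstrumentedRun {
    private final Monitor monitor;
    private final Feedback feedback;

    InstrumentedRun(Monitor monitor, Feedback feedback) {
        this.monitor = monitor;
        this.feedback = feedback;
    }

    // Called from the instrumentation points that record relevant events.
    void emit(String event) {
        Verdict v = monitor.step(event);   // compute a verdict for the trace received so far
        if (v == Verdict.FALSE) {
            feedback.onViolation(event);   // e.g. raise an alarm or start recovery code
        }
    }
}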
Monitor
•  Offline: the trace is analyzed a posteriori,
e.g., by analyzing a log file / trace dump
•  Online: the trace is analyzed in a lock-step manner
o  external: the monitor runs in parallel with the system, e.g., over some
communication infrastructure
•  synchronous (the system waits for the response)
•  asynchronous (buffered communication)
o  internal: the monitor's code is embedded into the application
Monitor placement
[Diagram showing the three placements:
  offline:          system and monitor, connected via a recorded trace
  online, external: system and monitor running side by side
  online, internal: monitor embedded inside the system]
Reaction
Reaction can take several forms:
•  Display an error message
•  Throw an exception in the monitored program; the monitored program then deals with it
•  Launch some (recovery) code: the effect depends on the monitor's placement
Monitor Specification
•  Program (built-in algorithm focused on a specific problem)
o  data race detection
o  atomicity violation
o  deadlock detection
•  Programming-language primitives
•  Design by contract (pre/post conditions)
o  JML for example (see the sketch after this list)
•  Logic – formal specifications
o  state machines
o  regular expressions
o  grammars (context-free languages)
o  temporal logic (past time, future time)
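For the design-by-contract style mentioned above, a small illustrative JML-annotated class (the class and method names are mine): pre/post conditions written as annotations that a runtime assertion checker can turn into monitors.

// Illustrative design-by-contract specification in JML.
public class Account {
    private /*@ spec_public @*/ int balance;

    // Precondition: only positive amounts may be deposited.
    //@ requires amount > 0;
    // Postcondition: the balance grows by exactly the deposited amount.
    //@ ensures balance == \old(balance) + amount;
    public void deposit(int amount) {
        balance = balance + amount;
    }

    //@ ensures \result == balance;
    public /*@ pure @*/ int getBalance() {
        return balance;
    }
}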
Propositional  vs  
parametric
•  Propositional: events are just strings
•  Parametric: events carry data values

•  Solutions exist spanning the two dimensions

•  Expressiveness of specification language


•  Efficiency of monitoring algorithm
The propositional approach: an example

Propositional
Record propositional events, for example:
•  open, close
Define a property over propositional events, for example:
•  LTL (finite-trace): □(open → (¬open U close))
   (□ "henceforth": the formula is always true; → implies; U "until": close is true
   at some point, and ¬open is true until that time; "next": true at the next step)
•  RE: (open.close)*
•  DFA: [two states 1 and 2; open goes from state 1 to 2, close goes from state 2 back to 1]
Verification: check if each trace prefix is in the language of the property

Using projection
Take the trace
open.read.write.close.open.read.close
What do we do with read and write?
Filter out irrelevant events / project on relevant events:
open.close.open.close
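A minimal propositional monitor for this property (hypothetical Java, mine): project the trace onto the relevant events open and close, drive the two-state DFA, and report a failure as soon as some prefix can no longer be extended to a word of the language.

import java.util.List;

// Two-state DFA for (open.close)*: state 1 expects "open", state 2 expects "close".
class OpenCloseMonitor {
    private int state = 1;
    private boolean failed = false;

    void step(String event) {
        // Projection: ignore irrelevant events such as read and write.
        if (!event.equals("open") && !event.equals("close")) return;
        if (failed) return;
        if (state == 1 && event.equals("open"))        state = 2;
        else if (state == 2 && event.equals("close"))  state = 1;
        else failed = true;   // this prefix is not a prefix of any word of the property
    }

    boolean ok() { return !failed; }

    public static void main(String[] args) {
        OpenCloseMonitor m = new OpenCloseMonitor();
        for (String e : List.of("open", "read", "write", "close", "open", "read", "close"))
            m.step(e);
        System.out.println(m.ok());   // true: the projected trace is open.close.open.close
    }
}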
Parametric
•  Consider the code (an illustrative File API with explicit open/close):
File f1 = new File("manual.pdf");
File f2 = new File("readme.txt");
f1.open();
f2.open();
f2.close();
f1.close();
•  If we record only propositional events, the trace is
open.open.close.close
•  Not good: we want to parameterize events with data values and use those
values in the specification. Instead, record the parametric trace
•  open(manual.pdf).open(readme.txt).close(readme.txt).close(manual.pdf)
Parametric properties
Using the events:
•  open(f) when file f is opened
•  close(f) when file f is closed
the property becomes:
•  DFA: [two states 1 and 2; open(f) goes from state 1 to 2, close(f) goes from state 2 back to 1]
Efficient monitoring of parametric properties is a main challenge
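One common way to monitor the parametric version is trace slicing: keep one copy of the propositional automaton per parameter binding (here, per file). A minimal sketch (hypothetical Java, mine), assuming events carry the file name.

import java.util.HashMap;
import java.util.Map;

// Parametric monitoring by slicing: one DFA state per file f.
class ParametricOpenCloseMonitor {
    private final Map<String, Integer> stateOf = new HashMap<>();   // file -> DFA state (1 or 2)
    private boolean failed = false;

    void open(String f) {    // event open(f)
        if (stateOf.getOrDefault(f, 1) == 1) stateOf.put(f, 2);
        else failed = true;                  // open(f) while f is already open
    }

    void close(String f) {   // event close(f)
        if (stateOf.getOrDefault(f, 1) == 2) stateOf.put(f, 1);
        else failed = true;                  // close(f) while f is not open
    }

    boolean ok() { return !failed; }

    public static void main(String[] args) {
        ParametricOpenCloseMonitor m = new ParametricOpenCloseMonitor();
        m.open("manual.pdf"); m.open("readme.txt");
        m.close("readme.txt"); m.close("manual.pdf");
        System.out.println(m.ok());   // true: each slice is open(f).close(f)
    }
}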
Quantified
Quantify over variables in a parametric property.
Update a file collection:

let rec updateFiles(itCollection, info) =
    if (itCollection.hasNext) {
        use r = new File(itCollection.next())
        if (updatable(r, info)) {
            ...  // update the file with info
            updateFiles(itCollection, info)
        }
        // the file resource is disposed implicitly here (end of the use scope)
    }
    ...

new(r1) new(r2) ... new(rn) rel(rn) ... rel(r2) rel(r1)

Traces are words in the context-free language w wR (a word w followed by its reverse wR)
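A property like this is context-free rather than regular, but it is still easy to monitor with a stack. A minimal sketch (hypothetical Java, mine) checking that resources are released in exactly the reverse order of their creation.

import java.util.ArrayDeque;
import java.util.Deque;

// Monitors new(r)/rel(r) traces of the shape w wR: resources must be
// released in the reverse order of their creation.
class NestedResourceMonitor {
    private final Deque<String> pending = new ArrayDeque<>();
    private boolean failed = false;

    void newResource(String r) {   // event new(r)
        pending.push(r);
    }

    void release(String r) {       // event rel(r)
        if (pending.isEmpty() || !pending.pop().equals(r)) failed = true;
    }

    // The complete trace is accepted iff nothing went wrong and no resource is left open.
    boolean ok() { return !failed && pending.isEmpty(); }

    public static void main(String[] args) {
        NestedResourceMonitor m = new NestedResourceMonitor();
        m.newResource("r1"); m.newResource("r2");
        m.release("r2"); m.release("r1");
        System.out.println(m.ok());   // true: new(r1) new(r2) rel(r2) rel(r1)
    }
}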
Challenges
•  Code instrumentation
•  Definition of specification languages
o  expressive
o  elegant
•  Synthesis of efficient monitors
•  Low-impact monitoring
•  Integration of static and dynamic analysis
•  How to control the application in case of violation/validation
•  Programming-language design that supports RV
•  Learning specifications from traces
•  Program visualization
•  ...
Some Refs
•  Klaus Havelund and Allen Goldberg: Verify Your Runs. Verified Software:
Theories, Tools, Experiments (VSTTE'05), 2005.
•  K. Havelund, G. Reger, D. Thoma, E. Zalinescu: Monitoring Events that Carry
Data. Lectures on Runtime Verification, 2018: 61-102.
•  Martin Leucker and Christian Schallhart: A Brief Account of Runtime
Verification. Journal of Logic and Algebraic Programming, Volume 78:5, 2009.
