>> Shuvendu Lahiri: Okay. It's my great pleasure to have Tim Wood visiting
us from Imperial College London today. Tim has been working with Sophia
Drossopoulou for the last couple of years on the idea of program equivalence and
theorem provers, and that's something of interest to us. And so I think he's
going to present two talks today. The first one he just presented at a workshop
at Splash and the other one is more of a work in progress. So yeah, Tim.
Welcome.
>> Tim Wood: Great. Thank you. Nice to be here. Yeah. So I'm going to
talk about two things. The first one is a joint piece of work with my
supervisor, Sophia Drossopoulou. So the context is we're interested in
program maintenance where programmers have to take an existing piece of
software, potentially large piece of software, and make some kind of
modification to it, change its behavior, add a new feature, fix a bug,
something like that. When they do that, they usually want to somehow
preserve most of the behavior of the existing program. What they don't want
to do is unintentionally break some existing part of it. Usually there don't
exist specifications for the entire program, and maybe there are no tests or
not very good tests. And they don't really want to have to read the
whole program first. So they don't want to have to understand the whole
program before they can change it. So the two questions that we're trying to
address here are: what exactly does it mean to preserve most of the behavior of a
program and have we got some way of checking whether or not they've done
that? So what I'm going to do in this talk is try to propose some precise
criteria for this partial preservation of behavior, which is that only
some objects should be affected by the change and the rest of the objects
should continue to behave in the way they did previously. We're going to ask
the programmer to tell us what those objects are, so they're going to provide
us with some expression which says this change was intended to impact these
objects, and if it impacts any others, then it's done something I didn't want
it to do. And then we need some way to check this. Now, we're developing a
tool which does test generation to try and produce tests that falsify this
condition. I'm not really going to talk about it at all; I'm just going to
focus more on the formal side of it. What we wanted to achieve was that we
didn't need to track all of the unaffected objects during the whole of the
program execution in order to be able to check these conditions, because the
programs may be very big and we don't want to generate tests that have to
explore the entire program in order to explore the bit we've changed. So
we're going to propose this condition for our equivalence, our behavioral
equivalence criterion, that says if the traces of method calls between the
bit of the program that we wanted to impact and the bit that we didn't are
preserved in some way between the two executions, then we know that the
impact is contained within those affected objects. We did a proof, a
machine-checked proof using Dafny from Microsoft, which maybe I'll talk a little bit
about another time. So this is a kind of small example program that gets a
list of students from somewhere from a database and via some calls eventually
tries to award a prize to the top three students in the list. One of the
things that it does is it prints out the names of all of the students that it
considered when deciding who to award the prize to. But it prints out in
whatever order the list happens to be and a developer decides to come along
and change it so that it prints the students in name order. So what we ask
the developers to do is to allege that some of these objects are affected,
and then we assume that all -- well, we take that to mean that the developer
did not intend to affect any other objects. So here we expect the object
that is doing the IO to be affected. The logger to be affected. And the
objects in this list to be affected, because we're going to try and sort
them so we're going to have to query them, we're going to have to query the
list, we're going to have to query the students, which means calling
additional methods.
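For concreteness, the change described might look something like the
following sketch. The names and structure are our own invention, not code
from the talk; the point is that the sort mutates the shared list in place,
which is exactly the unintended side effect discussed below.

    import java.io.PrintStream;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical stand-in for the student objects in the example.
    record Student(String name) {}

    class PrizeReport {
        // Version 2 of the change: print the considered students in name order.
        // Sorting the shared list in place is a side effect on an object the
        // change was not supposed to affect; code that later awards prizes to
        // the first three students now sees a differently ordered list.
        static void printConsidered(List<Student> students, PrintStream logger) {
            students.sort(Comparator.comparing(Student::name)); // mutates the list!
            for (Student s : students) {
                logger.println(s.name());
            }
        }
    }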
>>:
[Indiscernible] expressions [indiscernible] entire class?
>> Tim Wood: We -- right. So actually what we do, again I'm not going to
talk about it much, but the programmer can give us an arbitrary -- well, a
side effect-free Java expression that we run when objects are constructed to
decide whether or not they should be affected. So they can write anything
they want over the whole state of the program at that time, so it can be the
class of the object. It can be -- we know which line we're executing. We
know the -- something about the context, or some approximation of the
context. And so, you know, they could say all of the student objects
constructed inside the database, or they could say the object on line 20,
or -- you know, it's fairly obvious what they can write, but there's a
tradeoff between that and the performance of the analysis. So if they write
something that's very difficult to calculate, it's going to take a long time
to run because we're going to explore lots of paths through the program and
we have to run it a lot of times.
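A sketch of what such an affectedness expression might look like. The hook
and its parameters are hypothetical, invented here for illustration; the
talk does not show the tool's actual API.

    // Hypothetical stand-ins for classes in the program under analysis.
    class Database {}
    class Student {}

    class Affectedness {
        // Evaluated once per constructed object; must be side-effect free.
        // The parameters approximate the allocation context described above.
        static boolean isAllegedlyAffected(Object newObj, Class<?> allocatingClass, int line) {
            // "All Student objects constructed inside the database are affected,
            // and so is whatever is allocated on line 20."
            return (newObj instanceof Student && allocatingClass == Database.class)
                    || line == 20;
        }
    }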
>>:
[Indiscernible] falsifies the allegedly affected.
>> Tim Wood: Right. Exactly. So we're trying to find some input for the
program such that, if we run both versions, the program produces at some
point program states that we can't match together. And I'll say precisely
what I mean by match together shortly. So here, this program has a bug.
This sort has a side effect on the list, so by the time we get to here, this
object is going to execute differently. It's going to get different students
given the prizes. Alphabetically first students will get the prizes, which
is good for some people and not for me. So I guess the motivation for using
this criterion is that it doesn't require the programmer to say anything
about the program that they aren't impacting. The idea is that they only
have to talk about the objects that they intended to do something to. And
hopefully they don't even have to read the rest of the program, and we're
going to tell them if it was safe to do that or not. So what do I mean by having the
states correspond? Well, we're going to say they correspond when the stack
and the heap have the same shape if we throw away all of the affected
objects. So this is stack and the heap. This is from Version 1, Version 2.
And here, this one has this additional pointer. This is a different program,
by the way. This is actually a program that puts things in and out of a
list. The stack and the heap of the other example is too big to draw. So
these don't correspond if we consider all of the objects, but what we do is
we delete the affected parts and now they do correspond. And they have
exactly the same shape.
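In symbols, using our own notation rather than anything from the slides, the
correspondence is roughly

    s_1 \approx s_2  \iff  s_1|_{unaffected} \cong s_2|_{unaffected}

where s|_{unaffected} is the stack and heap with all allegedly affected
objects deleted, and \cong is isomorphism of the remaining object graphs.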
So what we didn't want to do was have to precisely execute both versions of
the program and at every pair of states try to decide whether they
correspond or not. So what we're going to do is only compare the method
calls between these affected, allegedly affected, and unaffected objects.
And so this is an execution of Version 1. This is an execution of Version 2.
And these are all different states. I drew them all the same. So we only
have to consider some of the states. So here, this one, this one, this one,
this one, this one, which are where method calls happen, method calls or
returns happen between the affected and unaffected objects. And actually we
only need to consider the topmost stack frame of each of those states, which
will have the method parameters or return values in it. I guess the nice thing
is that this sort of abstracts the difference in the number of execution
steps which can occur in the affected part so we don't have to search to find
out which things to compare. We just pick these precise points in the
execution. So we end up having to do something like this. I guess the
intuition is that the code of the unaffected objects and the values and the
method calls uniquely determine the way that the unaffected objects will
execute. So this is what we do, the traces. So this is the trace of calls
and returns between the unaffected and affected objects. These are values
which are pointers into the -- these are addresses, and we can deal with
primitives and things as well. And this is the trace from an execution of
Version 2. We can see that this trace has the same aliasing structure, so
everywhere there's a ten in this trace, for example, there's a three in this
trace. And that's what you need to check; it's quite straightforward to do,
just --
>>:
[Indiscernible] stack trace?
>> Tim Wood: No, it's a trace of method calls and returns. So whenever --
so we have, for example, this is a constructor call and then a return. And
then sometime later we add something to the list and then there's a return,
and then sometime after that we add something else and there's a return,
remove something, return, remove something. So it's like, I guess you can
think of it as the protocol between the affected and the unaffected objects.
We arbitrarily partition the program and then try to examine the protocol
that is observed between the two parts. And then see if it's preserved
between the two versions. And if this contains all of the information that
flows between those two halves of the program, then we can say something
about the equivalence of the parts where the code is the same. Did that make
sense?
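For concreteness, a trace of the kind described might be written, in an
ad-hoc notation of our own, as

    call new List()   -> returns fresh list, tagged 1
    call 1.add(2)     -> return
    call 1.add(3)     -> return
    call 1.remove()   -> returns 3
    call 1.remove()   -> returns 2

where the numbers are canonical object identities rather than raw addresses,
as explained next.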
>>: These numbers in particular, should we think of them as the values of
the pointers themselves?
>> Tim Wood: Yeah. The values of the pointers. So I mean, they can be
different between different runs of -- they can be different between
different runs of the same version, just due to address allocation. And
they're certainly different between different runs of different versions
because the modified version is free to allocate extra objects and not
allocate stuff, do whatever it wants. And I guess the point is that the --
so in Java generally, you can ignore some slight problems which you can
detect and deal with, but the only thing you can tell about an object that
you have been given a reference to, without sending it a method call, is
whether or not you've got the same object as at some earlier point in the
execution or a different object. So it's sufficient to just check whether
or not, when you talk to
another object, it's giving you back the same reference or a different one.
And if, in all the places where it gave you the same reference in the first
version, it does in the second version too, then you can't observe any
further -- anything else about what's happening. There's some things to do
with hash codes and reflection which occur in Java, and in various other
languages there's ways to actually get hold of the real values unfortunately.
But you can easily check whether or not a program is doing that and then tell
the programmer that it's happened. So this is an example of one which is not
the same. So here, there's a four here and there's a one here. But here
there's a two. And a four.
>>:
[Indiscernible]?
>> Tim Wood: No. So what --
>>:
[Indiscernible]?
>> Tim Wood: No. We literally just look at these numbers. So what the --
we don't look at the heap at all. So what I do in the -- what the tool
actually does is it maintains two separate address spaces based on whether
or not the objects are affected or unaffected. Which makes detecting the
traces very easy. Whenever you have a call between an object in one address
space and one in the other, you just -- you know you've got the trace. And
then it keeps track of the order that it saw the objects in. So when it sees
this one, it tags it with one. And then this one it tags with two, and then
when it sees a new one it tags it with three, just with a counter. It does
the same thing in this one. And then we check whether the traces are equal,
so it's very, very cheap to check. The thing that makes it less cheap is
that if you've got -- I'm not sure if I should get into this. If you have
got primitives in here and you're trying to treat them slightly more
abstractly, which we do with this symbolic execution here, you have to check
if the parameters were calculated the same way or if equivalent values have
been calculated for the primitives. But certainly the object graph is
extremely cheap to check. The only problem is obviously having to produce
these traces.
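To illustrate the tagging scheme, here is our own sketch, not the tool's
code: canonicalizing addresses by the order in which they are first seen
makes trace comparison a simple equality check.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class TraceTags {
        // Replace each address by the order in which it was first seen, so two
        // traces compare equal exactly when their aliasing structure matches.
        static List<Integer> canonicalize(List<Long> addresses) {
            Map<Long, Integer> firstSeen = new HashMap<>();
            List<Integer> tags = new ArrayList<>();
            for (long a : addresses) {
                firstSeen.putIfAbsent(a, firstSeen.size() + 1); // tags 1, 2, 3, ...
                tags.add(firstSeen.get(a));
            }
            return tags;
        }
    }

For example, canonicalize([10, 7, 10]) and canonicalize([3, 5, 3]) both
yield [1, 2, 1], so those traces match, whereas canonicalize([2, 4, 4])
yields [1, 2, 2] and would not.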
>>: So where does this affected and unaffected set of objects show up in this
picture? Are you taking advantage of that?
>> Tim Wood: Yes. So the only things that appear in the trace are calls
where either the caller was affected and the callee was unaffected or vice
versa. Including constructor calls and returns. So we're trying to capture
the -- I guess the flow of information between the two parts.
>>: Can you turn the whole thing into an inference problem where instead of
letting the programmer specify these sets [indiscernible] too expensive?
>> Tim Wood: It's something we considered. I think that you could -- what
you could probably do is take some -- I mean, there's probably other things
you could do as well. Something that occurred to us is that you could take
the tests and run them and then try to guess some partition based on that.
And then do this to try and explore whether or not that was a reasonable
partition. And you can give some kind of meaning to that partition by
looking at which IO operations occur within the affected part, to try and
give it some meaning to the programmer, because otherwise, if you just come
up with some -- there is always some partition. So you need to sort of then
somehow give the programmer some help about whether or not this was the one
they wanted or not. And asking them to specify is a very firm way of doing
that. They have to think about it for themselves.
>>:
[Indiscernible] you could try to find a [indiscernible].
>> Tim Wood: Yeah. Yeah. And I think it could be useful to try and do
that. And if you can generate traces somehow, it's very cheap to check, so
you can probably try and brute force it in some way. Yeah. So there's a
bunch of other things that you have to check that you're not doing. Very few
programming languages actually abstract the address completely, it seems. So
I don't know -- so I'll try and talk about this quickly. We can talk about
it more if people want to. I've got some actual slides with more details on
it if you wanted to get into that at all. So I gave this talk at a workshop and
I guess I was doing a bit of a Dafny ad so it's not very specific. But what
we do is we write a function that actually executes the operational semantics
of the language that we're interested in. You can run this. This here, we
have the instructions. And then you can just do a case split, calculate the
next state of the program. So instructions. You have this non-ghost method.
And then the lemmas are written as procedures. So we're going to do proof
by induction, which means doing this recursive call back to the lemma. So
it's this ghost imperative code, but we're going to use it to prove this
declarative theorem. So you read it as: the parameters are universally
quantified and the return values are existentially quantified, so we're
going to say for all of the parameters there exist some return values such
that the precondition implies the postcondition. And so here, this is our
trace equivalence. And if we start with these two states that are trace
equivalent, then we're going to end up at equivalent states. Always. And
some other stuff as well, but that's the gist of it. And then you can just
write normal imperative code. So this is the base case of the induction,
which Dafny was able to prove automatically. That's the inductive step. I
haven't put any of the detail on there, but I have some extra slides if you
want to talk about it more.
>>:
So what is [indiscernible]?
>> Tim Wood: So the theorem here says that -- so this says S1 executes to S3
by trace LS1. So these are the traces like I showed previously. And we know
that executing from S1 and executing from S2 are trace equivalent. So we
carry the version of the code around in the state. So this says that we've,
by some method, we've established that they're trace equivalent, say just by
inspecting the traces. And then it's going to ensure that there exists some
trace from S2 to reach a state S4 such that the traces are equivalent, which
is fairly -- comes basically for free from that. This is the main one, which
is that at the end of the trace, the states at the end of the trace will be
compatible in the way that I described, so if you throw away the affected
part, there will be equivalence. So there is an isomorphism that relates
these states. And then we argue inductively by looking at the first step in
the execution and then the rest of the execution. But it's a big proof.
It's like 6000 lines of Dafny, something like that.
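Our rough reconstruction of the statement, in our own notation rather than
the Dafny syntax on the slide:

    s_1 \xrightarrow{\ell_1} s_3  \wedge  TraceEquiv(s_1, s_2)
        \implies  \exists \ell_2, s_4 .\ s_2 \xrightarrow{\ell_2} s_4
                  \wedge  \ell_1 \sim \ell_2  \wedge  s_3 \cong s_4

where \ell_1 \sim \ell_2 says the traces have the same aliasing structure
and s_3 \cong s_4 is the correspondence on the unaffected parts described
earlier.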
>>: [Indiscernible] is because it needed when you return? You could have
written that [indiscernible] there exists an S4.
>> Tim Wood: Oh, right. Yeah. Something that uses that here. So then
there's some more. So there's a determinism argument that goes after this in
the proof. So we have to say that there is only one of these eventually.
>>: You could [indiscernible] the AL in line five the same way. So L S2 and
S4 are [indiscernible] parameters which [indiscernible] you can think of it
as that there exists an L S2 and S4, but you're just going to give them
[indiscernible] the same thing for AL.
>> Tim Wood: Yeah. The only reason I don't is because I didn't have to use
this afterwards. So yeah, but they mean exactly the same thing. I would
have had to assign it here and then not use it. So yeah. It's very useful.
Otherwise you would have to call this and then do like an assign such that
and so on to get these back again but here you just run it all together and
it's a lot neater.
>>:
Does it have a current stack or is it just --
>> Tim Wood: The state, so the state is here. There's another diagram
later. I'll explain it. It's not about that but it will make it clearer why
I give the answer that I do. So it's a reasonably big proof and some of the
predicates are quite large, so the operational semantics function is
reasonably big. And we have these predicates over the whole state, about
what doesn't change for every kind of instruction that executes. They are
quite big as well. And we use those to -- when I first started doing this,
we used those together in quite a few of the theorems. So when the proof is
correct and complete, it goes through quite quickly, but if it -- when you
are trying to write it, a lot of the time you end up with timeouts.
Because the SMT solver is trying to prove something that either is not
provable or you haven't given enough detail for it to prove. So a lot of
what shaped the approach we took is not related to how well it's going to
perform when it's all complete and working, but how well it's going to
perform when you're editing and making changes to it and trying to get it to
work. And if you have a lot of, I guess, axioms or lots of different ways
that Z3 can proceed with searching for the proof but there is not a proof
available, then it can take a long time to come to the conclusion that there
is not a proof available. So you end up with a lot of timeouts. So what we
did was split
the proof up into a number of modules where we -- so I should give some
background. So you can have abstract modules in Dafny where you can define
lemmas with no bodies which essentially act as axioms within those modules.
You can have abstract data types so types where you just give them a name but
you don't give any concrete implementation. And then you can use this
process of refinement where you say this concrete module refines that
abstract module, and here you have to give bodies for everything. You have to give
concrete types for all of the abstract types. But what it allows is that you
can have some proof here of our main theorem that depends on these abstract
modules and where the final concrete implementations are not exposed to this
theorem. So it doesn't have the -- when you're working on it, it doesn't
have the opportunity to go running off trying to prove things with these much
bigger concrete definitions. It has what it has there and it can exhaust
them quite quickly normally. This is my understanding of what's happening.
Maybe someone can correct me at some point. But this is why we split the
proof up in this way. I guess the kind of nice side effect of that is that what we ended up
with is something where almost the whole of the rest of the proof is not
dependent on the concrete semantics of the language. So you can switch
different representations and different semantics in here and reuse the rest
of these pieces. And if you are willing to change these bits, then you can
substitute very different languages into the proof and still reuse these
bits. So this kind of performance concern led us to produce this much more modular
and potentially reusable thing, which is kind of cool. So to answer your
earlier question, the states in that, the lemma I showed you previously is in
here. So the states are quite abstract; they don't actually have anything in
them like objects or heaps or anything. They have -- they don't have
anything in them, actually. And then we provide some refinement that talks
about heaps and stacks. But actually, that lemma will apply to anything for
which you can provide this refinement -- I haven't tried it, but even a
language that didn't have a stack and objects. That theorem depends really
only on this notion of labeled transition systems, so if you can provide a
labeled transition system and a mapping, an isomorphism between two labeled
transition systems for your language, that is preserved at each step in the
execution, then you can reuse the theorem. So not necessarily even objects.
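Roughly, in our notation, what the refinement has to supply is a labeled
transition system together with an isomorphism that is preserved by each
step:

    s_1 \cong s_2  \wedge  s_1 \xrightarrow{\ell} s_1'
        \implies  \exists s_2' .\ s_2 \xrightarrow{\ell} s_2'  \wedge  s_1' \cong s_2'

Any language whose semantics can be cast in this form can reuse the theorem.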
If you're imaginative enough, which I'm not so far, but you can try it with a
completely different kind of language. So that's the first talk finished. I
don't know if you want -- shall I just jump straight into the next one?
>>:
[Indiscernible].
>> Tim Wood: So the next part is kind of related in some ways. Again, we're
looking at program equivalence and isomorphisms. And this is joint work
with myself, Shuvendu Lahiri, and again Sophia. Here we're going to look at
a kind of program verification-based approach, and this is work in progress. So
some parts of it are not quite complete. Hopefully, maybe some people have
some things to say about it. So that's good. So why is it useful? Similar
to what I said previously, you've changed the program in some way. You want
to check that you've only changed the bit of it that you wanted to change.
And maybe program equivalence is useful in some other places where people use
it for trying to check that compilers are correct and compiler optimizations
are correct, for example. So one thing that is kind of difficult to do with
the current state of the art is deal with modifications to programs that
affect the way the memory is allocated. So if you reorder allocations or you
have extra allocations in one program or remove -- or fewer allocations in
one program, it's particularly difficult if you allow loops or recursion that
allow -- that can make unbounded extra allocations or different allocations.
So that's what we're going to try to address here. So first of all, what
does it mean -- what's typically done to compare programs for equivalence?
So here's two programs that are more or less the same except the order of the
allocations has been changed between the two versions. So if we use a
program verification approach, we would equate the initial state so we say
that X has the same value on entry, then we make some nondeterministic memory
allocation, and here we get these two addresses and here we get these two
addresses. Now we need to establish -- in order to establish that these
procedures are equivalent, we need to establish that these calls are going to do the same
thing. But they're being called with different values. So that's difficult.
We don't know whether or not those procedures are going to be equivalent. So
the kind of typical thing that people do here is to just assume that the
allocation is going to be deterministic. That we're going to have some pool
of objects available and we're just going to allocate them in the order that
they're asked for. So we get 1, 2, 3 here and 1, 2, 3 here. But because
these allocation orders have changed, we're going to call F here with 2 and 3
and F here with 3 and 2. So again, it's difficult to do modularly -- this
idea that we can then equate the initial states of the procedures, which we
call the maylor [phonetic] assumption here, fails because the initial states
are not the same.
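A tiny illustration of the problem, with hypothetical names of our own.
Under a deterministic allocator the two versions pass different addresses to
f, even though they are clearly equivalent up to isomorphism.

    class A {}
    class B {}

    class Version1 {
        static void p() {
            A a = new A(); // allocated first: say the allocator hands out address 2
            B b = new B(); // allocated second: address 3
            f(a, b);       // effectively f(2, 3)
        }
        static void f(A a, B b) { /* ... */ }
    }

    class Version2 {
        static void p() {
            B b = new B(); // allocation order swapped: b now gets address 2
            A a = new A(); // and a gets address 3
            f(a, b);       // effectively f(3, 2), so equating f's inputs fails
        }
        static void f(A a, B b) { /* ... */ }
    }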
There are some approaches that people take, particularly modeling the heap
graph explicitly up to some bound and trying to look for isomorphisms. But
you can't really do that if you have these unbounded updates by called
procedures. So what we do instead is we require prior to the call we
require -- start again. So what we're going to do is say that the programs
are equivalent if there is some mapping between the addresses, some
isomorphism between the objects that have been allocated, that preserves the
structure of the states of the programs. So if you start at the stack
variable, stack variable X for example, then you reach 1 and 1 there, and 1
and 1 are mapped here. But again we have this same issue with calls. Sorry. I haven't given
this presentation before and I've kind of missed a step. So what we're
trying to do is prove that in order to check whether two programs are
equivalent for all isomorphic initial states, it's sufficient to check that
they're equivalent from all equal initial states. And so we say that if we
have some program P and some program P prime, then they're equivalent if and
only if, for all pairs of initial states which are isomorphic, running P on
one is isomorphic to running P prime on the other. So that's what we're
trying to establish, that running the two versions of the programs from
isomorphic initial states, we reach isomorphic final states. What we are
currently in the process of proving, which we believe to be true, is that
if, when you run the two programs from the same initial state, you reach
isomorphic final states -- so if you can establish this -- then it's
sufficient to check the programs from equal initial states to prove that
they're equivalent for all isomorphic initial states. And what that allows
us to do, at some arbitrary place in the execution of these two programs
where we've established the states are isomorphic, is to swap them for equal
states.
So that's what we do here. We first [indiscernible] this call, so we say,
well, we've established that they're isomorphic here, so we can use our rule
that these things are going to behave the same from equal states, just by
swapping isomorphic for equal. And then we're going to do it -- now F is
going to run, and it's going to run from some isomorphic states, from the
same initial states. And I'm going to swap again to equal states after it
returns.
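Written out in our own notation, not from the slides, the definition and the
reduction being proved are roughly:

    P \simeq P'  \iff  \forall s_1 \cong s_2 .\ P(s_1) \cong P'(s_2)    (equivalence up to isomorphism)

    (\forall s .\ P(s) \cong P'(s))  \implies  P \simeq P'              (check from equal states only)

where \cong is heap isomorphism and P(s) is the final state of running P
from s.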
>>: [Indiscernible] saying that the execution cannot distinguish between
isomorphic states and actual states, right? Something like that?
>> Tim Wood: That is --
>>:
If that was proven isomorphism.
>> Tim Wood: Yeah. That's the critical thing. That's the critical feature
that allows you to do this.
>>:
It makes some assumptions [indiscernible].
>> Tim Wood: You have to have -- yeah. You have to have a program that
doesn't distinguish the values, so either a language that doesn't or a
program that doesn't use features that -- where it will distinguish the
values. Similar to what I was saying in the previous presentation, actually.
And yes, so it turns out that if you do this and you use this kind of very
abstract map encoding for heaps, then something like the Boogie intermediate
program verifier can do this check for you. So this is, with some lightly
invented syntax, what we do. So this is similar to kind of previous work
we've done with mutual summaries, so this is the first procedure. We put the
body of it here. And then the other procedure. I don't know why I've got P
prime before P. Sorry about that. That should be P and that should be P
prime. But we'll have to live with it. Then we start again from the same
initial state. Run it again. And then we're going to check that the final
state is isomorphic.
>>:
[Indiscernible]?
>> Tim Wood: Right. So yeah, the question is going to be what happens if P
calls another procedure. We're not talking about object-oriented programs
here. They could be. But this is just any kind of program that is dealing
with dynamically allocated memory, and we're not going to make a distinction
between affected and unaffected. We're looking for procedures which are
equivalent. We won't deal with any procedures -- well, at least we won't do
it with procedures that are not equivalent in any way, and there are
extensions that you can do to deal with procedures that are only equivalent
over like --
>>:
[Indiscernible].
>> Tim Wood: It allows you to do some things like allocate some temporary
heap space within a procedure, and as long as you throw it away by the end
of the procedure, we can show that the procedures are equivalent, which is
very difficult to do with the current systems. So one of the challenges is
the context of the call. You can have some arbitrarily big and complicated
calling context, and so here for example, we have two procedures which are
not equivalent. Yeah. Right. So they both allocate two new objects. This
one then makes the two unreachable from X. And this one points it here,
points it there, and then makes it unreachable from X. This one points it
here and then makes it unreachable from X. If you consider this locally, so
just from the context of the callee, then this is isomorphic, so you have
this graph. You have this reachable part of the graph. But actually, in
any -- or at least in some bigger context -- these two procedure executions
are not isomorphic. So we have to have some way of saying, of talking about,
global isomorphism of the heaps at the end of the procedures. Even though
some parts of the heap may no longer be meaningful, you can't just talk
about paths from the procedure parameters, for example. So what we do is we require
the isomorphism that we end up with to extend the initial isomorphism.
You're not allowed to produce some incompatible isomorphism, just in case
there's some more calling context that can distinguish the two isomorphisms
that you produce. And because we always start from the same initial states,
the isomorphism that we extend is the identity isomorphism, so essentially
we say you must construct an isomorphism which is the identity for all
previously allocated objects and is free for any newly allocated objects. If that makes
sense. Our commuting diagram again is there. So as you said, if you have a
procedure that calls some function inside it, some other procedure inside
it, you have to have some way of using this fact about the equivalence of
these procedures in Boogie. And so this is how we do it. So we have this
mutual summary of the two procedures. So we give the procedures F and G this
free postcondition, which is a postcondition that you just get to assume
after a call to either of these procedures but it's not checked, so we're
going to check it for the body of these procedures. And this summarizes the
pre-state of the procedure call and the post-state. And we have to pass the
pre-state as ghost state here. And then we have this axiom which mutually
summarizes these two procedures, so this is the summary for a call to F.
This is a summary for a call to G. And if we can establish that the states
are isomorphic before the call, then we get that they're the same after the
call. And this way we don't have to do any further work to kind of match up
which procedure calls match with each other. Boogie is going to search and
try and match them up for us. So I guess I'm just going to talk about the
kind of difficulties with the Boogie encoding, and it's where I guess we're
not entirely satisfied with what we've got.
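As a rough reconstruction in our own notation, not the slide's Boogie text,
the mutual summary axiom has the shape

    Sum_F(s, s_F)  \wedge  Sum_G(s, s_G)  \implies  s_F \cong_{id} s_G

where Sum_F and Sum_G are the free postconditions relating each procedure's
pre-state and post-state, and \cong_{id} is an isomorphism required to be
the identity on all objects allocated before the call and free on newly
allocated ones.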
The first thing is it's quite difficult to use this idea of "there exists a
mapping between two heaps" as your definition for isomorphism, because you
have to somehow give Boogie something it can instantiate that existential
with, and we don't have it. So what we do instead is try to universally
quantify everything, so we quantify over paths from previously allocated
objects and from stack frames. That works. The other issue is that it's
very difficult to give this notion of unbounded reachability to an SMT
solver directly; I don't know how to do it.
But it turns out that you don't actually require an unbounded reachability
predicate to deal with unbounded reachability. Because although recursive
calls can make unbounded updates, what we do is we change loops into tail
recursion (see the sketch after this answer), and then each procedure call
itself can only make a bounded update. And so we only ever have to check
isomorphism up to some bounded amount of updates done within the statements
of a particular procedure. But some nice formulation of reachability would
be very helpful. And I guess the other thing
that helps is that there is always exactly one or zero isomorphisms. There's never
many isomorphisms that we consider. So I've got some more slides on that
actual Boogie encoding that are kind of intricate, so we can go through them
if people are interested or we can stop here and so I'll leave it up to you.
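A minimal illustration of the loop-to-tail-recursion rewrite mentioned
above, with hypothetical names of our own:

    // A node in a singly linked list, for illustration.
    class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    class Rewrite {
        // Loop version: a single procedure body performs unboundedly many
        // allocations and heap updates.
        static Node buildList(int n) {
            Node head = null;
            for (int i = 0; i < n; i++) {
                head = new Node(i, head);
            }
            return head;
        }

        // Tail-recursive version: each individual call makes exactly one
        // allocation, so isomorphism only has to be checked across a bounded
        // update per call.
        static Node buildListRec(int n, int i, Node head) {
            if (i >= n) {
                return head;
            }
            return buildListRec(n, i + 1, new Node(i, head));
        }
    }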
>>:
[Indiscernible].
>> Tim Wood: Yeah. Well, it gets more complicated. Or if anyone has any
questions about what I previously said, because it was maybe not the best
presentation.
>>: I do have a question but I could also wait. I mean, you're writing
proofs of encoding essentially, and you also prove some formalization. In
both cases, you're dealing with the same engine below. Do you find a
difference in the two? For example, if you were able to encode this in Dafny
instead, if that would have been useful, would you have preferred that or for
the Dafny encoding that you did of your first system, would you have
preferred to have done that in boogie?
>> Tim Wood: So Boogie feels a lot like programming the SMT solver. In
order to use it, I had to read and learn a lot about what the SMT solver's
actually trying to do underneath, how it's going to try and search the
axioms, how it's going to try and build the proof tree. So the first several
attempts I had doing this in Boogie were not very good. I guess the critical
thing comes down to getting -- understanding how the triggers and the
matching loops work. But I think that is very important here, because we're
talking -- we're trying to talk about these unbounded modifications and
there isn't a decision procedure available, so you have to compromise in
some places, and how you write the trigger expressions and construct the
predicates is I guess how you encode -- how you tell Boogie what compromise
you want. Whereas in Dafny, it felt much more like writing a program and
doing kind of [indiscernible] proofs about a program, and I didn't most of
the time have to think very much about what the SMT solver was going to do.
So Dafny feels very different. I guess because Dafny has a carefully crafted
set of trigger expressions that it's already got in its encoding. I learned
quite a lot about Boogie actually from going and reading how Dafny was
encoding the programs. But yeah, I'm not sure how to bridge that, so here,
where I feel that I need a lot of control over what I'm asking the theorem
prover to do -- and I might be mistaken -- it seems to be useful to be able
to do that. And I'm not sure how I would achieve that to the same degree in
Dafny.
>> Shuvendu Lahiri: Any more questions? Let's thank Tim.
[Applause]