

>> Nikolaj Bjorner: It’s my great pleasure to welcome Yakir Vizel, who is visiting from Princeton today and tomorrow. He will be talking about Fast Interpolating Bounded Model Checking.

>> Yakir Vizel: Thank you for the introduction, Nikolaj. So I’m going to talk about a fast interpolating bounded model checker. The reason I’m going to be talking about it is that interpolation used for model checking usually requires a sort of naive BMC implementation, since it really relies on the structure of the formula. What I’m going to show in this talk is how we can interpolate while applying simplifications, that is, while using a state-of-the-art BMC engine.

So bounded model checking was introduced in 1999. It came up as a very efficient model checking algorithm that at the beginning was used for “bug hunting”, meaning to find counterexamples. Full verification wasn’t something that it would scale to. The basic idea is that we take our transition system and represent bounded paths by unrolling the transition system, represent these paths in a propositional formula, and then use a SAT solver to determine the satisfiability of this formula. When it is satisfiable we have a counterexample, and when it is unsatisfiable we don’t have a counterexample, but again, it’s only good for bounded paths in our design. Now this method can also be extended to richer logics, for example first-order logic with SMT solvers, but in this talk we are going to be handling propositional logic.

So I will start with an example of a circuit. This circuit is a counter; it has 3 bits, even though the last bit is stuck at 0, and it counts from 0 to 3 and then resets. A symbolic representation of it looks like this. We have the initial states, which here tell us that in the initial state all bits are 0. Then we have the transition relation, which shows how the next-state value of V0, for example, is the negation of the current value of V0. The next-state value of V1 is simply the XOR of V1 and V0. The property that we want to verify is that it never happens that all bits are 1. So it’s a simple circuit.
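
Just to make this concrete, here is a minimal sketch of that symbolic representation as Python predicates; the function names and encoding style are mine, the circuit is the one from the slide.

    def init(v):                      # Init: all three bits are 0
        return not v[0] and not v[1] and not v[2]

    def trans(v, w):                  # T(V, V'): v0' = !v0, v1' = v1 xor v0, v2' = v2
        return (w[0] == (not v[0]) and
                w[1] == (v[1] != v[0]) and
                w[2] == v[2])

    def prop(v):                      # P: it is not the case that all bits are 1
        return not (v[0] and v[1] and v[2])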

Now when we come to talk about unrolling, we are simply gluing this transition system, gluing the design, together. So here we can see the first instance of the transition system and then we connect it to the second instantiation of this transition system. Now if I want to represent this with a formula it simply looks like this. So again we have the initial states and then we are describing the next-state values, which are these values right here. Then in order to connect the values in the second time frame we connect them to these variables here in that manner, and we have the property both for bound 1 and the property for bound 2 represented by this formula right here.
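
Continuing the sketch above, the bound-k unrolling can be made concrete by brute-force enumeration of paths; a SAT solver performs the same check symbolically on the CNF encoding of this formula.

    from itertools import product

    def bmc_naive(k):
        states = list(product([False, True], repeat=3))
        for path in product(states, repeat=k + 1):        # candidate paths of length k
            if (init(path[0])
                    and all(trans(path[i], path[i + 1]) for i in range(k))
                    and any(not prop(s) for s in path)):  # property fails at some bound
                return path                               # a counterexample path
        return None                                       # no counterexample up to k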

Now if I want to look at it in the context of reachability analysis: I have a transition system and I want to check if the property holds in this transition system. So I have a set of initial states, I have a set of bad states and I have a set of states reachable in the system. In usual reachability analysis we compute these onion rings of reachable states, and if I am looking at BMC in this context, it simply explores paths in this reachable space, and I’m trying to see if there is a path that can reach the bad states.

The problem here is that I don’t get any kind of information about R1 and R2, etc. The SAT solver that works on this formula enumerates these states, but I can’t get any explicit information on what the reachable states are.

So just to look at simple pseudocode for simple BMC: first we check if the initial states satisfy the property, and then we simply enumerate from 1 to the bound N. We create the unrolling formula with the property and ask the SAT solver if it is satisfiable or not. If none of these queries are satisfiable then we know that there is no counterexample up to bound N. So this is a simple BMC implementation.
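
A runnable version of this simple loop on the 3-bit counter, sketched with the python-sat package; the package choice and the CNF encoding are my assumptions, the loop structure is the one described.

    from pysat.solvers import Minisat22

    def var(frame, bit):                       # DIMACS variable for a bit at a frame
        return 3 * frame + bit + 1

    def add_trans(solver, i):                  # clauses for T(V_i, V_{i+1})
        v0, v1, v2 = (var(i, b) for b in range(3))
        u0, u1, u2 = (var(i + 1, b) for b in range(3))
        solver.add_clause([u0, v0]); solver.add_clause([-u0, -v0])    # v0' = !v0
        for c in ([-u1, v1, v0], [-u1, -v1, -v0],                     # v1' = v1 xor v0
                  [u1, -v1, v0], [u1, v1, -v0]):
            solver.add_clause(c)
        solver.add_clause([-u2, v2]); solver.add_clause([u2, -v2])    # v2' = v2

    def simple_bmc(n):
        for k in range(n + 1):                 # a fresh query per bound, as in naive BMC
            with Minisat22() as solver:
                for b in range(3):             # Init: all bits 0 at frame 0
                    solver.add_clause([-var(0, b)])
                for i in range(k):
                    add_trans(solver, i)
                # !P at bound k (all bits 1), asserted as assumptions
                if solver.solve(assumptions=[var(k, b) for b in range(3)]):
                    return k                   # counterexample at bound k
        return None                            # no counterexample up to bound n

    print(simple_bmc(5))                       # prints None: the stuck-at-0 bit saves us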

Now that was the first implementation, and since then there has been a lot of advancement in bounded model checking. Some of it is due to the SAT solver: as SAT solvers became more efficient and faster, so did the BMC engine. But some of it is on the algorithmic level. The most important one, I would say, is circuit-aware simplification. Since we have a circuit, the unrolling of a transition system is like a combinational circuit. We can simplify this circuit prior to sending the formula to the SAT solver, and by that get a smaller formula which will be easier for the SAT solver to handle.

Other methods are using the incrementality of SAT solving; when a SAT solver can solve something incrementally you want to use that, because in bounded model checking each bound relates to the previous bound. So it makes sense to use an incremental SAT solver. And lastly there is lazily adding the clauses for the problem as we go along. Doing that is called computing a cone of influence, and I will show that in a minute.

So code for a fast BMC would look like this. Just like before we are checking the initial states; after that we are unrolling the entire formula that describes a path of length K. Then we are taking this formula and applying simplification to it. Now this simplify function takes a formula and a set of constraints, and it returns a simplified formula and the consequences, the inferred constraints, that it found. That is what we are assuming. Once we’ve done that we are going into the BMC loop, and we have this get-cone-of-influence step. So we are giving it the simplified formula and the literal that we want to check, which is the property at the given bound. We are getting a set of clauses, we are adding them into the SAT solver and then we are checking satisfiability with an assumption that this literal needs to hold. So this is lazily adding clauses, this is the incrementality of the SAT solver and this is the circuit-aware simplification.
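
A sketch of that loop; simplify() and get_cone() stand in for circuit-aware simplification and cone-of-influence extraction (both hypothetical here), while the incremental solving again assumes python-sat.

    from pysat.solvers import Minisat22

    def fast_bmc(unrolled_clauses, prop_lits, simplify, get_cone):
        # prop_lits[i] is a literal asserting "P fails at bound i".
        clauses, inferred = simplify(unrolled_clauses, [])  # one up-front simplification
        added = set()
        with Minisat22() as solver:
            for i, p in enumerate(prop_lits):
                for c in get_cone(clauses, p):              # lazily add only the cone
                    if tuple(c) not in added:
                        solver.add_clause(c)
                        added.add(tuple(c))
                if solver.solve(assumptions=[p]):           # one incremental query
                    return i                                # counterexample at bound i
        return None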

>>: [inaudible].

>> Yakir Vizel: Yes, so it is equivalent, or you can think of it as G implying its simplified version. There are no more satisfying assignments. So if I am looking at this circuit and unrolling, if I want to solve the property at this bound then the cone of influence would be this part of the circuit. So I only need to add the clauses that describe this. Then when I move to the next bound I will only add those clauses that were missing from the previous bound, etc. Now, the circuit-aware simplification: as I said, this simplify function takes a circuit G and constraints E and produces a new circuit G prime with a set of consequences E prime.

So you can think about the simplifier as some sort of an inference engine that gets this formula and produces [indiscernible], some kind of constraints that are implied by this formula.

>>: [inaudible].

>> Yakir Vizel: Yes, you unfold it. So in this talk, even though we can use a general simplification algorithm, we are going to look at SAT-sweeping. SAT-sweeping finds equivalences in a combinational circuit. You can think about it as taking any two nodes, any pair of nodes in the circuit, and trying to see if they are equivalent. Obviously there are a lot of choices for pairs, but there is a lot of work on how to reduce these candidates for equivalence checking. For example you can use simulation: if two nodes are not equal under simulation, you don’t even need to try to verify the equivalence. So you are finding equivalent nodes and nodes that are equal to constants; it subsumes constant propagation as well. We checked what is typically being used inside bounded model checkers and we saw that it was mostly SAT-sweeping.
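
A sketch of the core check: build a miter over the two candidate nodes, assert that they differ, and cap the effort with a conflict budget like the one mentioned later in the talk. The clause-level encoding and the python-sat calls are my assumptions.

    from pysat.solvers import Minisat22

    def provably_equal(circuit_clauses, a, b, fresh, conf_budget=500):
        with Minisat22(bootstrap_with=circuit_clauses) as solver:
            # fresh <-> (a xor b): assume fresh and ask whether the nodes can differ
            for c in ([-fresh, a, b], [-fresh, -a, -b],
                      [fresh, -a, b], [fresh, a, -b]):
                solver.add_clause(c)
            solver.conf_budget(conf_budget)          # cap the effort, e.g. 500 conflicts
            res = solver.solve_limited(assumptions=[fresh])
            return res is False                      # UNSAT: equivalent; None: gave up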

>>: And you exploit that it’s circuits?

>> Yakir Vizel: Yes, it exploits the fact that it’s a circuit. So it works, you can think, from inputs to outputs. It sweeps and merges nodes inside the circuit.

>>: And when you check equivalence of nodes –.

>> Yakir Vizel: Think that you take these two nodes and you are saying that they are not equal and you will give –.

>>: [inaudible].

>> Yakir Vizel: You have the entire formula, you take two literals in it, you say they are not equal and you ask the SAT solver if this is satisfiable or not.

>>: [inaudible].

>> Yakir Vizel: Yea.

>>: [inaudible].

>> Yakir Vizel: There is a [indiscernible] sweeper that will do the same thing.

>>: And these are cheap?

>> Yakir Vizel: Usually they are cheap. They can get expensive and you can get degradation because this process may require too much time, but there are a lot of parameters that you can control. You can say, “I’m going to do a complete [indiscernible], but until you hit 500 conflicts.”

>>: [inaudible].

>> Yakir Vizel: Yea, so you can do that and if it doesn’t finish then you don’t know.

>>: So you experimented with this.

>> Yakir Vizel: We did set some parameters. I don’t remember what we set, but we set some parameters for that. Also the number of simulation steps that you are doing in order to reduce the number of candidates, so things like that.

So how does SAT-sweeping look in the context of this circuit again? So remember that we had an initial state that says that all of these are 0. So we replace all of these by 0, and now if I have a 0 here and a NOT gate, then I can infer the value of this. And if I have a 0 here going into this [indiscernible] gate, I can infer the value of this. So we can simply do that, and then this becomes a constant, and you can see it has been removed. You can keep on doing that and you will end up with nothing, really. So by just applying the SAT-sweeper to this example we don’t even need to call the SAT solver. You have the result immediately, even though here it’s simple constant propagation; in some other example you might find that, for example, this node equals another node, and then you just merge them.
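
The constant propagation just walked through, as a toy pass over a gate list; the gate representation here is my own, purely for illustration.

    def propagate_constants(gates, known):
        # gates: list of (out, op, inputs); known: dict mapping node -> constant
        changed = True
        while changed:
            changed = False
            for out, op, ins in gates:
                if out in known:
                    continue
                vals = [known.get(i) for i in ins]
                if op == 'NOT' and vals[0] is not None:
                    known[out] = not vals[0]; changed = True
                elif op == 'AND':
                    if False in vals:
                        known[out] = False; changed = True
                    elif all(v is True for v in vals):
                        known[out] = True; changed = True
        return known

    # With all frame-0 bits known to be 0, the whole unrolling collapses,
    # which is exactly the "nothing left" effect described above.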

>>: [inaudible].

>> Yakir Vizel: Yes, so V2 is simply a buffer. So V2 prime equals V2 and V2 double prime equals V2 prime.

>>: So the encoding is just the same?

>> Yakir Vizel: So in the encoding, usually, if you don’t have smart encoding, you would have V2 and you would have V2 prime and you would have two clauses that say they are equal.

>>: [inaudible].

>> Yakir Vizel: Yes, yes.

>>: [inaudible].

>> Yakir Vizel: So I’m talking about bounded model checking and fast bounded model checking, which is the evolution of bounded model checking over the years, with algorithms like SAT-sweeping that use the structure of the formula to reduce it before giving it to the SAT solver. Now usually when you use interpolation you don’t use simplification and the other things that make bounded model checking strong. In this work we are showing how you can use these algorithms and still interpolate. So I showed an example of –. So I’m assuming I have some kind of a simplify function that takes a formula and returns a simplified formula. You have a set of constraints on that formula and you get a new set of constraints. And as an example, if you have this circuit right here and you have an initial state that says that all of these are 0, you can simply propagate these values and you get at the end that this formula is simply false, without even needing to translate it into clauses and give it to a SAT solver.

>>: [inaudible].

>> Yakir Vizel: So I’m assuming that a simple BMC implementation simply goes from 1 to N. You unroll a formula, which means the conjunction of the transition relations with the property at the end, and you check for satisfiability. A fast implementation would be something like this, where up front you unroll a bounded path of length K, then you try to simplify this formula and you get something which is, hopefully, smaller. Then you have this loop that gets the clauses in a certain cone of influence for the current bound that you are checking. You solve it incrementally in a SAT solver. So the three differences are the simplification algorithm, getting only clauses in the cone of influence and solving incrementally in the SAT solver.

Okay, so interpolation; this is the slide from [indiscernible]. If we have a pair of formulas A and B and their conjunction is unsatisfiable, there is an interpolant Itp: it is implied by A, with B it is unsatisfiable, and it only refers to the common variables of A and B. Now an interpolant is computed with respect to a proof of unsatisfiability that a SAT solver produces. So it produces a resolution graph, which is a DAG, and then by traversing it in linear time you can compute an interpolant.

Now McMillan showed in 2003 how this can be used to get full unbounded model checking with BMC, when you add interpolation into the mix. The interpolants you compute from bounded model checking formulas are over-approximations of the image of the set of reachable states. And you use that in order to decide whether you have reached a fixed point, whether you need to continue increasing the bound or not.

So just to see how it looks, we have this bounded model checking formula right here and we can divide it into A and B, and if it is unsatisfiable we have an interpolant such that A implies the interpolant, and it is over V1.

So we can see that it’s an over-approximation of the states reachable from Init after one transition, because of these two things. And since with B it’s false, we know that from these states you cannot reach a bad state. So then you iterate this: you take this set of states, you replace the initial states, you solve the same formula again and you try to see if you can reach a fixed point. So we can see here how it is important to have this structure that divides the formula into two parts in order to interpolate.
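
The iteration just described, as a sketch; every helper is a hypothetical stand-in (bmc_query asks the solver, get_interpolant extracts Itp from the refutation, entails and disjoin operate on state formulas).

    def interpolation_mc(init, bmc_query, get_interpolant, entails, disjoin):
        reach = init                          # current over-approximation of reachability
        while True:
            if bmc_query(reach):              # SAT: a real cex only when reach == init
                return "counterexample, or restart with a larger bound"
            itp = get_interpolant()           # over-approximates the image of reach
            if entails(itp, reach):           # nothing new: fixed point reached
                return "property holds"
            reach = disjoin(reach, itp)       # grow the approximation and iterate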

A generalization of the simple interpolant is a sequence interpolant: there you can have a sequence of formulas, not just A and B, but for example A, B and C, and a sequence interpolant would be, for example, I1 and I2, such that I1 is implied by A, I1 and B imply I2, and I2 and C imply false. And in work from ’09 we showed how this can be used also to get full verification with BMC. So if I have this reachability-analysis view of R1, R2, etc., and I have this bounded model checking formula, I can divide it in this manner into a sequence, and then I get a sequence interpolant whose elements over-approximate R1, R2, etc. You can use that to determine a fixed point.
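
The defining conditions can be checked with plain SAT calls; here is a sketch assuming python-sat, with every formula given in CNF (with a single interpolant this degenerates to the A/B case above).

    from pysat.solvers import Minisat22

    def unsat(clauses, assumptions=()):
        with Minisat22(bootstrap_with=clauses) as solver:
            return not solver.solve(assumptions=list(assumptions))

    def implies(premise, conclusion):          # premise => every clause of conclusion
        return all(unsat(premise, [-lit for lit in clause]) for clause in conclusion)

    def is_sequence_interpolant(parts, itps):  # parts = [A1..An+1], itps = [I1..In]
        ok = implies(parts[0], itps[0])                           # A1 => I1
        for i in range(1, len(itps)):
            ok = ok and implies(itps[i - 1] + parts[i], itps[i])  # Ii-1 & Ai => Ii
        return ok and unsat(itps[-1] + parts[-1])                 # In & An+1 unsat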

So the challenge is that the BMC used for interpolation is usually not the BMC engine used for plain bounded model checking. It is a weaker version of that, since interpolation requires some sort of naive BMC, you could say, and since simplifications destroy the structure of the bounded model checking formula, you need to do something in order to be able to interpolate. Also, combining an incremental solver with interpolation, while not hard, is still challenging: since you need to log a proof, over time it starts to grow, and you need to know how to handle that. So we also did something on that front. So FIB, which is a fast interpolating bounded model checker, includes all of these, the SAT-sweeping, incremental proof-logging and lazy addition of clauses, but can still interpolate.

So just to remind us, we have this kind of formula, and we can see that when we divide it in this manner it’s important to know what shared vocabulary we have between the different partitions, and for the structure we need to know that this T is right here and this T is right here, etc. So that’s what we need to handle. So I’m going back to this example of the unrolling, where we have the SAT-sweeping that goes through the circuit and removes everything. So if I had this original formula, by simple constant propagation it is equivalent to false. The problem is that since interpolation requires a resolution proof, in this case there is no resolution proof. So you cannot interpolate, and there is a need to recover from that.

So what we are saying is: there were some works in the past that wanted to handle proofs in the context of simplification, and what they did was usually say, “Okay, if I have some kind of a simplification step I can create a resolution proof of that step and log it. Then at the end I will have a complete resolution proof.” But we are saying that we don’t need that. That might be too much, and what we need in the context of a simplifier, for example a SAT-sweeper or constant propagation, is much less: in the case of constant propagation it’s only the constants that we need to log; in the case of the SAT-sweeper we need to log the equivalences that it finds, and that’s it. We don’t need a proof of that.

Then, in order to maintain the structure of the bounded model checking formula, we are modifying the simplifier, in a way, to take into account the structure of the bounded model checking formula, meaning that it has these instantiations of the transition relation. After all of that we have a simplified formula, we solve it, there is a resolution proof of the simplified formula and we get an interpolant with respect to the simplified formula, and as a final step we fix this interpolant to fit and match the original formula.

>>: So in your example you did constant propagation. So how would that look like? Do you just add equalities?

>> Yakir Vizel: No I am not; I am actually taking the formula and removing everything. So at the end I am left with an empty formula, which simply tells me either true or false.

>>: [inaudible].

>> Yakir Vizel: So I’ll skip that [indiscernible] and then go back to it. So if I had this example, and I’m still considering the structure of this path and then that path and this property, then it will amount to true, true and false after SAT-sweeping or constant propagation. So a sequence interpolant for that would simply be true and true, which is obviously not a sequence interpolant with respect to the original problem. But what I’m logging here in this case are these two sets. I’m saying, “Okay, this set of consequences, the constants, is implied by this part of the circuit, and this set of constants is implied by this part of the circuit.” So I’m just logging this and remembering which values I got.

If I had equalities then I would just have for example V0 equals V2 in this set. This thing is like saying V0 equals 1 so I can have V0 equals V5. So this is what I am logging; nothing more than that.

I can go back to just show our pseudocode. So our pseudocode breaks down the simplification into the different partitions. It says, “Okay, if I have this G0 to GK, which represents an unrolling of a formula, I’m going to take each one of them, run simplification, get the inferred constraints and then give them to the next one to run the simplification again, etc.” So this is how we are breaking it down and keeping the structure, but we are still propagating forward, inside the unrolling, whatever consequences we found. So that’s how we are handling that.
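
That partitioned loop, sketched in Python; simplify_frame() is a hypothetical stand-in for one SAT-sweeping pass over a single frame.

    def simplify_unrolling(frames, simplify_frame):
        # frames = [G0, ..., Gk]; returns simplified frames plus logged consequences
        simplified, logged = [], []
        constraints = []                        # constraints entering frame 0 (e.g. Init)
        for g in frames:
            g_prime, inferred = simplify_frame(g, constraints)
            simplified.append(g_prime)
            logged.append(inferred)             # log only consequences, not their proofs
            constraints = inferred              # propagate forward across the unrolling
        return simplified, logged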

So here again we had this, and we logged this thing. Now what is important to note here is, for example, this is G0, so I know that G0 implies this. Now moreover, if this is G1, the second instantiation, I know that given these constraints, G1 implies this. So it is somehow similar to a sequence interpolant: this and this imply this, and this and something imply something else. So we are going to take these and add them to the sequence interpolant that we got with respect to the simplified formula, and we get an interpolant with respect to the original formula.

So being more formal about it: if I have the original BMC formula, which is this, and G0 is simply this and G1 is this, yes, it’s a bit messy, but you can believe me, then a sequence interpolant would be this thing right here. So G0 implies I1, I1 and G1 imply I2, and I2 and not P imply false. But since we had simplifications we are getting something like this: G0 prime and G1 prime, where this one is true, this one is true and the last part is false. So the sequence interpolant that we have is with respect to that simplified formula.

But, by the simplification rules, we have all of this. So G0 implies the simplified one, because we got it from there, and G0 implies the set of constraints that we got. So we can combine all of this to get: okay, if I define I1 to be the interpolant that I got with respect to the simplified formula plus all the constraints that I inferred, it’s an interpolant with respect to the original formula. So we can build a proof of that, but it’s pretty straightforward, it’s not that complicated.
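
The repair step itself is tiny; a sketch, with interpolants and logged consequences both as CNF clause lists.

    def repair_sequence(itps_simplified, logged):
        # I_i (w.r.t. the simplified formula) conjoined with the consequences E_i
        # gives an interpolant w.r.t. the original formula, as argued above.
        return [itp + consts for itp, consts in zip(itps_simplified, logged)]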

Now this works, but the number of consequences that a simplifier can generate is large, and mostly you don’t want to add all of them, because you don’t want to create noise in your proof. Most of them might not even be needed for the proof. So we are applying a second step of minimizing these sets of consequences and finding only those consequences that are needed, and in order to do that we have some kind of a backward search, and I will show what that means, that finds which consequences are needed.

So remember that we have, for example, this formula right here, and I created these I1 and I2 with respect to the original formula. So I can say, “Okay, I2 and not P needs to be unsatisfiable,” but now I2 includes all these consequences, and from the unsat core, for example, even though you don’t have to use an unsat core, you can find which ones are needed. Now once you’ve found which ones are needed, only those remain here. Now you are going backwards and saying, “Okay, I1 and T need to imply this.” So which of the consequences that I’ve added here is needed to have this implication? Once you find this you are good and you can go back.

>>: [inaudible].

>> Yakir Vizel: Like [indiscernible] simulation you mean?

>>: No the proof of any one of those facts might involve [indiscernible].

>>: [inaudible].

>>: No, each one of those equivalences that you’ve got might have come from a full SAT problem. They weren’t generated [indiscernible]. The unit propagations are different, but assuming that you had equivalences, you can have some arbitrarily complicated resolution proofs that the SAT solver generated. So I assume what you want to do is just go backwards and use unsat cores to throw away the ones that were complicated.

>> Yakir Vizel: So basically what we are doing is we are creating a literal for each of these constraints, solving under assumptions and getting the final conflict, which is a subset of these assumptions, telling us which ones are needed. Now you can actually try and minimize those further, but that depends on how much effort you want to put into it.
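
A sketch of that step assuming python-sat: guard each logged constraint with a fresh selector literal, solve under assumptions, and read the needed constraints off the final conflict via get_core(); first_free_var must be larger than any variable already in use.

    from pysat.solvers import Minisat22

    def needed_constraints(base_clauses, constraints, first_free_var):
        with Minisat22(bootstrap_with=base_clauses) as solver:
            selector = {}
            for i, c in enumerate(constraints):
                lit = first_free_var + i
                selector[lit] = c
                solver.add_clause([-lit] + c)                    # selector -> constraint
            assert not solver.solve(assumptions=list(selector))  # must be UNSAT
            return [selector[l] for l in solver.get_core()]      # the needed ones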

>>: [inaudible].

>> Yakir Vizel: So we are using full SAT calls for this, but it’s pretty fast, it doesn’t take a long time, and I will show it later. So we are doing that and we are getting, in a way, a minimal interpolant, meaning all the constraints that are in the interpolant are constraints that are needed. You cannot remove any of them and still have a sequence interpolant. So it’s a minimal set of constraints that you need. So if I’m looking at this example again: for example, to have I2 and not P unsatisfiable, if this is I2 and not P is this, you can see that all you need, for example, is not V0 and V0 double prime. That is all that is needed for this unsatisfiability. So this is the only constraint that you need. Then if you go check this one again, only V0 is needed. So from the conjunction of 3, which is what it started from, you get that the interpolant is actually only this. So that is the minimization step.

So how does FIB look? It has the same structure as the fast BMC. You start by unrolling everything, but you remember the partitions; you have G0 to GK. Then you apply this structure-based simplification that gives you G prime, the simplified formula, and inside it’s the same thing: you get the cone, you add it, you solve incrementally in the SAT solver, you get the interpolant with respect to the simplified formula, you add the constraints to it and minimize, and you have the interpolation sequence. If this is built into a model checking algorithm you will have something being done with this, and then you keep on going. So it’s the same structure, but generating interpolants.
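
Putting the pieces together, a sketch of the FIB loop; every helper is a hypothetical stand-in (simplify_unrolling as sketched earlier, cone extraction, a DRUP-based sequence-interpolant extractor, and the minimization step).

    def fib(frames, prop_lit, simplify_unrolling, get_cone, solver, seq_itp, minimize):
        simplified, logged = simplify_unrolling(frames)       # structure-aware pass
        for i in range(len(simplified)):
            for clause in get_cone(simplified, bound=i):      # lazy clause addition
                solver.add_clause(clause)
            if solver.solve(assumptions=[prop_lit(i)]):       # incremental SAT query
                return ("cex", i)
            itps = seq_itp(solver, bound=i)                   # w.r.t. simplified formula
            itps = [itp + e for itp, e in zip(itps, logged)]  # repair with consequences
            itps = minimize(itps)                             # keep only what is needed
            # a model checker such as AVY would now use itps to check for a fixed point
        return None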

So, I touched on this, and the second part relates to the SAT solving and the incremental SAT solving. Not long ago Arie and I presented a way to compute interpolants from DRUP proofs, and this was actually something that Nikolaj put in our minds in [indiscernible], not this year, but before that, in Russia. He said, “Have you looked at clausal proofs?” We were like, “Ah, no, we’ve never thought about clausal proofs.” Then we found out, like a few months later, about this work [indiscernible] on the theory of DRUP proofs, which is a clausal proof with the order in which learned clauses are deleted.

Once we implemented this mechanism inside MiniSAT we have MiniSAT essentially unchanged, because you don’t change the SAT solver for proof logging; it’s just the learned clauses. Then you just have a procedure that works through these learned clauses and produces an interpolant. Now one would assume that since you are not logging a proof this would be a process that takes time, because it’s like rerunning the SAT solver, but it’s not. It runs in essentially the same time as traversing the DAG of a resolution proof, but without the hassle of changing the SAT solver.

And it is more scalable, because you don’t have all the memory requirements that proof logging imposes on the solver. We actually didn’t evaluate by how much, because we know the rule of thumb is that a proof-logging SAT solver is 10 percent to 20 percent slower than a non-proof-logging solver because of the memory requirements it incurs, but we didn’t check how much we are improving on that front, because we are not logging a proof in the regular sense of logging a proof.

>>: But when you do the proof reconstruction you go backwards [inaudible] and there’s some amount of search going on.

>> Yakir Vizel: Yea, but the search is easy, because what you are doing is simply running BCP. You have no decisions, because you know that the learned clause is derived by trivial resolution, meaning that if you have a set of clauses X, then X and not C, the negation of this clause, reaches a conflict by just BCP. Now what Marijn added here is the fact that you can say, “Okay, if these clauses are deleted I don’t need to BCP through them.” That’s what makes it efficient, because you don’t propagate on your entire database of clauses. You are saying, “Okay, at this point in time the database looks like this, so I know what I need to propagate through.”

Also you are marking those clauses that you need and you are doing a sort of BCP that prefers these clauses first, so it’s a small subset. If it can’t reach a conflict it will go to the rest of the clauses which are not deleted, but if it can, then you are really restricting it to a small set of clauses to propagate through. And also, we are doing it inside the solver. For Marijn it was a different tool, because it was meant for certification of a SAT solver: you could run your solver, emit a DRUP proof, and then it reads the original problem and the proof and verifies it. We did it inside the solver, so we were using the entire data structure that was still alive in the solver. We know the state of the trail at the end of the run; you don’t need to reconstruct it, it’s already there. So we are saving all these things by doing it inside MiniSAT with clausal DRUP; we have them both.

>>: So Arie said that the [inaudible].

>> Yakir Vizel: Yea, so that’s true for the non-incremental mode, because once you delete a clause you don’t really remove it from your memory manager. But we are putting it at the end, so in case you are heading into memory problems, for example, by not removing a clause, which does not take a lot of memory anyway, the OS will page these pages out to disk because you are not using them. A deleted clause is not used; you don’t touch it, not during BCP or anything. So you could actually page it out and remove it, but we are keeping it still, because we may need it at the end when we construct the interpolant, because you un-delete: when you go backwards and you reach a clause that was deleted you are un-deleting it, you are putting it back.

>>: And MiniSAT of course deletes the clause, right?

>> Yakir Vizel: So MiniSAT also deletes it, but it doesn’t go away immediately. It waits for a garbage collection to happen and then it is put away. What we are doing is simply putting it at the end of the memory manager, in the last pages that are not being touched. So if a page has only deleted clauses it won’t be touched anyway, and the OS will put it aside.

>>: [inaudible].

>> Yakir Vizel: Yes, we thought about it, but we didn’t implement it. We thought that we could let the OS take care of it. We could actually take it out of the memory manager, put it somewhere else and then recover it. We could do that, but we didn’t.

So that was some information about DRUP, but what makes it special also is the clausal proof; I think that the nicest feature about it is that it’s a space of interpolants. It’s a space of resolution proofs; it’s not just one. So once you have it you can now say, “Okay, I’m going to traverse this space of proofs and I’m going to choose the one that I would like.” We had all sorts of tricks to do that; for example, when you reconstruct, in your BCP you would BCP in the order of the BMC formula, because you want to create some sort of a proof that goes in a certain way from the initial state and up.

And we had this part that would produce an interpolant of which some part is in CNF, because you take the local chains, the local trivial resolution derivations, and you try to reorder them in a way that will give you a CNF clause for the interpolant, but in a way that does not explode exponentially. So it will try, and if it doesn’t succeed it will just give up. So it gives you a lot of options when you do that.

>>: When you don’t get a CNF interpolant do you then feed it back in a bootstrapping way?

>> Yakir Vizel: No.

>>: [inaudible].

>> Yakir Vizel: Yea, but we are using [indiscernible], which takes a non-CNF interpolant and makes it into CNF with PDR. It uses PDR to transform it into CNF, but if you have a part already in CNF then you can say, “Okay, I’m going to load the CNF into the PDR trace already. I have clauses and I know that my interpolant contains these clauses. Now take the non-CNF part and only transform that.” So I’m making it more efficient, and then you can think about other things; for example, once you find this kind of clause you can stop your proof and you can try to generalize this clause, because you know what your problem looks like. Maybe after you generalize this clause you are getting a whole different proof. So there are a lot of things that can be done with this.

>>: So were you able to show, ultimately, that being able to push the interpolants more towards being conjunctive, say by reordering or whatever, would give better convergence for model checking?

>> Yakir Vizel: So we did not check that. The paper was such that there was no room for that.

>>: But I thought there might have been something after the paper.

>> Yakir Vizel: Yea, exactly, but we still haven’t got to that, because we moved to the [indiscernible] before, but it’s still on our plate to check what we can do with that and see if it makes any difference. But I assume that it will make a difference if you actually use them while constructing the proof: try to generalize such a clause while constructing a proof, to reduce it even further considering the problem, something that a SAT solver cannot do by itself.

>>: When you say while constructing the proof, who constructs the proof?

>> Yakir Vizel: No, after the SAT solver constructed the proof we are moving through this proof –.

>>: You are reconstructing the proof right?

>> Yakir Vizel: Yea, but it’s a space of proofs, but once you generalize something inductively for example you are adding more information that the SAT solver didn’t have.

>>: So you want to use inductive generalization in the process of reconstructing sort of a more forward proof for interpolation.

>> Yakir Vizel: Yea, and then you might get a completely different proof which would be better for model checking. So I wasn’t sure whether to go through this, but I will go back and go through it real quickly. Let’s assume that I have this, just to understand what a conflict clause is and how these DRUP things work. I have this CNF and I have this implication graph, and this is a conflict that we have reached. So usually this implication graph is traversed, and it has all these resolution steps, and you get to your learned clause.

Now logging a proof means that you log all these kinds of local resolution steps and you have a DAG at the end, for example something that looks like this. But the clausal proof would be something like this, meaning you just remember these two learned clauses that you’ve learned during your run; that’s it. So that’s the big difference, memory-wise, between logging the entire resolution proof or just the clausal proof.

Now if I’m looking at it, so this was my CNF and these were my two learned clauses: what I mean by saying that something is implied by BCP, or by trivial resolution, is that if I take X, and this was the clause that was learned first, and I add its negation, then I’m going to hit a conflict. So by here I am getting not A1, and from these two clauses a conflict, because I am getting G1 and not G1. So it is simply by BCP; there were no decisions here.
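
A self-contained sketch of that check, the RUP property underlying DRUP: a learned clause is implied if BCP on the clause database plus the negation of the clause reaches a conflict, with no decisions.

    def bcp_conflict(clauses, units):
        assign = {abs(u): u > 0 for u in units}    # start from the asserted units
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    val = assign.get(abs(lit))
                    if val is None:
                        unassigned.append(lit)
                    elif (lit > 0) == val:
                        satisfied = True
                        break
                if satisfied:
                    continue
                if not unassigned:
                    return True                    # clause falsified: conflict
                if len(unassigned) == 1:           # unit clause: propagate it
                    lit = unassigned[0]
                    assign[abs(lit)] = lit > 0
                    changed = True
        return False

    def is_rup(clauses, learned):                  # clauses include earlier learned ones
        return bcp_conflict(clauses, [-lit for lit in learned])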

Now this is how Ken’s method works to compute interpolants. You have the resolution proof, and you label all the leaves, and you work up the resolution proof, and you have some labeling operation, and at the end you get your interpolant. For us, what we have is: we take a learned clause, we add its negation and we hit a conflict. So we are just traversing the implication graph in the same manner and labeling each step until we reach the top, which is our learned clause, and its label is the partial interpolant for that learned clause from the DRUP proof.

Now what is the challenge that I was referring to before about using an incremental solver? We have multiple calls to the SAT solver. We are iterating a process of adding clauses, solving, adding clauses, solving, adding clauses, solving, so what needs to be done is that you need to maintain a consistent DRUP proof. A DRUP proof says that if you had a set of clauses, then a learned clause is implied by BCP from that set of clauses. So you need to remember the order, for example if you would want to certify the operation of the SAT solver. For our purposes we don’t really need that, but we still need to have a consistent proof, which means that at any given point in time learned clauses can be derived by BCP.

So you would say that normally you would need to keep all learned clauses in an incremental solver. If you are proof logging you need all learned clauses; you cannot remove anything. But we said a different thing. We said, “Okay, so DRUP has this operation of finding the core clauses that are needed for a specific proof that you found, so keep those. In addition to that, whatever is on the trail, which is the implied units, keep those as well, because units are really strong consequences.” So it is described here.

>>: [inaudible].

>> Yakir Vizel: So we delete everything else. After you are done with the call you get the unsat answer, and whatever is not in the core of the proof, whatever is not a unit, goes away. Now you don’t need to manipulate any proof object, because the proof object is simply the learned clauses. So whatever is there is there and whatever we removed is not. That’s it; it’s really straightforward and easy to use this infrastructure for proof logging and proof manipulation.

>>: [inaudible].

>> Yakir Vizel: I guess that we could do that. We have thought about that, but we said we would go with this heuristic first. We didn’t put too much effort into finding the best heuristics, into what would make the SAT solver a few percent faster or not. We decided to go with the core and the units. But yes, I am guessing binary clauses would be a good idea as well.

>>: [inaudible].

>> Yakir Vizel: You might, so then you can say, “Okay, I am going to keep that,” but you can’t really know, because you might need the same clause for two subsequent bounds, but then at later bounds you no longer need this fact. It’s a heuristic; we can’t know, because you also don’t know how much of the proof for bound 5 is needed for the proof of bound 6, but heuristically you are thinking that maybe it is.

Okay, so AVY is why we did this whole thing. There is this model checking algorithm AVY that we introduced in CAV’14. What it does is it is built around bounded model checking and sequence interpolants. It does the bounded model checking loop, it extracts interpolants, but then transforms them into CNF, through something like a PDR trace, checks for a fixed point and does the pushing and all kinds of [indiscernible] tricks. So in the [indiscernible] paper we took this box and made it better, and now we said, “Okay, what is the weak part?” It’s this part, because we could see that native BMC algorithms reach much deeper into the design, and sometimes you need to reach deeper in order to converge. So how do we make that more efficient? That’s why we decided to go at this box and try to improve it.

So we integrated this FIB engine in AVY and then we evaluated it on these benchmarks, both for BMC performance, just to see how well we perform as a pure BMC engine, and for the overall model checking performance. So we got two more cases here on HWMCC’13, but the better performance was on HWMCC’14. In all of these cases that we solved we could see that the implementation without the fast interpolating bounded model checker got stuck at a specific bound, while you need to reach a deeper bound in order to converge, and that was achieved by this. Now regardless of that, we got a better runtime overall, because the underlying bounded model checker was running faster, so you could get convergence faster.

Now if we are looking at the scatter plots, we can see first of all that the same trend that we showed in CAV’14 remains. AVY and PDR are very different in the sort of problems they solve; everything is on the extremes, there is nothing in the middle. They don’t compete on the same sort of test cases. So that was nice, because AVY still solves way more examples than PDR, but when we compare to the old AVY we can see there is better performance most of the time for AVY with FIB than AVY with just BMC. Now there are cases where it looks like it is slower. These cases are because AVY with the regular BMC converges at a lower bound.

>>: [inaudible].

>> Yakir Vizel: Yes, exactly. It creates noise in the proof, it pushes the initial states too much into the design and then you need to reach deeper, but you can see the majority of the cases are not like that. It is more the other way, because even if you converge at a deeper depth you still get there faster. Now in terms of BMC performance, compared to ABC’s BMC the depth is mostly the same, the depth that they can reach. And compared to the old BMC we can see that we can reach much deeper with this BMC engine, but that’s what we expected. It’s not something that is out of the ordinary.

Now when compared in runtime to simple BMC we can see that FIB is much faster, obviously, but still, when you compare it to ABC’s BMC it seems slower. So it is somewhere in between, and by analyzing why it is in between: first of all, breaking down the simplification can affect performance, because you are putting in more effort while not looking at the entire thing, so the simplification does not remove a lot of things. And you might lose simplifications that go across bounds, across partitions. We do know how to handle that also, by the way, but we didn’t do it for this work.

>>: [inaudible].

>> Yakir Vizel: In the simplification? Yes so that’s why we didn’t look only at the SAT solving part, we took a look at the entire runtime including that.

>>: [inaudible].

>> Yakir Vizel: So to summarize what we did until now: we have this open source, freely available AVY framework with AVY, the DRUP interpolants stuff and FIB. So after we did AVY we were trying to break down the stuff that can make everything more efficient, and we are attacking the problems that are bugging us. I think Ken said that he also thought about these things way back.

>>: You are talking about the SAT-sweep.

>> Yakir Vizel: Yea the simplification.

>>: We sort of started implementing that and sort of didn’t do it.

>> Yakir Vizel: So it’s things that people were thinking about, but we decided to go and do it. I think it has more potential than just that. So we can think about this kind of simplifier, or SAT-sweeper, or any simplification algorithm, as some consequence-generating algorithm. So you have your formula and you are generating facts, lemmas, and the question is how you are going to use them and still interpolate. The naive approach would say, “Okay, I am going to log the proof for that lemma,” but maybe you don’t need that. You just need some of it, just the lemma itself, for example, as in this work. But I think that it goes further: integrating this entire thing into a solver that has a loop that generates lemmas, and solving, and using these lemmas to re-simplify the formula. For example, you could think about the lemmas in a PDR trace. You could use them as constraints and re-simplify your formula.

>>: [inaudible].

>> Yakir Vizel: It doesn’t necessarily have to be just that. We looked at one kind of simplification, but it can be other kinds of simplification as well, general synthesis for example. So that is the second thing. This entire thing has been done with the word level in mind.

>>: [inaudible].

>> Yakir Vizel: So we are just starting to think about how to do that: taking a bit-vector solver that has a word-level formula, applying simplification before [indiscernible], and going to the SAT solver.

>>: [inaudible].

>> Yakir Vizel: I don’t know yet. It’s in the direction of bit-level interpolants for bit vectors for now. You can think about how it’s the same thing, but I’m not sure yet what kind of simplification can be supported and what kind is more difficult to support. It’s something to keep in mind about what we are doing here. That’s that; we have everything available, and that’s it. Thank you.

[Applause]

>>: Just a small clarification. So when you are doing the SAT-sweeping are you doing the SAT-sweeping frame by frame?

>> Yakir Vizel: Yes.

>>: So that’s why the SAT-sweeping proofs sort of don’t go across frame boundaries, which is why you only have to keep the equivalences [indiscernible].

>> Yakir Vizel: Yea, but you could actually also log consequences.

>>: Right you could if you had proofs that crossed the frame boundaries, you could log the interpolant.

>> Yakir Vizel: Is there a way to take this up?

>>: [inaudible].

>> Yakir Vizel: Yea, but there are different problems when you have consequences across partitions.

>>: [inaudible].

>> Yakir Vizel: Yes, because then you create new shared vocabulary and you need to know how to recover from that. Now if I’m looking at the A-B case it’s easy, because if I have this circuit, and this is B and this is A, and I found that this node right here equals that node right here, let’s call it X, then X now becomes shared vocabulary. So if X appears in your interpolant you need to get rid of it. Now getting rid of it is easy: you just take the function of this node over the shared vocabulary and you substitute. That’s it, you’ve recovered, because you know that this function equals that; we just proved that. You don’t need the entire thing; you just need the function over the shared vocabulary. So you do the substitution and you’ve got an interpolant with respect to the original shared vocabulary. So we just need to find the cone over the shared vocabulary.

>>: Okay, I’m not sure I understand that, but we can talk about that later.

>> Yakir Vizel: We can discuss that. Then it becomes more complicated when you have A, B and C and you have an X that goes all the way up there, but you can still recover from that. And it becomes even more interesting if you start with more complicated simplifications such as synthesis, because synthesis says, “Okay, I’m going to add a new cone right here and this cone is equal to that thing.” So now it’s an introduction of a definition. So now you are going away from resolution, because this is extended resolution, and the question is how you can interpolate with that, if you can.

>>: Well, you can’t, right?

>> Yakir Vizel: Well, maybe you can, with increasing the size of your proof, because if you can now solve this, because you did that, you don’t care that you increased your proof; you solved it, whereas before you couldn’t solve it. So you still get something.

>>: I think I see your point. The fact that node B is a function of things in the common vocabulary. The fact that [inaudible].

>>: Think of it as a [indiscernible] function.

>>: Yea, it’s a witness function. You just have the witness function there because it’s a function and because all the [inaudible].

>> Yakir Vizel: Okay, thanks.
