
>> Nikolaj Bjorner: Okay. It's my pleasure to introduce Wenchao. He studied at Berkeley with Sanjit and is now a postdoc at SRI.

I know from talking with Wenchao that he has probably ten times more things to say than he will be able to fit into this talk.

Nevertheless, it's going to be quite -- I think it's going to be quite a rich talk. He has done research both very broadly and deeply around specification mining, the topic of this talk, but also on some other topics. So in one-on-ones, you should definitely hear more about what he has done.

So I'll leave the work to you.

>> Wenchao Li: So thank you, Nikolaj, for the introduction and hopefully my presentation will live up to your expectation.

So everyone welcome to my talk. It's great to be here again. I obviously interned here back in 2009. And today I'll mostly talk about some research work that I did when I was a graduate student at UC Berkeley, and specifically on this topic called specification mining.

And I'll also touch a little bit on some other stuff that I did during my Ph.D.

So before I talk about the details of specification mining, I want to start with a simple picture, sort of illustrating my perspective on automated reasoning, to motivate this talk.

So typically for an automated reasoning engine you're taking two inputs, a formal model, for example, a Boolean program or something else, and a formal spec, typically, you know, maybe an SMT formula, maybe an LTL formula. And then you're trying to reason about whether the model satisfies the specification.

My perspective is that this is really not the entire picture. And these two inputs sometimes only represent entities that can be further upstream.

So, for example, we may have some very complex systems, for example, like a biological network or a flight control system, and from these we somehow want to construct a formal model. And to do that a lot of effort will go into doing abstractions, doing human engineering, and even statistical estimation. So there's sort of a gap between what we really want to reason with and what we are able to produce at the end as this output.

On the other hand, formal specifications are quite difficult to come by and especially the good quality ones. And I think there are mainly two reasons for this. The first one is people still like to write requirements sometimes in plain English or sometimes in semiformal languages.

So there's a language barrier from what we like to specify and what we were able to formalize.

And the other reason is maybe we simply just don't know or cannot anticipate how the environment behaves. For example, in verification, if you do formal verification or model checking of a module, then you really need to also capture the environment of the module in order to produce something useful. If not, you will get lots of spurious kind of examples and it will make the technology very difficult to be adopted in practice.

So this is the other gap. And this sort of gap does not only exist in verification; it also exists in, for example, synthesis, where we try to automatically construct the model from some specifications.

So this area here is where I would like to dedicate some efforts into, into really understanding the problems, and as well the implications into the downstream reasoning and hopefully find solutions for them.

Okay. So here's just a high-level overview of my research that I did in Berkeley. So I'm passionate about analyzing and improving the dependability of computing systems.

There are a number of areas that I've worked on. So I started by looking into applying formal methods to reason about the reliability of reactive systems, for example, digital circuits.

But very soon we realized that we really need good-quality specifications to be able to say anything meaningful here. And then we thought about whether there's a way to synthesize these specifications.

So to that end I've done some work on generating environmental assumptions as well as finding specifications that are useful for bug localization.

So I will talk about these two parts in today's talk. I've also done some work on hardware security, in the context of trying to reverse engineer netlists. And we were able to apply formal methods in this area and apply them to actual problems several orders of magnitude bigger than previous approaches.

And here is really the model part of the story. I will also briefly talk about one technique that we developed here to incorporate uncertainties, for example, those arising in -- yes, please.

>>: I don't understand what the hardware [inaudible].

>> Wenchao Li: Yeah. So here is really, for example, [inaudible]. And we -- for example, you get a third-party IP and you want to figure out if it contains some [inaudible] parts. But everything is already synthesized into a netlist, for example, of gates. So you need a way to reverse engineer some of the components into more high-level components. Yes, please.

>>: [inaudible] question. Can you not use the green pointer because I have eye sensitivity?

>> Wenchao Li: Okay.

>>: So if you could just use the mouse, I'd appreciate that.

>> Wenchao Li: Okay. Okay. I'll try not to use any pointer at all.

>>: No, you can use a pointer, just please don't use the green one.

>> Wenchao Li: Okay. Okay. Right. So yeah. So that's the context. And then we're trying to do some reverse engineering -- essentially starting from a really flat, post-synthesis netlist and coming up with high-level structures.

Okay. So I'll mainly talk about specification mining techniques that are relevant to generating environment assumptions for temporal logic synthesis as well as those for doing bug localization.

So to further motivate my talk a little bit, so here's sort of the picture that I'm looking at.

And here are sort of the positions that I'm trying to argue for.

So I think specification is an extremely important step in both formal verification and correct-by-construction synthesis. By synthesis I typically mean the specification-driven synthesis process.

And we use a lot of formal tools to do bug finding. But much of the challenge in bug finding actually lies in finding the specifications that the automated tools can then use to find bugs. If you have lots of sort of irrelevant specifications, you are not going to narrow down to the cause of the error.

Okay. So with that here's the -- sort of the research effort that I've spent on doing specification mining. On the left-hand side I specifically look at verifications, generating requirements for formal verification as well as leveraging those mining specifications to do bug localization.

On the right-hand side, I want to study the topic of synthesis from temporal logic and how to come up with assumptions such that the synthesis process will be successful.

We also use the same approach to synthesize human-in-the-loop controllers, which I will talk about in this talk.

>>: What does that mean, human-in-the-loop controller?

>> Wenchao Li: Yeah. So typically you have sort of operators in the control loop. So it's not just an autonomous controller that takes care of everything. So there are scenarios where human intervention is necessary. An example is these semiautonomous driving cars: there are scenarios that the auto controller cannot handle, and then the driver has to intervene.

So I've also tried to broaden the scope of specification mining, and mainly there are two efforts. One is trying to formalize natural language requirements into logic specifications. And the other one is trying to leverage human inputs in a creative way to identify interesting specifications.

So that's just an overview of sort of the picture of what I'm trying to do in the space of specification mining, and the thesis of that is really that I think temporal specifications can be mined or generated systematically by observing behaviors both of the design as well as its environment. And these mined specifications can be used to automate tedious tasks such as localizing bugs or finding missing assumptions in both verification and synthesis.

Okay. So to this end, I think we've done some interesting work in both the formalism of the problem as well as the algorithms for solving them.

For the formalism part, we have a new formalism of specification which essentially takes a linear algebra view of specification instead of the sort of more automaton-based approach. And we use a sparse coding method for solving the specification mining problem in that space.

Okay. And the second contribution is in algorithms for the problem of doing temporal logic synthesis. And we were the first to come up with a framework that is based on counterstrategy-guided assumption mining to systematically guide the specifications towards realizability, which I will explain later.

We've applied our techniques on quite a number of applications. We've done bug localizations, we've done temporal logic synthesis, which I've mentioned, as well as synthesizing human-in-the-loop controllers.

So without further ado, this is the outline of my talk today. In the first part I will mainly talk about assumption mining for temporal logic synthesis. I will divide that into three parts. In the first one I will show how we can map from natural language requirements to logic specifications.

And then I will talk about how do we actually reason with these generated specifications.

And one of the problems is you want to be able to say there is an [inaudible] implementation that can satisfy your requirements. And one of the techniques I'll talk about today is called counterstrategy-guided assumption mining. And we also have adapted that technique to synthesizing human-in-the-loop controllers.

So in the second part of my talk, I will briefly touch on two topics. One is using specification mining for bug localization. And the other thing is to really address the problem of uncertainties as well as limit the observabilities in creating the model, the formal models for automated reasoning.

And then I'll end my talk with outlining some directions in future work.

Okay. So just a general introduction to temporal logic synthesis. This is really the proposition that you can automatically construct an implementation that is guaranteed to satisfy its behavioral description. And here psi is some logical formula describing the behavioral description typically in temporal logic, linear temporal logic. And M you can think about it as some finite state machine.

So there's a long history of this problem, first stated by Church, and subsequently a number of people have tried to tackle this problem with different logic descriptions.

And the more notable events started from the introduction of LTL by Pnueli in 1977. And Pnueli and Rosner were able to show that this synthesis problem is 2EXPTIME-complete, which makes it a very intractable problem.

And the other thing that they noticed is that you really need good-quality and complete specifications to be able to make this synthesis process successful.

So people have tried to identify fragments in LTL that are more amenable for synthesis.

So you can actually have an efficient algorithm for synthesizing that particular fragment.

And Piterman, Pnueli, and [inaudible] actually discovered this generalized reactivity(1) fragment, which has a cubic-time algorithm for solving the synthesis problem.

And our contribution is on the second part addressing the issue of needing complete specifications, and we take the angle of trying to generate candidate assumptions.

>>: Wenchao.

>> Wenchao Li: Yes, please.

>>: I have a question [inaudible] nothing to do with the contribution you're describing.

>> Wenchao Li: Right.

>>: What I want to understand is why is this a good thing to do.

>> Wenchao Li: Ah. Yeah. So that's a very good question. So there are a lot of frameworks for doing synthesis, like example-based, by demonstration, and so you can think about -- so maybe your question is really on the intractability of this problem --

>>: No, no --

>> Wenchao Li: -- on the burden of specifying things.

>>: Supposing you solve this tomorrow --

>> Wenchao Li: Right.

>>: -- how would it change the world?

>> Wenchao Li: Right. So maybe in the next -- in a few slides I will show some applications of what people have tried to do with this.

>>: Okay.

>> Wenchao Li: So people mainly try to apply this approach to synthesizing some control components, which are typically difficult to get right in large systems.

And it's good in sort of two senses. One, let's say I have a very efficient algorithm; then in terms of maintenance, if I need to maintain my specification and my specification changes, I can push a button and then I get a new implementation. I don't have to, for example, reprogram everything. Right?

And the other thing is just this is automatic. Of course, I mean, there are problems with these, of course. So there's a huge burden on the user in describing everything and describing everything in the correct way so that the synthesis is really what you --

>>: I guess my main question was that I think the other fundamental assumption here that the thing that is being fed into the left --

>> Wenchao Li: Yes.

>>: -- somehow easier to specify than the thing on the right. If it is just as complex and as difficult to understand, then we don't get anything out of this, right?

>> Wenchao Li: Right. But let's say I don't have the synthesis process. Somebody creates an implementation, and we still spend a lot of effort in trying to do model checking, for example, right? We are still trying to write lots of specifications and trying to check if M satisfies psi. So this is just taking the other direction. We start with psi, and then we just get M.

>>: [inaudible] question is that [inaudible] specs are very important, right, so that to get a decent M your behavioral description has to be fairly complete.

>> Wenchao Li: Right. Correct.

>>: It's not good enough to just have a few properties [inaudible].

>> Wenchao Li: Exactly.

>>: And so then, you know, the complexity of the thing on the left matches the thing on the right --

>>: Right. So what I'm saying is that, you know, if the thing that you're trying to describe on the left, then, you know, just complexity methods is not -- that is required to make programmers use it, there's a [inaudible] syntax for understanding, okay, time out.

[laughter].

>> Wenchao Li: Okay. I'll try to answer -- hopefully answer that question in the Q and A section. Yeah, I think it's -- I mean, it's sort of --

>>: And 50 more slides.

>> Wenchao Li: And it's -- I think that's sort of typically the first question that -- I mean, I would ask or I would get in any presentation that talks about temporal logic synthesis. I think it's a very fair question.

So I think everybody's fairly familiar with linear temporal logic, so I won't spend much time on this. This is really about specifying temporal properties over infinite traces. G, F, X, and U are the temporal operators, and then you have some Boolean operators. And you can use this logic to specify things like every request has to be followed by a grant. So I will just skip over this part.
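As a quick sketch, the request/grant property mentioned here would typically be written with the G and F operators (the signal names below are illustrative):

```latex
% Every request is eventually followed by a grant:
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})
```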

So here are the applications. So since we identified this fragment of LTL and are able to synthesize things from it, it has gained some traction in the community. For example, Bloem and others have applied this to synthesizing circuits of some reasonable size. And Pnueli and Klein have looked at the problem of applying [inaudible] synthesis to programs.

And the real sort of traction is really in the robotics community where they're trying to synthesize planners and controllers by starting with these temporal logic specifications.

And there's a lot more in the space of industrial automation as well as synthesizing the controller for the power distribution system in aircraft.

So, I mean, these are sort of nowhere near probably what Shaz [phonetic] hoped to get at the end, but I think there's enough traction so that we can explore this space further.

So the main advantage of using temporal logic specification, of course, is you get something correct-by-construction. But the caveat here is you really need good quality specifications and maybe complete specifications.

And experience tells us that writing complete formal specifications is nowhere near trivial, even for simple things like an arbiter. And, in fact, in some of the standards that define what the design is, many aspects are just not there. So how do you actually have a way of, you know, using this technology and at the end getting something useful?

Okay. So with this I will start with what these requirements look like in some actual design documents. So TTEthernet is this protocol that is used in aircraft control systems. And here is one specification that is taken from a draft document of that standard.

So basically what this says is when this system is in some state and then it receives some input, then it performs a set of actions in this -- in bullet forms. And this is the way that -- a very common way that engineers write their requirements. Okay? So somebody has to go in and translate this to whatever logic that you want, maybe in LTL, and then you can do your automated reasoning. You can do verification, maybe you're more, you know, daring and you want to do synthesis from the specifications.

Okay. So here is a slightly simpler example from this Isolette design, which really is a design for infant incubators, taken from a requirements engineering handbook released by the Federal Aviation Administration. It has the same flavor: basically it says if, you know, some input is within some range, then some signal will get set to some value. Okay?

But, again, these are the way that people still write specifications. They're not going to write formal specifications. Okay?

So my vision is to make these formal tools more useful and make them more accessible to a large population of nonexpert users. It would be nice to be able to encapsulate these formal tools in some natural language processing layer so that the interaction model is really that the nonexpert user provides some requirements written in a semiformal language and poses queries on the requirements that he writes.

For example, you want to check whether the requirements are consistent or whether you can actually implement the requirements. Okay? But in this process, the user should be sort of oblivious to the actual formal tools; the only things that he understands are maybe some natural language or semiformal language. And that's my vision. I just want to note that this is also part of a project being done at SRI.

So there are a number of challenges in formalizing semiformal and natural language requirements into formal specifications. One of them is that these requirements are typically written in some stylized form. It's not really natural language -- not the sentences that you would see, for example, in a journal article -- but it's, of course, also not already in the form of temporal logic formulas.

So people have tried to tackle this problem of reconciling the syntactic difference. And most of the time they want a domain-specific front end. So there are some templatized languages in which you write your specification, and then it will get compiled down to logic specifications.

But that, you know, poses a really steep learning curve for whoever wants to use that technology. So what we are trying to explore is whether we can adapt a generic NL parser in some way so that it is flexible enough to handle requirements of different forms.

The second challenge is of course that what you get from this parser, from parsing your requirements, is only the syntax. We need to associate certain semantic information with it given the grammatical relations you get from the parser and be able to recognize things like variables, values, logic, and temporal relations.

Okay. So here's a high-level pipeline of how we tackle the problem. Starting with some natural language or semiformal requirements, we preprocess them into a form that a normal generic parser can gracefully handle. And then we try to associate semantics with the output of that parser.

And after that, we apply a set of rules to generate the formulas based on the target language that you want.
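The three stages just described can be caricatured in a few lines of code. This is only a minimal sketch under strong assumptions: the function names, the underscore trick, and the single hard-coded sentence shape are illustrative, not the actual tool.

```python
import re

# Hypothetical sketch of the pipeline: preprocess -> parse -> generate formula.

def preprocess(req):
    # Join multi-word domain names with underscores so a generic
    # dependency parser would treat them as single tokens.
    return req.replace("regulator mode", "regulator_mode") \
              .replace("heat control", "heat_control")

def parse(req):
    # Stand-in for a generic parser; here we only recognize one
    # fixed "if A equals B, then C shall be set to D" sentence shape.
    m = re.match(r"if the (\w+) equals (\w+), then the (\w+) shall be set to (\w+)", req)
    return m.groups() if m else None

def to_ltl(parsed):
    # Translation rule: the antecedent is a state test, and the
    # consequent takes effect at the next step (the X operator).
    var1, val1, var2, val2 = parsed
    return f"G(({var1} = {val1}) -> X({var2} = {val2}))"

req = "if the regulator mode equals INIT, then the heat control shall be set to off"
print(to_ltl(parse(preprocess(req))))
# -> G((regulator_mode = INIT) -> X(heat_control = off))
```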

Okay. So I will use an example to illustrate this flow. So here's the requirement. It says: if the regulator mode equals INIT, then the heat control shall be set to off.

This is a sentence taken from this Isolette example. It's a very simple sentence.

But you realize, sort of recognize a few things. One is there are variables in this sentence: there's regulator mode, there's the heat control. There are values that these variables should take, which come in the form of these named entities INIT and off. And there's a Boolean connective -- the implication in this sentence. And there are also natural language artifacts that should really be filtered out; they're not important in formalizing this into, let's say, an LTL formula.

So we first preprocess this into a form that a generic parser can handle. And the reason, of course, again is that we need to recognize domain-specific names like regulator mode and INIT. These are not going to be things that would appear in Wall Street Journal articles.

And oftentimes you may have some complicated expressions embedded in the natural language sentence, like if A is in the range of some lower bound and some upper bound. Okay, it's in the range of some interval.

For this example, it's very simple. We just, you know, add an underscore between regulator mode and heat control so that basically the Stanford typed dependency parser won't choke up on this. So it actually produces the correct grammatical relations amongst the entities here.

So with that we'll actually get a set of dependencies, and these are called Stanford dependencies. They're in the form of a triplet. You have the name of the relations and then some mentions which can take the form of a governor or can be the dependent of that relation.

So one example is from the same sentences you have the variable -- or the named entity heat control, and this is related to the other mention, the, by this relation called det. So it's a determiner.

So if we apply STDP on this sentence, this is the set of dependencies that we would get in the form of a directed graph. Okay. And we would like to be able to generate a formula from this dependency graph.
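To make the triplet form concrete, here is a small sketch representing such dependencies as (relation, governor, dependent) triples for the example sentence. The specific relation names are my assumption of roughly what a Stanford-style parser would emit, not verified parser output.

```python
# Illustrative Stanford-style dependency triples for the preprocessed sentence
# "if the regulator_mode equals INIT, then the heat_control shall be set to off".
# Each entry is (relation, governor, dependent).
deps = [
    ("mark",      "equals", "if"),
    ("det",       "regulator_mode", "the"),
    ("nsubj",     "equals", "regulator_mode"),
    ("dobj",      "equals", "INIT"),
    ("det",       "heat_control", "the"),
    ("nsubjpass", "set", "heat_control"),
    ("advcl",     "set", "equals"),
    ("prep_to",   "set", "off"),
]

# Example type-rule-style query: a mention governed by a det relation whose
# dependent is "the" is treated as a unique term (e.g. a variable).
unique_mentions = [gov for rel, gov, dep in deps if rel == "det" and dep == "the"]
print(unique_mentions)
# -> ['regulator_mode', 'heat_control']
```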

And the key idea for doing that is we have defined a set of type rules. So a type rule is essentially a mapping from a set of grammatical relations, dependencies, to a set of predicates with built-in semantics.

So here is an example. If I see this particular dependency called [inaudible] with this placeholder which should be mapped to some mention, and the dependent of that relation is the, then we have a unary predicate called unique that essentially says that the argument of it is really a unique term, for example, a variable. Okay?

So with this STDP output, we apply a set of type rules to it. And then we can get another directed graph with our own predicates. Okay.

So what is being shown here is that the blue boxes essentially are shortcuts to these unary predicates and the edges are specifying relations. Okay.

And now we have sort of an easier form to deal with, because we now know the semantics of these relations, and we can generate a formula from there.

So for a specific target language, we apply a set of expression translation rules to this predicate graph. So, for example, if I have this unique with the mention X, I will simply just take the name of it, which is heat control, and so on. And we can apply this set of rules recursively starting from the root.

And here's how the algorithm will work. So by applying this set of expression translation rules, first I would get regulator mode equals INIT, because the first argument of equal is regulator mode and the second argument is INIT. I do the same thing for set, and you will get this. And then eventually it handles the implication. And at the top, that's the formula that we will get. So just, you know, trust me for now, there's really no black magic going on, and you can handle some subset of sentences written in that specific way and be able to generate formulas from them.
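The recursive rule application just described can be sketched as follows, with a hand-built predicate graph for the example sentence. The node encoding and the rule bodies are my illustrative assumptions, not the actual rule set.

```python
# Minimal sketch of recursively applying expression-translation rules
# over a predicate graph. Nodes are tuples: (kind, child1, child2).

def translate(node):
    kind = node[0]
    if kind == "unique":                      # unique(X) -> just the mention's name
        return node[1]
    if kind == "equal":                       # equal(A, B) -> "A = B" (equality test)
        return f"{translate(node[1])} = {translate(node[2])}"
    if kind == "set":                         # set(A, B) -> assignment, next step
        return f"X({translate(node[1])} = {translate(node[2])})"
    if kind == "implies":                     # implication at the root
        return f"({translate(node[1])}) -> ({translate(node[2])})"
    raise ValueError(f"unknown predicate kind: {kind}")

# Predicate graph for: "if the regulator mode equals INIT,
# then the heat control shall be set to off"
graph = ("implies",
         ("equal", ("unique", "regulator_mode"), ("unique", "INIT")),
         ("set",   ("unique", "heat_control"),   ("unique", "off")))
print(translate(graph))
# -> (regulator_mode = INIT) -> (X(heat_control = off))
```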

Okay. So now that we're able to translate some natural language requirements into properties, there will be queries that the user would like to ask, and we'd like to provide feedback also to these nonexpert users.

>>: From that sentence, do you infer a frame assumption?

>> Wenchao Li: Do I infer a flame --

>>: A frame assumption.

>> Wenchao Li: A frame assumption. Which means that it happens at the same --

>>: Which means that nothing else in the world changes except the one variable that I'm supposed to set.

>> Wenchao Li: So there would be lots of sentences. For each of them, we just sort of take the context-free approach. We don't assume there's any context in this work.

>>: Yeah, but if I have some variable that's not mentioned to be set in any sentence, does it not stay the same?

>> Wenchao Li: Um...

>>: I mean, would that not be a missing part of the specification in that case, a missing requirement that's implicit?

>> Wenchao Li: Right. But, I mean, whose responsibility is that, right? I mean, the users really need to specify something.

>>: Maybe you misinterpreted the [inaudible] and maybe it's inherent [inaudible] description that if I don't [inaudible] this is like McCarthy 69.

>> Wenchao Li: Right, right.

>>: It's not a new problem.

>> Wenchao Li: Right. Yeah. Right now we don't handle context across sentences. We parse them independently. But you're right. There could be -- I mean, there could be even ways that says, you know, after this sentence is that, no, it shall be -- get sent to force. I mean, after do this co-reference resolution across sentences, and currently we don't do that.

But for this particular example, everything just happens to be written nicely in that document. And we looked at a few documents as well, and typically people try to be precise about what is being set to what.

So there still could be a lot of context, a lot of also redundant stuff. But for this particular effort, we look at the set of sentences that are essentially labeled as requirements, requirement by requirement, and we just take those requirements.

>>: Okay. Offline.

>> Wenchao Li: Yeah, yeah, yeah.

>>: If the requirements are very precise, what amount of [inaudible] take to just get them to [inaudible]?

>> Wenchao Li: Yeah. So we actually came across -- so this is actually an easy one. But we have come across requirements that span over seven lines. And it's a single sentence. And it, you know, has and's and or's, it has a number of actions, it has some temporal relations.

So it could be argued that it's -- you know, if you have sufficient knowledge of temporal logic, maybe you can do that. But in practice that's just not the case. And that's just the way that is, and that's how requirements are written. Yes. Yes, please.

>>: Just like the example you gave -- a complicated sentence with seven lines generates some formula -- how do you know that what you generate reflects the complicated sentence? I mean, maybe even a human doesn't really understand what's there. I mean, you're generating something; do you have some way to --

>> Wenchao Li: Yeah. So right. So I was hoping this would be a precursor into some of the more technical stuff that I wanted to describe.

So we have developed some metrics to evaluate the reliability of this NLP pipeline, basically checking whether the formulas really truly reflect what the sentences are saying.

But yes. Yes. I think it's an important part.

>>: And so -- and you don't need to say the next time heat control equals off here.

>> Wenchao Li: Yeah, yeah. So there's the implicit knowledge. So actually in the flow that I show --

>>: Those two equal signs don't mean the same thing.

>> Wenchao Li: They don't mean the same thing. For LTL here, the first one is really assignment, the second one is equality. Right. So we have other targets we actually generate models [inaudible] and stuff which actually have type information, have variables and values.

But for this purpose, all we care about is --

>>: How do I know what is the interpretation of the equals sign on the two sides of the implication? I mean, the second segment seems to be an assignment to me.

>> Wenchao Li: Yeah.

>>: Seems to say change the value of heat control off.

>> Wenchao Li: Right.

>>: All right. So that means heat control is going to be off at the next time.

>> Wenchao Li: Yeah, yeah.

>>: But might be on now.

>> Wenchao Li: Yeah, yeah. So I tried to sort of skip some of the technicalities here. So there is -- you're exactly right. That's actually the information that the user -- the only information that we require the user to provide for the formula generation.

So we look at these as requirements for transition systems. And oftentimes it is implicit which are state variables and which are really just wires. So the user has to say, you know: we have figured out these are the variables and these are the domains of the variables, but now you have to say whether a variable really is a state variable, or is just a pure output, or maybe just a local wire. So that's the information that we need.

And after that we either put a next there or we don't. Yes. So I can take this offline. There is probably a much longer discussion.

So here there are a few things that the user might like to query. For example, are the requirements consistent? This is assuming that the properties are correct. This is just LTL satisfiability. And the second part is really whether you can implement something that satisfies the requirements. And if not, we hope to provide some feedback to the user on how to fix it.

Okay. So the problem of -- the question of asking whether the requirements can be implemented is really the realizability problem in temporal logic synthesis. We know that this is an intractable problem, but there are efficient algorithms for a fragment of LTL.

And for the feedback: essentially when the specification is not realizable, so you cannot find an implementation for the specification, can we do something about it? Can we, let's say, generate some feedback to the user in the form of: these are the assumptions that you might have missed.

So just a very brief overview of what GR(1) is: it's in the form of some assumptions implying some guarantees. And essentially you restrict the syntax to describe the initial states, transitions, and fairness -- essentially acceptance states.

And solving the synthesis problem is also solving the GR(1) game, a sort of two-player game. And the complexity is polynomial: it's quadratic in the size of the state space and linear in the number of transitions.
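As a sketch, a GR(1) specification has the shape below, with an environment side (assumptions) implying a system side (guarantees), each built from an initial condition, transition invariants, and fairness conditions. The superscripts and indexing are my illustrative notation:

```latex
% GR(1) template: environment assumptions imply system guarantees.
\psi \;=\;
\Bigl(\psi^{e}_{init} \;\wedge\; \mathbf{G}\,\psi^{e}_{trans} \;\wedge\; \bigwedge_{j}\mathbf{G}\mathbf{F}\,\psi^{e}_{fair,j}\Bigr)
\;\rightarrow\;
\Bigl(\psi^{s}_{init} \;\wedge\; \mathbf{G}\,\psi^{s}_{trans} \;\wedge\; \bigwedge_{k}\mathbf{G}\mathbf{F}\,\psi^{s}_{fair,k}\Bigr)
```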

Okay. So let's look at a concrete example. So the specification is psi equals some environment assumption implying some system guarantee, with two variables, X and Y, X being the input and Y being the output.

So what the environment assumption says is that, globally eventually, X should at some point be false -- that is, X is false infinitely often. And what we require of the system is that whenever X is false, the output should also be false. But also, from any state of my computation, eventually at some point Y should become true.

Okay. So this is satisfiable, because there is an input trace for which you can produce an output trace that satisfies these requirements. But it's not realizable, because it doesn't work for all inputs satisfying that assumption. For example, the environment might play a sequence of zeros, and then you are forced to produce a sequence of zeros at the output. And this would violate the last guarantee, which requires that Y be true infinitely often.
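To make this concrete, here is a tiny brute-force check (my own sketch for illustration, not the talk's tooling): with the input X held at false forever, the safety guarantee G(!X -> !Y) leaves only the all-false output on any finite prefix, so the liveness guarantee GF Y can never hold.

```python
# Toy illustration (my own sketch, not the talk's tool): with the input X
# held at False forever, the safety guarantee G(!X -> !Y) forces Y to be
# False at every step, so the liveness guarantee GF Y can never hold.
from itertools import product

def allowed_outputs(xs):
    """All finite output traces satisfying G(!X -> !Y) for input trace xs."""
    return [ys for ys in product([False, True], repeat=len(xs))
            if all(not y for x, y in zip(xs, ys) if not x)]

xs = [False] * 6                     # environment plays X = 0 forever
print(allowed_outputs(xs))           # only the all-False output survives
```

With any input that is sometimes true, other outputs become possible, which is why realizability hinges on the worst-case environment, not on one satisfying trace.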

Okay. So now what do we do? We want to be able to say that maybe the user has missed some assumptions, and we want to recommend assumptions such that, if you make them, the spec becomes realizable. Unrealizability is often caused by underspecifying the environment. Okay.

So we want to produce these assumptions. But assumptions are really things that you don't know you are making. So how do we do that? We exploit some structure in the GR(1) synthesis process. The message here is that you can compute the winning region using some [inaudible] formulas, and then you can extract a strategy from the winning region. Okay? Essentially it's a relation, and then you can find a circuit that is consistent with that relation. That is not really our contribution.

And if the specification is not realizable, that means you can also compute a winning region for the environment, and it has a nonempty intersection with the initial states. And from that winning region, you can extract what is called a counterstrategy, which is a strategy for the environment to win.

And if it's not realizable, then we know that some initial state is in this winning region. And the key idea of providing this feedback, of generating these assumptions, is that we want to find an additional assumption psi such that it prohibits this counterstrategy.

Okay. So back to the same example. This is what would be generated as a counterstrategy, on the bottom left. You have some finite memory -- so it's not a memoryless strategy -- and you start with some initial memory content. And what the relation at the end essentially says is that, given the state and the memory content at that state, this is the next input the environment should pick in order to win, and then you update the memory content.

So this is how the counterstrategy would look in the form of a graph. We have three initial states on the left, and according to this strategy, the environment can just pick not X and force the system to output not Y, and then it will be [inaudible] in this state.

Okay.

So the observation that we make is that this kind of strategy really satisfies some specification. For example, here is one that says eventually, globally, you will get not X. Okay.

So this contains the behaviors that can be exhibited by this kind of strategy graph. And if we assert the negation of this property in the form of an additional assumption, then it rules out this kind of strategy. In this case, if you rule this part out, the resulting specification becomes realizable.

Okay? So that's the idea here.

So we call this approach a counterstrategy-guided approach. And the graph that you see is nothing but a discrete transition system that contains all the game states reachable when the environment adheres to that particular counterstrategy.

And the general solution that we have proposed is: we start with some candidate assumption psi, and all we need to do is check that it is consistent with the current assumptions and that it helps to rule out the behaviors exhibited by the counterstrategy. So we can check whether the counterstrategy graph satisfies the negation of psi. If it does, then we add psi to the original assumptions and see if the resulting specification becomes realizable. If not, then we iterate, starting with a new candidate or trying a different counterstrategy.

Now, the question here obviously is how we pick this candidate assumption. Here is how we consider the problem. We want the assumptions to be in a form that lets us take advantage of the synthesis process. We want them to be simple formulas, because we are essentially generating them as feedback to the users, so they shouldn't be too hard to understand. And we also want them to be representative: we want to cover the three types of specifications, namely initial conditions, transitions, and fairness.

So we use a simple approach based on template generalization, and here are the templates that we use. These are very simple formulas with at most two literals over the input variables.
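As a rough illustration of template generalization (the concrete template strings below are my own guesses at the shape, not the exact set from the talk), one can enumerate candidate assumptions over the input variables, one family per specification type:

```python
# Sketch of template-based candidate generation: fairness, initial-condition,
# and transition templates over input literals, each with at most two literals.
# The exact template set is hypothetical; the talk's templates are analogous.
from itertools import product

def candidate_assumptions(inputs):
    lits = list(inputs) + [f"!{v}" for v in inputs]
    cands = [f"G F ({l})" for l in lits]                                # fairness
    cands += [f"({l})" for l in lits]                                   # initial
    cands += [f"G (({a}) -> X ({b}))" for a, b in product(lits, lits)]  # transition
    return cands

print(candidate_assumptions(["x"]))
```

Each candidate would then be run through the counterstrategy check described above, which keeps the search space small and the feedback readable.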

There's some related work on synthesizing a sort of minimal assumption for LTL synthesis. But the result is that if you represent it as a single Büchi automaton, it becomes very complicated, even for a few lines of specification. And if you produce that as feedback to a nonexpert user -- or even to an expert -- it takes a lot of time to really understand whether it truly represents what you have in mind about the environment.

So from a practical point of view, the theoretical results are nice, but they're not very useful.

And of course, once we define our templates, we can do some optimizations. This is, again, the counterstrategy graph, and there are only two kinds of violations here. One is reaching a terminal state, which means that once the environment makes a move, the system cannot choose anything without violating a safety specification. The other is being stuck in some strongly connected component, where you would violate some fairness specification.

So there are only these two kinds of violations, and each corresponds to one of the templates that we showed. And of course you can find some cuts in this graph such that you never get to any of these failure states, and that corresponds to the transition type of properties that we want as an assumption.

And, again, this can all be done symbolically, and the relations representing the counterstrategy are also represented symbolically.

Now, with this sort of technology, which is really a counterstrategy-guided framework, we can do something about the specifications that we generated in the NLP stage. Okay.

So we picked 15 natural language requirements from a particular module in this Isolette design. These are actual requirements written for this requirements engineering handbook. We were able to correctly translate them into LTL formulas, and they happen to also be in GR(1).

And now it turns out that these specifications are satisfiable, but they are not realizable.

And we want to provide some feedback to the user, because if I just say, okay, [inaudible] implementation that can satisfy your requirements, it's not very useful. So you want some sort of debug information. And the kind of information that we choose to provide is in the form of an assumption.

So, applying this assumption mining technique, we can generate an assumption of this form, which essentially says these two variables cannot be true simultaneously. When we looked back at the documentation, the reason is actually quite simple: there's this state machine that you actually want to implement at the end, and the state variables are INIT, normal, and fail.

So when the regulator status is true, you transition to the normal state. When the timeout is true, you go to the fail state. But it doesn't really specify what happens when both of these are true. So there's some nondeterminism that an implementation might resolve either way, and from a design point of view, that's not desirable.

Okay. But we're able to generate this assumption.

Now, of course, again, assumptions are things that you don't know you are making. So what if the user just refutes the assumption -- no, this is not what I know about the environment? Then what can we do?

Okay. So it turns out that there's some class of systems where we can monitor the assumptions, and there exist fallback mechanisms if the assumptions are violated.

So these are what we call human-in-the-loop systems. And in fact, many safety-critical systems interact with humans. The correctness of these systems depends not only on the correctness of the autonomous controller but also on the actions of the human and the interaction between the two.

So examples are surgical robots, systems in the flight cockpit, and also self-driving cars.

So if you look at a semiautonomous vehicle as an example, you have some control loop that tries to actuate the car; you sense from the environment how the car moves, and then you actuate again.

So there's some decision being made on the left-hand side, which represents the controller. And what we call human-in-the-loop controllers are really a composition of three components: you have the human -- in this case, a driver -- you have some auto-controller that is mostly good and can perform most of the functions, but you also have this component called the advisory controller, which essentially says there are conditions during which the driver should really take control, and otherwise, most of the time, you can basically let the autonomous controller drive the car.

Okay. So if you look at this, there is essentially some [inaudible] going on, and the decision is made by the advisory controller. But of course there's some lag between when the signal is issued by the advisory controller and when the driver can respond. So we have to incorporate this human response time in synthesizing the controller as well.

And the key question in automatically coming up with the advisory controller as well as the autonomous controller is deciding when to switch control. If you switch control too late, the car may just crash.

So this is inspired by a recent standard, or statement, made by NHTSA that specifies the levels of automation we'll be looking at in the near future for semiautonomous vehicles.

It basically says that most of the time you can let the auto-controller take control, but there are scenarios where you need to transition control back to the driver. And it should be done in a way that allows sufficient transition time. Okay.

So, based on this, we formalized criteria for human-in-the-loop controllers. The details are in this upcoming TACAS paper; these are informal descriptions. Essentially, the controller needs to be monitorable -- you can only monitor up to the next input provided by the environment, so we look at this as a reactive system. It should be minimally intervening, in the sense that it should minimize the probability that human intervention is needed, and so on.

Okay. So we have established these criteria for human-in-the-loop controllers, and we want to synthesize such controllers. Here is a quick run-through of the algorithm. If we take the approach of synthesis from temporal logic specifications, we notice that we can essentially monitor the transition assumptions I described in the previous part, on counterstrategy-guided assumption mining. But we want to find a set of assumptions that satisfies the criteria we established: they should be minimally intervening, and they should allow sufficient time for the driver to respond.

So, again, if we look at this counterstrategy graph, what we can do is introduce some parameter T that represents the amount of time the human needs to respond. So T is the number of transitions before reaching some failure state, and essentially we just remove that part from the picture. Okay.

And now we want to satisfy the criterion of being minimally intervening, so we assign weights to the edges, and we make the assumption that the environment is uniformly random in choosing its inputs.

And you can solve this as a standard s-t min-cut problem once you have the remainder of the graph. And you can synthesize the assumptions in the form of the advisory controller: whenever some assumption is violated, it signals the human operator so that you have at least T transitions, or some T seconds, to respond to that failure.
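Here is a minimal sketch of the pruning step that precedes the min-cut (my own explicit-graph toy reconstruction; the actual tool works symbolically): a backward BFS marks every game state within T transitions of a failure state, and those states are removed before the cut is computed.

```python
# Toy reconstruction of the T-step pruning on an explicit counterstrategy
# graph (the real computation is symbolic): mark states within T transitions
# of a failure state; these are removed so the driver always has >= T steps.
from collections import deque

def states_within_T_of_failure(edges, failures, T):
    rev = {}
    for u, v in edges:
        rev.setdefault(v, []).append(u)
    dist = {f: 0 for f in failures}        # backward distance to a failure
    q = deque(failures)
    while q:
        v = q.popleft()
        if dist[v] == T:
            continue                       # beyond here the driver has > T steps
        for u in rev.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return set(dist)

# Chain 0 -> 1 -> 2 -> 3 with failure state 3: with T = 2, states 1, 2, 3
# are pruned and only state 0 survives.
print(states_within_T_of_failure([(0, 1), (1, 2), (2, 3)], [3], 2))
```

The min-cut over the remaining weighted graph then gives the set of transition assumptions the advisory controller monitors.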

Okay. So we have some more details in the paper. That's basically the first part of my talk, but I'm probably running out of time, so I want to quickly touch upon two more things.

One is using specification mining for bug localization, and the other is addressing uncertainties and limited observability in models.

So I want to highlight one piece of work that we did, which uses this idea called sparse coding. We formalize specifications as basis subtraces, which I'll explain in a bit.

And if we do that, the behaviors of a design essentially form a subspace spanned by these specifications. And then the problem of error localization is essentially finding a subtrace that lies outside this subspace. Okay. I'll describe this in the next few slides.

So we are really drawing inspiration from the machine learning literature, and in particular sparse coding. Sparse coding is a way of uncovering latent structure. For example, people have applied this algorithm to feature learning in image classification.

So what you do is you start with some image, divide it into multiple smaller images, and require that each of them be expressible as a linear combination of only a few basis images. And it turns out that these basis images actually correspond to edge detectors. So you're essentially getting edge detectors in an unsupervised way -- we don't have to define what edge detectors are.

Okay. So we borrow the same idea, and we express each subtrace as the Boolean combination of a few basis subtraces. So you can think about --

>>: What is a subtrace?

>> Wenchao Li: Yeah, I'll explain on the next slide. So, given a trace of the design, a subtrace is a particular window of that trace, of a fixed length. And we formalize this problem, called sparsity-constrained Boolean matrix factorization.

Okay. So how does this look in the Boolean space? Let's say we have some set of events, with the timeline going to the right. Say in the first time step events A and B are true, and in the next time step C becomes true. We can write this down as a subtrace, represented as the matrix over there: 110 and 001 representing the first subtrace. And then you have this.

And now, if you have some trace of the design, you want to be able to express it as a superposition of the basis. The idea behind this is that if you have some event at time T1, then in the next time step it may trigger some other events, and there is a collection of multiple such basis elements. But you want the basis to be able to express all the possible subtraces that the design can exhibit.

Okay. So we formalize the specification as a set of basis subtraces of a particular length L, under some sparsity constraint: an integer describing the maximum number of basis elements that you can superimpose to get any subtrace.

And now we can talk about completeness of specifications. In the logic setting, you can have a specification satisfied by the design, but you can keep adding specifications; there's really no end to that.

But in this formalization, you can reason about completeness with respect to some length L: if every subtrace of length L that can be exhibited by the design can be described by the basis under the sparsity constraint, then the basis is complete.

Okay. Now, the specification mining problem becomes: given some example behaviors in the form of a set of subtraces, find a basis B that can describe all these subtraces and potentially more. So we want to span these subtraces.

Okay. So to make things a little more concrete: you're given a trace, where the rows are signals and the columns are cycles. Let's say I choose the subtrace length L to be 2. You can collect each subtrace into a vector -- so you have 1100, which I just write down as a column vector. And this trace can then be represented as a Boolean matrix whose columns are the individual subtraces.

And you want to factorize this matrix into a product of two matrices: the first represents the basis, and the second tells me which basis elements I need to use to construct, or describe, each subtrace. And of course the product here is not the usual matrix multiplication: over the Boolean domain, multiplication is AND and addition is OR. Okay.
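A small sketch of this Boolean product (my own toy matrices, not the talk's example): each column of the factorization is the element-wise OR of the basis columns selected by the coefficient matrix.

```python
# Boolean matrix product where multiplication is AND and addition is OR.
import numpy as np

def bool_matmul(B, C):
    """X[i, j] = OR over k of (B[i, k] AND C[k, j])."""
    B, C = B.astype(bool), C.astype(bool)
    return np.logical_and(B[:, :, None], C[None, :, :]).any(axis=1).astype(int)

# Two hypothetical basis subtraces (columns), each a flattened length-2
# window over 2 signals.
B = np.array([[1, 0],
              [1, 0],
              [0, 0],
              [0, 1]])
# Coefficients: which basis columns build each observed subtrace (each
# column has at most 2 ones, i.e. a sparsity constraint of c = 2).
C = np.array([[1, 0, 1],
              [0, 1, 1]])
print(bool_matmul(B, C))
```

The third reconstructed column is the OR of both basis columns, which is exactly the superposition the factorization is after.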

>>: Some of those coefficients are required to be 0, I guess.

>> Wenchao Li: Yes.

>>: The ones that would correspond to overlapping different traces or something?

>> Wenchao Li: Yes. So some of these coefficients will be 0, and this is due to the sparsity constraint. Basically, if the sparsity constraint is C, then you can have at most C ones in each column of the coefficient matrix.

>>: Why is sparsity important?

>> Wenchao Li: Because it actually helps you uncover structure. Suppose I don't enforce sparsity: you can have a very trivial basis, the identity matrix, which tells you nothing about the structure of the trace. So you want to enforce some sparsity so that each portion of the trace you observe is really a composition of a few behaviors, not everything. Right? And then, for different subtraces, you can pick from the basis set. Okay.

So I won't go into detail on how to solve this. The idea is that doing this Boolean matrix factorization is equivalent to finding a biclique cover of the bipartite graph represented by that Boolean matrix, under the sparsity constraint.

Okay. And with this specification formalism, we can also formulate the error localization problem. Here is a pictorial illustration. Let's say this is the space of all possible subtraces, and these are the correct subtraces that I have observed. From these subtraces I learn a basis B that potentially spans more subtraces, but including the correct ones.

Now, if a subtrace in an error trace lies outside this space, then I know that the error should happen in that particular window. So I'm doing localization in time using this approach.
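Continuing the toy example above, here is a hypothetical brute-force membership check (exponential in the basis size, purely for illustration; the real method works on the factorization): a window lies in the "span" if it is the OR of at most c basis columns, and the first window outside the span localizes the error in time.

```python
# Brute-force span check for illustration: a subtrace is in the span if it
# equals the OR of at most c basis columns. The first window outside the
# span is reported as the error location in time.
from itertools import combinations
import numpy as np

def in_span(x, B, c):
    for r in range(1, c + 1):
        for idx in combinations(range(B.shape[1]), r):
            if np.array_equal(B[:, idx].any(axis=1).astype(int), x):
                return True
    return False

def localize(subtraces, B, c):
    """Index of the first window not expressible from the basis, else None."""
    for i, x in enumerate(subtraces):
        if not in_span(x, B, c):
            return i
    return None

B = np.array([[1, 0], [1, 0], [0, 0], [0, 1]])   # two basis subtraces
windows = [np.array([1, 1, 0, 0]),               # = column 0: in span
           np.array([1, 1, 0, 1]),               # = column 0 OR column 1
           np.array([1, 0, 0, 0])]               # not expressible: bug window
print(localize(windows, B, c=2))                 # -> 2
```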

So here are some experiments that we did. By enforcing the sparsity constraint suitably, we can find meaningful specifications. For example, this is a simple two-port arbiter. And what happens is that if there are two competing requests, one of them will wait and get serviced in the next cycle. Okay? We don't have to specify any template or logic formula to generate this type of basis.

And then we applied this error localization to a chip multiprocessor router, and it outperforms the baseline approach quite significantly, by 50 percent. Okay.

>>: What does error localization have to do with mining specifications?

>> Wenchao Li: So what I want to do is: I essentially have good traces, and then I may get an error trace -- here the error manifests very late, at the end. And the first problem I want to solve is figuring out where in time the error actually occurred.

>>: So the interface you got was --

>> Wenchao Li: Is a labeled trace.

>>: In the field there was some bug happening.

>> Wenchao Li: Yeah, some bug happens. I want to localize it in time. Yeah.

>>: So what would the size of my minimum basis be if my circuit was [inaudible] binary [inaudible]?

>> Wenchao Li: It depends also on the sparsity constraint. Because the basis here can be overcomplete. So depends on two parameters. One is the length of the subtrace --

>>: So let's suppose I don't have a sparsity constraint. What would the minimum basis be?

>> Wenchao Li: You can always just use the identity matrix.

>>: [inaudible] sparsity and low rank.

>> Wenchao Li: Yeah.

>>: Right? You can also define a low rank, and then [inaudible] the identity matrix is no longer an option.

>> Wenchao Li: Right. Right. So, I mean, it can just be whatever matrix you get to represent your subtraces.

So, for this particular case, we are really looking at it from the angle of trying to localize errors and trying to find things that really reflect some behaviors in the design.

So, for example, one approach is: I do the same thing, and then maybe I do some principal component analysis, and from that maybe we can look at, let's say, the lower-rank components and do some [inaudible] detection with them.

But notice that this is over the Boolean space, and the usual vanilla PCA doesn't apply there. So, again, there are two goals. Maybe we could take this offline as well. There's one more thing I want to cover.

>>: I mean, how does this technique work [inaudible]?

>> Wenchao Li: Yeah. So right now, doing this basis learning can actually be quite expensive. It depends on the sparsity of the matrix you're trying to factorize. If you have very sparse events, then the bipartite graph you get has a small number of edges, so the number of bicliques is relatively small, and you have a smaller set to choose from to form the basis. If you have a fairly dense matrix, then it's very expensive to learn the basis. Yes.

So it works well for sparse matrix.

Okay. So the last thing I want to quickly touch on is addressing this gap on the model side of things: I want to look at uncertainties and limited observability.

And I want to focus on dealing with uncertainties. Typically, people want to do probabilistic model checking, which means your model has some probabilities, and then we take that model and check some properties against it.

But in reality, these models are usually estimated from observations. So it's not really correct to just pick a single number for each transition probability: that number is really just a mean that you estimated, and there's some uncertainty envelope around it. So whatever quantitative analysis you do with probabilistic model checking is only correct with respect to this model, not with respect to your observations.

Okay. So this is why we want to incorporate uncertainties and be able to deal with them efficiently. On the theory side, we were able to improve the complexity of these [inaudible] to polynomial time, and we really have an actual algorithm to model check these properties.

And the models here are either discrete-time Markov chains or Markov decision processes.

But the transition probabilities lie inside some region. Okay? For these types of regions, we can typically handle a subset of convex uncertainties. And this already extends the previous best prior work, which only handled intervals.

Okay. So for the algorithm that we have developed -- we have our own tool, but the algorithm has also been incorporated into PRISM, the probabilistic model checker.

And the good thing about this is that, for the same model, we can do it in comparable time, and in many cases even faster than for the model without uncertainty.

So I will describe a little bit of what the key idea here is.

So we start with some stochastic models, and you want to generate formal models in the form of, let's say, a discrete-time Markov chain. And then you want to reason about them with respect to a logic, specifically probabilistic CTL.

And the bottom line here is that these models are really constructed or estimated from observations.

I want to say something about these models. One example: I want to ask what the maximum probability is that some critical fault happens during the execution of this system. And there are some inherent uncertainties -- for example, in a wireless network there are noisy channels, and that probability you really have to estimate. You can't just assume it is 0.9 or 0.8.

For biological networks, we don't understand much of the dynamics, and again you estimate from observations.

So it turns out that one key procedure in doing PCTL model checking is checking the unbounded until operator. And there's a classical linear programming formulation for checking these properties.

So, first we have to figure out the set of states that satisfy this property with probability 0 and those that satisfy it with probability 1 -- these are denoted S_no and S_yes. And then you have this LP formulation. What the constraints essentially express is the probability distribution: when you are in a state s and you take action a, this is the probability distribution over the next states that you can get to in the Markov chain.

Okay. So if we have uncertainties, the last row becomes a maximization over some region C. And the problem is that you cannot directly solve this, because you now have infinitely many points in this region: each transition probability lies in, for example, some interval, and there are infinitely many points in that interval. So you cannot directly solve this optimization problem.

So we make a very simple observation and use the idea of duality in convex optimization to address this problem. We know that if the conditions for duality hold, the dual function always overestimates the primal function. In this case, the primal function is the maximization that we have, and we can come up with some dual function. Okay?

So what happens if we replace the primal with that dual function? Since it always overestimates the primal solution, we can put it on the right-hand side of the inequality. And better, we can drop the minimization over the dual variables: if the left-hand side is greater than the dual function at any point, it is greater than its minimum, and the optimization effectively drives the dual to that minimum.

And that is the key.

And, again, this becomes just an LP for interval constraints, or interval uncertainties, and we can solve it with standard LP techniques. This is how we get the polynomial-time algorithm.
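For the interval case specifically, the inner maximization also admits a simple greedy solution, which makes the idea easy to demonstrate. The sketch below is my own reconstruction using value iteration with that greedy inner step, rather than the talk's dual/LP formulation, and it assumes the intervals are consistent (sum of lower bounds at most 1, sum of upper bounds at least 1).

```python
# Max reachability in an interval Markov chain, sketched with value
# iteration. Inner step: maximize sum_i p_i * v_i over distributions with
# lo_i <= p_i <= hi_i and sum_i p_i = 1, by starting every p_i at its lower
# bound and pushing the leftover mass toward the highest-value successors.

def max_over_interval(v, lo, hi):
    p = list(lo)
    slack = 1.0 - sum(lo)
    for i in sorted(range(len(v)), key=lambda i: -v[i]):
        add = min(hi[i] - lo[i], slack)
        p[i] += add
        slack -= add
    return sum(pi * vi for pi, vi in zip(p, v))

def max_reach(trans, target, n, iters=200):
    """Max probability of reaching `target` states.
    trans[s] = list of (successor, lower_bound, upper_bound)."""
    v = [1.0 if s in target else 0.0 for s in range(n)]
    for _ in range(iters):
        for s in range(n):
            if s in target or not trans.get(s):
                continue
            succ, lo, hi = zip(*trans[s])
            v[s] = max_over_interval([v[t] for t in succ], list(lo), list(hi))
    return v

# State 2 is the target, state 1 is absorbing; from state 0 both transition
# probabilities lie in [0.3, 0.7], so the maximal reach probability is 0.7.
trans = {0: [(1, 0.3, 0.7), (2, 0.3, 0.7)], 1: [(1, 1.0, 1.0)]}
print(max_reach(trans, {2}, 3))   # approximately [0.7, 0.0, 1.0]
```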

And of course this generalizes to some other convex models. For example, if you have an ellipsoidal model of uncertainty, the optimization problem we are trying to solve becomes an SOCP. Okay. So that is really the idea.

And to conclude my talk, I will try to describe some ongoing work as well as future directions.

So in the short term, I want to look at cyber-physical systems. I'm working on a contract language for Simulink models, as well as a description of the architecture that the model would be implemented on.

I want to leverage SMT techniques [inaudible] in that domain. One immediate extension of the assumption mining work is compositional reasoning, or compositional synthesis, in a lazy way. You can imagine that you just specify the guarantees of two systems interacting with each other and try to synthesize both. You can put assumption mining in the loop so that you lazily come up with assumptions for one system, which then become guarantees for the other system. You can continue until you are able to synthesize something.

Okay. Or conclude that it's simply not synthesizable if all assumptions are refuted.

And there's also composition in the vertical direction, which is not explored very much, and this is related to the first direction I'm exploring. I want to be able to incorporate platform constraints, or assumptions, and really reflect those in my contract language.

So I have some computation models for my contracts, and once I bring in my architecture and platform description, I should be able to say how much my contracts would change.

And of course I want to apply this to more domains, like programs and distributed systems. In particular, I think the sparse coding work will have applications in bug localization in distributed systems.

So in the long term, I want to do specification-driven assisted design. This doesn't really have to be temporal logic synthesis; we can imagine applying specification mining to a large corpus of artifacts, learning similarities among them, and using those to assist the state-of-the-art synthesis process.

I want to explore whether you can program with a natural-language-like language, and really understand how user interaction is modeled in this programming paradigm.

And I want to extend assumption mining to the security domain -- for example, explicating security assumptions -- and to other applications.

So I just want to end my talk with this picture. I think there's really an emerging trend of cyber-physical systems today, and formal methods and automated reasoning have a lot to offer, not just in the cyber space but also in this larger area that contains networks, human interactions, and so on -- lots of physical entities -- with a lot to do in verification and synthesis of programs interacting with the real world. And that's the end of my talk. Thank you very much. I'll be happy to take questions.

[applause].

>> Nikolaj Bjorner: Any questions?

>>: Are we late, or is there time for questions? So in your NL specification scheme: if I have an NL specification that contains N Boolean variables, then my game graph would be 2 to the N, right? So there seems to be a disconnect between the synthesis complexity of this GR(1) language and the size of that game graph. So how do you actually get from the natural language specification to the GR(1) specification? Is that a linear translation?

>> Wenchao Li: So I think you're talking about two things -- let me see if I actually understand you correctly. One is the complexity of the algorithm going from natural language specifications to formal specifications.

>>: GR(1), yeah.

>> Wenchao Li: To GR(1). If they're already in the form of GR(1) -- so even though they're in natural language, if they're expressible in GR(1) -- then you're fine. Then it's just the simple translation that I showed.

>>: I'm asking whether my specification really translates linearly to GR(1). If I have a bunch of Boolean variables, like the heater is on, the heater is off, and so on, is there then a linear translation to GR(1), or is that --

>> Wenchao Li: You're talking about domains of variables.

>>: What's that?

>> Wenchao Li: You're talking about domains of the variables.

>>: Let's say all the variables are Boolean and I have N of them.

>> Wenchao Li: Okay. Right, right, right.

>>: Right?

>> Wenchao Li: Yeah. So it's a natural translation -- just a linear translation to GR(1).

>>: Yeah. But the size of my game graph is going to be exponential. So is the synthesis still polynomial?

>> Wenchao Li: So the state space of the game graph is exponential in the number of variables. The algorithm is polynomial in the size of the game graph. Okay. Which is exponential in the number of variables. Yeah.

>>: Isn't that an upper bound?

>> Wenchao Li: But that is also a lower bound. I mean, if I, as a human engineer, take these requirements and try to implement the system -- say I don't have prior knowledge and cannot borrow things I have done before -- then I'm essentially facing the same complexity. There's no --

>>: Yeah, but human engineers don't really solve exponential problems. So, you know, I'm taking this thing that is essentially a description of an algorithm: it says if this happens, set this variable to that.

>> Wenchao Li: Right.

>>: And you're now turning implementing it into an exponential problem.

>> Wenchao Li: Oh. So the result of this -- the state space of the implementation may not be exponential in the number of variables. It can be, but typically -- I mean, there is a lot of work on optimizing the implementation that you get from the synthesis process.

So let's say there exists an implementation with a very small state space, and the current synthesis approach is not going to get to that implementation; then of course it is more advantageous for the human to do it. But that requires a lot of expertise, right? You need to know lots of optimization tricks to be able to come up with that implementation.

I mean, conceptually the engineer is really doing the same thing. He has to overcome this complexity himself.

>>: No, the engineer decomposes the problem into a bunch of separate variables, so the engineer is never constructing an --

>> Wenchao Li: Right. So it's the same thing here. I mean, we're not trying to say that we are going to synthesize the entire system; we're going to do this component by component. That's the proposition. And really we're looking at the sort of difficult control state machines, and that's the part where I think this technology may shine.

So personally I acknowledge that there are still a lot of problems, especially on the specification side.

Based on my experience -- and actually that's part of the reason why I wanted to do assumption mining -- even for a very simple design, when we tried to write out all the specifications in GR(1) needed to describe the design, it turned out that the number of lines of specification was much larger than the number of lines of code needed to implement the design. So the burden, the complexity, has really been shifted to writing the specification. But once you get that part right, then hopefully you have an efficient synthesis algorithm to get to your implementation. Yes. Yeah.

>> Nikolaj Bjorner: Okay. It's almost noon. Thank you again.

[applause].

>> Wenchao Li: Thank you for taking your time.
