>> Rustan Leino: Good morning, everyone. My name is Rustan Leino. It's my
pleasure once again to introduce Jean-Raymond Abrial for our next lecture on
the Event-B method. And today we're going to hear more about sequential
programs.
Jean-Raymond Abrial: Thank you, Rustan. And good morning. So last time we
stopped before the sorting development. This one.
So I'm going to present a sorting algorithm which is not particularly clever like
Quicksort or Heapsort or all the other kinds of sorting algorithms. My intention here
is to show you an example where we have nested loops. So this is what we are
going to present here, and show how, when we have nested loops, the business of
merging events when they're all on the table is an easy task. We start with a
numerical array F and we want to build another numerical array G. G and F
have the same elements, and G is sorted in ascending order.
Okay. So here is an example. We have the initial array F and we have the final
array G. So we start as usual by defining our axioms, where we present the
parameters. N is a natural number; N is positive. And F is a function from the
interval 1..N to the natural numbers. I have decided to take an injective function
here to simplify things, so all elements are distinct in this array.
And at this point G is just a binary relation between natural numbers, and, as you
remember, we define final, which is an event with a postcondition and an
empty action.
So G is an array from 1 to N. The ranges are the same. And here is
the way I define the ascending order: for all I in 1..N-1 and J in I+1..N, G(I) is
smaller than G(J).
And we have as usual an anticipated event progress, which just says that G
is a binary relation between N and N.
Okay. So this was the initial model. The first refinement is very
simple. We introduce a variable K and we suppose that the array is
sorted from 1 to K minus 1, and all elements in the first part from 1 to K minus 1
are smaller than the remaining elements here in K..N. But these elements are not
sorted yet.
And as usual we are going to make some progress. We are going to move K to
K plus 1, so we'll move the border here one step further, and here are the
invariants I've just said: G is now an injective function, the ranges are the same, K is in
1..N, and here this is exactly what I've said here in this diagram.
For all I in 1..K minus 1 and J in I plus 1..N, G(I) is smaller than G(J). This
means that this is sorted until K minus 1 and then all the elements from 1 to K
minus 1 are smaller than the remaining elements.
And we introduce an anticipated variable L at this point. So
let me just give a little animation. So here is the initial one. And we figure out
that 1 is the minimum of all of these.
So we exchange 1 and 3. And then 2 is the minimum of this one, of the black
ones. Then we exchange 7 and 2, and then 3 is the minimum from 7 to there, et
cetera.
So we move, we move, and this is it. So we see that we have one loop, which
corresponds to K increasing from 1 to N, and we have something else, which is
determining the minimum of the black part.
So here is the new model. When K is equal to N, then we are done. And here
this is progress. We have a nondeterministic introduction of L such that G(L)
is the minimum of the remaining part, and here we have the exchange. Okay. K
becomes K plus 1, and here this anticipated variable is just any number. And we have
prog, which is the second anticipated event, with L taking any value.
So the next step now, of course, is to compute G(L), the minimum, and so we
introduce a second variable J, which goes from K to N, and we have a loop here
to determine what is the minimum between K and N. And then we will do the
switch.
So in the current situation, G(L) is the minimum of G from K to J, from K to J
here.
And then we are going to have J progressing at each step. So this is now the new
invariant, with the definition of J and L and the invariant concerning G(L), and now
we have the sorting. Let me show that to you.
So we have 3. And then we progress here, determining that 2 is a candidate
for being the minimum, but we don't know yet. So we go further, further; we find that
1 is the one. Then we exchange 1 and 3. And then we do the loop again to find the
candidate. It's a good candidate. And we exchange 2 and 7. And then we go
further, 3 is the minimum. 5 -- 4 is the minimum. We change, 8, 9, 5, 7: 5. So
we exchange 5 and 8. And then 9, 8, 7: 7 and 9. 8, 9. So 8 is the minimum. And
then this is the end of it.
So we see that we have now these two loops. So here is the initialization: the
variable G is F, and K, J and L are all one. Here, this is progress.
So we do this as we have done before. And then we assign K, J and L to K
plus 1, all of them to K plus 1. And then we have two events which are
refinements of the anticipated event prog. So prog 1 and prog 2, and we just check:
if G(L) is smaller than or equal to G(J plus 1), then G(L) remains the minimum.
Otherwise we change the value of L to J plus 1, and we have also a variant.
You remember, we also had a variant in the first case. So we have a variant for
the inner loop, which is not yet the inner loop, of course, and we have a variant
for the outer loop.
And this variant again and again corresponds to the fact that we have new
events here and new events cannot take control forever.
Okay. So we are now ready to -- let me move to -- I've done a few more slides
this morning because it was going a bit too fast. So here is the final situation:
init, final, progress, prog 1 and prog 2. So prog 1 and prog 2 are at the same level.
And progress is one level up from prog 1 and prog 2, and final is at the first level, okay?
So now you remember about this problem of levels.
So because prog 1 and prog 2 are at the same level, we can do an if-then-else.
And we have some common guards here, but we have here two guards that are
complements of each other. So we could define now a prog 1,2, which is this
one, which is exactly -- well, it's a putting together of these two.
And so we have a sort of funny event here, which is half an event and half a
program. And we still have these two guards. And now we are going to
continue. We are going to put together progress with prog 1, 2. Now prog 1, 2,
we put together two programs, two events that were at a certain level, level 3.
And now we compare this with this one, which was at level 2. So we can now
merge these two and form a loop, and remember we've proved that these two
statements, these two actions here decrease a certain variant, which was N
minus J.
So putting these together, we have a loop. And you see here J is smaller than N.
No -- yeah, yeah, this is common, this guard here is common. Still in the when.
And here we have J equal to N and J different from N. So we have a while loop
here.
And we put this at the end of it after the semicolon.
Either we loop on this while J is not equal to N, or when it is finished, we just do
this. So it's completely natural.
Okay? You got it? So now the next step is easy. We are going to put together
final, which is the little thing here, and the merged progress with prog 1,2.
And we will see here K is different from N. And here K is equal to N.
So we put them together. And we have an outer loop, which corresponds to this
while here. So we loop here. And then we finish, but we do not have to add
anything, because this is skip here. So we do not have to add a semicolon and
something else. And the final step now is to add the init on top of it. And here we
have the sorting program.
Okay.
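For reference, here is a minimal Python sketch (not in the slides) of the sequential program the merging arrives at: selection sort with an explicit inner loop finding the minimum. The names G, K, J and L follow the development; indices are shifted to 0-based, so the guards K /= N and J /= N of the lecture become comparisons against n - 1. It assumes a non-empty array, as in the axioms (N positive).

```python
def sort(f):
    """Selection sort in the shape of the derived Event-B development."""
    g = list(f)                    # init: G := F
    n = len(g)
    k = 0                          # K := 1 in the lecture's 1-based indexing
    while k != n - 1:              # outer loop, guard K /= N
        j, l = k, k                # J, L := K, K
        while j != n - 1:          # inner loop, guard J /= N
            if g[l] <= g[j + 1]:   # prog 1: G(L) stays the minimum
                j += 1
            else:                  # prog 2: a new candidate minimum
                l = j + 1
                j += 1
        g[k], g[l] = g[l], g[k]    # progress: exchange G(K) and G(L)
        k += 1                     # K := K + 1 (J and L are reset above)
    return g

print(sort([3, 1, 7, 2, 9, 5, 8, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]
```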
>>: Did you say -- where was J introduced? J was introduced when you refined
the progress action; is that right or was it introduced before?
Jean-Raymond Abrial: No, J was introduced to determine the minimum.
>>: When was the variable J introduced?
Jean-Raymond Abrial: The variable J was introduced at that level.
>>: Right. So when you refine the progress event?
Jean-Raymond Abrial: Yes. Yes. Okay. So you see what we have done with
Event-B here is we use Event-B as a proving machine. So we proved the thing
and then the construction of the program is just completely straightforward. It
can be done by a tool which works on the syntactic level only.
Just purely syntactic. Fine? Questions?
>>: How do you determine which [inaudible] to [inaudible] I know you had the
while loop and you had the remaining stuff after it and you put them together with
the stuff coming after it. Is there any reason --
Jean-Raymond Abrial: We are -- we are directed by the guards. You see, here,
this is the first level. Okay. We have all these guards. So we have all these
guards in common here. And here we have two complementary guards. And
prog 1 and prog 2 are at the same level so they're good candidates. So we don't know
exactly where we go.
We do things naturally, so these two are candidates to form an if-then-else
because they're the same level. And now we look at the guard here and now
we -- what is the situation? We have this one and we have this one.
We also have -- we also have a final here. But we look at the guard, and we
have the guard here, which is the same. And we have these two complementary
guards.
So we are naturally --
>>: So this isn't necessarily something that could be automated. It still requires humans to sort of figure out --
Jean-Raymond Abrial: Yeah, it could probably be automated, yeah. Yeah. The
nice thing is it might be possible sometimes to find out that things could be done
in parallel. Not in this example.
But it might be possible. And, yeah, even this could be automatized in some
cases, I agree with you.
>>: You need more rules then.
Jean-Raymond Abrial: You probably need more rules, yeah. Okay?
So, for this one, let's go back -- let's go back here. So you see I've added
this morning these guards, because I went directly here. So it was a bit too
much. But you could recognize -- you could recognize the init, prog 1 and prog
2, and then progress here.
It requires this number of proofs. I have not been looking recently at these things
in seven -- I hope it's less interactive now.
Okay. So let me now go to another example. And this is the last example with
arrays. And after that we'll do an example with pointers.
In-place reversing of an array. So we have a carrier set S. We have two
constants, as usual. We have an array here. And we want to build another array
such that the other array is just equal to the first one, in reverse order.
So we have here this array and here this is the array in the reverse order. And
here we have five. And five. And if the index of five is I, then because we have
eight elements, then the index of five in the reverse array is eight minus I plus
one. So we have to change things like this.
Okay. So it's not a very complicated program. So final here has just the
postcondition: for all K in 1..N, G(K) is F(N minus K plus 1), and we have
the usual anticipated event.
And final has got skip. And here I always -- you see, I always do some
diagram to show exactly what I'm going to do. Here I introduce I and J, two
variables that are both in 1..N. And initially I is 1 and J is N.
So I is here and J is here.
And this part is reversed. This part is reversed, and this part is unchanged. And
what we are going to do is very simple. We are going to do this, precisely.
And so here are the elements of the invariant: the definition of I and the definition
of J. And I and J have a certain relation: I plus J is N plus 1 -- initially 1 plus N, that
is N plus 1 -- and this one is moving one step up and this one one step down, so their
sum remains the same, of course.
So I plus J is N plus 1. And in fact we will be finished either when the two indices
are a pair of adjacent numbers, like this, or when they are the same number, and then we just go like this.
So we have I smaller than or equal to J plus 1. This is always a trick when you
have two indices: you have to be careful at the end. Either they touch,
or they're just like this, depending on the parity of N.
And here is the refinement. So this is what I've said before. So this is inverted in
the first part, inverted in the second part, and the same in the
middle.
Okay. So the actions are quite simple. Progress is just switching the two, and
then increasing and decreasing the values of I and J. And final is when J is smaller
than or equal to I.
And the variant, of course, is J minus I. And the final program is just now very
simple, it's just this.
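As a concrete reference, here is a small Python sketch (not in the slides) of this final reversal program, with the two indices moving towards each other; indices are 0-based here, so the invariant I + J = N + 1 of the lecture becomes i + j = n - 1.

```python
def reverse_array(f):
    """In-place reversal in the shape of the derived program."""
    g = list(f)
    i, j = 0, len(g) - 1          # I := 1, J := N in 1-based terms
    while i < j:                  # final when J <= I
        g[i], g[j] = g[j], g[i]   # progress: exchange the two cells
        i += 1                    # I := I + 1
        j -= 1                    # J := J - 1
    return g

print(reverse_array([1, 2, 3, 4, 5]))   # [5, 4, 3, 2, 1]
```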
The reason why I show this reversing of an array is that now we're going to do
the same, but not with an array, with pointers. So we'll have a chain of pointers.
And we will do exactly the same: reversing the thing completely.
So it's rather different, because we have no indices anymore; we have
just things pointing to each other.
So we want to reverse a linear chain. And a linear chain is made of nodes, and
the nodes are pointing to each other by means of pointers.
And to simplify, the nodes have no information field. We are just interested in
the pointing. So this is very simple. Here is a linear chain. To simplify,
the first node is called F, and the last node is called L.
And I suppose that they are distinct. Well, this is just to simplify things, so at
least we have two elements in the chain. And the nodes of the chain are taken in
a set S. So I have to formalize this, and to formalize what it means to reverse
this chain. So let me do this. I have D, the elements that are in the chain. F is
in D. L is in D. F is different from L. And here I have something which seems to
be a bit complicated.
We are going to define C, which is the function corresponding to the pointing.
And this is a bijection between the two sets D minus {L} and D minus {F}. So we go
from here to here. And this is a bijection, from D minus {L} -- yeah, from D minus {L} --
so from this set to this set, to D minus {F}. So it's a bijection. It's written like
this: it's an injection, and it is a surjection,
so all the elements are taken in the range.
And now I have this strange thing here, which expresses that it is a finite linear
chain. So let me do an aside here. I take some slides from a talk I did some
time ago about doing mathematics with the Rodin platform, where I treated various
things like well-founded sets and relations, fixed points, recursion and transitive
closure, graphs and trees; and what I want to express here is well-founded
sets and well-founded relations.
And then that will explain exactly what we do for the linear chain, which is a
special case of a well-founded relation.
So, a motivation for well-founded sets and relations. This mathematical structure
formalizes the notion of reachability: things are reachable. And it is used for discrete
systems, which either terminate or eventually reach certain states.
It formalizes well-founded traces, and here is a well-founded relation. So you have
a relation. It goes -- it goes off to infinity in this direction.
But when we go in the other direction, at any node there is only a finite path to
these red nodes. So this is the definition, the informal definition, of well-foundedness. So
we can see this. So we have a path here and the path is finite.
From any point here, when we go down, the path is finite. It's not the same in the
other direction: it could be infinite. But it is not, of course, well founded
in both directions. But here this is the most general one.
So we want to -- we want to formalize this.
So from any point in the graph you can always reach a red point after a finite
[inaudible]. That's the idea of well-foundedness. And it's absolutely fundamental, of
course. A relation which is not well founded -- so when there is no red point; let
me just describe it. It could be either a cycle, something like this -- we have no red
point, because we could go forever -- or like this, an infinite chain.
We can go on forever to infinity. So these are the two pathologies; these are the
two cases where things could not be defined as well founded.
So, let P be a set containing a cycle or an infinite chain -- I want to formalize the reverse notion.
So, the notion of a cycle or an infinite chain. So let P be a set, and take a point X;
here I write it in English and here I write it in mathematics. For all X in P, for all X
in P there exists a Y in P, there exists a Y in P, such that X is related -- is
related to Y by the relation R.
So you take any point in P, and for any point in P you can find another
point in P where you have this. Okay. Therefore, Y is still in P. Therefore,
you can find another one, and you can find another one, et cetera. So we have
either a cycle -- we can go back here -- or we can have an infinite chain that goes
on forever like this. So this is the definition of a set containing either an infinite chain or
a cycle.
>>: Your definition also says that there's nothing else in --
Jean-Raymond Abrial: Sorry?
>>: Your definition also says that there's nothing else in P.
Jean-Raymond Abrial: Right. There might be something else. It doesn't matter.
>>: Oh.
Jean-Raymond Abrial: Okay. No, but for all X in P we have this.
>>: If X was just one red dot --
Jean-Raymond Abrial: Ah, then it is well founded, of course.
>>: You could have, say, an infinite chain and, like, a small finite set, right?
Jean-Raymond Abrial: Yeah.
>>: But that wouldn't be a cycle or infinite chain according to your definition, but it would still
be equal?
Jean-Raymond Abrial: Here I'm just giving the definition of a set P containing
a cycle or an infinite chain. Not well-foundedness here.
>>: Right. But it contains a cycle or infinite chain, but it could also contain paths
that are finite?
Jean-Raymond Abrial: Yeah.
>>: But your mathematics and English would not allow that.
Jean-Raymond Abrial: Um, well let me continue and then we'll come back.
Okay. So now I can simplify this predicate calculus statement
here: P is included in R minus 1 of P, the inverse image of P under
R. So this corresponds to this.
So now, the definition of a well-founded relation. A well-founded relation does not
contain such a set P. Does not contain such a set P -- in this set P all the points
have these properties. Now, a well-founded relation does not contain such a set P
unless it is the empty set, because the empty set clearly obeys this:
when P is the empty set it is certainly included in this.
So a well-founded relation does not contain such a set P unless it is the empty set.
Therefore, this is the definition of well-foundedness: for all P, for all sets P, if
P is included in R minus 1 of P then P is empty.
So this is the definition of a well-founded relation. You agree now?
>>: Yes.
Jean-Raymond Abrial: Okay. Thank you. Fine. And so, here, can I write
somewhere? Oh, yeah, here. So here we have defined this for any well-founded
relation, and now a special case of a well-founded relation is a tree. So a tree is
something like this. And we have here a parent function: for each node we go
to its parent, and moreover -- moreover, if we take any node, there is a
finite path to the top, and this here is the top. So you see a tree could be defined
with a function on the set: a tree on a set S could be defined by a function P
going from S to S, partial, because this one, the top, has no parent. And then we
give this as the additional condition.
Now, let's take a special case of a tree, which is a linear list. Now
we come to the linear list. And the linear list is a special case of a tree, where we
are just like this. Okay. It could go forever in this direction.
But we have this.
So here, for a linear list, we have P that goes from S
injectively to S. So we see the difference: a well-founded relation is in general a
binary relation; for a tree we have a function; and for a list we have an injective
function. So we go down, down, down to this.
>>: What you call P on the board is called R on the slide, right?
Jean-Raymond Abrial: Yes. And so now we can go back to our problem. And
we see here, because this is moreover -- this is a finite list here. So it's
well-founded in both directions.
And so that is the reason why we have here something more, something more.
When we have now not an infinite linear list, but a finite one, then we have
something like this.
And then we have -- and we have C, not C minus 1, because we go -- we go in the
reverse direction.
So this is the definition of a linear list. So you see it's a little more complicated than
an array. But, in fact, in an array, things are completely implicit, because it is
the definition, the very rich definition, of the natural numbers that gives us exactly
the same thing. When you have the natural numbers, in fact you have an infinite
list: 0, 1, 2, 3, et cetera. When we go up by using the pred function we have
exactly this property. So this is the reason why there is a strong relation between a
linear list with pointers and an array, because again -- again, for the array
the structure is hidden inside the natural numbers.
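To make the aside concrete, here is a small Python sketch (my own illustration, not from the slides) that applies the definition to a finite relation: it computes the largest P with P included in R minus 1 of P by repeatedly discarding points with no successor left in P; the relation is well founded exactly when that P is empty, which in the finite case means there is no cycle.

```python
def is_well_founded(s, r):
    """Check well-foundedness of a finite relation r on the finite set s.

    r is a set of pairs (x, y) meaning x is related to y.  We compute the
    greatest P satisfying P <= r^-1[P] by a fixed-point iteration; the
    relation is well founded iff that P is empty.
    """
    p = set(s)
    while True:
        # keep only the points that still have a successor inside p
        q = {x for x in p if any((x, y) in r for y in p)}
        if q == p:
            break
        p = q
    return not p

# Example: 1 -> 2 -> 3 is well founded; adding 3 -> 1 creates a cycle.
print(is_well_founded({1, 2, 3}, {(1, 2), (2, 3)}))           # True
print(is_well_founded({1, 2, 3}, {(1, 2), (2, 3), (3, 1)}))   # False
```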
Okay. Now, the definition of the thing is extremely simple. R is equal to C minus
1. Just reverse. So mathematically it's completely trivial. So now we are going
to refine.
Okay? So the idea -- you remember when we were dealing with arrays, we always
had an index K moving little by little from the beginning of the array to the end of
the array. We had that two times, for example, in the sorting algorithm. So we are
going to do exactly the same thing here. So the idea is that this part is
reversed, from F to P, and this part is not reversed yet.
Okay? And so we have two linear lists. We have A and we have B. So P is at
the beginning of A, and also at the beginning of B. And the main invariant is
very simple: A union B minus 1 is C minus 1. And at the end of it, when P is
here, then B is -- sorry, not 0, empty -- so A is
exactly equal to C minus 1.
So you see what we are going to do. We are going to progress now exactly like we
progressed with an index. We are going to move one step here by removing this
from B and adding this to A. It's exactly plus 1 -- it's exactly like plus 1 on an index, but
we do it on the nodes.
Okay. And this is the description of the progress that we have here. Okay. So
let's do this. So now we have the definition of A and B. So now the set
corresponding to A is -- this is the reflexive transitive closure operator -- so on
C minus 1, of P union -- sorry, I should have put the brackets here --
minus F, and the other direction. And then the second one. And then we have
this strong invariant. So this is the definition of A, the definition of B, P is inside
D, and we have this.
And now we're going to progress. So P becomes B of P, P becomes B of P. So
this one: this is the new P. A of B of P becomes P. So we move. And then we
remove P from B.
So B becomes B where we remove -- where we remove P. So we have moved
one step.
And when B is empty, then R is equal to A. Second refinement, second refinement:
we are going to introduce nil, which is a special node at the end of the chain D
that we add to it.
And we also introduce a second pointer, which is BN of P. The reason for this
is that here you see BN of P and, again, BN of P. So we are going to have a second pointer
equal to this.
And B now has been changed to BN, because we have added nil at the end of it.
So this is what is written -- sorry.
Yeah. And so this is what we do, with the second pointer here. And then
the final situation will be a single chain, D, where we cut here between P and
Q. So we have P going in this direction and Q going in this direction. And so
we move Q to the right.
So that gives us: P becomes Q, D of Q becomes P, and Q becomes D of Q. So
we move to the right.
And we have here the initialization, and here the reverse. And so this is the final
classical program for reversing, for reversing a linear list.
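Here is a minimal Python sketch (not from the slides) of this classical program, with the chain represented as a successor dictionary playing the role of D, and nil represented by None. The loop body is the simultaneous assignment p, d(q), q := q, p, d(q) of the final model, serialised with a temporary; the initialisation shown here is the usual one and may differ in detail from the slide.

```python
NIL = None

def reverse_chain(d, f):
    """Reverse a finite linear chain given by a successor map d with first node f."""
    d = dict(d)              # work on a copy of the pointer structure
    p, q = NIL, f            # p: reversed part (empty so far); q: remaining part
    while q is not NIL:
        nxt = d[q]           # remember d(q) before overwriting it
        d[q] = p             # redirect the pointer backwards
        p, q = q, nxt        # move both pointers one step to the right
    return d, p              # the reversed pointers and the new first node

# Example: the chain a -> b -> c becomes c -> b -> a.
print(reverse_chain({'a': 'b', 'b': 'c', 'c': NIL}, 'a'))
# ({'a': None, 'b': 'a', 'c': 'b'}, 'c')
```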
So what is strange is that the thing is quite simple here, these three statements.
But the mathematics behind it is a little more complicated, because we have to
define very, very carefully the notion of linear list, and we have to define very
carefully this notion of progress.
So my point is that many, many, many algorithms dealing with pointers have
these things of pointers moving; it's very, very frequent. For example, in the
Schorr-Waite algorithm, which I will not have the time to present here, we work
always with this kind of thing. Okay?
So let's change now completely, for the final presentation, to another algorithm. So we
go into a little numerical algorithm. So this squaring function is defined
on the natural numbers.
It is an injective function, so the inverse function does exist, but it's not defined
for all numbers. The inverse of 16 is 4, but the inverse of 15 is not a natural
number.
So we want to make this total. Therefore we define the inverse by defect
with this result R: R squared is smaller than or equal to N, and N is strictly
smaller than R plus 1 squared. So this is the square root
by defect.
The other one, by excess, would be where we change this.
So, for example, the integer square root of 17 is 4, because we have this; for 16 it is 4
also; and for 15 it is 3, because we have this.
Okay. So this is what we want to compute.
So again, still the same technique: we define final. I just give the
definition here in the postcondition. So this is exactly this one. And then I have a
progress event which is anticipated, and a simple assignment for the other variables.
And then, of course, it's very, very simple. When N is smaller than R plus 1 squared -- this is final.
And here I just do this: when R plus 1 squared is smaller than or equal to N, I just
increment R.
And then I have the variant, which is this, for this one. And here is the square root
program. Very simple. But what is annoying in this program is that each time we
have to compute R plus 1 squared. So it could take some time. So we would
like to make this a bit better.
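Here is what that first program looks like as a small Python sketch (not from the slides); the guard recomputes (r + 1) squared at every step, which is exactly the cost the next refinement removes.

```python
def isqrt_naive(n):
    """Square root by defect: the r with r*r <= n < (r+1)*(r+1)."""
    r = 0
    while (r + 1) * (r + 1) <= n:   # progress guard, recomputed each time
        r += 1                      # progress action
    return r                        # final: n < (r+1)**2

print(isqrt_naive(17), isqrt_naive(16), isqrt_naive(15))   # 4 4 3
```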
So we are going to do a second refinement: at each step we do not want to
compute this. So we observe the following, which is very simple: R plus 1, plus
1, squared is R plus 1 squared, plus 2 R plus 3.
And 2 times R plus 1, plus 3, is 2 R plus 3, plus 2. Okay. And so what we see here: we
see 2 R plus 3 two times here, and we see R plus 1 squared here. So we
say A is R plus 1 squared, and B is 2 R plus 3. And now the nice thing
is that -- so we have these two, and we can have this in the progress: A
becomes A plus B -- A was this, so you see, A becomes A plus B -- and B
becomes B plus 2. B plus 2.
Okay. So here we have no computation of the square. And so this gives us the
following program. Very simple. So you see, again and again, we start from the
little mathematical properties, about R plus 1 and about 2 R plus 3,
and then we do this.
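A Python sketch of that refined program (not from the slides), where a stands for (r + 1) squared and b for 2r + 3, so the square is never recomputed:

```python
def isqrt(n):
    """Square root by defect with no multiplication in the loop."""
    r, a, b = 0, 1, 3        # a = (r+1)**2, b = 2*r + 3
    while a <= n:            # same guard as before, i.e. (r+1)**2 <= n
        r += 1
        a, b = a + b, b + 2  # (r+2)**2 = (r+1)**2 + (2r+3); 2(r+1)+3 = (2r+3)+2
    return r

print(isqrt(17), isqrt(16), isqrt(15))   # 4 4 3
```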
Okay. So now we want to go further, and the idea of going further is this: the
squaring function is an injective increasing function -- it's a function like this -- and
we want to take the inverse.
So let's pose the problem a little more generally. More generally, we are given a total
numerical function F. And the function is supposed to be strictly increasing, like
this. And it is also injective, one-to-one. And we want to compute its inverse by
defect. So exactly the same thing, but not specially for the square root. And we
should also borrow some ideas from the binary search
development that we have seen before. So let's put all this together. So this is
the definition of F. It's a total function from N to N. It is strictly increasing.
Because it is strictly increasing and a total function, it can be proved as a theorem
that it is injective, 1-1. And then we have N, which is the value we want to
get.
And so we take exactly the same approach as for the square root by defect: F of
R smaller than or equal to N, strictly smaller than F of R plus 1, and we have
some anticipation. Okay. So the idea now, because we are going to take
advantage of the fact that it is increasing: what we are going to do is, well,
suppose we are given to begin with two numbers A and B with these properties:
F of A smaller than or equal to N, smaller than F of B plus 1. And we are then certain
that the result is in the interval A..B.
Okay. And exactly like for binary search, we're going to make this interval
narrower until the moment where we reach final. And so we introduce a
variable Q, which is such that F of R is smaller than or equal to N, strictly smaller
than F of Q plus 1.
And now -- this is the definition of the new axioms for the first refinement.
And then here is the invariant for the refinement. Okay. So we have this, which
was defined here. And when R is equal to Q, then this is finished. And now we
have exactly the same approach as for the binary search. We have something
which is still nondeterministic here: any X such that X is in R plus 1 .. Q; if N is
smaller than F of X we do this, and if F of X is smaller than or equal to N we do
this, and we have the following invariant.
And now we reduce the nondeterminism exactly like in the binary search: Q is
equal to this, R is equal to this, and we have these two properties here.
And finally this gives us the program. So the program -- this program is very
general, because F, A and B are just generic constants. And now what would be
nice is to instantiate this with F, A, and B, with different values.
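Here is a hedged Python sketch of this generic program (not from the slides); the midpoint choice is just one deterministic resolution of the "any X in R plus 1 .. Q" of the model, and the precondition f(a) <= n < f(b+1) is what has to be proved at instantiation time.

```python
def inverse_by_defect(f, a, b, n):
    """Largest r with f(r) <= n, for f strictly increasing and f(a) <= n < f(b+1)."""
    r, q = a, b
    while r != q:                # invariant: f(r) <= n < f(q+1)
        x = (r + q + 1) // 2     # some x in r+1 .. q
        if n < f(x):
            q = x - 1            # the result is below x
        else:
            r = x                # f(x) <= n, the result is at least x
    return r
```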
So, for example, we go back to the square root. So, in the case of the square root, here I
explain what we have to do: by instantiating them, we obtain some new programs
almost for free. But we have to prove the properties of the instantiated constants,
because these constants have got some properties. So we now have to, in
another, more specialized framework, give the value of F and prove
the properties of F, and the properties of A and B. So let's go back to the square
root function.
So F is instantiated to the squaring function. So it is increasing; it is injective.
A and B are instantiated to 0 and N, because we have this. So this is okay.
And then we obtain the square root program, another square root program, for
free.
We just replace here -- we just replace F. And then we have this. And now
let's do the same thing with another function, which is the function that multiplies by
M. So A and B are instantiated, again by 0 and N, because we have this, and we
obtain the integer division program, which is very close to the square root one,
because we have just changed this. The body here is exactly the same. And
here we have changed -- we have changed this. Okay.
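As an illustration of the instantiation (my own sketch, not the slide's notation), both programs come out of the generic inverse_by_defect sketch above by supplying f, a and b and checking the precondition f(a) <= n < f(b+1):

```python
# Square root by defect: f(x) = x*x, and 0 <= n < (n+1)**2, so a = 0, b = n.
def isqrt_generic(n):
    return inverse_by_defect(lambda x: x * x, 0, n, n)

# Integer division n // m for m >= 1: f(x) = x*m, and 0 <= n < (n+1)*m,
# so again a = 0, b = n.
def divide(n, m):
    return inverse_by_defect(lambda x: x * m, 0, n, n)

print(isqrt_generic(17), divide(17, 5))   # 4 3
```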
So this is this notion of instantiation. This notion of instantiation is very
important in Event-B. You have the context; in the context you define the sets
and define the axioms for these sets.
And everything in the development is generic with regard to these sets and these
axioms. And now, instantiating an entire development is to put ourselves in
another development, and then to look at this generic one and to give
values to the generic sets -- it's exactly what we've done here -- and give values to
the constants. And of course we prove locally, in our smaller theory, that the
axioms that we have here are mere theorems. And if we succeed in doing this,
then we can import the entire development completely, without redoing the
proofs, just changing the values of the sets and the values of the constants.
And this is exactly what we have done here. This notion of instantiating a
complete development is not yet fully implemented on the Rodin platform, but this
is something that we are going to do quite soon.
And this is some sort of second dimension. And in fact the separation between
contexts and machines is exactly because we had that in mind. Okay. And this is
a very well-defined notion, or concept, or methodology in mathematics;
in mathematics you do this all the time. You define some algebraic
structure, and then in another theory you say, ah, I recognize that I have this
algebraic structure here. So I can take all the algebraic -- I can instantiate locally
this algebraic structure, and all the theorems that have been proved here are now
theorems in my other theory, without reproving them. So mathematicians do that
all the time, all the time, all the time. And we want to borrow this idea in this
notion of context and machine.
Okay. This is the end of it. I have no time -- I would have liked to have developed
the [inaudible] algorithm, but I have clearly no time to do that. Rustan has got all
the text and the slides, so if you are interested, you could ask Rustan. Thank
you very much for listening.
[applause]
>>: I have a question.
Jean-Raymond Abrial: Oh, yeah.
>>: The sequential programs that you've shown, you've shown them using
Event-B. If you did it in the B method, the older method, I imagine you could do
similar things. Could you say, if you did them now, would you prefer to do them in
Event-B?
Jean-Raymond Abrial: Oh, definitely.
>>: Could you say why?
Jean-Raymond Abrial: Why? Because in Event-B -- so in classical B, there's
if-then-else and there are loops and things like this. So in order to prove things
about this, you have to introduce a variant and an invariant for the loop.
You also have the semicolon, and you have to prove also the refinement of
things that are connected by semicolons. But let me give you just a simple
example. Suppose you have an if-then-else and you refine it by an if-then-else: you
can refine this way or you can refine this way.
So in the refinement of an if-then-else you have four proofs, but two of
them are not very interesting; only two of them are interesting. Now
generalize -- and this is also possible in classical B -- generalize the if-then-else
to a case statement: suppose you have a case with ten cases and you refine it with ten cases,
100 proofs, only ten of which are important.
And this is the same with the semicolon. When you have a semicolon, if you apply
the rules, you have to introduce assertions in between the semicolons. So you have
to find them. And it generates very, very big proofs. Okay. So, after all this, after years
and years, I didn't like these big proofs for things that are simple. So gradually it
gave us a definition of what the essence is: the essence of all these things is
transitions, just events. So we get things, and of course we want to do some
sequential programs, and then we merge.
So it is possible, and people are doing it in classical B. But I definitely prefer the
approach of Event-B.
Now, the thing is that classical B is widely used in industry, in the train
industry. There are many, many trains now -- I don't know in the United States,
maybe some in the United States, but certainly in China and Argentina and in many
countries in Europe -- with some B inside them. And the largest one
is, I think, the shuttle at the [inaudible] airport. It's about 150,000 lines of code,
entirely generated from classical B.
And the number of proofs for this is about 30 or 40,000 proofs, among which I
think 10 percent have to be done manually. And the people suffer a bit in doing
this.
Also, one of the problems with classical B is that the system study at the top is
not done formally. So if there is an error there, it will be there again in the
development.
So, apart from this, we had all these ideas about Event-B -- also the
strong influence of action systems, and other people -- so little by little we
developed Event-B. And so, of course, people say: but then you have classical B
and Event-B, and, well, is there a connection between the two? Yes, there is a
connection. And the idea is precisely the following. You work in Event-B at
the system level until you reach a point where you have taken in
all the requirements. And then you can jump into classical B. The idea is to
change the events into little procedures, or to do some aggregation as I
have done here. So there are various ways to jump into classical B. Also, what
we are thinking -- I've been speaking to you about it some days ago -- we are also thinking
of incorporating some code generation directly at the level of Event-B, for example
doing things like this, and also developing some code directly from events, from an event
system. And this is work in progress at the present time.
So sorry for this long answer to your question. Okay? Thank you very much.
[applause]