>> Leonardo de Moura: It's my pleasure to introduce Maria Paola Bonacina. She's a professor at the University of Verona. She's no stranger to us; she was a visiting associate here in 2008. And this year she was the chair of CADE, which took place in New York state, and she is visiting here this week. Thank you.
Maria Paola Bonacina: Thank you very much. Okay. So thank you very much for coming. The talk is entitled On Fairness in Proving. I'm going to go through some motivation and then distinguish two concepts: uniform fairness, which is for saturation, and fairness, which is for theorem proving.
Now what is the main message of this talk? The main message of this talk is that theorem proving is search, not saturation. The relevant property to distinguish between the two is fairness, which has to do with the control of the inferences rather than with the inferences themselves.
And the message is that fairness really should require less than saturation, because saturation is too much for theorem proving. And fairness should cover not only the expansion inference rules, namely the inference rules which deduce consequences from given premises, but also the contraction inference rules, that are those rules that remove redundant formulae.
>>: What do you mean by [indiscernible]?
Maria Paola Bonacina: Yield. Provide. Now, fairness is a common concept in computing. For instance, when it comes to scheduling, we are used to the idea that a schedule should be fair in the sense of preventing any process from starving.
It is also a common concept in all applications which require search, because in a search we don't want to neglect moves that are useful. Now, a crucial point when we define fairness is exactly to determine what is useful.
Now, fairness comes up a lot in automated reasoning where we have
essentially two ingredients. We have a system of rules, which we
usually call inference system but we can also call more generally
transition system.
And we have a search plan which is responsible for guiding the search, from deciding which inferences to apply to which premises, in guiding the search for a proof or a model. Now, the inference rules themselves are nondeterministic; they define the search space we have. The search plan is the ingredient which adds determinism, in the sense that with an input and the inference rules alone we have many derivations; once we add the search plan, the derivation from a given input becomes unique, and therefore we do have a procedure at that point.
So simply a given set of inference rules doesn't really give us a procedure. We need to add a search plan to have a deterministic procedure. So a procedure, or strategy, is always given by coupling these two ingredients: a system of rules and a search plan.
Now, the typical requirements on these two are the following. For the system of rules, we want it to be complete, in the sense that there exist successful derivations. What does it mean to be successful? Well, if we are looking for a refutation, a successful derivation will be one which terminates with the empty clause, with a contradiction; if we are looking for a model, a successful derivation will be one terminating with the model. Now, because the inference system determines many possible derivations --
>>: Just a clarification question about [indiscernible]. So my understanding of inference systems is that you're inferring facts and theorems. But you somehow overloaded the derivations to yield facts and models.
Maria Paola Bonacina: Right. Because in this talk I'm going to talk mostly about theorem proving in the sense of semi-decision procedures for validity and unsatisfiability. But the concept of having a system of rules and a search plan, and therefore the issue of making sure that the search plan is fair, applies to any inference or transition system; it applies also to a decision procedure which will give you a proof if the input is unsatisfiable and a model if the input is satisfiable. So these general concepts need not be given only for theorem proving; they can be given for model building as well.
Now, the system of rules needs to make sure that there are derivations that are successful, and the search plan needs to be fair, which means it needs to ensure that the unique derivation the procedure will generate from the given input will be a successful one.
Whenever successful derivations exist, we find one. Now, let's see, for instance, in theorem proving. In theorem proving the completeness property we want to have is refutational completeness: if the input set is unsatisfiable, we want the inference system to generate derivations that arrive at the empty clause and therefore at a proof. The search plan needs to be fair, meaning that the generated derivation should reach the contradiction. And then a complete theorem-proving strategy will be given by a refutationally complete system and a fair search plan in this sense.
Now, the topic of this talk is fairness. Clearly there is a brute-force way to be fair, and that is to be exhaustive, eventually trying all applicable steps. But that's not especially promising. And nevertheless it's amazing how much brute-force exhaustive search still achieves in reasoning mechanisms and theorem provers too. So the question is how to be fair without being exhaustive. So we want nontrivial definitions of fairness, nontrivial search plans; the idea is that nontrivial fairness will reduce the gap between completeness and efficiency, because we want somehow to reconcile these two, to preserve completeness and still have efficiency.
Now, one could object to this that in practice we often work with systems that are not complete. So we give up on completeness. We have completeness in the papers, in the theorems, the major theorems about the systems. But when we run the experiments with the theorem provers we often give up completeness for reasons of efficiency.
That's true, but it is also true that we give up on completeness in a controlled way, by weakening a system of which we know up front that it is complete. So we are still interested in fairness to know where to start from in order to weaken the system, and also we're interested in fairness because we would like to have nontrivial fair search plans that somehow reconcile completeness and efficiency. And this is a very long term goal, and probably we are still far from that. But I think it is important to state it at least as a challenge.
Now, remember what I said about search: we don't want to neglect anything that is useful. So the point is what is useful, what is needed, in order to make sure that our derivation will be successful, if in the realm of all possible derivations there exist successful ones. So, what is useful, or even more stringent, what is needed? Now, dually, we can ask what is not needed, that is, what is redundant, what should not be done, because one way of focusing on what is needed is avoiding doing what is not needed. So these two concepts that are important in theorem proving, namely fairness and redundancy, are related. So let's see first a notion of redundancy.
Assume we have resolution and subsumption as inference rules. Then we can say that a clause is redundant if it can be subsumed. I assume you all know what subsumption is: a clause C subsumes a clause D if D is less general, which formally means that there exists a substitution sigma such that C sigma is a subset of D, where the clauses are seen as sets of literals. And then we can say that the subsumed clause is redundant. Let's see --
>>: Can you go back one slide?
Maria Paola Bonacina: Yes.
>>: In that case, if C sigma is a subset of D, then which one is redundant, D or C?
Maria Paola Bonacina: D is redundant, because C implies D. Whenever C is true, all its instances are true, therefore C sigma is true; and since C sigma is a subset of the disjunction D, when C is true D is also true. The bigger clause can go away.
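To make the test concrete, here is a minimal Python sketch of this subsumption check, purely as an illustration and not any prover's actual code; it assumes literals are nested tuples of strings, with names starting with "?" acting as variables:

    # A minimal sketch of clause subsumption; clauses are sets of literals,
    # literals are nested tuples whose leaves are strings, and names
    # starting with "?" are variables. Illustration only.

    def match(pattern, term, sigma):
        """Extend sigma so that sigma applied to pattern equals term, or return None."""
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in sigma:
                return sigma if sigma[pattern] == term else None
            extended = dict(sigma)
            extended[pattern] = term
            return extended
        if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
            for p, t in zip(pattern, term):
                sigma = match(p, t, sigma)
                if sigma is None:
                    return None
            return sigma
        return sigma if pattern == term else None

    def subsumes(C, D):
        """True if some substitution sigma makes C*sigma a subset of D."""
        def search(lits, sigma):
            if not lits:
                return True
            first, rest = lits[0], lits[1:]
            for d in D:                      # try to map the literal onto each literal of D
                extended = match(first, d, sigma)
                if extended is not None and search(rest, extended):
                    return True
            return False
        return search(list(C), {})

    # Example: P(?x) subsumes P(a) v Q(b), so the bigger clause D is redundant.
    C = {("P", "?x")}
    D = {("P", "a"), ("Q", "b")}
    assert subsumes(C, D)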
Another example is the notion of redundancy that we usually have when we couple superposition and simplification; superposition is resolution with equality built in. We assume to have a well-founded ordering on terms and literals, and we superpose maximal sides of equations into maximal sides of other equations or of maximal nonequational literals [indiscernible]; simplification uses the ordering in order to do well-founded rewriting. And then one definition of redundancy that works well with these inference rules is the following.
We say that a ground clause D is redundant in a set S if there exist ground instances C1, ..., Cn of clauses in S such that C1, ..., Cn entail D and C1, ..., Cn are smaller than D, where we see C1, ..., Cn as a multiset, D as a multiset made of only one element, and use the multiset extension of the ordering. This is for a ground clause, and then we can say that a clause D is redundant if its ground instances are. So this notion of redundancy is not tied to a single inference rule like subsumption in the previous slide; it says that a clause is redundant if there are smaller clauses that imply it, essentially.
>>: Here?
Maria Paola Bonacina: Yes.
>>: Here, should I read it as the conjunction of C1 to Cn implies D; is that how I should read it?
Maria Paola Bonacina: Yes. C1, ..., Cn -- let's see. C1, ..., Cn here is a conjunction, and this symbol is logical entailment. So D is a logical consequence of C1, ..., Cn.
>>: So what is the difference between -- there's a symbol to the left of the entailment sign and one to the right. What's the difference between this and this?
Maria Paola Bonacina: Okay. Sorry. So this one says that C1, ..., Cn are smaller in the ordering. So this ordering on terms and literals can be extended to clauses by using a multiset extension. And then it can be extended to multisets of clauses by doing another multiset extension. So we can compare the multiset of clauses containing C1, ..., Cn with the multiset of clauses containing only D. And this guy essentially will be bigger than each of these.
>>: Okay.
Maria Paola Bonacina: Okay. So we have smaller clauses that entail it.
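Putting the two conditions together, the slide's criterion can be reconstructed in symbols as:

    D \text{ redundant in } S \iff \exists\, C_1, \ldots, C_n \text{ ground instances of clauses in } S:\ \{C_1, \ldots, C_n\} \models D \ \text{ and }\ \{C_1, \ldots, C_n\} \prec_{mul} \{D\}

where \prec_{mul} is the multiset extension of the clause ordering.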
Now, once we have a notion of redundancy for clauses, we can also have a notion of redundancy for inferences, because we can say that an inference is redundant if it uses or generates a redundant clause. Now, fairness is a global property, though. It does not apply to a single inference; it applies to the whole derivation. So we need to say when a derivation is fair, and a search plan is fair if all of the derivations it generates are.
So we need to define the limit of a derivation, because a derivation in theorem proving is not necessarily finite. The limit is defined as the set of persistent clauses, that is, clauses that appear at some point and never go away; that is what it means to be persistent. Why do we need to define it in this way? Because we have both inference rules that generate clauses and inference rules that eliminate clauses. And so S-infinity, the limit, will be defined as the union for j greater than or equal to 0 of the intersection for i greater than or equal to j of the sets Si in the derivation. That means for j equal to 0 we take the clauses that are in the input and persist always, for j equal to 1 we take those that appear at stage one and then always remain, and so on.
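In symbols, the limit just described is:

    S_\infty = \bigcup_{j \geq 0} \ \bigcap_{i \geq j} S_i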
>>: Is this another way of saying it is the set of clauses that appear infinitely often?
Maria Paola Bonacina: No, they need only appear once, but then they always remain. They never go away. Okay. Now, these are the definitions that are most common in the literature, so to say. And this is what is usually called fairness, but instead I call it uniform fairness, because the message of this talk is that this property of fairness is good for saturation but it is somewhat too strong for theorem proving.
So let's say that I sub E of S represents the clauses that can be generated from S by expansion. And then we have a derivation. So we say that a derivation is fair if, for all clauses C that can be generated by expansion from persistent clauses, there exists a stage j where the clause either appears or is redundant.
>>: I'm sorry, what does expansion mean?
Maria Paola Bonacina: Expansion is something like resolution or superposition, any inference rule which adds something; they expand the set of existing clauses. Alternatively, in the literature, it has also been formulated in a slightly different way, which is equivalent. People say: for all clauses C that can be generated by expansion from non-redundant persistent premises, there exists a j such that C appears at stage j; or, equivalently, one can say that all expansion inferences from persistent premises become redundant eventually. This is how fairness is usually defined.
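Reconstructing the spoken definition in symbols, with I_E(S_\infty) for the clauses generated by expansion from persistent clauses and R(S_j) for the clauses redundant in S_j, uniform fairness reads roughly:

    \forall C \in I_E(S_\infty)\ \exists j:\ C \in S_j \cup R(S_j)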
Now, the question is: can we have a weaker notion of fairness, and claim that uniform fairness is for saturation and fairness is for theorem proving? Now, this can be done by working with a proof ordering rather than with a formula ordering. The notion of proof ordering is also a classical notion in the theorem proving literature; it appeared several years ago in papers by Leo Bachmair and [indiscernible] and [indiscernible] on proof orderings.
Now, one could say, well, what's the difference? Indeed, the proof ordering can agree with the formula ordering, or with an induced formula ordering, if we compare proofs by their premises, which is something that certainly can be done. But the notion of proof ordering is more flexible, because we can define the proof ordering in such a way that small proofs have large premises or vice versa, so it doesn't have to go exactly with the formula ordering.
Now, if we have a well-founded proof ordering we can talk about proof reduction. So we call a justification a set of proofs, and we can compare justifications: we can say that a set of proofs Q is better than a set of proofs P if for all proofs in P there exists a proof in Q which is smaller or equal. Now, the ordering is defined in such a way that we compare proofs of the same theorem, of course, so that it makes sense to compare them.
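In symbols, the comparison of justifications just described can be reconstructed as:

    Q \succeq P \iff \forall p \in P\ \exists q \in Q:\ q \preceq p

where \preceq is the proof ordering and only proofs of the same theorem are compared.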
If we have a proof ordering, not only can we compare sets of proofs, but we can also compare presentations by the proofs they make available. So say that S is a presentation of the theory Th(S), where Th(S) is the set of all the theorems that follow from S. Then we denote by Pf(S) the set of proofs with premises in S. And then we can say that, given two presentations S prime and S that are equivalent, so they are presentations of the same theory, of the same deductively closed set of theorems, S prime is simpler than S, and therefore intuitively preferable, if the set of proofs of S prime is better than the set of proofs of S.
So this was --
>>: What is a presentation?
Maria Paola Bonacina: A presentation is a set of formulae or clauses which presents a theory. It's like an axiomatization; presentation is a synonym for axiomatization. You have a bunch of formulae that present a theory, in the sense of defining the set of all theorems that are logical consequences of those formulae. But for a given set of theorems you have more than one presentation, of course. So the presentations are equivalent, and one is better than, is simpler than, the other if it has better proofs.
According to the proof ordering; so, smaller proofs according to the proof ordering. Now, again, with a well-founded proof ordering we can look at the minimal proofs in a given set of proofs P. So let's call Mu(P) the minimal proofs in the set of proofs P.
And then we can define the normal-form proofs, NF(S). How? We take S, we take its deductive closure, so Th(S), which is the deductively closed presentation; then we take all the proofs in there, and then we take the minimal ones. And these are the normal-form proofs.
So for any set of proofs, because the ordering is well founded, we can look at the minimal ones. Normal-form proofs are the minimal ones I would get if, ideally, I could consider as presentation the whole set of theorems.
Now, of course this is not something that I have in practice, because in practice I have a concrete presentation; I don't have, and I don't even want to have, the whole set of theorems. But I'm going to be interested in the normal-form proof of the one theorem I want to prove. So the normal-form proof is the minimal proof with respect to all possible formulae that follow from the presentation and that I could use to build that best proof.
Now, here comes an important distinction. I said we would like to have a notion of fairness which gives us something weaker than saturation. So we need a notion weaker than saturated set. Now, we call saturated a presentation which provides all the normal-form proofs. We call complete a presentation which provides at least one normal-form proof for every theorem. Now, the two will coincide if minimal proofs are unique, for instance if the proof ordering is total. However, this is not necessarily the case, because in general proof orderings are partial, so in many cases we can distinguish these two: in the complete presentation we only have one normal-form proof, in the saturated one we have them all.
Let's see an example. Assume we only have the equations A equals B, B equals C and A equals C. And assume that we can consider as minimal proofs the valley proofs, which means the proofs that prove that S is equal to T by rewriting S and T to a common form.
So what we have here: this arrow means rewriting in zero or more steps, because there's a star over it, and this is just composition. So this means there exists some term U such that S rewrites to U and T rewrites to U, and this happens by rewriting only; so basically we have something like this, which goes down to a common form. Now, if A is greater than B and B is greater than C in the ordering, then B equals C and A equals C is a complete presentation for the above set.
We do not need the equation A equals B, because the valley proof where A goes to C by A equals C, since A is greater than C, and B goes to C by B equals C, since B is greater than C, gives me a minimal proof of A equals B. On the other hand, the saturated set has them all. Because it wants to have all the normal-form proofs, it keeps also the equation A equals B, since it wants to have also the proof where A goes to B directly, in addition to this proof. This is also a valley proof, of course, because this side can be zero steps. Okay.
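In symbols, the two valley proofs of A equals B in this example can be reconstructed as:

    A \rightarrow_{A \approx C} C \leftarrow_{B \approx C} B \qquad \text{and} \qquad A \rightarrow_{A \approx B} B

where the second one is a valley whose right-hand side takes zero rewrite steps.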
Of course the notion depends on the ordering, because we can take the same set A equals B, B equals C, A equals C and the same notion of valley proofs as minimal proofs, but with A and B not comparable. So this symbol means that neither A is greater than B, nor B is greater than A, nor are A and B equal. So they're not comparable.
Now saturated and complete coincide. Why? Because a saturated presentation also does not want to have A equals B anymore. Since A and B are not comparable, the proof of A equals B by the equation A equals B is this one, with no orientation between A and B, since they are incomparable, and this proof is not minimal with respect to the definition.
Okay. Now, based on the notion of proof orderings, a notion of canonicity was developed. We say a presentation is contracted if it contains all and only the premises of its minimal proofs, and then we say that a presentation is canonical, represented with this sharp symbol up here -- so S sharp is the canonical presentation for the theory of S -- if it contains all and only the premises of normal-form proofs.
Now, contracted means it contains all and only the premises of minimal proofs; canonical says normal-form proofs. So if you think about it, it means that canonical is saturated and contracted, because the saturated presentation is the one where all the minimal proofs are normal-form proofs; minimal and normal form essentially coincide. And then one can prove that the canonical presentation is the smallest saturated presentation with respect to the subset ordering, and the simplest presentation with respect to the ordering of presentations based on the ordering of proofs.
Now, all this applies to equational theories in the standard way that people may know from rewriting: the normal-form proof for a theorem, for all X, S equals T, is a valley proof connecting the Skolemized forms of S and T. S with a hat is S with all the variables replaced by Skolem constants, and the same for T.
Now, in this context, saturated means convergent, the traditional notion for rewriting systems that are confluent and terminating; confluence gives uniqueness of the normal form. Contracted means inter-reduced, so all the equations in the presentation are reduced by rewriting with respect to one another. And canonical means convergent and inter-reduced, and a canonical system, if it is finite, gives a decision procedure. But we all know that this is [indiscernible], because unfortunately finite canonical presentations are difficult to find.
However, they do exist for some fortunate theories. But now let's see how redundancy changes if we use a proof ordering instead of a formula ordering as before.
We can say that a clause C is redundant in S if adding it does not improve any of the minimal proofs: the set of minimal proofs that we have in the presentation S is the same as the set of minimal proofs that we have in the presentation with C added, because C is redundant, it doesn't help. Dually, we can say C is redundant if removing it does not worsen the proofs. So if we have S and we remove C from S, S minus C is simpler. It's smaller, and it's simpler, because its proofs are just as good, because C is irrelevant for the minimal proofs, for the good proofs.
Now, with proof orderings we're going to view inferences as a sort of proof reduction; both expansion inferences and contraction inferences will be seen as proof reduction. And we say that an inference deriving Si plus 1 from Si is good if Si plus 1 is simpler than Si, so it has better proofs -- better or equal proofs; nothing gets worse. And the whole derivation is good if this happens at all the steps, for all i.
Now, an important property of good derivations in theorem proving can be synthesized by saying: once redundant, always redundant. So if something becomes redundant at some point, it will be redundant forever, so we can commit to throwing it away and it will never need to come back. This is why these inference systems, although they tend to remove data, can do so permanently, and therefore they're also what's called proof confluent, in the sense that they don't need backtracking to go back and possibly put back something that was removed before. So, formally, if something was redundant in Si and is still there, it is also redundant in Si plus 1; so essentially once we throw it away, it stays away forever.
Now, I already talked about expansion and contraction, and here comes the formal definition. Expansion means adding logical consequences: from A we infer A union B, where B are logical consequences of A. Contraction means we remove something: we remove B, and from A union B we go to having only A. But we want A to be simpler than A union B, which implies they are equivalent, so whatever I throw away is still a logical consequence of what I keep.
And moreover, what I keep has better or equal proofs. And both expansions and contractions are good. Expansion is trivially good, because if I only add things the proofs can only improve, since I still have everything I had before. The crucial requirement is that contraction doesn't make the proofs worse. Now, I can then qualify the derivations based on the properties of their limits. So I'm going to say that the derivation is saturating if it generates a saturated limit, complete if it generates a complete limit, contracting if it generates a contracted limit, and canonical if it's saturating and contracting, so it generates a saturated and contracted limit, which is canonical.
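In symbols, the two kinds of steps just described can be reconstructed as follows, writing Th(A) for the deductive closure of A and reading "simpler" as the comparison of presentations by their proofs:

    \text{Expansion:}\ A \vdash A \cup B \text{ with } B \subseteq Th(A) \qquad \text{Contraction:}\ A \cup B \vdash A \text{ with } B \subseteq Th(A) \text{ and } A \text{ simpler than } A \cup B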
Now, fairness at last. How can we define fairness based on proof orderings? So what is the intuition? We would like that whenever, at a stage of the derivation, say Si, there exists a minimal proof of the target theorem -- that is, the specific theorem we'd like to prove, not all the possible theorems in the theory -- which can be reduced by our inferences, by an expansion or contraction inference, then it will be reduced eventually.
Now, how can we write this formally? We can say: for all i greater than or equal to 0, and for all proofs P of the target theorem we're interested in that are minimal at stage i, if there exists an inference sequence taking Si in finitely many steps to some S prime, and there is a proof Q among the minimal proofs of S prime which is strictly smaller than P, then we would like to have a stage j in our derivation, greater than i, such that there exists a proof R among the minimal proofs provided by Sj, such that R is smaller than or equal to the Q that was better than P.
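A reconstruction in symbols of the condition just stated, writing Mu(Pf(S)) for the minimal proofs available from S and p for a minimal proof of the target theorem:

    \forall i \geq 0,\ \forall p \in Mu(Pf(S_i)):\ \text{if } S_i \vdash^{*} S' \text{ and } \exists q \in Mu(Pf(S')) \text{ with } q \prec p, \text{ then } \exists j > i,\ \exists r \in Mu(Pf(S_j)):\ r \preceq q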
This notion applies to both expansion and contraction. So while the standard, classical notion of fairness focuses only on expansion, this one treats expansion and contraction as first-class citizens, because it recognizes that contraction is not only deletion but also generation of reduced forms of clauses, like in simplification.
Now, how can we instantiate this intuition? Because this one is nice, but basically it states a goal more than how to achieve it: we would like that when a proof reduction is possible, somehow we will get it. So how can we pin down exactly how to achieve this? We need the notion of critical proof, because if we don't want saturation, reducing everything, essentially we need a focus: we focus on critical proofs. A critical proof is a minimal proof which is not a normal form, but such that all its proper subproofs are normal forms. And the classical example in equational theories is a peak, drawn like that: we have some term U and the possibility of going down from U to S or to T. This is what typically a superposition step detects; a superposition step will detect these two possibilities and will generate the equation that links them, one way or the other.
So then we call C(S) the set of critical proofs of S. And as usual we're going to look at the critical proofs with persistent premises, so we look at those. And we say that the derivation is fair if eventually it manages to reduce all the critical proofs: for all the critical proofs from persistent premises, there exists some proof, in the sets that I'm going to generate in the derivation, that is strictly smaller.
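In symbols, again reconstructing from the spoken definition, with C(S_\infty) the critical proofs with persistent premises:

    \forall p \in C(S_\infty)\ \exists j\ \exists q \in Pf(S_j):\ q \prec p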
Now, how about uniform fairness; how do we recover it in this proof-ordering framework? Uniform fairness will be a stronger property. We call a trivial proof a proof made of the theorem itself, and we put a hat on S to denote the set of all the trivial proofs of formulae in S. So the trivial proofs with persistent premises are S-infinity with the hat. And then uniform fairness will say that all these trivial proofs in the persistent set, except those that need to be in the canonical system, will be reduced eventually. So it's a stronger property than fairness, and we show it with these theorems: we show that if the derivation is fair, then it is completing, which means the limit is complete.
The derivation is uniformly fair if and only if it is saturating. So uniform fairness gives saturation, and if there is saturation in the end, the derivation must have been uniformly fair. That's an if and only if.
And fairness is sufficient for theorem proving, for proof search rather than saturation: essentially it's not necessary to add all consequences of critical proofs, but just enough to provide a smaller proof for each critical proof.
Now, this is a nice theory, but in practice it is still a challenge, because in practice we only have sets of clauses. The proofs are not there; they exist implicitly. It's not that we have a proof explicitly and we know how to reduce it. The question is which properties the search plan should have to approximate in practice this weaker fairness property. However, I claim that already having, at least in theory, a notion of fairness that requires something weaker than saturation is a good thing, because at least it makes it possible to work towards fair search plans that are not uniformly fair. We want a search plan to schedule enough expansion to be completing, to schedule enough contraction to be contracting, and we also want it in practice to schedule contraction before expansion. So let's talk about contraction.
How does this happen in existing theorem provers? We need to distinguish between two forms of contraction, forward and backward contraction. Forward contraction is when we have generated by expansion, for instance by resolution or superposition, a new clause C, and we contract it with respect to the already existing clauses and we get a reduced form C prime. Backward contraction is when we use C prime to contract, to possibly reduce, simplify, or subsume, the already existing clauses.
Now, a first basic idea in the implementation of theorem provers is to implement backward contraction by forward contraction. One uses an indexing scheme to detect that a clause is reducible, and once it has been detected, the reducible clause is treated as if it were a new clause and subjected to forward contraction. So we really need only one procedure for the two.
And then, what does it mean to have enough contraction? Ideally we would like that at every stage i there is no redundancy in Si, but that's not possible if every step represents a single inference. So what we can get in practice is that periodically the set of redundant clauses at stage i is empty. For instance, existing theorem provers implement the given-clause loop, which means they execute a cycle whereby at every iteration they select a clause and they perform all possible expansion steps between that clause and the clauses that were already selected before. Now, these two sets are usually called active and passive. Active contains the clauses that have already been selected as given clauses, and therefore have been used as premises for expansion inferences; that's why they're called active. Passive contains all the other clauses, those that have already been generated but have not been selected yet. Now, a given-clause loop may do enough contraction to keep the union of active and passive reduced. That's the most aggressive contracting strategy.
A more prudent one, one that decides to invest less in contraction, is to keep only the active set inter-reduced, because, since we don't want to use redundant clauses for expansion inferences, it's enough to keep reduced the set out of which we're going to pick the premises for expansion. And most theorem provers implement these two search plans, and sometimes one is better and sometimes the other is better.
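As an illustration of the loop just described, here is a minimal Python sketch; the helpers select, expand, simplify, and is_redundant are hypothetical placeholders (real provers use term indexing rather than the linear scans shown here), and clauses are assumed hashable, for instance frozensets of literals:

    # A minimal sketch of the given-clause loop, with forward contraction
    # and backward contraction implemented by forward contraction.

    def given_clause_loop(input_clauses, select, expand, simplify, is_redundant):
        active, passive = set(), set(input_clauses)
        while passive:
            given = select(passive)              # heuristic choice of the given clause
            passive.discard(given)
            given = simplify(given, active)      # forward contraction of the given clause
            if given is None or is_redundant(given, active):
                continue                         # the given clause was redundant
            if not given:                        # empty clause: contradiction found
                return "unsatisfiable"
            # Backward contraction by forward contraction: clauses of active
            # that the given clause reduces go back to passive as if new.
            reducible = {c for c in active if is_redundant(c, {given})}
            active -= reducible
            passive |= reducible
            active.add(given)
            for new in expand(given, active):    # expansion steps with active premises
                new = simplify(new, active)      # forward contraction of each new clause
                if new is not None and not is_redundant(new, active):
                    passive.add(new)
        return "saturated"                       # passive exhausted: the limit is saturated

Whether simplify keeps the union of active and passive reduced, or only active, is exactly the choice between the aggressive and the prudent search plan mentioned above.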
Now, let's see an example of the kind of aggressive contraction that we can get in this kind of framework, for instance working with conditional equations, to go beyond the purely equational theories. Assume we have A equals B implies F(A) equals C, and A equals B implies F(B) equals C. Now, assume that the ordering on symbols is F greater than A, A greater than B, B greater than C. Then A equals B implies F(A) equals C reduces to A equals B implies C equals C, and because C equals C is trivial, this clause can be deleted.
So how does this reduction happen? Because A equals B is the same condition: the idea is that we use the condition itself to do rewriting. So this A reduces to B under the condition, and then, because F(B) is equal to C by the second clause, F(A) gets reduced to C. So the entire first clause can go away.
So the idea is that we can also use the conditions as rewrite rules. For another example, say we have an ordering with A greater than B and A greater than C. The set with A equals B implies B equals C and A equals B implies A equals C is saturated. The set with only A equals B implies B equals C presents the same theory; it is complete but not saturated. You see, it is smaller -- it doesn't have A equals B implies A equals C -- and it is reduced.
Why? Why can we get rid of A equals B implies A equals C? Because this conditional equation can self-reduce: it can reduce itself to A equals B implies B equals C, which is subsumed by the other copy of A equals B implies B equals C that we already have. Because, again, we allow the condition to be used for rewriting, so this A goes to B. Alternatively, we can reduce A equals B implies A equals C to A equals C implies A equals C. Why? Because under A equals B we can reduce the B to C this time: we're reducing the B in the condition to C by using this one, A equals B implies B equals C. So this B goes to C, and the result can also be deleted, because it's trivial.
Anyway, to summarize, the idea is that fairness should be something weaker than saturation. This can be achieved in the theoretical framework of the definitions by adopting proof orderings as opposed to formula orderings, and this offers the possibility to design nontrivial fair and contracting search plans and to prove them fair. And this, in my opinion, is a way to reduce the gap between the theory of theorem proving, which asks us to get saturation -- theorem proving is even presented as saturation-based strategies -- and the search plans and techniques that we have in practice, which usually allow more contraction. And the claim is that this can be done not at the expense of completeness, by preserving this weaker notion of fairness. And this is all I have. Thank you very much.
[applause]
These are some papers where this kind of work was developed. So if there are any questions, I'd be happy to take them. Well, I mean, I realize that it is a somewhat abstract theoretical topic. It's been with me a long time, actually -- you see 2013, 2007, 1995; that one actually was a paper publishing results from my Ph.D. thesis -- because I have this notion that we should do less than saturation, and it's something that I worked on over time more than once, because I really think that the very name of saturation-based theorem proving is somewhat misleading. I mean, why should we want to do saturation when in reality we are searching for one specific -- for at least one proof of one specific theorem? So saturation is too much, and we should focus more on weaker requirements that may also bring the theory and the practice of theorem proving closer, by giving more importance to contraction and to not uniformly fair search plans. Please.
>>: I was wondering whether you can compare the kind of things you discuss here with some of the standard definitions of fairness for, say, a concurrent setting, where strong and weak fairness show up. I'm not that familiar with the techniques, but I could see expansion versus contraction, for instance, as different kinds of processes. I don't know if this is related, whether there's a way to have a mapping, or if it's completely different; the guarantees that you get, I guess, are also different here. I'm not sure.
Maria Paola Bonacina: That's a very good point. I don't think that the theorem proving community took inspiration from the notions of fairness in other fields of computing. That's probably something that we should look into. Because fairness was enshrined pretty early as something that should produce the saturation effect; somehow it was sealed that way, and theorem proving based on superposition and resolution was called saturation-based, and that was it. So fairness was no longer a problem. And I've been contending for all these years that fairness is something to be discussed, that we should weaken that notion, and maybe precisely by looking at how fairness is defined in other fields we might get inspiration for nontrivial, practical, fair search plans. Thank you. That's a nice lead.
>>: Or maybe, in some sense, the notions of fairness have just diverged. There may be a few standard notions of fairness -- I don't know how many there are; there's a general notion of fairness for algorithmics, basically -- and that concrete setting probably doesn't quite address this one. Yours is perhaps closer to it, or more general in a way. But I don't know. I don't know.
Maria Paola Bonacina: If I may, consider also how we don't really discuss fairness much for decision procedures. The whole theoretical framework about saturation came first for theorem proving, where all we have is a semi-decision procedure and things in general are infinite, so we had this search process and we viewed it as a saturation, an accumulation process. But also in decision procedures there is a search, because even if the search space is finite, we still don't want to be exhaustive, because it's huge.
So the question of studying fairness also for decision procedures -- the order in which we do things -- in my opinion should be relevant. Somehow this is a topic that has not gotten much attention, because much of the effort went into designing refutationally complete systems and decision procedures that are complete in the sense of being able to generate the proof on the one hand or the model on the other. But I think we could also discuss what it means to be fair in practice and how we can best do it.
>>: So I have a question about that. Let's consider propositional logic and the satisfiability question. What I don't understand is: how can fairness, or paying attention to fairness, help us? It cannot help us in the worst case, right? In the worst case you have to be exhaustive. So what is the point of it? Is it that it gives you some heuristics, so that in some cases, hopefully related to practical problems that arise, you terminate faster?
Maria Paola Bonacina: I would say two things. On the one hand it can inspire heuristics; on the other hand -- okay, for a decision procedure the search space is finite, we have an algorithm, and therefore we also have the classical notion of complexity. We know it's exponential. So we cannot do better than that in the worst case.
However, we can investigate other notions of complexity. Because one can investigate notions of proof complexity, one can investigate notions of search complexity. So although the search space remains exponential, I can still compare two procedures by which part of the search space they visit, for instance.
So we could develop measures, say orderings, that capture how large is the search space visited by one and how large is the search space visited by the other, and establish a comparative result. That doesn't change the fact that in the worst case it's exponential, but it can say that for a certain class of inputs A generates a smaller space than B. This is also something that might be useful to investigate, if we see fairness as something to be discussed also for decision procedures.
Any more questions? Thank you very much for coming, thank you for listening.