Lattices of Knowledge in Intelligent Systems Validation

From: Proceedings of the Twelfth International FLAIRS Conference. Copyright © 1999, AAAI (www.aaai.org). All rights reserved.
Klaus P. Jantke
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
Stuhlsatzenhausweg 3
66123 Saarbrücken, Germany
jantke@dfki.de
Abstract

Validation is the art of investigating whether or not a given system is "the right one" according to the users' expectations and to the necessities of the application domain. Researchers all over the world are currently striving hard to transform intelligent systems validation into a science.

The focus of the present paper is rather narrow. It aims at a very little contribution to advance the formal background of systems validation. The authors hope that, even in case they do not completely succeed in their efforts towards a science of intelligent systems validation, their contribution might advance the art a little.

The paper deals with problems concerning the knowledge sources which have to be utilized in systems validation. There is a particular focus on knowledge about the system under inspection. The key technical term of this publication is test case, and the key computational process is the reduction of sets of test cases to subsets of data which are still sufficiently expressive, but feasible. The crucial theoretical concept underlying this approach is inheritance of validity, and the key perspective underlying the knowledge processing approach is circumscribed by the concept of a lattice of knowledge.

As one can usually build Boolean combinations of knowledge units, i.e. one may ask for conditions that hold together or for some finite collections of properties where at least one of them must be true, system knowledge may be seen as a lattice structure.

Those lattices of knowledge are introduced. The usage of the lattice of knowledge perspective is exemplified within the area of validating learning systems.
Jörg Herrmann
Deutsches Zentrum für Luft- und Raumfahrt e.V.
DFD-IT
82234 Oberpfaffenhofen, Germany
joerg.herrmann@dlr.de
Science is knowledge which we understand so well
that we can teach it to a computer;
and if we don't fully understand something,
it is an art to deal with it.
... we should continually be striving
to transform every art into a science:
in the process, we advance the art.

Donald E. Knuth
Turing Award Lecture 1974
Motivation and Introduction

Validation is the art of investigating whether or not a given system is "the right one" according to the users' expectations and to the necessities of the application domain (cf. (Boehm 1984) and (O'Keefe & O'Leary 1993), e.g.). Researchers are currently striving hard to transform intelligent systems validation into a science.

The present investigations aim at a rather narrow contribution to this concerted endeavor. The goal is to develop concepts and methodologies towards locating knowledge needs in intelligent systems validation such that these intellectual tools allow for a more systematic development, reduction, and application of test cases for interactive validation. Even in case the authors do not completely succeed, they hope to advance the art slightly.

The novelty proposed in the present paper is the concept of lattices of knowledge and its usage in test case development and reduction.
The novelty proposed in the present paper is the
concept of lattices of knowledge and its usage in test
case development and reduction.
The authors refrain from any introduction into the necessities of validation and verification of complex systems. Interested readers may find several convincing arguments in the topical publications collected in the list of references below.

The present introductory chapter aims at a sketch of the authors' underlying assumptions. The focus of the investigations is on validation scenarios like those developed resp. used in (Knauf et al. 1998b) and in related publications like (Knauf, Philippow, & Gonzalez 1997), (Knauf et al. 1997), or (Knauf et al. 1998a), e.g. The interested reader may consult (Jantke, Abel, & Knauf 1997) for the fundamentals of the so-called TURING test approach adopted and, furthermore, (Abel, Knauf, & Gonzalez 1996), (Arnold & Jantke 1997), (Gonzalez, Gupta, & Chianese 1996), (Jantke 1997), and (Terano & Takadama 1998) for a variety of applications in different areas.

VERIFICATION, VALIDATION, & KB REFINEMENT 499
Interactive approaches to system validation based on experimentation with the target system consist of the following main stages:
• test case generation and optimization,
• experimentation by feeding in test cases and receiving system responses,
• evaluation of experimentation results, and
• validity assessment based on experimentation results,
which might be looped and dovetailed in several ways.

The key computational process underlying this publication is the reduction of test sets, and the key perspective of this knowledge processing approach is circumscribed by the concept of a lattice of knowledge.
There is abundant evidence for the need of test case reduction. If one takes, carelessly, just all imaginable combinations of system inputs as potential test data, one arrives at computationally unfeasible sets of test cases. (Michels et al. 1998) reports about a quite small knowledge based system with an original set of 35,108,736,000 test cases. The reader may consult (Michels, Abel, & Gonzalez 1998) for details.
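The explosion is easy to reproduce: taking all combinations of system inputs as test data means forming the Cartesian product of all attribute value ranges. A minimal sketch in Python, where the attributes and their value counts are invented for illustration and are not the system from the citation:

```python
from itertools import product

# Hypothetical input attributes of a small knowledge-based system;
# the names and value counts are invented for illustration.
attribute_values = {
    "temperature": range(10),                    # 10 discretized readings
    "pressure": range(8),                        # 8 discretized readings
    "valve_state": ["open", "closed", "stuck"],  # 3 states
    "alarm": [True, False],                      # 2 states
}

# Exhaustive test data = Cartesian product of all value ranges.
domains = list(attribute_values.values())
exhaustive_size = 1
for d in domains:
    exhaustive_size *= len(d)

print(exhaustive_size)  # 10 * 8 * 3 * 2 = 480 for this toy system

# Materializing the product makes the growth tangible; with realistic
# attribute counts this quickly reaches billions of cases.
test_data = list(product(*domains))
assert len(test_data) == exhaustive_size
```

Each additional attribute multiplies the set size, which is why realistic systems reach billions of combinations.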
This paper extends work formerly published, but not mentioned here, according to the submission policy.

The present focus is on locating knowledge sources which allow for the substantial reduction of sets of test data without any loss of validation power. Emphasis is on system knowledge of different strength forming some lattice (cf. (Grätzer 1978), e.g.).

The crux is that if test case reduction of this type succeeds, this implies the existence of some induction principle which, in turn, allows for an induction step from the system's validity on a rather small set of test cases to its validity on the usually much larger set of all relevant cases.

In its right perspective, the crucial process of test case reduction is the abductive construction of assumptions supporting some suitable step of induction. From Karl Popper's seminal work on the logic of scientific discovery (cf. (Popper 1934) resp. (Popper 1965)), it is already well-known - and also discussed in (Herrmann, Jantke, & Knauf 1998), in some detail - that there does not exist any universal approach towards a deductive justification of induction. Thus, one has to search for domain-specific and even for system-specific variants of validity and their inheritance.
Test Cases and Test Data
Whenhumansare probing systems interactively,
two
substantially different situations mayoccur. In the
one situation, thc system is triggered by some input
with somc expected system’s reaction in mind. In the
5OO JANTKE
other situation, the humandoing the experiments has
no particular expectations anti is just looking for what
may happen. More abstractly speaking, in the first
case there are related input and output data giwmprior
to experimentation, whereas in the second situation
there are only input data given explicitly.
When human validators are dealing with complex
systems, it is assumed that they have expectations
about the system’s behaviour. The actual behaviour
is then compared to these expectations and ,waluated
accordingly. The overall validity assessment is synthesized upon the results of evaluating a possibly large
number of individual experimeats.
Without any a priori expectations, it might be quite
difficult to evaluate what actually happensand t.o validate the whole system. This case is more ilk,’ exploration rather than validation.
In particular, exploration is mor~involved th;ul validation, as it has to face several issues being Iwyond
validatiOlL Validation assumes domain knowledge to
some extent, whereas exploration problems may deal
with building theories to inlG~rpret unexpected ,w,nts.
All this is far beyondthe present investigations.
To sum up, the behavior of a system uniter validation
may reasonably be abstracted as some relation over
tlw Cartesian product of the system’s input spare and
its omput space. Thus, lest cases arc always pairs
of inputs and related outputs. The inpnt parts of test
cases are called tesl data. The out put ])arts (’orrespolld
to the expected system’s reaction on these t~,st data.
Test Case Reduction
Assumeany target system ,b’ under itlvestigation.
Dcterutined by the needs of the application domain, there
is somerelation R.* el’all cases of potentially acceptable
system be.havior. There might be several subsets R° of
R* which are sufficiently comprehensive such that successflfl testing of S on R.* wouldguarantee the wdidity
of ,b’. Weare looking for minimaof those sets.
For any set of test cases R., the tbrmula valid(,b’, R)
is iatended to express the validity of the system ,S’ on
all test data in R. ’[’here might be several int,.,rpretations of valid(S. R.). for instance equality vs. st.mantic
equivalence of the system’s output on R and those expectations which are eucoded in the outpul, part of
any test case. R.t~gardless of that. a poinl.wis~, testing
of valid(S.e), for all cases cE R, nfight be computa]tonally unfeasible if R is large.
~l’~st case reduction aims at [inding somefairly small
subset Mof R such that (i) valid(S,M) can be experimentally justified and, furthermore, (it) there is an inheritance relationship inh~aua(M, R) which allows for
1 he conehtsion:
valid(S.M)
A inh,,~ua(M,R)
~ valid(S.R)
In its right perspective, this tbrmula is repr,senting
some scheme of induction.
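Operationally, the scheme reads as follows. The sketch below is ours, not the paper's: valid is realized as pointwise testing against expected outputs, while inh_valid stands in for whatever domain- or system-specific inheritance argument justifies the induction step (here only a trivially checkable necessary condition, M ⊆ R):

```python
def valid(system, test_cases):
    """Pointwise testing: the system must reproduce the expected
    output for every test case (input, expected_output)."""
    return all(system(x) == y for x, y in test_cases)

def inh_valid(subset, full_set):
    """Placeholder for the inheritance relationship inh_valid(M, R).
    In the paper this is a domain/system-specific argument that
    validity on M carries over to R; here we only check M <= R,
    a necessary but by no means sufficient condition."""
    return subset <= full_set

def assess(system, M, R):
    # valid(S, M) and inh_valid(M, R)  =>  valid(S, R)
    return valid(system, M) and inh_valid(M, R)

# Toy example: S squares its input; R is large, M is a small subset.
S = lambda x: x * x
R = {(x, x * x) for x in range(10_000)}
M = {(0, 0), (1, 1), (2, 4)}
print(assess(S, M, R))  # True - only the feasible part M was executed
```

The point of the construction is that only valid(S, M) requires experimentation; the step from M to R rests entirely on the justification of inh_valid.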
Lattice Structures

For the purpose of this paper, an intuitive approach will be sufficient. More formal knowledge of so-called lattice theory can be found in standard references like (Grätzer 1978), e.g.

In lattice theory, one is always dealing with partially ordered sets of certain objects. Minima, maxima, infima, and suprema are understood as usual. Such a partially ordered set is called a lattice exactly if any two objects always have their infimum and their supremum within the set.
Towards Lattices of Knowledge

There are, at least, the following knowledge sources to be taken into account:
• domain knowledge about the application area where AI systems have to be validated,
• problem knowledge about the particularly chosen test cases for validation,
• system knowledge about the individual system actually under inspection.

Usually, knowledge bases are at least propositional, i.e. knowledge units may be combined by Boolean operators forming other admissible knowledge. Under the assumption of a partial ordering between knowledge determined by logical implication, knowledge bases turn out to form lattices, as the logical disjunction may be interpreted as forming the supremum of its arguments, whereas the logical conjunction yields the infimum.

Figure 1: Lattice Structure

Figure 1 is illustrating the structure of a lattice of knowledge.

The Present Approach Revisited

From the view of organizing and using knowledge for test case reduction, we are going to propose some concepts for relating system knowledge. These formalized concepts should support the investigation of questions like the following:
1. Given a particular validation task, which amount of knowledge is necessary resp. sufficient for reducing sets of test cases to a feasible size?
2. Given some particular knowledge, which additional knowledge may be required?
3. Is there any way to trade knowledge of the one type against knowledge of another type?

The approach exhibits the necessity to specify a few more details. Just for illustration, what precisely is a validation task? We have undertaken a case study in the validation of learning systems. This may be seen as a prototypical example of how to invoke the formalisms developed towards reasoning in a partially informal way.

Prior to all validation problems, there is the application domain. A remarkable amount of domain knowledge is rather independent of any validation tasks.

System knowledge may or may not come with the system to be validated. This knowledge is normally not tailored towards system validation, but it is, doubtless, essential. In the one extreme, a system is only given as a black box, that means that nothing but the system's behavior is accessible. In the other extreme, the system is given as a so-called white box (cf. (Gupta 1993)) which provides internal details of the system's mechanisms to be exploited for validation.

The chosen test cases relate to particular problems which, intuitively, form a bunch of prototypical application problems to which the system currently under investigation has to be applied experimentally. The property of exhaustiveness means that these problems are chosen well enough such that validity on these problems implies validity in general. Such problem knowledge must be somehow essential, because of the rather ambitious implicit property of exhaustiveness. Even more important, the problem knowledge is the only one of these three knowledge sources which can be intentionally tailored towards the needs of system validation, whereas domain knowledge is determined prior to any validation attempt and system knowledge can hardly be influenced if a large class of unforeseeable systems shall potentially be validated. In consequence, we are especially interested in questions like these:
• Under which circumstances is problem knowledge sufficient?
• How to tailor problem knowledge appropriately such that it supports validation substantially?
Inductive Learning - A Case for Intelligent Systems Validation

The problems addressed seem to be hard, and there is not much hope for universal answers to any of the questions stated in the preceding section.

Under these circumstances, a case study in an application domain which, on the one hand, is sufficiently complex and, on the other hand, is well-understood, is deemed a reasonable way towards a better understanding of key phenomena, towards prototypical results, and towards a new perspective which might allow for guiding future research and development.

The area of inductive inference of recursive functions has been chosen as a prototypical application domain, because there is very recent preparatory work (cf. (Beick 1998), (Grieser, Jantke, & Lange 1998a), (Grieser, Jantke, & Lange 1998b), (Grieser, Jantke, & Lange 1998c), (Grieser, Jantke, & Lange 1998d), and (Grieser, Jantke, & Lange 1998e)) which is providing scenarios, formalizations, as well as particular topical results.

The authors refrain from an introduction into this domain and direct the reader to (Angluin & Smith 1983) and (Angluin & Smith 1992) for excellent surveys, and to (Jantke & Beick 1981), e.g., for a collection of results illustrating a variety of system types which might be subject to validation.
In the area of inductive inference, information about target functions to be learnt is presented as a finite set of input/output examples describing the target function's behavior. It is one of the oversimplifications¹ assumed here that such a finite set, denoted by f[n], is always an initial segment of the goal function f, i.e. it contains exactly all input/output pairs (x, f(x)) up to some point n:

{ (0, f(0)), (1, f(1)), ..., (n, f(n)) }

As learning systems one admits arbitrary computable functions S which get fed in some information of the form f[n] and hypothesize about f. Pairs (f[n], f) form sets of test cases ((x, f(x)), p_f) corresponding to the definition of test cases above, where p_f denotes some program for computing f.
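In code, an initial segment f[n] is simply the list of the first n+1 input/output pairs; the sketch below is our rendering of this convention, not part of the original formalism:

```python
def initial_segment(f, n):
    """f[n] = ( (0, f(0)), (1, f(1)), ..., (n, f(n)) )."""
    return [(x, f(x)) for x in range(n + 1)]

# Example target: a function that is almost everywhere constant.
f = lambda x: 7 if x > 2 else x
print(initial_segment(f, 4))  # [(0, 0), (1, 1), (2, 2), (3, 7), (4, 7)]
```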
Hypotheses S(f[n]) are programs which are intended to compute f. In the formal setting of recursion theory (cf. (Rogers jr. 1967) or (Machtey & Young 1974), e.g.), those programs are indices with respect to some GÖDEL numbering.

Success of learning any function f within a setting like this, naturally, means to come up, eventually, with some program p correctly describing² the target f. It is a quite intuitive standard requirement that after some point in time such an ultimately correct hypothesis is never changed.

¹ For the purpose of the present investigation, we adopt every possible simplification. Though the overall approach works in more general settings as well, there is no need for this generality.
² In terms of computability theory, this is abbreviated by φ_p = f, where φ denotes the underlying GÖDEL numbering.
Classes U of computable and totally defined functions are posing a learning problem. Does there exist any learning device S such that, given any function f of U and feeding growing samples f[n] to S, the hypotheses generated by S eventually converge to a correct program p for f? All those classes of uniformly learnable functions are forming what one calls an identification type. Variants of identification types can be determined by posing additional criteria of success (cf. (Jantke & Beick 1981)).

Combining U, S and I, a validation task becomes a triple [U, S, I], where the question is whether or not S is able to learn all functions f of U according to the criteria of success specifying the identification type I.

By means of a case study we exemplify how to solve tasks [U, S, I], respectively, how to use different knowledge sources in a prototypical validation scenario.
Two Sample Validation Tasks
Wcassume two slightly different validation tasks with
the following three parameters each. respectively.
The identification
type sketched above (which is
called LIM in (Jantke & Beick 1981), and EX in (Angluin & Smith 1992)) is adopted and briefly named
/. Intuitively. a learning system S learns a particular function f on subsequently prcseuted information f[n] (with growing n) according to I. exactly
if after, perhaps, finitely manyrnistakes it comes up
with a Iinal program p=.b’(f[n]) that computes
3n.Vm
> n ( ~st/[,,,] J = f
The one example problem class Ui belongs to II¢on~,,
the class of all functions which are almost everywhere
constant, l"ormally, the domain is defined ~m
U¢o,,,t = {fl3cf E/’N V°°z (.f(z) el) }.
The other problem U2 belongs to Upoty, the class of
all polynomials in one variable which have exclusively
integer roots:
~.)oly = {fl3ke tN3al ..... ak EZ
V~.(f(:~:)= I]~=1 (,e -ai} )
Both samplesl.:t and I.:., are finite, arbitrarily fixed
subclasses of the corresponding domain. Thus, UI C
[;~o,,.,, and U.., C Urot,j, respectively.
For the beginning, any cornputable learning device
S is assumed. This leads to validation tasks [U!,,b’, []
and [Uz,S,[]. In case particular knowledge needs to
become obvious in the course of investigation, S may
be further specified.
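Representative members of both domains are easy to write down; the particular functions below are our own illustrative choices:

```python
from functools import reduce

# A member of U_const: almost everywhere equal to the constant 5.
def f_const(x):
    return 3 if x < 4 else 5

# A member of U_poly: a polynomial with exclusively integer roots,
# here f(x) = (x - 1)(x - 3).
def f_poly(x):
    roots = [1, 3]
    return reduce(lambda acc, a: acc * (x - a), roots, 1)

print([f_const(x) for x in range(6)])  # [3, 3, 3, 3, 5, 5]
print([f_poly(x) for x in range(5)])   # [3, 0, -1, 0, 3]
```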
Specific System Knowledge for Test Case Reduction

As seen from previous investigations, system knowledge is inevitable for checking the correctness of the particularly chosen scheme of induction underlying the test case reduction. Instead of illustrating this matter in more detail we refer to (Herrmann 1998), e.g.

For the present case study we premise the class of systems under consideration:

S(f[0])   = δ(f[0])
S(f[n+1]) = S(f[n])                   if ¬σ(f[n+1])
          = ρ(S(f[n]), n+1, f(n+1))   otherwise

Figure 2: The Normal Form of a Learning System S
Systems comply with a fixed standard or normal form. The normal form is the extra system knowledge provided, and the individual settings of parameters are used for tuning a device towards some particular problem. It remains a still sufficiently difficult task to validate those normal form learning systems.

In the sample normal form, there is some default mechanism δ for constructing initial hypotheses. The subroutine ρ determines how to update hypotheses incrementally, and the condition σ when to do that. Extra system knowledge means (i) to know that the system is of that normal form, (ii) to know the default mechanism δ, (iii) to know the subroutine ρ, and (iv) to know the condition σ for applying ρ.

For example, δ may return for every value y = f(x) some program for computing the constant function being everywhere equal to y. This is an appropriate default when learning functions of U_const.
Under the system knowledge of this default, the justification of the induction scheme above depends on the specific subroutine ρ and on σ. Suppose ¬σ(f[n+1]) implies that the system under consideration does never change its hypotheses on f after f[n+1] has been processed. If now one can justify (perhaps, even verify by means of some theorem proving techniques and tools) for all hypotheses h and h' = ρ(h, n+1, f(n+1)) the truth of

φ_{h'}(x) = φ_h(x)    if x ≤ n
          = f(n+1)    otherwise

then this yields the justification of induction.

Let us change the parameter settings of S so that S will be able to learn any class U2 ⊆ U_poly in the sense of [U2, S, I]. For the default δ we require

φ_{δ(f[n])}(x) = (x − n)   if f(n) = 0
               = 1         otherwise

The condition σ is set to σ(f[x]) ⟺ (f(x) = 0), and ρ(S(f[n]), n+1, f(n+1)) = τ complies with

φ_τ(x) = φ_{S(f[n])}(x) · φ_{δ(f[n+1])}(x).

Obviously, now the algorithm is characterized by the following properties:

• [1] S works incrementally (with some initial hypothesis h_0).
• [2] h_0 is the constant 1.
• [3] S is conservative, i.e. if hypothesis h_n, produced on f[n], is consistent with the new information in f[n+1], then S will not substitute h_n: h_{n+1} = h_n.
• [4] S is ignorant on ℕ × ℕ+, i.e.: f(n+1) > 0 ⟹ h_{n+1} = h_n.
• [5] f(n+1) = 0 ⟹ h_{n+1} = h_n · (x − (n+1)).

Figure 3: Validation Task [U2, S, I]

These five characteristics are sufficient for learning U2 from sequences f[n] according to I. The question is, how may one check this ability of S if initially any of these characteristics is unknown?

The first proposal was to test S for some appropriate test set. Without having any knowledge concerning S, the task [U2, S, I] is undecidable. Though U2 shall be finite, its elements are functions on ℕ, i.e. they are infinite objects. This is the same problem as in the case of U1. We will analyse the situation for U_poly by means of lattices over the system characteristics listed above.

Figure 3 illustrates some of the relations in S. As shown, the five characteristics are not independent of each other:

[4] ∧ [5] ⟹ [1]
[4] ∧ [5] ⟹ [3]

By proving ([4] ∧ [5]) we do possess already four of five prerequisites for stating that S is valid in terms of [U2, S, I]. It remains to prove [2] only.

Again, if [4] and [5] are valid, the task is easy. One might check [2] simply by testing, not for infinite amounts of information but for finite only. There even is a really small amount of work to do: f[0] = (0, 0) provides a minimal set of test data, sufficient for checking [2] against the output of S. For comparison, an analogous test of some initial hypothesis in U_const would be unfeasible. To be precise, [2] is testable if one of the two conditions [4] or [5] is known to be valid.

Beside that opportunity of verifying [2] by testing, one could draw [2] by looking into the system definition, i.e. into δ.
It turns out that [4] is the only property that might neither be tested nor be concluded from other properties. This exhibits the necessity to start validation even in this point by falling back upon available system knowledge.
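The five characteristics of the U_poly learner can likewise be turned into a runnable sketch; this rendering, with hypotheses represented by their lists of integer roots, is ours:

```python
def S_poly(segment):
    """Learner for U_poly following characteristics [1]-[5]:
    start from the constant 1 and append a factor (x - n) exactly
    when f(n) = 0; on non-zero values the hypothesis is unchanged."""
    roots = []                  # hypothesis: product of (x - a) over roots
    for x, y in segment:
        if y == 0:              # [5] a zero of f reveals an integer root
            roots.append(x)
        # [3]/[4] any non-zero value: hypothesis is kept as it is

    def h(x, roots=tuple(roots)):
        result = 1              # [2] the initial hypothesis is the constant 1
        for a in roots:
            result *= (x - a)
        return result
    return h

# Target: f(x) = (x - 1)(x - 3), a member of U_poly.
f = lambda x: (x - 1) * (x - 3)
sample = [(x, f(x)) for x in range(5)]     # f[4] already exposes both roots
h = S_poly(sample)
print([h(x) == f(x) for x in range(10)])   # all True: S has converged
```

Once every integer root has appeared in the sample, the hypothesis equals the target polynomial everywhere, which is what makes the finite test of [2] on f[0] meaningful.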
Conclusions

The present work mainly reflects insights from a case study in the validation of inductive inference systems. Although it is somehow risky to generalize, the results bear evidence of some more general phenomena, at least. As inductive inference of recursive functions is a rather narrow, thoroughly formalized, and sufficiently understood area, one might expect that intelligent systems validation can hardly do better in application domains which are less constrained.

The first general insight is that black box validation does not work. Even more, white box validation can only succeed if the system to be validated can be assumed to belong to some subclass of candidates.

Second, knowledge sources for intelligent systems validation have been classified into domain knowledge, problem knowledge, and system knowledge. System knowledge is considered as a lattice, to allow for navigation to separate system properties which can be verified by testing from those which have to be known in advance. This yields a well-formalized background for determining testability issues.
References
Abel, T., and Gonzalez, A. 1997. Utilizing criteria to reduce a set of test cases for expert system validation. In Dankel II, D., ed., FLAIRS-97, Proc. Florida AI Research Symposium, Daytona Beach, FL, USA, May 11-14, 1997, 402-406. Florida AI Research Society.
Abel, T.; Knauf, R.; and Gonzalez, A. 1996. Generation of a minimal set of test cases that is functionally equivalent to an exhaustive set, for use in knowledge-based system validation. In Stewman, J., ed., Proc. Florida AI Research Symposium (FLAIRS-96), Key West, FL, USA, May 20-22, 1996, 280-284. Florida AI Research Society.
Angluin, D., and Smith, C. 1983. A survey of inductive inference: Theory and methods. Computing
Surveys 15:237-269.
Angluin, D., and Smith, C. 1992. Inductive inference. In Shapiro, S., ed., Encyclopedia of Artificial Intelligence, Second Edition (Volume 1). John Wiley & Sons, Inc. 672-682.
Arnold, O., and Jantke, K. 1997. Towards validation of Internet agents. In Wittig, W., and Grieser, G., eds., LIT'97, Proc. 5. Leipziger Informatik-Tage, Leipzig, 25./26. September 1997, 89-100. Forschungsinstitut für InformationsTechnologien Leipzig e.V.
Beick, H.-R. 1998. Validation of inductive inference systems - towards generalizable approaches and results. In Beick, H.-R., and Jantke, K. P., eds., Future Trends in Intelligent Systems Validation, number MEME-MMM-98-5 in Series of Meme Media Management, 57-62. Meme Media Laboratory, Hokkaido University.
Benninghofen, B.; Kemmerich, S.; and Richter, M. 1987. Systems of Reductions, volume 277 of Lecture Notes in Computer Science. Springer-Verlag.
Boehm, B. W. 1984. Verifying and validating software requirements and design specifications. IEEE Software 1(1):75-88.
Freivalds, R.; Kinber, E.; and Smith, C. 1995. On the intrinsic complexity of learning. Information and Computation 123(1):64-71.
Gonzalez, A.; Gupta, U.; and Chianese, R. 1996. Performance evaluation of a large diagnostic expert system using a heuristic test case generator. Engineering Applications of Artificial Intelligence 9(3):275-284.
Grätzer, G. 1978. General Lattice Theory. Akademie-Verlag.
Grieser, G.; Jantke, K.; and Lange, S. 1998a. Abstract computational complexity & relativized computability for characterising expertise in learning systems validation. Technical Report MEME-MMM-98-3, Meme Media Laboratory, Hokkaido University.
Grieser, G.; Jantke, K.; and Lange, S. 1998b. Characterizing sufficient expertise for learning systems validation. In Cook, D., ed., Proc. Florida AI Research Symposium (FLAIRS-98), Sanibel Island, FL, USA, May 17-20, 1998, 452-456. AAAI Press.
Grieser, G.; Jantke, K.; and Lange, S. 1998c. Validation of inductive learning systems: A case for characterizing expertise in complex interactive systems validation. Technical Report MEME-MMM-98-1, Meme Media Laboratory, Hokkaido University.
Grieser, G.; Jantke, K. P.; and Lange, S. 1998d. Towards TCS concepts for characterizing expertise in learning systems validation. In Iwama, K., ed., Algorithms and Theory of Computing, 1998 Winter LA Symposium, Proc., Febr. 2-4, 1998, Kyoto, Japan, volume 1041 of RIMS Kokyuroku, 103-110. Kyoto University Research Inst. Mathem. Sciences (RIMS).
Grieser, G.; Jantke, K. P.; and Lange, S. 1998e. Towards the validation of inductive learning systems. In Richter, M. M.; Smith, C. H.; Wiehagen, R.; and Zeugmann, T., eds., Proc. 9th International Workshop on Algorithmic Learning Theory (ALT'98), October 8-10, 1998, Otzenhausen, Germany, volume 1501 of LNAI, 409-423. Springer-Verlag.
Gupta, U. 1993. Validation and Verification of Expert
Systems. IEEE Press, Los Alamitos, CA.
Herrmann, J.; Jantke, K.; and Knauf, R. 1997. Using structural knowledge for system validation. In Dankel II, D., ed., FLAIRS-97, Proc. Florida AI Research Symposium, Daytona Beach, FL, USA, May 11-14, 1997, 82-86. Florida AI Research Society.
Herrmann, J.; Jantke, K.; and Knauf, R. 1998. Variants of validity and their impact on the overall test space. In Cook, D., ed., Proc. Florida AI Research Symposium (FLAIRS-98), Sanibel Island, FL, USA, May 17-20, 1998, 472-477. AAAI Press.
Herrmann, J. 1997. Order dependency of principles for reducing test sets. In Wittig, W., and Grieser, G., eds., LIT'97, 5. Leipziger Informatik-Tage an der HTWK Leipzig, 25.-26. September 1997, Tagungsbericht, 77-82. Forschungsinstitut für InformationsTechnologien Leipzig e.V. (FIT Leipzig e.V.).
Herrmann, J. 1998. Locating knowledge needs for test case reduction. In Beick, H.-R., and Jantke, K. P., eds., Future Trends in Intelligent Systems Validation, number MEME-MMM-98-5 in Series of Meme Media Management, 43-55. Meme Media Laboratory, Hokkaido University.
Huet, G., and Oppen, D. 1980. Equations and rewrite rules: A survey. In Book, R., ed., Formal Language Theory: Perspectives and Open Problems. Academic Press. 349-405.
Jantke, K., and Beick, H.-R. 1981. Combining postulates of naturalness in inductive inference. EIK 17(8/9):465-484.
Jantke, K.; Abel, T.; and Knauf, R. 1997. Fundamentals of a TURING test approach to validation. Technical Report 01/97, Hokkaido University Sapporo, Meme Media Laboratory.
Jantke, K. 1997. Towards validation of data mining systems. In Wittig, W., and Grieser, G., eds., LIT'97, Proc. 5. Leipziger Informatik-Tage, Leipzig, 25./26. September 1997, 101-109. Forschungsinstitut für InformationsTechnologien Leipzig e.V.
Knauf, R.; Jantke, K.; Abel, T.; and Philippow, I. 1997. Fundamentals of a TURING test approach to validation of AI systems. In Guns, W., ed., IWK-97, 42nd International Scientific Colloquium, Ilmenau University of Technology, volume 2, 59-64. TU Ilmenau.
Knauf, R.; Abel, T.; Jantke, K.; and Gonzalez, A. 1998a. A framework for validation of knowledge based systems. In Grieser, G.; Beick, H.-R.; and Jantke, K., eds., Aspects of Intelligent Systems Validation, January 1998, Ilmenau, Germany, Meme Media Laboratory, Hokkaido University, Series of Meme Media Management, Report MEME-MMM-98-2, 1-20.
Knauf, R.; Jantke, K.; Gonzalez, A.; and Philippow, I. 1998b. Fundamental considerations of competence assessment for validation. In Cook, D., ed., Proc. Florida AI Research Symposium (FLAIRS-98), Sanibel Island, FL, USA, May 17-20, 1998, 457-461. AAAI Press.
Knauf, R.; Philippow, I.; and Gonzalez, A. 1997. Towards an assessment of an AI system's validity by a Turing test. In Dankel II, D., ed., FLAIRS-97, Proc. Florida AI Research Symposium, Daytona Beach, FL, USA, May 11-14, 1997, 397-401. Florida AI Research Society.
Knuth, D. E. 1974. Computer programming as an art. In ACM Turing Award Lectures. The First Twenty Years, ACM Press Anthology Series. ACM Press. 33-46.
Machtey, M., and Young, P. 1974. An Introduction to the General Theory of Algorithms. North-Holland.
Michels, J.-E.; Abel, T.; and Gonzalez, A. 1998. Validating a validation tool: Experience with a test case providing strategy. In Grieser, G.; Beick, H.-R.; and Jantke, K., eds., Aspects of Intelligent Systems Validation, January 1998, Ilmenau, Germany, Meme Media Laboratory, Hokkaido University, Series of Meme Media Management, Report MEME-MMM-98-2, 27-37.
Michels, J.-E.; Abel, T.; Knauf, R.; and Gonzalez, A. 1998. Investigating the validity of a test case selection methodology for expert system validation. In Cook, D., ed., Proc. Florida AI Research Symposium (FLAIRS-98), Sanibel Island, FL, USA, May 17-20, 1998, 462-466. AAAI Press.
O'Keefe, R. M., and O'Leary, D. E. 1993. Expert system verification and validation: A survey and tutorial. Artificial Intelligence Review 7:3-42.
Pitt, L. 1997. On exploiting knowledge and concept use in learning theory. In Li, M., and Maruoka, A., eds., Proc. 8th International Workshop on Algorithmic Learning Theory (ALT'97), October 6-8, 1997, Sendai, Japan, volume 1316 of LNAI, 62-84. Springer-Verlag.
Popper, K. 1934. Logik der Forschung. J.C.B. Mohr, Tübingen.
Popper, K. 1965. The Logic of Scientific Discovery. Harper & Row.
Rogers jr., H. 1967. The Theory of Recursive Functions and Effective Computability. McGraw-Hill.
Shoham, Y. 1988. Reasoning about Change. Cambridge, MA: MIT Press.
Terano, T., and Takadama, K. 1998. Validating a GA-based machine learning system. In Beick, H.-R., and Jantke, K. P., eds., Future Trends in Intelligent Systems Validation, number MEME-MMM-98-5 in Series of Meme Media Management, 69-76. Meme Media Laboratory, Hokkaido University.