Supporting Material: T0264P06_2 Representation

In the 1960s and 1970s, students frequently
asked, "Which kind of representation is best?" and
I usually replied that we'd need more research. ...
But now I would reply: To solve really hard
problems, we'll have to use several different
representations. This is because each particular
kind of data structure has its own virtues and
deficiencies, and none by itself would seem
adequate for all the different functions involved
with what we call common sense.
- Marvin Minsky
"The mind's mechanism for storing and retrieving knowledge is fairly
transparent to us. When we 'memorize' an orange, we simply examine
it, think about it for a while, and perhaps eat it. Somehow, during this
process, all the essential qualities of the orange are stored. Later, when
someone mentions the word 'orange,' our senses are activated from
within, and we see, smell, touch, and taste the orange all over again.
Computers, unfortunately, are not as adept at forming internal
representations of the world. ... Instead of gathering knowledge for
themselves, computers must rely on human beings to place knowledge
directly into their memories.
This suggests programming, but even before programming begins,
we must decide on ways to represent information, knowledge, and
inference techniques inside a computer."
- Arnold, William R. and John S. Bowie. 1985. Artificial Intelligence: A Personal Commonsense Journey.
Englewood Cliffs, NJ: Prentice Hall. Excerpt taken from the Introduction to Chapter 3, page 46.
Good Places to Start
What is A Knowledge Representation? Davis, Randall, Howard Shrobe,
and Peter Szolovits. AI Magazine (1993); 14 (1): 17-33. "What is a
knowledge representation? We argue that the notion can best be
understood in terms of five distinct roles it plays, each crucial to the
task at hand: * A knowledge representation (KR) is most fundamentally
a surrogate, a substitute for the thing itself, used to enable an entity to
determine consequences by thinking rather than acting, i.e., by
reasoning about the world rather than taking action in it. * It is a set of
ontological commitments, i.e., an answer to the question: In what terms
should I think about the world? * It is a fragmentary theory of intelligent
reasoning, expressed in terms of three components: (i) the
representation's fundamental conception of intelligent reasoning; (ii) the
set of inferences the representation sanctions; and (iii) the set of
inferences it recommends. * It is a medium for pragmatically efficient
computation, i.e., the computational environment in which thinking is
accomplished. One contribution to this pragmatic efficiency is supplied
by the guidance a representation provides for organizing information so
as to facilitate making the recommended inferences. * It is a medium of
human expression, i.e., a language in which we say things about the
world."
Knowledge Representation: Logical, Philosophical, and Computational
Foundations. By John Sowa. 2000. Pacific Grove: Brooks/Cole.
"Knowledge representation is a multidisciplinary subject that applies
theories and techniques from three other fields: 1. Logic provides the
formal structure and rules of inference. 2. Ontology defines the kinds of
things that exist in the application domain. 3. Computation supports the
applications that distinguish knowledge representation from pure
philosophy." - from the Preface.
Computational Intelligence - A Logical Approach. By David Poole, Alan
Mackworth and Randy Goebel. 1998. Oxford University Press, New
York. "In order to use knowledge and reason with it, you need what we
call a representation and reasoning system (RRS). A representation
and reasoning system is composed of a language to communicate with
a computer, a way to assign meaning to the language, and procedures
to compute answers given input in the language. Intuitively, an RRS
lets you tell the computer something in a language where you have
some meaning associated with the sentences in the language, you can
ask the computer questions, and the computer will produce answers
that you can interpret according to the meaning associated with the
language. ... One simple example of a representation and reasoning
system ... is a database system. In a database system, you can tell the
computer facts about a domain and then ask queries to retrieve these
facts. What makes a database system into a representation and
reasoning system is the notion of semantics. Semantics allows us to
debate the truth of information in a knowledge base and makes such
information knowledge rather than just data." - excerpt from Chapter 1
(pages 9 - 10).
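Poole, Mackworth, and Goebel's database example can be made concrete with a small sketch. The following is not code from the book; the names (`Fact`, `KB`, `tell`, `ask`) are illustrative assumptions, showing only the bare shape of a representation and reasoning system: a language for telling the computer facts, and a procedure for answering queries whose answers we interpret via the semantics we assigned.

```python
# A minimal, hypothetical sketch of an RRS in the spirit of Poole et al.:
# "tell" adds facts in a simple language; "ask" computes answers from them.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    predicate: str
    args: tuple

class KB:
    def __init__(self):
        self.facts = set()

    def tell(self, predicate, *args):
        """Tell the computer a fact about the domain."""
        self.facts.add(Fact(predicate, args))

    def ask(self, predicate, *args):
        """Answer a ground query: does the KB contain this fact?"""
        return Fact(predicate, args) in self.facts

kb = KB()
kb.tell("parent", "alice", "bob")
print(kb.ask("parent", "alice", "bob"))  # True
print(kb.ask("parent", "bob", "alice"))  # False
```

What turns this from a mere data store into a representation and reasoning system, on the authors' account, is the semantics: each `Fact` is taken to be a claim about the world whose truth can be debated, not just a record in memory.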
Computers versus Common Sense. Video (May 30, 2006; 1 hour, 15
minutes) from Google TechTalks. Dr. Douglas Lenat, President and
CEO of Cycorp, talks about common sense: "It's way past 2001 now,
where the heck is HAL? ... What's been holding AI up? The short
answer is that while computers make fine idiot savants, they lack
common sense: the millions of pieces of general knowledge we all
share, and fall back on as needed, to cope with the rough edges of the
real world. I will talk about how that situation is changing, finally, and
what the timetable -- and the path -- realistically are on achieving
Artificial Intelligence."
Lesson: Object-Oriented Programming Concepts. Part of The Java
Tutorial available from Sun Microsystems. "If you've never used an
object-oriented language before, you need to understand the
underlying concepts before you begin writing code. You need to
understand what an object is, what a class is, how objects and classes
are related, and how objects communicate by using messages. The
first few sections of this trail describe the concepts behind
object-oriented programming. The last section shows how these concepts
translate into code."
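The concepts the tutorial names can be sketched in a few lines. This example uses Python rather than the tutorial's Java, and the `Bicycle` class is an illustrative assumption, not code from the tutorial; it shows an object's state (fields), its class (the blueprint), and communication by messages (method calls).

```python
# A minimal sketch of the OOP concepts: class, object, state, messages.

class Bicycle:
    """A class: a blueprint from which objects are created."""

    def __init__(self):
        # An object's state is held in its fields.
        self.speed = 0

    def speed_up(self, increment):
        # Objects communicate by invoking methods ("sending messages").
        self.speed += increment

    def brake(self, decrement):
        self.speed = max(0, self.speed - decrement)

bike = Bicycle()    # an object: one instance of the class
bike.speed_up(10)   # a message to the object
bike.brake(3)
print(bike.speed)   # 7
```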
Knowledge Representation research at the Computational Intelligence
Research Laboratory (CIRL) at the University of Oregon. "Knowledge
representation (KR) is the study of how knowledge about the world can
be represented and what kinds of reasoning can be done with that
knowledge. Important questions include the tradeoffs between
representational adequacy, fidelity, and computational cost, how to
make plans and construct explanations in dynamic environments, and
how best to represent default and probabilistic information." In addition
to the helpful Pointers, be sure to follow the links to Subareas at the
bottom of their pages for additional information.
Understanding Musical Activities. A 1991 interview with Marvin Minsky,
edited by Otto Laske. "I want AI researchers to appreciate that there is
no one 'best' way to represent knowledge. Each kind of problem
requires appropriate types of thinking and reasoning -- and appropriate
kind of representations."
The Semantic Web. By Tim Berners-Lee, James Hendler, and Ora
Lassila. Scientific American (May 2001). "Traditional
knowledge-representation systems typically have been centralized, requiring
everyone to share exactly the same definition of common concepts
such as 'parent' or 'vehicle.' But central control is stifling, and increasing
the size and scope of such a system rapidly becomes unmanageable."
Readings Online
AI in the news: Representation
Claude E. Shannon: Founder of Information Theory. By Graham P.
Collins. Scientific American Explore (October 14, 2002). "Shannon's
M.I.T. master's thesis in electrical engineering has been called the most
important of the 20th century: in it the 22-year-old Shannon showed
how the logical algebra of 19th-century mathematician George Boole
could be implemented using electronic circuits of relays and switches.
This most fundamental feature of digital computers' design -- the
representation of 'true' and 'false' and '0' and '1' as open or closed
switches, and the use of electronic logic gates to make decisions and to
carry out arithmetic -- can be traced back to the insights in Shannon's
thesis."
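The idea in Shannon's thesis can be illustrated with a hedged sketch: Boolean operations stand in for switching circuits (series for AND, parallel for OR), and composing gates carries out arithmetic. The half adder below is a standard textbook circuit chosen for illustration, not something taken from the article.

```python
# Boolean algebra as switching logic, in the spirit of Shannon's thesis.

def AND(a, b): return a & b   # two switches in series
def OR(a, b):  return a | b   # two switches in parallel
def XOR(a, b): return a ^ b   # built from AND/OR/NOT in real circuits

def half_adder(a, b):
    """Add two one-bit numbers; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
# 1 + 1 yields sum 0 with carry 1 -- logic gates doing arithmetic.
```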
Diagrammatic Reasoning: Cognitive and Computational Perspectives.
Edited by Janice Glasgow, N. Hari Narayanan, and B.
Chandrasekaran. AAAI Press. The following excerpt is from the
Foreword by Herbert Simon which is available online: "That reasoning
using language and using diagrams were different, at least in important
respects, was brought home by the Pythagorean discovery of irrational
numbers. ... Words, equations, and diagrams are not just a machinery
to guarantee that our conclusions follow from their premises. In their
everyday use, their real importance lies in the aid they give us in
reaching the conclusions in the first place."
Natural Language Processing and Knowledge Representation:
Language for Knowledge and Knowledge for Language. Edited by
Lucja M. Iwanska and Stuart C. Shapiro. AAAI Press. The following
excerpt is from the Preface which is available online: "The research
direction of natural language-based knowledge representation and
reasoning systems constitutes a tremendous change in how we view
the role of natural language in an intelligent computer system. The
traditional view, widely held within the artificial intelligence and
computational linguistics communities, considers natural language as
an interface or front end to a system such as an expert system or
knowledge base. In this view, inferencing and other interesting
information and knowledge processing tasks are not part of natural
language processing. By contrast, the computational models of natural
language presented in this book view natural language as a knowledge
representation and reasoning system with its own unique,
computationally attractive representational and inferential machinery.
This new perspective sheds some light on the actual, still largely
unknown, relationship between natural language and the human mind.
Taken to an extreme, such approaches speculate that the structure of
the human mind is close to natural language. In other words, natural
language is essentially the language of human thought."
Alternative Representations: Neural Nets and Genetic Algorithms.
Section 1.2.9 of Chapter One (available online) of George F. Luger's
textbook, Artificial Intelligence: Structures and Strategies for Complex
Problem Solving, 5th Edition (Addison-Wesley; 2005). "Most of the
techniques presented in this AI book use explicitly represented
knowledge and carefully designed search algorithms to implement
intelligence. A very different approach seeks to build intelligent
programs using models that parallel the structure of neurons in the
human brain or the evolving patterns found in genetic algorithms and
artificial life."
Programs with Common Sense. A classic paper by John McCarthy
(1959). "This paper will discuss programs to manipulate in a suitable
formal language (most likely a part of the predicate calculus) common
instrumental statements. The basic program will draw immediate
conclusions from a list of premises. These conclusions will be either
declarative or imperative sentences. When an imperative sentence is
deduced the program takes a corresponding action. These actions may
include printing sentences, moving sentences on lists, and reinitiating
the basic deduction process on these lists."
The St. Thomas Common Sense Symposium: Designing Architectures
for Human-Level Intelligence. By Marvin Minsky, Push Singh, and
Aaron Sloman. AI Magazine 25(2): Summer 2004, 113-124. Abstract:
"To build a machine that has 'common sense' was once a principal
goal in the field of artificial intelligence. But most researchers in recent
years have retreated from that ambitious aim. Instead, each developed
some special technique that could deal with some class of problem
well, but does poorly at almost everything else. We are convinced,
however, that no one such method will ever turn out to be 'best,' and
that instead, the powerful AI systems of the future will use a diverse
array of resources that, together, will deal with a great range of
problems. To build a machine that's resourceful enough to have
humanlike common sense, we must develop ways to combine the
advantages of multiple methods to represent knowledge, multiple ways
to make inferences, and multiple ways to learn. We held a two-day
symposium in St. Thomas, U.S. Virgin Islands, to discuss such a
project --- to develop new architectural schemes that can bridge
between different strategies and representations. This article reports on
the events and ideas developed at this meeting and subsequent
thoughts by the authors on how to make progress."
Logical Versus Analogical or Symbolic Versus Connectionist or Neat
Versus Scruffy. By Marvin Minsky. AI Magazine (1991); 12 (2): 34-51.
Takes the position that AI systems should assimilate both symbolic and
connectionist views.
A Framework for Representing Knowledge. By Marvin Minsky. MIT-AI
Laboratory Memo 306, June, 1974. Reprinted in The Psychology of
Computer Vision, P. Winston (Ed.), McGraw-Hill, 1975. Shorter
versions in J. Haugeland, Ed., Mind Design, MIT Press, 1981, and in
Cognitive Science, Collins, Allan and Edward E. Smith (eds.) Morgan
Kaufmann, 1992. "It seems to me that the ingredients of most theories
both in Artificial Intelligence and in Psychology have been on the whole
too minute, local, and unstructured to account -- either practically or
phenomenologically -- for the effectiveness of common-sense thought.
The 'chunks' of reasoning, language, memory, and 'perception' ought to
be larger and more structured; their factual and procedural contents
must be more intimately connected in order to explain the apparent
power and speed of mental activities. ... I try here to bring together
several of these issues by pretending to have a unified, coherent
theory. The paper raises more questions than it answers, and I have
tried to note the theory's deficiencies. Here is the essence of the theory:
When one encounters a new situation (or makes a substantial change
in one's view of the present problem) one selects from memory a
structure called a Frame. This is a remembered framework to be
adapted to fit reality by changing details as necessary. A frame is a
data-structure for representing a stereotyped situation, like being in a
certain kind of living room, or going to a child's birthday party."
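Minsky's frame idea can be sketched as a data structure: a stereotype with default slot values, adapted to a new situation by overriding details. The slot names and inheritance scheme below are illustrative assumptions, a minimal rendering of the paper's idea rather than Minsky's own formalism.

```python
# A minimal frame: slots with defaults, inherited from a stereotype
# and overridden by the details of the actual situation.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent      # a more general frame to fall back on
        self.slots = slots

    def get(self, slot):
        # Look in this frame first; otherwise use the stereotype's default.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

birthday_party = Frame("birthday-party",
                       guests="children", food="cake", activity="games")
todays_party = Frame("party-at-sams", parent=birthday_party,
                     activity="magician")

print(todays_party.get("food"))      # cake (default from the stereotype)
print(todays_party.get("activity"))  # magician (detail changed to fit reality)
```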
Enabling Technology For Knowledge Sharing. Robert Neches, Richard
Fikes, Tim Finin, Thomas Gruber, Ramesh Patil, Ted Senator, and
William R. Swartout. AI Magazine (1991); 12(3).
The Knowledge Level. Allen Newell. [AAAI Presidential Address, 19
August 1980.] AI Magazine (1981); 2 (2): 1-20. A classic article
describing the differences in viewing computer programs at the symbol
level or the knowledge level.
Logical Agents. Chapter 7 of the textbook, Artificial Intelligence: A
Modern Approach (Second Edition), by Stuart Russell and Peter
Norvig. "This chapter introduces knowledge-based agents. The
concepts that we discuss -- the representation of knowledge and the
reasoning processes that bring knowledge to life -- are central to the
entire field of artificial intelligence. ... We begin in Section 7.1 with the
overall agent design. Section 7.2 introduces a simple new environment,
the wumpus world, and illustrates the operation of a knowledge-based
agent without going into any technical detail. Then, in Section 7.3, we
explain the general principles of logic. Logic will be the primary vehicle
for representing knowledge throughout Part III of the book."

Some wumpus world resources:
o The Wumpus World. From the course materials for CIS
587, "the introductory course in Artificial Intelligence for
graduate computing students at Temple University." "A
variety of 'worlds' are being used as examples for
Knowledge Representation, Reasoning, and Planning.
Among them the Vacuum World, the Block World, and the
Wumpus World. We will examine the Wumpus World and
in this context introduce the Situation Calculus, the Frame
Problem, and a variety of axioms. The Wumpus World
was introduced by Genesereth, and is discussed in
Russell-Norvig. The Wumpus World is a simple world (as
is the Block World) for which to represent knowledge and
to reason."
o Agents That Reason Logically. Lecture slides from Dr.
Timothy W. Finin, Computer Science and Electrical
Engineering Department, University of Maryland
Baltimore County (UMBC).
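The logical reasoning these chapters and lectures describe can be sketched with a tiny model-checking routine. The wumpus-flavored symbols below (`B11` for "breeze in [1,1]", `P12`/`P21` for "pit in [1,2]/[2,1]") follow the usual textbook convention but are illustrative assumptions, not code from Russell and Norvig.

```python
# Propositional entailment by enumerating models (truth-table checking),
# in the spirit of the knowledge-based agent of the wumpus world.

from itertools import product

SYMBOLS = ["B11", "P12", "P21"]

def kb(m):
    # KB: there is a breeze in [1,1] iff a pit is adjacent,
    # and the agent has perceived no breeze in [1,1].
    return (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]

def entails(kb, query):
    """KB |= query iff query holds in every model where KB holds."""
    for values in product([False, True], repeat=len(SYMBOLS)):
        m = dict(zip(SYMBOLS, values))
        if kb(m) and not query(m):
            return False
    return True

# From "no breeze in [1,1]" the agent concludes there is no pit in [1,2].
print(entails(kb, lambda m: not m["P12"]))  # True
```

Enumeration is exponential in the number of symbols, which is exactly the representational trade-off (adequacy versus computational cost) the CIRL description above mentions.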
Representation and Learning in Robots and Animals. An IJCAI-05
tutorial organised by Aaron Sloman and Bernt Schiele on behalf of the
EC-Funded CoSy Project.
o Detailed tutorial programme with abstracts
o Tutorial booklet
Logic and Artificial Intelligence. Entry by Richmond Thomason. The
Stanford Encyclopedia of Philosophy (Fall 2003 Edition); Edward N.
Zalta, editor. "1.2 - Knowledge Representation In response to the need
to design this declarative component, a subfield of AI known as
knowledge representation emerged during the 1980s."
Related Web Sites
AI on the Web: Logic and Knowledge Representation. A resource
companion to Stuart Russell and Peter Norvig's "Artificial Intelligence: A
Modern Approach" with links to reference material, people, research
groups, books, companies and much more.
Cognitive Systems for Cognitive Assistants (CoSY), an EU FP6 IST
Cognitive Systems Integrated project. "The main goal of the project is
to advance the science of cognitive systems through a multi-disciplinary
investigation of requirements, design options and trade-offs for
human-like, autonomous, integrated, physical (e.g., robot) systems, including
requirements for architectures, for forms of representation, for
perceptual mechanisms, for learning, planning, reasoning and
motivation, for action and communication."

DR.2.1 Requirements study for representations. Deliverable from
the School of Computer Science, University of Birmingham.
o 1.1) Requirements study for representations -
Background: representational issues in natural and
artificial systems: "Although in the last decade and a half
it has become fashionable in some circles (following
[Brooks, 1991]) to claim that representations are not
needed in intelligent systems we regard this sort of claim
as merely part of a strategy to focus attention on a
particular narrow class of types of representation. For
insofar as animals and machines process information that
information somehow needs to be available to the
mechanisms that perform the processing. Whatever
encodes or embodies the information in a usable way can
be regarded as a representation in the general sense of
being 'something that presents information'. In this
sense, talk of representations and information-processing
is now commonplace among many kinds of scientists
including not only Computer Scientists and AI
researchers, but also psychologists, neuroscientists,
biologists and physicists."
Knowledge Representation. A great collection of links from Enrico
Franconi. Be sure to see the section which features KR projects.
Knowledge Systems Research at AIAI, the Artificial Intelligence
Applications Institute at the University of Edinburgh's School of
Informatics. "AIAI's Knowledge Systems Research concentrates on
those areas of Artificial Intelligence that are concerned with explicit
representations of knowledge. These are Knowledge Representation,
including Ontologies, Enterprise Modelling, and Knowledge
Management; Knowledge Engineering, including tools for acquiring
formal models and checking their structure; and, more recently,
services and brokering on the Semantic Web."
X-Net Knowledge Bases Project at Commonsense Computing @
Media [the MIT Media Lab]. "What are good ways to represent and
organize commonsense knowledge? Rather than building a single
monolithic commonsense knowledge base we are exploring different
ways to 'slice' the problem into more specific but still broad-spectrum
knowledge bases. For example, we are developing separate
knowledge bases that represent knowledge in the form of semantic
networks, probabilistic graphical models, and story scripts."
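One of the representations the project mentions, a semantic network, is simply a labeled graph of concepts and relations in which properties can be inherited along "is-a" links. The nodes and edges below are a standard illustrative example, not content from the X-Net knowledge bases.

```python
# A minimal semantic network: (node, relation) -> value, with property
# inheritance by following "is-a" links to more general concepts.

semantic_net = {
    ("canary", "is-a"):  "bird",
    ("bird",   "is-a"):  "animal",
    ("bird",   "can"):   "fly",
    ("canary", "color"): "yellow",
}

def lookup(node, relation):
    """Find a property, inheriting from more general concepts if needed."""
    while node is not None:
        if (node, relation) in semantic_net:
            return semantic_net[(node, relation)]
        node = semantic_net.get((node, "is-a"))
    return None

print(lookup("canary", "can"))    # fly (inherited from bird)
print(lookup("canary", "color"))  # yellow (stored directly)
```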
Related Pages in AI Topics
AI toons - see our representation toon
Commonsense
General Index to News by Topic: Representation
Languages and Structures
Natural Language Understanding & Generation [parsing]
Ontologies
Reasoning
More Readings
Allen, J. F. 1991. Time and Time Again: The Many Ways to Represent
Time. International Journal of Intelligent Systems 6: 341-355.
Barr, Avron, and Edward A. Feigenbaum, editors. 1981. The Handbook
of Artificial Intelligence, Volume 1: 143. (Reading, MA: Addison-Wesley,
1989)

"In AI, a representation of knowledge is a combination of data
structures and interpretive procedures that, if used in the right
way in a program, will lead to 'knowledgeable' behavior. Work on
knowledge representation in AI has involved the design of
several classes of data structures for storing information in
computer programs, as well as the development of procedures
that allow 'intelligent' manipulation of these data structures to
make inferences."
Brachman, Ronald, and Hector Levesque. 2004. Knowledge
Representation and Reasoning. Morgan Kaufmann (part of Elsevier’s
Science and Technology Division). Excerpt from the publisher's
description: "Knowledge representation is at the very core of a radical
idea for understanding intelligence. Instead of trying to understand or
build brains from the bottom up, its goal is to understand and build
intelligent behavior from the top down, putting the focus on what an
agent needs to know in order to behave intelligently, how this
knowledge can be represented symbolically, and how automated
reasoning procedures can make this knowledge available as needed.
This landmark text takes the central concepts of knowledge
representation developed over the last 50 years and illustrates them in
a lucid and compelling way. Each of the various styles of representation
is presented in a simple and intuitive form, and the basics of reasoning
with that representation are explained in detail."
Brachman, R. J., and H. J. Levesque, editors. 1985. Readings in
Knowledge Representation. San Mateo, CA: Morgan Kaufmann.
Davis, E. 1990. Representations of Commonsense Knowledge. San
Mateo, CA: Morgan Kaufmann.
Hayes, Patrick J. 1995. In Defense of Logic. In Computation and
Intelligence: Collected Readings, ed. Luger, George F., 261-273. Menlo
Park/Cambridge, MA/London: AAAI Press/The MIT Press.
Hendrix, G. 1979. Encoding Knowledge in Partitioned Networks. In
Associative Networks, ed. Findler, N., 51-92. New York: Academic
Press.
Holmes, Bob. 1999. Beyond Words. New Scientist Magazine (7/10/99).
"[V]isual language works better for some kinds of information than for
others. 'It's best at being able to grasp things in context and see how
they're related,' says Terry Winograd, a computer scientist who directs
the Program on People, Computers, and Design at Stanford University.
'It's correspondingly less good at precision and detail.'"
Koller, Daphne and Brian Milch. Multi-Agent Influence Diagrams for
Representing and Solving Games. In Proceedings of the 17th
International Joint Conference on Artificial Intelligence, 2001. Abstract:
"The traditional representations of games using the extensive form or
the strategic (normal) form obscure much of the structure that is
present in real-world games. In this paper, we propose a new
representation language for general multi-player games --- multi-agent
influence diagrams (MAIDs). This representation extends graphical
models for probability distributions to a multi-agent decision-making
context. MAIDs explicitly encode structure involving the dependence
relationships among variables. As a consequence, we can define a
notion of strategic relevance of one decision variable to another: D' is
strategically relevant to D if, to optimize the decision rule at D, the
decision maker needs to take into consideration the decision rule at D'.
..."
The Charlie Rose Show (December 21, 2004) : A Conversation About
Artificial Intelligence, with Rodney Brooks (Director, MIT Artificial
Intelligence Laboratory & Fujitsu Professor of Computer Science &
Engineering, MIT), Eric Horvitz (Senior Researcher and Group
Manager, Adaptive Systems & Interaction Group, Microsoft Research),
and Ron Brachman (Director, Information Processing Technology
Office, Defense Advanced Research Project Agency, and President,
American Association for Artificial Intelligence). "Rose: What do you
think has been the most important advance so far? Brachman: A lot of
people will vary on that and I'm sure we all have different opinions. In
some respects one of the - - - I think the elemental insights that was
had at the very beginning of the field still holds up very strongly which is
that you can take a computing machine that normally, you know, back
in the old days we think of as crunching numbers, and put inside it a set
of symbols that stand in representation for things out in the world, as if
we were doing sort of mental images in our own heads, and actually
with computation, starting with something that's very much like formal
logic, you know, if-then-else kinds of things, but ultimately getting to be
softer and fuzzier kinds of rules, and actually do computation inside, if
you will, the mind of the machine, that begins to allow intelligent
behavior. I think that crucial insight, which is pretty old in the field, is
really in some respects one of the lynch pins to where we've gotten. ...
Horvitz: I think many passionate researchers in artificial intelligence
are fundamentally interested in the question of Who am I? Who are
people? What are we? There's a sense of almost astonishment at the
prospect that information processing or computation, if you take that
perspective, could lead to this. Coupled with that is the possibility of the
prospect of creating consciousnesses with computer programs,
computing systems some day. It's not talked about very much at formal
AI conferences, but it's something that drives some of us in terms of our
curiosity and intrigue. I know personally speaking, this has been a core
question in the back of my mind, if not the foreground, not on my lips
typically, since I've been very young. This is this question about who
am I. Rose: ... can we create it? Horvitz: Is it possible - - - is it possible
that parts turning upon parts could generate this?"
Schank, Roger C. 1995. The Structure of Episodes in Memory. In
Computation and Intelligence: Collected Readings, ed. Luger, George
F., 236-259. Menlo Park/Cambridge, MA/London: AAAI Press/The MIT
Press.
Stefik, Mark. 1995. Introduction to Knowledge Systems. San Francisco:
Morgan Kaufmann.
Stewart, Doug. Interview with Herbert Simon, June 1994. Omni
Magazine. [No longer available online.]