CS 4100 Artificial Intelligence
Prof. C. Hafner
Class Notes Jan 10, 2012
Goals of artificial intelligence (AI) field
1. Artificial systems with humanlike ability to understand and reason (cf. cognitive science)
2. Solve problems that are too large to find the best answer algorithmically, using heuristic (incomplete) methods
3. Solve problems that are not well-understood
Artificial systems with humanlike ability to understand and reason
• Main techniques: ontology, automated reasoning, formal logic, state-space search
• Uses: problem solving/planning, natural language processing, intelligent HCI
• Applications: game characters, search engines, recommender systems
Artificial systems with humanlike ability to understand and reason (cont.)
• Main techniques: evidential logics (probability, fuzzy logic, ...), Bayesian inference nets, Markov models
• Uses: problem solving under uncertainty, decision support systems ("expert systems")
• Application: medical diagnosis and advice
Solve problems that are too large to find the best answer algorithmically, so require incomplete methods
• Main techniques: heuristic search; dependency-directed backtracking
• Uses: scheduling, resource allocation
• Application: factory production
Solve problems that are not well-understood
• Main techniques: weighted linear models; Bayesian inference nets; statistical induction and machine learning in general
• Uses: finance, computational science (discovery), data mining
• Application: oil exploration
Go over syllabus
History of AI
• Initial optimism – 1960s
  • Samuel's checkers player – early ML
  • General Problem Solver (Simon & Newell)
    Employed means-ends analysis (a precursor of the backward chaining now used in many systems)
Goal: Transform situation A into situation B
  Match A to B to find difference D
    none → Success
    D found → Subgoal: Reduce D
      fail → Fail
      success (yields A') → Subgoal: Transform A' into B
        fail → Fail
        succeed → Success
Means-ends analysis (cont.)
Goal: Reduce difference D between situations A and B
  Search for an operator Q relevant to reducing D
    none → Fail
    Q found → Subgoal: Apply Q to A, producing A'
      fail → Fail
      success → Success (yields A')
Goal: Apply operator Q to A
  Match A to the conditions of Q, finding difference F
    none → Apply Q to A → Success (yields A')
    F found → Subgoal: Reduce F
      fail → Fail
      success (yields A'') → Apply Q to A''
        fail → Fail
        success → Success (yields A')
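The flow above translates naturally into code. Below is a minimal Python sketch of means-ends analysis (not GPS itself): it assumes STRIPS-style operators (preconditions, add list, delete list) and states as sets of facts; all names and the example operators are illustrative, not from the lecture.

def transform(state, goal, operators, depth=6):
    """Transform state into one satisfying goal (Goal 1 above)."""
    if depth <= 0:
        return None
    diff = set(goal) - set(state)      # match A to B, find difference D
    if not diff:
        return state                   # no difference: success
    for d in diff:                     # reduce D (Goal 2): find relevant Q
        for op in operators:
            if d in op['adds']:        # Q is relevant if it adds the missing fact
                new_state = apply_op(state, op, operators, depth)
                if new_state is not None:
                    result = transform(new_state, goal, operators, depth - 1)
                    if result is not None:
                        return result
    return None

def apply_op(state, op, operators, depth):
    """Apply Q to A (Goal 3): first reduce difference F with Q's preconditions."""
    ready = transform(state, op['pre'], operators, depth - 1)
    if ready is None:
        return None
    return (ready - op['dels']) | op['adds']

# Tiny illustration: to open the door, first get to it.
operators = [
    {'name': 'walk', 'pre': set(), 'adds': {'at-door'}, 'dels': set()},
    {'name': 'open', 'pre': {'at-door'}, 'adds': {'door-open'}, 'dels': set()},
]
print(sorted(transform(set(), {'door-open'}, operators)))  # ['at-door', 'door-open']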
History of AI (cont.)
• Knowledge-based systems – 1970s to mid-80s
  • "Micro-world" experiments
    • SHRDLU (Terry Winograd)
  • Rule-based "expert" systems
    • Mycin, Dendral (Ed Feigenbaum)
  • Acceptance by industry – huge oversell
    • The knowledge acquisition bottleneck
History of AI (cont.)
• Late 80s to mid-90s – AI winter
  • Hopes pinned on neural nets/ML to overcome the KA bottleneck
• Late 90s to present – more computing power
  • Rise of probabilistic approaches
  • Lexical tagging breakthrough in NLP
  • More rigorous experiments/evaluation methods
• 2000s – influence of the Web revives AI
  • Massive text corpora and the need for better web browsers inspire NLP
  • Hardware advances inspire robotics
  • Intelligent agents/Web bots – applications to e-commerce
Agents and environments
Black box vs. glass box approach
Black box: Input (stimulus_i) → f → Output (response_j)
Models used for f: weighted linear functions, Bayesian models, neural nets, support vector machines
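As a concrete illustration of the black-box view, here is a minimal Python sketch of an agent whose entire behavior is a weighted linear function of numeric percept features; the weights, features, and action names are assumptions for illustration.

def linear_agent(weights, bias, percept):
    """Black-box agent: act when the weighted sum of percept features is positive."""
    score = sum(w * x for w, x in zip(weights, percept)) + bias
    return 'act' if score > 0 else 'wait'

print(linear_agent([0.8, -0.5], 0.1, [1.0, 0.3]))  # 'act' (score = 0.75)

Nothing inside f is inspectable knowledge; only the input-output mapping matters, which is what distinguishes this from the glass-box agents below.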
Black box vs. glass box approach (cont.)
Glass box: the agent's knowledge is represented explicitly:
• General knowledge: concepts/categories, relationships, schemas/scripts, rules of behavior
• The current environment (situation)
• Models of other agents
• Goals
• Strategies, plans
Example goal: tell Sam the details of a social event
Designing an Intelligent Agent
• What can the agent do? (range of possible actions)
• What about the environment? Inputs to the program are called percepts:
  • Symbolic input from keyboard, files, networks
  • Sensor data from the physical world
  • Often must be interpreted into meaningful concepts
• What can the agent know? (see the sketch below)
  • History of its own previous inputs and actions
  • Properties of the environment + world knowledge
  • Knowledge of its own goals, preferences, etc.
  • Strategies for its behavior
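A minimal Python skeleton reflecting these design questions; the class and method names are hypothetical, chosen only to show where the pieces fit.

class Agent:
    """An agent = a decision program plus a memory of its own past."""
    def __init__(self, program):
        self.program = program      # maps (percept, history) -> action
        self.history = []           # the agent's previous inputs and actions

    def step(self, percept):
        action = self.program(percept, self.history)
        self.history.append((percept, action))
        return action

# Usage: a trivial program that just reports what it saw.
echo_agent = Agent(lambda percept, history: 'saw ' + str(percept))
print(echo_agent.step('light on'))  # saw light on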
Vacuum-cleaner world
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Suck, NoOp
Types of Agents
• Reflex agent:
  • No "state" or memory
  • Reacts to current input according to its program (condition → action rules)
• Knowledge-based agent:
  • Uses an explicit knowledge base
  • Exhibits "understanding" of its input by relating it to prior knowledge
  • Reacts according to rules, but the conditions may be complex and require inference to evaluate
Types of Agents (cont.)
• Planning agents
  • Explicitly represent their own goals and/or preferences ("utilities") and can reason about them (i.e., planning)
  • Exhibit a kind of autonomy: actions do not follow directly from a table lookup
• Learning agents
  • Learn from positive and negative examples: "supervised learning"
  • Learn from experience to improve their outcomes: "reinforcement learning"
How do we judge whether we have succeeded?
• Getting the "right answer"?
• Having a good outcome (using some "utility" function)?
• The Turing Test (and modified versions)?
• We know it when we see it?
Types of Agents (cont.)
Reflex agent
• Behavior depends only on current input (no history or model of the overall environment)
• Ex: vacuum agent
  – Percepts: 2-tuple: A or B, Clean or Dirty
  – Actions: Left, Right, Suck, NoOp
Vacuum agent’s behavior
• The vacuum agent might follow this simple strategy:
  – If the current location is dirty, clean it (Suck)
  – If the current location is clean, move to the other location
Knowledge Representation: Two Approaches
• Procedural representation
– Program’s statements directly encode the
knowledge (for example, of the strategy to follow)
• Declarative representation
– A data structure encodes the knowledge and the
program’s statements act like an interpreter
Implementing the Vacuum Agent
Approach 1 (procedural)
def reflex_vacuum_agent(location, status):
    if status == 'Dirty': return 'Suck'
    elif location == 'A': return 'Right'
    else: return 'Left'
Implementing the Vacuum agent
Approach 2 (declarative):
state ← perceive()
rule ← rule-match(state, rule-base)
action ← RHS(rule)
return action
Rule Base:
Percept       Action
[A, Clean]    Right
[A, Dirty]    Suck
…
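A minimal runnable sketch of the declarative approach in Python, assuming percepts arrive as (location, status) tuples; the dictionary representation and names are illustrative.

# The knowledge is a data structure; the program is a generic interpreter.
RULE_BASE = {
    ('A', 'Clean'): 'Right',
    ('A', 'Dirty'): 'Suck',
    ('B', 'Clean'): 'Left',
    ('B', 'Dirty'): 'Suck',
}

def table_driven_agent(percept, rules=RULE_BASE):
    # percept is a (location, status) tuple matching the rule-base keys
    return rules.get(percept, 'NoOp')   # NoOp if no rule matches

print(table_driven_agent(('A', 'Dirty')))  # Suck

Note that changing the agent's behavior now means editing data, not code; the interpreter never changes.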
Simple reflex agents
Production Rule Systems
• Behavior is expressed as a set of production rules (called "table-driven" by R&N)
  1. Condition → Action
  2. Condition → Action
  ...
• The condition is called the left-hand side (LHS); the action is called the right-hand side (RHS)
In what sense is an agent that uses declarative knowledge more intelligent?
Production rule systems
• Drawbacks:
  – Huge rule base (time-consuming to build by hand)
  – What if more than one condition is satisfied?
  – Inflexible (no adaptation or learning)
Analyzing Agent Performance
• Discussion of “rationality”
– Must define a performance measure
• How clean (but penalty for extra work?)
– Rationality maximizes expected value of the measure
– Depends on knowledge of the environment
• Can a clean square get dirty again? At what rate?
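To make "performance measure" concrete, here is a small Python sketch; the scoring scheme (one point per clean square per time step, minus a penalty per move) is an assumption for illustration, not the lecture's definition.

def performance(history, move_penalty=0.5):
    """history: list of (world, action) pairs, where world maps
    location -> 'Clean' or 'Dirty'. Rewards cleanliness over time
    and charges for the extra work of moving."""
    score = 0.0
    for world, action in history:
        score += sum(1 for s in world.values() if s == 'Clean')
        if action in ('Left', 'Right'):
            score -= move_penalty
    return score

history = [({'A': 'Dirty', 'B': 'Clean'}, 'Suck'),
           ({'A': 'Clean', 'B': 'Clean'}, 'Right')]
print(performance(history))  # 1 + 2 - 0.5 = 2.5

A rational agent maximizes the expected value of such a measure, which depends on the environment (e.g., how often clean squares get dirty again).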
Knowledgeable agents
Q: What formal language(s) can we use to represent:
• Current facts about the state of the world
• General facts about how the world behaves
• General facts about the effects of actions that the agent can perform
• Condition → action rules that specify how the agent is to behave
A: Formal logic
• Syntax and semantics well understood
• Computational tractability known for important subsets (Horn clause logic)
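For a taste of why Horn clauses are tractable, here is a minimal forward-chaining sketch in Python: each rule has a conjunction of conditions and a single conclusion, and inference is a simple fixed-point loop. The example rules are hypothetical.

# Forward chaining over propositional Horn clauses.
# Each rule (body, head) means: body1 AND body2 AND ... -> head.
RULES = [({'dirty', 'at_location'}, 'should_suck'),
         ({'should_suck'}, 'motor_on')]

def forward_chain(facts, rules=RULES):
    facts = set(facts)
    changed = True
    while changed:                        # repeat until no rule adds a new fact
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(sorted(forward_chain({'dirty', 'at_location'})))
# ['at_location', 'dirty', 'motor_on', 'should_suck']

Each pass either adds a fact or stops, so the loop runs in time polynomial in the number of rules and facts, which is the tractability the slide refers to.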
Introduce Python
Assignment 1: http://relationalagents.com/demos/index.html
Relational Agent Systems
(demo screenshot: "How are you feeling today?")