Introduction to Intelligent Agents
Jacques Robin
[Course topic map: Ontologies, Reasoning, Components, Agents, Simulations]
Outline
- What are intelligent agents?
- Characteristics of artificial intelligence
- Applications and sub-fields of artificial intelligence
- Characteristics of agents
- Characteristics of agents' environments
- Agent architectures
What are Intelligent Agents?
- Q: What are software agents?
- A: Software whose architecture is based on the following abstractions: immersion in a distributed environment, continuous thread, encapsulation, sensor, perception, actuator, action, own goals, autonomous decision making.
- Q: What is artificial intelligence?
- A: The field of study dedicated to:
  - Reducing the range of tasks that humans carry out better than current software or robots
  - Emulating humans' capability to solve approximately but efficiently most instances of problems proven (or suspected) to be hard to solve algorithmically in the worst case (NP-hard, undecidable, etc.), using innovative, often human-inspired, alternative computational metaphors and techniques
  - Emulating humans' capability to solve vaguely specified problems using partial, uncertain information
[Diagram: Agents at the intersection of Software Engineering, Artificial Intelligence, and Distributed Systems]
Artificial Intelligence: Characteristics
- Highly multidisciplinary, inside and outside computer science
- Runaway field: by definition at the forefront of computation, tackling ever more innovative, challenging problems as the ones it solved become mainstream computing
- Most research in any other field of computation also involves AI problems, techniques and metaphors
- Q: What conclusions can be derived from these characteristics?
- A: Hard to avoid; very, very hard to do well
  "Well" as in:
  - Well-founded (rigorously defined theoretical basis, explicit simplifying assumptions and limitations)
  - Easy to use (seamlessly integrated, easy to understand)
  - Easy to reuse (general, extendable techniques)
  - Scalable (at run time, at development time)
What is an Agent?
General Minimal Definition
- Any entity (human, animal, robot, software) that:
  - Is situated in an environment (physical, virtual, or simulated)
  - Perceives the environment through sensors (eyes, camera, socket)
  - Acts upon the environment through effectors (hands, wheels, socket)
  - Possesses its own goals, i.e., preferred states of the environment (explicit or implicit)
  - Autonomously chooses its actions to alter the environment towards its goals, based on its perceptions and prior encapsulated information about the environment
- Processing cycle:
  1. Use sensors to perceive P
  2. Interpret I = f(P)
  3. Choose the next action A = g(I,G) to perform to reach its goal G
  4. Use actuators to execute A
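A minimal sketch of this cycle in Python; the sensors/actuators objects and the interpret/choose_action function names are illustrative, not part of the slides:

# Minimal perceive-interpret-choose-execute loop (illustrative sketch).
def run_agent(sensors, actuators, goal, interpret, choose_action, steps=100):
    for _ in range(steps):
        p = sensors.perceive()        # 1. use sensors to perceive P
        i = interpret(p)              # 2. interpret I = f(P)
        a = choose_action(i, goal)    # 3. choose next action A = g(I, G)
        actuators.execute(a)          # 4. use actuators to execute A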
What is an Agent?
[Diagram: the agent receives percepts P from the environment through its sensors, interprets them as I = f(P), autonomously reasons to choose an action A = g(I,G) based on its goals, and acts on the environment through its effectors]
- Percepts can be:
  1. Environment percepts
  2. Self-percepts
  3. Communicative percepts
- Actions can be:
  1. Environment-altering actions
  2. Perceptive actions
  3. Communicative actions
Agent vs. Object
- Agent:
  - Intentionality: encapsulates its own goals (even if implicitly) in addition to data and behavior
  - Decision autonomy: pro-actively executes behaviors to satisfy its goals; can refuse a request from another agent to execute a behavior
  - More complex input/output: percepts and actions
  - Temporal continuity: encapsulates an endless thread that constantly monitors the environment
  - Coarser granularity: encapsulates code of a size comparable to a package or component; composed of various objects when implemented in an OO language
- Object:
  - No goals
  - No decision autonomy: executes behaviors only reactively, whenever invoked by other objects, and always executes the behavior invoked
  - Simpler input/output: mere method parameters and return values
  - Temporally discontinuous: active only during the execution of its methods
Intelligent Agent vs. Simple Software Agent
[Diagram: both interpret percepts (I = f(P)) from sensors and choose actions (A = g(I,G)) for effectors based on goals; in the intelligent agent these two steps use AI, in the simple software agent they use conventional processing]
Classical AI System vs. Situated Agent
[Diagram: the classical, disembodied AI system applies AI reasoning to input data and a goal to produce output data; the situated agent interprets percepts from sensors with AI, reasons over its goals, and chooses actions executed by effectors in the environment]
What is an Agent?
Other Optional Properties
- Reasoning autonomy:
  - Requires AI: inference engine and knowledge base
  - Key for: embedded expert systems, intelligent controllers, robots, games, internet agents, ...
- Adaptability:
  - Requires AI: machine learning
  - Key for: internet agents, intelligent interfaces, ...
- Sociability:
  - Requires AI + advanced distributed systems techniques:
    - Standard protocols for communication, cooperation, negotiation
    - Automated reasoning about other agents' beliefs, goals, plans and trustworthiness
    - Social interaction architectures
  - Key for: multi-agent simulations, e-commerce, ...
What is an Agent?
Other Optional Properties
- Personality:
  - Requires AI: attitude and emotional modeling
  - Key for: digital entertainment, virtual reality avatars, user-friendly interfaces, ...
- Temporal continuity and persistence:
  - Requires interfaces with the operating system, DBMS
  - Key for: information filtering, monitoring, intelligent control, ...
- Mobility:
  - Requires: network interface, secure protocols, mobile code support
  - Key for: information gathering agents, ...
  - Security concerns have prevented its adoption in practice
Welcome to the Wumpus World!
Agent-Oriented Formulation:
- Agents: gold digger
- Environment objects: caverns, walls, pits, wumpus, gold, bow, arrow
- Environment's initial state
- Agents' goals: be alive in cavern (1,1) with the gold
- Percepts:
  - Touch sensor: breeze, bump
  - Smell sensor: stench
  - Light sensor: glitter
  - Sound sensor: scream
- Actions:
  - Legs effector: forward, rotate 90º
  - Hands effector: shoot, climb out
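This formulation can be sketched as plain Python data; the identifiers below are illustrative, not from the slides:

# Illustrative encoding of the Wumpus World agent interface.
PERCEPTS = {"breeze", "bump", "stench", "glitter", "scream"}
ACTIONS = {"forward", "rotate_90", "shoot", "pick", "climb_out"}

# One time step of sensor readings: the subset of PERCEPTS currently sensed,
# e.g. what the agent senses in the gold cavern.
example_percept = {"stench", "breeze", "glitter"}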
Wumpus World: Abbreviations
[Diagram: example 4x4 grid; the agent starts in (1,1), pits are surrounded by breezy caverns, the wumpus by smelly caverns, and the gold cavern is marked S, B, G]
- A: Agent
- W: Wumpus
- P: Pit
- G: Gold
- X?: Possibly X
- X!: Confirmed X
- V: Visited cavern
- B: Breeze
- S: Stench
- G: Glitter
- OK: Safe cavern
Perceiving, Reasoning and Acting in the Wumpus World
- Percept sequence: nothing (t=0), breeze (t=2)
- Wumpus world model maintained by the agent:
[Diagram: at t=0 the agent is in (1,1), which is marked ok together with its neighbors; at t=2 the agent has visited (1,1), perceives a breeze (b) in its current cavern, and marks the two unexplored neighbors as possible pits (P?)]
Perceiving, Reasoning and Acting in the Wumpus World
- Percept sequence: stench (t=7), {stench, breeze, glitter} (t=11)
- Wumpus world model maintained by the agent:
[Diagram: left, the model at t=7 after perceiving the stench, with the wumpus location confirmed (W!) and a pit confirmed (P!); right, the model at t=11 with the agent in the cavern where it perceives stench, breeze and glitter (SBG) and a remaining possible pit (P?)]
- Action sequence:
  - t=7: go to (2,1), the sole safe unvisited cavern
  - t=11: go to (2,3) to find the gold
Classification Dimensions of Agent Environments
- Agent environments can be classified as points in a multi-dimensional space
- The dimensions are:
  - Observability
  - Determinism
  - Dynamicity
  - Mathematical domains of the variables
  - Episodic or not
  - Multi-agency
  - Size
  - Diversity
Observability
- Fully observable (or accessible): the agent's sensors perceive at each instant all the aspects of the environment relevant to choosing the best action towards its goal
- Partially observable (or inaccessible, or with hidden variables)
- Sources of partial observability:
  - Realm inaccessible to any available sensor
  - Limited sensor scope
  - Limited sensor sensitivity
  - Noisy sensors
Determinism
- Deterministic: every occurrence of executing a given action in a given situation always yields the same result
- Non-deterministic (or stochastic): action consequences are partially unpredictable
- Sources of non-determinism:
  - Inherent to the environment: quantum-level granularity, games involving randomness
  - Other agents with unknown or non-deterministic goals or action policies
  - Noisy effectors
  - Limited granularity of the effectors or of the representation used to choose the actions to execute
Dynamicity: Static and Sequential Environments
- Static: a single perception-reasoning-action cycle, during which the environment does not change
[Diagram: static environment; one percept-reasoning-action cycle takes the environment from State 1 to State 2]
- Sequential: a sequence of perception-reasoning-action cycles, during each of which the environment changes only as a result of the agent's actions
[Diagram: sequential environment; successive percept-reasoning-action cycles take the environment from State 1 through State 2, State 3, ..., State N]
Dynamicity: Concurrent Synchronous and Asynchronous
- Synchronous: the environment can change on its own between one action and the agent's next perception, but not during its reasoning
[Diagram: synchronous concurrent environment; states change between the agent's action and its next percept, but not while it reasons]
- Asynchronous: the environment can change on its own at any time, including during the agent's reasoning
[Diagram: asynchronous concurrent environment; states can change at any point in the perception-reasoning-action cycle]
Dynamicity: Stationary and Non-Stationary
- Stationary: the underlying laws or rules that govern state changes in the environment are fixed and immutable; they remain the same during the entire lifetime of the agent
  - e.g., a soccer game is asynchronous, yet stationary
- Non-stationary: the underlying laws or rules that govern state changes in the environment are themselves subject to dynamic changes (meta-level changes) during the lifetime of the agent
  - e.g., an accounting agent acts in a non-stationary environment, since tax laws are subject to change from one year to the next
Multi-Agency
- Sophistication of the agent society:
  - Number of agent roles and agent instances
  - Multiplicity and dynamicity of agent roles
  - Communication, cooperation and negotiation protocols
- Main classes:
  - Mono-agent
  - Multi-agent cooperative
  - Multi-agent competitive
  - Multi-agent cooperative and competitive
    - With static or dynamic coalitions
Mathematical Domain of Variables
- MAS variables:
  - Parameters of agent percepts, actions and goals
  - Attributes of environment objects
  - Arguments of environment relations, states, events and locations
[Taxonomy diagram: variable domains are either Discrete, split into Qualitative (Binary: Boolean or Dichotomic; Nominal; Ordinal) and Quantitative (Interval; Fractional), or Continuous (ℝ or [0,1])]
Mathematical Domain of Variables
- Binary:
  - Boolean, e.g., Male ∈ {True, False}
  - Dichotomic, e.g., Sex ∈ {Male, Female}
- Nominal (or categorical):
  - Finite partition of a set with neither order nor measure
  - Relations: only = or ≠
  - e.g., Brazilian, French, British
- Ordinal (or enumerated):
  - Finite partition of a (partially or totally) ordered set without measure
  - Relations: only =, ≠, <, >
  - e.g., poor, medium, good, excellent
- Interval:
  - Finite partition of an ordered set with a measure m defining a distance d: ∀X,Y, d(X,Y) = |m(X) − m(Y)|
  - No inherent zero
  - e.g., Celsius temperature
- Fractional (or proportional):
  - Partition with distance and an inherent zero
  - Relations: any
  - e.g., Kelvin temperature
- Continuous (or real):
  - Infinite set of values
Other Characteristics
- Episodic:
  - The agent's experience is divided into separate episodes
  - The results of actions in each episode are independent of previous episodes
  - e.g., an image classifier is episodic, a chess game is not; a soccer tournament is episodic, a soccer game is not
- Open environment:
  - Partially observable, non-deterministic, non-episodic, continuous variables, concurrent asynchronous, multi-agent
  - e.g., RoboCup, the Internet, the stock market
Size and Diversity
- Size, i.e., number of instances of:
  - Agent percepts, actions and goals
  - Environment agents, objects, relations, states, events and locations
  - Dramatically affects the scalability of agent reasoning execution
- Diversity, i.e., number of classes of:
  - Agent percepts, actions and goals
  - Environment agents, objects, relations, states, events and locations
  - Dramatically affects the scalability of the agent knowledge acquisition process
Agents' Internal Architectures
- Reflex agent (purely reactive)
- Automata agent (reactive with state)
- Goal-based agent
- Planning agent
- Hybrid reflex-planning agent
- Utility-based agent (decision-theoretic)
- Layered agent
- Adaptive agent (learning agent)
- Cognitive agent
- Deliberative agent
Reflex Agent
[Diagram: sensors feed percepts from the environment to condition-action rules (percepts → action), whose chosen action A(t) = h(P(t)) is executed by the effectors]
Remember ...
[Diagram: the generic agent interprets percepts I = f(P), reasons, and chooses actions A = g(I,G) based on its goals]
So?
[Diagram: in the reflex agent, percept interpretation, goals and action choice collapse into a single set of condition-action rules, A(t) = h(P(t))]
Reflex Agent
- Principle:
  - Use rules (or functions, procedures) that directly associate percepts to actions
    - e.g., IF speed > 60 THEN fine
    - e.g., IF the front car's stop light switches on THEN brake
  - Execute the first rule whose left-hand side matches the current percepts
- Wumpus World example:
  - IF visualPercept = glitter THEN action = pick
  - see(glitter) → do(pick) (logical representation)
- Pros:
  - Condition-action rules are a clear, modular, efficient representation
- Cons:
  - Lack of memory prevents use in partially observable, sequential, or non-episodic environments
  - e.g., in the Wumpus World a reflex agent cannot remember which path it has followed, when to climb out of the cavern, or where exactly the dangerous caverns are located
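A minimal reflex-agent sketch, assuming percepts arrive as a set of symbols; the rule set and encoding are illustrative:

# Reflex agent: the first rule whose condition matches the current percepts fires.
RULES = [
    (lambda p: "glitter" in p, "pick"),     # see(glitter) -> do(pick)
    (lambda p: "bump" in p, "rotate_90"),   # walked into a wall -> turn
    (lambda p: True, "forward"),            # default action
]

def reflex_agent(percepts):
    for condition, action in RULES:
        if condition(percepts):
            return action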
Automata Agent
[Diagram: sensors feed percept interpretation rules percepts(t) ∧ model(t) → model'(t); model update rules model(t−1) → model(t) and model'(t) → model''(t) maintain a (past and) current environment model; action choice rules model''(t) → action(t) and action(t) ∧ model''(t) → model(t+1) drive the effectors, guided by the goals]
Automata Agent
- Rules associate actions to percepts indirectly, through the incremental construction of an environment model (the internal state of the agent)
- Action choice based on: current percepts + previous percepts + previous actions + encapsulated knowledge of the initial environment state
- Overcomes the reflex agent's limitations in partially observable, sequential and non-episodic environments:
  - Can integrate past and present percepts to build a rich representation from partial observations
  - Can distinguish between distinct environment states that are indistinguishable from instantaneous sensor signals alone
- Limitations:
  - No explicit representation of the agent's preferred environment states
  - For agents that must change goals many times to perform well, the automata architecture does not scale (combinatorial explosion of rules)
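A sketch of the automata architecture, assuming a dictionary as the internal environment model; the rule content mirrors the Wumpus examples on the next slides, but the encoding is illustrative:

# Automata agent: percepts update an internal model, and the action is chosen
# from the model rather than from the raw percepts.
def automata_agent(percepts, model):
    # percepts(t) /\ model(t) -> model'(t): record what the sensors reveal
    if "glitter" in percepts:
        model["gold_at"] = model["agent_at"]
    # model'(t) -> model''(t): internal inference (held gold moves with the agent)
    if model.get("with_gold"):
        model["gold_at"] = model["agent_at"]
    # model''(t) -> action(t)
    if model.get("gold_at") == model["agent_at"] and not model.get("with_gold"):
        action = "pick"
    else:
        action = "forward"
    # action(t) /\ model''(t) -> model(t+1)
    if action == "pick":
        model["with_gold"] = True
    return action, model

# e.g. automata_agent({"glitter"}, {"agent_at": (2, 3)}) returns ("pick", ...)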
Automata Agent Rule Examples
- Rules percept(t) ∧ model(t) → model'(t)
  - IF the visual percept at time T is glitter AND the location of the agent at time T is (X,Y) THEN the location of the gold at time T is (X,Y)
  - ∀X,Y,T see(glitter,T) ∧ loc(agent,X,Y,T) → loc(gold,X,Y,T).
- Rules model'(t) → model''(t)
  - IF the agent is with the gold at time T AND the location of the agent at time T is (X,Y) THEN the location of the gold at time T is (X,Y)
  - ∀X,Y,T withGold(T) ∧ loc(agent,X,Y,T) → loc(gold,X,Y,T).
Automata Agent Rule Examples
- Rules model(t) → action(t)
  - IF the location of the agent at time T is (X,Y) AND the location of the gold at time T is (X,Y) THEN choose action pick at time T
  - ∀X,Y,T loc(agent,X,Y,T) ∧ loc(gold,X,Y,T) → do(pick,T)
- Rules action(t) ∧ model(t) → model(t+1)
  - IF the action chosen at time T was pick THEN the agent is with the gold at time T+1
  - ∀T done(pick,T) → withGold(T+1).
(Explicit) Goal-Based Agent
[Diagram: percept interpretation rules percept(t) ∧ model(t) → model'(t) and model update rules model(t−1) → model(t), model'(t) → model''(t) maintain the (past and) current environment model; goal update rules model''(t) ∧ goals(t−1) → goals'(t) maintain explicit goals; action choice rules model''(t) ∧ goals'(t) → action(t) and action(t) ∧ model''(t) → model(t+1) drive the effectors]
(Explicit) Goal-Based Agent
- Principle: explicit and dynamically alterable goals
- Pros:
  - More flexible and autonomous than the automata agent
  - Adapts its strategy to situation patterns summarized in its goals
- Limitations:
  - When the current goal is unreachable through a single action, it is unable to plan a sequence of actions
  - Does not make long-term plans
  - Does not handle multiple, potentially conflicting active goals
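A sketch with an explicit, mutable goal, reusing the same illustrative model dictionary; the goal names are hypothetical stand-ins for the rule examples on the following slides:

# Goal-based agent: the active goal is explicit and is updated by the agent itself.
def goal_based_agent(model, goal):
    # model(t) /\ goal(t) -> goal'(t): once the gold is held, head back to (1,1)
    if goal == "find_gold" and model.get("with_gold"):
        goal = "return_to_1_1"
    # model(t) /\ goal(t) -> action(t)
    if goal == "find_gold":
        action = "pick" if model.get("gold_at") == model["agent_at"] else "forward"
    else:  # return_to_1_1
        action = "climb_out" if model["agent_at"] == (1, 1) else "forward"
    return action, goal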
Goal-Based Agent Rule Examples
- Rule model(t) ∧ goal(t) → action(t)
  - IF the goal of the agent at time T is to return to (1,1)
    AND the agent is in (X,Y) at time T
    AND the orientation of the agent is 90º at time T
    AND (X,Y+1) is safe at time T
    AND (X,Y+1) has not been visited by time T
    AND (X−1,Y) is safe at time T
    AND (X−1,Y) was visited before time T
    THEN choose action turn left at time T
  - ∀X,Y,T, (∃N,M,K goal(T,loc(agent,1,1,T+N)) ∧ loc(agent,X,Y,T) ∧ orientation(agent,90,T) ∧ safe(loc(X,Y+1),T) ∧ ¬loc(agent,X,Y+1,T−M) ∧ safe(loc(X−1,Y),T) ∧ loc(agent,X−1,Y,T−K)) → do(turn(left),T)
[Diagram: the agent at (X,Y) facing up, with the safe visited cavern (X−1,Y) to its left and the safe unvisited cavern (X,Y+1) ahead]
Goal-Based Agent Rule Examples
- Rule model(t) ∧ goal(t) → action(t)
  - IF the goal of the agent at time T is to find the gold
    AND the agent is in (X,Y) at time T
    AND the orientation of the agent is 90º at time T
    AND (X,Y+1) is safe at time T
    AND (X,Y+1) has not been visited by time T
    AND (X−1,Y) is safe at time T
    AND (X−1,Y) was visited before time T
    THEN choose action forward at time T
  - ∀X,Y,T, (∃N,M,K goal(T,withGold(T+N)) ∧ loc(agent,X,Y,T) ∧ orientation(agent,90,T) ∧ safe(loc(X,Y+1),T) ∧ ¬loc(agent,X,Y+1,T−M) ∧ safe(loc(X−1,Y),T) ∧ loc(agent,X−1,Y,T−K)) → do(forward,T)
[Diagram: the agent at (X,Y) facing up, with the safe visited cavern (X−1,Y) to its left and the safe unvisited cavern (X,Y+1) ahead]
Goal-Based Agent Rule Examples
- Rule model(t) ∧ goal(t) → goal'(t)
  // If the agent reached its goal to hold the gold,
  // then its new goal shall be to go back to (1,1)
  - IF the goal of the agent at time T−1 was to find the gold
    AND the agent is with the gold at time T
    THEN the goal of the agent at time T+1 is to be in location (1,1)
  - ∀T, (∃N goal(agent,T−1,withGold(T+N)) ∧ withGold(T) → ∃M goal(agent,T,loc(agent,1,1,T+M))).
Planning Agent
[Diagram: percept interpretation rules percept(t) ∧ model(t) → model'(t) and model update rules model(t−1) → model(t), model'(t) → model''(t) maintain the (past and) current environment model; goal update rules model''(t) ∧ goals(t−1) → goals'(t) maintain the goals; prediction rules model''(t) → model(t+n) and model''(t) ∧ action(t) → model(t+1) build hypothetical future environment models; action choice rules model(t+n) = result([action1(t),...,actionN(t+n)]) ∧ model(t+n) ∧ goal(t) → do(action1(t)) drive the effectors]
Planning Agent
- Percepts and actions are associated very indirectly, through:
  - The past and current environment model
  - Past and current explicit goals
  - Prediction of the future environments resulting from the different possible action sequences to execute
  - Rule chaining, needed to build an action sequence from rules that capture the immediate consequences of a single action
- Pros:
  - Foresight allows choosing more relevant and safer actions in sequential environments
- Cons: there is little point in building elaborate long-term plans in:
  - Highly non-deterministic environments (too many possibilities to consider)
  - Largely non-observable environments (not enough knowledge available before acting)
  - Asynchronous concurrent environments (only cheap reasoning can reach a conclusion under time pressure)
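A sketch of the prediction-and-choice step as a bounded forward search, assuming a transition function result(model, action) and a goal test satisfies(model, goal) are supplied; both names are hypothetical:

from itertools import product

# Planning sketch: search over action sequences up to a horizon and return the
# first action of a sequence predicted to reach the goal.
def plan_first_action(model, goal, actions, result, satisfies, max_depth=4):
    for depth in range(1, max_depth + 1):
        for seq in product(actions, repeat=depth):
            state = model
            for a in seq:                 # model(t+n) = result([action1, ..., actionN])
                state = result(state, a)
            if satisfies(state, goal):    # predicted future model reaches the goal
                return seq[0]             # do(action1(t))
    return None                           # no plan found within the horizon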
Hybrid Reflex-Planning Agent
[Diagram: a reflex thread (reflex rules percepts → actions) and a planning thread (percept interpretation, current model update, goal update, future environments prediction, and action choice over the current, past and future environment models and goals) run concurrently over the same sensors and effectors, coordinated by a synchronization component]
Hybrid Reflex-Planning Agent
- Pros:
  - Takes advantage of all the time and knowledge available to choose the best possible action (within the limits of its prior knowledge and percepts)
  - Sophisticated yet robust
- Cons:
  - Costly to develop
  - The same knowledge is encoded in different forms in each component
  - Global behavior coherence is harder to guarantee
  - Analysis and debugging are hard due to synchronization issues
  - Not that many environments feature large variations in available reasoning time across perception-reasoning-action cycles
Layered Agents
- Many sensors/effectors are too fine-grained to reason about goals directly over the data/commands they provide
- Such cases require a layered agent that decomposes its reasoning into multiple abstraction layers
- Each layer represents the percepts, environment model, goals, and actions at a different level of detail
- Abstraction can consist in discretizing, approximating, clustering, or classifying data from prior layers along temporal, spatial, functional, or social dimensions
- Detailing can consist in decomposing higher-level actions into lower-level ones along temporal, spatial, functional, or social dimensions
[Diagram: decide abstractly at the top layer; abstract upward from detailed percepts and detail downward to concrete actions; perceive and act in detail at the bottom layer]
Layered Automata Agent
[Diagram: percept interpretation, environment model update, and action choice and execution control are each stratified into layers, e.g., Layer 2 reasoning with logical rules such as s(A,B) ← r(B) ∧ q(A), Layer 1 with probabilistic estimates such as P(s) = Σ P(z|y)·P(y), and Layer 0 with continuous signal processing such as y = ∫ f(x)·dx]
Abstraction Layer Examples
[Diagram: example of abstraction layers over an X-Y space]
Utility-Based Agent
- Principle:
  - Goals only express boolean agent preferences among environment states
  - A utility function u allows expressing finer-grained agent preferences
  - u can be defined on a variety of domains and ranges:
    - actions, i.e., u: action → ℝ (or [0,1])
    - action sequences, i.e., u: [action1, ..., actionN] → ℝ (or [0,1])
    - environment states, i.e., u: environmentStateModel → ℝ (or [0,1])
    - environment state sequences, i.e., u: [state1, ..., stateN] → ℝ (or [0,1])
    - environment state-action pairs, i.e., u: environmentStateModel × action → ℝ (or [0,1])
    - environment state-action pair sequences, i.e., u: [(action1,state1), ..., (actionN,stateN)] → ℝ (or [0,1])
- Pros:
  - Allows solving optimization problems, aiming to find the best solution
  - Allows trading off among multiple conflicting goals with distinct probabilities of being reached
- Cons:
  - Currently available methods to compute (even approximately) argmax(u) do not scale up to large or diverse environments
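A sketch of action choice with a utility function over actions, u: action → ℝ; the candidate actions and utility values are illustrative:

# Utility-based choice: pick the action maximizing u instead of testing a boolean goal.
def choose_action(actions, u):
    return max(actions, key=u)            # do(argmax over a in actions of u(a))

u_table = {"forward": 0.2, "rotate_90": 0.1, "pick": 1.0}   # illustrative utilities
best = choose_action(u_table, lambda a: u_table[a])         # -> "pick"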
Utility-Based Reflex Agent
[Diagram: percept interpretation rules percept → actions propose candidate actions; the action choice executes do(argmax over a ∈ actions of u(a)), using a utility function u: actions → ℝ]
Utility-Based Planning Agent
[Diagram: percept interpretation rules percept(t) ∧ model(t) → model'(t) and model update rules model'(t) → model''(t) maintain the past and current environment model; future environment prediction rules model''(t) ∧ action(t) → model(t+1) and model''(t) → model(t+1) build hypothesized future environment models; the action choice executes do(argmax over [action_i1, ..., action_in] of u(result([action_i1, ..., action_in]))), using a utility function u: model(t+n) → ℝ]
Adaptive Agent
[Diagram: a performance analysis component monitors the acting component's interaction with the environment; a learning component updates the acting component's rules or functions; a new problem generation component feeds novel problems back into the cycle]
- Acting component: any of the previous architectures (reflex, automata, goal-based, planning, utility-based, hybrid)
- Learning component: learns rules or functions such as:
  - percept(t) → action(t)
  - percept(t) ∧ model(t) → model'(t)
  - model(t) → model'(t)
  - model(t−1) → model(t)
  - model(t) → action(t)
  - action(t) → model(t+1)
  - model(t) ∧ goal(t) → action(t)
  - goal(t) ∧ model(t) → goal'(t)
  - utility(action) = value
  - utility(model) = value
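A sketch of one thing such a learning component could do: estimate utility(action) = value as a running average of observed rewards. The update rule is an assumption chosen for illustration, not prescribed by the slides:

# Learning-component sketch: incremental mean of rewards observed for each action.
def update_utility(u_table, counts, action, reward):
    counts[action] = counts.get(action, 0) + 1
    u = u_table.get(action, 0.0)
    u_table[action] = u + (reward - u) / counts[action]   # running average update
    return u_table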
Simulated Environments
- Environment simulators:
  - Often themselves internally follow an agent architecture
  - Should be able to simulate a large class of environments that can be specialized by setting many configurable parameters, either manually or randomly within a manually selected range
  - e.g., configure a generic Wumpus World simulator to generate world instances with a square-shaped cavern, a static wumpus and a single gold nugget, where the cavern size, pit numbers and locations, and wumpus and gold locations are randomly picked
- Environment simulator processing cycle:
  1. Compute the percept of each agent in the current environment
  2. Send these percepts to the corresponding agents
  3. Receive the action chosen by each agent
  4. Update the environment to reflect the cumulative consequences of all these actions
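A sketch of one such cycle, assuming each agent client exposes act(percept) and the environment exposes percept_for(agent) and apply(actions); all names are hypothetical:

# One simulator cycle over all connected agents.
def simulation_step(environment, agents):
    percepts = {a: environment.percept_for(a) for a in agents}   # 1. compute percepts
    actions = {a: a.act(percepts[a]) for a in agents}            # 2-3. send percepts, receive actions
    environment.apply(actions)                                   # 4. apply cumulative consequences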
Environment Simulator Architecture
[Diagram: an environment simulation server maintains the simulated environment model, generates percepts with rules model(t) → percept(t), updates the environment with rules model(t−1) → model(t) and action(t) ∧ model(t−1) → model(t), and drives a simulation visualization GUI; agent clients 1..N exchange percepts and actions with the server over the network]
AI's Pluridisciplinarity
[Diagram: disciplines surrounding Artificial Intelligence: Economics, Decision Theory, Game Theory, Operations Research, Information Theory, Sociology, Zoology, Paleontology, Neurology, Psychology, (Cognitive) Linguistics, Philosophy, Mathematics (Logic, Probabilities & Statistics, Calculus, Algebra), and Computer Science (Theory, Distributed Systems, Software Engineering, Databases)]
AI Roadmap
Generic Sub-Fields:
• Heuristic Search
• Automated Reasoning & Knowledge Representation
• Machine Learning & Knowledge Acquisition
• Pattern Recognition
Specific Sub-Fields:
• Multi-Agent Communication, Cooperation & Negotiation
• Speech & Natural Language Processing
• Computer Perception & Vision
• Robotic Navigation & Manipulation
• Games
• Intelligent Tutoring Systems
Computational Metaphors:
• Algorithmic Exploration
• Logical Derivation
• Probability Estimation
• Connectionist Activation
• Evolutionary Selection
Generic Tasks:
• Clustering
• Classification
• Temporal Projection
• Diagnosis
• Monitoring
• Repair
• Control
• Recommendation
• Configuration
• Discovery
• Design
• Allocation
• Timetabling
• Planning
• Simulation
AI Metaphors, Abstractions
[Diagram: mapping from problem to algorithm through abstractions such as logical rules and probabilities P(A|B)]
Today's Diversity of AI Applications
- Agriculture, Natural Resource Management, and the Environment
- Architecture & Design
- Art
- Artificial Noses
- Astronomy & Space Exploration
- Assistive Technologies
- Banking, Finance & Investing
- Bioinformatics
- Business & Manufacturing
- Drama, Fiction, Poetry, Storytelling & Machine Writing
- Earth & Atmospheric Sciences
- Engineering
- Filtering
- Fraud Detection & Prevention
- Hazards & Disasters
- Information Retrieval & Extraction
- Knowledge Management
- Law
- Law Enforcement & Public Safety
- Libraries
- Marketing, Customer Relations & E-Commerce
- Medicine
- Military
- Music
- Networks - including Maintenance, Security & Intrusion Detection
- Politics & Foreign Relations
- Public Health & Welfare
- Scientific Discovery
- Social Science
- Sports
- Telecommunications
- Transportation & Shipping
- Video Games, Toys, Robotic Pets & Entertainment
AI Pays!
- AI industry gross revenue:
  - 2002: US $11.9 billion
  - Annual growth rate: 12.2%
  - Projection for 2007: $21.2 billion
  - www.aaai.org/AITopics/html/stats.html
- Companies specialized in AI:
  - http://dmoz.org/Computers/Artificial_Intelligence/Companies/
- Corporations developing and using AI:
  - Google, Amazon, IBM, Microsoft, Yahoo, ...
- Corporations using AI:
  - www.businessweek.com/bw50/content/mar2003/a3826072.htm
  - Wal-Mart, Abbot Labs, US Bancorp, LucasArts, Petrobrás, ...
- Government agencies using AI:
  - US National Security Agency
When is a Machine Intelligent?
- What is intelligence? The Turing Test
- Who's smarter?
  - Your medical doctor or your cleaning lady?
  - Your lawyer or your two-year-old daughter?
  - Kasparov or Ronaldinho?
- What did 40 years of AI research discover?
  - Common-sense intelligence is harder than expert intelligence
  - Embodied intelligence is harder than purely intellectual, abstract intelligence
  - Kid intelligence is harder than adult intelligence
  - Animal intelligence is harder than specifically human intelligence (after all, we share 99% of our genes with chimpanzees!)
[Images: chess, 1997: 2 x 1; robot soccer, 2050?: 2 x 1]
www.robocup.org
- New benchmark task for AI
- Annual competition associated with conferences on AI, Robotics or Multi-Agent Systems
Tomorrow's AI Applications
[Images: science-fiction visions of AI, e.g., Blade Runner, The Matrix, A.I.]