
P12A5AIR Unit-1

Artificial Intelligence
What is AI ?
• Textbooks often define artificial intelligence as “the study and design of
computing systems that perceive their environment and take actions like
human beings”.
• The term was introduced by John McCarthy in 1956 at the well-known
Dartmouth Conference.
• Artificial Intelligence (AI) is the field of computer science dedicated to
solving cognitive problems commonly associated with human intelligence,
such as learning, problem solving, and pattern recognition.
• AI (artificial intelligence) refers to systems or machines that mimic
human intelligence to perform tasks and can iteratively improve
themselves based on the information they collect.
Definitions of Artificial Intelligence
Agents in Artificial Intelligence
• An AI system can be studied in terms of a rational agent and its
environment.
• The agents sense the environment through sensors and act on their
environment through actuators.
• An AI agent can have mental properties such as knowledge, belief,
intention, etc.
Agents and environments
• An agent can be anything that
• perceives its environment through sensors and
• acts upon that environment through actuators.
• An agent can be:
• Human-Agent: A human agent has eyes, ears, and other organs, which
work as sensors, and hands, legs, and the vocal tract, which work as actuators.
• Robotic Agent: A robotic agent can have cameras and infrared range
finders as sensors and various motors as actuators.
• Software Agent: A software agent can receive keystrokes and file contents
as sensory input, act on those inputs, and display output on the screen.
Agents and environments
• Before moving forward, we should first know about percepts, sensors,
actuators, and effectors.
• Percept: The term percept refers to the agent’s perceptual inputs at any given
instant. An agent’s percept sequence is the complete history of everything the
agent has ever perceived.
• Sensor: A sensor is a device that detects changes in the environment and
sends this information to other electronic devices. An agent observes its
environment through sensors.
• Actuators: Actuators are the components of a machine that convert energy into
motion. The actuators are responsible for moving and controlling a system.
An actuator can be an electric motor, a gear, a rail, etc.
• Effectors: Effectors are the devices that affect the environment, such as
legs, wheels, arms, fingers, wings, fins, and a display screen.
Agents and environments
• Agents interact with environments through sensors and actuators.
• The agent’s behavior is described by the agent function, which maps any given
percept sequence to an action: f: P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
Agent functions and programs
• An agent is completely specified by the agent function mapping
percept sequences to actions
• One agent function (or a small equivalence class) is rational
• Aim: find a way to implement the rational agent function concisely
Agent program for a vacuum-cleaner agent
function Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
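The pseudocode above can be sketched directly in Python (a minimal sketch; the string names for percepts and actions are illustrative):

```python
def vacuum_agent(percept):
    """Simple reflex vacuum agent: percept is a (location, status) pair."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"
```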
Example: Vacuum-cleaner world
• Percepts: location (which square) and contents, e.g.,
[A, Dirty]
• Actions: Move Left, Move Right, Suck, do nothing
• Agent Function: maps percept sequence into actions
• Agent Program: function’s implementation
• How should the program act?
• This world is so simple that we can describe everything that happens; it’s
also a made-up world, so we can invent many variations.
• This particular world has just two locations: squares A and B. The vacuum
agent perceives which square it is in and whether there is dirt in the
square. It can choose to move left, move right, suck up the dirt, or do
nothing. One very simple agent function is the following: if the current
square is dirty, then suck; otherwise, move to the other square.
Partial tabulation of a simple agent function for the vacuum-cleaner world
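Such a tabulation can be sketched as a lookup table keyed by the percept sequence (a sketch; only a few entries of the unbounded table are shown, and the entry values follow the reflex rule above):

```python
# Partial lookup table: percept sequence (a tuple of percepts) -> action.
# Each percept is a (location, status) pair.
agent_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Clean")): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept_sequence):
    """Look up the action for the complete percept history."""
    return agent_table[tuple(percept_sequence)]
```

The table grows without bound as the percept sequence lengthens, which is why a concise agent program is preferred over explicit tabulation.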
Rationality
• Rationality is the state of being reasonable, sensible, and
having good judgment.
• Rationality is concerned with the expected actions and results, given
what the agent has perceived. Performing actions with the aim
of obtaining useful information is an important part of rationality.
• The rationality of an agent is measured by its performance measure.
Rationality can be judged on the basis of the following points:
• The performance measure, which defines the criterion of success.
• The agent’s prior knowledge of its environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
Rational Agent
• For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the
agent has.
• A rational agent is an agent that has clear preferences, models uncertainty, and acts
in a way that maximizes its performance measure over all possible actions.
• E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt
cleaned up, the time taken, the electricity consumed, the noise
generated, etc.
• A rational agent is said to do the right thing. AI is about creating rational
agents, drawing on game theory and decision theory for various real-world scenarios.
• For an AI agent, rational action is most important: in reinforcement
learning, the agent receives a positive reward for each good action and
a negative reward for each wrong action.
Specifying the task environment (PEAS)
• PEAS:
• Performance measure
• Environment
• Actuators
• Sensors
• In designing an agent, the first step must always be to specify the task
environment (PEAS) as fully as possible
Specifying the task environment (PEAS)
• PEAS: Performance measure, Environment, Actuators, Sensors
• P: a function the agent is maximizing (or minimizing)
• Assumed given
• In practice, needs to be computed somewhere
• E: a formal representation for world states
• For concreteness, a tuple (var1=val1, var2=val2, … ,varn=valn)
• A: actions that change the state according to a transition model
• Given a state and action, what is the successor state (or distribution over
successor states)?
• S: observations that allow the agent to infer the world state
• Often come in a very different form than the state itself
• E.g., in tracking, observations may be pixels and state variables 3D
coordinates
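The E/A/S formalization above can be made concrete with the vacuum world (a sketch; the tuple representation and function names are my own choices):

```python
# E: world state as a tuple (agent_location, status_A, status_B)
state = ("A", "Dirty", "Clean")

# A: transition model -- given a state and an action, return the successor state
def transition(state, action):
    loc, a, b = state
    if action == "Suck":
        return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
    if action == "Right":
        return ("B", a, b)
    if action == "Left":
        return ("A", a, b)
    return state  # NoOp

# S: observation -- the agent sees only its own square, not the whole state
def observe(state):
    loc, a, b = state
    return (loc, a if loc == "A" else b)
```

Note how the observation deliberately carries less information than the state, mirroring the tracking example where pixels stand in for 3D coordinates.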
PEAS Example: Autonomous taxi
• Performance measure
• Safe, fast, legal, comfortable trip, maximize profits
• Environment
• Roads, other traffic, pedestrians, customers
• Actuators
• Steering wheel, accelerator, brake, signal, horn
• Sensors
• Cameras, LIDAR, speedometer, GPS, odometer, engine sensors, keyboard
PEAS Example: Spam filter
• Performance measure
• Minimizing false positives, false negatives
• Environment
• A user’s email account, email server
• Actuators
• Mark as spam, delete, etc.
• Sensors
• Incoming messages, other information about user’s account
PEAS Example: Medical Diagnosis System
• Performance measure
• Healthy patient, minimize costs, lawsuits
• Environment
• Patient, hospital, staff
• Actuators
• Screen display (questions, tests, diagnoses, treatments, referrals)
• Sensors
• Keyboard (entry of symptoms, findings, patient's answers)
PEAS Example: Satellite Image Analysis System
• Performance measure
• correct image categorization
• Environment
• downlink from orbiting satellite
• Actuators
• display categorization of scene
• Sensors
• color pixel arrays
PEAS Example: Part-Picking Robot
• Performance measure
• Percentage of parts in correct bins
• Environment
• Conveyor belt with parts, bins
• Actuators
• Jointed arm and hand
• Sensors
• Camera, joint angle sensors
PEAS Example: Refinery Controller
• Performance measure
• maximize purity, yield, safety
• Environment
• refinery, operators
• Actuators
• valves, pumps, heaters, displays
• Sensors
• temperature, pressure, chemical sensors
PEAS Example: Interactive English Tutor
• Performance measure
• Maximize student's score on test
• Environment
• Set of students
• Actuators
• Screen display (exercises, suggestions, corrections)
• Sensors
• Keyboard
Properties of Environment
• Fully observable(vs. partially observable): An agent's sensors give it access to
the complete state of the environment at each point in time.
• Deterministic(vs. stochastic): The next state of the environment is completely
determined by the current state and the action executed by the agent. (If the
environment is deterministic except for the actions of other agents, then the
environment is strategic)
• Episodic (vs. sequential): The agent's experience is divided into atomic
"episodes"; each episode consists of the agent perceiving and then performing a
single action, and the choice of action in each episode depends only on the
episode itself. Subsequent episodes do not depend on the actions taken in
previous episodes. Episodic environments are much simpler because the agent
does not need to think ahead.
Properties of Environment
• Static (vs. dynamic): If the environment does not change while an
agent is acting, then it is static; otherwise it is dynamic.
• Discrete (vs. continuous): The environment has a limited number of distinct,
clearly defined percepts and actions (for example, chess); otherwise it is
continuous (for example, driving).
• Single agent(vs. multiagent): The environment may contain other
agents which may be of the same or different kind as that of the
agent.
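These dimensions can be recorded per task in a small table (a sketch; the classifications shown for the two tasks are the standard textbook ones):

```python
# Environment properties for two classic tasks.
environments = {
    "chess with a clock": {
        "observable": "fully", "deterministic": "strategic",
        "episodic": False, "static": "semi", "discrete": True,
        "single_agent": False,  # the opponent is another agent
    },
    "taxi driving": {
        "observable": "partially", "deterministic": "stochastic",
        "episodic": False, "static": False, "discrete": False,
        "single_agent": False,  # other traffic and pedestrians
    },
}
```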
What is Intelligent agent?
• In artificial intelligence, an intelligent agent is anything that perceives its
environment, takes actions autonomously in order to achieve goals, and
may improve its performance through learning or the use of knowledge.
• A thermostat is an example of an intelligent agent: a device that automatically
regulates temperature, or that activates another device when the temperature
reaches a certain point (used in room/water/oil heaters, geysers, electric
kettles, and refrigerators).
• Following are the four main rules for an AI agent:
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observation must be used to make decisions.
• Rule 3: The decision should result in an action.
• Rule 4: The action taken by the AI agent must be a rational action.
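The four rules amount to a perceive-decide-act loop. A minimal sketch, assuming a two-square vacuum environment (all class and function names here are illustrative):

```python
class VacuumEnv:
    """Tiny two-square environment used to drive the loop."""
    def __init__(self):
        self.loc, self.status = "A", {"A": "Dirty", "B": "Dirty"}
    def percept(self):
        return (self.loc, self.status[self.loc])
    def execute(self, action):
        if action == "Suck":
            self.status[self.loc] = "Clean"
        elif action in ("Left", "Right"):
            self.loc = "A" if action == "Left" else "B"

def run_agent(agent_fn, env, steps=4):
    for _ in range(steps):
        percept = env.percept()     # Rule 1: perceive the environment
        action = agent_fn(percept)  # Rules 2-3: observation -> decision -> action
        env.execute(action)         # Rule 4 (rationality) is a property of agent_fn

def reflex_vacuum(percept):
    loc, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if loc == "A" else "Left"
```

Running `run_agent(reflex_vacuum, VacuumEnv())` cleans both squares within four steps.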
Structure of an AI Agent
• The task of AI is to design an agent program which implements the agent
function. The structure of an intelligent agent is a combination of
architecture and agent program. It can be viewed as:
• Agent = Architecture + Agent program
• Following are the main three terms involved in the structure of an AI agent:
• Architecture: The architecture is the machinery that the AI agent executes on.
• Agent Function: The agent function maps a percept sequence to an action.
• f:P* → A
• Agent program: The agent program is an implementation of the agent function. An
agent program executes on the physical architecture to produce the function f.
Types of AI Agents
• Agents can be grouped into five classes based on their degree of
perceived intelligence and capability. All of these agents can improve
their performance and generate better actions over time. They are
given below:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Learning agents
1. Simple Reflex agent:
• Simple reflex agents are the simplest agents. These agents take decisions on
the basis of the current percept and ignore the rest of the percept history.
• These agents succeed only in fully observable environments.
• The simple reflex agent does not consider any part of the percept history during
its decision and action process.
• The simple reflex agent works on condition-action rules, which map the
current state directly to an action. For example, a room-cleaner agent works
only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
They have very limited intelligence.
They have no knowledge of non-perceptual parts of the current state.
The condition-action rules are usually too large to generate and store.
They are not adaptive to changes in the environment.
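Condition-action rules can be sketched as an ordered list of (condition, action) pairs scanned against the current percept only. A minimal sketch, using made-up thermostat rules (the thresholds and action names are illustrative):

```python
# Condition-action rules for a thermostat-style reflex agent.
# Each rule is (condition predicate over the current percept, action).
rules = [
    (lambda temp: temp < 18, "heat_on"),
    (lambda temp: temp > 22, "heat_off"),
]

def simple_reflex_agent(percept):
    """Acts on the current percept only; no percept history is kept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "no_op"
```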
1. Simple Reflex agent:
2. Goal-based agents
• Knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by adding
the "goal" information.
• They choose actions so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved. Such consideration of different
scenarios is called searching and planning, which makes the agent proactive.
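The searching a goal-based agent does can be sketched as a breadth-first search for an action sequence that reaches a goal state (a sketch; the two-square vacuum world is reused for concreteness, and the function names are my own):

```python
from collections import deque

def goal_based_plan(start, goal_test, actions, transition):
    """Breadth-first search for a sequence of actions reaching a goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action in actions:
            nxt = transition(state, action)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # no plan reaches the goal

# Vacuum world: state = (location, status_A, status_B)
def vac_transition(state, action):
    loc, a, b = state
    if action == "Suck":
        return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
    if action == "Right":
        return ("B", a, b)
    if action == "Left":
        return ("A", a, b)
    return state
```

Planning to clean both squares from ("A", "Dirty", "Dirty") yields the sequence Suck, Right, Suck.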
2. Goal-based agents
3. Utility-based agents
• These agents are similar to goal-based agents but add an extra component
of utility measurement, which makes them different by providing a measure of
success at a given state.
• A utility-based agent acts based not only on goals but also on the best way
to achieve them.
• The utility-based agent is useful when there are multiple possible alternatives
and the agent has to choose the best action.
• The utility function maps each state to a real number that describes how
efficiently each action achieves the goals.
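The utility function idea can be sketched as choosing the action whose successor state scores highest (a sketch; the vacuum-world utility here simply counts clean squares, a choice made for illustration):

```python
# state = (location, status_A, status_B); utility rewards clean squares
def utility(state):
    _, a, b = state
    return (a == "Clean") + (b == "Clean")

def vac_transition(state, action):
    loc, a, b = state
    if action == "Suck":
        return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
    if action == "Right":
        return ("B", a, b)
    if action == "Left":
        return ("A", a, b)
    return state

def utility_based_agent(state, actions=("Suck", "Right", "Left")):
    """Pick the action whose successor state has the highest utility."""
    return max(actions, key=lambda act: utility(vac_transition(state, act)))
```

Unlike a pure goal test, the utility function also ranks states that fall short of the goal, so the agent can trade off between imperfect alternatives.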
3. Utility-based agents