An Assignment on
Artificial Intelligence
Course No: CSE-3201
Course Title: Artificial Intelligence
Submitted To:
Jabed Al Faysal
Lecturer
Computer Science & Engineering
Discipline
Khulna University, Khulna
Submitted By:
Md. Nazmul Abdal
Student ID: 180225
Computer Science & Engineering
Discipline
Khulna University, Khulna
Date of Submission: 10 January, 2021
CSE-3201 (AI)
Assignment 1
1. Develop a PEAS description of the task environment:
(i) Robot playing soccer.
(ii) Robot dust collector from classroom.
(iii) Autonomous Mars rover.
(iv) Part-picking robot.
Solution:
PEAS stands for Performance, Environment, Actuators, and Sensors. Based on
these four properties, agents can be grouped together or differentiated from
one another. Every agent has the following properties:
Performance Measure: The criteria by which we judge the output of the agent. All
the results that an agent produces after processing fall under its performance
measure.
Environment: All the surrounding things and conditions of the agent. It consists
of everything under which the agent works.
Actuators: The devices, hardware or software, through which the agent performs
actions or processes information to produce a result.
Sensors: The devices through which the agent observes and perceives its
environment.
Now we develop a PEAS description for each of the given task environments:
(i) Robot Playing Soccer:
Performance Measure: Goals scored for or against, the team's score, performance
against the competitors, winning the game.
Environment: Soccer field, ball, teammates, opponents, referee, audience, own
body.
Actuators: Legs of the robot, navigator, view detector.
Sensors: Camera, communication links among team members, orientation sensors,
touch sensors, accelerometers, wheel or joint encoders.
(ii) Robot Dust Collector from Classroom:
Performance Measure: Cleanliness, distance traveled to clean, battery life.
Environment: Room, table, wood floor, carpet, different obstacles.
Actuators: Wheels, different brushes, vacuum extractor.
Sensors: Camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall
sensors.
(iii) Autonomous Mars Rover:
Performance Measure: Terrain explored and reported, samples gathered and
analyzed, distance the rover traverses, the number of collected samples,
possibly finding signs of life, maximizing operational lifetime.
Environment: Launch vehicle, lander, Mars.
Actuators: Wheels or legs, robot arm, drill, radio transmitter, sample collection
device, analysis devices.
Sensors: Camera, touch sensors, accelerometers, spectrometers, orientation
sensors, communication links, wheel or joint encoders, radio receiver.
(iv) Part-Picking Robot:
Performance Measure: Percentage of parts in correct bins.
Environment: Conveyor belt with parts, bins.
Actuators: Jointed arm and hand.
Sensors: Camera, joint angle sensors.
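
A PEAS description is just a four-part record, so it can be written down as a small data structure. The following is a minimal Python sketch (not part of the original answer; the class and field names are illustrative, not a standard API) recording the soccer robot's description:

```python
from dataclasses import dataclass

# A minimal sketch: a PEAS description recorded as a data structure.
@dataclass
class PEAS:
    agent: str
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

soccer_robot = PEAS(
    agent="Robot playing soccer",
    performance_measure=["goals for or against", "team score", "winning the game"],
    environment=["soccer field", "ball", "teammates", "opponents", "referee"],
    actuators=["legs", "navigator", "view detector"],
    sensors=["camera", "orientation sensors", "touch sensors", "accelerometers"],
)
print(soccer_robot.performance_measure)
```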
2. You are asked to create an “autonomous car”. To create this car, you
can choose from either “Model based agent” or “Goal based agent”.
Which model will you choose for the above system? Provide
comparison to establish facts for your answer.
Solution:
Agent: An agent is anything that perceives its environment through sensors
and acts upon that environment through actuators. An agent runs in a cycle
of perceiving, thinking, and acting.
Model-Based Agent: A model-based agent can work in a partially observable
environment and track the situation. These agents maintain a model, which is
knowledge of the world, and perform actions based on that model.
A model-based agent has two important factors:
• Model: knowledge about "how things happen in the world"; this is why it is
called a model-based agent.
• Internal State: a representation of the current state based on the percept
history.
Updating the agent state requires information about:
a. how the world evolves, and
b. how the agent's actions affect the world.
Goal-Based Agent: Knowledge of the current state of the environment is not
always sufficient for an agent to decide what to do. The agent needs to know its
goal, which describes desirable situations. For this reason we have the goal-based
agent. These agents take decisions based on how far they currently are from
their goal, and every action is intended to reduce that distance. This gives the
agent a way to choose among multiple possibilities, selecting the one that reaches
a goal state. Goal-based agents expand the capabilities of the model-based agent
by adding "goal" information: they choose actions so as to achieve the goal. Such
an agent may have to consider a long sequence of possible actions before deciding
whether the goal is achieved. This consideration of different scenarios is called
searching and planning, and it makes the agent proactive.
If I have to create an autonomous car, I will choose the goal-based model
for the system. The reason is given below:
A model-based agent has knowledge of the world, and it produces output
based on that model. If we create an autonomous car using a model-based agent,
it will have information about how the world evolves independently of the agent.
For example, if there is an overtaking car on the road, it will generally be closer
than it was a moment ago; a model-based agent can sense this and make the right
decision based on its model.
On the other hand, if we create an autonomous car using a goal-based
agent, the agent will have knowledge of the world as well as the goal. Based on
the goal, it can take decisions different from those of the model-based agent. For
example, consider an autonomous car at a road junction, from where it can turn
left, turn right, or go straight on. The correct decision depends on where the car
is trying to get to; if the car does not have the goal information, it may go the
wrong way. In other words, as well as a current state description, the agent needs
some sort of goal information that describes situations that are desirable. With a
model-based agent there is a chance that the car will take the wrong path, so for
these types of situations an autonomous car must have information about its goal.
That is why, to create an autonomous car, I will choose the goal-based agent.
A toy sketch of this junction scenario is given below.
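
The following Python sketch makes the junction example concrete. It is an illustration under stated assumptions, not a real driving system: the model maps each action at the junction to a hypothetical road it leads to, and only the goal-based agent consults a destination.

```python
# Toy model of the junction: each action leads to a (hypothetical) road.
model = {"left": "highway", "right": "downtown", "straight": "airport"}

def model_based_agent(percept):
    # Knows how the world evolves, but has no goal, so at a junction it
    # can only apply a fixed condition-action rule such as "go straight".
    if percept == "junction":
        return "straight"

def goal_based_agent(percept, goal):
    # Uses the same model, but picks the action whose predicted outcome
    # matches the goal.
    if percept == "junction":
        for action, outcome in model.items():
            if outcome == goal:
                return action

print(model_based_agent("junction"))             # straight (may be the wrong way)
print(goal_based_agent("junction", "downtown"))  # right
```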
3. For the following graph, “The heuristic is consistent but not admissible” –
Is the statement true? – Explain.
Solution:
For the given graph, “The heuristic is consistent but not admissible” – this statement
is not true. The reason is explained below:
Admissible heuristic: A heuristic is admissible if, for each node n in the
graph, h(n) never overestimates the cost of reaching the goal.
Admissibility is measured by the condition:
h(n) ≤ h*(n)
where h*(n) is the real distance between n and the goal node.
Here, we have h(S) = 7, h(A) = 1, h(B) = 5 and h(G) = 0.
So,
f(S) = g(S) + h(S) = 0 + 7 = 7
f(A) = g(A) + h(A) = 4 + 1 = 5
f(B) = g(B) + h(B) = 2 + 5 = 7
If we follow S → A → G, then
f(G) = g(G) + h(G) = 4 + 4 + 0 = 8
Again, if we follow S → B → A → G, then
f(G) = g(G) + h(G) = 2 + 1 + 4 + 0 = 7
We can see that in all cases the heuristic is smaller than or equal to the real cost,
which means the given heuristic follows the rule h(n) ≤ h*(n).
So, we can say that the heuristic is admissible.
Consistent heuristic: A heuristic h(n) is consistent if, for every node n and every
successor n’ of n, the estimated cost of reaching the goal from n is less than or
equal to the actual step cost of getting to n’ plus the estimated cost of reaching the
goal from n’. In an equation, it would look like this:
h (n) ≤ C (n, n’) + h (n’)
For S → B: h(S) ≤ C(S, B) + h(B)
7 ≤ 2 + 5
7 ≤ 7, which is TRUE
For S → A: h(S) ≤ C(S, A) + h(A)
7 ≤ 4 + 1
7 ≤ 5, which is FALSE
For B → A: h(B) ≤ C(B, A) + h(A)
5 ≤ 1 + 1
5 ≤ 2, which is FALSE
For A → G: h(A) ≤ C(A, G) + h(G)
1 ≤ 4 + 0
1 ≤ 4, which is TRUE
Here we can see that the consistency condition is not satisfied everywhere,
so the heuristic is not consistent.
So, we can conclude that the given heuristic is admissible but not consistent, and
the statement is not true. (In fact, a consistent heuristic with h(G) = 0 is always
admissible, so no heuristic can be consistent yet not admissible.)
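
Both checks can also be carried out mechanically. The following Python sketch assumes the graph reconstructed from the values used above (edges S→A with cost 4, S→B with cost 2, B→A with cost 1, A→G with cost 4) and verifies the two conditions:

```python
# Edge costs and heuristic values as used in the calculations above.
edges = {("S", "A"): 4, ("S", "B"): 2, ("B", "A"): 1, ("A", "G"): 4}
h = {"S": 7, "A": 1, "B": 5, "G": 0}

# True remaining cost h*(n) to the goal G, computed by hand above:
# S -> B -> A -> G = 7, A -> G = 4, B -> A -> G = 5.
h_star = {"S": 7, "A": 4, "B": 5, "G": 0}

# Admissible: h(n) <= h*(n) for every node n.
admissible = all(h[n] <= h_star[n] for n in h)

# Consistent: h(n) <= C(n, n') + h(n') for every edge (n, n').
consistent = all(h[n] <= cost + h[n2] for (n, n2), cost in edges.items())

print("admissible:", admissible)  # True
print("consistent:", consistent)  # False (fails for S->A and B->A)
```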
4. Consider the following set of statements:
(i) Whoever can read is literate
(ii) Dogs are not literate
(iii) Some Dogs are intelligent
(iv) Everyone who is not intelligent is liked by no one
From the above statements, conclude that “Some who are intelligent
cannot read” using Resolution.
Solution:
Given,
(i) Whoever can read is literate.
(ii) Dogs are not literate.
(iii) Some dogs are intelligent.
(iv) Everyone who is not intelligent is liked by no one.
We have to prove “some who are intelligent cannot read” using resolution.
First, we define the vocabulary:
• Dog(x): x is a dog
• Literate(x): x is literate
• Intelligent(x): x is intelligent
• Read(x): x can read
• Like(x, y): x likes y
The given sentences can be rewritten in first-order logic as below:
(i) ∀x: Read(x) → Literate(x)
(ii) ∀x: Dog(x) → ¬Literate(x)
(iii) ∃x: Dog(x) ∧ Intelligent(x)
(iv) ∀x ∀y: ¬Intelligent(x) → ¬Like(y, x)
The sentence that we have to prove can be rewritten in first-order logic as:
(v) ∃x: Intelligent(x) ∧ ¬Read(x)
By converting every sentence to clause form, we get:
1. ¬Read(x) ∨ Literate(x)
2. ¬Dog(x) ∨ ¬Literate(x)
3. Clause (iii) is existential, so we Skolemize it by assigning a constant z to the
variable x and splitting the conjunction into two clauses:
a. Dog(z)
b. Intelligent(z)
4. Intelligent(x) ∨ ¬Like(y, x)
Negating (v) and converting it to clause form, we get:
5. ¬Intelligent(x) ∨ Read(x)
The proof by resolution refutation proceeds as follows:
Resolving 5 with 3(b) under the substitution {x/z} gives: Read(z)
Resolving Read(z) with 1 under {x/z} gives: Literate(z)
Resolving Literate(z) with 2 under {x/z} gives: ¬Dog(z)
Resolving ¬Dog(z) with 3(a) gives: NIL
As we derive NIL (the empty clause) from the negation of the expected result, we
can say that “Some who are intelligent cannot read” can be concluded using
resolution.
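
The refutation above can be replayed in a few lines of Python. This is a propositional sketch: since the proof only ever uses the substitution {x/z}, the universally quantified clauses are instantiated with the Skolem constant z up front, and clauses become sets of ground literals (with a leading "~" for negation).

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

clauses = [
    {"~Read(z)", "Literate(z)"},     # 1, with x/z
    {"~Dog(z)", "~Literate(z)"},     # 2, with x/z
    {"Dog(z)"},                      # 3(a)
    {"Intelligent(z)"},              # 3(b)
    {"~Intelligent(z)", "Read(z)"},  # 5, negated goal, with x/z
]

# Follow the proof: 5 + 3(b) -> Read(z); + 1 -> Literate(z);
# + 2 -> ~Dog(z); + 3(a) -> empty clause (NIL).
step = resolve(clauses[4], clauses[3])[0]  # {Read(z)}
step = resolve(step, clauses[0])[0]        # {Literate(z)}
step = resolve(step, clauses[1])[0]        # {~Dog(z)}
step = resolve(step, clauses[2])[0]        # set() == NIL
print("NIL derived:", step == set())
```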
5. Apply A* search for going to E node from S node. Show steps and the
path.
Solution:
Step-1: We start by adding the start node (S) to the OPEN list with path cost
g = 0.

OPEN:
Node  g(n)  h(n)  f(n)
S     0     7     7

CLOSED:
Node  Parent Node
(empty)
Step-2: Now, we move the first node in the OPEN list to the CLOSED list and
expand its immediate successors by adding them to the OPEN list.
OPEN:
Node  g(n)  h(n)  f(n)
A     1     5     6
B     2     6     8

CLOSED:
Node  Parent Node
S     —
Here, the OPEN list is already in ascending order so we don’t have to reorder it.
Step-3: We move the first node from the OPEN list to the CLOSED list and expand
its immediate successors by adding them to the OPEN list.
OPEN:
Node  g(n)  h(n)  f(n)
B     2     6     8
X     5     5     10
Y     8     8     16

CLOSED:
Node  Parent Node
S     —
A     S
Here, the OPEN list is again in ascending order so we don’t have to reorder it.
Step-4: We move the first node from the OPEN list to the CLOSED list and expand
its immediate successors by adding them to the OPEN list.
OPEN:
Node  g(n)  h(n)  f(n)
X     5     5     10
Y     8     8     16
C     9     4     13
D     3     15    18

CLOSED:
Node  Parent Node
S     —
A     S
B     S
Step-5: Now we reorder the OPEN list in ascending order of the combined heuristic
value.
OPEN:
Node  g(n)  h(n)  f(n)
X     5     5     10
C     9     4     13
Y     8     8     16
D     3     15    18

CLOSED:
Node  Parent Node
S     —
A     S
B     S
Step-6: We move the first node from the OPEN list to the CLOSED list and expand
its immediate successors by adding them to the OPEN list.
OPEN:
Node  g(n)  h(n)  f(n)
C     9     4     13
Y     8     8     16
D     3     15    18
E     7     0     7

CLOSED:
Node  Parent Node
S     —
A     S
B     S
X     A
Step-7: Now we reorder the OPEN list in ascending order of the combined heuristic
value.
OPEN:
Node  g(n)  h(n)  f(n)
E     7     0     7
C     9     4     13
Y     8     8     16
D     3     15    18

CLOSED:
Node  Parent Node
S     —
A     S
B     S
X     A
Step-8: We move the first node from the OPEN list to the CLOSED list and expand
its immediate successors by adding them to the OPEN list.
OPEN:
Node  g(n)  h(n)  f(n)
C     9     4     13
Y     8     8     16
D     3     15    18

CLOSED:
Node  Parent Node
S     —
A     S
B     S
X     A
E     X
Now the goal node (E) has been moved to the CLOSED list, so the search stops.
Backtracking the parent pointers in the CLOSED list (E ← X ← A ← S), we get
the optimal path S → A → X → E with total cost 7.
We can also show the steps by a tree. The tree is given below:

S (0 + 7 = 7)
├── A (1 + 5 = 6)
│   ├── X (1 + 4 + 5 = 10)
│   │   └── E (1 + 4 + 2 + 0 = 7)
│   └── Y (1 + 7 + 8 = 16)
└── B (2 + 6 = 8)

Figure: A* Search Algorithm
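
The whole search can be reproduced with a short Python sketch. The edge costs and heuristic values below are reconstructed from the tables above (the original graph figure is not shown here), so treat them as assumptions.

```python
import heapq

# Graph and heuristic reconstructed from the step tables above.
graph = {
    "S": {"A": 1, "B": 2},
    "A": {"X": 4, "Y": 7},
    "B": {"C": 7, "D": 1},
    "X": {"E": 2},
    "Y": {}, "C": {}, "D": {}, "E": {},
}
h = {"S": 7, "A": 5, "B": 6, "X": 5, "Y": 8, "C": 4, "D": 15, "E": 0}

def a_star(start, goal):
    # The heap plays the role of the OPEN list, ordered by f = g + h.
    open_list = [(h[start], 0, start, [start])]  # (f, g, node, path)
    closed = set()                               # the CLOSED list
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph[node].items():
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")

path, cost = a_star("S", "E")
print(" -> ".join(path), "with cost", cost)  # S -> A -> X -> E with cost 7
```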
6. Apply alpha-beta pruning on the following minimax search tree to
determine the best move for max at the root position. Specify the
branches that should be pruned. [ID = last 2 digits of your ID]
Solution:
Alpha-beta pruning is a modified version of the minimax algorithm; it is an
optimization technique for minimax. It cuts off branches in the game tree which
need not be searched because a better move is already available. It is called
alpha-beta pruning because it passes two extra parameters to the minimax
function, namely alpha and beta.
The two parameters can be defined as:
(i) Alpha: the best (highest-value) choice we have found so far at any point
along the path of the Maximizer. The initial value of alpha is -∞.
(ii) Beta: the best (lowest-value) choice we have found so far at any point
along the path of the Minimizer. The initial value of beta is +∞.
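
Since the concrete tree depends on the student ID, here is a generic Python sketch of the technique rather than the exact tree from the figure. A game tree is a nested list whose leaves are static evaluation values; the function returns the minimax value while cutting off branches as soon as α ≥ β.

```python
def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):  # leaf: return its static value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:       # beta cutoff: prune remaining children
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:       # alpha cutoff: prune remaining children
                break
        return value

# Example with hypothetical leaf values (max root over three min nodes):
print(alpha_beta([[3, 5], [6, 9], [1, 2]]))  # 6; the leaf 2 is pruned
```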
The graph with alpha-beta pruning is given below:
Initial Step:
[Figure: the game tree rooted at the max node A, with min successors B, C and D
and the max nodes E–K beneath them; every node starts with α = -∞ and β = +∞.
The leaf values include 13, 9, 25, 11, 9, 5, 25, 12, 5, 6, 5, 9, 3 and 2.]

Final Step:
[Figure: the same tree after alpha-beta pruning. The max root A obtains the
value 13. Every branch at which α ≥ β holds is pruned; for example at node J,
α becomes 25 against β = 13, so J's remaining successors are cut off.]

Fig: Alpha-Beta Pruning