Custom SCOA 032 Slides

Artificial Intelligence: Its Roots and Scope

1.1 From Eden to ENIAC: Attitudes toward Intelligence, Knowledge, and Human Artifice
1.2 Overview of AI Application Areas
1.3 Artificial Intelligence – A Summary
1.4 Epilogue and References
1.5 Exercises

George F Luger, ARTIFICIAL INTELLIGENCE 5th edition: Structures and Strategies for Complex Problem Solving
What is AI?
• AI stands for Artificial Intelligence
• Which disciplines are concerned with intelligence:
– Computer Science, Psychology, Mathematics,
Logic, Linguistics, Biology.
• Can machines be intelligent? An ongoing debate.
Definition of AI
• AI: The branch of computer science that is
concerned with the automation of intelligent
behaviour.
– Oops ... what is "intelligent"?
– Possibilities: the ability to solve a problem, and the
ability to memorise and access information.
• AI: The collection of problems and methodologies
studied by artificial intelligence researchers.
Questions
• What is intelligence?
• Is intelligence a single faculty or a collection of distinct and
unrelated abilities?
• What is creativity?
• Can intelligence be observed from behaviour, or does it require
evidence of an internal mechanism?
• How is knowledge represented in the nerve tissue of a living being?
• Can computers be intelligent, or does intelligence require the
richness of sensation and experience of biological existence?
• What happens when learning occurs?
What is AI?
• The goal of (weak) artificial intelligence is:
Build intelligent entities (model intelligence).
• Understand intelligent entities, and understand
and engineer human intelligence (strong AI).
What is an Intelligent Entity?
Four possible definitions (Russell and Norvig):
1. Systems that act like humans. Strong
2. Systems that think like humans. Strong
3. Systems that think rationally, according to
logic laws. Weak
4. Systems that act rationally, or do the right
thing. Weak
Figure 1.1 Turing Test: Objective measurement
Turing: An entity is deemed intelligent if it demonstrates an
ability to achieve human-level performance in all cognitive
tasks, sufficient to fool an interrogator.
Turing test
• Gives an objective notion of intelligence and avoids
the debate over the nature of intelligence.
• Avoids the question of whether the computer
uses appropriate internal processes.
• Focuses on questions and answers, and avoids
the question of living-organism versus
machine intelligence.
Requirements for a Computer to Pass the Turing Test
• Natural Language Processing capabilities.
• Knowledge representation to store information
provided before or during the interrogation.
• Automated reasoning to use stored information
to answer questions and draw new conclusions.
• Learning capabilities.
ELIZA (Therapist system)
• PERSON: My wife says I'm often depressed.
• ELIZA: I'm sorry to hear you are depressed.
• PERSON: The truth is that I am quite unhappy.
• ELIZA: Do you think coming here will help
you not to be unhappy?
• PERSON: Well, perhaps by talking to you I
will find out why I hate my brother.
• ELIZA: Tell me more about your family.
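ELIZA produced such dialogue by shallow keyword matching against response templates, not by understanding. A minimal sketch of that idea in Python (the patterns and replies below are illustrative, not Weizenbaum's originals):

import re

# Illustrative keyword -> response-template rules in the spirit of ELIZA.
rules = [
    (r"my wife (.*)", "Tell me more about your family."),
    (r"i am (.*)", "I'm sorry to hear you are {0}."),
    (r".*", "Please go on."),  # default when nothing else matches
]

def eliza_reply(sentence):
    for pattern, template in rules:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())

print(eliza_reply("I am quite unhappy"))  # -> I'm sorry to hear you are quite unhappy.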
Eliza
• ELIZA failed the Turing test!
Specific Area
• It is difficult to learn everything; intelligent
entities should concentrate on a specific
domain.
• We need a domain expert.
Important Research and Application Areas
1.2.1 Game Playing
1.2.2 Automated Reasoning and Theorem Proving
1.2.3 Expert Systems
1.2.4 Natural Language Understanding and Semantic Modeling
1.2.5 Modeling Human Performance
1.2.6 Planning and Robotics
1.2.7 Languages and Environments for AI
1.2.8 Machine Learning
1.2.9 Alternative Representations: Neural Nets and Genetic Algorithms
1.2.10 AI and Philosophy
Other areas
Important Research and Application Areas (Continued)
1.2.1 Game Playing: uses heuristics (Chapter 4); it searches a state space.
Board games (played using well-defined rules):
e.g. chess, the 8-tile puzzle, the 16-tile puzzle.
[Figure: an initial state and goal state of the 8-tile puzzle]
Important Research and Application Areas (Continued)
1.2.2 Automated Reasoning and Theorem Proving (more in Chapter 13)
E.g. answering questions:
R1: If I have enough time, I will study.
R2: If I study, I will pass.
R3: I have no time (fact).
Q: Shall I pass? Answer: No.
Why: You have no time.
How: Explanation (justification).
e.g. mathematical reasoning, program analysis, state transformation problems (liquid to solid).
Note: Theorem proving helped in formalizing search algorithms and in the development of
predicate calculus and Prolog.
Important Research and Application Areas (Continued)
1.2.3 Expert Systems (more in Chapter 8)
Programs that reason to solve problems, e.g. diagnosis.
Modelling an expert: doctor (diagnoses illness), geologist (discovers minerals).
We need domain-specific knowledge, obtained from a domain expert by an AI specialist
(knowledge engineer).
e.g. DENDRAL (Stanford University, late 1960s): infers the structure of organic molecules from
their chemical formulas and other information.
e.g. MYCIN: medical system developed in the mid 1970s at the Stanford University medical
school. Diagnoses bacterial infections using uncertain or incomplete information.
e.g. PROSPECTOR: decides the probable location of minerals based on geological information.
e.g. INTERNIST, Dipmeter Advisor, XCON (VAX configuration).
+ve: save time, save money, replace the expert in rural areas or when unavailable, acquire
experience from experts.
Important Research and Application Areas (Continued)
1.2.3 Expert Systems: an example solving second-order equations
ax² + bx + c = 0
Expert: mathematician
User: student
Knowledge base:
Rule 1: If a ≠ 0 and b² – 4ac > 0 then x1,2 = (–b ± √(b² – 4ac)) / 2a
Rule 2: If a ≠ 0 and b² – 4ac = 0 then x = –b / 2a
Rule 3: If a ≠ 0 and b² – 4ac < 0 then there is no (real) solution
e.g. a=2, b=–3, c=1 → x1=1, x2=1/2
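A minimal sketch of these three rules in Python (a direct transcription of the rules, not a full production system):

import math

# Sketch of the knowledge-base rules for a*x^2 + b*x + c = 0
# (assumes a != 0, as each rule's premise requires).
def solve_quadratic(a, b, c):
    d = b * b - 4 * a * c            # the discriminant b^2 - 4ac
    if d > 0:                        # Rule 1: two distinct real roots
        r = math.sqrt(d)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if d == 0:                       # Rule 2: one repeated root
        return (-b / (2 * a),)
    return None                      # Rule 3: no real solution

print(solve_quadratic(2, -3, 1))     # -> (1.0, 0.5), matching the slide's example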
Important Research and Application Areas (Continued)
1.2.3 Deficiencies of Expert Systems
1. Difficulty in obtaining deep knowledge.
2. Lack of robustness and flexibility: they lack the ability to work around a problem.
3. Inability to provide deep explanations.
4. Difficulty in verification.
5. Little learning from experience.
Important Research and Application Areas (Continued)
1.2.4 Natural Language Understanding and Semantic Modelling (more in Chapters 7 and 14)
Programs capable of understanding and generating human language.
Language is part of human intelligence.
1.2.5 Modelling Human Performance (more in Chapter 17)
Design of systems that explicitly model the organization of the human mind.
Important Research and Application Areas (Continued)
1.2.6 Planning and Robotics
Breaking the problem into smaller parts.
e.g. going from Amman to Cairo:
Go to Amman airport by either taxi or bus.
Go from Amman airport to Cairo airport by either a Royal Jordanian plane
or an Egypt Airways plane.
Go to a hotel from Cairo airport by either a taxi or a shuttle bus.
Important Research and Application Areas (Continued)
1.2.7 Languages and Environments for AI: LISP, Prolog (more in
Chapters 15 and 16)
Prolog: Programmation en Logique (logic programming), Alain Colmerauer, 1973.
LISP: LISt Processing.
Programming languages that help in programming AI applications.
Characteristics of such languages:
Knowledge representation
Search (e.g. the unification technique)
Important Research and Application Areas (Continued)
1.2.8 Machine Learning (more in Chapters 11 and 12)
Learning from previous experience.
An expert system performs the same computations again and again without
remembering the solution it reached the first time.
Solution: programs learn on their own from experience, analogy, examples,
or by being "told" what to do.
e.g. techniques: Case-Based Reasoning (CBR), Instance-Based Learning
(IBL), exemplar-based learning, ID3 trees.
e.g. systems: Automated Mathematician, meta-DENDRAL, Teiresias.
Important Research and Application Areas (Continued)
1.2.9 Alternative Representations: Neural Nets and Genetic Algorithms
Alternative: Knowledge is not represented explicitly.
Artificial Neural Networks: Parallel Distributed Processing.
Genetic Algorithms: Natural selection and evolution.
Fuzzy Logic: Things are not black and white, there is a grey too.
MATLAB® : ANNs, GAs, Fuzzy Logic toolboxes.
A simple Neuron (Crick and Asanuma, 1986)
[Figure: a neuron's cell body, dendrites, axon, and synapse]
Important Research and Application Areas (Continued)
1.2.10 AI and Philosophy
Philosophy contributed to the development of AI;
now, AI is affecting philosophy.
AI opens some deep philosophical questions about thinking and natural
language understanding.
Other areas:
Perception: voice recognition, pattern recognition, image processing,
character recognition.
Vision: surveillance, CCTV.
Important Features of Artificial Intelligence
1. The use of computers to do reasoning, pattern recognition, learning, or some other
form of inference.
2. A focus on problems that do not respond to algorithmic solutions. This underlies
the reliance on heuristic search as an AI problem-solving technique.
3. A concern with problem-solving using inexact, missing, or poorly defined
information and the use of representational formalisms that enable the programmer
to compensate for these problems.
4. Reasoning about the significant qualitative features of a situation.
5. An attempt to deal with issues of semantic meaning as well as syntactic form.
6. Answers that are neither exact nor optimal, but are in some sense "sufficient". This
is a result of the essential reliance on heuristic problem-solving methods in
situations where optimal or exact results are either too expensive or not possible.
7. The use of large amounts of domain-specific knowledge in solving problems. This
is the basis of expert systems.
8. The use of meta-level knowledge (knowledge about knowledge) to effect more
sophisticated control of problem-solving strategies. Although this is a very difficult
problem, addressed in relatively few current systems, it is emerging as an essential
area of research.
Domain Specific Knowledge
[Figure: blocks world — cube a and cube b on the table, pyramid c on cube b]
• clear(c)
• clear(a)
• ontable(a)
• ontable(b)
• on(c,b)
• cube(a)
• cube(b)
• pyramid(c)
• For all X, if there does not exist a Y such that on(Y,X), then clear(X).
• Movement definition:
– hand_clear ∧ clear(X) ∧ clear(Y) → on(X,Y)
Features of AI Programs
• Knowledge representation:
– Knowledge is represented explicitly in AI using a
knowledge representation language, e.g. Prolog.
– Knowledge acquisition methods such as machine
learning.
• Search algorithms.
• Use of heuristics: may reach a suboptimal solution.
• Symbolic reasoning, using languages such as LISP and Prolog.
The Predicate Calculus
2.0 Introduction
2.1 The Propositional Calculus
2.2 The Predicate Calculus
2.3 Using Inference Rules to Produce Predicate Calculus Expressions
2.4 Application: A Logic-Based Financial Advisor
2.5 Epilogue and References
2.6 Exercises

George F Luger, ARTIFICIAL INTELLIGENCE 5th edition: Structures and Strategies for Complex Problem Solving
Propositional & Predicate Calculus
• Languages to express (represent) knowledge.
• They use words, phrases, and sentences to
represent knowledge and to reason about
properties and relationships of the world.
Propositional Calculus
Examples of sentences
• P represents (denotes) “My car is green”
• Q represents “It is raining”
• R represents “I like my job”
P ^ Q is a sentence.
For P → Q:
P: premise or antecedent
Q: conclusion or consequent
Example of a WFF
For propositional expressions P, Q and R:
Figure 2.1: Truth table for the operator ∧.
Figure 2.2: Truth table demonstrating the equivalence of:
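The expressions compared in Figure 2.2 are not reproduced in this text extract. As an illustration of the truth-table method the figure uses, here is a mechanical check of one classical equivalence, P → Q ≡ ¬P ∨ Q (a sketch in Python):

from itertools import product

# Enumerate all truth assignments and compare the two expressions.
for p, q in product([True, False], repeat=2):
    implication = not (p and not q)   # P -> Q: false only when P true and Q false
    disjunction = (not p) or q        # (not P) or Q
    print(f"P={p} Q={q}  P->Q={implication}  (not P) or Q={disjunction}")
    assert implication == disjunction # they agree on every row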
Prove the following
Give the representation for the
following sentences
• I will go to Aqaba or I will visit the zoo
• Ali likes sweets, and Ahmad doesn't eat
waffles.
• If Omar is ill then he goes to the doctor.
Propositional Calculus
• Also called propositional logic
• Zero-order logic
– Does not contain variables
– Always uses constants
– Uses verbs
• Action: eat, like
• Static: is, was
Propositional Calculus – Lack of
Representational Power
• Example 1:
– All students are smart
– Ali is a student
– With propositional calculus we cannot conclude
that Ali is smart
• Example 2:
– Ali likes sweets
– Ali eats everything he likes
– We cannot conclude that Ali eats sweets
Examples of differences
• Propositional calculus: “It rained on Tuesday”
– Single statement
• Predicate calculus expression: weather(tuesday, rain)
– We can access single components
– Relationship between components
– Through inference rules we can manipulate
predicate calculus expressions, access their
components, and infer new sentences.
Predicate Calculus (First-order logic)
Advantages:
• More representation power.
• Expressions may contain variables (General
assertions).
• Well-defined formal semantics.
• Sound and Complete inference rules.
Predicate Calculus Example
• For all values of X, where X is a day of the
week, weather (X, rain) is true.
• i.e. it rains every day.
Predicate calculus symbols
• Symbols denote objects, properties, or
relations.
• Use meaningful names.
– w(tu,r) vs. weather(tuesday, rain)
Examples of predicate calculus terms
• Terms are symbols which are either variables,
constants, or function expressions.
• Examples
– cat
– times(2,3)
– X
– blue
– mother(jane)
– kate
Examples of predicate calculus functions
• father(ali) its value may be ahmad
• plus(4, 5) its value may be 9
• price(apple) its value may be 75
• Replacing the function with its value is
called evaluation.
Predicates
• Begin with a lower case letter.
• A predicate is a relationship between zero or
more objects in the world.
– likes, equals, on, near, part_of
– red(book1) represents a property.
– on(laptop,table1) represents a relationship between the laptop
and the table.
Atomic sentences, atoms, or
propositions
• Atomic sentences: predicate of arity n
followed by n terms:
• Examples:
– likes(george, kate)
– friends(ali,hasan)
– friends(farah,areej,school)
– helps(ahmad,faris)
– friends(father_of(ali), brother_of(ahmad))
Variable quantifiers
• universal quantifier (for all)
• existential quantifier (there exists)
Some Properties of quantifiers
Examples of mapping between English
language and predicate calculus
Quiz Translate from English into first
order logic
• Every hardworking student who attends his
exams will pass
• Every student who does his homework and
revises his lectures is a hardworking student
• ali revises his lectures and does his
homework
• ali attends all his exams
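One possible formalization of the four quiz sentences (the predicate names below are illustrative choices, not fixed by the slides):
(∀X)(student(X) ^ hardworking(X) ^ attends_exams(X) → pass(X))
(∀X)(student(X) ^ does_homework(X) ^ revises_lectures(X) → hardworking(X))
student(ali) ^ revises_lectures(ali) ^ does_homework(ali)
attends_exams(ali)
From these, modus ponens yields hardworking(ali) and then pass(ali).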
Test for well-formedness
verify_sentence algorithm
World description using predicate
calculus
Figure 2.3: A blocks world with its predicate calculus description.
Note: predicate calculus is declarative, i.e. no timing or
ordering is assumed. PROLOG is an example of
procedural semantics, where expressions are evaluated
over time.
Blocks World - Continued
Inference Rules
• Logical inference: the ability to infer
(produce) new correct expressions
(sentences) from a set of true assertions.
• An expression X logically follows from a set
of predicate calculus expressions S if every
interpretation that satisfies S also satisfies X.
Inference Rules
• Inference rule: A mechanism of producing
new predicate calculus sentences from other
sentences.
• When every sentence X produced by an
inference rule operating on a set of sentences
S logically follows from S, the inference
rule is said to be sound.
Inference Rules
• If the inference rule is able to produce every
expression that logically follows from S, it
is said to be complete.
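The canonical sound inference rule is modus ponens: from P and P → Q we may infer Q. For example, from man(socrates) and (∀X)(man(X) → mortal(X)) we can infer mortal(socrates), using the substitution {socrates/X}.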
Inference Rules
Resolution Inference Rule
e.g. from winter ∨ summer and ¬winter ∨ cold, resolution infers summer ∨ cold.
Unification
• We need a way to check whether two expressions
match.
• Unification: an algorithm for determining the
substitutions needed to make two predicate
calculus expressions match.
Elimination of the existential quantifier
• Unification requires the elimination of
existential quantifiers, as it requires that all
variables be universally quantified to give
freedom in substitutions.
Examples of substitutions
• The expression:
foo(X, a, goo(Y))
• Can yield many expressions using legal
substitutions (bindings):
– foo(fred, a, goo(Z))
using {fred/X, Z/Y}
– foo(W, a, goo(jack)) using {W/X, jack/Y}
– foo(Z, a, goo(moo(Z))) using {Z/X, moo(Z)/Y}
In unifying the expressions p(X) and p(Y) the substitution
{Z/X, Z/Y} is more general than the substitution {fred/X, fred/Y}
Figure 2.5: Further steps in the unification of
(parents X (father X) (mother bill)) and
(parents bill (father bill) Y).
Figure 2.6: Final trace of the unification of (parents X (father X)
(mother bill)) and (parents bill (father bill) Y).
Unification Answer
{bill/X, (mother bill)/Y}
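A compact sketch of unification in Python, using nested tuples for function expressions and capitalised strings for variables (these representation choices are mine, and the occurs check is omitted for brevity):

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:      # follow variable bindings in substitution s
        t = s[t]
    return t

def unify(t1, t2, s):
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                      # clash: distinct constants or structures

print(unify(('parents', 'X', ('father', 'X'), ('mother', 'bill')),
            ('parents', 'bill', ('father', 'bill'), 'Y'), {}))
# -> {'X': 'bill', 'Y': ('mother', 'bill')}, i.e. {bill/X, (mother bill)/Y}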
Sec 2.4: A Logic-Based Financial Advisor
1. If you have an inadequate savings account,
increase your savings, regardless of your
income.
2. If you have an adequate savings account and
an adequate income, invest in the stock market.
3. If you have an adequate savings account but
an inadequate income, split your savings
between your bank account and an investment in the
stock market.
Sec 2.4: A Logic-Based Financial Advisor
• Adequate savings corresponds to $5,000 for
each dependent.
– Define a function minsavings(X) = 5000 Ā· X.
• Adequate income means $15,000 plus $4,000
for each dependent.
– Define a function minincome(X) = 15000 + 4000 Ā· X.
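A minimal sketch of the advisor in Python (the thresholds follow the slides; encoding the rules as plain functions is my own simplification of the logic-based version):

def minsavings(dependents):
    return 5000 * dependents                 # $5,000 per dependent

def minincome(dependents):
    return 15000 + 4000 * dependents         # $15,000 plus $4,000 per dependent

def advise(savings, income, dependents):
    savings_ok = savings > minsavings(dependents)
    income_ok = income > minincome(dependents)
    if not savings_ok:
        return "savings"                     # Rule 1: increase savings first
    return "stocks" if income_ok else "combination"  # Rules 2 and 3

print(advise(22000, 25000, 3))  # adequate savings, inadequate income -> combination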
Added Predicates
• 12. saving(adequate)
• 13. income(inadequate)
The conclusion is investment(combination).
English to Predicate
• Represent the following sentences in first-order
predicate logic.
1. Every white cat is larger than a black dog.
(∀X)(∃Y)(cat(X)^white(X)^dog(Y)^black(Y) → larger(X, Y))
2. Every frog is green if it has a white bird
(∀X)(∃Y)(frog(X)^bird(Y)^white(Y)^has(X, Y) → green(X))
English to Predicate
3. Some cats do not like large dogs.
(∃X)(∀Y) (cat(X)^dog(Y)^large(Y) → ¬like(X,Y))
4. Only green or yellow frogs hop if it rains
rain() → (∀X)(hop(X) → frog(X) ^ (green(X) V yellow(X)))
5. if sami passes all his IT exams, he will be happy.
(∀X)(it(X)^pass(sami, X) → happy(sami))
English to Predicate
6. If an animal lays eggs, it is a mammal or a
duck-billed platypus (a type of duck).
7. No person likes a smart vegetarian.
English to Predicate
8. Not all students take both history and biology.
9. In every classroom there are some excellent
students who passed all IT-exams.
10. John is shorter than every student that passed
the history exam.
Predicate to English
11. (∀X)(∀Y)(believe(Y, bob) ^ like(Y, mary) → like(X,Y))
Everyone likes everyone that believes bob and likes mary.
12. (∃X)( ∀Y)(egg(X)^egg(Y) ^ boiled(X)^raw(Y) →
heavier(X,Y))
Some boiled eggs are heavier than raw ones
HEURISTIC SEARCH
4.0 Introduction
4.1 An Algorithm for Heuristic Search
4.2 Admissibility, Monotonicity, and Informedness
4.3 Using Heuristics in Games
4.4 Complexity Issues
4.5 Epilogue and References
4.6 Exercises

George F Luger, ARTIFICIAL INTELLIGENCE 5th edition: Structures and Strategies for Complex Problem Solving
Heuristic
• “The study of the methods and rules of discovery
and invention” George Polya, 1945.
Heuristic in state space search
• Rules for choosing those branches in a state space
that are most likely to lead to an acceptable problem
solution
• It is used in domains like game playing and theorem
proving, where heuristics are the only practical
approach.
• We need a criterion and an algorithm to implement the
search.
When to employ heuristics?
1. A problem may not have an exact solution
because of ambiguities in the problem statement
or the available data.
– e.g. in medical systems, given symptoms may
have several causes.
– Doctors use heuristics to choose the most likely
diagnosis.
– e.g. vision: several possible interpretations
of a given scene.
When to employ heuristics?
2. An exact solution may exist, but its
computational cost is prohibitive.
– e.g. the chess state space grows exponentially,
or factorially, with the depth, i.e. combinatorially
explosive growth.
– A solution may not be found in practical time.
– Search only the most promising paths.
Heuristics fail
• Heuristics are fallible, as a heuristic is only a guess,
although a guess based on experience
and intuition.
• Heuristics may find a suboptimal solution or
fail to find a solution, as they depend on limited
information such as the current state.
Fig 4.1 First three levels of the tic-tac-toe state space reduced by symmetry
Exhaustive search has 9! states;
using symmetry this reduces to 12 Ɨ 7!.
Fig 4.2 The “most wins” heuristic applied to the first children in tic-tac-toe.
Fig 4.3 Heuristically reduced state space for tic-tac-toe.
Heuristic Search
1. Hill-Climbing (Greedy)
2. Best-First Search
3. N-Beam First
Hill Climbing
• Evaluate the children of the current state.
• The best child is selected for further
expansion; neither its siblings nor its parent
are retained.
• It is similar to a blind mountain climber:
– go uphill along the steepest path until you can
go no further up.
• It keeps no history, therefore it cannot
recover from failures.
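A minimal sketch of this loop in Python; children and value stand for problem-specific successor and evaluation functions (assumed, not from the slides):

def hill_climb(state, children, value):
    while True:
        successors = children(state)
        if not successors:
            return state
        best = max(successors, key=value)
        if value(best) <= value(state):
            return state             # local maximum or plateau: no history, no recovery
        state = best                 # keep only the best child; discard siblings and parent

# e.g. maximize f(x) = -(x - 3)^2 over the integers, starting at 0:
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # -> 3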
Hill Climbing
• A problem with hill climbing is its
tendency to get stuck at local maxima
(or minima).
• If it reaches a state that has a better
evaluation than any of its children, the
algorithm halts.
• It may never reach the overall best.
Heuristic Search
Hill-Climbing (Greedy):
– -ve:
1. Local minima (maxima)
2. Flat areas (equal heuristics)
Hill Climbing
• In the 8-tile puzzle, to move from a state
where two tiles are out of place to the goal,
we may need to pass through a state where
three tiles are out of place.
• In hill climbing and similar algorithms without
backtracking or another recovery mechanism,
there is no way to distinguish between local and
global maxima (minima).
Fig 4.4 The local maximum problem for hill-climbing with 3-level look
ahead
Self Reading
Study Samuel’s (1959) Checker program
15
Dynamic Programming
• Dynamic programming
(forward-backward), Bellman, 1956.
• When using probabilities it is called the
Viterbi algorithm.
• Used to search, with restricted memory, in
problems that involve multiple interacting
subproblems.
Dynamic Programming
• DP keeps track of and reuses subproblems
already searched and solved within the
solution of the larger problem.
• e.g. the Fibonacci series: reuse subseries
solutions.
• Subproblem caching is sometimes called
memoizing partial subgoals.
– Possible applications: string matching, spell
checking.
Dynamic Programming- Application 1
• Find optimal global alignment of two
strings
• BAADDCABDDA
• BBADCBA
One possible solution is
• BAADDCABDDA
• BBADC B A
Fig 4.5 The initialization stage and first step in completing the
array for character alignment using dynamic programming.
Fig 4.6 The completed array reflecting the maximum alignment information
for the strings.
Fig 4.7 A completed backward component of the dynamic programming
example giving one (of several possible) string alignments.
Dynamic Programming- Application 2
• Find minimum difference.
• e.g. Building an intelligent spell checker.
• To correct a certain word using a list of words
in our dictionary.
• What is the best approximation? Minimum
difference?
• It could also be used in speech recognition:
what is the best word that approximates a
string of phonemes?
Dynamic Programming- Application 2
• A spell checker should produce an ordered
list of the most likely words similar to a
word that you tried to spell but
misspelled.
• How can we measure the difference?
• Possible definition: The number of
insertions, deletions, and replacements
necessary to change the incorrect word into
a correct one (Levenshtein distance).
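A sketch of this measure as a dynamic program in Python, using the cost model given with Figure 4.8 below (insertion and deletion cost 1, replacement costs 2):

def edit_distance(src, tgt):
    n, m = len(src), len(tgt)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i                              # delete all of src[:i]
    for j in range(1, m + 1):
        d[0][j] = j                              # insert all of tgt[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if src[i - 1] == tgt[j - 1] else 2
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + sub) # replacement (or free match)
    return d[n][m]

print(edit_distance("intention", "execution"))   # -> 8, as in the trace below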
Fig 4.8 Initialization of minimum edit difference matrix between intention
and execution (adapted from Jurafsky and Martin, 2000).
Cost of insertion or deletion is 1;
cost of replacement is 2.
Fig 4.9 Complete array of minimum edit difference between intention and execution
(adapted from Jurafsky and Martin, 2000).
Intention
ntention delete I, cost 1
etention replace n with e, cost 2
exention replace t with x, cost 2
exenution insert u, cost 1
execution replace n with c, cost 2
Heuristic Search
2. Best First Search:
– It starts with the best child
– It may keep exploring one branch at the expense
of other branches, so it may not find the goal
state
– Memory requirements:
• Worst case: as bad as breadth-first search.
• Best case: as good as depth-first search.
Heuristic Search
3. N-Beam First:
– Keeps the N best children (N: 4, 5, 6, 7) and
discards the others.
– Avoids the risk of keeping one child and
discarding all the other children.
– It uses a heuristic function that takes into
consideration how far the state is from the
initial state.
– Far-away states get worse heuristic values.
4.2 The Best-first search
• It uses a priority queue to recover from the
local maxima and dead-end problems
that may occur in hill climbing.
• It uses two lists to maintain states:
– open: keeps track of the current fringe of
the search.
– closed: records states already visited.
4.2 The Best-first search
• One added step: order states in open
according to some heuristic (how close they
are to a goal).
• By doing this order: the most promising
state is considered first.
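A minimal sketch in Python, with open as a priority queue ordered by the heuristic h and closed as a set; h, children, and goal are assumed problem-specific functions:

import heapq
from itertools import count

def best_first_search(start, children, h, goal):
    tie = count()                        # tie-breaker so states are never compared
    open_list = [(h(start), next(tie), start)]
    closed = set()                       # states already visited
    while open_list:
        _, _, state = heapq.heappop(open_list)   # most promising state first
        if goal(state):
            return state
        if state in closed:
            continue
        closed.add(state)
        for child in children(state):
            if child not in closed:
                heapq.heappush(open_list, (h(child), next(tie), child))
    return None                          # open exhausted: no goal found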
Fig 4.10 Heuristic search of a hypothetical state space.
A trace of the execution of best_first_search for Figure 4.4
Fig 4.11 Heuristic search of a hypothetical state space with open and closed
states highlighted.
Fig 4.12 The start state, first moves, and goal state for an example-8 puzzle.
Fig 4.14 Three heuristics applied to states in the 8-puzzle.
Fig 4.15 The heuristic f applied to states in the 8-puzzle.
The successive stages of open and closed that generate this graph are:
Fig 4.16 State space generated in heuristic search of the 8-puzzle graph.
Fig 4.17 Open and closed as they appear after the 3rd iteration of heuristic
search
Games (like 8-tile puzzle) and search
Why study heuristics in games?
1. The search space is large → requires pruning.
2. Games may have many possible heuristics.
3. Knowledge representation is easy, therefore
we can focus on the heuristic rather than on
the representation.
4. A single heuristic can be applied to the
whole graph, as states are represented
uniformly.
Confidence
• A set of confidence rules for the financial
advisor problem:
saving_account(adequate) ^ income(adequate)
→ investment(stock) with confidence = 0.75
• How can we assign such confidences?
• How can we combine rules?
• If more than one rule produces the same
conclusion, what do we do?
Breadth-first search is admissible, as it
guarantees finding the shortest path.
A* is admissible when h(n) ≤ h*(n).
Heuristic Function
• f(n) = g(n) + h(n)
f: Heuristic function
g: Distance between the current state and the
initial state
h: Distance between the current state and the
goal state
Heuristic Function
• A Algorithm: a best-first search algorithm that
uses a heuristic function of the form
f(n) = g(n) + h(n)
• A* Algorithm: an A algorithm with h(n) ≤ h*(n), where
• h(n): the heuristic estimate.
• h*(n): the actual length (distance) of the shortest path between
state n and the goal state.
• h(n) should NOT overestimate h*(n).
• The A* algorithm is admissible.
• When h(n) = 0, then f(n) = g(n) → breadth-first search
(admissible).
Heuristics for the 8-Tile Puzzle
• h1(n) = 0
• h2(n) = number of tiles out of place
• h3(n) = sum of the distances between each tile and its
correct position.
h1(n) ≤ h2(n) ≤ h3(n) ≤ h*(n)
• h3(n) is the closest to h* without over-estimating it;
it is the most informed.
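A sketch of h2 and h3 in Python, with boards as 9-tuples in row-major order and 0 for the blank (a common convention, assumed here):

def h2(state, goal):
    # number of tiles out of place (blank excluded)
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h3(state, goal):
    # sum of Manhattan distances of each tile from its goal position
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
state = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(h2(state, goal), h3(state, goal))   # h2 <= h3 <= h* for any state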
Informedness
• Informedness: if we have two heuristics
h1(n) and h2(n), if h1(n) ≤ h2(n) for all n
• h2(n) is said to be more informed than
h1(n)
• A more informed heuristic explores fewer
states
Note: since calculating the heuristic function requires
processing time, there is a trade-off between
using the heuristic to reduce the search size and the
cost of computing the heuristic itself.
Fig 4.18 Comparison of state space searched using heuristic search with space searched by
breadth-first search. The proportion of the graph searched heuristically is shaded. The optimal
search selection is in bold. Heuristic used is f(n) = g(n) + h(n) where
h(n) is the number of tiles out of place.
4.4 Heuristic in games
• One-player vs. two-player games.
• In the latter, we have an opponent who makes
hostile and unpredictable moves.
• This allows heuristic options to be explored
more fully.
• The search will be more difficult.
Fig 4.19 State space for a variant of nim. Each state partitions the seven
matches into one or more piles.
Nim game with 7 tokens.
Small state space can be
exhaustively searched.
Nim game and Minimax
• The main difficulty is accounting for the actions of
your opponent.
• Assumption: the opponent is using the same
knowledge of the state space as you use.
• This assumption provides a reasonable basis for
predicting the opponent’s behaviour.
• Minimax searches the state space under this
assumption.
• One player is called MIN and the other is called
MAX.
Minimax
• The names MIN and MAX are historical.
• MAX represents a player trying to win, i.e. to
maximize her advantage.
• MIN represents a player trying to minimize
MAX's score, i.e. to minimize MAX's chances to win.
– MIN makes moves that make MAX's position worse.
• Leaves are labelled 1 or 0 to indicate a win
for MAX or MIN respectively.
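A minimal minimax sketch in Python over a game tree given as nested lists, where a leaf is a number (its value for MAX) and an internal node is a list of children (an illustrative representation):

def minimax(node, maximizing):
    if not isinstance(node, list):
        return node                        # leaf: return its value
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX to move at the root, MIN one level down:
print(minimax([[3, 5], [2, 9]], True))     # MIN backs up 3 and 2; MAX picks 3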
Fig 4.20 Exhaustive minimax for the game of nim. Bold lines indicate
forced win for MAX. Each node is marked with its derived value (0 or 1)
under minimax.
If the parent is a MAX node, give it the maximum value among its children.
If the parent is a MIN node, give it the minimum value among its children.
MINIMAX in more complicated
(Larger) search spaces
• In a large state space, it is not usually possible
to reach the leaves.
• Normally we expand up to a certain level
(n-ply look-ahead).
• We cannot assign definitive win/loss values to
the frontier nodes, as they are not final game states.
• Instead, we use heuristics to assign values to
these nodes.
MINIMAX in more complicated
(Larger) search spaces
• The value that is propagated to the root
reflects the best value that can be reached.
• Q: how do we measure the advantage of one
player over another? Many possible
heuristics.
MINIMAX in more complicated
(Larger) search spaces
Some possible heuristics in chess:
• The difference in the number of pieces between
you and your opponent:
# of your pieces – # of your opponent's pieces.
• Consider the type of piece: queen, king,
rook, or an ordinary piece.
• Locations of pieces on the board.
Fig 4.21 Minimax applied to a hypothetical state space. Leaf states show heuristic
values; internal states show backed-up values.
Notes on n-ply look-ahead
• A heuristically promising path may lead to a
bad situation.
– You may take your opponent's rook, but later
lose your queen.
• This is called the horizon effect: the effect of
choosing a heuristic path that may lead to a lost
game.
• Remedy: search deeper in exceptionally promising
states.
Notes on n-ply look-ahead
Possible research:
• What is the difference between the estimate
of minimax and the minimax of estimates?
• Deeper search with minimax evaluation
does not always mean better search.
• Resources:
– Textbook: p. 154.
– Pearl (1984)
Fig 4.22 Heuristic measuring conflict applied to states of tic-tac-toe.
Fig 4.23 Two-ply minimax applied to the opening move of tic-tac-toe, from
Nilsson (1971).
Fig 4.24 Two-ply minimax, and one of two possible MAX second moves,
from Nilsson (1971).
Fig 4.25 Two-ply minimax applied to X’s move near the end of the game,
from Nilsson (1971).
Alpha-beta procedure
• Alpha-beta pruning attempts to improve
search efficiency by ignoring branches that
cannot affect the result.
• It works depth-first and creates two values
during the search: alpha and beta.
– Alpha value: associated with MAX nodes; can
never decrease.
– Beta value: associated with MIN nodes; can
never increase.
Alpha-beta procedure
• If the alpha value of a MAX node is 6, do not
consider any child that would
return a value less than 6.
• That child and all of its children are pruned.
• The current alpha is the minimum value MAX
is already guaranteed.
• In the same way, if the beta value of a MIN node
is 5, it does not need to consider any child
with a value greater than 5.
Alpha-beta procedure
Rules for search termination:
• Search can be stopped below any MIN node
having a beta value less than or equal to the
alpha value of any of its MAX node ancestors.
• Search can be stopped below any MAX node
having an alpha value greater than or equal to
the beta value of any of its MIN node
ancestors.
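A sketch of these rules in Python, on the same nested-list trees as the minimax sketch above (alpha is the best value MAX can already force, beta the best for MIN):

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return node                            # leaf value
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                          # prune: MIN will avoid this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                              # prune: MAX will avoid this branch
    return value

print(alphabeta([[3, 5], [2, 9]], float("-inf"), float("inf"), True))
# -> 3, without ever evaluating the leaf 9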
Fig 4.26 Alpha-beta pruning applied to state space of Fig 4.21. States without
numbers are not evaluated.
Fig 4.27 Number of nodes generated as a function of branching factor,
B, for various lengths, L, of solution paths. The relating equation is
T = B(B^L – 1)/(B – 1), adapted from Nilsson (1980).
Complexity
• Reduce the size of open by saving only a few
states in it: beam search.
• This reduces the search space, but it may
discard the best, or the only, solution.
Fig 4.28 Informal plot of cost of searching and cost of computing
heuristic evaluation against informedness of heuristic, adapted
from
Nilsson (1980).
Exercise 4.5
Fig 4.29 The sliding block puzzle.
Exercise 4.13:
Fig 4.30.
Exercise 4.17:
Fig 4.31.
Building Control Algorithms for State Space Search
5.0 Introduction
5.1 Recursion-Based Search
5.2 Production Systems
5.3 The Blackboard Architecture for Problem Solving
5.4 Epilogue and References
5.5 Exercises

George F Luger, ARTIFICIAL INTELLIGENCE 5th edition: Structures and Strategies for Complex Problem Solving
6.2 Production Systems
• Used for implementing search algorithms
and for modelling human problem solving.
• It provides pattern-directed control of a
problem-solving process and consists of:
– a set of production rules,
– a working memory,
– a recognize-act control cycle.
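A minimal recognize-act cycle sketch in Python; working memory is a set of facts, each production pairs a condition with an action over it, and conflict resolution is simply "first matching rule" (the simplest strategy mentioned later in these slides):

def run_production_system(rules, working_memory):
    while True:
        fired = False
        for condition, action in rules:
            if condition(working_memory):           # recognize
                new_facts = action(working_memory)  # act
                if not new_facts <= working_memory:
                    working_memory |= new_facts
                    fired = True
                    break
        if not fired:
            return working_memory                   # no rule adds anything: halt

# Illustrative rule (borrowed from the car example of Chapter 8):
rules = [(lambda wm: {"gas in tank", "gas in carburetor"} <= wm,
          lambda wm: {"engine is getting gas"})]
print(run_production_system(rules, {"gas in tank", "gas in carburetor"}))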
Fig 6.1 A production system. Control loops until working memory pattern no
longer matches the conditions of any productions.
Fig 6.2 Trace of a simple production system used for sorting a string
composed of letters a,b, and c.
Production System and Cognition
• Human subjects were monitored in
problem-solving activities such as
chess and solving problems in
predicate logic.
• The human behaviour (protocol)
like verbal description and eye
movement was recorded, broken
down and coded into rules.
Production System and Cognition
• These rules are used to construct a problem
behaviour graph.
• A production system is then used to implement
search in this graph.
• The productions correspond to the
problem-solving skills in the human's long-term
memory.
• The working memory represents short-term
memory.
Production System and Cognition
• The production system provides a model for
encoding human expertise in the form of
rules and for designing pattern-driven search
algorithms.
Production System and Expert systems
• A production system is not necessarily assumed to
actually model human problem-solving
behaviour, but:
• it provides:
– modularity of rules,
– separation of knowledge and control,
– separation of working memory and
problem-solving knowledge.
• Therefore, it is an ideal tool for designing and
building expert systems.
Production System Languages and
systems
• OPS
• OPS5
• CLIPS – C implementation
• JESS – Java implementation
Examples of Production Systems
Fig 6.3 The 8-puzzle as a production system.
Fig 6.4 The 8-puzzle searched by a production system with loop detection and
depth bound, from Nilsson (1971).
Fig 6.5 Legal moves of a chess knight.
Fig 6.6 a 3 x 3 chessboard with move rules for the simplified knight tour
problem.
Table 6.1 Production rules for the 3 x 3 knight problem.
Fig 6.7 A production system solution to the 3 x 3 knight’s tour problem.
Fig 6.8 The recursive path algorithm as production system.
Control of search - Data-driven or goal-driven
• In data-driven search we begin with the problem
description and infer new knowledge by
applying the production rules.
• Initial knowledge and inferred knowledge are
placed in the working memory.
• When the condition of a rule matches the
working memory, the rule is fired and its
action is added to the working memory.
• This continues until a goal state is reached.
Fig 6.9 Data-driven search in a production system.
Conflict resolution
strategy:
Choose the enabled
rule that has fired
least recently (or not
at all).
Fig 6.10 Goal-driven search in a production system.
Complexity of the search in either approach is measured by the
branching factor or penetrance
Fig 6.11 Bidirectional search missing in both directions, resulting in excessive
search.
Fig 6.12 Bidirectional search meeting in the middle, eliminating much of the space
examined by unidirectional search.
Control of search - through Rule structure
• The above relations are logically equivalent, but
since their structure (syntax) is different, their
applicability differs.
Control of search - through conflict Resolution
The simplest strategy: choose the first rule that matches
the working memory.
Strategies in OPS5:
1. Refraction: once a rule has fired, it may not fire
again until the working-memory elements that match its
conditions have been modified. Discourages
looping.
2. Recency: prefer rules that match the patterns
most recently added to the working memory.
Focuses on a single line of reasoning.
3. Specificity: prefer a more specific problem-specific
rule over a more general one. A more specific
rule has more conditions (matches fewer situations).
Major advantages of production systems for artificial intelligence
Separation of Knowledge and Control
A Natural Mapping onto State Space Search
Modularity of Production Rules
Pattern-Directed Control
Opportunities for Heuristic Control of Search
Tracing and Explanation
Language Independence
A Plausible Model of Human Problem-Solving
Major advantages of production systems for artificial intelligence
• Separation of Knowledge and Control:
An elegant model for the separation of knowledge and control.
Control is provided through the recognize-act cycle.
Problem-specific knowledge is encoded in the rules.
Knowledge or control can be modified without the need to modify the
other component.
• A Natural Mapping onto State Space Search:
Successive states (contents) of the working memory form the nodes
of a state space graph.
Production rules represent the set of transitions between states.
Conflict resolution represents the selection of a branch.
Major advantages of production systems for artificial intelligence
• Modularity of Production Rules:
No direct interaction between rules;
they interact only through changes to the working memory.
No rule can call another rule,
and rules cannot change values used by other rules.
This independence supports incremental development of (expert)
systems.
• Pattern-Directed Control:
Rules can fire in any sequence, which adds flexibility.
• Opportunities for Heuristic Control of Search:
Several heuristics can be used to control the search, i.e. conflict resolution.
Major advantages of production systems for artificial intelligence
• Tracing and Explanation:
Easy to trace the system;
each rule represents a problem-solving step.
The chain of rules used represents the solution path (the human's line of
reasoning).
In programming languages, a single line of code is most likely
meaningless on its own.
• Language Independence:
Independent of the representation used in rules and working memory, as long
as it supports pattern matching.
Predicate calculus is used for representation with modus ponens inference,
although other representations may be used.
Predicate calculus involves inference with certainty;
other languages may work with probabilities (uncertainty).
Major advantages of production systems for artificial intelligence
• A Plausible Model of Human Problem-Solving (Newell and Simon,
1972):
A good way to model human problem solving, especially in cognitive
science research.
Fig 6.13 Blackboard architecture
Strong Method Problem Solving
8.0 Introduction
8.1 Overview of Expert System Technology
8.2 Rule-Based Expert Systems
8.3 Model-Based, Case-Based and Hybrid Systems
8.4 Planning
8.5 Epilogue and References
8.6 Exercises

George F Luger, ARTIFICIAL INTELLIGENCE 5th edition: Structures and Strategies for Complex Problem Solving
Knowledge-Intensive Problem Solving
• Knowledge-intensive (strong method)
problem solving focuses on solving a
problem by having rich knowledge.
• Human experts know a lot about
their area of expertise.
Expert Systems
• Expert systems use knowledge specific to
a problem domain to provide expert-quality
performance in an application area.
• Expert system designers (AI specialists or
knowledge engineers) acquire knowledge with
the help of human domain experts.
• Expert systems emulate the human expert's
methodology and performance.
Expert Systems
• Expert systems are not general, i.e. they do not
know everything.
• Expert systems usually focus on a narrow
set of problems.
• Knowledge is both theoretical and practical.
• Human experts usually augment
theoretical understanding with tricks
(rules of thumb), shortcuts, and heuristics
for using the knowledge, gained through
problem-solving experience.
Features of Expert Systems
Expert systems are knowledge-intensive
and use heuristics; therefore they:
• Allow inspection of the reasoning process,
presenting intermediate steps
and answering questions about the
solution process.
• Allow easy modification, by adding or
deleting skills from the knowledge base.
• Use heuristics, exploiting (imperfect)
knowledge to get useful solutions.
Expert Systems
• We are often sceptical even of a
diagnosis from a human expert.
• Explanation and justification are important
if the user is to accept a recommendation
from a computer.
• To allow giving explanations, expert
systems should be easily prototyped, tested,
and changed.
– Example: easy modification of rules in a
production system.
– Easy modification of the knowledge base is a
key requirement.
Expert Systems
• Expert systems have been applied in a wide
range of areas, including:
– medicine, mathematics, engineering,
chemistry, geology, computer science,
business, law, defence, and education.
Expert Systems Categories (Waterman,
1986)
• Interpretation: giving a conclusion from
raw data.
• Prediction: the probable consequences of
given situations.
• Diagnosis: causes of a malfunction in a
complex situation, based on observable
symptoms.
• Design: configuring system components
subject to constraints.
Expert Systems Categories (Waterman,
1986)
• Planning: achieve a set of goals by devising a
sequence of actions, given starting conditions
and constraints.
• Monitoring: comparing observed behaviour
with expected behaviour.
• Instruction: assisting education in technical
domains.
• Control: controlling the behaviour of a
complex environment.
Expert Systems Technology
Successful systems involve:
• Choice of appropriate application
domain.
• Acquisition and formalization of
problem-solving knowledge.
Fig 8.1 Architecture of a typical expert system for a particular problem
domain.
Separation of knowledge
and control
Several advantages to be read from
the book
Architecture of a Typical Expert System
• Knowledge can be if-then rules in
rule-based systems.
• The inference engine applies the
knowledge to the solution of the actual
problem. It is an interpreter for the
knowledge base.
• Case-specific data contains
knowledge about the case under
consideration, such as the data given in
a problem instance and partial
solutions.
Architecture of a Typical Expert System
• The explanation subsystem allows the
program to explain its reasoning to
the user:
• justification for the system's conclusions
(How);
• explanation of why the system needs
specific data (Why).
• The knowledge-base editor helps the
programmer locate and correct bugs in the
program's performance.
Ready expert system shells
• The broken line (in Fig 8.1) indicates the
system shell.
– CLIPS from NASA
– JESS from Sandia National Laboratories.
– LISP and PROLOG shells are also
available.
Guidelines to determine whether a problem is appropriate for
expert system solution:
Developing a system that is too complex, poorly understood, or unsuited to the technology may
waste time, cost, and effort.
To determine whether a problem is appropriate for an expert system solution:
1. The need for the solution justifies the cost and effort of building an expert system.
― Large savings in money and time are expected in domains like business and defence.
2. Human expertise is not available in all situations where it is needed.
― e.g. remote mining and drilling sites, where a geologist or an engineer would have to travel a
long distance to reach the area.
3. The problem may be solved using symbolic reasoning.
― No physical or perceptual skills like those of a human being are required.
― Robots lack the flexibility of the human being.
Guidelines to determine whether a problem is appropriate for
expert system solution (continued):
4. The problem domain is well structured and does not require commonsense reasoning.
― Terms are defined, and domains have clear and specific conceptual models.
― No commonsense reasoning is involved.
5. The problem may not be solved using traditional computing methods.
― The problem should be one solved by an expert, not by a normal computer program.
6. Cooperative and articulate experts exist.
― Knowledge comes from a specialized human expert in the domain.
― Experts should be willing and able to share knowledge.
7. The problem is of proper size and scope.
― Example: modeling all the knowledge of a medical doctor is not feasible.
People Involved in Building an Expert System
• Knowledge engineer (AI and representation expert).
• Domain expert.
• End user.
People Involved in Building an Expert System
• Knowledge engineer (AI and
representation expert):
– chooses the SW and HW tools,
– helps the domain expert articulate the
knowledge,
– implements the knowledge in a correct and
efficient knowledge base.
People Involved in Building an Expert System
• Domain expert:
– provides the needed knowledge,
– has worked in the domain and understands its
problem-solving techniques,
– is able to handle imprecise data and evaluate
partial solutions.
• End user:
– determines the design constraints, such as the
level of required explanation and the
interface used.
Fig 8.2 Exploratory development cycle.
8.1.3 Knowledge Acquisition
• Difficulties/issues facing knowledge
acquisition.
Fig 8.4 The role of mental or conceptual models in problem solving.
Conceptual model: the
knowledge engineer’s
evolving conception of the
domain knowledge.
8.2 Rule-Based Expert Systems
• Knowledge is represented as if ...
then ... rules.
8.2.1 Goal-Driven Problem Solving
A small expert system for analysis of automotive problems.
Rule 1: if
the engine is getting gas, and
the engine will turn over,
then
the problem is spark plugs.
Rule 2: if
the engine does not turn over, and
the lights do not come on
then
the problem is battery or cables.
Rule 3: if
the engine does not turn over, and
the lights do come on
then
the problem is the starter motor.
Rule 4: if
there is gas in the fuel tank, and
there is gas in the carburetor
then
the engine is getting gas.
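A goal-driven (backward-chaining) sketch of part of this rule set in Python: to prove a goal, find a rule that concludes it and prove its premises; otherwise ask the user. Hard-coded answers stand in for the interactive dialogue, and only Rules 1 and 4 are encoded:

rules = {
    "the problem is spark plugs": ["the engine is getting gas",
                                   "the engine will turn over"],     # Rule 1
    "the engine is getting gas": ["there is gas in the fuel tank",
                                  "there is gas in the carburetor"], # Rule 4
}
answers = {"there is gas in the fuel tank": True,
           "there is gas in the carburetor": True,
           "the engine will turn over": True}

def prove(goal):
    if goal in rules:                           # goal reduction via a rule
        return all(prove(p) for p in rules[goal])
    return answers.get(goal, False)             # askable: query the "user"

print(prove("the problem is spark plugs"))      # -> True under these answers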
Fig 8.5 The production system at the start of a consultation in
the car diagnostic example.
Fig 8.6 The production system after Rule 1 has fired.
Fig 8.7 The system after Rule 4 has fired. Note the stack-based
approach to goal reduction.
Ask the user about these three entries.
Fig 8.8 The and/or graph searched in the car diagnosis example,
with the conclusion of Rule 4 matching the first premise of Rule 1.
Limitations of the example
• Knowledge is limited.
• Rules are phrased in English rather
than a formal language.
• On finding a solution, the system
stops rather than reporting its
diagnosis.
The following dialogue begins with the computer asking the user
about the goals present in working memory.
Gas in fuel tank?
Yes
Gas in carburetor?
Yes
Engine will turn over?
Why?
• It has been established that:
1. the engine is getting gas,
therefore, if
2. the engine will turn over,
then the problem is the spark plugs.
How "the engine is getting gas"?
This follows from rule 4:
if
gas in fuel tank, and
gas in carburetor
then
engine is getting gas.
gas in fuel tank was given by the user
gas in carburetor was given by the user
31
Fig 8.9 The production system at the start of a consultation for data-driven reasoning.
32
Fig 8.10 The production system after evaluating the first premise of Rule 2, which then fails.
Fig 8.11 The data-driven production system after considering Rule 4, beginning its second pass through the rules.
33
Fig 8.12 The search graph as described by the contents of working memory (WM) for the data-driven breadth-first search of the rule set of Section 8.2.1.
34
Heuristic control of search
Factors affecting the performance of the search:
• Order of the rules.
• Organisation of rule premises.
• Costs of different tests.
The RETE algorithm improves the efficiency of rule matching.
35
Model-Based Systems
• A model-based system is a knowledge-based reasoner whose analysis is founded directly on the specification and functionality of a physical system.
• Early model-based systems were intended to create software models of various physical devices, such as electronic circuits, for instructional purposes.
• A model-based system tells its user what to expect and when.
36
Model-Based Systems include:
• Description of each component in the device.
• Description of the device’s internal structure.
– Components and their interconnections.
• Observation of the actual device’s performance, which requires measurements of its inputs and outputs.
37
Fig 8.13 The behavior description of an adder, after Davis and Hamscher (1992).
38
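To make the behavior-description idea concrete, here is a minimal PROLOG sketch (predicate names assumed, not Davis and Hamscher's notation): the model predicts the output the adder should produce, and the component becomes suspect when prediction and measurement disagree.

% Behavior rule for an adder (sketch).
add_behavior(In1, In2, Out) :- Out is In1 + In2.

% The adder is suspect when its predicted output differs from the
% measured one.
suspect_adder(In1, In2, Observed) :-
    add_behavior(In1, In2, Expected),
    Expected =\= Observed.

% ?- suspect_adder(2, 3, 6).   yes  (expected 5, observed 6)
% ?- suspect_adder(2, 3, 5).   no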
Fig 8.14 Taking advantage of direction of information flow, after Davis and Hamscher (1992).
39
Fig 8.15 A schematic of the simplified Livingstone propulsion system, from Williams and Nayak (1996b).
40
Fig 8.16 A model-based configuration management system, from Williams and Nayak (1996b).
41
Case-Based Reasoning (CBR)
• Reasoning from cases, examples of
past problems and their solutions.
• CBR uses an explicit database of
problem solutions to address new
situations.
• Previous solutions may be collected from a human expert through a knowledge-engineering process.
42
CBR Examples
• Medical education does not rely solely on theoretical models but depends heavily on case histories and the intern’s experience with other patients and their treatments.
• Lawyers select past law cases that are similar to the current client’s case and suggest a favourable decision.
– Legal precedents, earlier situations.
43
CBR Examples
• Architects draw on their knowledge
of pleasing buildings to design new
buildings that people find pleasing
and comfortable.
• Historians use stories from the past
to help statesmen, bureaucrats, and
citizens understand past events and
plan for the future.
44
CBR
• CBR simplifies knowledge acquisition if we record a human expert’s solutions to a number of problems and let a case-based reasoner select and reason from the appropriate case.
• This saves the knowledge engineer the trouble of building general rules from the examples, i.e. the rules are generalised automatically.
• CBR enables an expert system to learn from experience.
45
Case-based reasoners share a common structure.
For each new problem they:
1. Retrieve appropriate cases from memory.
2. Modify a retrieved case so that it will apply
to the current situation.
3. Apply the transformed case.
4. Save the solution, with a record of success
or failure, for future use.
46
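A toy PROLOG sketch of step 1 (retrieval), under an assumed representation that is not given in the slides: each case is a case(Features, Solution) fact, and a case applies when all of its features occur in the new problem description.

% Assumed toy case base.
case([engine_wont_start, lights_dim], check_battery).
case([engine_wont_start, lights_ok],  check_starter).

% Retrieve a case whose features all appear in the new problem.
retrieve(Problem, Solution) :-
    case(Features, Solution),
    all_present(Features, Problem).

all_present([], _).
all_present([F|Fs], Problem) :-
    member(F, Problem),
    all_present(Fs, Problem).

% ?- retrieve([engine_wont_start, lights_dim, cold_morning], S).
% S = check_battery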
CBR data structure
• Cases may be recorded as relational tuples, where a subset of the arguments records the features to be matched and the other arguments record the solution steps.
• Cases can be represented as proof trees.
• Cases can be represented as a set of …
47
Kolodner (1993) offers a set of possible preference heuristics to help
organize the storage and retrieval of cases. These include:
1. Goal-directed preference. Organize cases, at least in part, by goal
descriptions. Retrieve cases that have the same goal as the current
situation.
2. Salient-feature preference. Prefer cases that match the most
important features or those matching the largest number of
important features.
3. Specificity preference. Look for as exact a match as possible before considering more general matches.
4. Frequency preference. Check first the most frequently matched
cases.
5. Recency preference. Prefer cases used most recently.
6. Ease of adaptation preference. Use ļ¬rst cases most easily adapted
to the current situation.
48
Fig 8.17 Transformational analogy, adapted from Carbonell (1983).
49
Hybrid Design
• Combine several approaches (rule-based, model-based, and CBR) to gain the advantages of each.
• Get the best of all these worlds.
50
Advantages of model-based reasoning include
1. The ability to use functional/structural knowledge of the
domain in problem-solving. This increases the reasoner’s
ability to handle a variety of problems, including those
that may not have been anticipated by the system’s
designers.
2. Model-based reasoners tend to be very robust. For the same reasons that humans often retreat to first principles when confronted with a novel problem, model-based reasoners tend to be thorough and flexible problem solvers.
3. Some knowledge is transferable between tasks. Model-based reasoners are often built using scientific, theoretical knowledge. Because science strives for generally applicable theories, this generality often extends to model-based reasoners.
55
4. Often, model-based reasoners can provide causal explanations. These can convey a deeper understanding of the fault to human users.
56
8.4 Planning
60
Fig 8.18
The blocks world.
61
The blocks world of figure 8.18 may now be represented by
the following set of predicates.
62
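The figure's predicate set is not reproduced in these notes. The following is one plausible PROLOG encoding of such a state (the block names and their arrangement are assumed):

% A possible blocks-world state (assumed blocks a-e).
ontable(a).  ontable(c).  ontable(d).
on(b, a).    on(e, d).
clear(b).    clear(c).    clear(e).
gripping(nothing).   % the gripper is empty; Luger writes gripping()

% Alternatively, clear/1 can be derived rather than listed:
% clear(X) :- \+ on(_, X).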
A number of truth relations or rules for performance are created for clear(X), ontable(X), and gripping().
63
Fig 8.19 Portion of the state space for a portion of the blocks world.
64
Using the blocks example, the four operators pickup, putdown, stack, and unstack are represented as triples of descriptions.
65
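As a sketch of one such triple (a precondition list, an add list, and a delete list; the slide's own notation is not reproduced here), pickup(X) might be encoded as:

% pickup(X) as (preconditions, add list, delete list): a sketch.
operator(pickup(X),
         [gripping(nothing), clear(X), ontable(X)],   % preconditions
         [gripping(X)],                               % add list
         [ontable(X), gripping(nothing)]).            % delete list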
Fig 8.20 Goal state for the blocks world.
66
Fig 8.21 A triangle table, adapted from Nilsson (1971).
67
Fig 8.22 A simple TR tree showing condition-action rules supporting a top-level goal, from Klein et al. (2000).
68
Fig 8.23 Model-based reactive configuration management, from Williams and Nayak (1996b).
69
Fig 8.24 The transition system model of a valve, from Williams and Nayak (1996a).
70
Fig 8.25 Mode estimation (ME), from Williams and Nayak (1996a).
71
Fig 8.26 Mode reconfiguration (MR), from Williams and Nayak (1996a).
72
Programming in Logic (PROLOG)
George F Luger
ARTIFICIAL INTELLIGENCE 5th edition
Structures and Strategies for Complex Problem Solving
1
PROLOG
• What is true
• What needs to be proved
NOT
• How to do it, like in other programming languages
2
PROLOG
• Edinburgh Syntax
• Programming in PROLOG
– Clocksin and Mellish, 1984
3
PROLOG EXAMPLES
4
EXAMPLE 1
Set of facts:
parent(ali,ahmad).
parent(ahmad,salem).
parent(ali,fatema).
parent(fatema,osama).
male(ali).
male(ahmad).
male(salem).
male(osama).
female(fatema).
5
EXAMPLE 1
• Every sentence ends with a period “.”
• Names that start with a lower-case letter are constants.
• Variables start with an upper-case letter.
6
EXAMPLE 1
? parent(ali, ahmad).
By unifying this statement with the axioms
(assertions) in order.
The above query (goal) also ends with a period
Ans:
yes
?parent(ali,X).
The above statement asks for all of ali’s children
Semicolon asks for more answers
Ans:
X=ahmad;
X=fatema;
no
Note: by unifying X with ahmad
7
EXAMPLE 1
?parent(X,Y).
X=ali
Y=ahmad;
X=ahmad
Y=salem;
X=ali
Y=fatema;
X=fatema
Y=osama;
no
8
EXAMPLE 1
?parent(X,ahmad), male(X).
Comma “,” means and.
Who is ahmad’s father?
It first unifies the first part, i.e. parent(X,ahmad).
It finds that X could take the value of ali.
It then attempts to prove male(ali).
Ans:
X=ali;
no
Backtracking occurs on two levels to find other answers.
9
EXAMPLE 1
Add the following rule (a rule, not a fact):
father(X,Y):- parent(X,Y) , male(X).
father(X,Y) is called the head.
parent(X,Y) , male(X) is called the body.
?father(X,ahmad).
Ans:
X=ali;
No
?father(fatema,Y).
no
Because fatema is unified with X and there is no way to prove male(fatema).
10
EXAMPLE 1
sibling(X,Y):- parent(Z,X) , parent(Z,Y).
?sibling(X,Y)
X=ahmad,
Y=ahmad;
X=ahmad,
Y=fatema;
X=salem,
Y=salem;
X=fatema,
Y=ahmad;
X=fatema,
Y=fatema;
X=osama,
Y=osama;
no
(Wrong definition: everyone is reported as their own sibling!)
11
EXAMPLE 1
sibling(X,Y):- parent(Z,X) , parent(Z,Y),
X\==Y.
?sibling(X,Y)
X=ahmad,
Y=fatema;
X=fatema,
Y=ahmad;
no
12
EXAMPLE 1
Define the uncle relation, uncle(X,Y).
uncle(X,Y):- parent(Z,Y),
             parent(G,Z),
             parent(G,X),
             X\==Z,
             male(X).
13
HW: Define the following rules:
• mother
• son
• daughter
• sister
• sibling
• grandfather(X,Y):- parent(X,Z), parent(Z,Y), male(X).
• grandmother
• cousin(X,Y)
14
Built-in mathematical predicates
• X =:= Y   true when X equals Y
• X =\= Y   true when X does not equal Y
• X < Y     true when X is less than Y
• X > Y     true when X is greater than Y
• X =< Y    true when X is less than or equal to Y
• X >= Y    true when X is greater than or equal to Y
15
Built-in mathematical predicates
?6 =:= 4+2.
yes
?8 =:= 4*2.
yes
?7 =:= 4+6.
no
?6 =:= 6.
yes
16
Built-in logical predicates
• X == Y    true when X and Y are identical terms (no unification is attempted)
• X \== Y   true when X and Y are not identical
• X = Y     attempts to unify X with Y
17
Built-in logical predicates
?5 == 5.
yes
?X == Y.
no
?5+1 == 5+1.
yes
?5+1 == 1+5.
no
18
Built-in logical predicates
?X = ali.
X=ali
?X = 1/2.
X=1/2
?f(X) = f(ali).
X=ali
?f = g.
no
19
Built-in logical predicates
?5 \= 4.
yes
?ahmad @< basem.
yes
? 'Ali' @< 'Basem'.
yes
? X=1+2.
X=1+2
20
Built-in logical predicates
? X is 1+2.
X=3
? X is 4*2.
X=8
?X is 7//3.
X=2
? X is 5, X is 3+3.
no
21
Mathematical operations
+     Addition
-     Subtraction
*     Multiplication
/     Real division
//    Integer division
mod   Modulus
^     Exponent
22
Negation as failure
home(X):-not out(X).
out(ali).
?home(ali).
no
?home(X).
no    (out(X) succeeds with X=ali, so not out(X) fails for an unbound X)
?home(zaki).
yes
?out(zaki).
no
23
Horn Clauses
P1 ∧ P2 ∧ P3 ∧ P4 → R
is written as
R:- P1, P2, P3, P4.
24
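As a concrete instance of this mapping, the father rule from Example 1 earlier has exactly this Horn-clause form:

parent(X,Y) ∧ male(X) → father(X,Y)
father(X,Y):- parent(X,Y), male(X).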
Recursion
A predicate that calls itself.
Assume the following facts and attempt to define predecessor:
parent(ali,ahmad).
parent(ahmad,fatema).
parent(fatema,osama).
25
Recursion
• A non-recursive version
predecessor(X,Y):- parent(X,Y).
predecessor(X,Y):- parent(X,Z), parent(Z,Y).
predecessor(X,Y):- parent(X,Z1), parent(Z1,Z2), parent(Z2,Y).
• The above program handles only a limited number of generations.
26
Recursion
• Recursive version
predecessor(X,Y):- parent(X,Y).
predecessor(X,Y):- parent(X,Z), predecessor(Z,Y).
27
Recursion
? predecessor(ali,ahmad).
yes
? predecessor(ali,fatema).
yes
? predecessor(ali,osama).
yes
? predecessor(ali,X).
X=ahmad
28
Recursion – Factorial
29
Recursion – Factorial
fact(0,1).
fact(N,X):- N>0,
            N1 is N-1,
            fact(N1,X1),
            X is N*X1.
?fact(0,X).
X=1.
?fact(1,X).
X=1
30
Recursion – HW
• Write a program to calculate the sum of
the numbers:
1, 2, 3, ..., N
31
List Processing
• Empty lists are denoted as []
• [H | T] denotes a list with:
– head H, the first element.
– tail T, the list of the remaining elements.
– For [5, 2, 7, 10]: H is 5, T is [2, 7, 10]
32
List Processing
?[H | T]=[5, 2, 7, 10].
H=5
T=[2, 7, 10]
?[a,b,c,d]=[X,Y | T].
X=a
Y=b
T=[c, d]
33
List Processing
?[H | T]=[ali].
H=ali
T=[]
?[H | T]=[].
no
34
List Processing – membership
• Test for membership in a list
member(X, [X | _]).
member(X, [_ | T]):- member(X, T).
?member(5,[5, 7]).
yes
?member(5,[1, 5]).
yes
35
List Processing – membership
To get all the members of a list you can use:
?member(X,[a,b,c,d]).
X=a;
X=b;
X=c;
X=d;
no
36
List Processing - count
• To count how many members are in a list
count([], 0).
count([_ | T], C):- count(T, C1),
                    C is C1+1.
?count([2,4,7],C).
C=3
37
List Processing - sum
• To calculate the sum of the elements in a list
sum([], 0).
sum([X | T], S):- sum(T, S1),
                  S is S1+X.
?sum([2,4,7],S).
S=13
38
List Processing - append
• To append two lists and produce a list
containing the elements of the first list then the
elements of the second list
append([], L, L).
append([H | T1], L, [H | T]):- append(T1, L, T).
39
List Processing - append
?append([r,t],[a,b,c],L).
L=[r,t,a,b,c].
?append(L1,L2,[a,b,c]).
L1=[]
L2=[a,b,c];
L1=[a]
L2=[b,c];
L1=[a,b]
L2=[c];
L1=[a,b,c]
L2=[];
no
40
List Processing - split
• To divide a list into two parts, positive and negative
positive([], [], []).
positive([X|Y], [X|L1], L2):- X >= 0,
                              positive(Y, L1, L2).
positive([X|Y], L1, [X|L2]):- X < 0,
                              positive(Y, L1, L2).
(The X < 0 test prevents positive numbers from also appearing in the negative list on backtracking.)
41
List Processing – write elements
• To print the elements of a list
write_a_list([]).
write_a_list([H|T]):- write(H), nl, write_a_list(T).
? write_a_list([5, 6, 7]).
5
6
7
42
Cut operator
• To find the maximum of two numbers, you might write:
max(X,Y,X):- X>=Y.
max(X,Y,Y).
?max(5,3,M).
M=5;
M=3;
no
43
Cut operator
• To find the maximum of two numbers, you might write:
max(X,Y,X):- X>=Y, !.
max(X,Y,Y).
?max(5,3,M).
M=5;
no
44
Cut operator
• Add an element to a list only if that
element does not exist already in the list
add(X, L, L):- member(X,L), !.
add(X, L, [X|L]).
45
Self test list exercises
1. Calculate the max/min value in
a list.
? min_in_list([5,7,3,7,9], A).
A=3
? max_in_list([8,4,9,4],X).
X=9
46
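One possible solution to exercise 1, written in the same style as the max/3 predicate shown earlier (try the exercise yourself first):

min_in_list([X], X).
min_in_list([H|T], M) :- min_in_list(T, M1), min2(H, M1, M).
min2(X, Y, X) :- X =< Y, !.
min2(_, Y, Y).

max_in_list([X], X).
max_in_list([H|T], M) :- max_in_list(T, M1), max2(H, M1, M).
max2(X, Y, X) :- X >= Y, !.
max2(_, Y, Y).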
Self test list exercises
2. Calculate how many times a specific number appears in a list.
3. Take a list as input and return
its reverse in another list.
4. Take two lists as input and
return their union, intersection,
and difference in another list.
5. Delete one number from a list.
47
Homework
1. Write a program to calculate the
absolute value of X, absval(X,Y)
where X is the input and Y is the
output.
2. Write a program to define even(X), which is true only if X is even.
3. Write a program write_reverse to
print the elements of a list in a
reverse order
48
PROLOG Self test examples
?concatenate([2, 3, 5], [7, 9, 5], B).
B=[2, 3, 5, 7, 9, 5]
?order([2, 8, 3, 5], B).
B=[2, 3, 5, 8]
49
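Possible definitions matching the queries above (a sketch: concatenate mirrors append, and order is an insertion sort built from the list techniques of the earlier slides):

concatenate([], L, L).
concatenate([H|T], L, [H|R]) :- concatenate(T, L, R).

order([], []).
order([H|T], Sorted) :- order(T, S1), insert_ordered(H, S1, Sorted).

insert_ordered(X, [], [X]).
insert_ordered(X, [Y|T], [X,Y|T]) :- X =< Y, !.
insert_ordered(X, [Y|T], [Y|R]) :- insert_ordered(X, T, R).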
The fail predicate
country(jordan).
country(egypt).
country(england).
print_countries:- country(X),
                  write(X),
                  nl,
                  fail.
print_countries.
50
The fail predicate
?print_countries.
jordan
egypt
england
yes
51
The repeat command
do:- repeat, read(X), square(X).
do.
square(stop):- !.
square(X):- Y is X*X, write(Y), nl, fail.
52
The repeat command
?do.
5.
25
6.
36
stop.
yes
53
The repeat command
do:- read(X), square(X).
square(stop).
square(X):- X\==stop,
            Y is X*X,
            write(Y), nl,
            do.
54
Input/output procedures – write
?X=5, write(X).
5
yes
?write([a,b,5,6]).
[a,b,5,6]
yes
55
Input/output procedures – write
?write(date(16,11,2008)).
date(16,11,2008)
yes
56
Input/output procedures – read
?read(X).
a.
X=a.
?read(stop).
test
no
57
Dynamic programs
• Programs that are able to modify
themselves.
• Modification means the ability to add/remove facts at runtime.
• Such procedures include:
– assert(C)
– retract(C)
– abolish(C)
58
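A practical note for modern systems such as SWI-Prolog (an addition, not from the slides): a predicate you intend to modify at runtime should be declared dynamic; otherwise asserting or retracting clauses of an already-compiled predicate raises a permission error.

:- dynamic parent/2.   % allow parent/2 to be modified at runtime

% ?- assert(parent(zaid, faris)).
% yes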
Dynamic programs - assert
• assert(C)
?parent(zaid, faris).
no
?assert(parent(zaid, faris)).
yes
?parent(zaid, faris).
yes
59
Dynamic programs - assert
• asserta(C) adds the new statement C to
the beginning of the program.
• assertz(C) adds the new statement C to
the end of the program.
60
Dynamic programs - retract
• retract removes a statement from the
program
?retract(parent(zaid, faris)).
yes
?parent(zaid, faris).
no
61
Dynamic programs - abolish
?abolish(parent,2).
yes
removes all statements defined as parent with arity 2.
62
Structures
employee(Name, House_no, Street_name, City_name, Day, Month, Year).
employee(ali,5,salt_street,amman,12,10,1980).
?employee(ali,No,S,C,D,M,Y).
No=5
S=salt_street
C=amman
D=12
M=10
Y=1980
63
Structures
employee(ali,address(5,salt_street,amman),date(12,10,1980)).
employee(hasan,address(12,university_street,zarqa),date(7,3,1985)).
employee(samer,address(9,madina_street,amman),date(2,9,1987)).
?employee(hasan,A,D).
A=address(12,university_street,zarqa)
D=date(7,3,1985)
?employee(Name,address(No,St,amman),_).
Name=ali
No=5
St=salt_street;
Name=samer
No=9
St=madina_street;
false
64
Structures
?employee(Name,A,date(_,_,1985)).
Name=hasan
A=address(12,university_street,zarqa);
no
?employee(ali,A,date(Day, Month, Year)), 2011 - Year > 25.
A=address(5,salt_street,amman)
Day=12
Month=10
Year=1980;
no
65
Structures – Binary Trees
(Diagrams of three example binary trees: Example 1, Example 2, and Example 3. Their PROLOG encodings appear on the following slides.)
66
Structures – Binary Trees
• An empty binary tree is denoted nil.
• A non-empty binary tree has 3 components:
– Left subtree
– Root
– Right subtree
• It can be represented using the following
structure:
bin_tree(Left_subtree, Root, Right_subtree).
67
Structures – Binary Trees
Example 1 (a single node a):
bin_tree(nil, a, nil).
68
Structures – Binary Trees
Example 2 (root a with children b and c):
bin_tree(bin_tree(nil,b,nil), a, bin_tree(nil,c,nil)).
69
Structures – Binary Trees
Example 3 (root 5; left child 2; right child 10, whose left child is 7 and whose right child 9 has right child 3):
bin_tree(bin_tree(nil,2,nil), 5,
         bin_tree(bin_tree(nil,7,nil), 10,
                  bin_tree(nil,9, bin_tree(nil,3,nil)))).
70
Structures – Binary Trees – Count
count(nil, 0).
count(bin_tree(Left,Root,Right), C):- count(Left,C1),
                                      count(Right,C2),
                                      C is C1+C2+1.
71
Structures – Binary Trees – Count V2
count(nil, 0).
count(bin_tree(Left,_,Right), C):- count(Left,C1),
                                   count(Right,C2),
                                   C is C1+C2+1.
72
Structures – Binary Trees – Sum
sum(nil, 0).
sum(bin_tree(Left,Root,Right), S):- sum(Left,S1),
                                    sum(Right,S2),
                                    S is S1+S2+Root.
73
Structures – Binary Trees – Depth
depth(nil, 0).
depth(bin_tree(Left,Root,Right), D):- depth(Left,D1),
                                      depth(Right,D2),
                                      D1>=D2,
                                      D is D1+1.
depth(bin_tree(Left,Root,Right), D):- depth(Left,D1),
                                      depth(Right,D2),
                                      D2>D1,
                                      D is D2+1.
74
Structures – Binary Trees – Depth V2
depth(nil, 0).
depth(bin_tree(Left,Root,Right), D):- depth(Left,D1),
                                      depth(Right,D2),
                                      max(D1,D2,M),
                                      D is M+1.
max(X,Y,X):- X>=Y, !.
max(X,Y,Y):- X<Y.
75
Binary Tree – HW
• Write a PROLOG program max_tree to find the maximum value in a binary tree (bin_tree) assuming all values are numeric.
• Write a PROLOG program to add a
new value to an ordered binary tree.
• Write a PROLOG program to display the values stored in an ordered binary tree in ascending order.
76
PROLOG Resources
http://en.wikibooks.org/wiki/Prolog/Lists
http://www.csupomona.edu/~jrfisher/www/prolog_tutorial/2_7.html
77