Artificial Intelligence
Mehdi Ebady Manaa
3rd class – Department of Network
College of IT- University of Babylon
1. Agent Learning
Why learning in agents?
One central element of intelligent behavior is the ability to learn from experience. There is no way that we can know a priori all of the situations that our intelligent agent will encounter. Learning from experience makes an agent better at its tasks and raises it to a higher level of ability; for example, it can learn which agents to trust and cooperate with, and which ones to avoid.
A learning agent can improve its performance based on prior experience.
There are three types of learning: supervised, unsupervised, and reinforcement learning.
Supervised learning
• The most common form of learning.
• The learning agent is trained by showing it examples of the
problem state or attributes along with the desired output or action.
• When it makes a prediction and the output differs from the desired output, the learning agent is adapted to produce the correct output.
• Examples are backpropagation neural networks and decision trees.
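To make the idea concrete, here is a minimal sketch (my own illustration, not part of the notes) of a supervised learner in Python: a one-node decision tree (decision stump) that is shown labelled examples and then predicts labels for new inputs. The attribute values and labels are invented.

    # Minimal supervised-learning sketch: a one-node decision tree (decision stump).
    # Each training example pairs an attribute value with the desired label.
    training_data = [(1.0, "avoid"), (2.0, "avoid"),
                     (8.0, "cooperate"), (9.0, "cooperate")]

    def train(examples):
        """Pick the threshold that classifies the labelled examples best."""
        best = None
        for threshold, _ in examples:
            errors = sum((x > threshold) != (label == "cooperate")
                         for x, label in examples)
            if best is None or errors < best[1]:
                best = (threshold, errors)
        return best[0]

    def predict(threshold, x):
        return "cooperate" if x > threshold else "avoid"

    threshold = train(training_data)
    print(predict(threshold, 1.5))   # -> "avoid"
    print(predict(threshold, 8.7))   # -> "cooperate"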
Unsupervised learning
• Used when the learning agent needs to recognize similarities between inputs or to identify features in the input data.
• The data is presented to the agent, and it adapts so that it partitions
the data into groups.
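As a hedged illustration (not from the notes), the sketch below partitions unlabelled numbers into two groups in the spirit of k-means; the data and the choice of two groups are assumptions made for the example.

    # Minimal unsupervised-learning sketch: 1-D k-means partitions unlabelled data.
    def kmeans_1d(points, k=2, iterations=10):
        centres = points[:k]                      # naive initialisation
        for _ in range(iterations):
            groups = [[] for _ in range(k)]
            for p in points:                      # assign each point to its nearest centre
                i = min(range(k), key=lambda j: abs(p - centres[j]))
                groups[i].append(p)
            centres = [sum(g) / len(g) if g else centres[i]
                       for i, g in enumerate(groups)]
        return groups

    data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]         # no labels are given
    print(kmeans_1d(data))    # -> [[1.0, 1.2, 0.8], [9.0, 9.5, 8.7]]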
Reinforcement learning
• Reinforcement learning can be seen as a middle stage between supervised learning and unsupervised learning.
• It is a special case of supervised learning where the exact desired
output is unknown.
• It is based only on the information of whether or not the actual
output is correct.
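A minimal sketch of this idea (an invented scenario, not from the notes): the agent never sees the desired output, only a reward that says whether its chosen action was correct, and it gradually comes to prefer the action that earns reward.

    import random

    # Reinforcement-learning sketch: feedback is only a correct/incorrect reward.
    actions = ["left", "right"]
    value = {a: 0.0 for a in actions}        # estimated value of each action
    counts = {a: 0 for a in actions}

    def environment(action):
        # Hidden rule, unknown to the agent: "right" is the correct action.
        return 1.0 if action == "right" else 0.0

    for step in range(200):
        if random.random() < 0.1:            # explore occasionally
            a = random.choice(actions)
        else:                                # otherwise exploit the current best estimate
            a = max(actions, key=lambda x: value[x])
        reward = environment(a)              # the only feedback the agent receives
        counts[a] += 1
        value[a] += (reward - value[a]) / counts[a]   # incremental average of rewards

    print(value)   # the value of "right" approaches 1.0; "left" stays near 0.0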
There are many paradigms of learning in intelligent agents.
Learning forms:
• Rote learning
• Learning by induction
• Weight adjustment learning
• Chunking, clustering, or abstraction
Rote learning
It is a powerful form of learning: learning by repetition. The idea is that one will be able to recall the meaning of the material more quickly the more one repeats it. It avoids understanding of a subject and focuses on memorization.
Our agent would be better able to respond to situations one week after we
started the training and would be even better one month later.
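A small sketch of rote learning (my own illustration): the agent memorizes each situation and its response verbatim and can only recall what it has already seen; there is no generalization.

    # Rote-learning sketch: memorise situation -> response pairs verbatim.
    memory = {}

    def train(situation, response):
        memory[situation] = response             # repetition simply reinforces the entry

    def recall(situation):
        # No understanding or generalisation: unseen situations stay unknown.
        return memory.get(situation, "unknown situation")

    train("traffic light is red", "stop")
    train("traffic light is green", "go")

    print(recall("traffic light is red"))        # -> "stop"
    print(recall("traffic light is amber"))      # -> "unknown situation"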
Weight adjustment learning
It is the basis for neural network learning. The idea is to adjust the weighting factors over time to improve the likelihood of a correct decision.
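A hedged sketch of weight adjustment in the perceptron style: whenever the decision is wrong, the weights are nudged in the direction that reduces the error, so the likelihood of a correct decision improves over time. The training data (logical AND) and learning rate are assumptions made for the example.

    # Weight-adjustment sketch: a single perceptron learning logical AND.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1                                   # learning rate (assumed)

    def decide(x):
        s = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if s > 0 else 0

    for epoch in range(20):
        for x, desired in examples:
            error = desired - decide(x)          # 0 if correct, +1/-1 if wrong
            # Adjust the weighting factors in proportion to the error.
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error

    print([decide(x) for x, _ in examples])      # -> [0, 0, 0, 1]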
Learning by induction
• Learning by example.
• Extract the important characteristics of the problem, allowing us to generalize to new inputs.
• Decision trees and neural networks both use induction for classification or prediction problems.
Chunking
It has the following characteristics:
• Taking individual units of information (chunks) and grouping them into larger units.
• Cutting down the amount of storage we need.
• Cutting down the processing time.
• By thinking at higher or more abstract levels, we can think “great
thoughts” without getting caught in the muddle of a million little
details.
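An illustrative sketch (my own, not from the notes): grouping individual digits into larger chunks, so far fewer units have to be stored and handled, which is the storage and processing saving described above.

    # Chunking sketch: group individual units of information into larger chunks.
    def chunk(sequence, size):
        """Group a flat sequence of units into chunks of the given size."""
        return [tuple(sequence[i:i + size]) for i in range(0, len(sequence), size)]

    digits = list("07811234567")                 # 11 individual units
    chunks = chunk(digits, 4)                    # phone-number style grouping

    print(len(digits), "units before chunking")  # -> 11 units before chunking
    print(len(chunks), "chunks after chunking")  # -> 3 chunks after chunking
    print(chunks)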
Clustering
It is a type of chunking.
• Finding groups of objects such that the objects in a group are similar (related) to one another and different from (unrelated to) the objects in other groups.
• This similarity could be used as a way of assigning meaning to
that group of samples.
An example would be clustering documents. It’s used to improve
the performance of a document search engine.
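As a hedged sketch of the document-clustering idea (the documents, threshold, and similarity measure below are my own choices for illustration), documents whose word sets overlap strongly are placed in the same group, which a search engine could then treat as one topic.

    # Document-clustering sketch: group documents by word overlap (Jaccard similarity).
    def jaccard(a, b):
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b)

    docs = [
        "network routing protocols and switches",
        "routing protocols for wireless networks",
        "neural networks for image recognition",
        "image recognition with deep neural networks",
    ]

    threshold = 0.25                             # similarity needed to join a cluster (assumed)
    clusters = []
    for doc in docs:
        for cluster in clusters:
            if jaccard(doc, cluster[0]) >= threshold:   # compare with the cluster's first document
                cluster.append(doc)
                break
        else:
            clusters.append([doc])               # otherwise start a new cluster

    for i, cluster in enumerate(clusters):
        print("cluster", i, cluster)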
Classification is another example of supervised learning, in two parts:
Classification Process (1): Construction
Training data:
NAME   RANK             YEARS   TENURED
Mike   Assistant Prof   3       no
Mary   Assistant Prof   7       yes
Bill   Professor        2       yes
Jim    Associate Prof   7       yes
Dave   Assistant Prof   6       no
Anne   Associate Prof   3       no
Classification Process (2): Prediction
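A hedged sketch of the two steps on the training data above: the construction step checks a candidate rule against the labelled rows, and the prediction step applies it to unseen records. The rule shown (professors, or staff with more than six years, are tenured) is simply the rule these six rows suggest, not a result stated in the notes; the unseen records are invented.

    # Classification sketch using the training data above.
    training_data = [
        ("Mike", "Assistant Prof", 3, "no"),
        ("Mary", "Assistant Prof", 7, "yes"),
        ("Bill", "Professor",      2, "yes"),
        ("Jim",  "Associate Prof", 7, "yes"),
        ("Dave", "Assistant Prof", 6, "no"),
        ("Anne", "Associate Prof", 3, "no"),
    ]

    def classify(rank, years):
        """Candidate rule: IF rank = 'Professor' OR years > 6 THEN tenured = 'yes'."""
        return "yes" if rank == "Professor" or years > 6 else "no"

    # Step 1 (construction): verify the rule is consistent with every training row.
    assert all(classify(rank, years) == tenured
               for _, rank, years, tenured in training_data)

    # Step 2 (prediction): apply the learned rule to unseen records.
    print(classify("Professor", 4))        # -> "yes"
    print(classify("Assistant Prof", 2))   # -> "no"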