What is Artificial Intelligence (AI)?
Artificial intelligence can be viewed from a variety of perspectives.

From the perspective of intelligence, artificial intelligence is making machines "intelligent" -- acting as we would expect people to act.
o The inability to distinguish computer responses from human responses
is called the Turing test.
o Intelligence requires knowledge
o Expert problem solving - restricting the domain so that significant relevant knowledge can be included
From a research perspective
"artificial intelligence is the study of how to make computers do things which,
at the moment, people do better" [Rich and Knight, 1991, p.3].
o AI began in the 1950s -- the first attempts were game playing
(checkers), theorem proving (a few simple theorems) and general
problem solving (only very simple tasks)
o General problem solving was much more difficult than originally
anticipated. Researchers were unable to tackle problems routinely
handled by human experts.
o The name "artificial intelligence" came from the roots of the area of
study.
AI researchers are active in a variety of domains.
Domains include:
 Formal tasks (mathematics, games)
 Mundane tasks (perception, robotics, natural language, common sense reasoning)
 Expert tasks (financial analysis, medical diagnostics, engineering, scientific analysis, and other areas)
From a business perspective AI is a set of very powerful tools, and
methodologies for using those tools to solve business problems.
From a programming perspective, AI includes the study of symbolic
programming, problem solving, and search.
o Typically AI programs focus on symbolic rather than numeric processing.
o Problem solving - achieve goals.
o Search - a solution is seldom accessed directly; search may involve a variety of techniques.
o AI programming languages include:
 LISP, developed in the 1950s, is the early programming
language strongly associated with AI. LISP is a functional
programming language with procedural extensions. LISP (LISt
Processor) was specifically designed for processing
heterogeneous lists -- typically a list of symbols. Features of
LISP that made it attractive to AI researchers included runtime type checking, higher order functions (functions that have
other functions as parameters), automatic memory management
(garbage collection) and an interactive environment.
 The second language strongly associated with AI is PROLOG.
PROLOG was developed in the 1970s. PROLOG is based on
first order logic. PROLOG is declarative in nature and has
facilities for explicitly limiting the search space.
 Object-oriented languages are a class of languages more recently used for AI programming. Important features of object-oriented languages include (illustrated in the sketch below):
 concepts of objects and messages
 objects bundle data and methods for manipulating the
data
 sender specifies what is to be done; the receiver decides how to do it
 inheritance (object hierarchy where objects inherit the
attributes of the more general class of objects)
Examples of object-oriented languages are Smalltalk, Objective
C, C++. Object oriented extensions to LISP (CLOS - Common
LISP Object System) and PROLOG (L&O - Logic & Objects)
are also used.
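As a brief illustration of these features, here is a minimal sketch in Python (chosen only for brevity; the class names are made up for this example):

class Shape:                          # a general class of objects
    def describe(self):               # a "message": the sender asks, the receiver decides how
        return "a shape"

class Square(Shape):                  # Square inherits the attributes of the more general Shape
    def __init__(self, side):
        self.side = side              # objects bundle data ...
    def describe(self):               # ... with the methods that manipulate that data
        return "a square of area " + str(self.side * self.side)

shapes = [Shape(), Square(3)]
for s in shapes:
    print(s.describe())               # prints "a shape" then "a square of area 9"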
Problems and Search
Problem solving involves:
 problem definition -- detailed specifications of inputs and what constitutes an acceptable solution;
 problem analysis;
 knowledge representation;
 problem solving -- selection of the best techniques.
Problem Definition
In order to solve the problem of playing a game, which is restricted here to two-person table or board games, we require the rules of the game and the targets for winning, as well as a means of representing positions in the game. The opening position can be defined as the initial state and a winning position as a goal state (there can be more than one). Legal moves allow for transfer from the initial state to other states leading to a goal state. However, the rules are far too copious in most games, especially chess, where they exceed the number of particles in the universe. Thus the rules cannot in general be supplied accurately and computer programs cannot easily handle them. Storage also presents another problem, but searching can be achieved by hashing.
The number of rules that are used must be minimised and the set can be produced by
expressing each rule in as general a form as possible. The representation of games in
this way leads to a state space representation and it is natural for well organised
games with some structure. This representation allows for the formal definition of a
problem which necessitates the movement from a set of initial positions to one of a set
of target positions. It means that the solution involves using known techniques and a
systematic search. This is quite a common method in AI.


 Well organised problems (e.g. games) can be described as a set of rules.
 Rules can be generalised and represented as a state space representation:
o formal definition.
o move from initial states to one of a set of target positions.
o move is achieved via a systematic search.
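As an illustration of a state space representation, here is a minimal sketch in Python of a toy problem (the water jug puzzle is used only as a stand-in for a well organised problem; the function names are made up for this example). It defines the initial state, a goal test and a small set of general rules (legal moves):

# A toy state space: two jugs of capacity 4 and 3; reach a state with 2 units in the first jug.
# The rules are written in as general a form as possible, not one rule per position.

def initial_state():
    return (0, 0)                        # both jugs empty

def is_goal(state):
    return state[0] == 2                 # any state with 2 units in the first jug is a goal

def moves(state):
    """Generate the states reachable from 'state' by one legal move."""
    x, y = state
    pour_xy = min(x, 3 - y)              # how much can be poured from jug 1 into jug 2
    pour_yx = min(y, 4 - x)              # how much can be poured from jug 2 into jug 1
    return {
        (4, y), (x, 3),                  # fill either jug
        (0, y), (x, 0),                  # empty either jug
        (x - pour_xy, y + pour_xy),      # pour jug 1 into jug 2
        (x + pour_yx, y - pour_yx),      # pour jug 2 into jug 1
    }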
Searching
A state space can be searched in two directions: from the given data of a problem
instance toward a goal or from the goal back to the data.
Data driven search
Sometimes called forward chaining, the problem solver begins with the given facts of the problem and a set of legal moves or rules for changing state. Search proceeds by applying rules to facts to produce new facts, which are in turn used by the rules to generate more new facts. This continues until it generates a path that satisfies the goal condition.
Goal driven search
Sometimes called backward chaining, the problem solver begins with the goal we want to solve and determines what conditions must be true to achieve it. These conditions become the new goals (subgoals). Search proceeds by working backwards through successive subgoals until it works back to the facts of the problem.
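A minimal sketch of the two directions of search over a toy rule base (the rule format and the facts are invented purely for illustration, and no cycle handling is included):

# Each rule is (set of premises, conclusion). The rule base and facts are made up.
RULES = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"is a frog"}, "is green"),
]

def forward_chain(facts, rules):
    """Data driven: apply rules to the facts until no new facts are produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal driven: work back from the goal through subgoals to the given facts."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

facts = {"croaks", "eats flies"}
print(forward_chain(facts, RULES))               # adds "is a frog" and "is green"
print(backward_chain("is green", facts, RULES))  # True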
There are 2 basic ways to perform a search:
 Blind search / uninformed search -- can only move according to position in the search; no domain knowledge is used.
 Heuristic search -- uses domain-specific information to decide where to search next.
Blind Search
Depth-First Search
1. Set L to be a list of the initial nodes in the problem.
2. If L is empty, fail. Otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node.
4. Otherwise remove n from L and add to the front of L all of n's children. Label each child with its path from the initial node. Return to 2.
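The steps above translate almost directly into Python; the sketch below assumes the problem is supplied as an initial node list, a goal test and a children function (all illustrative names):

def depth_first_search(initial_nodes, is_goal, children):
    L = [[n] for n in initial_nodes]        # step 1: L holds paths, one per initial node
    while L:                                # step 2: fail (return None) when L is empty
        path = L.pop(0)                     # pick the first node n from L (via its path)
        n = path[-1]
        if is_goal(n):                      # step 3: goal state reached
            return path                     # return the path from the initial node
        # step 4: add all of n's children to the FRONT of L, labelled with their paths
        L = [path + [c] for c in children(n)] + L
    return None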
Breadth-First Search
1. Set L to be a list of the initial nodes in the problem.
2. If L is empty, fail. Otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node.
4. Otherwise remove n from L and add to the end of L all of n's children. Label each child with its path from the initial node. Return to 2.
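The only change from the depth-first sketch is where children are placed: at the end of L rather than the front (same assumed interface as before):

def breadth_first_search(initial_nodes, is_goal, children):
    L = [[n] for n in initial_nodes]        # step 1
    while L:                                # step 2
        path = L.pop(0)
        n = path[-1]
        if is_goal(n):                      # step 3
            return path
        # step 4: add all of n's children to the END of L, labelled with their paths
        L = L + [path + [c] for c in children(n)]
    return None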
Heuristic Search
A heuristic is a method that
 might not always find the best solution
 but is guaranteed to find a good solution in reasonable time.
 By sacrificing completeness it increases efficiency.
 Useful in solving tough problems which
o could not be solved any other way, or
o whose solutions take an infinite or very long time to compute.
The classic example of heuristic search methods is the travelling salesman problem.
Heuristic Search Methods
Generate and Test Algorithm
1. generate a possible solution which can either be a point in the problem space
or a path from the initial state.
2. test to see if this possible solution is a real solution by comparing the state
reached with the set of goal states.
3. if it is a real solution, return. Otherwise repeat from 1.
This method is basically a depth first search, as complete solutions must be created before testing. It is often called the British Museum method, as it is like looking for an exhibit at random. A heuristic is needed to sharpen up the search. Consider the problem of four six-sided cubes, where each side of each cube is painted in one of four colours. The four cubes are placed next to one another and the problem lies in arranging them so that the four available colours are displayed whichever way the four cubes are viewed. The problem can only be solved if there are at least four sides coloured in each colour, and the number of options tested can be reduced using heuristics, for example by hiding the most popular colour against the adjacent cube.
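A generic sketch of generate and test in Python; the candidate generator and the goal test are supplied by the caller, and the toy example (finding a sorted ordering) is made up for illustration:

from itertools import permutations

def generate_and_test(generate, is_solution):
    """Step 1: generate a possible solution; step 2: test it; step 3: return or repeat."""
    for candidate in generate():
        if is_solution(candidate):
            return candidate
    return None                               # no candidate passed the test

# Toy use: search the problem space of orderings of [3, 1, 2] for one that is sorted.
result = generate_and_test(lambda: permutations([3, 1, 2]),
                           lambda c: list(c) == sorted(c))
print(result)                                 # (1, 2, 3)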
Hill climbing
Here the generate and test method is augmented by a heuristic function which measures the closeness of the current state to the goal state.
1. Evaluate the initial state. If it is a goal state, quit; otherwise make it the current state.
2. Select a new operator for this state and generate a new state.
3. Evaluate the new state:
o if it is closer to the goal state than the current state, make it the current state;
o if it is no better, ignore it.
4. If the current state is a goal state or no new operators are available, quit. Otherwise repeat from 2.
In the case of the four cubes a suitable heuristic is the sum of the number of different colours on each of the four sides, and the goal state is 16 (four on each side). The set of rules is simply: choose a cube and rotate it through 90 degrees. The starting arrangement can either be specified or chosen at random.
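A sketch of this hill-climbing loop in Python; the operators and the evaluation (heuristic) function are supplied by the caller, and higher scores are taken to mean closer to the goal, as in the cube example where the goal value is 16:

def hill_climb(initial, operators, evaluate, goal_value):
    current = initial
    if evaluate(current) == goal_value:                   # step 1: initial state may already be the goal
        return current
    while True:
        improved = False
        for op in operators:                              # step 2: generate a new state
            new_state = op(current)
            if evaluate(new_state) > evaluate(current):   # step 3: closer to the goal state
                current = new_state
                improved = True
                break
            # otherwise the new state is no better and is ignored
        if evaluate(current) == goal_value or not improved:   # step 4: goal or no useful operators
            return current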
Best First Search
A combination of depth first and breadth first searches.
Depth first is good because a solution can be found without computing all nodes, and breadth first is good because it does not get trapped in dead ends. The best first search allows us to switch between paths, thus gaining the benefit of both approaches. At each step the most promising node is chosen. If one of the nodes chosen generates nodes that are less promising, it is possible to choose another at the same level, and in effect the search changes from depth to breadth. If on analysis these are no better, the previously unexpanded node and branch are not forgotten, and the search method reverts to the descendants of the first choice and proceeds, backtracking as it were.

This process is very similar to steepest ascent, but in hill climbing, once a move is chosen and the others rejected, the others are never reconsidered, whilst in best first they are saved to enable revisits if an impasse occurs on the apparent best path. Also, the best available state is selected in best first even if its value is worse than the value of the node just explored, whereas in hill climbing progress stops if there are no better successor nodes.

The best first search algorithm will involve an OR graph, which avoids the problem of node duplication and assumes that each node has a parent link, to give the best node from which it came, and a link to all its successors. In this way, if a better node is found, this path can be propagated down to the successors. This method of using an OR graph requires 2 lists of nodes:

OPEN is a priority queue of nodes that have been evaluated by the heuristic function but which have not yet been expanded into successors. The most promising nodes are at the front. CLOSED are nodes that have already been generated; these nodes must be stored because a graph is being used in preference to a tree.
Heuristics
In order to find the most promising nodes a heuristic function is needed, called f', where f' is an approximation to f and is made up of two parts, g and h'. g is the cost of going from the initial state to the current node; in this context g is considered simply to be the number of arcs traversed, each of which is treated as being of unit weight. h' is an estimate of the cost of getting from the current node to the goal state. The function f' is the approximate value, or estimate, of the cost of getting from the initial state to the goal state via the current node. Both g and h' are positive valued variables. For example, a node reached after three unit-weight arcs, with h' estimated at four, has f' = 3 + 4 = 7.

Best First
The Best First algorithm is a simplified form of the A* algorithm. From A* we note that f' = g + h', where g is a measure of the time taken to go from the initial node to the current node and h' is an estimate of the time taken to reach a solution from the current node. Thus f' is an estimate of how long it takes to go from the initial node to the solution. As an aid we take the time to go from one node to the next to be a constant of 1.
Best First Search Algorithm:
1. Start with OPEN holding the initial state.
2. Pick the best node on OPEN.
3. Generate its successors.
4. For each successor do:
o If it has not been generated before, evaluate it, add it to OPEN and record its parent.
o If it has been generated before, change the parent if this new path is better, and in that case update the cost of getting to any successor nodes.
5. If a goal is found or no more nodes are left in OPEN, quit; else return to 2.
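A sketch of the algorithm above in Python, using a priority queue for OPEN and a dictionary for CLOSED; the parent-recording step is simplified to keeping whole paths, and arcs are treated as unit cost as described earlier (the interface names are illustrative):

import heapq
from itertools import count

def best_first_search(initial, successors, h, is_goal):
    tie = count()                                             # tie-breaker so the heap never compares nodes
    OPEN = [(h(initial), next(tie), 0, initial, [initial])]   # entries are (f', tie, g, node, path)
    CLOSED = {initial: 0}                                     # best g found so far for each generated node
    while OPEN:                                               # step 5: fail when OPEN is exhausted
        f, _, g, node, path = heapq.heappop(OPEN)             # step 2: pick the best node on OPEN
        if is_goal(node):                                     # step 5: goal found
            return path
        for child in successors(node):                        # step 3: generate its successors
            g_child = g + 1                                   # unit-cost arcs, so g counts arcs traversed
            # step 4: keep a successor only if it is new or reached by a better path
            if child not in CLOSED or g_child < CLOSED[child]:
                CLOSED[child] = g_child
                heapq.heappush(OPEN, (g_child + h(child), next(tie), g_child, child, path + [child]))
    return None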
The A* Algorithm
Best first search is a simplified A*.
1. Start with OPEN holding the initial nodes.
2. Pick the BEST node on OPEN such that f = g + h' is minimal.
3. If BEST is a goal node, quit and return the path from the initial node to BEST.
4. Otherwise, remove BEST from OPEN and add to OPEN all of BEST's children, labelling each with its path from the initial node. Return to 2.
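A* differs from the best-first sketch above only in that the cost of each arc is taken from the problem rather than fixed at 1. A hedged sketch under that assumption (successors now yields (child, arc_cost) pairs; the names are illustrative):

import heapq
from itertools import count

def a_star(initial_nodes, successors, h, is_goal):
    tie = count()
    OPEN = [(h(n), next(tie), 0, n, [n]) for n in initial_nodes]   # step 1
    heapq.heapify(OPEN)
    best_g = {n: 0 for n in initial_nodes}
    while OPEN:
        f, _, g, node, path = heapq.heappop(OPEN)     # step 2: BEST node, minimal f = g + h'
        if is_goal(node):                             # step 3: return the path from initial to BEST
            return path
        for child, cost in successors(node):          # step 4: expand BEST's children
            g_child = g + cost
            if child not in best_g or g_child < best_g[child]:
                best_g[child] = g_child
                heapq.heappush(OPEN, (g_child + h(child), next(tie), g_child, child, path + [child]))
    return None                                       # no path found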