Introduction to Artificial Intelligence

What is AI?
'Artificial Intelligence may be defined as a branch of computer science that is concerned with the automation of intelligent behaviour' - Luger & Stubblefield.

A brief history
1940 - present: computer engineering. The first operational computer was built by Alan Turing's team in 1940.
1936 - Turing described the Turing Machine, which demonstrated that a machine could manipulate symbols as well as numbers. He later became known for the Turing Test (1950), a means of comparing human and machine intelligence.
1943 - McCulloch & Pitts initiated research on Neural Networks.
1956 - The term AI was first used in print by John McCarthy at the Dartmouth College Conference, USA. McCarthy also invented the AI language LISP (LISt Processing).
1960s - MIT (the Massachusetts Institute of Technology) became very important for AI research.
1957 onwards - Newell & Simon developed GPS (the General Problem Solver). Problems were defined in terms of initial and goal situations, and the system contained operators that determined how to move from one state to the next. There were problems with the size of the search space and with methods of knowledge representation, but this idea of State Space Search is still used in today's AI systems.
Late 1960s & 1970s - it became apparent that the GPS approach gave weak performance. Researchers started developing systems in specific knowledge domains - the birth of Knowledge Based Systems (KBS) or Expert Systems (ES):
- DENDRAL - to identify the chemical structures of unknown molecules.
- MYCIN - a medical system for the selection of antibiotics.
- LUNAR - a natural-language system for answering questions about lunar rock samples.
1980s - more commercial success for AI products.
1986 to present - neural network (sub-symbolic) and KBS (symbolic) research continues. There have been both successes and failures in ES developments.

What advantages do computers have over human experts?
- Humans can change jobs, become ill, have off days etc.
- Human expertise is difficult to transfer.
- Human expertise is expensive.
BUT: humans are creative, inspired, flexible, have common sense and good learning capabilities.

Philosophy of AI
John Searle introduced the terms WEAK AI and STRONG AI.
The STRONG view of AI is that the human brain is no more than a physical symbol manipulation system; hence, with enough computing power, it could be replicated.
The WEAK view of AI says that computer systems are only a simulation of intelligence: they are useful for understanding cognitive processes, but we should not assume that the simulation is the reality.

Knowledge Based Systems

What is Knowledge?
'The symbolic representation of aspects of some named universe of discourse' - Winston 1984. This assumes we can symbolise knowledge, i.e. represent it. Simon (1969) and others see AI as concerned with symbolic processing.

Feigenbaum's definition of a KBS is: 'An intelligent computer program that uses knowledge & inference procedures to solve problems that are difficult enough to require significant human expertise for their solution. Knowledge necessary to perform at such a level, plus the inference procedures used, can be thought of as a model of the expertise of the best practitioners in the field.'

A KBS generally does the following:
- Represents & stores knowledge
- Provides inferencing abilities
- Includes a consistent user interface
- Incorporates a means to connect to traditional software, e.g. databases.
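The 'inferencing abilities' mentioned above usually amount to applying IF-THEN rules to facts held in the knowledge base. As a rough illustration only (the rules and facts below are invented for the example and are not taken from any particular expert system shell), a minimal forward-chaining inference loop in Python might look like this:

```python
# Minimal forward-chaining sketch: rules are (conditions, conclusion) pairs.
# The rules and facts below are invented purely for illustration.

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # infer a new fact
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, rules))
# {'has_fever', 'has_rash', 'suspect_measles', 'recommend_specialist'}
```

A real KBS separates the knowledge base (the rules) from the inference engine (the loop above), which is what lets the knowledge engineer change the domain knowledge without rewriting the program.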
Some terminology:
- Knowledge representation - knowledge is stored in a knowledge base in a form most appropriate to the given application. There are a number of methods of representation.
- Domain expert - provides expertise for the system being modeled.
- Knowledge elicitation/acquisition - extraction of knowledge from one or more experts in a domain.
- Knowledge engineer - implements a KBS.

Neural Networks
Ref: http://blizzard.gis.uiuc.edu/htmldocs/Neural/neural.html
Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is produced. Most ANNs contain some form of 'learning rule' which modifies the weights of the connections according to the input patterns presented to the network. ANNs learn by example, as do their biological counterparts; e.g. a child learns to recognize dogs from examples of dogs. (A small numerical sketch of a single node and its learning rule appears at the end of these notes.)

Genetic Algorithms
GAs are search techniques based on the principles of natural selection and natural genetics. John Holland proposed them at the University of Michigan, USA, in the mid-70s. They are based on the following principles:
- Evolution operates on chromosomes;
- Chromosomes are strings of genes;
- Less fit artificial creatures do not survive, due to natural selection;
- The new generation (offspring) inherits properties from the old generation (parents) by reproduction;
- Chromosomes that are more successful reproduce more often.
GAs combine the effect of selection with structured, randomised recombination of genetic material to perform robust search.
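The principles above can be made concrete with a very small sketch. The bit-string encoding, the 'count the ones' fitness function and the parameter values below are assumptions made purely for illustration; they are not part of Holland's original formulation.

```python
import random

# Minimal GA sketch: chromosomes are bit strings, fitness is the number of 1s.
# Encoding, fitness function and parameter values are illustrative assumptions.

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(chrom):
    return sum(chrom)                      # 'count the ones' toy fitness

def select(pop):
    # Tournament selection: fitter chromosomes reproduce more often.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point recombination of genetic material.
    point = random.randint(1, LENGTH - 1)
    return p1[:point] + p2[point:]

def mutate(chrom, rate=0.01):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP_SIZE)]

print(max(fitness(c) for c in pop))        # best fitness in the final generation
```

Selection biases reproduction towards fitter chromosomes, crossover recombines their genes, and mutation keeps some variety in the population - together these give the robust search described above.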
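Returning to the Neural Networks section above: the ideas described there (weighted connections, an activation function, and a learning rule that adjusts weights from example patterns) can be shown with a single-node sketch. All the numbers here (weights, learning rate, training set) are invented for illustration, and the update used is the simple perceptron rule rather than the backpropagation needed for multi-layer networks.

```python
# Single-node sketch: weighted inputs, a step activation function, and a
# simple learning rule that adjusts the weights from example patterns.
# All numbers here (weights, learning rate, training set) are illustrative.

def activate(weighted_sum):
    return 1 if weighted_sum >= 0 else 0   # step activation function

def predict(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activate(s)

# Toy training set: learn the logical AND of two inputs.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                        # repeatedly present the patterns
    for inputs, target in examples:
        error = target - predict(inputs, weights, bias)
        # Learning rule: nudge each weight in the direction that reduces the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([predict(x, weights, bias) for x, _ in examples])   # expected [0, 0, 0, 1]
```

The network 'learns by example' in exactly the sense described above: the weights start arbitrary and are gradually modified as the input patterns are presented, until the node's outputs match the target answers.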