A General Introduction to Artificial Intelligence

Course Information
Course Instructor: Dr Nguyen Xuan Hoai. Office: Room 311-3, Building 302. E-mail: nxhoai@gmail.com. Phone: 8801611. Course Website:
Course Prerequisites: Computer Algorithms and Data Structures; basic knowledge of Computer Science mathematics (discrete structures, basic calculus, and probability) is assumed.
Required Skills: Working knowledge of a programming language (C++ or Java preferred).

Course Information
Course Description: The course introduces the essential concepts and issues in artificial intelligence. Topics include intelligent problem solving with search, knowledge representation and inference, intelligent agents, intelligent planning, and machine learning.
Course Textbooks:
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd Edition, Prentice Hall, 2003.
- J. Bigus and J. Bigus, Intelligent Agent Programming with Java, 2nd Edition, John Wiley & Sons, 2001.
- M. Ginsberg, Essentials of Artificial Intelligence, Morgan Kaufmann, 1993.
- E. Rich & K. Knight, Artificial Intelligence, McGraw-Hill, 1991.
Course Grading: 10% Programming Assignments; 48% Course Projects (4); 42% Open-book In-class Exams (21% Midterm, 21% Final).

Contents
What is Artificial Intelligence (AI)? AI-related areas. Brief history of AI. Applications of AI. Core issues in AI. What will be in the course?

What is AI?
Definitions of AI have been somewhat controversial (because of the "A" and because of the "I"). There are two main schools of thought on what AI is: Strong AI and Weak AI (see S. Franklin, Artificial Minds, MIT Press, 1995).

Strong AI
Strong AI implication: intelligent agents can become sapient (self-aware, like human beings). (Held by AI researchers of the early age and... Hollywood!)
Weak AI
Weak AI implication: intelligent agents can only simulate some human behaviors. (Widely accepted now.)

Views on AI
Views on AI fall into 4 categories: Thinking Humanly, Thinking Rationally, Acting Humanly, Acting Rationally. The view of the course: acting rationally.

Thinking Humanly
Requires studying human intelligence. 1960s "cognitive revolution": information-processing psychology. Requires scientific theories of the internal activities of the brain. How to validate? Requires either 1) predicting and testing the behavior of human subjects (top-down), or 2) direct identification from neurological data (bottom-up). Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI.

Thinking Humanly
Two main approaches: Top-down: cognitive science (Symbolism). Bottom-up: neural and brain science (Connectionism).

Acting Humanly
Turing Test (1950): Turing predicted that by 2000 a machine might have a 30% chance of fooling a lay person for 5 minutes. The paper anticipated all major arguments against AI in the following 50 years and suggested the major components of AI: knowledge, reasoning, language understanding, learning.

Thinking Rationally: "Laws of Thought"
What are the rules (laws) of thought? Aristotle, George Boole, David Hilbert: Logic.

Thinking Rationally: "Laws of Thought"
Aristotle: what are correct arguments/thought processes? Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts; they may or may not have proceeded to the idea of mechanization. There is a direct line through mathematics and philosophy to modern AI (logic-based agents). Problems: 1) Not all intelligent behavior is mediated by logic. 2) What is the purpose of thinking? What thoughts should I have?

Acting Rationally: Rational Agents
Rational behavior: "doing the right thing". The right thing: that which is expected to maximize goal achievement, given the available information.
Doing the right thing doesn't necessarily involve thinking (e.g., the blinking reflex), but thinking should be in the service of rational action.

Rational Agents
An agent is an entity that perceives and acts. This course is about designing rational agents. Abstractly, an agent is a function from percept histories to actions: f: P* → A. For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance. Caveat: computational limitations make perfect rationality unachievable, so we design the best program for the given machine resources.

Rational Agents
Advantages of this view:
- Intelligence does not necessarily require thinking and/or reasoning.
- Intelligence is not necessarily attached to humans or living creatures. Intelligence can be in a process. Intelligence can be obtained by the cooperation of a swarm of agents.

Rational Agents
Examples: Evolutionary Intelligence, Swarm Intelligence.

Some Definitions of AI from AI Books
"The exciting new effort to make computers think … machines with minds, in the full and literal sense" (Haugeland, 1985).
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning" (Bellman, 1978).

Some Definitions of AI from AI Books
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990).
"The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991).

Some Definitions of AI from AI Books
"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985).
"The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992).

Some Definitions of AI from AI Books
"Computational Intelligence is the study of the design of intelligent agents" (Poole et al., 1998).
"AI … is concerned with intelligent behavior in artifacts" (Nilsson, 1998).

AI-Related Areas
Philosophy. Cognitive Science. Neuroscience and Brain Theory.
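The abstract view of an agent as a function from percept histories to actions, f: P* → A, can be sketched directly in Python. This is an illustrative sketch only; the type aliases and the trivial agent below are not from the course.

```python
from typing import Callable, Sequence

# Illustrative aliases: a percept and an action are just labels here.
Percept = str
Action = str

# An agent function f: P* -> A maps the *entire* percept history to an action.
AgentFunction = Callable[[Sequence[Percept]], Action]

def last_percept_agent(history: Sequence[Percept]) -> Action:
    """A trivial agent function that reacts only to the most recent percept."""
    return "act" if history and history[-1] == "stimulus" else "wait"

print(last_percept_agent(["noise", "stimulus"]))  # -> act
print(last_percept_agent([]))                     # -> wait
```

Note that the argument is the whole history, not a single percept: this is what makes f a specification of behavior rather than a program, and why the caveat about computational limits matters in practice.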
Cybernetics and Control Theory. Mathematical Logic. Evolutionary Biology. Social Intelligence. Swarm Behavior. Organization Theory. Statistics. ...

AI History
Three stages:
- Symbolism (1970s-80s): Automated Reasoning and Proving, Expert Systems, Logic Programming, ...
- Connectionism (1980s-90s): Neural Networks, Statistical Learning, Support Vector Machines, Probabilistic Graphical Models, ...
- Evolutionary Computation (1990s-?): Evolutionary Programming, Evolution Strategies, Genetic Algorithms, Intelligent Multi-Agent Systems.

Abridged History of AI
1943 McCulloch & Pitts: Boolean circuit model of the brain.
1950 Turing's "Computing Machinery and Intelligence".
1956 Dartmouth meeting: the term "Artificial Intelligence" coined (by John McCarthy).
1956 Rosenblatt, Widrow and Hoff: PERCEPTRON.
1950s Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine.
1964 Evolution Strategies (Rechenberg et al.).
1964 Evolutionary Programming (L. Fogel).
1965 Robinson's complete algorithm for logical reasoning.

Abridged History of AI
1969 Minsky and Papert: "Perceptrons".
1969-79 Knowledge-based systems (Expert and Planning Systems): the period when Symbolism was dominant.
1980-85 AI became an industry.
1986 Rumelhart, Hinton, Williams: the Backpropagation learning algorithm for multi-layer perceptrons; the rebirth of neural networks.
1987 AI became a science.
1986-1995 Neural Networks, Machine Learning, Approximate Reasoning, Fuzzy Systems, ...: the Connectionism period.
1995- Evolutionary Computation, Natural Computation, Intelligent Multi-Agent Systems.

Areas/Applications of AI
Natural Language Processing. Automated Reasoning. Knowledge-Based Systems. Pattern Recognition. Computer Vision. Speech Processing. Data Mining and Knowledge Discovery. Intelligent Planning. Intelligent Computer Games. Multi-Agent Systems. Evolutionary and Natural Computation. Artificial Life. ...
State of the Art
- Deep Blue defeated the reigning world chess champion Garry Kasparov in 1997.
- MYCIN (Stanford), a medical expert system.
- A theorem prover settled the Robbins conjecture, a mathematical conjecture unsolved for decades.
- During the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo items, and people. Gulf War 2 (2003): "artificial war".
- NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft.
- A new generation of washing machines uses neuro-fuzzy technology.
- Human identification through eye detection and analysis at Heathrow airport, using evolutionary computation techniques.
- ...

Core Issues in AI
Representation. Reasoning. Learning. Interaction.

Course Details
Week 1: Introduction & Agents.
Week 2: Problem Solving and Search.
Week 3: Informed Search.
Week 4: Modern Meta-Heuristic Search.
Week 5: CSP and Adversarial Search.
Week 6: Logic Agents.
Week 7: First-Order Logic.
Week 8: Knowledge Representation; Mid-term Exam.
Week 9: Planning.
Week 10: Reasoning under Uncertainty 1.
Week 11: Reasoning under Uncertainty 2.
Week 12: Learning 1.
Week 13: Learning 2.
Week 14: Learning 3.
Week 15: Summary and Final Exam.
10% Programming Assignment: due in the third week. 4 course projects: due weeks 5, 9, 12, and 15.

Intelligent Agents

Outline
Agents and environments. Rationality. PEAS (Performance measure, Environment, Actuators, Sensors). Environment types. Agent types.

Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
Agents and Environments
The agent function maps from percept histories to actions: f: P* → A. The agent program runs on the physical architecture to produce f: agent = architecture + program.

Vacuum-Cleaner World
Percepts: location and contents, e.g., [A, Dirty]. Actions: Left, Right, Suck, NoOp.

A Vacuum-Cleaner Agent
Percept sequence                        Action
[A, Clean]                              Right
[A, Dirty]                              Suck
[B, Clean]                              Left
[B, Dirty]                              Suck
[A, Clean], [A, Clean]                  Right
[A, Clean], [A, Dirty]                  Suck
...                                     ...
[A, Clean], [A, Clean], [A, Clean]      Right
[A, Clean], [A, Clean], [A, Dirty]      Suck
...                                     ...

Rational Agents
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
Performance measure: an objective criterion for the success of an agent's behavior. E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.

Rational Agents
Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rational Agents
Rationality is distinct from omniscience (all-knowing with infinite knowledge). Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration). An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

PEAS
PEAS: Performance measure, Environment, Actuators, Sensors. We must first specify the setting for intelligent agent design.
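The percept-sequence table above can be realized literally as a table-driven agent: keep the percept history and use it as an index into a lookup table. A minimal Python sketch, covering only the first few table rows (the table and names are illustrative):

```python
# Lookup table indexed by the whole percept history (as a tuple of percepts).
# Only a few entries are shown; a complete table would be huge.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the agent's percept history, initially empty

def table_driven_agent(percept):
    """Append the new percept, then look the whole history up in the table."""
    percepts.append(percept)
    return table[tuple(percepts)]

print(table_driven_agent(("A", "Clean")))  # -> Right
print(table_driven_agent(("A", "Dirty")))  # -> Suck
```

The sketch makes the table's weakness concrete: the key length grows with every percept, so the table must enumerate every possible history.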
Consider, e.g., the task of designing an automated taxi driver: what are its performance measure, environment, actuators, and sensors?

PEAS
Agent: Automated taxi driver.
Performance measure: Safe, fast, legal, comfortable trip; maximize profits.
Environment: Roads, other traffic, pedestrians, customers.
Actuators: Steering wheel, accelerator, brake, signal, horn.
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.

PEAS
Agent: Medical diagnosis system.
Performance measure: Healthy patient, minimize costs, lawsuits.
Environment: Patient, hospital, staff.
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals).
Sensors: Keyboard (entry of symptoms, findings, patient's answers).

PEAS
Agent: Part-picking robot.
Performance measure: Percentage of parts in correct bins.
Environment: Conveyor belt with parts, bins.
Actuators: Jointed arm and hand.
Sensors: Camera, joint angle sensors.

PEAS
Agent: Interactive English tutor.
Performance measure: Maximize student's score on test.
Environment: Set of students.
Actuators: Screen display (exercises, suggestions, corrections).
Sensors: Keyboard.

Environment Types
Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)
Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

Environment Types
Static (vs. dynamic): the environment is unchanged while the agent is deliberating.
(The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
Single-agent (vs. multi-agent): an agent operating by itself in an environment.

Environment Types
                   Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No
The environment type largely determines the agent design. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent Functions and Programs
An agent is completely specified by the agent function mapping percept sequences to actions. One agent function (or a small equivalence class) is rational. Aim: find a way to implement the rational agent function concisely.

Table-Lookup Agent
Function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
Drawbacks: huge table; it takes a long time to build the table; no autonomy; even with learning, it takes a long time to learn the table entries.

Agent Program for a Vacuum-Cleaner Agent
Function REFLEX-VACUUM-AGENT(location, status) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Agent Types
Four basic types in order of increasing generality: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents.

Simple Reflex Agents
Function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

Model-Based Reflex Agents
Function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

Goal-based Agents
Utility-based Agents
Learning Agents

Further Reading
Main textbook, chapter 2. Course textbook 2, chapter 2. M. Wooldridge, An Introduction to MultiAgent Systems, chapter 2.
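The SIMPLE-REFLEX-AGENT pseudocode can be sketched in Python for the vacuum world. This is an illustrative sketch, not course code: rules are represented here as (condition, action) pairs, INTERPRET-INPUT is trivial because the percept already describes the state, and the loop plays the role of RULE-MATCH.

```python
def interpret_input(percept):
    """Trivial interpretation: the percept (location, status) is the state."""
    return percept

# Condition-action rules for the vacuum world, as (condition, action) pairs.
# Order matters: the first matching rule fires.
rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]

def simple_reflex_agent(percept):
    """Match the interpreted percept against the rules; fire the first match."""
    state = interpret_input(percept)
    for condition, action in rules:      # RULE-MATCH
        if condition(state):
            return action                # RULE-ACTION
    return "NoOp"                        # no rule matched

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("B", "Clean")))  # -> Left
```

A model-based variant would differ only in replacing interpret_input with an UPDATE-STATE step that combines the new percept with the stored state and the last action.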