1. From knowledge to intelligence – building blocks and applications
Chitta Baral
Department of Computer Science & Engineering, Arizona State University, Tempe, AZ 85287

2. 25th Anniversary of the 1st AAAI
This is an exciting time for AI. The foundations are getting stronger, with deep studies in various areas (and several recent graduate-level texts on different subareas of AI). The pioneers' early dreams are getting closer to realization. But we should not overlook the core and the glue that holds together the various aspects of AI.

3. What is the core and the glue of AI?
Let's start with the meaning of the word "intelligence".

4. Meaning of the word: "intelligence"
1. (a) The capacity to acquire and apply knowledge. (b) The faculty of thought and reason. (c) Superior powers of mind. See Synonyms at mind.
2. An intelligent, incorporeal being, especially an angel.
3. Information; news. See Synonyms at news.
4. (a) Secret information, especially about an actual or potential enemy. …
Source: The American Heritage® Dictionary

5. Meaning of the word: "intelligence"
n. 1. The capacity to acquire and apply knowledge, especially toward a purposeful goal.
2. An individual's relative standing on two quantitative indices, namely measured intelligence, as expressed by an intelligence quotient, and effectiveness of adaptive behavior.
Source: The American Heritage® Stedman's Medical Dictionary

6. Meaning of the word: "intelligence"
n. 1 a : the ability to learn or understand or to deal with new or trying situations
b : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)
2 : mental acuteness
Source: Merriam-Webster's Medical Dictionary

7. Meaning of the word: "intelligence"
n. 1. the ability to comprehend; to understand and profit from experience [ant: stupidity]
2. a unit responsible for gathering and interpreting intelligence
3. secret information about an enemy (or potential enemy); "we sent out planes to gather intelligence on their radar coverage"
Source: WordNet® 1.6, © 1997 Princeton University

8. The key features of an intelligent entity
It can acquire knowledge through various means, such as learning from experience and observations, reading and processing natural language text, etc.; and it can reason with this knowledge to make plans, explain observations, achieve goals, etc.

9. To learn knowledge and to reason with it
We need to know how to represent knowledge in a computer-readable format. Knowledge Representation (KR) is important for both reasoning and learning. McCarthy 1959, in "Programs with Common Sense": "In order for a program to be capable of learning something it must first be capable of being told it."

10. Importance of KR
KR is the starting point of building intelligent entities (or AI systems), and it leads to the next steps of acquiring knowledge and reasoning with that knowledge.

11. What does KR entail?
We need languages, and corresponding methodologies, to represent various kinds of knowledge.

12. Importance of inventing suitable knowledge representation languages
An analogy a colleague once made: the development of a suitable knowledge representation language and methodology is as important to AI systems as Calculus is to Physics and Engineering.

13. Research agenda hinted by the analogy
What is it that makes Calculus so widely used? It is the building block results that make it useful. Similarly, it is not enough to come up with a knowledge representation language and an interpreter for it.
We also need to develop the support structure – the building block results that will facilitate the use of the language.

14. Consolidating our thoughts!
KR plays a central role in AI. We need to go beyond inventing a KR language (syntax, semantics and implementation). We need to develop building block results around it, so as to make it useful.

15. Historical perspective
The AI pioneers (especially McCarthy and Minsky) realized the importance of KR to AI.
McCarthy 1959: "Programs with Common Sense" (perhaps the first paper on logical AI). [photo: John McCarthy]
Minsky 1974: "A framework for representing knowledge". [photo: Marvin Minsky]

16. Historical perspective – cont.
Newell and Simon: their early focus was more on the architecture of exhibiting intelligence than on KR per se. Nevertheless, their systems did indeed manipulate various kinds of knowledge. [photos: Allen Newell, Herb Simon]

17. What are the properties of a good KR language?
To start with: it should be non-monotonic, i.e., it should allow revision of conclusions in the presence of new knowledge.
Hayes 1973 ("Computation and Deduction") mentions monotonicity (calling it the "extension property") and notes that rules of default do not satisfy it.
Minsky 1974 ("A framework for representing knowledge") criticizes the monotonicity of logistic systems. [photos: Pat Hayes, Marvin Minsky]

18. Pre-1980 history of non-monotonic logics – from Minker's 1993 survey
THNOT in PLANNER [Hewitt 1969]
Prolog [Colmerauer et al. 1973]
Circumscription [McCarthy 1977]
Default reasoning [Reiter 1978]
Closed World Assumption (CWA) [Reiter 1978]
Negation as failure [Clark 1978]
Truth maintenance systems [Doyle 1979]
1st NCAI (AAAI 1980): first session; nonmonotonic logic panel
AIJ Volume 13, 1980: a special issue

19. Last twenty five years …
Many extensions, variations, and new logics, including:
Non-monotonic modal logics
Autoepistemic logic
Conditional logics
Description logics
Probabilistic semantics for default reasoning
Probability networks (Bayes nets, structural causal models)
AnsProlog (programming in logic with answer sets)

20. Have we invented the "calculus" of KR yet? What basic properties should it have?
It should have a simple and intuitive syntax and semantics;
be non-monotonic;
be able to represent and reason with defaults and their exceptions;
allow us to represent and reason with incomplete information; and
allow us to express and answer problem solving queries such as planning queries, counterfactual queries, explanation queries and diagnostic queries.

21. Have we invented the "calculus" of KR yet? – continued. What properties will make it useful?
It should have building block results;
interpreters for reasoning with the language;
existing applications; and
systems that can learn knowledge in this language.

22. Is AnsProlog a good candidate?
An AnsProlog-or program (late 1980s) is a collection of rules of the form:
A0 or … or Al ← B1, …, Bm, not C1, …, not Cn.
where the Ai's, Bj's and Ck's are literals. [photos: Michael Gelfond, Jack Minker]

23. Is AnsProlog a good candidate?
Its syntax uses the intuitive if-then form.
It is non-monotonic.
It can express defaults and their exceptions.
It can represent and reason with incomplete information.
It can express and answer problem solving queries.
There is a large body of building block results.
There are various implementations: Smodels, DLV, Prolog.
Many applications have been built using it.
There are learning systems: Progol.
Its initial paper is among the top 5 AI source documents in terms of CiteSeer citations. [photos: Vladimir Lifschitz, Ray Reiter]
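As a quick illustration of the rule form on slide 22, here is a minimal, hypothetical AnsProlog-or program in the ASCII syntax accepted by solvers such as Smodels, DLV or clingo (":-" stands for "←", ";" for "or"); the predicate and constant names below are made up purely for illustration and do not come from the slides.

   % A disjunctive rule guarded by default negation: a course is taught
   % by a lecturer or by a professor, unless it is known to be cancelled.
   taught_by_lecturer(C) ; taught_by_professor(C) :- course(C), not cancelled(C).
   course(cse471).

Running an answer set solver on this two-rule program (e.g., "clingo file.lp 0") should yield two answer sets, one containing taught_by_lecturer(cse471) and the other taught_by_professor(cse471); adding the fact "cancelled(cse471)." withdraws both conclusions, a small taste of the non-monotonicity discussed above.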
24. How AnsProlog differs from …
Prolog: ordering matters in Prolog; it cannot handle cycles through "not"; it has extra-logical features; it does not have disjunction and classical negation; and it is not declarative.
Logic Programming: is a class of languages, and many different semantics have been proposed for "not".
Classical Logic: classical logic is monotonic. The "←" in AnsProlog, which helps in expressing causality, is not reverse implication. The disjunction symbol "or" in AnsProlog is non-classical. The negation-as-failure symbol "not" in AnsProlog is non-classical.

25. Transitive closure – graph reachability
Graph: edges a→b, c→d, d→c.
edge(a,b). edge(c,d). edge(d,c).
reach(a).
reach(X) ← reach(Y), edge(Y,X).
a and b are reachable but c and d are not. We get that conclusion from the AnsProlog program consisting of the rules above together with the edge facts.

26. Reachability – Clark's completion
∀X∀Y (edge(X,Y) ↔ ((X=a ∧ Y=b) ∨ (X=c ∧ Y=d) ∨ (X=d ∧ Y=c)))
∀X (reach(X) ↔ (X=a ∨ ∃Y (reach(Y) ∧ edge(Y,X))))
plus equality axioms.
{edge(a,b), edge(c,d), edge(d,c), reach(a), reach(b)} is a model. But so is {edge(a,b), edge(c,d), edge(d,c), reach(a), reach(b), reach(c), reach(d)}. Hence one cannot conclude ~reach(c) and ~reach(d). We need to go beyond first-order logic.

27. Canonical example of non-monotonicity – the Drosophila of NMR
Tweety is a bird. Birds normally fly. Does Tweety fly?
Tweety is a penguin. Does Tweety fly?
(Penguin photo from http://www.vbmysql.com/albums/AroundOrlando/seaworld_penguin.sized.jpg)

28. An early formalization
fly(X) ← bird(X), not ab(X).
ab(X) ← penguin(X).
bird(X) ← penguin(X).
bird(tweety).
We conclude fly(tweety). But if we add penguin(tweety), we can no longer conclude fly(tweety); instead, by virtue of CWA, we conclude ~fly(tweety).

29. KR using AnsProlog – defaults and their exceptions [Gelfond, mid 90s]
Birds normally fly: f(X) ← b(X), not ab(X), not ~f(X).
Penguins are birds that do not fly: b(X) ← p(X). ~f(X) ← p(X).
About wounded birds we do not know: ab(X) ← w(X).

30. Continuing with the birds-fly example
Birds normally fly. Tweety is a bird. Penguins normally do not fly. Conclusion: Tweety flies.
Birds normally fly. Tweety is a bird. Penguins normally do not fly. Tweety is a penguin. Conclusion: Tweety does not fly.
Birds normally fly. Tweety is a bird. Penguins normally do not fly. Tweety is wounded. Conclusion: Tweety may or may not fly.

31. A general framework for defaults and exceptions
Normally a p has the property q: q(X) ← p(X), not ab(d,X), not ~q(X).
An r (which is a p) has the property ~q [strong exception]: ~q(X) ← r(X).
t's are weak exceptions: ab(d,X) ← t(X). or ab(d,X) ← not ~t(X).

32. The frame problem and situation calculus!
Situation Calculus was introduced by McCarthy and Hayes in 1969.
Domain: initially ~f, ~g. a causes f if ~g. Is f true after executing a? Is g true after executing a?
~holds(f,s0). ~holds(g,s0).
holds(f,res(a,S)) ← ~holds(g,S).
How do we encode that g's value remains unchanged?
First attempt: ~holds(F,res(A,S)) ← ~holds(F,S). Causes contradiction.
Second attempt: ~holds(F,res(A,S)) ← ~holds(F,S), not ab(F,A,S). ab(f,a,S) ← ~holds(g,S).

33. The frame problem and situation calculus! – cont.
What if: initially ~f, and g is unknown. a causes f if ~g. Is f false after executing a?
~holds(f,s0).
holds(f,res(a,S)) ← ~holds(g,S).
~holds(F,res(A,S)) ← ~holds(F,S), not ab(F,A,S).
ab(f,a,S) ← ~holds(g,S).
The above formalization says yes – a hasty conclusion! Replace the ab rule by: ab(f,a,S) ← not holds(g,S).

34. Normally fluents do not change their values – a narrative-based formulation
h(F,T+1) ← h(F,T), occurs(A,T), not ab(F,A,T), not ~h(F,T+1).
~h(F,T+1) ← ~h(F,T), occurs(A,T), not ab(F,A,T), not h(F,T+1).
Translating "a causes ~f if p1, …, pn":
~h(f,T+1) ← occurs(a,T), h(p1,T), …, h(pn,T).
ab(f,a,T) ← occurs(a,T), not ~h(p1,T), …, not ~h(pn,T).
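The reachability program of slide 25 and the default-with-exceptions encoding of slides 29-30 can be run directly on today's answer set solvers. Below is one such rendering in clingo/Smodels-style ASCII syntax (":-" for "←", "-" for the classical negation written "~" on the slides); the one-letter predicate names of the slides are spelled out here, but otherwise the programs are the ones above.

   % Graph reachability (slide 25)
   edge(a,b). edge(c,d). edge(d,c).
   reach(a).
   reach(X) :- reach(Y), edge(Y,X).
   % The unique answer set contains reach(a) and reach(b), but neither reach(c) nor reach(d).

   % Defaults and their exceptions (slides 29-30)
   flies(X)  :- bird(X), not ab(X), not -flies(X).
   bird(X)   :- penguin(X).
   -flies(X) :- penguin(X).
   ab(X)     :- wounded(X).
   bird(tweety).
   % With just bird(tweety): flies(tweety) is concluded.
   % Adding penguin(tweety): -flies(tweety) is concluded instead.
   % Adding wounded(tweety) (but not penguin(tweety)): neither flies(tweety) nor -flies(tweety).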
35. Interpolation: maximal reasoning with incomplete information [Baral et al. 93]
With facts about par(X,Y) only (complete information):
anc(X,Y) ← par(X,Y).
anc(X,Y) ← par(X,Z), anc(Z,Y).
~anc(X,Y) ← not anc(X,Y).
With facts about both par(X,Y) and ~par(X,Y) (incomplete information):
m_par(X,Y) ← not ~par(X,Y).
m_anc(X,Y) ← m_par(X,Y).
m_anc(X,Y) ← m_par(X,Z), m_anc(Z,Y).
~anc(X,Y) ← not m_anc(X,Y).

36. Planning in AnsProlog [Subrahmanian & Zaniolo 95]
Domain: initially ~f. initially ~g. b causes f if g. a causes g. Goal: f. The sequence a;b is a plan that achieves the goal.
~h(f,1). ~h(g,1).
h(f,T+1) ← oc(b,T), h(g,T).
h(g,T+1) ← oc(a,T).
h(f,T+1) ← h(f,T), not ~h(f,T+1).
~h(f,T+1) ← ~h(f,T), not h(f,T+1).
← not h(f,3).
o_oc(A,T) ← oc(B,T), A ≠ B.
oc(A,T) ← act(A), not o_oc(A,T).

37. Recap: AnsProlog is a good candidate!
Its syntax uses the intuitive if-then form.
It is non-monotonic.
It can express defaults and their exceptions.
It can represent and reason with incomplete information.
It can express problem solving queries.
There is a large body of building block results.
There are various implementations: Smodels, DLV, Prolog.
Many applications have been built using it.
Learning systems: Progol, PRISM.

38. Building block results on AnsProlog (since the late 80s / early 90s)
Subclasses: definite, normal, extended, disjunctive; acyclic, signed, stratified, call-consistent, order-consistent.
Properties: existence of consistent answer sets; existence of unique answer sets; complexity of entailment; expressiveness of the entailment relation; relation between the literals in an answer set and the program.

39. Building block results – continued
Transforming programs. Equivalence of programs. Formal relations with other languages (classical logic, default logic, description logics, etc.). Splitting programs – modular analysis and computation of answer sets. Systematic building of programs from components: program composition. For which subclasses is Prolog querying sound? Complete?
See "Knowledge Representation, Reasoning and Declarative Problem Solving" for a compilation of the various building block results on AnsProlog up to late 2002.

40. Relation between the literals in an answer set and the program
Answer sets of an AnsProlog program are analogous to models of a theory.
If f is in an answer set A of a program P, then there must be a rule in P such that …
If an answer set A of a program P satisfies the body of a rule r of P, then the head of that rule …

41. But reasoning is complex in AnsProlog: complexity and expressiveness (mid 90s)
Complexity is related to expressiveness.
Propositional AnsProlog-or: Π₂^P-complete; captures Π₂^P.
Propositional AnsProlog: coNP-complete; captures coNP. (Note: entailment in SAT is coNP-complete, but it does not capture coNP.)
Propositional AnsProlog under the well-founded semantics (WFS): P-complete; captures P on ordered databases.
Propositional AnsProlog, stratified: P-complete; captures P on ordered databases.
(Non-propositional) AnsProlog and AnsProlog-or: Π₁^1-complete; capture Π₁^1.

42. Reasoning systems
Smodels (Helsinki) – 1996-97
DLV (Vienna) – 1998-99
ASSAT (based on compiling propositional AnsProlog to propositional logic and then using SAT solvers) – 2003
Prolog
XSB (Stony Brook) – mid 90s

43. General applications
Reasoning and declarative problem solving (answer set programming):
– reasoning with incomplete information, default reasoning
– reasoning with preferences and priorities, inheritance hierarchies
– planning, job-shop scheduling, tournament scheduling
– abductive reasoning, explanation generation, diagnosis
– combinatorial graph problems
– combinatorial optimization, combinatorial auctions
– product configuration
Applications involving both: data integration, decision support systems, question answering.
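For concreteness, here is one way the planning example of slide 36 can be turned into a program that runs on a current solver, again in clingo/Smodels-style syntax. It is only a sketch, not the encoding of Subrahmanian & Zaniolo: the last two rules of slide 36 are replaced by the cardinality-constraint form mentioned later (slide 52), and the two-step horizon, the inertia rules for g, and the #show directive are assumptions added here for completeness.

   % fluents f, g; actions a, b; states 1..3, actions occur at steps 1 and 2
   step(1..2).
   action(a). action(b).

   -h(f,1). -h(g,1).

   % direct effects
   h(f,T+1) :- oc(b,T), h(g,T).
   h(g,T+1) :- oc(a,T).

   % inertia for f and g
   h(f,T+1)  :- h(f,T),  not -h(f,T+1), step(T).
   -h(f,T+1) :- -h(f,T), not  h(f,T+1), step(T).
   h(g,T+1)  :- h(g,T),  not -h(g,T+1), step(T).
   -h(g,T+1) :- -h(g,T), not  h(g,T+1), step(T).

   % generate exactly one action occurrence per step; require the goal at state 3
   1 { oc(A,T) : action(A) } 1 :- step(T).
   :- not h(f,3).

   #show oc/2.

Running clingo on this program should return a single answer set with oc(a,1) and oc(b,2), i.e., the plan a;b of slide 36.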
44. Some specific applications
Phylogeny construction.
Abduction and preferences in linguistics.
Inference of gene relations from micro-array data.
Reaction control system (RCS).
Question answering with respect to travel.
Reasoning about cell behavior.

45. REACTION CONTROL SYSTEM (RCS)
[Figure slide.]

46. RCS/USA-Advisor (Texas Tech Univ.)
A decision support system for shuttle controllers. Example action: switch on – it has direct effects, plus its effect propagates and affects several objects. An AnsProlog-based planner is well suited for this.

47. [Figure slide.]

48. Signal Pathways (from http://www.afcs.org/cm2/)
[Figure slide.]

49. Reasoning about cell behavior: ASU
Biosignet-RR: hypothetical reasoning (side effects of drugs); planning (therapy design); explanation of observations (figuring out what is wrong).
Biosignet-RRH: hypothesis generation.

50. Question Answering
Text: Tweety is a bird. Question: Does Tweety fly? Answer (average person): Yes. This uses the common-sense knowledge that birds normally fly.
Text: John took a plane from Phoenix to Pittsburgh. Questions: Where is John after that? Where is his laptop, which he always carries with him? Answers: Pittsburgh; Pittsburgh. This too uses common-sense knowledge.
We need a repository of common-sense knowledge in a proper form. Early attempts: CYC (in CYC's language), OpenMind (in natural language). In progress: an open source collaboration (Spring 06 symposium).

51. Project Halo! An exciting KR effort!
The Halo pilot was structured around a challenge involving 71 pages of an advanced placement (AP) inorganic chemistry syllabus (2003-04). Three participants in the pilot: SRI-UT Austin, CYCORP, and Ontoprise. Ontoprise uses F-logic programs; F-logic is an object-oriented extension of AnsProlog.

52. Extensions
Sets, aggregates.
Preferences.
Consistency restoration.
Cardinality and weight constraints, e.g.: 1 { occurs(X,T) : action(X) } 1 ← time(T).
Probabilities.

53. Putting together probability and logic
First-order logics of probability: Halpern. [photo: Joe Halpern]
Probabilistic causal models: Pearl. [photo: Judea Pearl]

54. Adding probabilistic knowledge
Bayesian networks.
First-order logic with statistical probability (Bacchus).
Probabilistic frame-based systems (Koller & Pfeffer).
Probabilistic relational models (Friedman, Getoor, Koller, Pfeffer).
Integrating logic with probability (Halpern).
Probabilistic Horn abduction, Independent Choice Logic (Poole).
Bayesian logic programming (Kersting & De Raedt).
Probabilistic knowledge bases (Ngo & Haddawy).
Stochastic logic programming (Muggleton).

55. Adding probabilistic knowledge – cont.
PRISM (Sato).
Probabilistic logic programming (Ng & Subrahmanian; Lukasiewicz).
Structural causal models (Pearl).
Logic programming with annotated disjunctions (Vennekens et al.).
Relational dependency networks.
Relational Markov networks.
Markov logic networks (Richardson & Domingos).
P-log (Baral, Gelfond and Rushton).

56. Existing approaches …
Most do not use the full power of logical KR languages.
The integration of logical and probabilistic knowledge is not very deep.
Except for Pearl (and P-log), none distinguish between conditioning/updating on observations and on actions (do).
Updates are simplistic; updating with an arbitrary piece of knowledge is not allowed.
None allow logical reasoning in deciding the range of a random variable.

57. Monty Hall problem: 3 doors
One of the doors below has a prize behind it. I am asked to pick a door. (Image from http://math.ucsd.edu/~crypto/Monty/monty.html)

58. I pick door one.
(Image from http://math.ucsd.edu/~crypto/Monty/monty.html)
59. After I pick door 1 …
Monty opens door 3 and shows me a goat. Should I stay with door 1 or switch to door 2? (Image from http://math.ucsd.edu/~crypto/Monty/monty.html)

60. The Monty Hall problem and non-trivial conditioning
Question: does it matter if I switch? Which unopened door has the higher probability of containing the prize?
The answer depends on whether Monty knew where the prize is and opened a goat door on purpose. If yes, then switch [switching increases the probability of winning from 1/3 to 2/3]. If not, switching gives no advantage.

61. Why do people have difficulty answering this?
The existing languages of probability do not give us the syntax to explicitly express the knowledge accompanying Monty's action; a lot of the reasoning is left to the human.
Halpern, in his book, gives this and other examples, differentiates trivial from non-trivial conditioning, and suggests the use of protocols.

62. Representing the Monty Hall problem in P-log [Baral, Gelfond & Rushton 04]
doors = {1, 2, 3}.
open, selected, prize : doors. (D ranges over doors.)
~can_open(D) ← selected = D.
~can_open(D) ← prize = D.
can_open(D) ← not ~can_open(D).
random(prize). random(selected).
random(open : {X : can_open(X)}).
pr(prize = D) = 1/3. pr(selected = D) = 1/3.
obs(selected = 1). obs(open = 2). obs(prize ≠ 2).
What are P(prize = 1) and P(prize = 3)?

63. Non-trivial conditioning in P-log
We can add random atoms. We can add rules. Thus we can specify exactly the possibility space of our observation.

64. Recap!
We discussed the role of KR in AI.
AnsProlog: examples and applications.
An important extension of AnsProlog: P-log.
In terms of language design, adding probabilistic representation to logical KR is one frontier. Another frontier: high level languages.

65. The Yale Shooting Problem
Published at AAAI 1986; received this year's Classic Paper award.
Various encodings of an English description did not work as expected. The logics used in the encodings were considered inadequate. There was a lot of soul searching among researchers.

66. Impact of the Yale Shooting Problem
But soon other possibilities were considered: the particular encodings were not adequate, but correct encodings exist; encoding (in logic) needs practice and experience; the English description may have multiple interpretations.
How do we make sure an encoding is correct? KR moved to a more systematic approach: instead of the prevailing example/counter-example approach, use high level languages as a "specification".

67. High level languages – motivation
A domain expert may not care to know the nuances of AnsProlog or P-log. They need an easier interface to encode knowledge and ask queries.

68. STRIPS (Fikes & Nilsson 1971): an early HLL
A planning scenario is specified in STRIPS syntax. Its semantics (defined much later) can be viewed as a transition graph: a mapping between states due to actions. Finding a plan is finding a path in this diagram.

69. Post Yale Shooting – the language A [Gelfond & Lifschitz 1992]
Similar to STRIPS, but not solely for planning.
Syntax:
Action and effect (D): "a causes f if p1, …, pn"; "executable a if q1, …, qm".
Observation (O): "initially f".
Query (Q): "f after a1, …, an".
Semantics: defines a transition diagram, and defines when (D,O) entails Q.

70. More on A
Using the semantics of A:
Hypothetical reasoning: does (D,O) entail "f after a1, …, an"?
Planning: find a1, …, an such that (D,O) entails "f after a1, …, an".
Explanation: find O' such that (D, O + O') entails "f after a1, …, an".
Correctness: an encoding e in a logic L is correct if (D,O) entails Q iff e(D,O) entails (in L) e(Q).
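To make the last two slides concrete, here is a hypothetical A-style domain (a simplified Yale shooting scenario, without the wait action) and one possible AnsProlog encoding of it in clingo-style syntax. This is only an illustrative sketch of the kind of encoding e whose correctness with respect to the transition-diagram semantics of A one would then prove; it is not taken from the slides.

   % D:  load causes loaded.      shoot causes -alive if loaded.
   % O:  initially alive.         initially -loaded.
   % Q:  -alive after load, shoot ?

   step(1..2).                      % two transitions, states 1..3
   fluent(alive). fluent(loaded).

   occurs(load,1). occurs(shoot,2).

   h(alive,1). -h(loaded,1).

   % direct effects
   h(loaded,T+1) :- occurs(load,T), step(T).
   -h(alive,T+1) :- occurs(shoot,T), h(loaded,T), step(T).

   % inertia
   h(F,T+1)  :- h(F,T),  not -h(F,T+1), fluent(F), step(T).
   -h(F,T+1) :- -h(F,T), not  h(F,T+1), fluent(F), step(T).

The unique answer set of this program contains -h(alive,3), so the query is answered yes, matching what the transition-diagram semantics of the A description would sanction.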
71. Reasoning and acting in dynamic domains – the role of various HLLs in the systematic design of agents
Domain (D): the world – how objects in the world are related to each other; the action description – the impact of actions on the world.
Observations (O): what happened when, and what is true when.
(Temporal) properties of the evolution (G).
Agent control / plans / policies (P).
Semantics: (D,O) entails G during/after P. There are various HLLs for D, O, G, and P.

72. Some high level languages for D and O
A [Gelfond and Lifschitz]:
Action and effect: "a causes f if p1, …, pn"; "executable a if q1, …, qm".
Observation: "initially f".
Query: "f after a1, …, an".
L [Baral, Gelfond, Provetti 1994-95]:
Observation: "f at s1"; "a occurs_at s".
Query: "f after a1, …, an at s".
AC [McCain & Turner; Baral 1995]:
World: "p1, …, pn causes p".

73. High level languages for D & O – cont.
AK [Baral & Son 1998-99]: sensing actions, e.g., "a determines f".
PAL [Baral, Tuan, Tran 2002]: probabilistic information about unknown variables.
Recap – the advantages of an HLL for D and O: a compact representation of the transition diagram; an independent (mostly automata-based) semantics; an "implementation" in a logic can now be formally proven correct. This counters the issues raised by the Yale shooting problem.

74. HLL for G – properties of the evolution of the world
LTL: always, eventually.
CTL, CTL*: in all paths / there exists a path.
P-CTL* [Baral & Zhao 04]: in all paths / there exists a path following the given policy.
P-CTL* [Baral, Lifschitz & Zhao 05]: in all policies / there exists a policy.
The last two are needed to formalize properties such as "trying one's best" when actions may have nondeterministic effects.

75. HLL for control execution (P)
Action sequences.
Policies (mappings from states to actions), condition-action rules, etc.
Conditional plans.
HTN [Sacerdoti 76].
Golog (alGOl in LOGic), ConGolog, etc. [Levesque, Reiter, et al., 96 onwards], e.g.: a1; (a2 | a3 | a4); a5; f?
HTN+Golog, HTN+GOLOGI [Baral & Tran 02, 04].

76. Recap on high level languages
Besides (lower level) KR languages we need high level languages/interfaces.
We recapped various high level languages, most developed in the last 15 years; many others exist (e.g., Constraint Lingo).
Most are implemented, in a provenly correct way, using (lower level) KR languages.
Many of the applications mentioned earlier use an HLL as the interface language.

77. Where do we stand now?
Tremendous progress in KR – from the Yale Shooting Problem to the present.
A significant body of building block results, interpreters, query engines, and some good applications.
Great opportunities lie ahead: Project Halo / Digital Aristotle; question answering; learning by reading; reasoning about cell behavior; the Semantic Web. Exciting times!

78. Future challenges
More of (and improvements on) everything; better and faster systems.
Additional formulations of reasoning notions – causes, outliers, etc.
Integration of logical KR and probability – lots of work is needed; perhaps P-log is a good starting point.
Similar broadening of the core in other dimensions – preferences, weights.
Integrating various kinds of inference techniques (model enumeration, query-driven, constraint propagation).
One single KR language, or integration of modules in closely related KR languages (such as AnsProlog, F-logic programs, fragments of DL).

79. And an important and immediate challenge (IMHO)
We need a path from flat KR to modular KR, and from starting from scratch to reuse of KR modules: large knowledge bases and the applications based on them; open knowledge libraries; the theory and interface protocols that allow the use of modules from such libraries.
Some starting points exist (CYC, KIF, etc.)
– these need reevaluation taking into account the recent progress.
AAAI'06 Spring Symposium on building an open knowledge library.

80. Conclusion: the big picture of the talk
Knowledge representation is key to AI.
Having suitable KR languages, as well as building-block results around them, is crucial for building AI systems.
AnsProlog is a good candidate to be the "Calculus" of KR and of intelligent system building.
We need to be able to deeply integrate logical and probabilistic knowledge.
We need high level knowledge representation languages.
It is time to consider the "engineering" aspects (modules, libraries, etc.).
This is an exciting time for KR (and AI in general) – the road map to success is becoming clearer.

81. Acknowledgements for pictures obtained from the web
John McCarthy: www.kurzweilai.net/bios/bio0008.html
Marvin Minsky: www.almaden.ibm.com/cs/NPUC95/panelists.html
Pat Hayes: http://www.kuenstliche-intelligenz.de/Artikel/InterviewwithPatrickHayes.htm
Allen Newell: diva.library.cmu.edu/Newell/
Herb Simon: http://istpub.berkeley.edu:4201/bcc/Spring2001/cio.simon.html
Vladimir Lifschitz: http://www.cs.utexas.edu/users/vl/
Michael Gelfond: http://www.cs.ttu.edu/~mgelfond/
Jack Minker: www.cs.umd.edu/areas/db/people
Ray Reiter: www.cs.toronto.edu/DCS/People/Faculty/reiter.html
Penguin: http://www.vbmysql.com/albums/AroundOrlando/seaworld_penguin.sized.jpg
RCS: http://www.ksl.stanford.edu/htw/dme/rcs.html
cAMP and pathway keys: http://www.signaling-gateway.org/
Monty Hall: http://math.ucsd.edu/~crypto/Monty/monty.html
Joe Halpern: www.cs.cornell.edu/home/halpern/
Judea Pearl: http://singapore.cs.ucla.edu/LECTURE/lecture_sec1.htm

82. Thanks for the opportunity!
Thanks to my family, my students, teachers, colleagues and sponsors!
It has been fun pursuing one (plus epsilon) of the 4 great questions (as mentioned in Simon's memoir of Newell): the nature of matter, the origins of the universe, the nature of life, and the workings of mind (simulating "intelligence" artificially). I am looking forward to the continuing journey, and I wish you all the best on this journey too.

83. The End! THANKS