The Two (Computational) Faces of AI
David Davenport
Computer Engineering Dept., Bilkent University, Ankara 06800 – TURKEY
PT-AI talk, Thessaloniki, Oct. 2011
Email: david@bilkent.edu.tr

Explaining Cognition / AI
• Scientific endeavour & engineering discipline
  “...an engineering discipline built on an unfinished science” (Matt Ginsberg, 1995)
• Philosophers have only complicated matters
  ▫ confusing us about words we thought we understood.
• This is my naïve attempt to understand…

In the beginning…
• was the classical “symbolic” paradigm
  ▫ cognition seen as computation
  ▫ logical, rule-governed manipulation of formal symbols…
  ▫ but what about meaning & biological plausibility?
• enter the “connectionist” paradigm
  ▫ brain-inspired, flexible, subsymbolic, able to learn its own “symbols”, but opaque

The Architecture of Cognition
• Is it symbolic or connectionist?
  ▫ Is one wrong? Are they genuine alternatives? Or a hybrid of both? Or neither…
• Newer contenders:
  ▫ dynamical systems, embodied, embedded, radical embodied, situated, extended, interactivist, enactivist, …

Engineering Cognition / AI
• Requirements: concerned with function; what is the problem that needs solving?
• Design: an abstract solution to the problem.
• Implementation: a concrete, physical mechanism corresponding to the design.
• Test, distribution, maintenance
  ▫ handled by the environment & evolution!

Functional Requirements
• Agents are a small part of the physical world
  ▫ so they have limited knowledge & are subject to error
• The world has some regularities
  “The unpredictability of the world makes intelligence necessary; the predictability makes it possible.”
• Agents make use of these regularities
  ▫ detect, predict & select the “best” action.

Use Cases
• Example task types:
  1) Maintain body temperature, control engine speed, keep a flower facing the sun…
  2) Track predator/prey even when occluded, walk/climb towards a goal despite obstacles…
  3) Converse in English, do math, tell fictional stories, socialise, …

Different mechanisms…
• Example type 1 systems
  ▫ require only simple feedback control
• Example type 3 systems
  ▫ require… a full symbol system?
  “A physical symbol system has the necessary and sufficient means for [human-level] intelligent action.” (Newell & Simon, 1976)
• Note: a PSS could do all three types, but type 1 systems couldn’t (cf. the newcomers?)

Design
• Constrained by the functional requirements & by the properties of the available materials
• Claim: designs will be computational
• Take a broad view of computation
  ▫ “computation as prediction/modeling” (why?)

Prediction / Modeling
• Target system & model
• Map states & sequences
  ▫ find an existing system
  ▫ construct one anew
  ▫ use a digital computer
• Rely on causation
• Causal structure is all that matters
(Photo by Flickr user charamelody)

Design (cont.)
• Constrained by the functional requirements & by the properties of the available materials
• Claim: designs will be computational
• Take a broad view of computation: “computation as prediction/modeling”
• A program/algorithm/computation is “…an abstract specification for a causal system.” (Chalmers, 1997)

Design for example type 1
• Type 1 (e.g. an engine-speed governor)
  ▫ only two actions (increase/decrease steam)
  ▫ the required action is predictable from the current engine speed alone
  ▫ any mechanism that provides such control is fine:
    - Watt’s centrifugal governor (mechanical)
    - an embedded microprocessor-based controller (a minimal sketch follows below)
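To make the “type 1” design concrete, here is a minimal Python sketch of a purely reactive feedback controller with only two actions (open or close the steam valve slightly), chosen from the currently sensed engine speed. The names (TARGET_RPM, control_step, the dead-band and step values) and the bang-bang policy are illustrative assumptions, not a description of Watt’s actual governor or of any implementation from the talk.

```python
# Illustrative sketch: a reactive two-action feedback controller.
TARGET_RPM = 1200      # assumed set-point
DEADBAND = 20          # tolerate small deviations to avoid constant switching

def control_step(current_rpm, valve_position, step=0.01):
    """One feedback cycle: compare sensed speed to the target, nudge the valve."""
    if current_rpm > TARGET_RPM + DEADBAND:
        valve_position = max(0.0, valve_position - step)   # decrease steam
    elif current_rpm < TARGET_RPM - DEADBAND:
        valve_position = min(1.0, valve_position + step)   # increase steam
    return valve_position   # no memory, no world model: pure reactive control

# Usage: call control_step() each tick with the latest speed reading.
valve = 0.5
for rpm in [1100, 1150, 1250, 1230, 1210]:
    valve = control_step(rpm, valve)
    print(f"rpm={rpm}  valve={valve:.2f}")
```

The point of the sketch is only that such control needs no stored knowledge of the world beyond the current sensor reading, which is exactly why type 1 tasks do not require a full symbol system.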
Design for example type 3
• Type 3 (human-level behaviour)
  ▫ with no a priori knowledge of the world, an agent can only store what it senses & detect similar situations in the future
  ▫ combined with a record of the temporal sequence & of its own actions,
  ▫ it has the information needed to take “intelligent” actions! But how? Back to basics…

Communication…

Storage… (copy)

Recognition… (copy)

Storage… (link)

Recognition… (link)
  - exact/partial match
  - flat/hierarchical structure

Internal & External symbols…
  [Diagram: internal symbols linked to external symbols, e.g. the letters C, A, T]

Relating word to object
  [Diagram: a situation in which the word “CAT” is heard (audio senses) while a cat is seen (visual senses)]

Logically
• Conventional: “if a & b & c then z”
• Alternative, Inscriptors: “if z then a & b & c” (see the sketch after the closing slide)
  ▫ causal, rule-following, handles “not”, but uses abduction to fill in expectations (top-down/bottom-up), so flexible
  ▫ stores what is seen, so syntax & semantics match
  ▫ can decouple from input (state-retaining)
  ▫ model-like (simple incomplete, or combine… ~PSC)

The Architecture of Cognition
• Is cognition symbolic or connectionist?
  ▫ The two differ based on “copy” vs. “link” storage
  ▫ Both have been shown able to do the job, so
  ▫ they are genuine design alternatives (as are analog/digital & serial/parallel)
• Is the PSSH wrong then?
  ▫ No, it is setting out functional requirements.

To Conclude
• Presented a principled distinction between the classical symbolic & connectionist approaches, showing them to be genuine design alternatives.
• Distinguished (Newell’s) PSS from the symbolic paradigm per se.
• Hopefully in an understandable way (… so avoiding Bonini’s paradox)!

The End (… of the beginning?)
Thank you.
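As referenced from the “Logically” slide, here is a minimal, hypothetical Python sketch of the inscriptor idea: instead of a forward rule “if a & b & c then z”, the symbol z is stored as a link to its constituent parts (“if z then a & b & c”) and recognised abductively from partial evidence, with unobserved parts filled in top-down as expectations. The class name Inscriptor and the methods match/expectations are illustrative assumptions, not the author’s implementation.

```python
# Illustrative sketch: abductive recognition over stored "inscriptors".
class Inscriptor:
    """Stores a symbol z as a link to the features that constitute it."""
    def __init__(self, name, parts):
        self.name = name
        self.parts = set(parts)            # "if z then a & b & c"

    def match(self, observed):
        """Fraction of the stored parts present in the current input (partial match)."""
        return len(self.parts & observed) / len(self.parts)

    def expectations(self, observed):
        """Top-down fill-in: parts the pattern predicts but that were not sensed."""
        return self.parts - observed

# Build a tiny memory by storing what is "seen" (so syntax & semantics match).
memory = [
    Inscriptor("cat", {"fur", "whiskers", "meow"}),
    Inscriptor("dog", {"fur", "tail", "bark"}),
]

observed = {"fur", "whiskers"}                         # partial, possibly occluded input
best = max(memory, key=lambda i: i.match(observed))    # abductive best explanation
print(best.name, best.match(observed), best.expectations(observed))
# -> cat 0.666... {'meow'}   ("meow" is filled in as a top-down expectation)
```

The sketch is deliberately neutral about implementation: the same causal structure could be realised by copying stored patterns or by linking to them, which is the copy/link design choice the talk uses to separate the symbolic and connectionist alternatives.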