The New Age of Commonsense Reasoning
Henry Kautz, University of Rochester

The Idea of Commonsense Reasoning
• Programs with Common Sense (McCarthy 1959):
  – Use declarative representations of knowledge and general reasoning methods to
    • Understand the physical world
    • Understand human behavior
• Motivation
  – Tasks: "advice taker", story understanding
  – Step towards true AI

Commonsense Reasoning, circa 1985
• Declarative and general
• Missing: grounding, learning, probabilities, scalable inference

Commonsense Reasoning, circa 2008
• Methods for grounding theories in sensor data
• Graphical models for learning & probabilistic inference
• Practical (low-exponent) reasoning algorithms
[Plot: log(backtracks) vs. log(cutoff)]

New Applications
• Caregiving systems
  – AI-based assistive technology
  – Assisted cognition
• Automated planning
  – Supra-expert level
• Verification
• Natural language processing
• ...

This Talk
• Monitoring activities of daily living
• Learning and guiding users through transportation plans
• Planning as satisfiability

Object-Based Activity Recognition
• Activities of daily living involve the manipulation of many physical objects
  – Kitchen: stove, pans, dishes, …
  – Bathroom: toothbrush, shampoo, towel, …
  – Bedroom: linen, dresser, clock, clothing, …
• We can recognize activities from a time sequence of object touches (Philipose et al. 2004)

Applications
• ADL (Activity of Daily Living) monitoring for the disabled and/or elderly
  – Changes in routine are often a precursor to illness or accidents
  – Human monitoring is intrusive & inaccurate
• Basis for ADL prompting / reminding
[Image courtesy Intel Research]

Sensing Object Manipulation
• RFID: Radio-frequency identification tags
  – Small
  – Durable
  – Cheap
• Easy to tag objects
  – Near future: use products' own tags
• IRS bracelet reader
  – 13.56 MHz reader, radio, power supply, antenna
  – 12 inch range, 12–150 hour lifetime

Experiment: Morning Activities
• 10 days of data from the morning routine in an experimenter's home (Patterson, Fox, & Kautz 2005)
• 11 activities, 61 tagged objects
  – Often interleaved and interrupted
  – Many shared objects
• Activities: use bathroom, make coffee, set table, make oatmeal, make tea, eat breakfast, make eggs, use telephone, clear table, prepare OJ, take out trash

Most Likely Single-State HMM
• Each activity modeled as an independent single-state Hidden Markov Model
• Captures relationship between activities and objects
• Compute probabilities from labeled training data
• Compute most likely model at each time point (sketch below)
• 68% accuracy
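To make this baseline concrete, here is a minimal sketch in Python. The object names, counts, and add-one smoothing are illustrative assumptions, not the study's data or exact model: each activity gets an independent emission distribution over touched objects estimated from labeled data, and the most likely activity is chosen independently at each time point.

```python
from collections import Counter, defaultdict

# Toy labeled training data: (touched_object, activity) pairs.
# Object names and counts are invented for illustration.
training = [
    ("kettle", "make tea"), ("mug", "make tea"), ("tea_bag", "make tea"),
    ("kettle", "make coffee"), ("mug", "make coffee"), ("coffee_tin", "make coffee"),
    ("bowl", "make oatmeal"), ("oats", "make oatmeal"), ("spoon", "make oatmeal"),
]

objects = sorted({obj for obj, _ in training})
activities = sorted({act for _, act in training})

# Estimate P(object | activity) with add-one smoothing over the object vocabulary.
counts = defaultdict(Counter)
for obj, act in training:
    counts[act][obj] += 1

def emission(obj, act):
    return (counts[act][obj] + 1) / (sum(counts[act].values()) + len(objects))

def most_likely_activity(obj):
    """Baseline classifier: score each single-state activity model
    independently at this time point; no transitions between activities."""
    return max(activities, key=lambda act: emission(obj, act))

# Label a stream of object touches, one decision per time point.
stream = ["kettle", "tea_bag", "mug", "oats", "spoon"]
print([most_likely_activity(obj) for obj in stream])
```

The richer models on the following slides keep this kind of object-emission distribution but add transition structure between and within activities.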
HMM with One State per Activity
• Single HMM where each activity is a state
• Captures probability of transitioning between activities
• Compute most likely state sequence
• 88% accuracy

HMM with Multiple States per Activity
• HMM with (# objects) states per activity
• Idea: model internal structure of activities as well as transitions between activities
• 87% accuracy
  – Performance degrades

Model Complexity Tradeoff
• Too few parameters: cannot fit data
• Too many parameters: poor generalization
• Let a = # different activities, b = # objects per activity
• Number of parameters:
  – HMM with 1 state per activity: O(a² + ab), too simple
  – HMM with b states per activity: O(a²b²), too complex

Interleaved HMM
• Idea: explicitly distinguish progress within an activity from switching between activities (Bai & Kautz 2008)
• Each activity modeled by an individual HMM
• User modeled by a set of activities
  – One activity is active, others are suspended
  – When an observation is made, credit it to the current activity, or switch to a different activity

Frontier Interleaved HMM Filtering
• Filtering state: S_i = ⟨h_i, s_i^1, …, s_i^k⟩ (the active activity h_i plus the internal state s_i^h of each activity's HMM)
• Filtering update (sketch below):
  P(h_{i+1}, s_{i+1}^1, …, s_{i+1}^k | e_1, …, e_{i+1})
    ∝ P(e_{i+1} | s_{i+1}^{h_{i+1}}) · Σ_{S_i} P(h_{i+1} | h_i) · P(s_{i+1}^{h_{i+1}} | s_i^{h_{i+1}}) · P(h_i, s_i^1, …, s_i^k | e_1, …, e_i)
  where s_{i+1}^h = s_i^h for all h ≠ h_{i+1}
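Below is a minimal sketch of this filtering recursion in Python. All activity names, state spaces, and probabilities are invented for illustration, and the belief is stored exhaustively over the joint state space (exponential in the number of activities), which is fine for a toy example but not what a practical implementation would use.

```python
import itertools
from collections import defaultdict

# Toy interleaved-HMM model: two activities, each an HMM with two internal
# states; all names and probabilities here are invented for illustration.
ACTIVITIES = ["tea", "oatmeal"]                     # k = 2
STATES = {"tea": [0, 1], "oatmeal": [0, 1]}         # internal states per activity

# P(h' | h): stay with the active activity or switch to another one.
P_switch = {h: {h2: (0.8 if h2 == h else 0.2) for h2 in ACTIVITIES} for h in ACTIVITIES}
# P(s' | s): progress model inside each activity's HMM.
P_step = {h: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}} for h in ACTIVITIES}
# P(e | s): object-touch emission from the active activity's current state.
P_emit = {
    "tea":     {0: {"kettle": 0.7, "mug": 0.2, "oats": 0.05, "spoon": 0.05},
                1: {"kettle": 0.1, "mug": 0.7, "oats": 0.1, "spoon": 0.1}},
    "oatmeal": {0: {"kettle": 0.1, "mug": 0.1, "oats": 0.7, "spoon": 0.1},
                1: {"kettle": 0.05, "mug": 0.15, "oats": 0.1, "spoon": 0.7}},
}

# Joint filtering states S = (h, (s^1, ..., s^k)): the active activity plus
# the internal state of every activity's HMM.
joint = [(h, combo) for h in ACTIVITIES
         for combo in itertools.product(*(STATES[a] for a in ACTIVITIES))]

def filter_step(belief, e):
    """One application of the recursion: only the newly active chain advances,
    all other chains keep their state, then weight by P(e | s^{h'})."""
    new_belief = defaultdict(float)
    for (h, ss), p in belief.items():
        for h2 in ACTIVITIES:                 # choose the next active activity
            j = ACTIVITIES.index(h2)
            for s2 in STATES[h2]:             # advance only chain h2
                ss2 = list(ss)
                ss2[j] = s2
                new_belief[(h2, tuple(ss2))] += (
                    p * P_switch[h][h2] * P_step[h2][ss[j]][s2] * P_emit[h2][s2][e])
    z = sum(new_belief.values())
    return {state: w / z for state, w in new_belief.items()}

belief = {state: 1.0 / len(joint) for state in joint}   # uniform prior
for obs in ["kettle", "oats", "mug"]:
    belief = filter_step(belief, obs)
    h_best, _ = max(belief, key=belief.get)
    print(obs, "-> most likely active activity:", h_best)
```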
Results

  Model                                                   # Parameters   Accuracy
  Most likely single-state HMM                            ab             68%
  HMM with one state per activity                         a² + ab        88%
  HMM with (# objects) states per activity                a²b²           87%
  Interleaved HMM with (# objects) states per activity    a² + ab²       98%

Interleaved HMM Confusion Matrix
[Figure: confusion matrix over the 11 activities]

From Activity Recognition to ADL Prompting
• Collaboration with Attention Control Systems to integrate activity monitoring into PEAT (Planning and Execution Assistant and Trainer)
  – DARPA Small Business Innovative Research

This Talk
• Monitoring activities of daily living
• Learning and guiding users through transportation plans
• Planning as satisfiability

Motivation: Community Access for Persons with Cognitive Disabilities

Problems
• Using public transit is cognitively challenging
  – Learning bus routes and transfers
  – Recovering from mistakes
• Point-to-point shuttle service is impractical
  – Slow
  – Expensive
• Current GPS units are hard to use
  – Require extensive user input
  – No help with transfers, timing

Solution: Opportunity Knocks
• User carries GPS-enabled cell phone
• System infers transportation use
• System learns model of typical user behavior
• Novel events = possible user errors

Modeling Challenge
• Create a model of a user's
  – Significant places
  – Modes of transportation
  – Transportation plans
• Approach:
  – Encode a commonsense theory of transportation use as a Dynamic Bayesian Network (DBN)
  – Learn the parameters of the DBN from raw GPS data using Expectation-Maximization

Transportation Plans
• Goal: ultimate destination
  – Home, work, friends, stores, doctors, …
• Places: goals or intermediate waypoints
• Trip segments: <start, end, mode>
• Plan = sequence of trip segments
  – Home to Bus stop A on Foot
  – Bus stop A to Bus stop B on Bus
  – Bus stop B to workplace on Foot
[Diagram: Home → Bus stop A → Bus stop B → Work]

Detecting User Errors
• Learned model represents typical correct behavior
  – Model is a poor fit to user errors
• We can use this fact to detect errors!
• Novelty
  – Normal: model functions as before
  – Abnormal: switch in prior (untrained) parameters for mode and edge transition

Dynamic Bayesian Net
[Two-slice DBN over time k-1 and time k, with nodes:]
• c_k: novelty
• g_k: goal
• t_k: trip segment
• m_k: transportation mode
• x_k: edge, velocity, position
• q_k: GPS / pathway association
• z_k: GPS reading

Predict Goal and Path
[Figure]

Detecting Novel Events
[Figure]

Further Work
• Wizard of Oz study of direction-giving strategies (Liu et al. 2006)
  – Images, text, audio
  – Directions, landmarks
  – Subjects with cognitive disabilities
• Current effort: generating prompts using a Markov Decision Process model
  – Account for the probability that a prompt will be understood
  – Adapt to the user

This Talk
• Monitoring activities of daily living
• Learning and guiding users through transportation plans
• Planning as satisfiability

Reasoning about the Physical World
• The planning problem:
  – Given representations of
    • An initial state of affairs
    • A desired (goal) state of affairs
    • A set of actions defined by preconditions and effects
  – Find a (shortest) sequence of actions that transforms the initial state into a goal state
• All useful formalizations of planning are NP-complete or worse
  – In practice: desire low-exponential algorithms

Planning as Inference
• Advice taker (McCarthy 1959)
  – Proof checker
• Planning as first-order theorem proving (Green 1969)
  – Computationally infeasible
• STRIPS (Fikes & Nilsson 1971)
  – Limited theorem proving + state-space search
  – Very hard
• UCPOP (Penberthy & Weld 1992)
  – Specialized theorem prover for partial-order planning
  – Starting to scale up
• SATPLAN (Kautz & Selman 1996, 1999, 2000, 2006, 2007)
  – Planning as general propositional reasoning
  – Solves many hard problems (60+ steps) optimally

Planning as Satisfiability
• Time = bounded sequence of integers
• Translate the planning problem to a Boolean formula (sketch below):
  – action(i) ⊃ pre(i) ∧ effect(i+1)
  – ¬action1(i) ∨ ¬action2(i), if interfering (action1 negates a precondition of action2)
  – fact(i) ∧ ¬fact(i+1) ⊃ action1(i) ∨ action2(i) ∨ … (frame axioms)
  – initial_state(0), goal_state(n)
• Find a satisfying truth assignment using a SAT engine
• Translate the truth assignment to a plan
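A minimal sketch of this encoding in Python, on a tiny invented travel domain: the fact and action names are hypothetical, the brute-force satisfiability check stands in for the SAT engine a real SATPLAN run would call, and the symmetric frame axiom for facts that become true is added alongside the one shown on the slide. It generates the clause schemas for a fixed horizon and reads a plan off the satisfying assignment.

```python
from itertools import product

FACTS = ["at_home", "at_work"]
ACTIONS = {  # name -> (preconditions, add effects, delete effects)
    "go_to_work": (["at_home"], ["at_work"], ["at_home"]),
    "go_home":    (["at_work"], ["at_home"], ["at_work"]),
}
HORIZON = 2
INIT = {"at_home": True, "at_work": False}
GOAL = ["at_work"]

def v(symbol, i):
    """A propositional variable is just a (symbol, time) pair."""
    return (symbol, i)

def encode():
    """Build CNF clauses (lists of (variable, polarity) literals)."""
    cnf = []
    for i in range(HORIZON):
        for a, (pre, add, dele) in ACTIONS.items():
            # action(i) => pre(i) and effect(i+1)
            cnf += [[(v(a, i), False), (v(p, i), True)] for p in pre]
            cnf += [[(v(a, i), False), (v(e, i + 1), True)] for e in add]
            cnf += [[(v(a, i), False), (v(d, i + 1), False)] for d in dele]
        # mutual exclusion of interfering actions at the same time step
        for a1, a2 in [(x, y) for x in ACTIONS for y in ACTIONS if x < y]:
            if set(ACTIONS[a1][2]) & set(ACTIONS[a2][0]) or \
               set(ACTIONS[a2][2]) & set(ACTIONS[a1][0]):
                cnf.append([(v(a1, i), False), (v(a2, i), False)])
        # explanatory frame axioms: a fact changes only if some action explains it
        for f in FACTS:
            deleters = [a for a, (_, _, d) in ACTIONS.items() if f in d]
            adders = [a for a, (_, ad, _) in ACTIONS.items() if f in ad]
            cnf.append([(v(f, i), False), (v(f, i + 1), True)]
                       + [(v(a, i), True) for a in deleters])
            cnf.append([(v(f, i), True), (v(f, i + 1), False)]
                       + [(v(a, i), True) for a in adders])
    # initial state and goal as unit clauses
    cnf += [[(v(f, 0), val)] for f, val in INIT.items()]
    cnf += [[(v(g, HORIZON), True)] for g in GOAL]
    return cnf

def solve(cnf):
    """Brute-force satisfiability check: a stand-in for a real SAT engine."""
    variables = sorted({var for clause in cnf for var, _ in clause})
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[var] == pol for var, pol in clause) for clause in cnf):
            return model
    return None

model = solve(encode())
plan = sorted((i, a) for (a, i), true in model.items() if a in ACTIONS and true)
print("Plan:", plan)   # actions that are true in the satisfying assignment
```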
Benchmark Results
• International Planning Competitions (2004 & 2006)
  – 1st place for deterministic optimal planning
• Reason: progress in scaling SAT solvers
  – Handful of key ideas

Key Improvements
• Stochastic local search
  – Walksat (Selman, Kautz, & Cohen 1994)
  – Combines gradient descent with random walk

Key Improvements
• Pruning methods for backtrack search
  – Clause learning (Bayardo & Schrag 1997)
  – Improves power of underlying proof system (Beame, Kautz, & Sabharwal 2003)
[Diagram: relative power of proof systems: DPLL, regular resolution (RES), clause learning]

Key Improvements
• Restart strategies (Gomes, Selman, & Kautz 1999; Ruan, Kautz, & Horvitz 2004)
  – Reasoning engines often exhibit heavy-tailed run time distributions (over initial random seeds used for tie-breaking branching heuristics)
  – Frequent restarts eliminate heavy tails, can provide dramatic speedup
[Plot: log(backtracks) vs. log(cutoff)]

Supra-Human Planning
• International planning competition benchmarks
; Time 177.13
; ParsingTime 0.00
; MakeSpan 8
0: (TURN_TO SATELLITE0 STAR2 STAR8) [1]
0: (TURN_TO SATELLITE1 STAR0 GROUNDSTATION3) [1]
0: (TURN_TO SATELLITE2 STAR2 STAR4) [1]
0: (TURN_TO SATELLITE3 STAR2 PHENOMENON9) [1]
0: (TURN_TO SATELLITE4 STAR2 PHENOMENON9) [1]
0: (SWITCH_ON INSTRUMENT0 SATELLITE0) [1]
0: (SWITCH_ON INSTRUMENT3 SATELLITE1) [1]
0: (SWITCH_ON INSTRUMENT4 SATELLITE2) [1]
0: (SWITCH_ON INSTRUMENT7 SATELLITE3) [1]
0: (SWITCH_ON INSTRUMENT8 SATELLITE4) [1]
1: (CALIBRATE SATELLITE1 INSTRUMENT3 STAR0) [1]
1: (CALIBRATE SATELLITE2 INSTRUMENT4 STAR2) [1]
1: (CALIBRATE SATELLITE3 INSTRUMENT7 STAR2) [1]
1: (CALIBRATE SATELLITE4 INSTRUMENT8 STAR2) [1]
1: (TURN_TO SATELLITE0 STAR0 STAR2) [1]
2: (TURN_TO SATELLITE1 STAR10 STAR0) [1]
2: (TURN_TO SATELLITE2 PHENOMENON7 STAR2) [1]
2: (TURN_TO SATELLITE3 STAR5 STAR2) [1]
2: (TURN_TO SATELLITE4 PLANET6 STAR2) [1]
2: (CALIBRATE SATELLITE0 INSTRUMENT0 STAR0) [1]
3: (TURN_TO SATELLITE0 PLANET6 STAR0) [1]
3: (TAKE_IMAGE SATELLITE1 STAR10 INSTRUMENT3 THERMOGRAPH2) [1]
3: (TAKE_IMAGE SATELLITE2 PHENOMENON7 INSTRUMENT4 INFRARED1) [1]
3: (TAKE_IMAGE SATELLITE2 PHENOMENON7 INSTRUMENT4 IMAGE3) [1]
3: (TAKE_IMAGE SATELLITE2 PHENOMENON7 INSTRUMENT4 INFRARED0) [1]
3: (TAKE_IMAGE SATELLITE3 STAR5 INSTRUMENT7 IMAGE3) [1]
3: (TAKE_IMAGE SATELLITE4 PLANET6 INSTRUMENT8 INFRARED0) [1]
3: (TAKE_IMAGE SATELLITE4 PLANET6 INSTRUMENT8 INFRARED1) [1]
4: (TURN_TO SATELLITE2 PHENOMENON15 PHENOMENON7) [1]
4: (TURN_TO SATELLITE3 STAR8 STAR5) [1]
4: (TURN_TO SATELLITE1 STAR1 STAR10) [1]
4: (TURN_TO SATELLITE4 GROUNDSTATION3 PLANET6) [1]
5: (TURN_TO SATELLITE0 PLANET13 PLANET6) [1]
5: (TURN_TO SATELLITE1 PLANET14 STAR1) [1]
5: (TURN_TO SATELLITE4 PHENOMENON7 GROUNDSTATION3) [1]
5: (TAKE_IMAGE SATELLITE2 PHENOMENON15 INSTRUMENT4 INFRARED1) [1]
5: (TAKE_IMAGE SATELLITE2 PHENOMENON15 INSTRUMENT4 INFRARED0) [1]
5: (TAKE_IMAGE SATELLITE3 STAR8 INSTRUMENT7 IMAGE3) [1]
6: (TURN_TO SATELLITE2 STAR17 PHENOMENON15) [1]
6: (TURN_TO SATELLITE3 PLANET16 STAR8) [1]
6: (TURN_TO SATELLITE4 STAR11 PHENOMENON7) [1]
6: (TAKE_IMAGE SATELLITE0 PLANET13 INSTRUMENT0 SPECTROGRAPH4) [1]
6: (TAKE_IMAGE SATELLITE1 PLANET14 INSTRUMENT3 THERMOGRAPH2) [1]
7: (TAKE_IMAGE SATELLITE2 STAR17 INSTRUMENT4 INFRARED0) [1]
7: (TAKE_IMAGE SATELLITE3 PLANET16 INSTRUMENT7 IMAGE3) [1]
7: (TAKE_IMAGE SATELLITE4 STAR11 INSTRUMENT8 INFRARED1) [1]
7: (TURN_TO SATELLITE0 PHENOMENON9 PLANET13) [1]
7: (TURN_TO SATELLITE1 STAR4 PLANET14) [1]

Supra-Human Planning
• Industrial planning
  – Lithographic printing production planning
  – Complex set of discrete & geometric constraints
  – Expert+ performance (40 job ganging)
  – Domain-specific implementation of SAT techniques

Conclusion
• An early goal of AI was to create programs that exhibited common sense
• This goal proved elusive
  – Missing efficient methods for logical & probabilistic reasoning
  – Lacked grounding in the real world
  – Lacked compelling applications
• Today we have the tools, the sensors, and the motivation

Acknowledgements
• Tongxin Bai, Jeff Bilmes, Gaetano Borriello, Tanzeem Choudhury, Kate Deibel, Oren Etzioni, Dieter Fox, Carla Gomes, Craig Harman, Harlan Hile, Eric Horvitz, Kurt Johnson, Lin Liao, Alan Liu, Joseph Modayil, Don Patterson, Parag, Sangho Park, Bill Pentney, Matthai Philipose, Yongshao Ruan, Ashish Sabharwal, Tian Sang, Bart Selman, Dan Weld, Danny Wyatt