The Future of ACT-R in the Post-John Era

Appendix: The Future of ACT-R (Anderson, 2007, p. 247)

Upon re-reading the Appendix in Chapter 1 on the history of ACT-R, I was struck by two observations. The first is that I have had a very poor history of predicting where ACT would go next. Therefore, I am wise to keep this appendix short to minimize the embarrassment of my mispredictions. The second observation is that the evolution of the ACT theory has really been driven by external inputs (the right branches in Figure 1.11). Some of these came from outside our laboratory and some were essentially parallel activities within my laboratory at CMU.

[Figure 1.11 from Anderson (2007): the evolution of the ACT theory -- HAM (Anderson & Bower, 1973), PSG (Newell, 1973), ACTE (Anderson, 1976), the Interactive Activation Model (McClelland & Rumelhart, 1981), ACT* (Anderson, 1983), Rational Analysis (Anderson, 1990), ACT-R 2.0 (Anderson, 1993), ACT-RN (Lebiere & Anderson, 1993), ACT-R 4.0 (Anderson & Lebiere, 1998), EPIC (Meyer & Kieras, 1997), and ACT-R 6.0 (current) -- annotated with the capabilities gained along the way: 0. simulating complex cognition; 1. symbolic declarative; 2. symbolic procedural; 3. subsymbolic declarative; 4. subsymbolic procedural; 5. neurally inspired subsymbolic; 6. goal-directed processing; 7. production compilation; 8. Bayesian adaptation; 9. a public simulation system; 10. limited pattern matching; 11. end-to-end simulations; 12. perceptual-motor modules; 13. brain mapping; 14. an instructable production system.]

Caveat Emptor: So Anything I Say is Probably False

I hold these truths to be self-evident:
1. ACT-R 6.0 is perfect.
2. LISP was a mistake.
3. It’s time for John to go.

ACT-R 6.0 is Perfect

It is marvelously maintained. Its module structure facilitates the “let a thousand flowers bloom” philosophy. But where is the theory? While there is, in some sense, a core theory, there are multiple variants of the ACT-R theory, and ACT-R 6.0 formalizes how they relate. Despite the push in Cognitive Science toward toothbrush theories, we do have a community using the core ACT-R 6.0, even if they are using it to build their own toothbrushes. One of the reasons ACT-R 6.0 is perfect is that it allows ACT-R, the theory, to evolve to incorporate new theoretical developments. It is also gratifying that there is so much effort to produce versions of ACT-R with greater usability (at least for designated purposes).

The Dilemma of the No-Magic Doctrine

Recall Newell’s “20 questions” critique -- his unhappiness with theories that addressed little pieces with no idea whether they would fit into a working system. It really took decades and the work of many people to get ACT to where it is today -- perhaps not perfect, but ACT-R 6.0 is now:
1. Experimentally grounded (stimulus to response)
2. Capable of a detailed and precise accounting of data
3. Learnable through experience (including task instructions)
4. Capable of dealing with complex phenomena
5. Controlled by principled parameters (and all of them can be turned on)

However, it is a rare ACT-R model that does all of these things. Having built a few, I have to report that there is no reward. To publish them you have to focus on one aspect and ignore the rest. No one wants to read about all of the details. No one will check that you actually did everything you said; however, they may well be suspicious that you are hiding something. The benefit-cost ratio is much higher if you build a small, describable model that addresses just a piece and leaves the rest to the imagination.

LISP was a Mistake

It was a success to the extent that it brought smart people (e.g., Christian Lebiere) to the world of ACT-R.
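For readers who have never looked at the LISP implementation, the sketch below gives a rough sense of what defining a model directly in the ACT-R 6.0 LISP system looks like. It is a minimal, illustrative fragment in the style of the tutorial counting models -- the model name, chunk types, chunks, and production here are made up for illustration, and it assumes the ACT-R 6.0 LISP distribution has already been loaded into a Common Lisp session.

    (clear-all)

    (define-model count-sketch                ; illustrative model name
      (sgp :esc t :lf 0.05)                   ; enable subsymbolic computation; set latency factor
      (chunk-type count-order first second)   ; counting facts: after FIRST comes SECOND
      (chunk-type count-from start end count)
      (add-dm                                 ; a few declarative chunks
        (fact12 ISA count-order first 1 second 2)
        (fact23 ISA count-order first 2 second 3)
        (goal1  ISA count-from start 1 end 3))
      ;; one production: note the current number and request the next counting fact
      (p start-counting
         =goal>
            ISA    count-from
            start  =num
            count  nil
       ==>
         =goal>
            count  =num
         +retrieval>
            ISA    count-order
            first  =num)
      (goal-focus goal1))

For someone fluent in Lisp, this kind of fragment can be written and run interactively in minutes, which is part of what makes prototyping theory so fast; for everyone else, even this small example illustrates the accessibility problem discussed below.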
LISP also allows rapid prototyping of theory (though perhaps not uniquely). However, it makes ACT-R more inaccessible to others, and this is getting worse. It allows people to believe ACT-R is just an old-style symbolic AI theory. It exacerbates the no-magic dilemma:
1. LISP helps justify ignoring what ACT-R has achieved in addressing the no-magic issue.
2. The power of LISP may make it easier to fake solutions.

What to do about LISP? Trying to get agreement on an alternative is probably a bigger mistake; it would tear up our pleasant community with language wars. The LISP implementation will remain the reference even if more work is done in other languages. However, we should be welcoming of the various alternative implementations of ACT-R. The response to the various tools and implementations intended to make modeling in ACT-R easier is more complex:
1. They are clearly good things that should be welcomed when they achieve what they claim -- making it easier to pursue research and explore the ACT-R theory.
2. However, we need to be vigilant that we do not lose sight of the no-magic doctrine and slip into a fantasy world where we are not really modeling human cognition.

It is Time for John to Go!

I no longer hack ACT-R; I have come to use it as a tool. Partly this is because of the modular structure of ACT-R 6.0, but it is also because my interests have moved to understanding events at a higher temporal grain size than the architecture. I think this meeting is ample testimony to the fact that things are progressing just fine without my intimate involvement. And I think the community would prosper more if it did not depend on my social and organizational skills (e.g., Niels & Hedderik’s Spring School). So we (Dan) will continue to support 6.0, but ACT-R is very much a community effort.

And What About the 2011 18th Annual ACT-R Workshop?

It is in part a matter of what is decided about the next ICCM. If there is a summer ICCM in 2011 (rather than a spring ICCM in 2012), the default would be to try to attach a one-day workshop to it. If there is no summer ICCM, the default would be to have a CMU-based workshop on the heels of the ACT-R summer school. There has been a feeling that we need a community opportunity to reflect on the direction of ACT-R, much in the spirit of the 2001 postgraduate summer school. Over the past two years there has been some attempt to create this in the face of the two summer ICCMs, but for many reasons none of the plans took hold. One reason was that the ball was in my court. Maybe it is time to have someone else (a committee?) take charge of such community organizing, if not for 2011 then beyond.