CAP6938 Neuroevolution and Developmental Encoding
Real-time NEAT
Dr. Kenneth Stanley
October 18, 2006

Generations May Not Always Be Appropriate
• When a population is evaluated simultaneously
  – Many individuals are observable at the same time
  – Therefore, the entire population would change at once
  – A sudden change is incongruous and highly noticeable
• When a human interacts with one individual at a time
  – Want things to improve constantly

Steady State GA: One Individual Is Replaced at a Time
• Start by evaluating the entire first generation
• Then continually pick one individual to remove and replace it with a child of the best
  Start: Evaluate all (f1 f2 f3 f4 f5 f6 f7 f8)
  1) Remove a poor individual
  2) Create offspring from good parents
  3) Replace the removed individual
  Repeat…

Steady State During Simultaneous Evaluation: Similar but not Identical
• Several new issues arise when evolution is real-time
  – Evaluation is asynchronous
  – When to replace?
  – How to assign fitness?
  – How to display changes?

Regular NEAT Introduces Additional Challenges for Real Time
• Speciation equations are based on generations
• No “remove worst” operation is defined in the algorithm
• Dynamic compatibility thresholding assumes generations

Speciation Equations Based on Generations

How to Remove the Worst?
• No such operation exists in generational NEAT
• The worst individual may often belong to a new species
  – Removing it would destroy the protection of innovation
  – Loss of regular NEAT dynamics

Dynamic Compatibility Thresholding Assumes a Next Generation

Real-time NEAT Addresses Both the Steady State and Simultaneity Issues
• Real-time speciation
• Simultaneous and asynchronous evaluation
• Steady-state replacement
• Fast enough to change while a game is played
• Equivalent dynamics to regular NEAT

Main Loop (Non-Generational)
(a code sketch covering removal, parent-species choice, and replacement timing follows the NERO slides below)

Choosing the Parent Species

Finally: How Many Ticks Between Replacements?
• Intuitions:
  – The more often replacement occurs, the fewer are eligible
  – The larger the population, the more are eligible
  – The higher the age of maturity, the fewer are eligible

rtNEAT Is Implemented In NERO
• Download at http://nerogame.org
• rtNEAT source available
• Simulated demos have public appeal
  – Over 70,000 downloads
  – Appeared on Slashdot
  – Best Paper Award in Computational Intelligence and Games
  – Independent Games Festival Best Student Game Award
  – rtNEAT licensed
  – Worldwide media coverage

Media Coverage

NERO: NeuroEvolving Robotic Operatives
• NPCs improve in real time as the game is played
• Player can train the AI for a goal and style of play
• Each AI unit has a unique NN

NERO Battle Mode
• After training, evolved behaviors are saved
• Player assembles a team of trained agents
• Team is tested in battle against an opponent’s team

NERO Training: The Factory
• Reduces noise during evaluation
  – All evaluations start out similarly
• Robot bodies are produced by a “factory”
• Each body is sent back to the factory to respawn
• Bodies retain their NN unless chosen for replacement
• NNs have different ages
  – Fitness is a diminishing average over spawn trials

NERO Inputs and Outputs
• Enemy/Friend Radars
• Enemy On-Target Sensor
• Object Rangefinder Sensors
• Enemy Line-of-Fire Sensors
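The sketch below is a minimal, illustrative version of the non-generational loop summarized above: remove the worst eligible individual, pick a parent species with probability proportional to its average fitness, and replace once every n ticks. The data structures, parameter names (min_age, ineligible_fraction), and the breed() stub are my own simplifications, not the NERO/rtNEAT source; real rtNEAT operates on NEAT genomes, re-speciates the population, and adjusts the compatibility threshold on every replacement.

# Minimal rtNEAT-style loop sketch (assumed simplifications; not the NERO source).
import random
from dataclasses import dataclass, field

@dataclass
class Individual:
    genome: list              # stand-in for a NEAT network genome
    fitness: float = 0.0      # assumed nonnegative here for simplicity
    age: int = 0              # ticks since this individual was spawned

@dataclass
class Species:
    members: list = field(default_factory=list)

    def average_fitness(self):
        return sum(m.fitness for m in self.members) / len(self.members)

def ticks_between_replacements(pop_size, min_age, ineligible_fraction):
    """Law-of-eligibility style estimate: with population size |P|, a maturity
    age of min_age ticks, and a target fraction of the population allowed to
    be too young to replace, replace one individual roughly every
    min_age / (ineligible_fraction * |P|) ticks."""
    return max(1, round(min_age / (ineligible_fraction * pop_size)))

def remove_worst_eligible(species_list, min_age):
    """Remove the worst individual by species-size-adjusted fitness, but only
    among those old enough to have been fairly evaluated, so brand-new
    individuals (and hence new species) keep their protection."""
    candidates = [(m.fitness / len(s.members), m, s)
                  for s in species_list for m in s.members if m.age >= min_age]
    if not candidates:
        return None
    _, worst, species = min(candidates, key=lambda c: c[0])
    species.members.remove(worst)
    return worst

def choose_parent_species(species_list):
    """Pick a species with probability proportional to its average fitness,
    mirroring how generational NEAT apportions offspring among species."""
    non_empty = [s for s in species_list if s.members]
    weights = [s.average_fitness() for s in non_empty]
    return random.choices(non_empty, weights=weights, k=1)[0]

def breed(species):
    """Placeholder for NEAT crossover and mutation within the chosen species."""
    parent = random.choice(species.members)
    return Individual(genome=list(parent.genome))

def rtneat_tick(species_list, tick, n, min_age):
    """One simulation tick: individuals age (and, elsewhere, accumulate
    fitness); every n ticks one individual is replaced."""
    for s in species_list:
        for m in s.members:
            m.age += 1
    if tick % n == 0:
        removed = remove_worst_eligible(species_list, min_age)
        if removed is not None:
            parent_species = choose_parent_species(species_list)
            parent_species.members.append(breed(parent_species))
            # Real rtNEAT would re-speciate the child and adjust the
            # compatibility threshold here rather than once per generation.

For example, with an (illustrative) population of 50, a maturity age of 500 ticks, and half the population allowed to be ineligible at any time, ticks_between_replacements gives one replacement every 20 ticks.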
Further Applications?
• New kinds of games
• New kinds of AI in games
• New kinds of real-time simulations
• Training applications
• Interactive steady-state evolution

Next Topic: Improving the neural model
• Adaptive neural networks
• Change over a lifetime
• Leaky integrator neurons and CTRNNs (a minimal update sketch follows the readings below)

Readings:
• Evolutionary Robots with On-line Self-Organization and Behavioral Fitness, by Dario Floreano and Joseba Urzelai (2000)
• Evolving Adaptive Neural Networks with and Without Adaptive Synapses, by Kenneth O. Stanley, Bobby D. Bryant, and Risto Miikkulainen (2003)
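As a preview of the leaky-integrator/CTRNN topic, here is a minimal sketch of the standard continuous-time recurrent neural network update, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i, integrated with Euler's method. It is a generic formulation, not code from either paper above, and the network size and parameter values are purely illustrative.

# Generic CTRNN (leaky-integrator neuron) update sketch; illustrative parameters.
import numpy as np

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji*sigma(y_j+theta_j) + I_i."""
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))   # logistic activation
    dydt = (-y + W.T @ sigma(y + theta) + I) / tau
    return y + dt * dydt

# Example: a 3-neuron network driven by a constant external input.
rng = np.random.default_rng(0)
y = np.zeros(3)                      # neuron states (membrane potentials)
W = rng.normal(size=(3, 3))          # W[j, i] = weight from neuron j to neuron i
tau = np.full(3, 0.5)                # membrane time constants
theta = np.zeros(3)                  # biases
I = np.array([0.5, 0.0, 0.0])        # external input
for _ in range(100):
    y = ctrnn_step(y, W, tau, theta, I)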