Final Presentation
Traffic Light Control Using Reinforcement Learning
Daniel Goldberg, Andrew Elstein

The Problem
• Traffic congestion is estimated to cost Americans $121 billion in lost productivity, fuel, and other costs.
• Traffic lights are imperfect and contribute to this.
• They are usually statically controlled.
• A better method of controlling them can reduce waiting times significantly.

Approach
• Implement a "Reinforcement Learning" (RL) algorithm to control traffic lights.
• Create a simulation of traffic to tweak and test traffic light optimizations.

Implementation
• With minor adjustments, the algorithm could operate within existing infrastructure.
• Optimally, a camera system would be added.

Simulation
[Insert picture of visualization]

Simulation Structure
To build the simulation we created the following data structures:
• Cars: position, destination, velocity, map, color
• Roads: lanes, individual cells, intersection location matrix
• Intersections: position, traffic lights
In total, the simulation is coded in MATLAB with 3,100 lines of code.

Simulation Dynamics
• Cars are spawned randomly.
• Each follows a randomly generated path to its destination.
• Cars follow normal traffic rules.
• Roads are discretized into cells to make traffic easy to simulate: only one car can exist in each road cell, and cars move ahead one or two cells in each time-step, depending on the car's max velocity and whether there is an open spot (see the MATLAB sketch after the Demo slide).

Demo
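A minimal MATLAB sketch of the cell-update rule above, assuming a single one-way lane stored as a vector of car IDs; the names (advanceCars, road, maxVel) are illustrative, not taken from the actual 3,100-line implementation:

```matlab
function road = advanceCars(road, maxVel)
% One time-step of the discretized road from the Simulation Dynamics
% slide. road is a row vector of cells: 0 = empty, k > 0 = ID of car k.
% maxVel(k) is car k's top speed in cells per time-step (1 or 2).
% Cells are scanned front-to-back, so each cell ends the step holding
% at most one car.
    for c = numel(road):-1:1
        k = road(c);
        if k == 0, continue; end
        dest = c;                          % farthest open cell reachable
        for step = 1:maxVel(k)
            if c + step > numel(road) || road(c + step) ~= 0
                break;                     % blocked, or end of the road
            end
            dest = c + step;
        end
        road(c) = 0;
        road(dest) = k;                    % advance 0, 1, or 2 cells
    end
end
```

For example, advanceCars([1 0 2 0 0], [2 2]) returns [0 0 1 0 2]: car 2 advances two cells to the end of the lane and car 1 advances two cells behind it.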
Reinforcement Learning
• Wiering, "Multi-Agent Reinforcement Learning for Traffic Light Control"
• It introduced an objective function to minimize or maximize a goal value:

Q(tl, p, d, L) = Σ over (tl', p') of P(tl, p, d, L, tl', p') · [ R(tl, p, tl', p') + γ · V(tl', p', d) ]

V(tl, p, d) = Σ over L of P(L | tl, p, d) · Q(tl, p, d, L)

R(tl, p, tl', p') = 1 if [tl, p] = [tl', p'], 0 if [tl, p] ≠ [tl', p']

where tl = traffic light, p = current position, d = destination, L = light decision, γ = discounting constant, and ′ marks the next time-step. (A MATLAB sketch of this value function appears at the end of the deck.)

Reinforcement Learning Theory
• Coordinating a system of lights to respond to current conditions can reap exceptional benefit.
• The theory cleverly merges probability, game theory, and machine learning to efficiently control traffic.
• In our case, the expected value of each of a light's possible states is calculated.
• With this value function, a game is played to maximize it, in turn minimizing waiting time.

Results
• Wrote a script to compare the smart algorithm to static on-off-on-off lights (sketched at the end of the deck).
• Our algorithm reduced average waiting time, and thus traveling time, for a system with any number of cars.
• Traveling time for our implementation was reduced by an average of 10%: a 15% reduction from static control for sparse traffic systems, but only a 3% decrease under heavy congestion.

Results (cont.)

Extensions
• Fairness-weighted objective F(t):
  • ω = weighting constant
  • t = current time
  • ti = time of arrival for car i
  • If F(t) > 1, cars on road 1 get to go.
  • If F(t) < 1, cars on road 2 get to go.
  • (A hypothetical form of F(t) is sketched at the end of the deck.)

Further Extensions
• Car path optimization and rerouting
• Model expansion to traverse an entire city
• Inter-traffic-light communication
• Retesting with increased computational resources for modeling accuracy and robustness

RL In the News
• Samah El-Tantawy, a 2012 PhD recipient from the University of Toronto, won the 2013 IEEE best dissertation award for her research in RL.
• Her RL traffic model reduced rush-hour travel times by as much as 26 percent, and she is working on monetizing her research with small MARLIN-ATSC (Multi-Agent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers) computers.

Challenges
• Working out the data structures and how they would interact
• An object-oriented approach vs. MATLAB's index-based structures
• Understanding how cars would interact with each other
• Understanding the RL algorithm
• Adapting our model to use the RL algorithm
• Limited computational resources
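Appendix: Code Sketches

A minimal MATLAB sketch of the value function from the Reinforcement Learning slide, assuming states s = (tl, p) are encoded as integer indices and the transition model is stored as a lookup table; P, V, and lightDecisionValue are illustrative names rather than the project's actual code:

```matlab
function Q = lightDecisionValue(P, V, gamma, s, d, L)
% Q(tl,p,d,L) = sum over (tl',p') of
%     P(tl,p,d,L,tl',p') * ( R(tl,p,tl',p') + gamma * V(tl',p',d) )
% R = 1 when the car stays in place, i.e. it spent a step waiting,
% so minimizing Q minimizes the expected discounted waiting time.
%   P(s, d, L, s2)  probability of moving from state s to state s2
%   V(s2, d)        current value estimate for state s2
    nStates = size(V, 1);
    Q = 0;
    for s2 = 1:nStates
        R = double(s2 == s);               % car did not move
        Q = Q + P(s, d, L, s2) * (R + gamma * V(s2, d));
    end
end
% The state value then averages Q over the light decisions:
%   V(tl,p,d) = sum over L of P(L | tl,p,d) * Q(tl,p,d,L)
```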
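The Results slide mentions a comparison script; a sketch of what such a harness could look like, assuming a hypothetical runSimulation(controller, nCars) that runs one simulation and returns the average waiting time per car:

```matlab
% Hypothetical comparison harness; runSimulation is assumed, not part
% of the project's actual code.
carCounts = [10 50 100 200];              % sparse through congested
for n = carCounts
    wStatic = runSimulation('static', n); % fixed-cycle on-off lights
    wSmart  = runSimulation('rl', n);     % RL-controlled lights
    fprintf('%4d cars: static %.1f, RL %.1f (%.0f%% reduction)\n', ...
        n, wStatic, wSmart, 100 * (wStatic - wSmart) / wStatic);
end
```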
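The Extensions slide lists the symbols for the fairness-weighted objective but not the formula for F(t) itself. One plausible form consistent with the decision rule (F(t) > 1 lets road 1 go, F(t) < 1 lets road 2 go) is a ratio of ω-weighted accumulated waiting times; this reconstruction is an assumption, not the authors' formula:

```matlab
function F = fairnessRatio(t, tArrive1, tArrive2, omega)
% Hypothetical fairness ratio. omega = weighting constant, t = current
% time; tArrive1 and tArrive2 hold the arrival times ti of the cars
% currently waiting on roads 1 and 2. F > 1 favors road 1, F < 1
% favors road 2 (assumes both roads have at least one waiting car).
    F = sum((t - tArrive1).^omega) / sum((t - tArrive2).^omega);
end
```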