Cooperative bus holding and stop-skipping: A deep reinforcement learning framework

Document Type

Journal Article

Publication Date


Subject Area

place - north america, place - urban, mode - bus, operations - coordination, technology - intelligent transport systems, planning - service improvement, planning - service level


Keywords

Bus control, multi-agent reinforcement learning (MARL)


Abstract

The bus control problem combining holding and stop-skipping strategies is formulated as a multi-agent reinforcement learning (MARL) problem. Traditional MARL methods, designed for settings in which agents act jointly, are incompatible with the asynchronous nature of at-stop control tasks. A fully decentralized approach, on the other hand, leads to environment non-stationarity, since the state transition of an individual agent may be distorted by the actions of other agents. To address this, we propose a design of the state and reward function that increases the observability of the impact of agents' actions during training. An event-based mesoscopic simulation model is built to train the agents. We evaluate the proposed approach in a case study of a complex route in the Chicago transit network. The proposed method is compared with a standard headway-based control and a policy trained with MARL but without cooperative learning. The results show that the proposed method not only improves the level of service but is also more robust to operational uncertainties such as travel times and operator compliance with the recommended action.


Permission to publish the abstract has been given by Elsevier, copyright remains with them.


Transportation Research Part C Home Page: