Multi-agent deep reinforcement learning for adaptive coordinated metro service operations with flexible train composition

Document Type

Journal Article

Publication Date


Subject Area

mode - subway/metro, place - europe, place - urban, operations - coordination


Keywords
Metro service coordination, Flexible train composition, Multi-agent deep reinforcement learning, Actor–critic architecture, Markov decision process


This paper presents an adaptive control system for coordinated metro operations with flexible train composition using a multi-agent deep reinforcement learning (MADRL) approach. The control problem is formulated as a Markov decision process (MDP) in which multiple agents regulate different service lines of a metro network with passenger transfers. To ensure the computational effectiveness and stability of the control system, we adopt an actor–critic reinforcement learning framework in which each control agent is associated with a critic function for estimating future system states and an actor function for deriving local operational decisions. The critics and actors in the MADRL are represented by multi-layer artificial neural networks (ANNs). A multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed to train the actor and critic ANNs through successive simulated transitions over the entire metro network. The framework is tested on a real-world scenario covering the Bakerloo and Victoria lines of the London Underground, UK. Experimental results demonstrate that the proposed method outperforms previous centralized optimization and distributed control approaches in solution quality and computational performance. Further analysis shows the merits of MADRL for coordinated service regulation with flexible train composition. This study contributes to real-time coordinated metro network service operations with flexible train composition through advanced optimization techniques.
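The actor–critic scheme sketched in the abstract (one decentralized actor per line, one centralized critic per agent, updated MADDPG-style) can be illustrated with a deliberately minimal toy example. This is a hypothetical sketch, not the paper's implementation: the networks are linear rather than multi-layer ANNs, the state dimensions, reward signals, and the `Agent`/`maddpg_step` names are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 2   # one control agent per service line (e.g. two interacting metro lines)
OBS_DIM = 3    # hypothetical local state: headway deviation, train load, transfer demand
ACT_DIM = 1    # hypothetical action: a continuous headway/composition adjustment

class Agent:
    """One line-level agent: decentralized linear actor, centralized linear critic."""
    def __init__(self):
        # Actor maps the agent's LOCAL observation to its action.
        self.theta = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
        # Critic scores the JOINT observation-action vector of all agents.
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w = rng.normal(scale=0.1, size=joint_dim)

    def act(self, obs):
        return self.theta @ obs            # deterministic policy mu_i(o_i)

    def q(self, joint):
        return float(self.w @ joint)       # centralized Q_i(o_1..o_N, a_1..a_N)

def maddpg_step(agents, obs_all, rewards, next_obs_all, lr=0.01, gamma=0.99):
    """One simulated transition: TD update for each critic, DPG update for each actor."""
    acts = [ag.act(o) for ag, o in zip(agents, obs_all)]
    joint = np.concatenate([*obs_all, *acts])
    next_acts = [ag.act(o) for ag, o in zip(agents, next_obs_all)]
    next_joint = np.concatenate([*next_obs_all, *next_acts])
    for i, ag in enumerate(agents):
        # Critic: move Q_i toward the bootstrapped target r_i + gamma * Q_i(s', a').
        td_err = rewards[i] + gamma * ag.q(next_joint) - ag.q(joint)
        ag.w += lr * td_err * joint
        # Actor: deterministic policy gradient. For a linear critic,
        # grad_a Q_i is simply the critic-weight slice for agent i's action.
        a_start = N_AGENTS * OBS_DIM + i * ACT_DIM
        grad_a = ag.w[a_start:a_start + ACT_DIM]
        ag.theta += lr * np.outer(grad_a, obs_all[i])

# One toy transition: both agents observe unit states and receive scalar rewards.
agents = [Agent() for _ in range(N_AGENTS)]
obs = [np.ones(OBS_DIM) for _ in range(N_AGENTS)]
next_obs = [np.full(OBS_DIM, 0.5) for _ in range(N_AGENTS)]
maddpg_step(agents, obs, rewards=[1.0, -0.5], next_obs_all=next_obs)
```

The key structural point mirrored from the abstract is that each critic conditions on the full network state (enabling coordination across lines with passenger transfers), while each actor uses only its own line's observation, keeping execution decentralized.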


Permission to publish the abstract has been given by Elsevier, copyright remains with them.


Transportation Research Part B Home Page: