Imperial College London

Panagiotis Angeloudis

Faculty of Engineering, Department of Civil and Environmental Engineering

Reader in Transport Systems and Logistics

Contact

 

+44 (0)20 7594 5986
p.angeloudis
Website


Location

 

337, Skempton Building, South Kensington Campus



Publications

Citation

BibTeX format

@inbook{Ngu:2022:10.1177/03611981221093324,
author = {Ngu, E and Parada, L and Macias, JJE and Angeloudis, P},
booktitle = {Transportation Research Record},
doi = {10.1177/03611981221093324},
pages = {385--395},
title = {Decentralised Multi-Agent Reinforcement Learning Approach for the Same-Day Delivery Problem},
url = {http://dx.doi.org/10.1177/03611981221093324},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - CHAP
AB - Same-day delivery (SDD) services have become increasingly popular in recent years. Previous studies have usually modeled these as a class of dynamic vehicle routing problem (DVRP) in which goods must be delivered from a depot to a set of customers on the same day that the orders were placed. Adaptive exact solution methods for DVRPs can become intractable even for small problem instances. In this paper, the same-day delivery problem (SDDP) is formulated as a Markov decision process (MDP) and solved using a parameter-sharing Deep Q-Network, which corresponds to a decentralised multi-agent reinforcement learning (MARL) approach. For this, a multi-agent grid-based SDD environment is created, consisting of multiple vehicles, a central depot, and dynamic order generation. In addition, zone-specific order generation and reward probabilities are introduced. The performance of the proposed MARL approach is compared against a mixed-integer programming (MIP) solution. Results show that the proposed MARL framework performs on par with the MIP-based policy when the number of orders is relatively low. For problem instances with higher order arrival rates, computational results show that the MARL approach underperforms MIP by up to 30%. The performance gap between the two methods becomes smaller when zone-specific parameters are employed; the gap is reduced from 30% to 3% for a 5 × 5 grid scenario with 30 orders. Execution time results indicate that the MARL approach is, on average, 65 times faster than the MIP-based policy, and may therefore be more advantageous for real-time control, at least for small-sized instances.
AU - Ngu,E
AU - Parada,L
AU - Macias,JJE
AU - Angeloudis,P
DO - 10.1177/03611981221093324
EP - 395
PY - 2022///
SP - 385
TI - Decentralised Multi-Agent Reinforcement Learning Approach for the Same-Day Delivery Problem
T2  - Transportation Research Record
UR - http://dx.doi.org/10.1177/03611981221093324
ER -
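
For readers who want a concrete picture of the parameter-sharing approach described in the abstract above, the following is a minimal Python sketch of a shared Deep Q-Network with decentralised epsilon-greedy action selection. All class names, layer sizes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of a parameter-sharing DQN for a grid-based SDD setting.
# Names, dimensions, and architecture are illustrative assumptions.
import random
import torch
import torch.nn as nn

class SharedQNetwork(nn.Module):
    """Q-network whose weights are shared by every vehicle agent."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def select_actions(q_net, local_observations, epsilon, n_actions):
    """Decentralised epsilon-greedy: each agent acts on its own local
    observation, but all agents query the same shared network."""
    actions = []
    for obs in local_observations:
        if random.random() < epsilon:
            actions.append(random.randrange(n_actions))
        else:
            with torch.no_grad():
                q_values = q_net(torch.as_tensor(obs, dtype=torch.float32))
            actions.append(int(q_values.argmax().item()))
    return actions

def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN temporal-difference step on a batch of transitions pooled
    from all agents (parameter sharing means one network is trained)."""
    obs, act, rew, next_obs, done = batch
    q = q_net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1.0 - done) * target_net(next_obs).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The point of the parameter-sharing design is that only one set of weights is trained regardless of fleet size, while each vehicle still selects its own action from its local observation, which is what makes execution decentralised.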