Courier routing and assignment for food delivery service using reinforcement learning


Bozanta A., Cevik M., Kavaklioglu C., Kavuk E. M., Tosun Kühn A., Sonuc S. B., et al.

COMPUTERS & INDUSTRIAL ENGINEERING, vol.164, 2022 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 164
  • Publication Date: 2022
  • DOI: 10.1016/j.cie.2021.107871
  • Journal Name: COMPUTERS & INDUSTRIAL ENGINEERING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, ABI/INFORM, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, DIALNET, Civil Engineering Abstracts
  • Keywords: Q-learning, DQN, DDQN, Courier routing, Courier assignment, Real-time, Optimization, Algorithm
  • Istanbul Technical University Affiliated: Yes

Abstract

We consider a Markov decision process (MDP) model mimicking a real-world food delivery service, where the objective is to maximize the revenue derived from served requests given a limited number of couriers over a period of time. The model incorporates the courier location, order origin, and order destination. Each courier's task is to pick up an assigned order and deliver it to the requested destination. We apply three different approaches to solve this problem. In the first approach, we simplify the model to a single-courier case and solve the resulting model using Q-learning; the resulting policy is then applied to each courier in the multi-courier model, based on the assumption that all couriers are identical. In the second approach, we follow the same logic, but the underlying single-courier model is solved using Double Deep Q-Networks (DDQN). In the third approach, we consider the extensive model, in which a system state consists of the positions of all couriers and all orders in the system, and solve it with DDQN. Policies generated by these approaches are compared against a benchmark rule-based policy. We observe that the policy obtained by training a single courier with Q-learning accumulates higher rewards than the rule-based policy. Moreover, the DDQN algorithm for a single courier outperforms both the Q-learning and rule-based approaches; however, DDQN performance is noted to be highly dependent on the hyperparameters of the algorithm.
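
The paper itself does not include code; the snippet below is only a minimal illustrative sketch of the first approach (tabular Q-learning on a single-courier MDP), using a toy grid city with one fixed pickup and drop-off cell. The grid size, reward values, hyperparameters, and all names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 5                      # toy 5x5 city grid (assumption)
N_STATES = GRID * GRID * 2    # courier cell x {carrying order or not}
N_ACTIONS = 4                 # up, down, left, right
ORIGIN, DEST = 3, 22          # fixed pickup / drop-off cells (assumption)

def step(cell, carrying, action):
    """Move one cell; pick up at ORIGIN, earn revenue on delivery at DEST."""
    r, c = divmod(cell, GRID)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
    r, c = min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1)
    cell = r * GRID + c
    if not carrying and cell == ORIGIN:
        carrying = True
    if carrying and cell == DEST:
        return cell, False, 10.0, True   # delivered: revenue, episode ends
    return cell, carrying, -0.1, False   # small per-step travel cost

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1       # illustrative hyperparameters

for episode in range(5000):
    cell, carrying = int(rng.integers(GRID * GRID)), False
    for t in range(200):                 # cap episode length
        s = cell * 2 + carrying
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        cell, carrying, reward, done = step(cell, carrying, a)
        s2 = cell * 2 + carrying
        # Standard Q-learning update toward the bootstrapped target
        target = reward + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        if done:
            break

print("Learned action values at the pickup cell:", Q[ORIGIN * 2])
```

Under the paper's identical-couriers assumption, the same learned table (or a DDQN replacing it, as in the second approach) would then be queried independently for each courier in the multi-courier setting.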