Courier routing and assignment for food delivery service using reinforcement learning


Bozanta A., Cevik M., Kavaklioglu C., Kavuk E. M., Tosun Kühn A., Sonuc S. B., et al.

COMPUTERS & INDUSTRIAL ENGINEERING, vol. 164, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 164
  • Publication Date: 2022
  • DOI: 10.1016/j.cie.2021.107871
  • Journal Name: COMPUTERS & INDUSTRIAL ENGINEERING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, ABI/INFORM, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, DIALNET, Civil Engineering Abstracts
  • Keywords: Q-learning, DQN, DDQN, Courier routing, Courier assignment, REAL-TIME, OPTIMIZATION, ALGORITHM
  • Affiliated with İstanbul Teknik Üniversitesi: Yes

Abstract

We consider a Markov decision process model mimicking a real-world food delivery service, in which the objective is to maximize the revenue derived from served requests given a limited number of couriers over a period of time. The model incorporates the courier location, order origin, and order destination. Each courier's task is to pick up an assigned order and deliver it to the requested destination. We apply three different approaches to solve this problem. In the first approach, we simplify the model to a single-courier case and solve the resulting model using Q-learning. The resulting policy is applied to each courier in the multi-courier model under the assumption that all couriers are identical. The second approach follows the same logic, but the underlying single-courier model is solved using Double Deep Q-Networks (DDQN). In the third approach, the extensive model is considered, in which a system state consists of the positions of all couriers and all orders in the system; we use DDQN to solve this extensive model. Policies generated by these approaches are compared against a benchmark rule-based policy. We observe that the policy obtained by training a single courier with Q-learning accumulates higher rewards than the rule-based policy. In addition, the DDQN algorithm for a single courier outperforms both the Q-learning and rule-based approaches; however, DDQN performance is noted to be highly dependent on the algorithm's hyperparameters.
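As a rough illustration of the first approach only, the sketch below shows tabular Q-learning on a hypothetical single-courier pickup-and-delivery gridworld. The grid size, reward values, episode horizon, and the step function are illustrative assumptions for demonstration and are not taken from the paper; the paper's actual state encoding and revenue structure may differ.

import random
from collections import defaultdict

# Assumed toy environment: a courier on a small grid must reach the order
# origin to pick up, then the destination to deliver. All constants are
# illustrative, not the paper's parameters.
GRID = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def random_cell():
    return (random.randrange(GRID), random.randrange(GRID))

def step(state, action):
    """One transition: returns (next_state, reward, done)."""
    courier, origin, dest, carrying = state
    r, c = courier
    dr, dc = ACTIONS[action]
    courier = (min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1))
    reward, done = -1.0, False           # small cost per move (assumed)
    if not carrying and courier == origin:
        carrying = True                   # pick up the assigned order
    elif carrying and courier == dest:
        reward, done = 20.0, True         # revenue for a served request (assumed)
    return (courier, origin, dest, carrying), reward, done

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning for the single-courier model."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = (random_cell(), random_cell(), random_cell(), False)
        for _ in range(100):              # episode horizon (assumed)
            if random.random() < eps:     # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
            nxt, reward, done = step(state, a)
            best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
            state = nxt
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = q_learning()
    print("Learned Q-table entries:", len(Q))

In the paper's second and third approaches, the Q-table would be replaced by a neural network trained with DDQN (target network plus double estimation of the action value); that implementation is not reproduced here.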