Model-Based Reinforcement Learning for Advanced Adaptive Cruise Control: A Hybrid Car Following Policy

Yavas U., Kumbasar T., Üre N. K.

33rd IEEE Intelligent Vehicles Symposium (IEEE IV), Aachen, Germany, 5-9 June 2022, pp.1466-1472

  • Publication Type: Conference Paper / Full Text
  • Doi Number: 10.1109/iv51971.2022.9827279
  • City: Aachen
  • Country: Germany
  • Page Numbers: pp.1466-1472
  • Istanbul Technical University Affiliated: Yes


Adaptive cruise control (ACC) is one of the frontier functionalities for highly automated vehicles and has been widely studied by both academia and industry. However, previous ACC approaches are reactive and rely on precise information about the current state of a single lead vehicle. With advances in artificial intelligence, particularly in reinforcement learning, there is a significant opportunity to enhance this functionality. This paper presents an advanced ACC concept with a unique environment representation and a model-based reinforcement learning (MBRL) technique that enables predictive driving. By predictive, we refer to the capability to handle multiple lead vehicles and to maintain internal predictions about the traffic environment, which avoids reactive short-term policies. Moreover, we propose a hybrid policy that combines classical car-following policies with the MBRL policy to avoid accidents by monitoring the internal model of the MBRL policy. Our extensive evaluation in a realistic simulation environment shows that the proposed approach is superior to the reference model-based and model-free algorithms. The MBRL agent requires only 150k samples (approximately 50 hours of driving) to converge, which is 4x more sample-efficient than model-free methods.
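The hybrid-policy idea in the abstract can be sketched as follows: the MBRL policy's learned internal model is rolled out for a short horizon, and a classical car-following controller (here the Intelligent Driver Model, IDM) takes over whenever the rollout predicts an unsafe gap. This is a minimal illustrative sketch; the function names, the IDM fallback choice, the horizon, and the safety threshold are assumptions, not the paper's actual formulation or parameters.

```python
# Hypothetical sketch of a hybrid car-following policy: monitor the MBRL
# agent's internal model and fall back to a classical IDM controller when
# the model predicts a near-collision. All parameters are illustrative.

def idm_accel(gap, ego_v, lead_v, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0):
    """Classical Intelligent Driver Model acceleration (fallback policy)."""
    # Desired dynamic gap: jam distance + time-headway term + braking term.
    s_star = s0 + ego_v * T + ego_v * (ego_v - lead_v) / (2 * (a_max * b) ** 0.5)
    return a_max * (1 - (ego_v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

def hybrid_accel(state, mbrl_policy, mbrl_model, horizon=10, min_safe_gap=2.0):
    """Return the MBRL action unless the learned model predicts an unsafe gap.

    `mbrl_policy(state) -> accel` and `mbrl_model(state, accel) -> next_state`
    are stand-ins for the trained policy and its internal dynamics model.
    """
    action = mbrl_policy(state)
    s = state
    for _ in range(horizon):          # roll out the internal model
        s = mbrl_model(s, mbrl_policy(s))
        if s["gap"] < min_safe_gap:   # predicted near-collision: use fallback
            return idm_accel(state["gap"], state["ego_v"], state["lead_v"])
    return action
```

In this sketch the fallback reacts to the *predicted* trajectory rather than only the current gap, which is one plausible reading of "monitoring the internal model of the MBRL policy" to avoid accidents.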