Theses and Dissertations

ORCID

https://orcid.org/0000-0001-9178-2375

Advisor

Ball, John E.

Committee Member

Gurbuz, Ali Cafer

Committee Member

Goodin, Chris

Committee Member

Goberville, Nick

Date of Degree

8-13-2024

Original embargo terms

Embargo 6 months

Document Type

Dissertation - Open Access

Major

Electrical & Computer Engineering

Degree Name

Doctor of Philosophy (Ph.D.)

College

James Worth Bagley College of Engineering

Department

Department of Electrical and Computer Engineering

Abstract

Longitudinal velocity control, or adaptive cruise control (ACC), is a common advanced driving feature aimed at assisting the driver and reducing fatigue. It maintains the velocity of a vehicle and ensures a safe distance from the preceding vehicle. Many controllers for ACC are available, such as Proportional-Integral-Derivative (PID) control and Model Predictive Control (MPC). However, conventional controllers are limited because they are designed for simplified driving scenarios. Artificial intelligence (AI) and machine learning (ML) have made robust navigation and decision-making possible in complex environments. Recent approaches, such as reinforcement learning (RL), have demonstrated remarkable performance in terms of faster processing and effective navigation through unknown environments. This dissertation explores an RL approach, the deep deterministic policy gradient (DDPG), for longitudinal velocity control. The baseline DDPG model is modified in two ways. In the first method, an attention mechanism is applied to the neural network (NN) of the DDPG model. Integrating the attention mechanism into the DDPG model reduces focus on less important features and enhances overall model effectiveness. In the second method, the inputs of the actor and critic networks of DDPG are replaced with the outputs of a self-supervised network. The self-supervised learning process allows the model to accurately predict future states from current states and actions. A custom reward function is designed for the RL algorithm that accounts for overall safety, efficiency, and comfort. The proposed models are trained on human car-following data and evaluated on multiple datasets, including publicly available data, simulated data, and sensor data collected from real-world environments. The analyses demonstrate that the new architectures maintain strong robustness across the various datasets and outperform current state-of-the-art models.
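The abstract describes a custom reward function balancing safety, efficiency, and comfort. The dissertation's actual reward terms and coefficients are not given here, so the following is only a minimal illustrative sketch of how such a weighted reward for car-following could be structured; the weights, the desired time headway, and the specific penalty terms are all assumptions.

```python
# Illustrative weights and target headway; these values are assumptions,
# not the coefficients used in the dissertation.
W_SAFETY, W_EFFICIENCY, W_COMFORT = 1.0, 0.5, 0.2
DESIRED_HEADWAY_S = 1.5  # assumed desired time headway in seconds

def acc_reward(gap_m, ego_speed_mps, lead_speed_mps, jerk_mps3):
    """Toy car-following reward combining safety, efficiency, and comfort."""
    # Safety: penalize deviation from the desired time headway.
    headway = gap_m / max(ego_speed_mps, 0.1)  # guard against division by zero
    safety = -abs(headway - DESIRED_HEADWAY_S)
    # Efficiency: penalize speed mismatch with the preceding vehicle.
    efficiency = -abs(ego_speed_mps - lead_speed_mps)
    # Comfort: penalize jerk (rate of change of acceleration).
    comfort = -abs(jerk_mps3)
    return W_SAFETY * safety + W_EFFICIENCY * efficiency + W_COMFORT * comfort
```

Each term is a negative penalty, so the reward peaks at zero when the ego vehicle holds the desired headway, matches the lead vehicle's speed, and applies no jerk; unsafe or uncomfortable states score strictly lower.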

Available for download on Sunday, December 15, 2024
