Theses and Dissertations

ORCID

https://orcid.org/0009-0004-0778-0529

Advisor

Ball, John E.

Committee Member

Dabbiru, Lalitha

Committee Member

Price, Stanton R.

Committee Member

Luo, Chaomin

Date of Degree

5-16-2025

Original Embargo Terms

Immediate Worldwide Access

Document Type

Graduate Thesis - Open Access

Major

Electrical and Computer Engineering

Degree Name

Master of Science (M.S.)

College

James Worth Bagley College of Engineering

Department

Department of Electrical and Computer Engineering

Abstract

Autonomous vehicles commonly employ multiple sensors to perceive their surroundings. Ideally, fusing these sensors improves perception over using any single sensor alone. An autonomous system can perform object localization and classification, often using a visual camera, to understand a scene intelligently. Object detection and classification can also be applied to LiDAR and infrared (IR) data to further enhance the system's scene awareness. Herein, sensor-level, decision-level, and feature-level fusion are explored to assess their impact on perception and to mitigate sensor disagreements. Specifically, the fusion of RGB, LiDAR, and IR sensor data to improve object classification and scene awareness is investigated. Additionally, an SVM-based feature-fusion method is proposed as an alternative avenue for improving the computational efficiency of a fusion framework. Results show that multi-modal perception enhances accuracy by balancing the strengths and weaknesses of the individual sensors. Experiments were conducted on a multi-sensor off-road dataset collected at the Center for Advanced Vehicular Systems (CAVS) at Mississippi State University.
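For context, one common way to realize the feature-level fusion the abstract describes is to concatenate per-sensor feature vectors into a single representation and train one classifier on the result. The sketch below illustrates that idea with an SVM; the feature dimensions, synthetic data, and scikit-learn pipeline are illustrative assumptions, not the thesis's actual feature extractors or the CAVS dataset.

```python
# A minimal sketch of SVM-based feature-level fusion, assuming each sensor
# (RGB, LiDAR, IR) has already been reduced to a fixed-length feature vector
# per object. All shapes and data here are illustrative stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_classes = 300, 3

# Stand-in feature vectors; in practice these would come from per-sensor
# feature extractors (e.g., CNN embeddings or handcrafted descriptors).
rgb_feats = rng.normal(size=(n_samples, 64))
lidar_feats = rng.normal(size=(n_samples, 32))
ir_feats = rng.normal(size=(n_samples, 16))
labels = rng.integers(0, n_classes, size=n_samples)

# Feature-level fusion: concatenate per-sensor features into one vector,
# then train a single SVM on the fused representation.
fused = np.concatenate([rgb_feats, lidar_feats, ir_feats], axis=1)
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0
)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"Fused-feature SVM accuracy: {clf.score(X_test, y_test):.3f}")
```

Decision-level fusion, by contrast, would train one classifier per sensor and combine their outputs afterward (e.g., by majority vote), while sensor-level fusion combines the raw data before any features are extracted.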
