Theses and Dissertations
Issuing Body
Mississippi State University
Advisor
Ball, John E.
Committee Member
Anderson, Derek T.
Committee Member
Archibald, Christopher J.
Committee Member
Younan, Nicolas H.
Date of Degree
8-10-2018
Document Type
Dissertation - Open Access
Major
Electrical and Computer Engineering
Degree Name
Doctor of Philosophy (Ph.D.)
College
James Worth Bagley College of Engineering
Department
Department of Electrical and Computer Engineering
Abstract
In a three-dimensional world, perceiving the objects around us requires not only classifying them but also knowing where they are. The task of object detection combines classification and localization: in addition to predicting each object's category, we also predict its location from sensor data. Because it is not known ahead of time how many objects of interest appear in the sensor data or where they are, the output size of object detection may vary, which makes the problem difficult. In this dissertation, I focus on the task of object detection and use fusion to improve detection accuracy and robustness. Specifically, I propose a method to calculate a measure of conflict. This method does not need external knowledge about the credibility of each source; instead, it uses information from the sources themselves to help assess each source's credibility. I apply the proposed measure of conflict to fuse independent sources of tracking information from multiple stereo cameras. I also propose a computational intelligence system for more accurate object detection in real time. The proposed system applies online image augmentation before the detection stage during testing and fuses the detection results afterward. The fusion method is computationally intelligent, based on a dynamic analysis of agreement among inputs. Compared with other fusion operations such as averaging, median, and non-maxima suppression, the proposed method produces more accurate results in real time. Finally, I propose a multi-sensor fusion system that incorporates the advantages and mitigates the disadvantages of each sensor type (LiDAR and camera). In general, a camera provides richer texture and color information but cannot work in low visibility, whereas LiDAR provides accurate point positions and works at night or in moderate fog or rain.
The proposed system exploits the advantages of both camera and LiDAR while mitigating their disadvantages. The results show that, compared with LiDAR or camera detection alone, the fused result extends the detection range by up to 40 meters with increased detection accuracy and robustness.
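The idea of fusing detections by the agreement among inputs can be illustrated with a minimal sketch: each bounding box is weighted by its mean IoU with the boxes from the other sources before averaging. The function names and the IoU-weighted scheme here are illustrative assumptions, not the dissertation's actual fusion algorithm.

```python
# Hypothetical sketch of agreement-weighted box fusion; not the
# dissertation's actual method.

def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def fuse_boxes(boxes):
    """Average the boxes, weighting each by its mean IoU with the others."""
    weights = []
    for i, b in enumerate(boxes):
        others = [iou(b, o) for j, o in enumerate(boxes) if j != i]
        weights.append(sum(others) / len(others) if others else 1.0)
    total = sum(weights)
    if total == 0:  # no agreement at all: fall back to a plain average
        weights, total = [1.0] * len(boxes), float(len(boxes))
    return tuple(sum(w * b[k] for w, b in zip(weights, boxes)) / total
                 for k in range(4))

# Example: three sources report nearly the same object; the fused box
# lies within the span of the inputs.
fused = fuse_boxes([(10, 10, 50, 50), (12, 11, 52, 49), (11, 12, 49, 51)])
```

Unlike non-maxima suppression, which discards all but one box, this kind of weighted fusion lets every agreeing source contribute to the final estimate.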
URI
https://hdl.handle.net/11668/21125
Recommended Citation
Wei, Pan, "Fusion for Object Detection" (2018). Theses and Dissertations. 2362.
https://scholarsjunction.msstate.edu/td/2362