Theses and Dissertations
ORCID
https://orcid.org/0000-0002-4617-2022
Advisor
Ball, John E.
Committee Member
Gurbuz, Ali C.
Committee Member
Diao, Junming
Committee Member
Green, Ryan
Date of Degree
12-12-2025
Original embargo terms
Immediate Worldwide Access
Document Type
Dissertation - Open Access
Major
Electrical and Computer Engineering
Degree Name
Doctor of Philosophy (Ph.D.)
College
James Worth Bagley College of Engineering
Department
Department of Electrical and Computer Engineering
Abstract
Most conventional radar-based classification methods, including human activity recognition (HAR) and modulation recognition, rely on computationally intensive two-stage processes: time-frequency (TF) transformations, such as the short-time Fourier transform (STFT), first generate micro-Doppler signatures (μ-Ds), which are then classified by a deep neural network (DNN). This pipeline introduces significant temporal latency and limits real-time applicability. To overcome these limitations, this dissertation proposes a novel complex-valued deep learning (DL) framework employing structured parameterized learnable filter (PLF) banks, enabling direct classification from raw radar data.

The research first introduces the high-resolution spectrogram network (HRSpecNet), a DL model designed to reconstruct high-resolution (HR) μ-Ds directly from complex-valued 1D radar data. HRSpecNet leverages an autoencoder for noise suppression, a learnable STFT block for adaptive frequency transformations, and a U-Net block for HR image reconstruction. Evaluations on synthetic signals and a challenging real-world American Sign Language (ASL) dataset demonstrate a 3.48% improvement in classification accuracy over traditional STFT-based methods, highlighting HRSpecNet’s superior resolution, robustness to noise, and computational efficiency.

Building on HRSpecNet, the research further introduces the parameterized learnable filter network (PLFNet), which integrates complex-valued PLFs, including Sinc, Gaussian, Gammatone, and Ricker filters, directly into convolutional neural network (CNN) architectures. Unlike conventional methods, PLFNet classifies raw 1D radar data without explicit μ-D generation, providing enhanced interpretability and computational efficiency. PLFNet achieves approximately 47% higher accuracy than standard 1D CNNs and about 7% higher accuracy than CNNs employing real-valued learnable filters. Furthermore, PLFNet matches the accuracy of standard CNNs applied to μ-D images while reducing computational latency by approximately 75%, making it particularly suitable for real-time applications.

Finally, the dissertation introduces the time-gated parameterized learnable filter network (TG-PLFNet), featuring time-gated parameterized learnable filters capable of adaptively focusing on critical temporal and spectral signal features, which is crucial for non-stationary signals. TG-PLFNet demonstrates superior performance on a newly synthesized dataset containing 51 radar and communication waveform modulations under varied conditions, surpassing existing automatic modulation recognition (AMR) models in accuracy, inference time, and interpretability. Collectively, the developed methods offer computationally efficient, interpretable, and high-performing solutions for TF-domain-based radar and radio frequency (RF) signal classification, advancing real-time capabilities and practical applicability in RF sensing applications.
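As a rough illustration of the filter concept described in the abstract (the dissertation itself is the authoritative source), the following minimal PyTorch sketch shows one way a complex-valued Sinc PLF bank with an optional learnable Gaussian time gate could be written. The class name ComplexSincPLF, the parameterization by center frequency and bandwidth, and all initial values are illustrative assumptions, not the author's implementation.

# Hedged sketch (not the dissertation's code): a complex-valued Sinc
# parameterized learnable filter (PLF) bank with an optional learnable
# Gaussian time gate, in the spirit of PLFNet / TG-PLFNet.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComplexSincPLF(nn.Module):
    """Bank of complex band-pass filters h[n] = 2B sinc(2B n) exp(j 2*pi*fc*n),
    parameterized only by learnable center frequencies fc and bandwidths B
    (normalized to the sampling rate), optionally gated in time."""

    def __init__(self, num_filters=32, kernel_size=129, time_gated=False):
        super().__init__()
        self.kernel_size = kernel_size
        self.time_gated = time_gated
        # Learnable filter parameters (assumed initialization: linear frequency grid).
        self.fc = nn.Parameter(torch.linspace(0.02, 0.45, num_filters))  # center frequencies
        self.bw = nn.Parameter(torch.full((num_filters,), 0.02))         # bandwidths
        if time_gated:
            # Learnable Gaussian gate: per-filter center and width.
            self.gate_center = nn.Parameter(torch.zeros(num_filters))
            self.gate_width = nn.Parameter(torch.full((num_filters,), 0.5))
        # Symmetric time axis n = -(K-1)/2 ... (K-1)/2
        n = torch.arange(kernel_size) - (kernel_size - 1) / 2
        self.register_buffer("n", n)

    def _kernels(self):
        fc = self.fc.abs().clamp(max=0.5).unsqueeze(1)            # (F, 1)
        bw = self.bw.abs().clamp(min=1e-3, max=0.5).unsqueeze(1)  # (F, 1)
        n = self.n.unsqueeze(0)                                   # (1, K)
        lowpass = 2.0 * bw * torch.sinc(2.0 * bw * n)             # real low-pass prototype
        phase = 2.0 * math.pi * fc * n
        h_re = lowpass * torch.cos(phase)                         # real part of analytic filter
        h_im = lowpass * torch.sin(phase)                         # imaginary part
        if self.time_gated:
            c = self.gate_center.unsqueeze(1) * (self.kernel_size / 2)
            w = self.gate_width.abs().clamp(min=1e-2).unsqueeze(1) * self.kernel_size
            gate = torch.exp(-0.5 * ((n - c) / w) ** 2)           # learnable Gaussian time gate
            h_re, h_im = h_re * gate, h_im * gate
        return h_re.unsqueeze(1), h_im.unsqueeze(1)               # (F, 1, K) each

    def forward(self, x_re, x_im):
        """x_re, x_im: (batch, 1, samples) real and imaginary parts of raw IQ data."""
        h_re, h_im = self._kernels()
        pad = self.kernel_size // 2
        # Complex filtering via four real convolutions: (a + jb) * (c + jd).
        y_re = F.conv1d(x_re, h_re, padding=pad) - F.conv1d(x_im, h_im, padding=pad)
        y_im = F.conv1d(x_re, h_im, padding=pad) + F.conv1d(x_im, h_re, padding=pad)
        return y_re, y_im


if __name__ == "__main__":
    iq = torch.randn(2, 1, 1024), torch.randn(2, 1, 1024)  # toy complex radar batch
    bank = ComplexSincPLF(num_filters=32, kernel_size=129, time_gated=True)
    out_re, out_im = bank(*iq)
    print(out_re.shape, out_im.shape)  # torch.Size([2, 32, 1024]) each

In this sketch the complex filtering is realized with four real-valued 1D convolutions, and time gating simply multiplies each kernel by a learnable Gaussian window; other PLF prototypes named in the abstract (Gaussian, Gammatone, Ricker) would replace the low-pass sinc term.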
Sponsorship (Optional)
Engineer Research and Development Center (ERDC)
Recommended Citation
Biswas, Sabyasachi, "Complex-valued structured parameterized learnable filter banks for time-frequency domain based classification" (2025). Theses and Dissertations. 6832.
https://scholarsjunction.msstate.edu/td/6832