Theses and Dissertations

ORCID

https://orcid.org/0000-0002-9140-542X

Advisor

Perkins, Andy

Committee Member

Rahimi, Shahram

Committee Member

Jones, Adam

Committee Member

Chen, Zhiqian

Date of Degree

12-12-2025

Original embargo terms

Embargo 1 year

Document Type

Dissertation - Open Access

Major

Computer Science

Degree Name

Doctor of Philosophy (Ph.D.)

College

James Worth Bagley College of Engineering

Department

Department of Computer Science and Engineering

Abstract

In the burgeoning era of artificial intelligence (AI), the pervasive application of this technology across industries and academic fields has been notably impactful, yet it has also surfaced critical challenges, particularly in explainability and interpretability. Despite remarkable advancements, current AI and machine learning (ML) models often function as black boxes: their decision-making processes, though complex and sophisticated, lack clear, understandable explanations for the end users and stakeholders involved. This opacity in algorithmic decision-making not only hampers user trust and adoption but also raises ethical and fairness concerns, especially where crucial, high-impact decisions are made. This dissertation therefore focuses on enhancing the explainability of AI models through a novel approach in explainable AI (XAI) research, addressing these challenges and bridging the gap between high-accuracy model predictions and their interpretability. The objective of this research is to enhance current XAI algorithms, with a specific emphasis on refining Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). While these methods have played a crucial role in prediction-level explainability, they are not without drawbacks; one notable limitation is the possibility of generating invalid data points during the explanation process. This research proposes an approach to improve the reliability and validity of the explanations these algorithms provide by integrating a Variational AutoEncoder (VAE) trained on the training dataset, ensuring that realistic, domain-valid data points are generated around a test instance during the explanation process. Furthermore, a sensitivity-based feature importance mechanism, weighted by a Boltzmann distribution, is proposed to refine and optimize the explanation of the black-box model's behavior in the vicinity of the intended test instance.
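
The following is a minimal, illustrative sketch of the idea described in the abstract, not the dissertation's actual implementation. It assumes a scikit-learn-style prediction function and a small PyTorch VAE; the names used here (SimpleVAE, vae_lime_explain, model_predict, temperature) are hypothetical stand-ins. It shows how latent-space perturbations decoded by a trained VAE could replace raw Gaussian perturbations when fitting a locally weighted linear surrogate, with Boltzmann-style weights concentrating influence on the nearest decoded neighbours.

    # Illustrative sketch only -- names and architecture are assumptions, not the author's code.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.linear_model import Ridge

    class SimpleVAE(nn.Module):
        """Minimal tabular VAE: MLP encoder/decoder with a Gaussian latent space."""
        def __init__(self, n_features, latent_dim=8):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
            self.mu = nn.Linear(32, latent_dim)
            self.logvar = nn.Linear(32, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

        def encode(self, x):
            h = self.enc(x)
            return self.mu(h), self.logvar(h)

        def decode(self, z):
            return self.dec(z)

    def vae_lime_explain(model_predict, vae, x, n_samples=500, sigma=0.5, temperature=1.0):
        """Fit a locally weighted linear surrogate on VAE-decoded neighbours of x."""
        x_t = torch.tensor(x, dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            mu, _ = vae.encode(x_t)                              # latent code of the test instance
            z = mu + sigma * torch.randn(n_samples, mu.shape[1]) # perturb in latent space
            neighbours = vae.decode(z).numpy()                   # decode to realistic, domain-valid points
        preds = model_predict(neighbours)                        # query the black-box model
        dist = np.linalg.norm(neighbours - x, axis=1)            # distance to the test instance
        weights = np.exp(-dist / temperature)                    # Boltzmann-style locality weights
        weights /= weights.sum()
        surrogate = Ridge(alpha=1.0).fit(neighbours, preds, sample_weight=weights)
        return surrogate.coef_                                   # local feature-importance estimates

In this sketch, model_predict stands for any black-box prediction function (for example, a classifier's probability for one class), the VAE is assumed to be already trained on the same training data as the black-box model, and temperature controls how sharply the Boltzmann weights concentrate on the closest decoded samples.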

Available for download on Friday, January 15, 2027
