Explainable AI: Bridging the Gap between Machine Learning Models and Human Understanding


Rajiv Avacharmal

Abstract

Explainable AI (XAI) is a key game-changing capability for machine learning models, making them more transparent, accountable, and usable across different applications. In this paper we investigate four classes of explanation methods (LIME, SHAP, Anchor, and Decision Tree-based Explanation) for disentangling the decision-making process of black-box models in different fields. Our experiments use datasets covering several domains, such as healthcare, finance, and image classification, and compare the accuracy, fidelity, coverage, precision, and human satisfaction of each explanation method. The results show that the rule-based approach (Decision Tree-based Explanation) generally outperforms the other model-agnostic explanation methods, achieving higher accuracy, fidelity, coverage, and precision regardless of the underlying classifier. In addition, respondents in the qualitative evaluation reported being very satisfied with the decision tree-based explanations and found them easy to understand. Most respondents also noted that such explanations are more intuitive and meaningful. These findings support the use of interpretable AI techniques for bridging the gap between machine learning models and human understanding, thereby promoting transparency and accountability in AI-driven decision-making.
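
As a concrete illustration of the kind of comparison described in the abstract, the minimal sketch below fits a decision-tree surrogate to a black-box classifier and measures its fidelity, i.e., its agreement with the black box on held-out data, alongside its accuracy against the true labels. The dataset, black-box model, tree depth, and variable names are illustrative assumptions, not the paper's actual experimental setup.

# Minimal sketch (illustrative, not the paper's protocol): evaluate the
# fidelity of a global decision-tree surrogate against a black-box model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box model trained on a health-domain dataset (a stand-in for the
# healthcare data mentioned in the abstract).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate decision tree fit to the black box's *predictions*, so that its
# rules approximate the black box rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: fraction of held-out points where surrogate and black box agree.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
# Accuracy of the surrogate against the true labels, for comparison.
accuracy = np.mean(surrogate.predict(X_test) == y_test)
print(f"fidelity to black box: {fidelity:.3f}, accuracy vs. labels: {accuracy:.3f}")

# Human-readable if-then rules extracted from the surrogate tree.
print(export_text(surrogate, feature_names=list(X.columns)))

Coverage and precision of individual rules, as well as the LIME, SHAP, and Anchor baselines, would be evaluated analogously on the same held-out split; the surrogate-fidelity measurement shown here is only the simplest building block of such a comparison.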
