A Study on Explainable Artificial Intelligence (AI) Techniques to Increase Transparency in Predictive Analytics for Education in India: Conceptual Analysis

Abhishek Jain, Shivani Gulati, Shama Rani, Vandana, Divyansh Taneja

Abstract

Predictive analytics has become central to modern educational technology, underpinning early warning systems, student performance prediction, adaptive learning, and institutional decision-making. While complex machine learning (ML) models enhance predictive accuracy, their adoption has introduced challenges of model opacity, bias, and limited interpretability. These challenges undermine stakeholder trust and raise ethical concerns, particularly in high-stakes educational decisions. Explainable Artificial Intelligence (XAI) offers a promising pathway to increase transparency, improve trust, and enable the responsible deployment of predictive systems in education. This study critically evaluates key XAI techniques, including feature importance, SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), counterfactual explanations, inherently interpretable models, and surrogate modeling, and analyzes their applicability and limitations in educational contexts. Quantitative insights from existing studies indicate that incorporating XAI can improve stakeholder trust by up to 30% and decision accuracy by 15% in certain predictive tasks. The discussion highlights how XAI helps teachers, administrators, and students understand prediction outputs and mitigate risks such as algorithmic bias. The study concludes with strategic recommendations for integrating XAI frameworks into educational predictive analytics pipelines and proposes future research directions emphasizing empirical validation and user-centered design.
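
To make one of the techniques named above concrete, the following minimal Python sketch applies SHAP feature attributions to a hypothetical student-performance regressor. It is illustrative only, not the authors' pipeline: the feature names, the synthetic data, and the choice of a random-forest model are assumptions introduced for this example.

    # Minimal sketch: SHAP attributions for a hypothetical student-performance
    # model. Data and feature names are synthetic assumptions, not study data.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical predictors of a student's final score.
    feature_names = ["attendance", "prior_gpa", "lms_logins", "submission_rate"]
    X = rng.random((200, 4))
    y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 0.05, 200)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes per-feature SHAP contributions for each prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Global importance: mean absolute SHAP value per feature.
    for name, imp in zip(feature_names, np.abs(shap_values).mean(axis=0)):
        print(f"{name}: {imp:.3f}")

The rows of shap_values are the per-student explanations a teacher-facing dashboard would surface, while the mean absolute values give the global feature ranking that the abstract's discussion of transparency refers to.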
