Ethical Concerns in AI-Driven Assessments in the Educational Context: A Theoretical Analysis through Deontological and Utilitarian Perspectives
Abstract
The integration of Artificial Intelligence (AI) into educational assessment represents a paradigm shift in evaluating student learning. AI-driven tools—ranging from automated grading systems to adaptive testing platforms—offer unprecedented efficiency, objectivity, and scalability. However, these technological advances also raise pressing ethical dilemmas related to fairness, accountability, transparency, and data privacy. This study addresses these challenges by synthesizing two foundational ethical frameworks: deontological ethics and utilitarian ethics. Drawing on deontological ethics, which emphasizes moral duty and responsible implementation, and utilitarian ethics, which assesses actions based on their societal benefits and harms, this analysis develops a balanced conceptual framework for the ethical deployment of AI in education. By integrating Kant’s duty-based philosophy with the consequentialist perspectives of Mill and Bentham, the study articulates conditions under which AI use in education can be considered ethically legitimate. Findings from a systematic literature review suggest that this legitimacy is contingent upon sustained human oversight, algorithmic transparency, moral accountability, and demonstrable equity in learning outcomes. The proposed framework offers practical guidance for policymakers, educators, and technology developers committed to promoting responsible AI innovation in education.