Designing and Implementing Automated Systems for the Evaluation of Linguistic Competences


Bedraoui Noura

Abstract

The integration of artificial intelligence into university-level foreign language education opens up a novel and highly consequential field of inquiry centered on the assessment of linguistic competences, a domain historically grounded in the teacher’s expert judgment and in evaluative frameworks shaped by normative and standardized practices. Within the context of ongoing techno-pedagogical transformation, the design and implementation of automated assessment systems emerge as powerful levers for reshaping evaluative practices, while simultaneously raising major theoretical, methodological, and ethical questions.


This paper aims to examine the principles, modalities, and effects of automated systems for assessing linguistic competences in higher education, drawing on advances in natural language processing and machine learning algorithms. It seeks to demonstrate that these systems, capable of analyzing large-scale corpora of written and oral productions, enable a more nuanced and continuous evaluation of linguistic competences based on morphosyntactic, lexical, discursive, and pragmatic indicators. In this respect, assessment automation offers expanded possibilities for immediate feedback, individualized learning monitoring, and the personalization of educational pathways, thereby addressing contemporary challenges related to massification and equity in higher education.
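To make the notion of indicator-based evaluation concrete, the following minimal Python sketch illustrates the kind of surface-level lexical and syntactic measures such a system might compute over a written production. It is not drawn from the study itself: the function name, the chosen indicators, and the simple regular-expression tokenization are illustrative assumptions, whereas the systems discussed in the paper would rely on full natural language processing pipelines (taggers, parsers, discourse and pragmatic models).

# Illustrative sketch only: assumption-laden example of deriving coarse
# lexical and morphosyntactic indicators from a learner's written production
# for immediate feedback. Uses only the Python standard library; real systems
# would employ full NLP pipelines rather than these surface heuristics.
import re
from collections import Counter

def linguistic_indicators(text: str) -> dict:
    """Compute a few surface-level indicators for a written production."""
    # Split into sentences on terminal punctuation (a rough heuristic).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Tokenize into lowercased word forms (letters and apostrophes only).
    tokens = re.findall(r"[A-Za-zÀ-ÿ']+", text.lower())
    types = set(tokens)
    counts = Counter(tokens)
    return {
        "n_sentences": len(sentences),
        "n_tokens": len(tokens),
        # Lexical diversity: distinct word forms over total tokens.
        "type_token_ratio": len(types) / len(tokens) if tokens else 0.0,
        # Mean sentence length as a crude proxy for syntactic complexity.
        "mean_sentence_length": len(tokens) / len(sentences) if sentences else 0.0,
        # Most frequent word forms, e.g. to flag over-reliance on a narrow vocabulary.
        "top_words": counts.most_common(5),
    }

if __name__ == "__main__":
    sample = ("The students write short essays every week. "
              "The system returns feedback immediately after submission.")
    for name, value in linguistic_indicators(sample).items():
        print(f"{name}: {value}")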


However, the algorithmic rationalization of assessment cannot be conceived as a neutral or transparent process. Adopting a critical stance, this study interrogates the epistemological presuppositions embedded within automated assessment systems by questioning the implicit conceptions of linguistic competence they encode. It examines which dimensions of language are foregrounded and measured, and which are, conversely, marginalized or rendered invisible. This analysis highlights the tensions between the promise of algorithmic objectivity and the intrinsic complexity of language practices, which are shaped by variation, creativity, and socio-cultural embeddedness.


Furthermore, the study explores the reconfiguration of the role of the language teacher, who is no longer positioned solely as an evaluator but increasingly as a designer of assessment arrangements, an interpreter of AI-generated results, and a guarantor of the pedagogical meaning of evaluation. From this perspective, automated assessment is conceptualized as part of a hybrid approach that articulates the computational power of intelligent systems with the teacher’s didactic expertise.


Ultimately, this paper seeks to contribute to an in-depth reflection on the conditions required for a reasoned and pedagogically meaningful use of automated assessment, understood not as a substitute for human evaluation but as a tool for renewing evaluative practices and redefining the aims of university-level language education.
