EXPLAINABILITY FATIGUE IN ARTIFICIAL INTELLIGENCE: A PRISMA-GUIDED CONCEPTUAL FRAMEWORK OF COGNITIVE LIMITS IN HUMAN–AI INTERACTION

Authors

  • Dahiru Hassan Usman, Department of Data Science & Artificial Intelligence, Modibbo Adama University, Yola, Nigeria
  • Godfrey Manunyi, Department of Computer Science, Modibbo Adama University, Yola, Nigeria

Keywords:

Explainable Artificial Intelligence, Explainability Fatigue, Cognitive Load Theory, Human–AI Interaction, Trust Calibration, Decision Quality.

Abstract

Explainable Artificial Intelligence (XAI) is increasingly recognized as essential for developing responsible and trustworthy AI, predicated on the assumption that greater transparency enhances user understanding, trust, and decision-making. Yet despite extensive, rigorous XAI research, the integration of AI into high-stakes domains remains constrained by concerns over interpretability and trust. Current technical solutions often neglect the cognitive frameworks humans use to interpret complex decisions, leading to a phenomenon termed “explainability fatigue,” in which the cognitive effort required to comprehend AI explanations outweighs the perceived benefits, resulting in diminished engagement and suboptimal reliance on AI systems. This research employed a PRISMA 2020-guided conceptual systematic review to synthesize theoretical and empirical work on XAI and human cognitive constraints. Following the identification, screening, eligibility, and inclusion phases, searches across major databases (IEEE Xplore, Scopus, Web of Science, ACM Digital Library, Google Scholar) yielded 32 studies from 2021 to 2026. Studies were systematically coded and grouped into four thematic domains: cognitive load in XAI, trust calibration, interpretability techniques, and human-centered design principles. Analysis revealed that explanation complexity increases extraneous cognitive load, leading to performance degradation rather than improvement. Three key outcomes emerged: trust miscalibration (both overtrust and undertrust), degraded decision quality through cognitive overload, and accountability gaps. The synthesis identified antecedent variables (explanation complexity, volume, user characteristics, and contextual constraints) that give rise to explainability fatigue. The paper proposes a framework that positions explainability fatigue as a mediating factor between explanation design and responsible AI outcomes: rather than pursuing maximal transparency, explainable AI systems should adopt adaptive, context-aware explanation strategies aligned with human cognitive capabilities, marking a shift toward “cognitively sustainable transparency” in responsible AI design.

Published

2026-05-11

How to Cite

Usman, D. H., & Manunyi, G. (2026). EXPLAINABILITY FATIGUE IN ARTIFICIAL INTELLIGENCE: A PRISMA-GUIDED CONCEPTUAL FRAMEWORK OF COGNITIVE LIMITS IN HUMAN–AI INTERACTION. LAUTECH JOURNAL OF COMPUTING AND INFORMATICS, 5(1), 59–75. Retrieved from https://laujci.lautech.edu.ng/index.php/laujci/article/view/183