Clinical Psychiatry Open Access

  • ISSN: 2471-9854
  • Journal h-index: 10
  • Journal CiteScore: 2.5
  • Journal Impact Factor: 4.5
  • Average acceptance to publication time: 5-7 days
  • Average article processing time: 30-45 days
      ◦ Fewer than 5 volumes: 30 days
      ◦ 8-9 volumes: 40 days
      ◦ 10 or more volumes: 45 days

Abstract

The Amplification and Perpetuation of AI-Derived Biases Through Automation Dependency: A Framework for Understanding the Long-Term Cognitive and Social Implications of LLM Over-Reliance

Christopher Cleverly*

This research introduces a framework to elucidate how automation bias in Large Language Models (LLMs) amplifies biases through human over-reliance, leading to critical thinking atrophy and the propagation of biases into human cognition and social systems. Automation bias, defined as the tendency to trust AI outputs excessively while ignoring contradictory evidence or personal judgment, drives a three-phase cycle: (1) initial dependency development, fueled by perceived AI efficiency; (2) critical thinking atrophy via cognitive offloading; and (3) bias internalization and propagation, where AI biases are inherited and reproduced in human decisions, even without AI support. Drawing on evidence such as the impact of AI on 40% of global jobs and cognitive offloading in education, we challenge the notion that technical fixes alone can mitigate these effects. We propose Wisdom as a Service (WaaS), a preliminary framework that integrates non-European wisdom traditions (e.g., Ubuntu, Nyāya) and decolonized AI architectures to disrupt bias amplification. This theoretical approach prioritizes epistemic pluralism, community validation, and culturally grounded reasoning as pathways to address the long-term societal consequences of AI over-reliance. Automation bias is not merely a human-AI interaction problem but a sociocognitive epidemic: without systemic intervention (e.g., WaaS, decolonized AI), AI-derived biases risk becoming permanent fixtures of human reasoning.

Received Date: 2025-10-29; Published Date: 2025-11-28