Generative artificial intelligence (AI) has advanced rapidly, fundamentally transforming the information landscape. This technological shift has not only amplified the dissemination of misinformation but has also posed significant challenges to conventional frameworks of trust and verification. This paper explores the dual impact of AI: its potential to enhance information services while simultaneously amplifying misinformation and disinformation. Seven AI-generated misinformation cases from 2022 to 2025, ranging from deepfakes and political propaganda to impersonation and amplification, were analyzed. Through thematic case analysis and interdisciplinary synthesis, the study proposes the AI-Misinformation Resilience Model (AIM-RM), a conceptual framework guiding proactive responses across verification infrastructure, digital literacy, and ethical policy engagement. Drawing on recent scholarly literature and grounded in information ethics, epistemic trust, and sociocultural literacy, the model offers a path forward for LIS professionals seeking to navigate the post-truth era.

Annual Meeting of the Association for Information Science & Technology | Nov. 14–18, 2025 | Washington, DC, USA.