  • Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the ‘right’ answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that ‘AI mentors’ could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

  • Practical wisdom is the intellectual virtue relating to the ability to fix ends and discern in a concrete circumstance how to achieve those ends. It is cultivated through engagement with experience rather than book learning. However, a whole matrix of convergent technologies, such as headsets, haptic suits, AI-driven chatbots, and extended realities, such as augmented and virtual reality (AR/VR), creates new conditions for training practical wisdom. How can moral educators facilitate practical wisdom in this extended reality (XR)? Drawing on Nussbaum’s account of phronesis, we contend the job of moral education in XR is mostly about ensuring students’ critical engagement. We suggest AI assistants can contribute to this task, so long as these technologies and the people using them manifest Socratic humility ensuring that no single interaction serves as an ‘oracle of truth’, leaving critical thinking and judgment firmly in the hands of the student.

Last update from database: 3/13/26, 4:15 PM (UTC)
