Moral decision-making in AI: A comprehensive review and recommendations

Bibliographic Details
Published in: Technological Forecasting & Social Change, Vol. 217, p. 124150
Main Author: Ram, Jiwat
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.08.2025
ISSN: 0040-1625
DOI: 10.1016/j.techfore.2025.124150

Summary: The increased reliance on artificial intelligence (AI) systems for decision-making has raised corresponding concerns about the morality of such decisions. However, knowledge on the subject remains fragmentary, and cogent understanding is lacking. This study addresses the gap by using Templier and Paré's (2015) six-step framework to perform a systematic literature review on moral decision-making by AI systems. A data sample of 494 articles was analysed to filter 280 articles for content analysis. Key findings are as follows: (1) Building moral decision-making capabilities in AI systems faces a variety of challenges relating to human decision-making, technology, ethics and values. The absence of consensus on what constitutes moral decision-making and the absence of a general theory of ethics are at the core of such challenges. (2) The literature is focused on narrative building; modelling or experiments/empirical studies are less illuminating, which causes a shortage of evidence-based knowledge. (3) Knowledge development is skewed towards a few domains, such as healthcare and transport. Academically, the study developed a four-pronged classification of challenges and a four-dimensional set of recommendations covering 18 investigation strands, to steer research that could resolve conflict between different moral principles and build a unified framework for moral decision-making in AI systems.
• Moral decision-making in AI faces a variety of human decision complexity, technological, ethics, and use/legal challenges
• Lack of consensus about 'what moral decision-making is' is one of the biggest challenges in imbuing AI with morality
• Narrative building, with relatively less modelling or experimental/empirical work, hampers evidence-based knowledge development
• Knowledge development is skewed towards a few domains (e.g., healthcare), limiting a well-rounded systematic understanding
• Extensive work is needed on resolving technological complexities and understanding human decision-making processes