Title: Slower Moral Tradeoffs by AI Enhance AI Appreciation - Adelle Yang
While speed is often considered a key advantage of Artificial Intelligence (AI), our research challenges this assumption in morally complex contexts. Across 13 pre-registered experiments (N = 8,473), we find that participants rate an AI more favorably when it takes longer to make moral-tradeoff decisions, even when the outcomes remain unchanged. This preference for a slower AI was observed in both classic moral dilemmas and resource-allocation decisions involving individuals in need. Both the “moral” and “tradeoff” elements of the focal decision were necessary for the effect, which did not emerge when the AI was slower on a more important non-moral decision. We find that the effect is driven primarily by overgeneralized moral intuitions about the AI’s decision-making procedure, rather than by beliefs about the decision outcomes. Although anthropomorphism and mind perception may also contribute, they could not fully account for the effect. Furthermore, analyses of open-ended text responses with a large language model (GPT-4o) corroborate the measured process evidence. This research offers new insights into AI resistance, especially in moral domains where AI faces the strongest resistance, and suggests new directions for policy interventions.