| Date | Time | Room | Speaker | Affiliation | Synopsis | Paper |
|------|------|------|---------|-------------|----------|-------|
| | 10:30 AM - 12:00 PM | Grainger 3180 | Juanjuan Zhang | Massachusetts Institute of Technology (MIT) | See Synopsis | In Progress |
| | 10:30 AM - 11:50 AM | Grainger 3180 | Adelle Yang | National University of Singapore | See Synopsis | In Progress |

Juanjuan Zhang

John D. C. Little Professor of Marketing at the MIT Sloan School of Management.

Designing Sustainable Recommender Systems - Lei Huang and Juanjuan Zhang

Synopsis: Recommender systems are widely deployed to serve users with content they like. However, content must be created, and insufficient demand dampens a creator’s production incentive. We argue that the canonical recommender system may not be sustainable if, by promoting the content each user likes most, it suppresses the creation incentive of less popular but still valuable content. We propose a “sustainable recommender system” solution – subsidize creators with demand according to their “sensitivity,” which measures how easily a creator can be incentivized by demand, and their “contribution,” which measures how important a creator is to users overall. Theoretically, we prove that this algorithm maximizes long-term user utility by internalizing the externality of each user’s choice on other users. Computationally, our main innovation is to estimate creator contribution using computer vision, where we train a deep-learning model to compute how the creator distribution affects system-wide user utility. Analyzing data from a large content platform, we show that our algorithm incentivizes valuable creators and sustains long-term user experience.
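The abstract does not spell out the subsidy rule’s exact functional form; as a rough illustration only, a minimal sketch of the idea might weight each creator by sensitivity times contribution and allocate extra demand proportionally. All names and the scoring form below are illustrative assumptions, not the paper’s actual algorithm.

```python
# Hypothetical sketch of the subsidy idea: weight each creator by
# "sensitivity" (how easily demand incentivizes them) times
# "contribution" (their estimated system-wide value to users).
# Illustrative assumption only, not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Creator:
    name: str
    sensitivity: float   # how easily demand incentivizes this creator
    contribution: float  # estimated importance to users overall

def subsidy_weights(creators: list[Creator]) -> dict[str, float]:
    """Allocate demand subsidies in proportion to sensitivity * contribution."""
    raw = {c.name: c.sensitivity * c.contribution for c in creators}
    total = sum(raw.values()) or 1.0
    return {name: score / total for name, score in raw.items()}

creators = [
    Creator("popular_creator", sensitivity=0.2, contribution=0.5),
    Creator("niche_creator", sensitivity=0.9, contribution=0.4),
]
print(subsidy_weights(creators))
# The niche creator receives the larger subsidy share: it is easier to
# incentivize with demand, even though it is less popular.
```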



Adelle Yang

Assistant Professor of Marketing, National University of Singapore.

Slower Moral Tradeoffs by AI Enhance AI Appreciation - Adelle Yang

Synopsis: While speed is often considered a key advantage of Artificial Intelligence (AI), our research challenges this assumption in morally complex contexts. Across 13 pre-registered experiments (N = 8,473), we find that participants rate an AI more favorably when it takes longer to make moral-tradeoff decisions, even when the outcomes remain unchanged. This preference for a slower AI was observed in both classic moral dilemmas and resource-allocation decisions involving individuals in need. Both the “moral” and “tradeoff” elements of the focal decision were necessary for the effect, which did not emerge when the AI was slower on a more important non-moral decision. We find that the effect is primarily driven by overgeneralized moral intuitions about the AI’s decision-making procedure, rather than by beliefs about the decision outcomes. Anthropomorphism and mind perception may also contribute, but they could not fully account for the effect. Furthermore, analyses of open-ended text responses with a large language model (GPT-4o) corroborate the measured process evidence. This research offers new insights into AI resistance, especially in moral domains where AI faces the strongest resistance, and suggests new directions for policy interventions.
