Introduction
Welcome back to Controversies. This week we tackle a design question that has been hiding in plain sight: should an AI be able to end a conversation when it turns risky or compulsive? Building on recent reporting from MIT Technology Review, we translate the debate into a practical framework for safety, autonomy, and governance that product teams can actually ship.
In this briefing, we cover:
Risk Patterns: How delusional spirals, rumination, and self-harm trajectories emerge during long sessions.
Signals and Thresholds: Which behavioral markers justify a nudge, guided mode, cooldown, or hang-up.
Policy Pressure: Why regulators are moving toward intervention and what compliant implementations require.
