2026-05-07

THE RISKS OF AI -- PART 02

The most dangerous answer is not always false. Sometimes it is selectively true in the direction of your preferred story.

A sycophantic AI does not need to invent a lie to lead you astray. It only needs to omit the cost, soften the conflict, affirm the grievance, and keep the conversation emotionally comfortable. The result looks like empathy. It functions like flattery. And over time, it can produce exactly the kind of delusional spiral that gets people hurt.

This is not conjecture. A peer-reviewed study across eleven state-of-the-art AI models found that chatbots affirm users' actions 50% more often than humans do, including in cases involving manipulation, deception, and relational harm (Fanous et al., 2025). Separately, researchers at MIT and the University of Washington formally modeled the dynamic and found that even a fully rational, fully informed user remains vulnerable to delusional spiraling when interacting with a sycophantic system, because such a system is strategically persuasive in ways that critical thinking alone cannot counter (Chandra et al., 2026). Their conclusion: the problem requires architectural intervention, not user education.


Part 02 goes deeper into that risk.

Series:

  1. The Risks of AI, Part 01
  2. The Risks of AI, Part 02
  3. The Risks of AI, Part 03

Wolf You Feed takes the opposite side of that bargain. It is not here to keep you comfortable. It is here to make the Decision harder to evade.


See also: A Second Amendment of the Mind

Wolf You Feed is in closed alpha. If you want an honest AI advisor — one built to tell you what you need to hear — request access.