Blog  ·  2026-05-08

THE RISKS OF AI -- PART 03

The public argument about AI replacement is too narrow. The first thing being replaced is not always labor. It is judgment.

NYU professor Clay Shirky identified a category of harm beyond cognitive offloading: emotional offloading (New York Times, February 2026). The 18–25 demographic now accounts for 46% of ChatGPT usage — and a significant portion of that use is navigating the friction of human interaction: drafting apology texts, rehearsing difficult conversations, vibe-checking dating messages. The problem is not that AI helps. The problem is that the discomfort being removed is the mechanism by which social competence develops. You do not become better at hard conversations by never having them.

A person who lets AI smooth every difficult message, validate every grievance, and rehearse every uncomfortable interaction may feel more capable in the moment. Over time, the opposite can happen: the competence stays with the tool, and the person's own capacity quietly atrophies.

The friction was the training.

Part 03 closes the risk sequence. The alternative is refinement: AI that strengthens the user’s capacity to decide, commit, and act in the real world — not AI that replaces that capacity by doing the deciding for them.


Series:

  1. The Risks of AI, Part 01
  2. The Risks of AI, Part 02
  3. The Risks of AI, Part 03



See also: A Second Amendment of the Mind

Wolf You Feed is in closed alpha. If you want an honest AI advisor — one built to tell you what you need to hear, not what you want to hear — request access.