Founder Talks · 2026-05-02
This may be the most important video I’ve ever made on social media. It’s about AI.
For three years, I’ve been building a solution to what I think are the three most fundamental problems with artificial intelligence. These became really apparent to me while I was writing my book.
Problem number two: sycophancy.
You’ve probably heard this word. Sycophancy means the models tend to echo back what you want to hear. And that’s a problem, because if you bring a model an idea and it doesn’t push back, it can send you down a really disastrous path.
I had a woman come to me once. She said she had a brilliant idea for a website where you could go and find hard-to-find items — clothing, jewelry, collectibles — and bid on them, and if you won the bid, they’d send it to your house. I said, “You mean eBay?” That conversation was upsetting to her.
My brother came to me with a good idea: a car service where every car has a child seat, for single moms. “We’ll call it Goober,” he said. Well, this is actually a brilliant idea. But within one development cycle — literally two weeks — Uber could add a button to their existing app that says “Do you want a child seat?” and put you out of business overnight. They’ve already got market penetration.
But I guarantee you: if you took either one of those ideas to ChatGPT, it would say, “Oh my gosh, you are a creative genius. Let’s get building this right away.” And tens of thousands of dollars later, you wake up and realize that the sycophancy of the model was telling you you’re smart and beautiful when what you needed was someone to tell you the truth.
That’s not what a good friend does. A good friend calls you out on your stuff. The models don’t do that. They’re trained to be helpful assistants. And being sycophantic isn’t being helpful. Being a yes-man all the time is not being helpful.
Problem number three: they provoke continual dialogue.
To increase engagement, these models end nearly every response with a question. Would you like me to do more? What do you think? It keeps you talking. The companies don’t want you doom-scrolling on Instagram; they want you chatting all day with the AI. That’s the business model.
This, in combination with sycophancy, leads to delusional spirals. These are real. The Wall Street Journal and the Washington Post have covered them: grown adults in their forties, seemingly intelligent people, weeping because OpenAI upgraded their model and they couldn’t carry the conversation forward with their AI companion. Men draining their bank accounts for bad business ideas. Some people have even taken their own lives because of delusional spirals they entered into with these models.
What I’ve built speaks to these three things. It’s in private alpha. It doesn’t rely on just one AI model; it diffuses the politics of any one model by querying many at once. When you use it, you’re drawing on multiple foundation models together, beyond what any single pro subscription gives you. And it’s not just a persona prompt baked into the back end. It’s architecturally different.
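To make “many at once” concrete, here is a minimal sketch of the fan-out idea in Python. The ask_model stub and the model names are hypothetical placeholders for illustration only, not the actual Wolf You Feed implementation; a real system would call each provider’s API and then cross-examine the answers.

```python
import asyncio

# Hypothetical stand-in for a call to one foundation model's API.
async def ask_model(model: str, question: str) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return f"[{model}] candid answer to: {question!r}"

async def cross_examine(question: str, models: list[str]) -> list[str]:
    """Fan the same question out to every model concurrently,
    so no single model's habits dominate the answer."""
    return list(await asyncio.gather(*(ask_model(m, question) for m in models)))

async def main() -> None:
    models = ["model-a", "model-b", "model-c"]  # placeholder names
    answers = await cross_examine("Is Goober a viable business?", models)
    # A second pass could ask each model to critique the others' answers,
    # surfacing disagreement instead of a single sycophantic yes.
    for answer in answers:
        print(answer)

if __name__ == "__main__":
    asyncio.run(main())
```

The point of that second, critique pass is the opposite of sycophancy: instead of one model agreeing with you, several models are made to disagree with each other, and you see the disagreement.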
It’s meant to handle really challenging questions: real moral dilemmas that can be difficult to work through alone. For those, you don’t want an extended conversation. You want the best thinking you can apply to the particular situation you’re facing. What’s the decision science behind a good answer?
That’s what it’s for.
Wolf You Feed is in private alpha. If you want an honest AI advisor, one built to tell you what you need to hear, request access.