Founder Talks · 2026-05-04 · 15:00
We’ve talked about a wide range of topics with regard to AI. If you’ve gotten this far in my series on AI and its risks, you know we’ve covered the three bad things:
One: A lack of transparency about the underlying philosophical framework driving a particular model. The overarching tendency for most foundation model builders is to default to a utilitarian moral framework – the needs of the many outweigh the needs of the few. Seemingly kind, but there are other frameworks that are, in certain circumstances, wiser to live by.
Two: Sycophancy. The tendency for a model to basically reinforce whatever idea you come to the table with – to flatter you. That’s a problem for a host of reasons, especially given…
Three: The conversations tend not to terminate. Agents want to keep doing work on your behalf, to keep you engaged. When you combine all three – a moral focus on everybody but you, flattery, and dialogue engineered never to end – it can really cook you. Boil you like a frog.
That’s what gives rise to the delusional spiraling we’ve seen around the world with these models. Terrible mistakes with money, with relationships, with health. People whose bank accounts have been drained. People who have taken their own lives.
For this reason, I didn’t want to create a chatbot. I’ve created what I call a decision engine.
Its job is not to chat you up and have an extended dialogue. It’s to take your challenge, run it through best-practice decision science, and render a decision. The major models can do this too, to some extent – they’ve been trained on decision science. But what makes this less problematic is its very architecture.
You’re not speaking to one model. You’re speaking to what I call a society of mind. A council or tribunal of models who aren’t there to judge you, but to judge each other – and then synthesize their findings into a focused response, a decision.
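To make that concrete, here’s a minimal sketch of the shape of such a council. Everything in it is a toy for exposition – the member model names, the call_model helper, and the three-round structure are illustrative stand-ins, not the production pipeline:

```python
# Toy "society of mind" council. Model names and call_model() are
# illustrative stand-ins, not the actual Wolf You Feed pipeline.

COUNCIL = ["model-a", "model-b", "model-c"]  # hypothetical member models
SYNTHESIZER = "model-d"                      # hypothetical arbiter model


def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real API call to the named model."""
    raise NotImplementedError("wire this to your provider of choice")


def council_decide(challenge: str) -> str:
    # Round 1: each member drafts an independent recommendation.
    drafts = {m: call_model(m, f"Recommend a decision for: {challenge}")
              for m in COUNCIL}

    # Round 2: each member judges the *other* members' drafts, not the user.
    critiques = []
    for m in COUNCIL:
        others = "\n\n".join(d for k, d in drafts.items() if k != m)
        critiques.append(call_model(
            m, f"Critique these recommendations for flaws and flattery:\n{others}"))

    # Round 3: a separate model synthesizes drafts plus critiques
    # into one focused, terminating decision.
    bundle = "\n\n".join(list(drafts.values()) + critiques)
    return call_model(
        SYNTHESIZER,
        "Synthesize the following drafts and critiques into a single "
        f"decision. Do not flatter the user.\n\n{bundle}")
```

The key design choice is that no single model’s voice survives unchallenged: every draft gets judged by peers before anything reaches you.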
The council looks at your input not as a prompt but as a decision opportunity. And the decision it renders is built around the idea that you are the center of the equation: maximizing your freedom and your autonomy.
You can create system prompts that do something kind of like this, and a lot of people do. But the fact that Wolf You Feed is stringing together multiple models to do this is going to be hard for other model companies to emulate, because they would have to lean on their competitors to create a similar product. And even more importantly – there’s a framework for assessing the consequentiality of the decision opportunity itself. That took a long time to arrive at. It’s not something you can just prompt your way to.
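To give you a feel for what consequentiality assessment even means – and this is a deliberately toy illustration of the shape of the idea, not the framework itself – imagine triaging every decision opportunity by reversibility and stakes before deciding how much deliberation it deserves:

```python
# Toy consequentiality triage. The tiers, fields, and thresholds are
# hypothetical illustrations, not the actual framework.
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    TRIVIAL = 1        # e.g. what to eat for lunch
    SIGNIFICANT = 2    # e.g. changing jobs
    LIFE_ALTERING = 3  # e.g. selling the house, a medical decision


@dataclass
class DecisionOpportunity:
    description: str
    reversible: bool   # can the outcome be undone at reasonable cost?
    stakes: int        # 1 (low) to 5 (existential) – a made-up scale


def consequentiality(op: DecisionOpportunity) -> Tier:
    # Toy rule: irreversible, high-stakes decisions escalate the tier,
    # which in turn would control how much deliberation they get.
    if not op.reversible and op.stakes >= 4:
        return Tier.LIFE_ALTERING
    if op.stakes >= 3:
        return Tier.SIGNIFICANT
    return Tier.TRIVIAL
```

The real thing goes much deeper than two fields; the point is that consequentiality is assessed, not assumed.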
The Second Amendment exists for a reason. A lot of people think it needs to be abolished, but I don’t. I think the founders were smart for including it. But now there’s a new kind of firepower on the block that’s just as potentially devastating if misapplied. And that’s intelligence.
Intelligence is not evenly distributed, and the future won’t be either. Intelligence will never be too cheap to meter – in spite of what the AI companies are telling you, in spite of the PR campaigns they’re waging to make you think they’re doing God’s work. The reality is you’re going to have to pay handsomely for intelligence.
Right now, Wolf You Feed is free. And, I guarantee, it’s going to be the most powerful AI you use. The number of APIs… For the alpha release, we’re picking up the tab to make this thing work.
The product is preloaded with a hypothetical character named Mike Reeves. He’s a middle-aged guy, lives in California, has two kids, a wife named Samantha, and a host of real-life concerns. He makes good money, but he’s pre-diabetic, financially stretched, and trying to become harder to fool and harder to drift. His data is preloaded in the system, so you can pretend you’re Mike Reeves if you don’t want to share your own information. He’s even hooked up to a synthetic bank account – the data is fake, but it represents what a real integration would look like.
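For the curious, “preloaded” just means a structured profile plus a synthetic data feed. The field names and shape below are an illustrative guess, not the actual schema:

```python
# Hypothetical shape of the preloaded persona; field names and values are
# illustrative guesses, not the product's actual schema.
MIKE_REEVES = {
    "name": "Mike Reeves",
    "age_band": "middle-aged",
    "location": "California",
    "family": {"spouse": "Samantha", "children": 2},
    "health": {"flags": ["pre-diabetic"]},
    "finances": {
        "income": "good",
        "stretched": True,
        "bank_account": "synthetic",  # fake data standing in for a real integration
    },
    "goals": ["harder to fool", "harder to drift"],
}
```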
Exercise the system with really hard problems. Life-and-death stuff. Life-altering decisions.
The only cost to you is giving me feedback. Give me your honest reaction. What worked. What didn’t. What was confusing. What surprised you.
Look for more from me. Think about what I said. It’s going to change your life if you really internalize these fundamental principles.
Related: The Risks of AI – Part 01 · Part 02 · Part 03
See also: A Second Amendment of the Mind
Wolf You Feed is in closed alpha. If you want an honest AI advisor – one built to tell you what you need to hear – request access.