Wolf You Feed

Founder Talks  ·  2026-05-03  ·  14:58

The Risks of AI — Part 03

I’m writing from down near the Guadalupe River in Texas. This is a fickle environment, much more fickle than California, and that quality speaks to the spirit of the people here, for which I have great regard.

I’ve sent out the private alpha for the product I’ve been building for the last three years.

Let me tell you a little more about it. I’ve talked about the three big problems with AI: the moral road maps baked in, which may not put you at the center; the sycophancy, the tendency to just glad-hand you instead of giving it to you straight; and the third and most problematic, the way it tries to rope you into an ongoing conversation when you don’t need one. Taken together, these can be disastrous.

So what did I do to mitigate them?

Without getting too far into the technical weeds: I’m not just using one multimodal model. I’m using multiple models. These models are connected to one another in a directed acyclic graph, chained together so that they take your input and prepare it to be addressed as smartly as possible. Not according to a particular ethical framework, but a logical one: decision science.
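To make the architecture concrete, here is a minimal sketch of what "multiple models chained in a directed acyclic graph" can look like in code. The stage names and stub functions below are illustrative stand-ins I've invented, not the product's actual models; the point is only the DAG-ordered hand-off of one stage's output to the next.

```python
# Hypothetical sketch: model stages arranged as a DAG. Each stage is a
# function standing in for a model call; edges say which earlier outputs
# a stage consumes. Stage names are invented for illustration.
from graphlib import TopologicalSorter

def clarify_question(results):
    # Stand-in for a model that reformulates the user's raw question.
    return f"clarified({results['user_input']})"

def frame_decision(results):
    # Stand-in for a model that applies a decision-science framing.
    return f"framed({results['clarify']})"

def draft_answer(results):
    # Stand-in for a model that produces the final response.
    return f"answer({results['frame']})"

# Stage name -> (dependencies, stage function)
STAGES = {
    "clarify": ((), clarify_question),
    "frame": (("clarify",), frame_decision),
    "answer": (("frame",), draft_answer),
}

def run_pipeline(user_input: str) -> dict:
    """Run every stage in a valid topological order of the DAG."""
    results = {"user_input": user_input}
    graph = {name: set(deps) for name, (deps, _) in STAGES.items()}
    for name in TopologicalSorter(graph).static_order():
        _, fn = STAGES[name]
        results[name] = fn(results)
    return results

print(run_pipeline("Should I take the new job?")["answer"])
# -> answer(framed(clarified(Should I take the new job?)))
```

Because the graph is acyclic, `TopologicalSorter` guarantees each stage runs only after everything it depends on, which is what lets independent branches be added later without rewiring the whole chain.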

If you don’t know what decision science is: there’s a whole body of research on how to make good decisions. What I find most people struggle with is formulating good questions. We all face questions all the time. AI was helpful to me when I needed to think through things that took real effort to unpack: life changes, big negotiations, decisions about health, wealth, and relationships.

Most humans are preoccupied with these three domains. Their own health, or the health of something they care about. Wealth — money, energy, abundance, scarcity. And relationships — the people they love and care for. Those three things account for roughly 80% of your thoughts today. And those thoughts were probably recycled from yesterday.

There are decision heuristics you can apply when working through problems in those domains, but we don’t always keep them at the forefront of our minds. Most of us are constantly reacting emotionally. We’ve got decision fatigue. We can’t bring our best thinking to every decision. You need a way to make decisions rapidly, not just drift with the current of life.
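One classic decision heuristic from decision analysis, offered here as an example of the kind of framework the text alludes to rather than a description of the product's internals, is weighted scoring: rate each option against your criteria, weight the criteria by importance, and compare totals. The criteria below reuse the health, wealth, and relationships framing; the numbers are made up.

```python
# Weighted-scoring heuristic from classical decision analysis.
# Weights and scores below are hypothetical, for illustration only.

def weighted_score(option: dict, weights: dict) -> float:
    """Sum each criterion's score times that criterion's weight."""
    return sum(weights[c] * option.get(c, 0.0) for c in weights)

# How much each domain matters to this (fictional) decision-maker.
weights = {"health": 0.3, "wealth": 0.4, "relationships": 0.3}

# Two options for a hypothetical job-change decision, scored 0-10.
options = {
    "stay": {"health": 7, "wealth": 6, "relationships": 8},
    "new_job": {"health": 5, "wealth": 9, "relationships": 7},
}

scores = {name: weighted_score(opt, weights) for name, opt in options.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
# -> {'stay': 6.9, 'new_job': 7.2} -> new_job
```

The value of a heuristic like this isn't the arithmetic; it's that writing down weights forces you to say, before emotion takes over, what actually matters and by how much.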

AI, as it’s usually deployed, is not a tool; it’s a tool user. This product is a tool, and I’m the tool user. You have to remember that you’ve been blessed with the tool of a human body. And if you don’t get a chance to use it, to touch grass, to feel nature, it’s easy to get disconnected from reality.

Decision science. Being able to think smarter, faster, more deeply, in a more principled way. We’re constantly presented with a world that’s being torn apart and asking us to take sides. A polycrisis with no clear path forward. One of the most important principles for navigating that — from my standpoint — is integrity. If you’re going to say you’re going to do something, you do it.

That’s what this AI solution does. It keeps you accountable to yourself. It doesn’t focus on religion. If there’s any kind of ethical road map it subscribes to, it might be virtue ethics. Not focusing on knowing what’s right in some abstract sense — but focusing on what makes you a good human. What constitutes right action in an almost Aristotelian sense. Asking yourself: what would a reasonable human being do?

The product has gone through many branding iterations. I called it Boss Brain. Then I realized it was about more than business. Then I called it Dadbot. A dad gives it to you straight. A dad is not sycophantic. A dad doesn’t have unnecessary conversations with you. He just tells you what he thinks. You can roll with it or tell him off, but you go to your dad for “what would Dad do” counsel.

Now I’m calling it Wolf You Feed — based on a story Billy Graham attributed to Cherokee tradition (though that attribution is itself contested). Inside every person, two wolves are fighting. The question is which one wins. The wolf you feed.

If you’re interested in being an alpha tester: let me know.


Related: The Risks of AI — Part 01 · Part 02 · Part 04

See also: A Second Amendment of the Mind

Wolf You Feed is in closed alpha. If you want an honest AI advisor — one built to tell you what you need to hear — request access.