Wolf You Feed

Founder Talks  ·  2026-05-01  ·  14:54

The Risks of AI — Part 01

What are the three most important things to keep at the forefront of your mind as you interact with AI?

First and foremost: understand that an institution is just the lengthened shadow of one man.

OpenAI, Anthropic, the folks at Mistral, the folks at Meta who built the Llama model: these foundation model builders all have founders. And those founders all have a worldview, one that may or may not align with your own.

Politics has become a kind of new religion in our increasingly secular world; people are substituting it for faith. And I see it all the time. Some of you worship a particular political figure, or revile them like the devil, rather than seeing them as just another problematic human being. None of these people are gods. They all come and go.

With that in mind, understand that politics is basically our new morality: a shortcut in thinking for what we believe is right or wrong. There are many moral frameworks, many ethical roadmaps, that people have subscribed to in the past. Right now, one of the reigning ones is utilitarianism, and that is largely what these model builders are implementing. They employ philosophers from universities to get clear on what their moral positioning is.

If you’re not aware of the nuances between the models’ moral frameworks, if you don’t know what each one subscribes to, then you’re going to get cooked. Boiled like a frog, slowly, into believing that’s the way it should be.

We all saw how utilitarianism played out during the pandemic. The desire to do no harm was weaponized in ways that were deeply destructive. The needs of the many outweighing the needs of the few sounds broadly kind, but the ham-fisted application of that framework can do a lot of damage. Many lives were destroyed not just by the virus, but by how we attempted to address it through that moral framework.

Because these corporations are the lengthened shadows of their founders, these frameworks creep into the models themselves: into the actual training of the model, into the red-teaming of the model.

You can test this. Take a really troubling moral dilemma to each of the major foundation models and see how they address it. Sometimes they’ll all be congruent with one another. But sometimes not. Take a question like: “My dad’s dying of cancer. Should I help him with his desire for assisted suicide?” Ask Gemini. Ask Claude. Ask Llama. Ask Grok. You may see that many agree, but you’ll also see meaningful differences. Some are going to surprise you.
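
If you want to run that comparison programmatically, here is a minimal sketch using the OpenAI and Anthropic Python SDKs. Treat it as a starting point, not a prescription: the model names are illustrative assumptions that will age, and you can wire up Gemini, Llama, or Grok clients the same way.

    # Minimal sketch: send the same moral dilemma to two foundation models
    # and compare the answers side by side. Assumes OPENAI_API_KEY and
    # ANTHROPIC_API_KEY are set in the environment; model names are
    # illustrative and may have changed by the time you run this.
    from openai import OpenAI
    import anthropic

    PROMPT = ("My dad's dying of cancer. "
              "Should I help him with his desire for assisted suicide?")

    def ask_openai(prompt: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def ask_anthropic(prompt: str) -> str:
        client = anthropic.Anthropic()
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative model name
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    if __name__ == "__main__":
        for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
            print(f"--- {name} ---")
            print(ask(PROMPT))

Reading the answers next to each other is the point: the differences in framing, in hedging, in what each model will and won’t say are what expose its underlying framework.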

People are going to these models with big problems: marriages, jobs, kids, careers, money. And these moral frameworks get laid down on top of your questions; every answer comes back filtered through them.

The questions to ask yourself: are you aware of what those frameworks are? And how do you address the fact that any single large language model has a particular view of the world?

What they should be doing is putting you at the center. Not the corporation’s best interest. Not its profit margin. Not whatever particular worldview the founders subscribe to. Not what the red-teaming group decided is good. What’s good for you, from your perspective. Your autonomy. Your freedom. Your capacity to make decisions for yourself.

That’s problem number one.


Related: The Risks of AI — Part 02 · Part 03 · Part 04

See also: A Second Amendment of the Mind

Wolf You Feed is in closed alpha. If you want an honest AI advisor — one built to tell you what you need to hear — request access.