Field Notes

The Problem With OKRs

2026-04-09 · 6 min read · Systems · Work · AI

OKRs are useful for measuring outcomes, but they often mistake declaring intent for building a system that can actually support the behavior they ask for.

I have been thinking about OKRs lately.

Not whether they work. They clearly can.

What interests me more is what they assume about how people and systems behave.

OKRs are built on a simple premise: if you define clear objectives and measurable key results, people will move toward them.

This is true.

It is also incomplete.

Over the last year, I have been focused on personal change. I lost a significant amount of weight. I have been practicing mindfulness. I have been trying to move toward something I would call radical wholeness.

At no point did I set an OKR like this:

  • Objective: Become healthier.
  • Key Result: Lose 30 pounds.

Not because it would not have worked.

Because it would have missed the system entirely.

The outcome was real, but it was downstream of dozens of small environmental changes, none of which the OKR would have named.

The actual system looked more like this:

  • Walking more without tracking it obsessively.
  • Eating simpler food.
  • Paying attention to stress.
  • Sleeping better.
  • Noticing when I was acting out of habit instead of intention.

None of these map neatly to a dashboard.

Together, they changed everything.

This is where OKRs start to struggle. They are very good at measuring outcomes. They are much less effective at shaping environments.

Recently I asked an AI model how to design better systems for behavior change. Its answer was simple: make the desired behavior the path of least resistance.

That stayed with me because it names the difference clearly.

OKRs do not reduce resistance.

They declare intent.

In board games, this difference is obvious. A well-designed game does not tell you to win the game. It creates incentives, constraints, and feedback loops that make certain strategies emerge naturally.

Catan does not need an OKR.

The system teaches you how to play.

Most organizations do the opposite. We define objectives, measure results, and then ask people to navigate systems that were never designed to support those outcomes in the first place.

This creates a quiet tension: we optimize for what we can measure inside systems we do not fully understand.

AI is starting to amplify this. We can now generate OKRs faster than ever. Cleaner phrasing. Better structure. More alignment. More confidence in the shape of the sentence.

But the underlying question remains the same.

Are we measuring the right thing, or just measuring it more efficiently?

Lately I have been more interested in a different question:

What system would make the right behavior inevitable?

That is a harder question. It is less portable. It does not fit as neatly into a quarterly planning document.

It also feels closer to reality.

If everything in an organization depends on people constantly pushing uphill toward a metric, the problem may not be motivation.

It may be the system.

OKRs are useful.

But they are not the system.

And sometimes treating them like one is where things start to break.