Discovery of ignorance and the exploration loop

Rereading Sapiens over Christmas, I was struck by the idea that the engine driving most of the world's progress over the past 500 years is a compounding loop among science, capitalism, and empire. Surprisingly, this loop was kicked off by a mindset shift: the discovery of ignorance.

The Scientific Revolution was not a revolution of knowledge but the discovery that humans are ignorant. Europeans began to openly admit collective ignorance about important questions, whereas in pre-modern times many assumed that God or tradition already knew everything that mattered.

Maps of ignorance

We can see this mindset shift by comparing two maps. The Fra Mauro world map of 1459 has mythical creatures filling its margins. Then you get the Salviati Planisphere around 1525, and something changes: parts of the world are simply left blank. Not decorated or explained away, but blank on purpose. That blank space is a new kind of public honesty, and an invitation to go find out.

Fra Mauro (1459)

Salviati Planisphere (1525)

This small detail signals a psychological and ideological breakthrough for scientists and conquerors alike: they admit they are ignorant of large parts of the world, so they have to go out and discover, which expands both knowledge and territory.

Compounding loop: Science ↔ Capital ↔ Empire

This is where the compounding loop forms: Science turns blank space into knowledge. Capitalism / capital funds exploration before there’s proof it will work. Empire turns discovery into durable advantage (routes, legitimacy, treaties, control). And then it compounds: advantage brings more capital; more capital funds more exploration.

Ignorance admitted → capital funds voyages → science updates the map → empire expands reach/resources → capital grows → more voyages

In 2025, the role of 'Empire' is often played by corporations that can fund long cycles of exploration, translate discovery into products people actually use, and defend and scale that advantage through energy contracts, proprietary data sets, and default distribution.

What would this look like in the age of AI?

The AI map is also mostly blank. We don't fully know what models will reliably do in the wild. We don't know what people will trust. We don't know what becomes habit vs. novelty. So my hypothesis is: enduring advantage in AI will come from teams that can own the exploration loop, rather than from teams that land a single breakthrough.

Capital and infrastructure fund and enable exploration. Exploration produces knowledge. Knowledge creates power and advantage. Advantage attracts more capital. Model talent matters, but the dominant advantage comes from owning the loop (compute, data, distribution, real-world feedback).

In the 1500s–1800s, “exploration” meant ships, navigators, maps, ports, financiers, and state backing. In AI, exploration means running huge numbers of experiments (training + inference), but the constraints are different: compute, energy, deployment surfaces, and feedback loops.

For example, energy access is one physical gate that decides whether capital becomes real experimentation or stays theoretical. Whoever secures it early can run more experiments, iterate faster, deploy more capacity, and get more real-world feedback. That can translate into higher quality, broader distribution, more revenue, stronger habits, and then more capital to secure more infrastructure.
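
To make the compounding claim concrete, here is a toy sketch in Python. Every name and number in it is invented for illustration (it is not a model of any real company): two players reinvest revenue from accumulated knowledge back into compute-funded experiments, and the one whose experiments are cheaper, say through better energy access, pulls ahead over time.

```python
# Toy sketch of the exploration loop: capital -> experiments -> knowledge -> revenue -> capital.
# All parameters are invented for illustration; this is not a model of any real company.

def run_exploration_loop(capital, cost_per_experiment, years=15,
                         knowledge_per_experiment=0.01,
                         revenue_per_knowledge=2.0,
                         reinvest_rate=0.8):
    """Each year: capital buys experiments, experiments add knowledge,
    cumulative knowledge earns revenue, and part of that revenue is
    reinvested on top of existing capital."""
    knowledge = 0.0
    for _ in range(years):
        experiments = capital / cost_per_experiment          # how much exploration capital buys
        knowledge += experiments * knowledge_per_experiment  # exploration produces knowledge
        revenue = knowledge * revenue_per_knowledge          # knowledge translates into revenue
        capital += revenue * reinvest_rate                   # advantage attracts more capital
    return capital, knowledge

# Two players start with the same capital; player B pays 25% more per experiment
# (e.g. worse energy access), so each funding cycle buys less exploration.
cap_a, know_a = run_exploration_loop(capital=100.0, cost_per_experiment=1.00)
cap_b, know_b = run_exploration_loop(capital=100.0, cost_per_experiment=1.25)
print(f"Player A after 15 years: capital={cap_a:,.0f}, knowledge={know_a:.1f}")
print(f"Player B after 15 years: capital={cap_b:,.0f}, knowledge={know_b:.1f}")
```

The specific numbers mean nothing; the point is that a constant-looking cost gap per experiment compounds into a widening gap in both capital and accumulated knowledge.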

Examples: industrial-scale loop vs. tight loop

OpenAI × Microsoft is the industrial-scale version: capital, compute, distribution, and governance intentionally linked. Microsoft has explicitly described a “multiyear, multibillion dollar” investment partnership, and the relationship is designed around turning frontier exploration into real-world deployment at scale.

Midjourney is the tight-loop version: a small but mighty team exploring a narrower knowledge gap (taste: what people actually want in images). They built a capital loop through subscriptions (steady funding to buy compute and keep iterating). Importantly, they built distribution and feedback through a community workflow (Discord), and as they moved into more compute-intensive territory (video), they explicitly priced it well above image generation.


If you’re building / investing, here’s what I’d watch

For builders

  • Audit for ignorance. What’s your blank space?

  • Pick a loop you can sustain. Don’t build a loop that dies before it learns.

  • Choose “good revenue.” Money that also teaches you.

For investors

  • Where’s their science? What are they actually learning?

  • Where’s their capital? Who funds exploration when it gets expensive?

  • Where’s their empire, i.e., their corporate advantage? What channel, contract, or platform position makes them hard to displace?


What would prove this hypothesis wrong?

  • If small teams repeatedly win frontier capability without privileged access to compute/energy/distribution, meaning the “capital + empire” advantage stops mattering.

  • If distribution empires can win with mediocre AI (defaults/bundling) without needing real learning, meaning “science” becomes optional.

  • If exploration gets radically cheaper (efficiency leaps) so the loop no longer needs heavy capital, and advantage shifts mainly to taste/community.

There are real stress tests to this idea. Efficiency jumps (DeepSeek is the loudest recent example) suggest frontier-ish capability might become less capital-gated. Reuters reported DeepSeek said training its R1 model cost about $294,000 (with caveats around what’s included), which is the kind of number that makes people rethink the “only giants can play” narrative.

And big platforms are also clearly pushing default distribution (Apple Intelligence is now default-on; Microsoft is auto-installing Copilot), which could make “control” matter more than “learning” in some contexts. The open question is whether these are exceptions, or the early shape of what comes next.