The Lindy Effect for AI

Why the best AI systems get better with age. Applying Nassim Taleb's Lindy effect to intelligence infrastructure.

In the 1960s, comedians at Lindy's deli in New York observed a peculiar pattern: the longer a Broadway show had been running, the longer it was expected to continue running. This wasn't optimism. It was statistics. A show that survived its first week had proven something about its appeal. A show that survived its first year had proven much more.

Nassim Nicholas Taleb formalized this observation as the Lindy effect: for non-perishable things — ideas, technologies, books, cultural practices — future life expectancy is proportional to current age. A book in print for 50 years will likely be in print for another 50. A technology that has been useful for a century will likely be useful for another century.
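
One way to make the proportionality concrete: the Lindy effect corresponds to a power-law (Pareto) survival curve, under which the expected remaining life of anything that has survived to age t is itself proportional to t. A minimal simulation sketch, with the shape parameter alpha = 3 chosen arbitrarily for illustration:

import numpy as np

# Pareto-distributed lifetimes: for shape alpha > 1, the expected
# remaining life given survival to age t is t / (alpha - 1).
alpha = 3.0
rng = np.random.default_rng(0)
lifetimes = rng.pareto(alpha, size=1_000_000) + 1.0  # classical Pareto, minimum life 1

for age in (2, 4, 8):
    remaining = lifetimes[lifetimes > age] - age
    print(f"age {age}: mean remaining life ~ {remaining.mean():.1f}")
# Doubling the age roughly doubles the expected remaining life:
# survival itself is the evidence of longevity.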

The Lindy effect is perhaps the most powerful concept missing from artificial intelligence. And it might be the key to building AI whose value compounds instead of decaying.

The Anti-Lindy Problem

Current AI systems are anti-Lindy. They depreciate over time rather than appreciate.

A language model trained in 2024 is already less relevant in 2026. Its training data is stale. The world has moved on. New events have occurred. New knowledge has been produced. Relationships have changed. The model sits frozen — a snapshot of a world that no longer exists.

This is the fundamental contradiction of modern AI: we build systems that are powerful at the moment of deployment and weaker every day after. The longer they run, the less useful they become. This is the opposite of the Lindy effect. It is the opposite of compounding. It is, by Taleb's framework, fragile.

As Taleb writes in Antifragile: "If a book has been in print for forty years, I can expect it to be in print for another forty years. But, and that is the main difference, if it survives another decade, then it will be expected to be in print another fifty years."

Now imagine an AI system with this property. A system that has been running for a year is expected to be valuable for another year. If it survives two more years, three in total, it's expected to be valuable for another three. The longer it runs, the more valuable its future becomes. Not because of some external improvement, but because of what it has accumulated.

This is what Unfragile builds.

What Makes Something Lindy-Compatible?

Not everything follows the Lindy effect. Perishable things — biological organisms, mechanical parts, anything with built-in decay — follow the opposite pattern. The older they get, the closer they are to failure.

For something to be Lindy-compatible, it must be non-perishable and it must accumulate value through survival. A book survives because it continues to provide value to readers. A technology survives because it continues to solve real problems. Each year of survival is evidence of enduring value.

Current AI is perishable by design. It is trained once, deployed as a frozen artifact, and slowly decays in relevance. To make AI Lindy-compatible, we need to change its fundamental architecture. It must:

  • Accumulate experience — Every interaction, every success, every failure deposits value into the system.
  • Refine itself continuously — Not through periodic retraining, but through ongoing adaptation to real-world usage.
  • Build structural memory — Not flat retrieval, but layered, organized memory that compounds over time.
  • Metabolize disorder — Edge cases, failures, and unexpected inputs make the system stronger, not weaker.

An AI system with these properties doesn't just survive over time. It gets better. Its future becomes more valuable with each passing day. It is Lindy-compatible.
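
As a sketch only, here is what those four properties might look like as an interface. The class and method names are hypothetical, not an existing API:

from dataclasses import dataclass, field

@dataclass
class LindyCompatibleSystem:
    experiences: list = field(default_factory=list)  # accumulated interactions
    memory: dict = field(default_factory=dict)       # structural, layered memory
    failures: list = field(default_factory=list)     # disorder to metabolize

    def interact(self, event: str, outcome: str) -> None:
        # Accumulate experience: every interaction deposits value.
        self.experiences.append((event, outcome))
        if outcome == "failure":
            # Metabolize disorder: failures become learning material.
            self.failures.append(event)
        self.refine(event, outcome)

    def refine(self, event: str, outcome: str) -> None:
        # Refine continuously: fold each experience into memory as it
        # arrives, instead of waiting for a periodic retraining cycle.
        self.memory.setdefault(event, []).append(outcome)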

Compound Interest for Intelligence

Albert Einstein reportedly called compound interest the eighth wonder of the world. Whether or not he said it, the math is real: small, consistent deposits that compound over time produce extraordinary results.

The same principle applies to intelligence — but only if the system is designed for it.

Consider two AI systems. System A processes 1,000 interactions and forgets all of them. System B processes 1,000 interactions and deposits value from each one into structured, persistent memory. After a year, System A is exactly where it started. System B has accumulated a rich substrate of knowledge, patterns, workflows, and commitments.

Now consider the second year. System A processes another 1,000 interactions with the same blank-slate approach. System B processes another 1,000 interactions, but each one is enriched by the accumulated context from the first year. The quality of every interaction in year two is higher because of what was learned in year one.

This is compounding. By year three, System B isn't just three times better than when it started; it's exponentially better, because the accumulated value of years one and two makes every interaction in year three more valuable. The deposits compound on the deposits.

System A will always be where it started. System B will always be better than it was yesterday. Given enough time, the gap grows without bound.
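
A toy model makes the gap concrete. Assume, purely for illustration, that each deposit makes every later interaction worth 0.1% more:

# Illustrative toy model, not a measurement.
INTERACTIONS_PER_YEAR = 1_000
ENRICHMENT = 0.001  # assumed value added to all later interactions per deposit

def total_value(years: int) -> tuple[float, float]:
    a_total, b_total, b_quality = 0.0, 0.0, 1.0
    for _ in range(years * INTERACTIONS_PER_YEAR):
        a_total += 1.0                  # System A: blank slate every time
        b_total += b_quality            # System B: enriched by every prior deposit
        b_quality *= 1.0 + ENRICHMENT   # the new deposit compounds
    return a_total, b_total

for years in (1, 2, 3):
    a, b = total_value(years)
    print(f"year {years}: A = {a:,.0f}, B = {b:,.0f}  ({b / a:.1f}x)")
# The ratio grows every year: roughly 1.7x, then 3.2x, then 6.4x.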

The Three Engines of Compounding

For AI to follow the Lindy effect, it needs three compounding engines running simultaneously:

1. Memory Compounding

Knowledge accumulates in layers. Working memory assembles the right context for each moment. Episodic memory builds narrative understanding of relationships and events. Semantic memory constructs an increasingly dense knowledge graph. Procedural memory refines workflows through repetition. Prospective memory tracks commitments and drives follow-through.

Each layer compounds independently, and they reinforce each other. The result is not just more data — it's deeper understanding. And deeper understanding makes every future interaction more valuable.
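
One way to picture the layering is as a data structure. This is a hypothetical sketch; the layer names follow the paragraph above, and the field types are only illustrative:

from dataclasses import dataclass, field

@dataclass
class LayeredMemory:
    working: list[str] = field(default_factory=list)      # context assembled for the current moment
    episodic: list[dict] = field(default_factory=list)    # narrative record of events and relationships
    semantic: dict[str, set[str]] = field(default_factory=dict)      # knowledge graph: entity -> neighbors
    procedural: dict[str, list[str]] = field(default_factory=dict)   # workflows refined by repetition
    prospective: list[tuple[str, str]] = field(default_factory=list) # (commitment, due date)

    def deposit_fact(self, subject: str, related: str) -> None:
        # Semantic memory compounds: each fact densifies the graph in
        # both directions, so every later fact has more to connect to.
        self.semantic.setdefault(subject, set()).add(related)
        self.semantic.setdefault(related, set()).add(subject)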

2. Feedback Compounding

Every interaction generates signal. What worked? What didn't? What was the user actually looking for? What edge case was encountered? This signal, when captured and structured, becomes the raw material for improvement.

Most AI systems discard this signal entirely. Lindy-compatible AI captures it, structures it, and feeds it back into the system's adaptive loops. Over time, these feedback loops produce extraordinary precision — not through periodic retraining, but through continuous, organic refinement.
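
A minimal sketch of what capturing that signal could look like. The names are hypothetical, and a real system would persist the ledger rather than hold it in memory:

from collections import defaultdict

class FeedbackLoop:
    # Hypothetical ledger: structure per-interaction signal instead of discarding it.
    def __init__(self) -> None:
        self.signals = defaultdict(list)

    def record(self, intent: str, worked: bool, note: str = "") -> None:
        # What was the user looking for, and did it work?
        self.signals[intent].append((worked, note))

    def success_rate(self, intent: str) -> float:
        outcomes = self.signals[intent]
        return sum(worked for worked, _ in outcomes) / len(outcomes)

loop = FeedbackLoop()
loop.record("summarize report", True)
loop.record("summarize report", False, note="missed the appendix")
print(loop.success_rate("summarize report"))  # 0.5: raw material for refinement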

3. Cross-System Compounding

The most powerful form of compounding happens when multiple systems share a common intelligence substrate. What one system learns feeds all others. A breakthrough in one domain enriches understanding across every domain.

This is the network effect applied to intelligence. Each new system added to the network doesn't just benefit from the shared substrate — it contributes to it. The value of the network grows faster than the number of participants. This is the Lindy effect at the infrastructure level.
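
The arithmetic behind that claim, under a deliberately simple assumption: if each of n systems contributes its lessons to a shared substrate, every system draws on n times as many lessons, so total value scales roughly with the square of n rather than with n:

# Illustrative assumption: each system learns the same number of
# lessons per period, and every shared lesson is equally useful.
def isolated_value(n: int, lessons: int) -> int:
    return n * lessons        # each system benefits only from its own lessons

def networked_value(n: int, lessons: int) -> int:
    return n * (n * lessons)  # each system draws on the whole substrate

for n in (2, 4, 8):
    print(n, isolated_value(n, 10), networked_value(n, 10))
# 2: 20 vs 40; 4: 40 vs 160; 8: 80 vs 640.
# Doubling the participants quadruples the networked value.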

Time as an Ally

The most profound implication of the Lindy effect for AI is the relationship with time. In the current paradigm, time is the enemy. Models become stale. Training data becomes outdated. Systems require expensive retraining cycles to remain relevant.

In a Lindy-compatible paradigm, time is the ally. Every day that passes adds value. Every week of operation deepens understanding. Every month of accumulated experience makes the system more capable, more precise, more valuable.

This changes the fundamental economics of AI. Current AI is a depreciating asset — like a car that loses value the moment you drive it off the lot. Lindy-compatible AI is an appreciating asset — like a vineyard that produces better wine with each passing year.

Technology is at its best when it is invisible. The Lindy effect suggests the technologies that disappear into the background — that become so useful they're taken for granted — are the ones that endure. AI should aspire to this.

Building for the Long Term

The Lindy effect has practical implications for how we design AI systems.

Persistence over performance. Raw capability at deployment matters less than the rate of compounding over time. A system that starts at 80% capability but improves 1% per week will surpass a system that starts at 99% and remains static. Given enough time, the compounding system becomes incomparably better.
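
The crossover for those numbers is easy to compute, treating capability as a simple multiplicative scale (an illustrative simplification):

import math

# 0.80 * 1.01**w >= 0.99  =>  w >= log(0.99 / 0.80) / log(1.01)
weeks = math.log(0.99 / 0.80) / math.log(1.01)
print(math.ceil(weeks))  # 22: the compounding system passes the static one in ~22 weeks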

Structure over volume. More data doesn't create the Lindy effect. Structured data does. A five-layer memory architecture with 10,000 well-organized entries compounds faster than a flat vector store with a million entries. Architecture matters more than scale.

Disorder over control. Systems designed to avoid disorder are fragile. Systems designed to metabolize disorder are antifragile. The Lindy effect rewards antifragility — systems that encounter and absorb more disorder compound faster, because each disturbance is a learning opportunity.

Patience over speed. Compounding is slow at first and exponential later. The early stages of a Lindy-compatible AI look unimpressive compared to a static system deployed with maximum capability. But given time — months, years — the compounding system becomes qualitatively different. Building for the Lindy effect requires patience and conviction that the math will work.

The Long Now

We named our company Unfragile because we believe the opposite of fragile is the most important thing you can build. The Lindy effect is the mathematical expression of that belief.

We build AI systems that are designed from the ground up to follow the Lindy effect. Systems where time is an ally, not an enemy. Where every interaction deposits value into a persistent, compounding substrate. Where the future is always more valuable than the past.

The best AI system is not the one that's most powerful today. It's the one that's most powerful tomorrow. And the day after that. And the day after that. Forever.

That's the Lindy effect for AI. That's what we're building.

Unfragile builds AI infrastructure that follows the Lindy effect — systems that get stronger the longer they run. Join our list to follow the research.
