Can Time Be Computed? Part II
Why causality might be the real computational primitive
When halting becomes optional
I remember the afternoon clearly. It was like every other debugging afternoon, except this one felt heavier.
I was chasing a race condition in a distributed system: one of those bugs that appears only when enough users click enough buttons fast enough, vanishes if you trace it, and refuses to reproduce in any deterministic way.
A value would appear in my logs before it was supposed to exist. A future bleeding into the present.
And for a moment, I realized (not dramatically, just quietly, like noticing a stain on your favorite shirt) that the problem wasn’t logic at all. It was time.
Every failure I’d seen in computation (deadlocks, Heisenbugs, eventual-consistency anomalies) was a failure of ordering, not of truth.
And that thought carried me somewhere I had never expected: into the idea that maybe time itself is a kind of computation.
Or maybe computation is a shadow cast by time. Or maybe neither is fundamental, and what we think of as execution is just a trick our brains play on us.
Waiting for the Future
The Halting Problem, which we’ve already covered, is usually presented like a brick wall:
“No algorithm can decide, for every possible program and input, whether it halts.”
It feels eternal. Immutable. Absolute.
But take a closer look. Its “magic” relies on something subtle: time. A program halts if, eventually, a halting state s_h is reached. If it doesn’t, you can never know in finite time.
In more formal terms, let a program be a sequence of states s_0 → s_1 → s_2 → …

Halting is essentially the predicate:

∃ n : s_n = s_h

Non-halting is simply:

∀ n : s_n ≠ s_h
Notice the asymmetry: the “yes” case is witnessed by a finite prefix; the “no” case can never be witnessed in finite time.
That waiting, the “eventually,” is baked into the problem. Remove time, and halting loses meaning.
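To make the asymmetry concrete, here is a minimal Python sketch (my own toy convention, not a standard construction): a “yes” needs only a finite trace; a “no” can never be confirmed by simulation.

```python
def halts_within(program, state, max_steps):
    """Semi-decide halting: simulate at most max_steps transitions.

    `program` is any callable state -> state; as a toy convention,
    a state that maps to itself stands in for the halting state s_h.
    """
    for _ in range(max_steps):
        nxt = program(state)
        if nxt == state:          # reached s_h: a finite "yes" witness
            return True
        state = nxt
    return None                   # not "no" -- merely "not yet"

countdown = lambda n: max(n - 1, 0)   # halts: reaches 0 and stays there
runaway = lambda n: n + 1             # never halts: increments forever

print(halts_within(countdown, 5, 100))  # True, witnessed in a few steps
print(halts_within(runaway, 0, 100))    # None; no finite budget settles it
```

No matter how large the budget, the best the simulator can say about `runaway` is “not yet.”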
Physics as constraint, not execution
In physics, especially modern physics, “waiting” isn’t guaranteed. Einstein’s field equations don’t evolve; they constrain:

G_μν = 8π T_μν   (in units where G = c = 1)

No “next step”. No “after”. There’s only a four-dimensional block satisfying relational constraints.
Quantum mechanics complicates it further. The Schrödinger equation provides smooth evolution:

iℏ ∂ψ/∂t = Ĥψ

…but measurement collapses states discontinuously. And what about quantum gravity? The Wheeler–DeWitt equation, a pure Hamiltonian constraint, annihilates time entirely:

ĤΨ = 0

No t. No sequence. No “eventually.” Just a space of solutions that exist or don’t.
Physics does not run; it satisfies.
Execution is our interface, not the universe’s method.
Computation without steps
I first realized this when reading about closed timelike curves (CTCs).
Imagine a world where the past can interact with the future: a universe with loops in causality.
A computer on a CTC doesn’t run sequentially. It must satisfy a self-consistency condition.
Deutsch formalized it in a beautiful way:

ρ = Φ(ρ)

Here, ρ is a density matrix representing the state of the system, and Φ is a completely positive map describing its interaction with its own past. The only admissible states are fixed points: states consistent with their own history.
Execution disappears. There is no s_0 → s_1 → s_2. Only a state that works.
Problems usually thought to be intractable suddenly collapse into solutions, not by computing faster, but because the problem itself has been reframed: run replaced by exist.
I love this because it feels like cheating, except for the fact that the whole universe isn’t cheating.
We just assumed a rule (sequential steps) that no longer applies.
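Here is a toy classical analogue in Python, a minimal sketch with numpy: a made-up stochastic matrix stands in for Deutsch’s channel Φ (his actual condition is over density matrices and CPTP maps), and we solve directly for a state consistent with its own past. Nothing runs; the fixed point simply exists.

```python
import numpy as np

# A made-up column-stochastic map standing in for Deutsch's channel Φ:
# entry [i, j] is the probability of state i "now" given state j "before".
phi = np.array([
    [0.9, 0.2],
    [0.1, 0.8],
])

# A consistent CTC state is a fixed point rho = phi @ rho.
# For a stochastic matrix, that is the eigenvector with eigenvalue 1.
eigenvalues, eigenvectors = np.linalg.eig(phi)
fixed = eigenvectors[:, np.isclose(eigenvalues, 1.0)].real.ravel()
rho = fixed / fixed.sum()            # normalize to a probability vector

print(rho)                           # ~[0.667, 0.333]
print(np.allclose(phi @ rho, rho))   # True: the state "works"; no steps ran
```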
Halting as satisfiability
So… what happens to the Halting Problem, then?
Instead of asking:
“Does the machine eventually reach s_h?”
We ask:
“Does there exist a configuration s* satisfying the transition constraints and the halting condition?”
Symbolically:

∃ s* : T(s*) ∧ H(s*)

where T encodes the transition constraints and H the halting condition.
This is no longer halting. It’s pure satisfiability. SAT.
Undecidability, once thought eternal, collapses. Not because the universe is super-Turing. But because the universe doesn’t need a clock to decide.
I remember leaning back in my chair and whispering to no one:
“Oh… so this is what time was doing all along.”
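Here is what the reframing looks like as code, a minimal sketch with a hypothetical four-state machine. Note the honest limitation: a classical computer can only check traces up to a bound K, which is exactly the point. We pay for the missing clock with a bound; a universe that queried existence directly would not.

```python
from itertools import product

# A hypothetical four-state machine; state 3 is the halting state s_h.
delta = {0: 1, 1: 2, 2: 3, 3: 3}
HALT, START, K = 3, 0, 5          # K bounds the trace length

def satisfies(trace):
    """T(s*): each step follows delta. H(s*): the trace ends in s_h."""
    follows = all(delta[a] == b for a, b in zip(trace, trace[1:]))
    return follows and trace[-1] == HALT

# No clock, no execution: just ask whether a satisfying trace EXISTS.
witness = next(
    (t for t in product(range(4), repeat=K)
     if t[0] == START and satisfies(t)),
    None,
)
print(witness)  # (0, 1, 2, 3, 3): a configuration that simply "works"
```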
The missing clock
Chaos theory feels inevitable. Sensitive dependence on initial conditions:

|δx(t)| ≈ e^{λt} |δx(0)|

where λ is a positive Lyapunov exponent.
But if there is no t, there is no exponential divergence. Only complex relational structure.
Chaos becomes not a dynamical phenomenon but a structural one: “I cannot know the whole from partial information”. The unpredictability persists, but it is epistemic, not temporal.
Time was doing invisible work again. Remove it, and the phenomenon mutates.
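For contrast, here is the temporal picture that formula describes, sketched with the logistic map (a standard chaotic system, my choice of example, not from the original argument): two trajectories separated by 10⁻¹⁰ become uncorrelated within a few dozen steps.

```python
# Sensitive dependence in the logistic map x -> r*x*(1-x), r = 4.
# Two trajectories, initially 1e-10 apart, diverge roughly as e^(λt).
r = 4.0
x, y = 0.3, 0.3 + 1e-10

for t in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if t % 10 == 0:
        print(f"t={t:2d}  separation={abs(x - y):.3e}")
# The separation roughly doubles each step (λ = ln 2 for r = 4),
# reaching order 1 after a few dozen iterations.
```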
The structural version
Wolfram tells us that some systems can only be understood by running them. But “running” is a temporal concept.
In a timeless universe, irreducibility morphs. It becomes clear that:
“The only way to know the global structure is to analyze the entire relational network.”
Formally, let a system be defined by a set of constraints C over states S. Irreducibility is:

¬∃ C′ ⊊ C : Sol(C′) = Sol(C)

no proper subset of the constraints determines the same global solution set.
You cannot shortcut, not because of time, but because structure itself resists local reasoning.
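A minimal illustration of structure resisting local reasoning (my own toy, not Wolfram’s): a ring of XOR constraints whose contradiction lives in a global parity. Every proper subset of the constraints is satisfiable; only the whole network reveals the impossibility.

```python
from itertools import product

def solutions(constraints, n):
    """All assignments of n bits satisfying every (i, j, c): x_i XOR x_j == c."""
    return [
        bits for bits in product((0, 1), repeat=n)
        if all(bits[i] ^ bits[j] == c for i, j, c in constraints)
    ]

n = 4
# Ring of XOR constraints; satisfiable iff the c's XOR to 0 around the loop.
ring = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 0)]  # global parity 1
print(len(solutions(ring, n)))                        # 0: cannot exist

# Drop ANY single constraint and the system becomes satisfiable:
# no local view of the network can tell you the global answer.
for k in range(len(ring)):
    subset = ring[:k] + ring[k + 1:]
    print(k, len(solutions(subset, n)) > 0)           # all True
```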
Why Time Feels Real
So why does time feel so real? Why does computation work at all? Why do programs execute reliably in the world we inhabit, instead of collapsing into incoherence the moment we press “run”?
At first glance, it’s almost miraculous.
The universe, a jumble of quantum fields and fluctuating geometries, somehow behaves as though it were sequential, as though there were an arrow pointing from “before” to “after.”
But the secret, I realized, lies in three deeply interwoven ingredients:
Decoherence — the quantum-to-classical alchemy that stabilizes records. Without decoherence, “memory” cannot exist. A bit in RAM, a log entry, even a thought: each would remain an ephemeral superposition, never a stable record. Decoherence carves out islands of classicality from the quantum sea, giving us something we can call a state at all.
Entropy gradients — the thermodynamic arrow. Low entropy in the past and higher entropy in the future give directionality. Irreversibility emerges. A program halts not because it must, but because the universe conspires to make irreversible transitions happen. Without entropy gradients, the notion of “progress” would evaporate. Your loops could oscillate forever without any emergent asymmetry.
Causal stability — macroscopic spacetime, the approximate global hyperbolicity that gives us a reliable causal order. Lightcones don’t randomly rotate. Past and future are roughly separable. Partial orders are locally well-behaved. Without this, even classical computation would collapse into ambiguity: a signal could arrive before it’s sent, and your neat sequence of state transitions would have no semantic meaning.
Together, these three factors carve out pockets of reality where causal order, memory, and execution make sense.
Within those pockets, computation works reliably. It feels fundamental, but it’s a phase, not a law.
A convenient feature of the universe’s current “operating mode,” not a guarantee for all of existence.
Execution is Fragile
And then, like a subtle shock, I realized something terrifyingly mundane: I had been staring at the same principle in my own work all along.
Every time I debugged a distributed system, the failures were rarely about wrong values. They were about ordering:
Deadlocks — processes waiting on each other forever, trapped by causal dependencies.
Races — nondeterministic outcomes arising from subtle shifts in ordering.
Livelocks — systems spinning endlessly without making progress, alive but unproductive.
Eventual consistency anomalies — the illusion of stability until the system “catches up”… or doesn’t.
Every one of these problems is a temporal problem. Remove time, or scramble it, and the systems collapse in ways logic alone cannot predict.
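The classic demonstration in Python: several threads incrementing a shared counter without a lock. Every individual value is computed correctly; only the ordering betrays you.

```python
import threading

counter = 0

def worker():
    global counter
    for _ in range(100_000):
        tmp = counter      # read
        tmp += 1           # compute
        counter = tmp      # write -- another thread may have written in between
        # (the explicit read/compute/write split widens the race window;
        #  `counter += 1` has the same hazard, just a narrower one)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(counter)  # expected 400000; typically less: updates lost to reordering
```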
Execution is fragile. It relies on scaffolding we rarely notice: stable memory, an arrow of time, a consistent partial order.
Take away any piece, and the machinery of computation, the thing we assume is rock-solid, ceases to make sense.
The universe doesn’t compute. It exists.
Execution, halting, irreducibility: these are projections. They are interfaces for observers embedded in a particular causal phase of reality.
Halting becomes satisfiability. Complexity suddenly becomes constraint.
The deep laws of nature don’t “run” programs; they encode global consistency conditions that, to us, appear as sequential evolution.
Beyond Undecidability
So if halting collapses into satisfiability, chaos into structural opacity, irreducibility into relational depth, then what remains? What are the true limits of understanding?
They are no longer temporal. They are structural. They are about existence.
Formally, we can describe them as:

∃ X : R(X)

where R(X) is a set of relational or constraint-based rules defining a system, and the question is whether any X satisfies all of them.
Notice the shift: the question is no longer “Can this program be computed?” but “Can this global structure exist without contradiction?”
A program that fails to halt in classical computation is harmless: it just spins forever.
But in physics, inconsistency is catastrophic. A relational network with no self-consistent configuration simply cannot exist.
The universe cannot “run” an impossible program. Non-existence is the ultimate computational limit.
Consider a simple example. Imagine a set of fields ϕi with constraints (an illustrative system; any small one makes the point):

ϕ1 + ϕ2 = 0
ϕ2 · ϕ3 = ϕ4
ϕ4 − ϕ1 = 1
Does there exist a set of values {ϕ1,ϕ2,ϕ3,ϕ4} satisfying all constraints? That is the fundamental question.
Not whether a sequence of updates will reach a solution, but whether the solution exists at all.
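A brute-force sketch of that question over a small integer domain, using the illustrative constraints above: the code never updates anything toward a solution; it only asks whether one exists.

```python
from itertools import product

# The illustrative constraint system from above, over a small integer domain.
def consistent(p1, p2, p3, p4):
    return p1 + p2 == 0 and p2 * p3 == p4 and p4 - p1 == 1

domain = range(-2, 3)
solutions = [phi for phi in product(domain, repeat=4) if consistent(*phi)]
print(solutions)          # e.g. (1, -1, -2, 2): the structure CAN exist
print(bool(solutions))    # existence, not execution, is the question
```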
And that, I realized, is the fundamental limit of reality: structural consistency. Not undecidability, not non-termination. Just existence itself.
Every time we think we are bumping against the “hard limits” of computation (P vs NP, the Halting Problem, irreducibility) we are really bumping against the limits of the phase of reality we inhabit.
Outside decoherence, outside the entropy gradient, outside causal stability, those limits may dissolve.
Halting, chaos, and irreducibility are artifacts of our temporally embedded perspective.
The radical implications
Execution depends on time. Time depends on physical conditions. And undecidability depends on execution.
Step back, and it all becomes a chain of contingencies:

decoherence + entropy gradients + causal stability → time → execution → undecidability

Remove the first link, and the rest vanish.
The universe doesn’t step forward. It satisfies. It exists. It is a structure, not a computation.
And what we perceive as halting, chaos, or irreducibility is simply the shadow of our perspective, projected onto a universe that doesn’t fundamentally need time to exist.
The deepest limits, it turns out, are not computational. They are ontological. Not about “what can be done,” but about “what can exist.”
And in that realization, every programmer, physicist, and philosopher finds a quiet, unnerving thrill: the universe doesn’t need to run to be infinite.
Seeing the universe as constraint
I started Part I with a debugging story. I end Part II the same way.
Systems fail not because they execute incorrectly, but because they try to execute in a regime where only existence matters.
The universe does not run.
It satisfies.
Time is not its clock. It is our interface.
Computation is not its language. It’s our projection.
Undecidability, chaos, irreducibility: these are features of temporally embedded observers. They are not fundamental.
And the question that lingers, the one I cannot shake, is this:
If the universe is a solution, not a process, then what determines which solutions exist at all?
Not what happens. But what is allowed.