Large Language Models are subject to the laws of physics in a bad way, because they use so much electricity and make so much heat. I was interested to learn about one Rolf Landauer and his principle, which suggests physics just might come to our rescue and greatly reduce the amount of power required by AI datacentres. (Note: ’logical reversibility’ sounds confusingly like reversible debuggers and backwards execution, but apart from a general spirit of going backwards, the two are unrelated.)
Logical and thermodynamic reversibility
This all starts with Landauer’s 1961 principle that energy is not consumed by computation itself but by the erasure of information. This seems strange at first, but erasure forces a two-state system into one state, which increases entropy, and that entropy exits as heat. Bennett’s 1973 extension showed that any computation can be restructured to perform no erasure at all, so that the inputs are recoverable from the outputs at every step. Bennett called this logical reversibility, and it carries no Landauer penalty. From this we can see that thermodynamic reversibility is the physical consequence of logical reversibility: when no information is erased, no entropy is generated, and the energy used to perform each computational step can be recovered and reused rather than lost as heat. If you run an AI datacentre, that would be wonderful.
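To make the erasure penalty concrete, here is a small sketch in ordinary Python (the gates are my own illustrative choices, not anything from Landauer or Bennett): an AND gate erases information, the Toffoli gate computes the same result without erasing anything, and the Landauer bound can be worked out from Boltzmann’s constant.

```python
import math

# Landauer's bound: erasing one bit at temperature T dissipates at
# least k_B * T * ln(2) joules as heat (k_B is Boltzmann's constant).
K_B = 1.380649e-23  # J/K
T = 300.0           # room temperature, in kelvin
landauer_j = K_B * T * math.log(2)

# An irreversible gate erases information: AND maps four distinct
# input pairs onto two outputs, so inputs cannot be recovered.
def and_gate(a, b):
    return a & b

# A reversible gate does not: the Toffoli (CCNOT) gate is a bijection
# on three bits, so every output maps back to exactly one input.
def toffoli(a, b, c):
    return (a, b, c ^ (a & b))

# With c = 0, Toffoli computes AND into its third bit, losing nothing.
assert toffoli(1, 1, 0)[2] == and_gate(1, 1)

# Bijectivity check: all 8 inputs produce 8 distinct outputs, so the
# Toffoli gate erases no information and pays no Landauer penalty.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
outputs = {toffoli(*i) for i in inputs}
assert len(outputs) == 8

print(f"Landauer limit at {T:.0f} K: {landauer_j:.3e} J per erased bit")
```

The bound works out to roughly 3 zeptojoules per erased bit at room temperature, many orders of magnitude below what today’s hardware dissipates per operation, which is why the interest here is in the in-principle recoverability rather than an immediate saving.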
The dependency runs in one direction only: thermodynamic reversibility requires logical reversibility, because an erased bit commits an irrecoverable entropy debt before the hardware gets any say. But logical reversibility can run on any old hardware, generating plenty of heat in the process.
Logical reversibility
Janus is a reversible programming language developed in 1982 and formally specified in 2007. Janus makes it impossible to write a program that discards information, and provides a program inverter that runs any Janus program cleanly backwards without a history tape. The harder problem is extending this to concurrent programs, where interleaving makes reversal non-trivial.
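The flavour of this can be sketched in Python rather than Janus (the three-tuple statement format here is my own toy encoding, not Janus syntax): only updates that can be undone, such as `+=` and `^=`, are permitted, and the program inverter simply reverses the statement list and flips each operator.

```python
# Janus permits only information-preserving updates: "x += e" or
# "x ^= e" can be undone, whereas plain "x = e" destroys x's old
# value and is forbidden. Each reversible operator has an inverse.
INVERSE = {"+=": "-=", "-=": "+=", "^=": "^="}

def run(program, env):
    """Execute a list of (var, op, constant) statements forwards."""
    for var, op, k in program:
        if op == "+=":
            env[var] += k
        elif op == "-=":
            env[var] -= k
        elif op == "^=":
            env[var] ^= k
    return env

def invert(program):
    """The program inverter: reverse the statements, flip each op."""
    return [(var, INVERSE[op], k) for var, op, k in reversed(program)]

prog = [("x", "+=", 5), ("x", "^=", 3), ("y", "-=", 2)]
state = run(prog, {"x": 10, "y": 7})
restored = run(invert(prog), state)
assert restored == {"x": 10, "y": 7}  # inputs recovered, no history tape
```

The point of the final assertion is that the original state comes back from the output alone: nothing was logged along the way, which is exactly what distinguishes a program inverter from a record-and-replay scheme.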
A 2022 mathematical paper from the University of Leicester, Reversing an Imperative Concurrent Programming Language, demonstrates that this concurrency problem is solvable. The paper Reversible Execution for Robustness in Embodied AI and Industrial Robots (which shares an author with the previous paper) says:
We thus demonstrate how a traditional AI-based planning approach is enriched by an underlying reversible execution model that relies on the embodiment of the robot system
In 2025 it was shown that reversible architectures can work at scale for Large Language Models, with the transformer architecture itself made reversible. This works by reconstructing hidden states during backpropagation rather than storing them, and achieves order-of-magnitude memory reductions without sacrificing accuracy. It is not yet running on reversible hardware, because no such hardware exists, but it demonstrates that the software side may be solvable: the models train on conventional GPUs, yet they are logically reversible.
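The paper’s exact construction isn’t reproduced here, but the general trick is a reversible residual coupling in the RevNet style. This toy Python sketch (with placeholder functions F and G standing in for attention and MLP sublayers, and integer lists standing in for tensors) shows how the inputs are reconstructed exactly from the outputs instead of being stored:

```python
# A reversible residual block: the two input halves (x1, x2) can be
# recovered exactly from the two output halves (y1, y2), so hidden
# states need not be kept around for backpropagation.

def F(x):
    # Placeholder for an attention sublayer.
    return [2 * v + 1 for v in x]

def G(x):
    # Placeholder for an MLP sublayer.
    return [v * v for v in x]

def forward(x1, x2):
    y1 = [a + b for a, b in zip(x1, F(x2))]
    y2 = [a + b for a, b in zip(x2, G(y1))]
    return y1, y2

def inverse(y1, y2):
    # Run the coupling backwards: subtract in the reverse order.
    x2 = [a - b for a, b in zip(y2, G(y1))]
    x1 = [a - b for a, b in zip(y1, F(x2))]
    return x1, x2

x1, x2 = [1, 2, 3], [4, 5, 6]
assert inverse(*forward(x1, x2)) == (x1, x2)  # exact reconstruction
```

Note that F and G can be arbitrary, non-invertible functions; reversibility comes from the additive coupling around them, which is what lets the scheme wrap existing sublayer designs.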
Taking the reversible-transformer result at face value, such logical reversibility meets the precondition for thermodynamic reversibility, giving us energy that can be claimed back if the hardware allows it. And that is where physics is on our side.
Thermodynamic reversibility
Vaire Computing in London are building practical reversible hardware that preserves the energy gains logical reversibility makes reclaimable, described in their 2025 hardware paper. The underlying principle is to stop discarding information, so that you stop paying the penalty for doing so. Their stated aim is to have a product in 2027.
It seems the Reversible Computing 2026 conference in Torino has Vaire as its first commercial sponsor, suggesting the academic and engineering communities may be meeting in the middle. Because of my interest in reversible execution, I have followed the output of this excellent conference for a long time.