Structure vs Constitution in AI Safety

Anthropic publishes its constitution along with research about where the constitution works and where it does not. The current version (January 2026) is an ethical treatise written to Claude. The priorities are safety, ethics, Anthropic’s guidelines, and helpfulness, in that order when they conflict. Anthropic favours cultivating good values and judgment over strict rules, comparing its approach to trusting an experienced professional rather than enforcing a checklist. In March 2026, Anthropic’s operational judgment failed catastrophically: a one-line .npmignore error led to the leak of 512,000 lines of Claude Code source code. The constitution asks Claude to imagine how a “thoughtful senior Anthropic employee would react”, but what happens when the organisation’s structure fails? ...
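To make the failure mode concrete: an .npmignore file replaces .gitignore when npm decides what to publish, so anything it fails to exclude ships with the package. The actual file and mistake are not shown in the post; the contents below are a purely hypothetical illustration of how a single bad line can publish an entire source tree.

    # Hypothetical .npmignore, for illustration only -- not the real file.
    # npm uses this file instead of .gitignore when packing the package,
    # so anything not excluded here is published.
    test/
    docs/
    # One mistyped line ("sr/" instead of "src/") and the whole source
    # tree is silently included in the published package.
    sr/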

March 31, 2026 · 5 min · Dan Shearer

Snow Crash and Standing Orders

I am using my personal Perseverance engine as I help develop the code, and I’m watching carefully to see how useful it is for developing analysis, review and writing. Evidence so far is mixed, but improving fast. I feel in control, as I do with any other work tool, which is certainly not the case when using a typical error-prone AI text interface. One of the reasons I feel in control is that there are more controls in place; that is the point of the Artificial Organisations concept. But another reason is that this tool is becoming more tuned to me all the time. ...

March 30, 2026 · 3 min · Dan Shearer

Slow LLMs and MCPs are hiding problems

This is a technical note about a problem that is going to bite agentic AI users soon. AI is slow, and agentic AI is even slower. I develop an MCP server that generates PDF documents and I work with the agentic Perseverance Composition Engine daily, and AI seems so, so slow. There’s so much waiting, and every mistake means yet more sitting around. Tasks we know take maybe 5 microseconds on an operating system (e.g., does a file called Things-to-Do exist?) can take a million times longer – between 2 and 5 seconds. This is because the big brain in the cloud is being consulted multiple times, often with timeouts. It’s a young, unstable and unreliable stack, rather like the early days of MS-DOS or the Apple ][. When AI gets hold of the data from your computer via an MCP server it can do some very interesting things, but it is not put together well. ...
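As a rough illustration of the gap (a minimal sketch of my own: only the 2–5 second round-trip figure and the Things-to-Do example come from the text above, the timing code is illustrative):

    # A minimal sketch of the latency gap described above. The local check is
    # measured directly; the 2-5 second figure for a cloud/MCP round trip is
    # the range quoted in the post, not something measured here.
    import os
    import time

    def local_exists(path: str) -> bool:
        # A stat() answered by the operating system, typically microseconds.
        return os.path.exists(path)

    N = 100_000
    start = time.perf_counter()
    for _ in range(N):
        local_exists("Things-to-Do")
    per_call = (time.perf_counter() - start) / N

    for round_trip in (2.0, 5.0):  # seconds, as quoted above
        ratio = round_trip / per_call
        print(f"local check ~{per_call * 1e6:.1f} microseconds; "
              f"a {round_trip:.0f} s round trip is ~{ratio:,.0f}x slower")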

March 12, 2026 · 6 min · Dan Shearer

AI, PCE and the Geth Consensus

AI ethics and safety work mostly focuses on making individual models smarter and better behaved, with little hope that this will succeed through guidelines and persuasion. You should especially not feel safe when an AI company reassures you about its guardrails. When you hear “guardrails”, think of telling a dog “Don’t bite the furniture inside the house today”: you can never know what will actually happen. The concept of Artificial Organisations doesn’t require AIs to be reliable; it ensures that when an AI goes wrong there are hard limits on how much damage it causes. Similarly, we can put the dog outside the house, so no matter how bitey it is the furniture cannot be bitten. I have been spending a good deal of 2026 trying to use this concept to make AI less dangerous and more useful. I even have it studying me as an apprentice. This is mostly the opposite of Anthropic’s idea of a constitution. ...

March 6, 2026 · 11 min · Dan Shearer

Addressing the biggest problems in AI

There are many problems with what billions of people perceive to be AI in 2026, not least sustainability in many senses. All of the large AI companies are following more or less the same approaches to safety and predictability. Since February 2026 I have been working on the code of a different approach, and it really does seem promising. The Perseverance Composition Engine (PCE) approaches pressing problems in AI from a different perspective. PCE does not try to make LLMs behave better. Instead, PCE applies familiar structure from human organisations so that the LLMs’ inevitable misbehaviour is detected and corrected. This article explains why Artificial Organisations are worth trying. And if you’re not a computer scientist but you think you’ve heard this all before, you are right: games and sci-fi fans, rejoice! ...
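The organisational idea can be sketched in a few lines. Everything below is an illustrative assumption on my part rather than PCE’s actual code: the point is only the shape, where one role proposes an action, an independent check applies fixed rules, and anything outside those rules is stopped rather than merely discouraged.

    # Illustrative sketch only: names and rules here are hypothetical, not PCE.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        kind: str      # e.g. "write_file"
        target: str    # e.g. a path the model wants to touch

    ALLOWED_KINDS = {"read_file", "write_file"}
    ALLOWED_PREFIX = "workspace/"   # hard limit: nothing outside this tree

    def review(action: ProposedAction) -> bool:
        # The checker does not trust the proposer; it applies its own rules.
        return action.kind in ALLOWED_KINDS and action.target.startswith(ALLOWED_PREFIX)

    def execute(action: ProposedAction) -> None:
        if not review(action):
            # Misbehaviour is detected and stopped, not merely discouraged.
            raise PermissionError(f"rejected: {action}")
        print(f"would perform {action.kind} on {action.target}")

    execute(ProposedAction("write_file", "workspace/notes.txt"))   # passes review
    try:
        execute(ProposedAction("write_file", "/etc/passwd"))       # hard limit applies
    except PermissionError as err:
        print(err)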

March 6, 2026 · 11 min · Dan Shearer

Logical and Thermodynamic Reversibility

The topic of reversible computers and backwards execution is quite different to logical reversibility. My experiences of reversibility were all driven by correctness in software, and it didn’t matter that under the bonnet nothing truly executes backwards. Now AI, with its hungry datacentres, has made energy a top-priority problem to solve, and developing reversible hardware to achieve thermodynamic reversibility seems feasible. Large Language Models are subject to the laws of physics in a bad way, because they use so much power and make so much heat. With reversibility, physics might come to our rescue and greatly reduce the amount of power required. ...
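The distinction can be shown with a toy example (an illustrative sketch, not taken from the post): a step is logically reversible when no information is destroyed, so the input can always be reconstructed from the output, even though conventional hardware never literally runs backwards; an irreversible step erases bits, and by Landauer’s bound each erased bit must eventually be paid for in heat.

    # Logical reversibility in miniature: XOR with a key discards nothing,
    # so every output maps back to exactly one input.
    def forward(x: int, key: int) -> int:
        return x ^ key

    def backward(y: int, key: int) -> int:
        # The inverse is the same operation, so the step can be undone.
        return y ^ key

    key = 0b0101_0101
    x = 0b1011_0010
    assert backward(forward(x, key), key) == x

    # Contrast an irreversible step such as y = x & mask, which maps many
    # inputs to one output; the erased bits are where the heat comes from.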

February 1, 2026 · 4 min · Dan Shearer