Snow Crash and Standing Orders

I am using my personal Perseverence engine as I help develop the code, and I’m watching carefully to see how useful it is for developing analysis, review and writing. Evidence so far is mixed, but improving fast. I feel in control as I do with any other work tool, which I certainly do not when using a typical error-prone AI text interface. One reason I feel in control is that there are more controls in place, which is the point of the Artificial Organisations concept. Another reason is that this tool is becoming more tuned to me all the time. ...

March 30, 2026 · 3 min · Dan Shearer

AI, PCE and the Geth Consensus

AI ethics and safety work mostly focuses on making individual models smarter and better-behaved, with little hope that this will succeed via guidelines and persuasion. You should especially not feel safe when an AI company reassures you about its guardrails. When you hear “guardrails”, think of telling a dog “Don’t bite the furniture inside the house today”: you can never know what will actually happen. The concept of Artificial Organisations doesn’t require AIs to be reliable; it ensures that when an AI goes wrong there are hard limits on how much damage it causes. Similarly, we can put the dog outside the house, so no matter how bitey it is the furniture cannot be bitten. I have been spending a good deal of 2026 trying to use this concept to make AI less dangerous and more useful. I even have it studying me as an apprentice. This is mostly the opposite of Anthropic’s idea of a constitution. ...

March 6, 2026 · 11 min · Dan Shearer