Snow Crash and Standing Orders

I am using my personal Perseverence engine as I help develop the code, and measuring how useful it is as a writing tool. The evidence so far is mixed, but improving towards being very useful. I feel in control, as I do with any other traditional text tool, and not uncomfortable, as I often do with a typical AI chat interface. One interesting part of this is the following entry in the Standing Orders document, which is the set of instructions the AI reads every time it starts up: ...

March 30, 2026 · 403 words · Dan Shearer

AI, PCE and the Geth Consensus

Most AI safety work focuses on making individual models smarter and better behaved, and it isn't clear whether that can even be done right. I am among the many who think models cannot be made safe, because what we typically do amounts to giving them guidelines and persuasion. You should not feel safe when a giant company reassures you about its guardrails. When you hear "guardrails", think of telling a dog "Don't bite the furniture inside the house today": it might or might not work, and you can never know what will happen. The concept of Artificial Organisations is about arranging things so that when an AI goes wrong there are hard limits on how bad it can be. Like putting the dog outside, so that no matter how bitey it is, the furniture is safe. I have spent a good deal of 2026 trying to use this concept to make AI less dangerous and more useful. I even have it studying me as an apprentice. ...

March 6, 2026 · 2185 words · Dan Shearer