Rule-based Epidemic Modelling and Malaria

The Rule-based Epidemic Modelling (RBEM) Project was funded by the UK Medical Research Council (grant X/011658/1) to make rule-based modelling methodology more accessible to the infectious disease modelling community. The rule-based approach is an alternative to writing differential equations, and it can offer human as well as technical advantages. Our pre-print (draft) paper illustrates how the approach can be applied to classical problems, in addition to our previous work (cited below) on novel problems under pandemic pressures. ...

April 14, 2026 · 4 min · Dan Shearer

Active Heat Exchanger

The Active Heat Exchanger (AHE) research programme started as an engineering development project, then uncovered first an engineering research question and then a health research question. We address the problem of houses making their occupants sick, but we also solve difficult data problems highlighted by the COVID-19 pandemic. Here is where we started: I walked into the Edinburgh Hacklab one day in 2022 and saw this arrangement of fans and tubes in the window: ...

April 10, 2026 · 13 min · Dan Shearer

Structure vs Constitution in AI Safety

Anthropic publishes its constitution along with research about where the constitution works and where it does not. The current version is an ethical treatise addressed to Claude, covering safety, ethics, Anthropic’s guidelines, and helpfulness, in that priority order when they conflict. Anthropic favours cultivating good values and judgment over strict rules. In 2026, Anthropic’s operational judgment failed twice in the same way, leading to the leak of the Claude Code source code. The constitution asks Claude to imagine how a “thoughtful senior Anthropic employee would react”, but what happens when the organisation’s structure fails? ...

March 31, 2026 · 5 min · Dan Shearer

Snow Crash and Standing Orders

I am using my personal Perseverance engine as I help develop the code, and I’m watching carefully to see how useful it is for analysis, review and writing. Evidence so far is mixed, but improving fast. I feel in control, as I do with any other work tool, which I certainly do not when using a typical error-prone AI text interface. One reason I feel in control is that there are more controls in place; that is the point of the Artificial Organisations concept. But another reason is that this tool is becoming more tuned to me all the time. ...

March 30, 2026 · 3 min · Dan Shearer

Slow LLMs and MCPs are hiding problems

This is a technical note about a problem that is going to bite agentic AI users soon. AI is slow, and agentic AI is even slower. I develop an MCP server that generates PDF documents, and I work with the Agentic Perseverance Composition Engine daily, and AI seems so, so slow. There’s so much waiting, and every mistake means yet more sitting around. Tasks we know take maybe 5 microseconds on an operating system (e.g., does a file called Things-to-Do exist?) can take up to a million times longer – between 2 and 5 seconds. This is because the big brain in the cloud is being consulted multiple times, often with timeouts. It’s a young, unstable and unreliable stack, rather like the early days of MS-DOS or the Apple ][. When AI gets hold of the data from your computer via an MCP server it can do some very interesting things, but it is not put together well. ...

March 12, 2026 · 6 min · Dan Shearer

AI, PCE and the Geth Consensus

AI ethics and safety work mostly focuses on making individual models smarter and better-behaved via guidelines and persuasion, with not much hope this will succeed. You should especially not feel safe when an AI company reassures you about their guardrails. When you hear “guardrails”, think of telling a dog “Don’t bite the furniture inside the house today”: you can never know what will actually happen. The concept of Artificial Organisations doesn’t require AIs to be reliable; instead it ensures that when an AI goes wrong, there are hard limits on how much damage it causes. Similarly, we can put the dog outside the house, so no matter how bitey it is, the furniture cannot be bitten. I have spent a good deal of 2026 trying to use this concept to make AI less dangerous and more useful. I even have it studying me as an apprentice. This is mostly the opposite of Anthropic’s idea of a constitution. ...

March 6, 2026 · 10 min · Dan Shearer

The biggest problems in using AI

There are many problems with the AI billions of people use in 2026, discussed endlessly at all levels of society. From the end of 2025 I became interested in the particular problems of ethics and reliability, and why the approaches taken by all of the large AI companies are not good enough. Predictability, or ‘alignment’ as they call it, is just not something we can expect from this type of AI. ...

March 6, 2026 · 11 min · Dan Shearer

One Health and Epidemiology

In late 2025 the Rule-based Epidemic Modelling team began to consider the wider context of their work. On the one hand, the techniques of epidemiology save lives at scale; on the other, emerging diseases and newer health-related epidemics are accelerating. We asked: “How can a scholar quickly grasp epidemiology basics?” This question led to my paper introducing epidemiology. Beginning with what epidemiology is not, we see how disease management saved millions of lives from about 1950. Epidemiology matured through the 20th century but stalled in the early 21st. It became clear that epidemiology alone was insufficient, so political consensus was found to expand the scientific scope to One Health. One Health treats ecology, animals and humans as a system of systems across dozens of science fields, using the language of epidemiology. The newly-refocused World Health Organisation is committed to getting the 2030 global health goals back on track with a One Health approach, with further millions of lives at stake. Many new One Health scientists and scholars are not epidemiologists, and this paper is for them. ...

February 10, 2026 · 1 min · Dan Shearer

Logical and Thermodynamic Reversibility

Large Language Models are subject to the laws of physics in a bad way, because they use so much electricity and make so much heat. I was interested to learn about Rolf Landauer and his principle linking logical and thermodynamic reversibility, which suggests physics might also help, by greatly reducing the amount of power required by AI datacentres. That still leaves many, many AI problems, including an economic bubble, but it would definitely help. ...

February 1, 2026 · 3 min · Dan Shearer