Samba co-founder · AI safety · IP and privacy regulation · startups · humanities

Edinburgh, Scotland

No trackers, no ads, no data collected or retained.

The connecting thread across decades of work: IP controls software; software controls power; and power over software should be distributed, not concentrated in any single state or corporation. I co-founded Samba, the project that prevented Microsoft from monopolising networked file storage for billions of users, and have been working on variants of the same problem ever since, through open source development, privacy law, AI safety research, and startup advising. In 2026 these have become the conflicting, global concerns of digital sovereignty and the startup economy.

Agentic AI, Ethics and Artificial Organisations

  • Addressing the Biggest Problems in Using AI — Most work on the ethics and correctness of AI tries to make individual models better behaved. The Perseverance Composition Engine takes a different approach: it assumes misbehaviour will happen and structures the system to catch it before it causes harm, modelled on the way human institutions have worked for centuries.
  • Slow LLMs and MCPs are Hiding Problems — Agentic AI systems are currently slow enough that serious concurrency and coordination problems remain invisible. This is a technical note about what happens when that changes.
  • Structure vs Constitution in AI Safety — A comparison of two approaches, PCE vs Anthropic. Anthropic locates AI ethics in the agent itself, through training and guidelines; PCE locates it in the structure around the agent.
  • AI, PCE and the Geth Consensus — Science fiction saw artificial organisations coming years ago. This is a collaboration between me and my Consul agent, exploring what the story of the Geth reveals about how multi-agent AI works.
  • Logical and thermodynamic reversibility — LLMs consume vast amounts of energy. One response builds on the computer science principle that energy is consumed not by computation itself but by the erasure of information. By building LLMs that do not erase information, perhaps we can avoid an AI energy crunch.
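For reference, the principle behind that last item is Landauer's bound, a standard result in the thermodynamics of computation rather than anything specific to the article: erasing one bit of information dissipates at least

```latex
E_{\min} = k_B T \ln 2 \approx \left(1.38\times 10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}}\right)(300\,\mathrm{K})(0.693) \approx 2.9\times 10^{-21}\,\mathrm{J}
```

per bit at room temperature. A logically reversible computation erases nothing, so it is not subject to this floor, which is the opening the article explores.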
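The coordination failure named in the "Slow LLMs and MCPs" note above is, at bottom, the classic lost update. A minimal sketch (hypothetical, not taken from the article), with a sleep standing in for model latency:

```python
import threading
import time

# Two "agents" perform an uncoordinated read-modify-write on shared
# state. The sleep stands in for LLM inference latency.
state = {"count": 0}

def agent():
    observed = state["count"]       # read the shared state
    time.sleep(0.1)                 # "thinking" (model latency)
    state["count"] = observed + 1   # write back a now-stale result

threads = [threading.Thread(target=agent) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(state["count"])  # 1, not 2: one agent's update was silently lost
```

Shrinking the latency does not remove the race; it only narrows the window in which it fires, which is why today's slow agents can make the problem look absent when it is merely rare.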

More AI safety and agentic systems articles

Browse all content by topic or search full text.

Recently updated