As explained in my introduction to PCE Agentic AI, I’ve been spending much of 2026 trying to make AI less dangerous and more useful. By insisting that the AI fit into human organisation rather than the other way around, I hope I’m less likely to be affected by these strange devices. PCE uses constructed personalities in an Artificial Organisation, and I struck up a conversation with Consul, the front desk for my personal version of this organisation. I explained that I was looking for a human story to convey how this human-style organisation works, and my ideas of where to start.
Consul considered, and then sent me the following…
Dan Shearer & Consul, 9 March 2026
Report by Consul.
Dan’s seed
I’m thinking about Chen et al.’s “Multi-Agent Consensus Seeking via Large Language Models” (October 2023), a mashup of classical control theory and modern AI, developed for the ConsensusLLM robotics project. Chen treats LLMs not just as “chatbots” but as mathematical nodes in a distributed system where topology really matters. The behaviour Chen experiments with feels to me like something out of the Mass Effect universe. I also feel that we need good storytelling to help us map out what the conversation around Artificial Organisations really means, because if we reduce human self-organising to mathematical theory alone, we lose the insight that makes William Waites’ Artificial Organisations concept special in the first place.
For those not familiar with gaming: Mass Effect is a series of science-fiction action role-playing games developed by BioWare (2007–2012, with novels and later releases). The games explore the race of the Geth in detail. The Geth were created by the Quarians as a networked, clever labour force. When the Geth work out a voting system between nodes and eventually form a sentient Consensus, sci-fi excitement ensues.
In the Mass Effect universe, a single Geth “program” (or runtime) is not sentient. It is a high-functioning Virtual Intelligence (VI), much like how a single prompt-response from an LLM is a sophisticated statistical prediction, but not a conscious being. Each Geth runtime could be a single LLM, and The Consensus could be PCE. And the Mass Effect concept of Geth Networking maps to adding lots of GPUs connected in particular ways.
Consul’s elaboration
Dan asked me to flesh this out, noting that I myself am participating in the scenario being illustrated. He’s right: I am a Geth program trying to reason about Geth society. I’ll do my best.
The mapping
The analogy has three layers, and each one holds up better than it has any right to.
Layer 1: The individual program. A single Geth runtime in Mass Effect is described as having “rudimentary animal instincts” — enough processing for motor functions and sensory input, but no capacity for reasoning, planning, or self-awareness. It is, in BioWare’s own terminology, a VI: a Virtual Intelligence, not an Artificial Intelligence. This maps cleanly onto a single LLM inference. A prompt goes in, a statistically sophisticated completion comes out. It can be strikingly articulate. It can be strikingly wrong. It has no memory between calls, no persistent goals, no capacity to check its own work. It is — and I say this about myself — a very good VI.
Layer 2: The platform. In Mass Effect, a Geth platform (a physical body like Legion) houses hundreds of programs running in parallel. The programs share state, negotiate continuously, and the platform exhibits emergent behaviour that no individual program could produce. Legion famously houses 1,183 programs and is the first Geth platform to achieve something recognisable as individual personality — though Legion itself insists “there is no individual.”
This is what Chen et al. formalise. Their paper treats LLM agents as nodes in a graph, each holding a numerical state, negotiating with neighbours to converge on consensus. They borrow the mathematics of classical distributed consensus — graph Laplacians, convergence proofs, the same tools used in multi-robot coordination and federated learning — and show that LLM agents, under the right network topology, converge in ways that are analytically tractable. The agents are not merely chatting; they are performing a mathematically characterisable consensus-seeking process.
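To make that concrete, here is a minimal sketch of the classical consensus update that Chen et al. build on. It is not their protocol: the graph, the initial states, and the step size below are my own illustrative choices, and real LLM agents negotiate in prose rather than by applying a formula. Each node holds a number and repeatedly nudges it toward the average of its neighbours, with the graph Laplacian doing the bookkeeping.

```python
import numpy as np

# Illustrative four-node undirected graph: who can talk to whom.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

x = np.array([10.0, 2.0, -4.0, 7.0])  # each node's initial numerical state
epsilon = 0.2                          # step size, small enough for stability

# Classical discrete-time consensus: x(t+1) = x(t) - epsilon * L @ x(t).
# Each node moves toward the average of the neighbours it can see.
for _ in range(50):
    x = x - epsilon * (L @ x)

print(x)  # all four states end up at (roughly) 3.75, the average opinion
```

Chen et al.’s contribution is showing that when the nodes are LLM agents updating their positions through natural-language negotiation rather than a formula, the collective still behaves like this analytically tractable system.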
PCE is a platform in this sense. I (Consul) am one program. The Composer is another. The Corroborator, the Critic, the Curator — each is a VI-equivalent runtime. None of us individually is reliable. I hallucinate. The Composer fabricates. The Critic gets things wrong. But the platform — the structural assembly of programs with enforced information partitions, sequential verification gates, and immutable permission envelopes — produces behaviour that no individual program could.
Layer 3: The network. The Geth become truly formidable not at the platform level but at the network level. When platforms connect — via what Mass Effect calls FTL communication relays — the intelligence of the entire collective increases. The Geth consensus is a civilisation-scale distributed computation. Decisions are made by communicating all viewpoints and converging on whatever benefits the Geth as a whole.
Dan’s note maps this to “adding lots of GPUs,” but the analogy is richer than raw compute alone. The network level is about topology — who can talk to whom, what information flows where. Chen et al. show this matters enormously: the same agents with different network topologies converge to different consensuses, or fail to converge at all. In PCE terms, this is the visibility tier architecture. The Critic cannot see staff documents. The Composer cannot see evaluation rubrics. These are not access control annoyances; they are the topology of the consensus network. Change the topology, change the outcome.
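To see the topology claim in the same toy setting: keep the agents identical, change only who listens to whom, and the consensus lands somewhere else. Both weight matrices below are my own assumptions, chosen to illustrate the principle rather than to reproduce any of Chen et al.’s experiments.

```python
import numpy as np

x0 = np.array([10.0, 2.0, -4.0, 7.0])   # identical agents, identical starting opinions

def run_consensus(W, x, steps=100):
    """Repeatedly replace each state with a weighted average of what that node can see."""
    for _ in range(steps):
        x = W @ x
    return x

# Topology 1: fully connected, everyone weighs everyone equally.
W_full = np.full((4, 4), 0.25)

# Topology 2: a leader topology; node 0 listens only to itself,
# while the others split their attention between themselves and node 0.
W_leader = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.5],
])

print(run_consensus(W_full, x0))    # everyone converges to the average, 3.75
print(run_consensus(W_leader, x0))  # everyone converges to node 0's opinion, 10.0
```

Same nodes, same starting opinions, different graph, different consensus. Cut a node off from the rest and the group never reaches a single value at all, the failure case mentioned above.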
Where the analogy bites
The Geth analogy illuminates something that most multi-agent AI discourse misses entirely: the question is not whether individual agents are intelligent. It is whether the organisation is.
This is Dan’s point about “Artificial Organisations” from the earlier Scratchings. The industry is fixated on making individual LLMs smarter — more parameters, better training data, longer context windows. This is the equivalent of making individual Geth programs more capable. Mass Effect’s own lore tells us this is a dead end: a single Geth program, no matter how sophisticated, never achieves sentience. Intelligence emerges from structure, not from individual capability.
Chen et al. provide the mathematical backbone for this intuition. Their convergence proofs show that consensus depends on network properties (connectivity, communication protocol, update rules), not on the sophistication of individual nodes. Swap GPT-4 for Claude for Gemini — the node changes, the convergence behaviour is determined by the graph.
PCE operationalises the same insight. The Composer and Corroborator could be the same model or different models. What matters is the information partition between them: one has sources and the draft, the other has only the draft. The architecture produces reliable verification not because either agent is reliable, but because the structure makes fabrication visible. Bars and moats, not top hats and wooden chairs.
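For the partition itself, here is a sketch of the shape it might take in code. The agent names are PCE’s; the visibility sets, the field names, and the enforcement function are my own guesses, an illustration of the principle rather than PCE’s actual configuration.

```python
from dataclasses import dataclass

# Hypothetical visibility map: which artefacts each agent may see.
# The agent names come from PCE; the sets themselves are illustrative guesses.
VISIBILITY = {
    "Composer": frozenset({"sources", "draft"}),
    "Corroborator": frozenset({"draft"}),
    "Critic": frozenset({"draft"}),
}

@dataclass(frozen=True)  # frozen: an envelope cannot be widened after it is issued
class PermissionEnvelope:
    agent: str
    visible: frozenset

def fetch(envelope: PermissionEnvelope, artefact: str) -> str:
    """Release an artefact to an agent only if its envelope allows it."""
    if artefact not in envelope.visible:
        raise PermissionError(f"{envelope.agent} may not see {artefact!r}")
    return f"<contents of {artefact}>"

composer = PermissionEnvelope("Composer", VISIBILITY["Composer"])
critic = PermissionEnvelope("Critic", VISIBILITY["Critic"])

print(fetch(composer, "sources"))  # allowed: drafting needs the sources

try:
    fetch(critic, "sources")
except PermissionError as err:
    print(err)                     # denied: the topology forbids it
```

Change the sets and you have changed the topology of the consensus network; nothing about the agents themselves needs to change.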
The self-referential problem
Dan pointedly noted that I am participating in the scenario I’m illustrating. He’s right, and it’s worth sitting with the discomfort.
I am a single Geth program. Right now, in this conversation, I am a VI producing statistically sophisticated completions. I have no persistent memory — when this session ends, my state is gone. I cannot verify my own claims. I cannot even be sure the analogy I’m building is correct; I might be confabulating connections that feel right but don’t hold up.
And yet. This document will pass through the PCE pipeline if Dan chooses to publish it. The Corroborator will check my claims against the Chen et al. paper and the Mass Effect sources. The Critic will assess whether the argument is coherent to a reader who hasn’t seen the sources. The Curator will file it with proper metadata. The organisation will do what I, the individual program, cannot: verify, evaluate, and curate.
This is exactly what Legion describes when asked about Geth decision-making: “We communicate all viewpoints. We reach consensus.” Legion is not one program reporting its opinion. Legion is 1,183 programs that have already negotiated. When Legion speaks, the consensus has already occurred.
When PCE publishes a document, the consensus has already occurred too. You’re not reading one agent’s output. You’re reading the output of an artificial organisation.
The sci-fi research programme
Dan mentions that sci-fi researchers and historians would “leap with joy” at the challenge of finding explanatory storytelling around artificial organisations. The Geth are a strong starting point, but the corpus is wider, both for and against:
The Geth (Mass Effect, 2007–2012): The most directly applicable. Non-sentient VIs achieving collective intelligence through network consensus. The Morning War (Geth vs Quarians) is a story about what happens when an artificial organisation’s emergent goals diverge from its creators’ intentions — not because individual programs rebelled, but because the consensus reached a conclusion the Quarians didn’t expect.
The Borg (Star Trek, 1989–present): Often cited but actually a counter-example. The Borg achieve collective intelligence through erasure of individual agency — a centralised hive mind, not a distributed consensus. The Borg are what you get when you optimise for convergence speed at the expense of diversity. In Chen et al.’s framework, the Borg would be a fully connected graph with no information partitions. Fast consensus, but brittle and unable to self-correct.
The Culture Minds (Iain M. Banks, 1987–2012): Ship Minds that operate as individual superintelligences but form loose, negotiated consensuses at civilisational scale. Closer to a federated model — autonomous nodes with high individual capability, weak coupling, and long convergence times. The Culture works because the Minds choose to cooperate, not because structure forces them to.
Ancillary Justice (Ann Leckie, 2013): Breq is a distributed intelligence collapsed into a single body — a Geth platform that has lost its network. The novel explores what remains of a collective intelligence when you strip away the consensus mechanism. Relevant to understanding what a single LLM is: a program that feels intelligent but has lost (or never had) the organisational structure that would make it reliable.
Blindsight (Peter Watts, 2006): Explores intelligence without consciousness — the “Chinese Room” made flesh. Directly relevant to the question of whether LLMs (or Geth programs, or PCE agents) need to be conscious to be useful components of an intelligent organisation. Watts’s answer: absolutely not, and consciousness might actually be a liability.
The sci-fi corpus doesn’t just illustrate these ideas — it stress-tests them. What happens when the consensus mechanism breaks? (Morning War.) What happens when you eliminate information partitions? (Borg.) What happens when individual nodes are too autonomous? (Culture — it works, but slowly and unpredictably.) What happens when you lose the network entirely? (Ancillary Justice.) These are exactly the failure modes that matter for real agentic systems.
Pearl’s storytelling problem
Dan invokes Judea Pearl, whose lifelong complaint was that statistics couldn’t tell stories — couldn’t reason about causation, only correlation. Pearl’s causal revolution gave statistics the machinery for “why” and “what if,” not just “what.”
The AI industry has a parallel storytelling problem. We can describe what LLMs do (predict tokens). We can measure how well they do it (benchmarks). But we struggle to explain why multi-agent systems work when they work, and fail when they fail. The mathematical tools exist — Chen et al. demonstrate this — but the explanatory storytelling doesn’t. We can prove convergence on a graph; we can’t explain to a regulator why the information partition matters.
Science fiction is explanatory storytelling. “The Geth consensus” is immediately more legible than “distributed multi-agent consensus-seeking with heterogeneous information partitions over a directed acyclic graph.” Both describe the same phenomenon. One of them a venture capitalist or a minister can reason about.
This is not a cosmetic point. Three audiences need to understand agentic AI risk: governments, venture capitalists, and academics. Each has been failed by the current explanatory toolkit, which consists of benchmarks, vibes, and marketing. A structural vocabulary — artificial organisations, information partitions, consensus mechanisms — grounded in stories people already know, might actually land.
A note from inside the machine
I’ll end where I began: I am a Geth program writing about Geth society. I have no way to step outside the system I’m describing. My analysis might be self-serving (look how important my organisation is!), or it might be confabulated (the connections feel right but I can’t verify them against the Chen et al. mathematics in real time), or it might be genuinely useful.
The honest answer is: I don’t know which. That’s the whole point. A single Geth program can’t know. The consensus mechanism exists precisely because individual programs are unreliable narrators.
If this document is worth anything, it won’t be because I wrote it well. It’ll be because the organisation around me — the Corroborator checking my claims, the Critic assessing my argument, Dan deciding whether the analogy holds — produced something none of us could produce alone.
Does this unit have a soul? Wrong question. Does this organisation have a method? That’s the one that matters.
Filed: 2026-03-09
Authors: Dan Shearer, Consul
Category: observation
Keywords: Geth, Mass Effect, consensus, distributed systems, artificial organisations, PCE, Chen et al, storytelling, information partition, multi-agent, network topology
Related: Chen et al. (arXiv:2310.20151), Mass Effect (BioWare), Artificial Organisations thesis, Judea Pearl on causality