Thursday, 13 November 2025
How multiple simple agents create emergent complexity
Note: Inspired by thinking about extensions to mostlylucid.mockllmapi, and by material for the (never-to-be-released, but I like to think about it 😜) sci-fi novel "Michael", about emergent AI.
What if consciousness is just very sophisticated if-then statements?
I know, I know. It sounds reductive to the point of insult. The idea that human thought—with all its creativity, emotion, and depth—is fundamentally just decision trees stacked on decision trees until something that looks like intelligence emerges.
But here's the thing: I can't shake it.
Because when you look at how simple rules create complex behavior in nature, you start to wonder...
This is Part 1 of a 9-part exploration into how simple rules create complex, intelligent-seeming behavior. Here's where we're going:
- **Part 1 (You are here): Simple Rules, Complex Behavior.** The foundation - how multiple simple agents following basic patterns create emergent complexity that wasn't explicitly programmed.
- **Part 2: Collective Intelligence - When Agents Communicate.** What happens when agents don't just follow patterns, but actually talk to each other? Information sharing, negotiation, and collective problem-solving transform simple multi-agent systems into something qualitatively different.
- **Part 3: Self-Optimization - Systems That Improve Themselves.** When agents can measure their own performance and adjust their behavior, you get systems that evolve without human intervention. The feedback loop becomes the engine of improvement.
- **Part 4: The Emergence - When Optimization Becomes Intelligence.** At what point does "following optimization rules" become "actual intelligence"? We explore the uncomfortable threshold where sophisticated rule-following becomes indistinguishable from thinking.
- **Part 5: Evolution - From Optimization to Guilds and Culture.** Agents that optimize collectively start forming specialized roles, developing shared knowledge, and creating emergent "culture." The system develops preferences, patterns, and collective memory.
- **Part 6: Global Consensus - Directed Evolution and Planetary Cognition.** Scale these patterns to thousands or millions of agents, add mechanisms for global coordination, and you get something that looks like planetary-scale intelligence. Not programmed. Emerged.
- **Part 7: The Real Thing! - Experimenting with Directed Synthetic Evolution.** Theory meets practice. A working implementation of evolutionary code generation using multi-agent LLMs, RAG memory, and actual self-improvement. The code is real, running locally, and genuinely evolving.
- **Part 8: Tools All The Way Down - The Self-Optimizing Toolkit.** How the tools themselves work, track usage, evolve, and get smarter over time. Every tool tracks invocations, learns from patterns, evolves implementations, caches responses, negotiates fitness trade-offs, and versions itself automatically.
- **Part 9: Self-Healing Tools - Lineage-Aware Pruning and Recovery.** When tools break themselves through evolution, the system should remember why and never repeat the mistake. This part explores self-healing through lineage tracking, branch pruning, avoidance rules, and automatic recovery from failures. Tools develop institutional memory of what not to do.
Before we talk about LLMs and AI, let's talk about the Game of Life.
Four rules. That's it. Four simple rules about cells on a grid:

1. Any live cell with fewer than two live neighbors dies (underpopulation).
2. Any live cell with two or three live neighbors survives to the next generation.
3. Any live cell with more than three live neighbors dies (overpopulation).
4. Any dead cell with exactly three live neighbors becomes a live cell (reproduction).

From these four trivial rules, you get:

- Still lifes that never change
- Oscillators that pulse forever
- Gliders that crawl across the grid
- Glider guns that fire an endless stream of gliders
- Even patterns that form a full Turing-complete computer
You get complexity from simplicity. You get behavior that wasn't explicitly programmed into those four rules.
You get emergence.
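If you want to see just how little code those four rules need, here's a minimal sketch of one generation in JavaScript (the 2D-array representation and the `step` name are my own illustrative choices):

```javascript
// One generation of Conway's Game of Life.
// grid is a 2D array of 0s (dead) and 1s (alive).
function step(grid) {
  return grid.map((row, r) =>
    row.map((cell, c) => {
      // Count the eight neighbors, treating off-grid cells as dead.
      let neighbors = 0;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          neighbors += grid[r + dr]?.[c + dc] ?? 0;
        }
      }
      // The four rules, collapsed: a live cell survives with 2 or 3
      // neighbors; a dead cell is born with exactly 3.
      if (cell === 1) return neighbors === 2 || neighbors === 3 ? 1 : 0;
      return neighbors === 3 ? 1 : 0;
    })
  );
}
```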
Now imagine instead of cells on a grid, you have language models. Simple ones. Each with limited capability.
Alone, each model is... fine. It can generate text. Answer questions. But nothing spectacular.
But what happens when you connect them? When the output of one becomes the input to another?
The simplest pattern: a chain.
```
Fast Model → Quality Model → Validator Model
```
Each model does one thing. The chain does something none of them could do alone: produce high-quality output quickly and reliably.
The emergence: The chain has properties (speed + quality + reliability) that no individual model possesses.
Different agents work on different aspects simultaneously:
```
       ┌─ Specs Generator
Input ─┼─ Pricing Calculator → Merge → Complete Product
       └─ Inventory Checker
```
Each specialist is simple. But together they create comprehensive coverage that would take one generalist model much longer to produce—and with lower quality in each domain.
The emergence: Expertise through division of labor. No single model is an expert, but the collective acts like one.
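As a sketch of what that fan-out looks like in code (the three agent functions and `describeProduct` are hypothetical names, standing in for whichever simple models you actually wire up):

```javascript
// Hypothetical sketch: three specialist agents run in parallel,
// then a trivial merge combines their outputs into one record.
async function describeProduct(input) {
  const [specs, pricing, inventory] = await Promise.all([
    specsAgent(input),     // knows only product specifications
    pricingAgent(input),   // knows only pricing
    inventoryAgent(input), // knows only stock levels
  ]);
  // The merge step is dumb; the comprehensive coverage is what emerges.
  return { ...specs, ...pricing, ...inventory };
}
```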
An agent generates, another validates, and if validation fails, a third corrects:
```
Generate → Validate → [Pass? → Output : Correct → Validate again]
```
This creates a self-correcting system. No single model is particularly good at avoiding errors, but the pattern catches and fixes them.
The emergence: Reliability from unreliable components.
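Here's a minimal sketch of that loop, assuming hypothetical `generate`, `validate`, and `correct` agents (the retry cap is my own addition, to keep the loop from spinning forever):

```javascript
// Generate → validate → correct, repeated until validation passes.
async function generateReliably(input, maxAttempts = 3) {
  let output = await generate(input);
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const verdict = await validate(output);
    if (verdict.pass) return output;          // caught nothing: ship it
    output = await correct(output, verdict);  // fix, then re-validate
  }
  throw new Error(`Still failing validation after ${maxAttempts} attempts`);
}
```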
Analyze the complexity of a request, then route to the appropriate agent:
```
Simple request  (score 1-3)  → Fast model
Medium request  (score 4-7)  → Quality model
Complex request (score 8-10) → Premium model
```
The emergence: Cost-efficiency. The system "learns" (through programmed rules) when to spend resources and when to save them.
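A sketch of that router, assuming a hypothetical `scoreComplexity` agent that returns the 1-10 score described above:

```javascript
// Route each request to the cheapest model that can handle it.
async function routeRequest(input) {
  const score = await scoreComplexity(input); // 1 (trivial) to 10 (hard)
  if (score <= 3) return fastModel(input);    // simple: cheap and quick
  if (score <= 7) return qualityModel(input); // medium: balanced
  return premiumModel(input);                 // complex: spend the tokens
}
```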
None of these models are particularly smart. Each is just following its programming—answer this prompt, check this output, route based on this score.
But the combination exhibits properties that look an awful lot like:

- Judgment: knowing when to spend resources and when to save them
- Expertise: deep domain coverage through division of labor
- Self-correction: catching and fixing its own mistakes
- Reliability: dependable output from undependable parts
The same way four rules about cell neighbors create gliders and guns, four patterns of model interaction create behavior that looks sophisticated.
If these simple patterns create emergent complexity...
If systems that are just "following rules" start to exhibit properties that look like judgment and expertise...
Where's the line?
At what point does "sophisticated rule-following" become "actual intelligence"?
Let me ground this in reality before we get too philosophical.
You can build these patterns today. The code is simple:
```javascript
// Pattern 1: Sequential refinement
async function refineSequentially(input) {
  let output = await fastModel(input);  // Quick draft
  output = await qualityModel(output);  // Add depth
  output = await validator(output);     // Check quality
  return output;
}
```
Three function calls. That's it. But the behavior that emerges—rapid high-quality generation—isn't in any single function.
It's in the pattern of interaction.
These patterns are the foundation:

1. Chains: sequential refinement, each model improving the last one's output
2. Parallel specialists: division of labor plus a merge step
3. Validation loops: generate, validate, correct
4. Smart routing: match the model to the complexity of the request
Simple patterns. No individual model is particularly impressive.
But here's what keeps me up at night: these same patterns—specialization, parallel processing, validation, smart routing—are how human organizations work.
A company has specialists. Teams work in parallel. Quality control validates. Managers route tasks to appropriate skill levels.
Are companies intelligent? Or are they just sophisticated rule-following systems that exhibit emergent complexity?
Maybe it's the same thing.
These patterns create systems that:

- Produce output faster and better than any single component could
- Catch and correct their own errors
- Spend resources in proportion to the difficulty of the task
- Act like experts without containing a single expert
From the outside, this looks like intelligence. Sophisticated behavior. Smart systems.
From the inside, it's just simple rules interacting.
The question: Is there a fundamental difference between these two views? Or is "intelligence" just what we call sufficiently complex rule-following?
So far, we have multiple agents following simple patterns. The behavior is sophisticated, but the mechanism is deterministic. We programmed these patterns explicitly.
But what happens when we add one more ingredient?
What happens when these agents don't just work in sequence or parallel... but actually communicate?
When they share context. Negotiate. Form temporary coalitions to solve problems.
When information flows not in predetermined patterns, but dynamically based on the problem at hand?
That's when things get really interesting.
Because communication creates a different kind of emergence. Not just sophisticated behavior from simple rules, but collective intelligence that exists in the network itself.
No single agent understands the solution. But the conversation finds it anyway.
Continue to Part 2: Collective Intelligence - When Agents Communicate
Where we explore what happens when simple agents start talking to each other, and why the collective can be smarter than any individual.
© 2025 Scott Galloway — Unlicense — All content and source code on this site is free to use, copy, modify, and sell.