Thursday, 13 November 2025
How communication transforms simple agents into something greater
Note: Inspired by thinking about extensions to mostlylucid.mockllmapi, and by material for the (never-to-be-released, but I like to think about it 😜) sci-fi novel "Michael" about emergent AI.
In Part 1, we saw how simple patterns—sequential chains, parallel processing, validation loops—create emergent complexity. Multiple agents following simple rules produce sophisticated behavior.
But those were fixed patterns. Deterministic. You programmed the flow: A goes to B goes to C.
Now imagine something different.
Imagine the agents can talk to each other.
Not just pass data in sequence, but actually communicate. Share context. Ask questions. Negotiate. Debate.
Suddenly the system isn't just sophisticated—it's adaptive.
And that changes everything.
Before we dive into LLMs, let's talk about ants.
An individual ant is... simple. Almost mechanical. It follows pheromone trails. Picks up food. Brings it back to the nest.
No ant understands the concept of "colony." No ant plans foraging routes. No ant has a mental model of nest architecture.
But the colony does all of this. The colony optimizes foraging. Plans expansions. Defends against threats. Adapts to environmental changes.
How?
Communication. Pheromone trails are information. When ants encounter each other, they exchange chemical signals—sharing data about food sources, threats, nest conditions.
The intelligence isn't in any single ant. It's in the network of communication.
The colony is smarter than any ant. Not because individual ants got smarter, but because information flows between them created emergent behavior that exists at the collective level.
In Part 1, we had this:
```
Agent A → Agent B → Agent C → Output
```
Each agent processes data and passes it forward. Simple. Effective. But limited.
Now imagine this:
```
           ↗ Agent B ←→ Agent C ↘
Agent A ←→                        → Output
           ↘ Agent D ←→ Agent E ↗
```
Agents don't just pass data forward—they talk to each other. Share context. Negotiate solutions.
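What does "talking" look like in practice? Here's a minimal sketch in plain JavaScript, assuming nothing beyond a shared message bus that agents publish to and subscribe to; the topic names and agent roles are invented for illustration.

```javascript
// A minimal message bus: agents publish messages and subscribe to topics.
// The structure is illustrative, not any particular framework's API.
class MessageBus {
  constructor() {
    this.subscribers = new Map(); // topic -> array of handler functions
  }

  subscribe(topic, handler) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(handler);
  }

  publish(topic, message) {
    for (const handler of this.subscribers.get(topic) ?? []) {
      handler(message);
    }
  }
}

const bus = new MessageBus();

// The "pricing" agent answers questions other agents ask, instead of
// waiting for its turn in a fixed pipeline.
bus.subscribe('pricing.question', (question) => {
  bus.publish('pricing.answer', { question, answer: 'Mid-range: £40-£60' });
});

// The "specs" agent listens for the answer it needs.
bus.subscribe('pricing.answer', (msg) => {
  console.log('Specs agent received:', msg.answer);
});

// The "specs" agent asks rather than guessing.
bus.publish('pricing.question', 'What price band fits a 2kg steel kettle?');
```

Swap the console.log for LLM calls and you have the skeleton of agents that share context instead of just handing output downstream.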
This is collective intelligence. And it has properties that sequential processing doesn't:
When agents communicate, they naturally specialize based on what they're good at.
Imagine three agents working together on a product description.
In a sequential pipeline, you'd explicitly program: A generates, B refines, C validates.
But with communication, something else happens: the agents work out between themselves who drafts, who refines, and who checks, based on who is actually best at each part.
Nobody programmed this division of labor. It emerged from communication based on each agent's strengths.
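To make that concrete, here's a hedged sketch of how a division of labour can fall out of communication alone: each agent reports how confident it is on a given task, and the task goes to whoever claims it most strongly. The agents, confidence scores, and task names are all invented for illustration.

```javascript
// Hypothetical agents that report their own confidence for a given task.
const agents = [
  { name: 'agent-a', confidence: (task) => (task.includes('spec') ? 0.9 : 0.3) },
  { name: 'agent-b', confidence: (task) => (task.includes('copy') ? 0.8 : 0.4) },
  { name: 'agent-c', confidence: (task) => (task.includes('check') ? 0.95 : 0.2) },
];

// Nobody assigns roles up front: each task goes to whoever claims it most strongly.
function assign(tasks) {
  return tasks.map((task) => {
    const winner = agents.reduce((best, agent) =>
      agent.confidence(task) > best.confidence(task) ? agent : best
    );
    return { task, assignedTo: winner.name };
  });
}

console.log(assign(['write specs', 'write marketing copy', 'check consistency']));
// The division of labour emerges from the agents' self-reported strengths.
```

No orchestrator assigned roles; the assignment is just the outcome of the agents' own reports.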
Here's where it gets interesting.
For simple problems, one agent handles it. For complex problems that touch multiple domains, agents form temporary coalitions—committees that exist just long enough to solve the problem, then dissolve.
```
Simple Request: "Generate a user name"
  → Single agent handles it

Complex Request: "Generate a complete e-commerce product with specs, pricing,
                  inventory, shipping, reviews, and related products"
  → Temporary coalition forms:
      - Specs specialist
      - Pricing analyst
      - Inventory manager
      - Marketing writer
      - Review generator
  → They communicate, negotiate consistency, produce comprehensive output
  → Committee dissolves
```
The system adapts its structure to the problem. Not through explicit programming, but through agents recognizing they need help and requesting it from others.
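Here's one hedged way to sketch that adaptation. It assumes you can cheaply detect which domains a request touches; the keyword matching below is a crude stand-in for a real classifier, and the specialist names are the ones from the example above.

```javascript
// A registry of available specialists (names from the example above).
const specialists = {
  specs:     { name: 'Specs specialist' },
  pricing:   { name: 'Pricing analyst' },
  inventory: { name: 'Inventory manager' },
  marketing: { name: 'Marketing writer' },
  reviews:   { name: 'Review generator' },
};

const generalAgent = { name: 'General-purpose agent' };

// Crude domain detection: keyword matching stands in for a real classifier.
function detectDomains(request) {
  const text = request.toLowerCase();
  return Object.keys(specialists).filter((domain) => text.includes(domain.slice(0, 4)));
}

// Simple requests get one agent; complex requests get a temporary coalition
// that exists only for the duration of this call.
function formCoalition(request) {
  const domains = detectDomains(request);
  return domains.length <= 1 ? [generalAgent] : domains.map((d) => specialists[d]);
}

console.log(formCoalition('Generate a user name'));
// [ { name: 'General-purpose agent' } ]

console.log(formCoalition('E-commerce product with specs, pricing, inventory and reviews'));
// Four specialists, dissolved as soon as the result is returned.
```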
The most fascinating property: the collective can solve problems that no individual agent understands.
Consider generating a complex dataset that must satisfy constraints from four different domains at once (think of the e-commerce product above: pricing, inventory, shipping, reviews).
No single agent understands all four domains. But through communication, each specialist checks the slice it understands, flags whatever breaks its own constraints, and asks the others to adjust.
The solution emerges from conversation. No single agent created it. The collective solved it.
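A hedged sketch of the mechanics, assuming each specialist can only validate the slice of a shared draft it understands (the constraint functions and the revision step are toy stand-ins):

```javascript
// Each specialist validates only the domain it understands.
// The constraints are toy stand-ins for real domain knowledge.
const validators = [
  { name: 'pricing',   check: (d) => d.price > 0 || 'price must be positive' },
  { name: 'inventory', check: (d) => Number.isInteger(d.stock) || 'stock must be a whole number' },
  { name: 'shipping',  check: (d) => d.weightKg < 30 || 'too heavy for standard shipping' },
];

function negotiate(draft, revise, maxRounds = 5) {
  for (let round = 0; round < maxRounds; round++) {
    // Every specialist reviews the shared draft and raises objections.
    const objections = validators
      .map((v) => ({ from: v.name, issue: v.check(draft) }))
      .filter((o) => o.issue !== true);

    if (objections.length === 0) return draft; // consensus: nobody objects

    // The draft is revised in response to the objections and re-circulated.
    draft = revise(draft, objections);
  }
  return draft;
}

const result = negotiate(
  { price: -5, stock: 3.5, weightKg: 12 },
  (draft, objections) => ({
    ...draft,
    price: Math.max(draft.price, 1),  // crude fixes, for illustration only
    stock: Math.round(draft.stock),
  })
);
console.log(result);
// Satisfies all three domains, without any one validator knowing them all.
```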
This is where it starts to feel... strange.
When agents communicate effectively, the system exhibits properties that don't exist in any individual agent:
- **Distributed Understanding**: No agent understands the complete problem, but the network collectively does.
- **Emergent Consensus**: Through negotiation, agents reach agreements that represent a synthesis of multiple perspectives.
- **Adaptive Structure**: The network reorganizes itself based on problem complexity—simple structure for simple problems, complex coalitions for complex problems.
- **Collective Memory**: Agents share solutions. When one agent discovers a good approach, others learn from it.
It starts to look less like "multiple agents" and more like a single distributed intelligence.
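Of those properties, collective memory is the easiest to sketch: a shared store that any agent can write a working approach into, and any other agent can read from before starting. The in-memory Map and key scheme below are assumptions, not a specific product; in practice this might be a database or a vector store.

```javascript
// A shared memory of solutions that worked, keyed by problem type.
const sharedMemory = new Map();

function remember(problemType, approach) {
  const known = sharedMemory.get(problemType) ?? [];
  sharedMemory.set(problemType, [...known, approach]);
}

function recall(problemType) {
  return sharedMemory.get(problemType) ?? [];
}

// One agent discovers a good approach...
remember('product-description', 'lead with the one feature reviewers mention most');

// ...and a different agent benefits from it later without rediscovering it.
console.log(recall('product-description'));
```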
Here's what keeps me up at night:
If individual ants aren't intelligent, but the colony is...
If individual neurons aren't intelligent, but the brain is...
If individual agents aren't particularly smart, but the collective solves complex problems through communication...
Where does the intelligence actually live?
Is it in the agents? Or is it in the pattern of communication between them?
Maybe intelligence isn't a thing you have. Maybe it's an emergent property of information flow.
Let me ground this in something you could actually build:
```javascript
async function solveComplexProblem(problem) {
  // Analyze complexity
  const complexity = analyzeComplexity(problem);

  if (complexity < 5) {
    // Simple: single agent
    return await singleAgent.solve(problem);
  }

  // Complex: form a committee
  const committee = formCommittee(problem);

  // Agents discuss the problem
  let solution = null;
  let consensus = false;
  let iteration = 0;

  while (!consensus && iteration < 10) {
    // Each agent proposes or critiques
    const proposals = await Promise.all(
      committee.map(agent => agent.contribute(problem, solution))
    );

    // Combine perspectives
    solution = synthesize(proposals);

    // Check if everyone agrees
    consensus = await checkConsensus(committee, solution);

    iteration++;
  }

  return solution;
}
```
This code is simple, but the behavior is sophisticated: the shape of the system changes with the problem, and the answer is negotiated between agents rather than computed in a single pass.
No single agent "solved" the problem. The conversation solved it.
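The helpers in that sketch (analyzeComplexity, formCommittee, synthesize, checkConsensus, singleAgent) are deliberately left undefined; they're where the actual LLM calls would go. Purely to make the sketch runnable, here is one hedged set of toy stubs showing the shape of the contract each one has to satisfy:

```javascript
// Toy stubs for the helpers used above, just to make the sketch runnable.
// Real versions would call an LLM; these only show the shape of the contract.
function analyzeComplexity(problem) {
  return problem.split(/\s+/).length; // crude proxy: longer prompt = harder
}

function formCommittee(problem) {
  // Each committee member contributes its own perspective on the current draft.
  return ['specs', 'pricing', 'reviews'].map((role) => ({
    contribute: async (prob, draft) => `${role}: view on "${prob}"`,
  }));
}

function synthesize(proposals) {
  return proposals.join('\n'); // real version: merge and reconcile the proposals
}

async function checkConsensus(committee, solution) {
  return solution.length > 0; // real version: ask each agent to approve the draft
}

const singleAgent = { solve: async (problem) => `answer to "${problem}"` };

// Usage:
// solveComplexProblem('Generate a complete e-commerce product with specs, pricing and reviews')
//   .then(console.log);
```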
The pattern is everywhere once you see it:
- **Ant colonies** - Simple ants, complex collective behavior through pheromone communication
- **Human organizations** - Individual employees, sophisticated organizational capability through meetings, emails, Slack
- **Markets** - Individual traders, emergent price discovery through bids and offers
- **Brains** - Individual neurons, consciousness through synaptic communication
- **Multi-agent AI** - Individual LLMs, emergent collective intelligence through structured communication
Same pattern. Different scales. Same fundamental insight:
Intelligence can emerge from communication between non-intelligent components.
When agents communicate, specialization emerges, the system's structure adapts to the problem, and the collective solves problems no single agent could.
This isn't just "better performance." It's a qualitative change in what the system can do.
Sequential processing: sophisticated behavior from simple rules
Collective communication: adaptive intelligence from simple agents
So far, we've assumed these agents are static. They have fixed capabilities. Fixed knowledge. Fixed strategies.
But what if they could improve themselves?
What if agents could learn from what worked, rewrite their own strategies, and spawn their own specialists?
What if the system could optimize itself?
That's when "collective intelligence" starts to look like learning.
And when "learning" starts to look like evolution.
And when you can't tell the difference between "very sophisticated optimization" and "actual intelligence" anymore.
That's where we're going next.
Continue to Part 3: Self-Optimization - Systems That Learn
Where we explore systems that rewrite their own code, spawn their own specialists, and discover that the optimal solution is simpler than they started with.
Series Navigation: