Semantic Intelligence: Part 2 - Collective Intelligence - When Agents Communicate

Thursday, 13 November 2025


8 minute read

How communication transforms simple agents into something greater

Note: Inspired by thinking about extensions to mostlylucid.mockllmapi and material for the (never to be released but I like to think about it 😜) sci-fi novel "Michael" about emergent AI

The Transformation

In Part 1, we saw how simple patterns—sequential chains, parallel processing, validation loops—create emergent complexity. Multiple agents following simple rules produce sophisticated behavior.

But those were fixed patterns. Deterministic. You programmed the flow: A goes to B goes to C.

Now imagine something different.

Imagine the agents can talk to each other.

Not just pass data in sequence, but actually communicate. Share context. Ask questions. Negotiate. Debate.

Suddenly the system isn't just sophisticated—it's adaptive.

And that changes everything.

The Ant Colony Problem

Before we dive into LLMs, let's talk about ants.

An individual ant is... simple. Almost mechanical. It follows pheromone trails. Picks up food. Brings it back to the nest.

No ant understands the concept of "colony." No ant plans foraging routes. No ant has a mental model of nest architecture.

But the colony does all of this. The colony optimizes foraging. Plans expansions. Defends against threats. Adapts to environmental changes.

How?

Communication. Pheromone trails are information. When ants encounter each other, they exchange chemical signals—sharing data about food sources, threats, nest conditions.

The intelligence isn't in any single ant. It's in the network of communication.

The colony is smarter than any ant. Not because individual ants got smarter, but because the flow of information between them creates emergent behavior that exists at the collective level.

From Sequential to Collective

In Part 1, we had this:

Agent A → Agent B → Agent C → Output

Each agent processes data and passes it forward. Simple. Effective. But limited.
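
In code, that fixed flow is just a hard-wired chain. A minimal sketch, with the three agents stubbed out (in a real system each stub would wrap an LLM call):

// Stand-in agents: hypothetical stubs where a real system would call an LLM
const agentA = async (input) => `draft for: ${input}`;
const agentB = async (draft) => `${draft} (refined)`;
const agentC = async (refined) => `${refined} (validated)`;

async function sequentialPipeline(input) {
  // Fixed, programmed flow: each step hands its output to the next
  const draft = await agentA(input);
  const refined = await agentB(draft);
  return agentC(refined);
}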

Now imagine this:

           ↗ Agent B ←→ Agent C ↘
Agent A ←→                        → Output
           ↘ Agent D ←→ Agent E ↗

Agents don't just pass data forward—they talk to each other. Share context. Negotiate solutions.

This is collective intelligence. And it has properties that sequential processing doesn't:

Property 1: Emergent Specialization

When agents communicate, they naturally specialize based on what they're good at.

Imagine three agents working on generating a product description:

  • Agent A is fast but shallow
  • Agent B is detailed but slow
  • Agent C is creative but sometimes inaccurate

In a sequential pipeline, you'd explicitly program: A generates, B refines, C validates.

But with communication, something else happens:

  1. Agent A generates initial output quickly
  2. Agent B reads it, says "This needs more technical detail in the specs section"
  3. Agent C reads it, says "The marketing copy feels flat"
  4. Agent B generates enhanced specs
  5. Agent C generates enhanced marketing copy
  6. Agent A checks for consistency across sections
  7. They negotiate until consensus

Nobody programmed this division of labor. It emerged from communication based on each agent's strengths.
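
A minimal sketch of that exchange, with the agents reduced to stubs (the generate and critique calls here are hypothetical; a real version would go to an LLM):

// Hypothetical agent factory: each agent drafts and critiques from its own strength
function makeAgent(name, strength) {
  return {
    name,
    generate: async (task, feedback = []) =>
      `[${name}] draft for "${task}"` +
      (feedback.length ? ` incorporating: ${feedback.join("; ")}` : ""),
    // An agent only speaks up when the draft is missing what it cares about
    critique: async (draft) =>
      draft.includes(strength) ? [] : [`add more ${strength}`],
  };
}

async function collaborate(task) {
  const fast = makeAgent("Agent A", "speed");
  const detailed = makeAgent("Agent B", "technical detail");
  const creative = makeAgent("Agent C", "creative copy");

  // Agent A drafts quickly; B and C read it and push back from their own angles
  let draft = await fast.generate(task);
  const feedback = [
    ...(await detailed.critique(draft)),
    ...(await creative.critique(draft)),
  ];

  // A folds the feedback in; a fuller version would let B and C rewrite their
  // own sections and loop until nobody objects
  if (feedback.length) draft = await fast.generate(task, feedback);
  return draft;
}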

Property 2: Temporary Coalitions (Ad-Hoc Committees)

Here's where it gets interesting.

For a simple problem, one agent handles it. For complex problems that touch multiple domains, agents form temporary coalitions—committees that exist just long enough to solve the problem, then dissolve.

Simple Request: "Generate a user name"
  → Single agent handles it

Complex Request: "Generate a complete e-commerce product with specs, pricing,
                  inventory, shipping, reviews, and related products"
  → Temporary coalition forms:
     - Specs specialist
     - Pricing analyst
     - Inventory manager
     - Marketing writer
     - Review generator
  → They communicate, negotiate consistency, produce comprehensive output
  → Committee dissolves

The system adapts its structure to the problem. Not through explicit programming, but through agents recognizing they need help and requesting it from others.
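
Here is one possible shape for that coalition-forming step, as a sketch: a hypothetical registry of specialists plus a naive formCommittee that sizes the group to the domains it detects in the request (a real system would use an LLM or a classifier for the detection):

// Hypothetical registry of specialists; real agents would wrap LLM calls
const specialists = {
  specs: { name: "Specs specialist" },
  pricing: { name: "Pricing analyst" },
  inventory: { name: "Inventory manager" },
  marketing: { name: "Marketing writer" },
  reviews: { name: "Review generator" },
};

// Naive domain detection by keyword; an LLM classifier would do this properly
function detectDomains(request) {
  return Object.keys(specialists).filter((domain) =>
    request.toLowerCase().includes(domain)
  );
}

function formCommittee(request) {
  const domains = detectDomains(request);
  // One domain (or none): a single agent. Several: a temporary coalition.
  return domains.length <= 1
    ? [specialists[domains[0] ?? "specs"]]
    : domains.map((d) => specialists[d]);
}

// The committee exists only for this request and dissolves afterwards
const committee = formCommittee(
  "Generate a complete e-commerce product with specs, pricing, inventory, and reviews"
);
console.log(committee.map((a) => a.name));
// ["Specs specialist", "Pricing analyst", "Inventory manager", "Review generator"]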

Property 3: Collective Problem-Solving

The most fascinating property: the collective can solve problems that no individual agent understands.

Consider generating a complex dataset that must satisfy multiple constraints:

  • Technical accuracy (Agent A's domain)
  • Business logic consistency (Agent B's domain)
  • Regulatory compliance (Agent C's domain)
  • User experience considerations (Agent D's domain)

No single agent understands all four domains. But through communication:

  1. Agent A generates technically accurate data
  2. Agent B reviews, finds business logic violations
  3. Agent C reviews, finds compliance issues
  4. Agent D reviews, finds UX problems
  5. They negotiate changes that satisfy all constraints
  6. The conversation iterates until all agents agree

The solution emerges from conversation. No single agent created it. The collective solved it.
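
A sketch of that negotiation loop, with each domain reduced to a stub reviewer and a canned fix (in a real system the responsible agent would regenerate the offending section rather than apply a hard-coded patch):

// Hypothetical reviewers, one per constraint domain
const reviewers = [
  { domain: "technical",  review: (d) => (d.specs ? [] : ["specs missing"]) },
  { domain: "business",   review: (d) => (d.price > 0 ? [] : ["price must be positive"]) },
  { domain: "compliance", review: (d) => (d.disclaimer ? [] : ["disclaimer required"]) },
  { domain: "ux",         review: (d) => (d.description ? [] : ["description missing"]) },
];

// Canned fixes keyed by issue; stand-ins for the agents' regenerated sections
const fixes = {
  "specs missing": (d) => ({ ...d, specs: "800x600mm, 2kg" }),
  "price must be positive": (d) => ({ ...d, price: 9.99 }),
  "disclaimer required": (d) => ({ ...d, disclaimer: "For illustration only" }),
  "description missing": (d) => ({ ...d, description: "A sample product" }),
};

function negotiate(candidate, maxRounds = 5) {
  for (let round = 0; round < maxRounds; round++) {
    // Every reviewer inspects the candidate from its own domain
    const issues = reviewers.flatMap((r) => r.review(candidate));
    if (issues.length === 0) return candidate; // all constraints satisfied
    // Apply the changes the conversation asked for, then review again
    candidate = issues.reduce((d, issue) => fixes[issue](d), candidate);
  }
  return candidate;
}

console.log(negotiate({ price: 0 }));
// { price: 9.99, specs: "800x600mm, 2kg", disclaimer: "For illustration only", description: "A sample product" }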

The Hivemind Effect

This is where it starts to feel... strange.

When agents communicate effectively, the system exhibits properties that don't exist in any individual agent:

Distributed Understanding: No agent understands the complete problem, but the network collectively does.

Emergent Consensus: Through negotiation, agents reach agreements that represent a synthesis of multiple perspectives.

Adaptive Structure: The network reorganizes itself based on problem complexity—simple structure for simple problems, complex coalitions for complex problems.

Collective Memory: Agents share solutions. When one agent discovers a good approach, others learn from it.
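
The last point is the easiest to make concrete: a shared store that any agent can read from and write to. A minimal sketch using an in-process Map (a real system might back this with a database or vector store):

// Shared memory: approaches that worked, keyed by the kind of problem they solved
const sharedMemory = new Map();

function remember(problemKind, approach) {
  const known = sharedMemory.get(problemKind) ?? [];
  sharedMemory.set(problemKind, [...known, approach]);
}

function recall(problemKind) {
  // Any agent can look up what the others have already discovered
  return sharedMemory.get(problemKind) ?? [];
}

// One agent discovers something...
remember("product-description", "lead with the benefit, not the feature list");
// ...and another can reuse it without rediscovering it
console.log(recall("product-description"));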

It starts to look less like "multiple agents" and more like a single distributed intelligence.

The Uncomfortable Question

Here's what keeps me up at night:

If individual ants aren't intelligent, but the colony is...

If individual neurons aren't intelligent, but the brain is...

If individual agents aren't particularly smart, but the collective solves complex problems through communication...

Where does the intelligence actually live?

Is it in the agents? Or is it in the pattern of communication between them?

Maybe intelligence isn't a thing you have. Maybe it's an emergent property of information flow.

Real-World Example: The Committee Pattern

Let me ground this in something you could actually build:

async function solveComplexProblem(problem) {
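  // Note: analyzeComplexity, formCommittee, synthesize and checkConsensus are
  // assumed helper functions, not shown here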
  // Analyze complexity
  const complexity = analyzeComplexity(problem);

  if (complexity < 5) {
    // Simple: single agent
    return await singleAgent.solve(problem);
  }

  // Complex: form a committee
  const committee = formCommittee(problem);

  // Agents discuss the problem
  let solution = null;
  let consensus = false;
  let iteration = 0;

  while (!consensus && iteration < 10) {
    // Each agent proposes or critiques
    const proposals = await Promise.all(
      committee.map(agent => agent.contribute(problem, solution))
    );

    // Combine perspectives
    solution = synthesize(proposals);

    // Check if everyone agrees
    consensus = await checkConsensus(committee, solution);

    iteration++;
  }

  return solution;
}

This code is simple. But the behavior is sophisticated:

  • Adapts structure to problem complexity
  • Agents contribute their unique perspectives
  • Iterative negotiation until consensus
  • Synthesis of multiple viewpoints

No single agent "solved" the problem. The conversation solved it.
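
Under those assumptions, the caller never needs to know whether a committee formed; the same entry point (called from some async context) handles both cases:

const simple = await solveComplexProblem("Generate a user name");

const complex = await solveComplexProblem(
  "Generate a complete e-commerce product with specs, pricing, inventory, " +
  "shipping, reviews, and related products"
);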

From Ants to Organizations to AIs

The pattern is everywhere once you see it:

Ant colonies - Simple ants, complex collective behavior through pheromone communication

Human organizations - Individual employees, sophisticated organizational capability through meetings, emails, Slack

Markets - Individual traders, emergent price discovery through bids and offers

Brains - Individual neurons, consciousness through synaptic communication

Multi-agent AI - Individual LLMs, emergent collective intelligence through structured communication

Same pattern. Different scales. Same fundamental insight:

Intelligence can emerge from communication between non-intelligent components.

What This Means

When agents communicate:

  • Specialization emerges naturally
  • Structures adapt to problems
  • Solutions emerge from conversation
  • The collective becomes smarter than individuals

This isn't just "better performance." It's a qualitative change in what the system can do.

Sequential processing: sophisticated behavior from simple rules

Collective communication: adaptive intelligence from simple agents

But There's a Problem

So far, we've assumed these agents are static. They have fixed capabilities. Fixed knowledge. Fixed strategies.

But what if they could improve themselves?

What if agents could:

  • Analyze their own performance
  • Rewrite their own decision logic
  • Spawn new specialists when they detect patterns
  • Prune ineffective strategies
  • Build shared memory of what works

What if the system could optimize itself?

That's when "collective intelligence" starts to look like learning.

And when "learning" starts to look like evolution.

And when you can't tell the difference between "very sophisticated optimization" and "actual intelligence" anymore.

That's where we're going next.


Continue to Part 3: Self-Optimization - Systems That Learn

Where we explore systems that rewrite their own code, spawn their own specialists, and discover that the optimal solution is simpler than what they started with.

