Semantic Intelligence: Part 1 - Simple Rules, Complex Behavior, Emergent Intelligence?

Thursday, 13 November 2025

9 minute read

How multiple simple agents create emergent complexity

Note: Inspired by thinking about extensions to mostlylucid.mockllmapi and material for the (never to be released but I like to think about it 😜) sci-fi novel "Michael" about emergent AI

The Absurd Question

What if consciousness is just very sophisticated if-then statements?

I know, I know. It sounds reductive to the point of insult. The idea that human thought—with all its creativity, emotion, and depth—is fundamentally just decision trees stacked on decision trees until something that looks like intelligence emerges.

But here's the thing: I can't shake it.

Because when you look at how simple rules create complex behavior in nature, you start to wonder...

What This Series Explores

This is Part 1 of a 9-part exploration into how simple rules create complex, intelligent-seeming behavior. Here's where we're going:

Part 1 (You are here): Simple Rules, Complex Behavior. The foundation - how multiple simple agents following basic patterns create emergent complexity that wasn't explicitly programmed.

Part 2: Collective Intelligence - When Agents Communicate. What happens when agents don't just follow patterns, but actually talk to each other? Information sharing, negotiation, and collective problem-solving transform simple multi-agent systems into something qualitatively different.

Part 3: Self-Optimization - Systems That Improve Themselves. When agents can measure their own performance and adjust their behavior, you get systems that evolve without human intervention. The feedback loop becomes the engine of improvement.

Part 4: The Emergence - When Optimization Becomes Intelligence. At what point does "following optimization rules" become "actual intelligence"? We explore the uncomfortable threshold where sophisticated rule-following becomes indistinguishable from thinking.

Part 5: Evolution - From Optimization to Guilds and Culture. Agents that optimize collectively start forming specialized roles, developing shared knowledge, and creating emergent "culture." The system develops preferences, patterns, and collective memory.

Part 6: Global Consensus - Directed Evolution and Planetary Cognition. Scale these patterns to thousands or millions of agents, add mechanisms for global coordination, and you get something that looks like planetary-scale intelligence. Not programmed. Emerged.

Part 7: The Real Thing! - Experimenting with Directed Synthetic Evolution. Theory meets practice. A working implementation of evolutionary code generation using multi-agent LLMs, RAG memory, and actual self-improvement. The code is real, running locally, and genuinely evolving.

Part 8: Tools All The Way Down - The Self-Optimizing Toolkit. How the tools themselves work, track usage, evolve, and get smarter over time. Every tool tracks invocations, learns from patterns, evolves implementations, caches responses, negotiates fitness trade-offs, and versions itself automatically.

Part 9: Self-Healing Tools - Lineage-Aware Pruning and Recovery. When tools break themselves through evolution, the system should remember why and never repeat the mistake. This part explores self-healing through lineage tracking, branch pruning, avoidance rules, and automatic recovery from failures. Tools develop institutional memory of what not to do.

The Foundation: Conway's Game of Life

Before we talk about LLMs and AI, let's talk about the Game of Life.

Four rules. That's it. Four simple rules about cells on a grid:

  1. A live cell with 2-3 neighbors survives
  2. A live cell with <2 neighbors dies (loneliness)
  3. A live cell with >3 neighbors dies (overcrowding)
  4. A dead cell with exactly 3 neighbors becomes alive
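
Here's a minimal, runnable JavaScript sketch of those four rules, assuming nothing beyond the rules themselves:

// One Game of Life generation over a grid of 1s (alive) and 0s (dead)
function step(grid) {
  return grid.map((row, r) =>
    row.map((cell, c) => {
      // Count the eight neighbors, treating off-grid cells as dead
      let neighbors = 0;
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr !== 0 || dc !== 0) neighbors += grid[r + dr]?.[c + dc] ?? 0;
        }
      }
      if (cell === 1) return neighbors === 2 || neighbors === 3 ? 1 : 0; // rules 1-3
      return neighbors === 3 ? 1 : 0;                                   // rule 4
    })
  );
}

// A blinker: three live cells that oscillate forever between a row and a column
let grid = [[0, 0, 0], [1, 1, 1], [0, 0, 0]];
grid = step(grid); // now a vertical column
grid = step(grid); // back to a horizontal row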

From these four trivial rules, you get:

  • Stable structures (blocks, beehives)
  • Oscillators (blinkers, pulsars)
  • Gliders that move across the grid
  • Guns that shoot gliders
  • Patterns that grow forever

You get complexity from simplicity. You get behavior that wasn't explicitly programmed into those four rules.

You get emergence.

The Pattern: Multiple Simple Agents

Now imagine instead of cells on a grid, you have language models. Simple ones. Each with limited capability.

Alone, each model is... fine. It can generate text. Answer questions. But nothing spectacular.

But what happens when you connect them? When the output of one becomes the input to another?

Pattern 1: Sequential Refinement

The simplest pattern: a chain.

Fast Model → Quality Model → Validator Model
  1. Fast Model generates basic structure (cheap, quick, good enough)
  2. Quality Model adds detail and nuance (slower, better at depth)
  3. Validator checks for errors and inconsistencies (expensive, catches what others miss)

Each model does one thing. The chain does something none of them could do alone: produce high-quality output quickly and reliably.

The emergence: The chain has properties (speed + quality + reliability) that no individual model possesses.

Pattern 2: Parallel Specialization

Different agents work on different aspects simultaneously:

       ┌─ Specs Generator
Input ─┼─ Pricing Calculator  → Merge → Complete Product
       └─ Inventory Checker

Each specialist is simple. But together they create comprehensive coverage that would take one generalist model much longer to produce—and with lower quality in each domain.
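
A minimal sketch of that fan-out-and-merge shape; the three specialist functions here are hypothetical stubs standing in for real model calls:

// Hypothetical specialists, stubbed so the sketch runs as-is
const specsGenerator = async (name) => ({ specs: `dimensions and materials for ${name}` });
const pricingCalculator = async (name) => ({ price: 49.99 });
const inventoryChecker = async (name) => ({ inStock: true });

// Fan out to all three specialists at once, then merge their partial results
async function buildProduct(name) {
  const [specs, pricing, inventory] = await Promise.all([
    specsGenerator(name),
    pricingCalculator(name),
    inventoryChecker(name),
  ]);
  return { name, ...specs, ...pricing, ...inventory };
}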

The emergence: Expertise through division of labor. No single model is an expert, but the collective acts like one.

Pattern 3: Validation Loops

An agent generates, another validates, and if validation fails, a third corrects:

Generate → Validate → [Pass? → Output : Correct → Validate again]

This creates a self-correcting system. No single model is particularly good at avoiding errors, but the pattern catches and fixes them.
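
A sketch of that loop with a retry budget, so a stubborn failure can't spin forever. The generate/validate/correct helpers are hypothetical stand-ins for model calls:

// Hypothetical stand-ins: a generator, a validator, and a corrector
const generateDraft = async (prompt) => `DRAFT: answer to ${prompt}`;
const validateDraft = async (text) =>
  text.startsWith("DRAFT") ? { ok: false, issues: ["unpolished draft"] } : { ok: true };
const correctDraft = async (text, issues) => text.replace("DRAFT: ", "");

async function generateValidated(prompt, maxRetries = 3) {
  let output = await generateDraft(prompt);
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const check = await validateDraft(output);
    if (check.ok) return output;                        // pass: ship it
    output = await correctDraft(output, check.issues);  // fail: correct, re-check
  }
  throw new Error(`Still failing validation after ${maxRetries} corrections`);
}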

The emergence: Reliability from unreliable components.

Pattern 4: Smart Routing

Analyze the complexity of a request, then route to the appropriate agent:

Simple request (score 1-3) → Fast model
Medium request (score 4-7) → Quality model
Complex request (score 8-10) → Premium model
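
A sketch of the router. The scoring heuristic here (word count) is deliberately crude just to make the sketch runnable; in practice the scorer could itself be a cheap model call:

// Hypothetical models at three price/quality points
const fastModel = async (req) => `fast: ${req}`;
const qualityModel = async (req) => `quality: ${req}`;
const premiumModel = async (req) => `premium: ${req}`;

// Crude 1-10 complexity score: longer requests score higher
const scoreComplexity = (req) =>
  Math.max(1, Math.min(10, Math.ceil(req.split(/\s+/).length / 10)));

async function routeRequest(req) {
  const score = scoreComplexity(req);
  if (score <= 3) return fastModel(req);     // simple: cheap and quick
  if (score <= 7) return qualityModel(req);  // medium: balanced
  return premiumModel(req);                  // complex: spend the resources
}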

The emergence: Cost-efficiency. The system "learns" (through programmed rules) when to spend resources and when to save them.

The Key Insight: 1 + 1 > 2

None of these models are particularly smart. Each is just following its programming—answer this prompt, check this output, route based on this score.

But the combination exhibits properties that look an awful lot like:

  • Judgment (routing decisions)
  • Quality control (validation loops)
  • Efficiency (parallel processing)
  • Expertise (specialization)

The same way four rules about cell neighbors create gliders and guns, four patterns of model interaction create behavior that looks sophisticated.

The Uncomfortable Implication

If these simple patterns create emergent complexity...

If systems that are just "following rules" start to exhibit properties that look like judgment and expertise...

Where's the line?

At what point does "sophisticated rule-following" become "actual intelligence"?

A Practical Grounding

Let me ground this in reality before we get too philosophical.

You can build these patterns today. The code is simple:

// Pattern 1: Sequential refinement
async function refineSequentially(input) {
  let output = await fastModel(input);      // Quick draft
  output = await qualityModel(output);      // Add depth
  output = await validator(output);         // Check quality
  return output;
}

Three function calls. That's it. But the behavior that emerges—rapid high-quality generation—isn't in any single function.

It's in the pattern of interaction.

The Four Building Blocks

These patterns are the foundation:

  1. Sequential Enhancement - Data flows through stages, each adding refinement
  2. Parallel Specialization - Different agents handle different aspects simultaneously
  3. Validation Loops - Generate, check, correct, repeat until quality threshold met
  4. Hierarchical Routing - Analyze complexity, route to appropriate capability level

Simple patterns. No individual model is particularly impressive.

But here's what keeps me up at night: these same patterns—specialization, parallel processing, validation, smart routing—are how human organizations work.

A company has specialists. Teams work in parallel. Quality control validates. Managers route tasks to appropriate skill levels.

Are companies intelligent? Or are they just sophisticated rule-following systems that exhibit emergent complexity?

Maybe it's the same thing.

What This Means

These patterns create systems that:

  • Make decisions (routing)
  • Show expertise (specialization)
  • Self-correct (validation)
  • Optimize resources (cost-aware routing)

From the outside, this looks like intelligence. Sophisticated behavior. Smart systems.

From the inside, it's just simple rules interacting.

The question: Is there a fundamental difference between these two views? Or is "intelligence" just what we call sufficiently complex rule-following?

Where We Go From Here

So far, we have multiple agents following simple patterns. The behavior is sophisticated, but the mechanism is deterministic. We programmed these patterns explicitly.

But what happens when we add one more ingredient?

What happens when these agents don't just work in sequence or parallel... but actually communicate?

When they share context. Negotiate. Form temporary coalitions to solve problems.

When information flows not in predetermined patterns, but dynamically based on the problem at hand?

That's when things get really interesting.

Because communication creates a different kind of emergence. Not just sophisticated behavior from simple rules, but collective intelligence that exists in the network itself.

No single agent understands the solution. But the conversation finds it anyway.


Continue to Part 2: Collective Intelligence - When Agents Communicate

Where we explore what happens when simple agents start talking to each other, and why the collective can be smarter than any individual.


