Thinking in Systems: A Retrospective on a Life Spent Trying to Understand How Things Fit Together

Sunday, 07 December 2025


10 minute read

By someone who can't stop analysing patterns, even their own.

As the category says, this is a far more personal essay than my usual work. I don't mind if you skip it… but if it resonates, leave a comment! I read them all (and it needn't be public).

The Beginning: Before I Knew What "Systems Thinking" Was

I didn't start life knowing I was "a systems thinker." I just knew that I saw connections everywhere.

As a kid I picked apart ideas the way other people took apart broken radios: not because I wanted to fix them, but because I needed to know how the pieces related.

People assumed I was being difficult when I asked too many questions. In truth, I was trying to calm my brain down by giving it a structure to work with.

Only much later did I discover that a mix of ASD cognition and a background in psychology explains this beautifully. My mind simply doesn't accept unexamined wholes; it has to disassemble and reassemble them into something stable.


Studying Psychology: Thinking About Minds as Systems

Psychology, especially forensic psychology, was probably the first field that let me acknowledge what my brain had been doing all along: tracing causal chains.

People, behaviours, institutions, incentives, trauma histories… they're all networks of pressure points and compensations.

If you study offenders long enough, you learn that the "final act" is never final; it's always the last turn in a loop that started years ago. Feedback loops everywhere.

The Brain as a System of Systems

What deepened this understanding was neuroanatomy, the study of how the physical brain maps to cognition and behaviour. Oliver Sacks was formative here. His case studies in books like The Man Who Mistook His Wife for a Hat showed how specific lesions or dysfunctions could surgically remove particular capabilities while leaving others intact. A man who couldn't recognise faces but could still compose music. A woman who lost her sense of proprioception and had to consciously will every movement. Each case was a window into the modular, interconnected nature of the mind.

The most famous example predates Sacks by over a century: Phineas Gage, the railroad foreman who survived an iron rod blasting through his prefrontal cortex in 1848. He lived, but his personality didn't. The man who emerged was impulsive, irreverent, profane. The physical substrate of his decision-making and social cognition had been destroyed, and with it, the person his friends and family knew. Gage proved that "self" isn't ethereal; it's implemented in tissue that can be damaged like any other component.

Later, as a research psychologist, I worked on neurovascular dementia, the cognitive decline that follows vascular damage to the brain. Small strokes, silent infarcts, white matter lesions. Each one a tiny subsystem failure, and over time, the cascading effects became visible: memory fragmentation, executive dysfunction, personality changes. I watched the system degrade in slow motion.

Each stage reinforced the same lesson: humans are systems. Not metaphorically, but structurally. The brain is a massively parallel, deeply interconnected network of specialised subsystems, and when parts fail, the whole doesn't just "feel different"; it computes differently.

Computers, by comparison, are the simpler digital version of the same principle. Easier to debug, easier to reason about, easier to fix. But the underlying logic is the same: components, connections, feedback, emergence.

I didn't love psychology because I wanted to "help people" (though I invariably did). I loved it because it let me analyse human behaviour the way I already analysed everything else: What's interacting with what? What emerges from those interactions? Why does this keep repeating?

Psychology gave me the language for that.

Software gave me the tools.


The Turn Toward Software: Modelling Systems That Behaved Predictably

Moving into software wasn't a career pivot so much as a relief.

Computers, unlike people, do what they say. Their inconsistencies can be documented, debugged, versioned, and explained. Human inconsistency can only be interpreted, and I've never liked guesswork.

Programming gave me a playground where my style of thinking wasn't unusual; it was practical. The same habit of breaking things into components suddenly became productive rather than socially awkward.

Where psychology taught me to analyse systems, software let me build them.


Microsoft: The Right Systems, The Wrong Role

Microsoft was not traumatic. It wasn't "toxic." It was just… the wrong job for someone like me.

As a Program Manager on Project Server and later ASP.NET, I got to understand how large-scale software frameworks and systems actually functioned. The technical side was fascinating, sometimes brilliant. Seeing how components interconnected across massive codebases, how architectural decisions rippled through millions of lines of code, how teams coordinated to ship software at scale. That part fed exactly the kind of thinking I'd been doing my whole life.

But Program Management at Microsoft isn't primarily a technical role. It's a coordination role. The job is to be the glue between engineering, design, marketing, and customers, which means the core skill isn't systems analysis. It's social navigation.

Meetings, political subtext, emotional calibration, tacit expectations: all the states you have to maintain in your head just to operate. In forensic psychology, ambiguity is interesting. In a corporation, ambiguity is… policy.

And that kind of environment slowly drained me. Not because anyone was awful, but because keeping pace required a kind of multi-layered social awareness that I simply do not have in real time. I was in a role that demanded constant context-switching between technical depth and social performance, and the latter consumed far more energy than it should.

People call this "burnout," but that's too clinical. It felt more like I was running a mental emulator for a brain that wasn't mine, and the overhead finally caught up.

Leaving wasn't an act of bravery. It was the point at which the simulation crashed.


Afterwards: Building and Analysing on My Own Terms

What came next wasn't a reinvention. It was me finally doing what I'd always done, just without the wrong job title attached.

The work since Microsoft has been a mix of building and analysing. Startups would bring me in to understand systems that had grown beyond their founders' comprehension: legacy codebases, tangled architectures, teams that had lost sight of what they'd built. My job was to map the territory, find the pressure points, and either redevelop what was broken or help them understand why it couldn't be saved.

Sometimes that meant bootstrapping entirely new systems from scratch. Sometimes it meant building teams who could maintain what I'd untangled. Sometimes it meant being the person who finally said "this needs to be thrown away", which is its own kind of systems analysis.

The pattern repeated across industries and scales. A fintech startup drowning in technical debt. An enterprise trying to modernise a decade-old monolith. A small team that had accidentally built something they no longer understood. Each engagement was different on the surface, but underneath they were all the same problem: a system that had exceeded someone's ability to hold it in their head.

That's where I came in. Not as a contractor who writes code and leaves, but as someone who could absorb the whole, understand how the pieces interacted, and either fix it or explain why it couldn't be fixed. The role didn't have a clean title. "Consultant" undersells it. "Architect" doesn't fully encompass it. "The person you call when nobody understands what's happening anymore" is closer to the truth.

None of it felt like different jobs. It was all just systems waiting to be understood.


The Present: Distributed Systems and AI Theory

Today, I find myself supporting large distributed systems as essentially a one-person team. The irony isn't lost on me: I left Microsoft partly because the coordination overhead was exhausting, and now I operate as a single point of integration across systems that would normally require entire departments.

But the difference is control. When you're the only person in the loop, there's no translation overhead. No meetings to align stakeholders. No political layers between understanding and action. The system and I can have a direct conversation.

The other thread running through the present is AI, specifically the theory of how we might advance the use of LLMs and related technologies. Not just prompting or fine-tuning, but thinking about AI subsystems architecturally. How do you compose multiple models? How do you build feedback loops that actually improve behaviour? How do you design systems where AI components can be observed, debugged, and reasoned about the same way we reason about any other distributed system?
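Those three questions can be sketched in miniature. Below is a toy Python sketch, with plain callables standing in for real model calls and every name hypothetical: steps compose into a pipeline, the pipeline records every intermediate state so it can be observed and debugged, and a simple evaluator closes the feedback loop by feeding output back in until it passes.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Step:
    """One AI subsystem: here, just a named text transform."""
    name: str
    run: Callable[[str], str]

@dataclass
class Pipeline:
    """Composes steps and records every intermediate state,
    so a whole run can be inspected after the fact."""
    steps: List[Step]
    trace: List[Tuple[str, str, str]] = field(default_factory=list)

    def __call__(self, text: str) -> str:
        for step in self.steps:
            out = step.run(text)
            self.trace.append((step.name, text, out))  # observability hook
            text = out
        return text

def refine_until(pipeline: Pipeline, text: str,
                 accept: Callable[[str], bool], max_rounds: int = 3) -> str:
    """Feedback loop: feed the output back in until the evaluator accepts."""
    result = text
    for _ in range(max_rounds):
        result = pipeline(result)
        if accept(result):
            break
    return result

# Plain callables stand in for model calls (an LLM API would slot in here).
draft = Step("draft", lambda t: t + " draft")
polish = Step("polish", lambda t: t.strip() + ".")

pipe = Pipeline([draft, polish])
final = refine_until(pipe, "idea", accept=lambda s: s.count("draft") >= 2)
```

The point of the sketch is the shape, not the stand-in logic: because every step is a named component and every transition is traced, the whole thing can be reasoned about like any other distributed system.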

It turns out the same mental toolkit applies. The brain is a system of specialised subsystems. Software is a system of interacting components. And AI, at least the kind we're building now, is a system of models, prompts, contexts, and feedback loops that can be analysed the same way.

The technology is new. The thinking isn't.


What Makes This Work

My mind works best when I can:

  • Explore without permission
  • Map things until they stop feeling chaotic
  • Test ideas by nudging them lightly
  • Follow curiosity to its natural conclusion
  • Build systems that behave honestly, even when humans don't

Code doesn't require interpretation. Systems don't gaslight you. Feedback loops make sense.

There's something deeply calming about that.


Seeing the Life Story Through the Thinking Style

If I look backwards, the pattern is almost embarrassingly consistent:

  • Childhood: Breaking ideas into pieces
  • Psychology: Learning the vocabulary of systems
  • Neuroanatomy: Understanding the brain as modular, interconnected subsystems
  • Forensics: Modelling human behaviour as causal loops
  • Software: Finding a system that mirrors my cognitive shape
  • Microsoft: Learning large-scale systems, and discovering the cost of the wrong role
  • Consulting: Becoming the person you call when nobody understands what's happening
  • Present: Supporting distributed systems solo; theorising about AI architectures

It's not a heroic narrative. It's a systemic one.

When you think in systems, your life becomes one too.

Not a tidy one. Not always a kind one. But a coherent one.


What I've Learned

I don't have a neat conclusion. Systems thinkers rarely do ; we just notice that things connect.

But if there's anything useful here, it might be this:

The way you think isn't a limitation to work around. It's a lens. And once you stop fighting the lens, you can finally see clearly through it.

I spent decades trying to think like other people expected me to think. Faster. Smoother. More socially fluent. Less obsessively detailed.

None of that worked.

What worked was building environments where my actual cognitive style was an asset rather than a liability. Writing code instead of attending meetings. Mapping systems instead of reading rooms. Creating feedback loops I could trust.

The world doesn't owe you an environment that fits. But you're allowed to build one.

And sometimes, looking back at the shape of your own life, you realise you already have.


© 2025 Scott Galloway — Unlicense — All content and source code on this site is free to use, copy, modify, and sell.