What Software Is
Software has always been mostly other things. The code is the easy part — the substrate underneath is people, problems, systems, and judgement.
Software has always been mostly other things.
Writing the code is where the artifact comes from, sure. But if you’ve shipped anything real, you know the code was the easy part. The hard part was figuring out what to build, then convincing people to let you build it, then watching how they actually used it and realising you’d misunderstood half of it, then doing it again.
In all my years at Sesimi, I’ve never seen a project stall on the code. Coding takes time. It doesn’t stall. What stalls is understanding — what the customer wants, what their customer wants, what we agreed to last week, what we’re now pretending we agreed to.
Every few years someone declares that programmers are about to be replaced. The form changes — outsourcing, low-code, AI — but the prediction is the same, made with the same confidence, and it’s always wrong. Not in a small way. Wrong about what software is.
So here’s the thing again, in the form it deserves.
We are uncovering better ways of building software by doing it and helping others do it. Through this work we have come to value:
People over code
Problem-finding over problem-solving
Systems over practices
Judgement over process

That is, while there is value in the items on the right, we value the items on the left more.
Call this The Substrate.
The substrate is what software grows on. People, problems, systems, judgement — the four lines name its parts. The code is what grows out of it. Everything else in this essay is an attempt to take the substrate seriously.
People over code
This is the dumbest of the four and also the most important. Software is made by people, used by people, paid for by people, complained about by people. The code is what falls out the bottom of all that human activity. It’s residue.
When you forget this, everything else gets weird. You start optimising the residue. You measure pull requests merged. You count lines. You time the build. You instrument the developer and forget to instrument the customer. You hire for the syntax and not for the curiosity. And then you wonder why the team is shipping faster than ever and the product is somehow worse.
The mistake isn’t subtle. It’s the same one car manufacturers made when they decided the assembly line was the company. The line is real. It matters. It’s also not why anybody buys a car.
People-first means: who’s on the team, who’s the customer, who’s the team’s customer’s customer, what do they actually want, who’s blocking who, who’s pretending to agree, who actually agrees. That’s the work. The code is what gets written when you’ve done it well.
Everything that follows derives from this. If you take people seriously as the substrate of software, the rest is just bookkeeping.
Problem-finding over problem-solving
The industry has spent thirty years getting better at solving problems. Faster languages, better frameworks, cleaner abstractions, infinite tutorials on how to centre a div. We’re aggressively good at problem-solving. We’re embarrassingly bad at problem-finding.
Here’s what I mean. Solving a problem is bounded — it has a state where you’re done. Finding a problem isn’t. You can keep going. You can talk to the customer for the fifth time and find out the thing they’ve been complaining about for a year is actually a symptom of a different thing they didn’t think to mention. You can ship a feature, watch them use it wrong, and realise the wrong they’re doing is more interesting than the right you designed for.
Problem-finding has no terminal state. Which is exactly why it’s so easy to skip.
Engineering culture skips it. We treat the spec like it came down from a mountain. We treat the ticket like it’s the truth. Then we put our heads down and solve, solve, solve. And we end up with software that is technically excellent and substantively wrong.
AI makes this worse before it makes it better. When code gets cheap, there’s a temptation to just generate more of it — more features, more endpoints, more options — under the assumption that volume is value. It isn’t. Volume of solution to the wrong problem is just faster wrongness.
The good engineers I’ve worked with — the ones who actually move things — are the ones who keep asking what the actual problem is, even after everyone else has moved on. They are, frankly, a bit annoying. Hold on to them.
Systems over practices
Extreme Programming gave us a list of practices. TDD. Pair programming. Continuous integration. Collective ownership. Planning poker. Some of these are good. Some are great. None of them are the point.
The point is the system that turns understanding into shipped software. The practices are how some teams, in some eras, made that system work. Confusing the practices for the system is the most common mistake in our industry, and it’s the same mistake whether you’re cargo-culting Spotify squads or mandating a 90% test coverage threshold.
Ask yourself: is TDD still important? My honest answer — sometimes, depending on what you’re building, and increasingly less so as AI writes more of the first draft. Is testing important? Yes, but the question is now what you’re testing for, not how many tests you have. Is AI testing AI-written code even useful? Probably, in narrow ways, with humans in the loop. The practices keep shifting because the system underneath them keeps shifting. Treating any individual practice as gospel just locks you into yesterday.
A system view asks different questions. What’s the loop from customer pain to deployed change? Where does that loop break? Who’s in it? Who should be? Where are we lying to ourselves about how it works? What’s the slowest, dumbest, most human-bottlenecked step — and is that step actually the most important one? It usually is.
Good practices fall out of a healthy system. They don’t create one. A team with a good system invents the practices it needs. A team obsessed with practices ends up performing them while the system rots underneath.
Judgement over process
Process is what you reach for when you don’t trust the people. Sometimes that’s appropriate. More often it isn’t, and the process becomes a substitute for thinking, and the thinking atrophies, and now you can’t change the process because nobody remembers why it exists.
Every methodology in software starts as someone’s good judgement, written down. Agile was good judgement, written down. XP was good judgement, written down. The trouble is that judgement is contextual and the writing-down strips the context. What’s left is a ritual. The ritual gets enforced. The judgement that produced it gets forgotten.
You can see this anywhere. The four-eyes review that started because of one bad incident and is now applied to changelog edits. The estimation ceremony that began as a useful conversation and is now a number-generating machine nobody believes. The retro that’s run every fortnight because it’s run every fortnight.
None of these are wrong on their face. They’re wrong because nobody can tell you what they’re for anymore, and nobody is allowed to ask.
In the age of AI this matters more, not less. When the cost of producing code goes to zero, the cost of producing the wrong code goes to zero too. The only thing that doesn’t go to zero is judgement — taste, scepticism, the willingness to push back on the ticket, the instinct to ask the customer one more question.
Process can’t generate this. AI can’t generate this. It comes from people who’ve been around the block and care.
So: prefer the judgement of someone you trust over the output of a process you inherited. If you have to choose, the judgement will be right more often. And if the judgement turns out to be wrong, at least you’ll know whose it was, and you can do something about that. You can’t argue with a process.
These four lines aren’t new. They were true when software was punched into cards and they’ll be true after whatever replaces the current models. The technology shifts. The substrate doesn’t. Software is people figuring out what they want, told to other people, who tell it to a machine — which forgets nothing and understands less.
Notice what AI has done to the conversation. It has pulled all the oxygen toward the right-hand side. Code, problem-solving, practices, process — these are the things AI can plausibly help with, so these are the things everyone wants to talk about. Meanwhile the left-hand side — people, problem-finding, systems, judgement — gets quietly sidelined, partly because AI can’t help with it and partly because it’s harder to demo. That’s exactly backwards. AI doesn’t make the right-hand items more important. It makes them cheaper. Cheap things deserve less attention, not more.
AI will make some of us faster. It won’t make any of us right. The substrate is still us.