I've noticed a pattern that I keep coming back to, across different technology situations: different companies, different stages, different stacks.
Most of the problems weren't caused by wrong decisions. They were caused by right decisions made in the wrong order.
What sequencing problems look like
The most common version: a team decides to invest in infrastructure before the product is stable. They build a sophisticated deployment pipeline, microservices architecture, observability stack — all the things that scale well — before they've really established what they're scaling. Then the product changes, the architecture becomes a constraint, and they're refactoring systems that were never actually proven out.
Or the reverse: a team moves fast, ships a monolith, gets traction — and then realises the accumulated architectural decisions are now load-bearing walls. The right decomposition is clear in retrospect, but the sequence of what got built first made every subsequent step harder.
Neither team made bad decisions at the time. They both made reasonable calls. The problem is that the calls were made in the wrong order relative to what they actually knew and needed.
Why sequence matters more than the decision itself
Here's the thing about technology decisions: most of them are reversible, but the cost of reversal varies dramatically depending on what you've built on top of them.
A choice of database engine is reversible. If you've built three hundred service integrations that depend on Postgres-specific behaviour, the reversal cost is enormous. If you've been running for six months and have one service talking to it, you can migrate over a weekend.
The decision is the same. The sequence is different.
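One way to keep that reversal cost low is to confine engine-specific behaviour to a single adapter, so only one class knows which database sits underneath. A minimal sketch of the idea, in Python, with all names hypothetical:

```python
# Sketch: engine-specific behaviour lives behind one narrow interface,
# so reversing the database choice means rewriting one class, not many.
from typing import Protocol


class TagStore(Protocol):
    """The only surface the rest of the codebase touches."""

    def find_by_tag(self, tag: str) -> list[str]: ...


class PostgresTagStore:
    """Uses a Postgres-specific array operator -- but only here."""

    def __init__(self, conn):
        self.conn = conn

    def find_by_tag(self, tag: str) -> list[str]:
        # '@>' is Postgres array containment; callers never see it.
        rows = self.conn.execute(
            "SELECT id FROM items WHERE tags @> ARRAY[%s]", (tag,)
        )
        return [r[0] for r in rows]


class InMemoryTagStore:
    """Drop-in replacement: same interface, no Postgres anywhere."""

    def __init__(self, items: dict[str, list[str]]):
        self.items = items

    def find_by_tag(self, tag: str) -> list[str]:
        return [item_id for item_id, tags in self.items.items() if tag in tags]
```

The point isn't the pattern itself; it's that the pattern is cheap to add early and very expensive to retrofit after three hundred call sites have learned the Postgres dialect.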
This is why I'm often more interested in what a team did before they made a given decision than in the decision itself. The question "should we build microservices?" is almost unanswerable without knowing: what's the team size, what's the deployment frequency, what's the domain complexity, and most importantly, what are the next five things you're going to build? The sequencing context changes the answer completely.
A few places where sequencing gets it wrong
Solving scale before you have load: Investing in distributed systems, caching layers, and horizontal scaling before you have traffic to justify them. The engineering is interesting, but it's solving a future problem at the expense of the present one — which is usually getting the product right and understanding what users actually do.
Standardising before you have variety: Introducing frameworks, shared libraries, and organisational standards before you understand what you're standardising. Early standardisation tends to encode the wrong things, because you don't yet know which patterns are actually recurring and which are accidents of the current moment.
Modernising the wrong layer first: In legacy migration projects, there's always a temptation to start with the most technically interesting part rather than the part that most constrains everything else. I've seen teams rebuild the data pipeline before the core domain model was stable, or redesign the API layer before the underlying service was well understood. The modernised layer then has to be revised again once the layer it depends on finally settles.
Premature abstraction in team structure: Hiring or reorganising around a future architecture rather than the current one. Building a platform team when you don't have a clear picture of what the platform needs to look like. Creating specialist roles before you've validated that the specialisation is actually warranted.
What good sequencing looks like
It's not that there's a single correct order for every situation. It's more that you're trying to maximise the information available at each decision point and minimise the cost of being wrong.
A few questions that sometimes help:
If this decision turns out to be wrong, what's the cheapest way to recover, and what would I need to preserve to do that? This is the classic reversibility framing, but applied to what comes after rather than the decision itself.
What has to be true about the current layer before this layer can be addressed well? Often a problem at one layer is actually a symptom of something unsettled in the layer below it. Fixing the symptom is faster, but the fix may not hold.
Is this decision being driven by current evidence or by anticipating a future state that may not materialise? Future-anticipating decisions aren't wrong — sometimes you genuinely have to build ahead of the present state — but they should be conscious, not the default.
I don't think any of this is a formula. Technology situations are particular enough that the same analysis produces different answers in different contexts. But sequence is a dimension I think about a lot, and I find it often surfaces options that get missed when the focus is purely on what decision to make rather than when.
If you're working through a technology decision and want to think it through, I'm happy to have that conversation — get in touch.