Microservices are a people decision, not a technical one.
February 2026 • 376 words • 3 min read
I’ve seen companies running thirty microservices with a twelve-person engineering team.
Nobody questions it. Microservices have become the default answer to “how should we structure our system?” - adopted for the wrong reasons, creating new problems while the old ones quietly follow along.
My unpopular opinion - microservices are mostly a way to organise teams, not to scale systems. And most companies that adopt them are doing it backwards.
The actual value
Mel Conway observed in 1967 that organisations design systems that mirror their own communication structure. Three teams building together produce a three-part system - whether that’s the right architecture or not.
This is the real case for microservices. When the payments team owns the payments service end to end - builds it, deploys it, fixes it at 2am - they stop waiting on five other teams to ship a change. Reduced coordination. Clearer ownership. That’s worth something.
But it only works when the split is real. One team, one thing, completely.
Most companies don’t do it this way.
The mistake
The most common thing I’ve seen: splitting code without splitting ownership.
You have a user service, an auth service, an account service. They all touch the same user data. Any change to how a user is represented requires coordinating across three teams. The services are separate. The work isn’t.
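The coupling is easy to see in a hypothetical sketch (the service and field names here are invented for illustration): each service carries its own copy of the user representation, so renaming a single field is three coordinated changes, not one.

```python
from dataclasses import dataclass

@dataclass
class UserServiceUser:      # owned by the user service
    user_id: str
    full_name: str

@dataclass
class AuthServiceUser:      # owned by the auth service
    user_id: str
    full_name: str          # duplicated field: must change in lockstep
    password_hash: str

@dataclass
class AccountServiceUser:   # owned by the account service
    user_id: str
    full_name: str          # duplicated again
    plan: str

def places_a_rename_must_touch(field: str) -> int:
    """Count how many separately-owned schemas carry the same field."""
    schemas = [UserServiceUser, AuthServiceUser, AccountServiceUser]
    return sum(field in s.__dataclass_fields__ for s in schemas)
```

Here `places_a_rename_must_touch("full_name")` is 3 - three repositories, three reviews, three deploys, for one conceptual change.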
The mess hasn’t gone away. It’s moved into the gaps between services.
And when something breaks, it breaks across multiple services at once. A bug that takes an hour to find in a monolith takes a day when you’re tracing requests through four systems and reading logs from three different places.
Every service also needs its own deployment pipeline, its own monitoring, its own on-call rotation. At Netflix this overhead is justified. At a 20-person startup it’s a tax on every engineer, every day. I’ve watched teams ship at half their previous speed after moving to microservices - not because the technology was wrong, but because the team wasn’t big enough to carry the weight.
Why this matters more now
AI coding agents change the calculation further.
An agent can hold an entire codebase in context and make changes across multiple modules without getting lost. What’s harder is reasoning across service boundaries - different repositories, different API contracts, different deployment pipelines. Every boundary you add is friction for the agent.
A modular monolith, where code is well-structured but lives in one place, is significantly easier for an agent to work with. I think this is where most teams will land as AI-assisted development matures - not because microservices are wrong, but because the tool that’s transforming how we build software works best with fewer walls.
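A modular monolith keeps the boundaries without the network. As a minimal sketch (module names invented), each module exposes an explicit interface and its dependencies are passed in, so ownership lines stay visible - but crossing a boundary is a plain function call, not an HTTP hop with its own contract and pipeline.

```python
class PaymentsModule:
    """Owned by the payments team; the only entry point other modules use."""
    def charge(self, user_id: str, amount_cents: int) -> dict:
        # In-process call - no serialization, no retries, no separate deploy.
        return {"user_id": user_id, "amount_cents": amount_cents, "status": "charged"}

class BillingModule:
    """Depends on the payments *interface*, injected explicitly,
    so the boundary is enforced in code review rather than by a network."""
    def __init__(self, payments: PaymentsModule) -> None:
        self._payments = payments

    def invoice(self, user_id: str, amount_cents: int) -> dict:
        receipt = self._payments.charge(user_id, amount_cents)
        return {"invoice_for": user_id, "receipt": receipt}

billing = BillingModule(PaymentsModule())
result = billing.invoice("u42", 1999)
```

An agent (or a new engineer) can trace `invoice` to `charge` in one codebase, one stack trace, one refactor.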
When to actually use them
Microservices make sense when teams are genuinely separate - different product lines, different shipping cadences, minimal coordination needed. They make sense when one part of your system has dramatically different scaling needs from the rest. At real scale, the operational overhead is worth it.
The mistake is using them as the starting point. Start with a well-structured monolith. Split when you have a specific reason to split, not because microservices sound like the professional choice.
The architecture should follow the team structure. Most of the time, the team is smaller than the architecture suggests.