Good point, Ernesto. For microservice-appropriate systems, I concede that you are right, to a degree. My counterpoints:
Not all applications are appropriate for microservices. For example, a machine learning model is not. Nor is embedded software running in a device. Nor is a protocol stack, or a compiler.
And for those designs that are microservice-appropriate: one does not have just one microservice, one has many. And microservice architecture is actually a lot more complex than monolithic architecture. One chooses the microservice pattern to reach Internet scale, and it makes things vastly more complicated:
One has to deal with all the “saga” failure cases that one does not have in a monolithic application-server design. An application server performs each user request in a single transaction, whereas a microservice approach splits a request into many separate transactions. If any one of them fails, there is no “rollback” of the call chain; instead, there is a mess that has to be cleaned up by the application logic: failure messages need to be broadcast, compensating actions need to be performed, and so on.
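To make that concrete, here is a minimal sketch of the saga pattern in Python. The order/payment/shipping step names and the orchestration style are my own illustrative assumptions, not anyone’s production design; the point is just that the “rollback” has to be written by hand as compensating actions.

```python
# Minimal saga sketch: each step has a compensating action, and a failure
# part-way through triggers compensation of the already-completed steps,
# in reverse order. Step names here are hypothetical.

class SagaStep:
    def __init__(self, name, action, compensate):
        self.name = name
        self.action = action          # performs this step's local transaction
        self.compensate = compensate  # undoes it if a later step fails

def run_saga(steps):
    completed = []
    for step in steps:
        try:
            step.action()
            completed.append(step)
        except Exception as err:
            # No distributed rollback exists: the application logic must
            # explicitly compensate everything that already committed.
            print(f"step '{step.name}' failed ({err}); compensating")
            for done in reversed(completed):
                done.compensate()
            return False
    return True

if __name__ == "__main__":
    def charge_payment():
        raise RuntimeError("payment service unavailable")

    ok = run_saga([
        SagaStep("reserve-inventory",
                 lambda: print("inventory reserved"),
                 lambda: print("inventory released")),
        SagaStep("charge-payment",
                 charge_payment,
                 lambda: print("payment refunded")),
        SagaStep("schedule-shipping",
                 lambda: print("shipping scheduled"),
                 lambda: print("shipping cancelled")),
    ])
    print("order placed" if ok else "order failed; compensations applied")
```

In a monolith, all of that is one `BEGIN`/`COMMIT`/`ROLLBACK`; in a saga, the failure paths are application code you must design, test, and operate.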
It is true that if an individual service is small, then it is individually easier to understand; but even a small service will generally be a few hundred lines, and if you have to trace a request through several small services and one of them is confusing, you get stuck pretty quickly.
Tangentially, I’ll point out that Python’s runtime inefficiency doesn’t scale. If you have to run a thousand Python instances to do what could be done with 10 natively compiled Rust instances, the cost difference starts to add up. Again, scale is the issue: Python is great for small-scale, single-team projects, but it doesn’t scale to many teams, a large codebase, and/or Internet-level usage (millions or hundreds of millions of users).
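As a back-of-envelope sketch of that cost claim: the 1000-vs-10 instance ratio is the one above, but the per-instance price is a made-up assumption purely for illustration.

```python
# Back-of-envelope fleet cost comparison. Instance counts come from the
# argument above; the hourly price is a hypothetical placeholder.
HOURS_PER_MONTH = 730
PRICE_PER_INSTANCE_HOUR = 0.10  # assumed, not a real quote

python_instances = 1000
rust_instances = 10

python_cost = python_instances * PRICE_PER_INSTANCE_HOUR * HOURS_PER_MONTH
rust_cost = rust_instances * PRICE_PER_INSTANCE_HOUR * HOURS_PER_MONTH

print(f"Python fleet: ${python_cost:,.0f}/month")  # $73,000/month
print(f"Rust fleet:   ${rust_cost:,.0f}/month")    # $730/month
```

Whatever the real unit price is, a 100x difference in instance count dominates once usage is large.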