Microservices - Please, don’t
http://basho.com/posts/technical/microservices-please-dont/

This blog post is adapted from a lightning talk I gave at the Boston Golang meetup in December of 2015.

For a while, it seemed like everyone was crazy for microservices. You couldn’t open up your news aggregator of choice without some company you had never heard of touting how the move to microservices had saved their engineering organization. You may have even worked for one of those companies that got swept up in all the hype around these tiny, magical little services and how they were going to solve all of the problems in your big, ailing, legacy codebase.

Of course, in hindsight, nothing could have been further from the truth. The beauty of hindsight is that it’s often much closer to the 20/20 vision we thought we had looking forward, all those months ago.

I’m going to cover a few of the major fallacies and “gotchas” of the microservices movement, coming from someone who worked at a company that also got swept up in the idea that breaking apart a legacy monolithic application was going to save the day. While I don’t want the takeaway of this blog post to be “Microservices == Bad”, ideally anyone reading this should walk away with a series of issues to think about when deciding if the move to a microservice-based architecture is right for them.

What is a “Micro Service” anyways?

There really is no perfect definition of what does and does not constitute a microservice, although a few people who really champion the approach have codified it into a fairly reasonable set of requirements.

Tautologically, it is not a monolith. What this actually means in practice is that a microservice only deals with as limited an area of the domain as possible, so that it does as few things as necessary to serve its defined purpose in your stack. To give you a more concrete example, if you were a bank with a “Login Service”, the last thing you’d want is for it to have access to the records of your users’ financial transactions. You’d push that out to a “Transaction Service” of some kind (keep in mind, naming things is very hard).
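
To make that boundary concrete, here’s a minimal sketch in Go (the service and method names are mine, not anything prescriptive): each service exposes only the slice of the domain it owns, so the login side simply has no way to reach transaction records.

```go
// A minimal sketch of the idea, with hypothetical names: each service exposes
// only the narrow slice of the domain it owns. The "Login Service" can verify
// credentials and issue sessions, but it has no handle on transaction records;
// those live behind a separate boundary entirely.
package domain

import "time"

// LoginService owns authentication and nothing else.
type LoginService interface {
	Authenticate(username, password string) (sessionToken string, err error)
	Revoke(sessionToken string) error
}

// TransactionService owns the financial history of an account.
type TransactionService interface {
	Transactions(accountID string, since time.Time) ([]Transaction, error)
}

type Transaction struct {
	ID     string
	Amount int64 // in cents
	Posted time.Time
}
```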

Additionally, when people talk about microservices, they are often implicitly talking about services that need to speak to each other remotely. Since they’re distinct processes, often running in locations remote from each other, it’s common to build these services so they speak over the network using REST or some kind of RPC protocol.

At the outset, this actually seems pretty simple – we’ll just wrap tiny pieces of the domain in a REST API of some kind and have everyone talk to each other over the network. In my experience, there are five “truths” people believe about this approach that are not always true:

  1. It keeps the code cleaner
  2. It’s easy to write things that only have one purpose
  3. They’re faster than monoliths
  4. It’s easy for engineers to not all work in the same codebase
  5. It’s the simplest way to handle autoscaling, plus Docker is in here somewhere

Fallacy #1: Cleaner Code

“You don’t need to introduce a network boundary as an excuse to write better code”

The simple fact of the matter is that neither microservices, nor any other approach to modeling a technical stack, is a requirement for writing cleaner or more maintainable code. It is true that since there are fewer pieces involved, your ability to write lazy or poorly thought out code decreases; however, this is like saying you can solve crime by removing desirable items from storefronts. You haven’t fixed the problem, you’ve simply removed many of your options.

A popular approach is to architect the internals of your code around logical “services” that own a piece of the domain. This mirrors the concept of a microservice in that it keeps the dependencies needed for managing that piece of the domain explicit, and keeps your key business logic from sprawling into multiple places. And because these internal services are called in-process, they incur neither the extra use of the network nor the potential error cases that arise from it.

A further benefit of this approach, given that it very closely mirrors a Service Oriented Architecture built around microservices, is that once you decide you should move to a microservice approach, you’ve already done a good deal of the design work up front, and likely understand your domain well enough to be able to extract it. A solid SOA approach begins in the code itself and moves out into the physical topology of the stack as time moves on.
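
To illustrate what those internal “services” might look like in practice, here’s a rough Go sketch (names and fields are purely illustrative): the same boundaries a microservice would impose, but crossed with a plain function call instead of a network hop, and with every dependency made explicit in the constructor.

```go
// A rough sketch of "internal services" living in one process: the same
// boundaries a microservice would impose, but crossed with a function call
// rather than a network hop. Names and fields here are illustrative only.
package app

import "errors"

// AccountStore is the only way the billing service is allowed to look at
// accounts; the dependency is explicit in the constructor.
type AccountStore interface {
	Balance(accountID string) (int64, error)
}

// BillingService owns billing logic; it cannot wander into other domains
// because it only holds the interfaces it was handed.
type BillingService struct {
	accounts AccountStore
}

func NewBillingService(accounts AccountStore) *BillingService {
	return &BillingService{accounts: accounts}
}

var ErrInsufficientFunds = errors.New("insufficient funds")

// Charge debits an account, calling into the account domain in-process.
func (b *BillingService) Charge(accountID string, amountCents int64) error {
	balance, err := b.accounts.Balance(accountID)
	if err != nil {
		return err
	}
	if balance < amountCents {
		return ErrInsufficientFunds
	}
	// ... record the charge, emit an event, and so on.
	return nil
}
```

If a “Billing Service” ever does need to become its own process, the seam is already there; only the transport changes.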

Fallacy #2: It’s Easier

“Distributed Transactions are never easy”

While it might seem simple at the outset, most domains (especially in newer companies which need to prototype, pivot, and generally re-define the domain itself many times) do not lend themselves to being neatly carved into little boxes. Oftentimes, a given piece of the domain needs to reach out and get data about other parts in order to do its job correctly. This becomes even more complex when it needs to delegate the responsibility of writing data outside of its own domain. Once you’ve broken out of your own area of influence and need to involve others in the request flow to store and modify data, you’re in the land of distributed transactions (sometimes known as Sagas).

There is a lot of complexity wrapped up in the problem of involving multiple remote services in a given request. Can you call them in parallel, or must they be called serially? Are you aware of all of the possible errors (both application and network level) that could arise at any point in the chain, and what that means for the request itself? Often, each of these distributed transactions needs its own approach for handling the failures that could arise, which can be a lot of work: not only understanding the errors, but determining how to handle and recover from each of them.
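
As a rough illustration of how much of that work is pure failure handling, here’s a sketch of a two-step flow across two hypothetical remote services: create an order, then charge for it, with a compensating call if the second step fails. The clients and method names are made up; the point is the ceremony.

```go
// A sketch of a two-step distributed "transaction" with hypothetical service
// clients. Most of the code is failure handling: timeouts, partial success,
// and a compensating call to undo step one if step two fails.
package checkout

import (
	"context"
	"fmt"
	"time"
)

type OrderClient interface {
	CreateOrder(ctx context.Context, cartID string) (orderID string, err error)
	CancelOrder(ctx context.Context, orderID string) error
}

type PaymentClient interface {
	Charge(ctx context.Context, orderID string, amountCents int64) error
}

func PlaceOrder(ctx context.Context, orders OrderClient, payments PaymentClient, cartID string, amountCents int64) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	orderID, err := orders.CreateOrder(ctx, cartID)
	if err != nil {
		return fmt.Errorf("create order: %w", err)
	}

	if err := payments.Charge(ctx, orderID, amountCents); err != nil {
		// Compensating action: try to undo step one. If this also fails we
		// are left with a dangling order that some reconciliation job must
		// clean up later -- exactly the kind of case described above.
		if cancelErr := orders.CancelOrder(context.Background(), orderID); cancelErr != nil {
			return fmt.Errorf("charge failed (%v) and cancel failed: %w", err, cancelErr)
		}
		return fmt.Errorf("charge order %s: %w", orderID, err)
	}
	return nil
}
```

In a monolith, that whole flow is one database transaction and a rollback.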

Fallacy #3: It’s Faster

“You could gain a lot of performance in a monolith by simply applying a little extra discipline”

This is a tough one to dispel, because in truth you often can make individual systems faster by paring down the number of things they do, the number of dependencies they load, and so on.

But ultimately, this is a very anecdotal claim. While I have no doubt folks who pivoted to microservices saw individual code paths isolated inside of those services speed up, understand that you’re also now adding the network in between many of your calls. The network is never as fast as co-resident code calls, although oftentimes it can be “fast enough”.

Additionally, many of these stories about performance gains are actually touting the benefits of a new language or technology stack entirely, and not just the concept of building out code to live in a microservice. Rewriting an old Ruby on Rails, or Django, or NodeJS app in a language like Scala or Go (two popular choices for a microservice architecture) is going to bring a lot of performance improvements inherent to the choice of technology itself. But these languages don’t really “care” that you chose to describe the process they run in as “micro”; they simply perform better due to things like compilation.

Further, for a majority of apps in the startup space that are just starting out, raw CPU or memory performance is almost never your problem. It’s I/O – and additional network calls only add further I/O to your profile.

Fallacy #4: Simple for Engineers

“A bunch of engineers working in isolated codebases leads to ‘Not my problem’ syndrome”

While on the tin it might seem simpler to have a smaller team focused on one small piece of the puzzle, ultimately this can often lead to many other problems that dwarf the gains you might see from having a smaller problem space to tackle.

The biggest is simply that you now have to run an ever-increasing number of services to make even the smallest of changes. This means you have to invest time and effort into building and maintaining a simple way for engineers to run everything locally. Things like Docker can make this easier, but someone still needs to maintain that setup as things change.

Additionally, it makes writing tests more difficult, as writing a proper set of integration tests means understanding all of the different services a given interaction might invoke, capturing all of the possible error cases, and so on. Even more time is spent simply understanding the system, time which could be better spent continuing to develop it. While I would never tell any engineer that time spent understanding a system is time wasted, I would definitely warn people away from prematurely adding these levels of complexity until they know they need it.
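
To give a feel for what that test scaffolding looks like, here’s a sketch (endpoints and payloads invented for illustration) where every dependent service has to be stood up as a fake HTTP server before a single code path can be exercised. And that’s just the happy path; each failure mode needs its own variant.

```go
// A sketch of what "just write an integration test" starts to look like when
// one request touches several services: each dependency must be stubbed out
// as a fake HTTP server before the code under test can run. Endpoints and
// payloads here are made up for illustration.
package checkout_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestCheckoutHappyPath(t *testing.T) {
	// Fake "user service".
	users := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"id":"u1","active":true}`))
	}))
	defer users.Close()

	// Fake "payment service".
	payments := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusCreated)
	}))
	defer payments.Close()

	// Fake "inventory service" -- and so on, one stub per dependency, plus a
	// variant of each for every failure mode you want to cover.
	inventory := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"in_stock":true}`))
	}))
	defer inventory.Close()

	// svc := checkout.New(users.URL, payments.URL, inventory.URL)
	// ... exercise svc and assert on the result.
}
```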

Finally, it creates social problems as well. Bugs that span multiple services and require many changes can languish as multiple teams need to coordinate and synchronize their efforts on fixing things. It can also breed a situation where people don’t feel responsible, and will push as many of the issues onto other teams as possible. When engineers work together in the same codebase, their knowledge of each other and of the system itself grows in kind. They’re more willing and able to work together to tackle problems, as opposed to being the kings and queens of isolated little fiefdoms.

Fallacy #5: Better for Scalability

“You can scale a microservice outward just as easily as you can scale a monolith”

It’s not incorrect to say that packaging your services as discrete units which you then scale via something like Docker is a good approach for horizontal scalability.

However, it’s incorrect to say that you can only do this with something like a microservice. Monolithic applications work with this approach as well. You can create logical clusters of your monolith which only handle a certain subset of your traffic. For example, inbound API requests, your dashboard front end, and your background job servers might all share the same codebase, but you don’t need to handle all three subsets of work on every box.

The benefit here, just as with a microservice approach, is that you can tune individual clusters to their given workload, as well as scale them individually in response to a surge in traffic to that workload. So while a microservice architecture guides you into this from the get-go, you can apply the exact same method of scaling to a more monolithic process as well.
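
As a sketch of what those “logical clusters” might look like, here’s one hypothetical way to do it in Go: a single binary that reads a role from the environment and only takes on that subset of the work, so each role can be scaled on its own.

```go
// A sketch of the "logical clusters of your monolith" idea: one codebase, one
// binary, and a (hypothetical) APP_ROLE environment variable that decides
// which subset of work a given box takes on. API boxes, dashboard boxes, and
// worker boxes can then be scaled independently.
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	switch role := os.Getenv("APP_ROLE"); role {
	case "api":
		http.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok"))
		})
		log.Fatal(http.ListenAndServe(":8080", nil))
	case "dashboard":
		http.Handle("/", http.FileServer(http.Dir("./dashboard")))
		log.Fatal(http.ListenAndServe(":8081", nil))
	case "worker":
		runBackgroundJobs() // same codebase, no HTTP listener at all
	default:
		log.Fatalf("unknown APP_ROLE %q", role)
	}
}

func runBackgroundJobs() {
	// Placeholder for the job loop; in a real monolith this would pull from
	// the same queue or database the API servers write to.
	select {}
}
```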

When should you use microservices?

“When you’re ready as an engineering organization”

I’d like to close by going over when it could be the right time to pivot to this approach (or, if you’re starting out, how to know if this is the right way to start).

The single most important step on the path to a solid, workable approach to microservices is simply understanding the domain you’re working in. If you can’t understand it, or if you’re still trying to figure it out, microservices could do more harm than good. But if you have a deep understanding, then you know where the boundaries are and what the dependencies are, and a microservices approach could be the right move.

Another important thing to have a handle on is your workflows – specifically, how they might relate to the idea of a distributed transaction. If you know the paths each category of request will take through your system, and you understand where, how, and why each of those paths might fail, you could start to build out a distributed model of handling your requests.

Alongside understanding your workflows is monitoring your workflows. Monitoring is a subject greater than just “Microservice vs. Monolith”, but it should be something at the core of your engineering efforts. You may need a lot of data at your fingertips about various parts of your systems to understand why one of them is underperforming, or even throwing errors. If you have a solid approach for monitoring the various pieces of your system, you can begin to understand your system’s behavior as you increase its footprint horizontally.
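
As a starting point, even the standard library gets you something. Here’s a sketch (not a prescription) of an HTTP middleware that records the status and latency of every request; a real setup would ship these numbers to a metrics system rather than the log.

```go
// A sketch of baseline instrumentation using only the standard library: an
// HTTP middleware that records the status and latency of every request, so
// there is at least some data to reach for when one piece of the system
// starts misbehaving. A real setup would export to a metrics backend instead
// of logging.
package middleware

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// Instrument wraps a handler and logs method, path, status, and duration.
func Instrument(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, r)
		log.Printf("method=%s path=%s status=%d duration=%s",
			r.Method, r.URL.Path, rec.status, time.Since(start))
	})
}
```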

And finally, make the move when you can actually demonstrate to your engineering organization, and to the business as well, that microservices will help you grow, scale, and make money. Although it’s fun to build things and try new ideas out, at the end of the day the most important thing for many companies is their bottom line. If you have to delay putting out a new feature that will make the company revenue because a blog post told you monoliths were “doing it wrong”, you’re going to need to justify that to the business. Sometimes these tradeoffs are worth it. Sometimes they aren’t. Knowing how to pick your battles and spend time on the right technical debt will earn you a lot of credit in the long run.

Takeaways

Hopefully, you now have a series of conditions and questions to go over the next time someone suggests a microservices approach. As I said at the start, my aim was not to tell you that microservices are bad; rather, that jumping into them without thinking through all of the concerns is a recipe for problems down the road.

If you were to ask me, I’d advocate for building “internal” services via cleanly defined modules in code, and carving them out into their own distinct services if a true need arises over time. This approach isn’t necessarily the only way to do it, and it also isn’t a panacea against bad code on its own. But it will get you further, faster, than trying to deal with a handful or more microservices before you’re ready to do so.

 

Sean Kelly (affectionately known as “Stabby”) has been a software engineer for over 12 years. He is an inaugural member of Basho’s “Taishi” developer program, and currently works for Komand, a security orchestration and automation platform that empowers security teams to quickly automate and streamline security operations, with no need for code. Komand enables teams to connect their tools, build dynamic workflows, and utilize human insight to accelerate incident response and move forward, faster. You can reach him on twitter @StabbyCutyou.