Book Reviews

Building Microservices by Sam Newman

ISBN-13: 978-1491950357
Publisher: O'Reilly Media
Pages: 280

I worked on my first microservice architecture back in 2005. Of course, at that time we had neither the label nor the surrounding hype. We arrived at a microservice architecture by responding to specific requirements, balanced against our experience of the system we wanted to replace. It became micro because we also followed the UNIX philosophy. The system turned out to be quite successful, both in operation and from a maintenance perspective. However, there were certain aspects of system complexity that worried me. As microservices now enter the mainstream, I wanted to catch up on the recent ideas. Sam Newman's book is an excellent starting point.

More than Technology

The essence of microservices is to encapsulate each fine-grained responsibility in its own service. If we manage to keep the services loosely coupled, a microservice architecture delivers a set of interesting properties that allow us to respond faster to change. First, we're able to modify a service without affecting other services. Since a microservice is supposed to be, well, micro, this architectural style also provides a beautiful way to deal with decaying legacy code: just throw it away and re-write the service. Another advantage is that we're able to build and deploy services in isolation, which leads to higher availability. For any of this to work in practice, you need a workflow and a safety net that support your application. Sam Newman spends a significant part of the book covering such techniques: continuous integration, automated deployment and containers, monitoring strategies, and more. I'm impressed by all the ground Sam manages to cover. Sure, each chapter is more of an overview and you shouldn't expect a detailed treatment. Yet this approach is one of the most important parts of the book since it points us in the right direction while maintaining the overall picture.

Done right, we may also get social benefits from microservices in addition to the possible technical advantages. For example, microservices provide natural boundaries for the different teams that work on your system. It was a pleasant surprise to see that Sam Newman dedicates a whole chapter to this topic, discussing microservices in the context of Conway's Law. It's one of the best discussions of Conway's Law I've read. My own experience (I analyze codebases for a living at Empear) is that we often fail to align organizational factors, such as team responsibilities, with the way we actually work with the system. So even though Conway's Law has received a lot of attention over the past years, it's still important to keep repeating this timeless observation: microservices or not, your organization has to be aligned with your architecture.

Building Microservices covers a lot of ground. It's a complete book in the sense that it dives into the whole chain of development. That also means the book won't cover any specific technology in depth. I think that's a good trade-off since it gives us an overview and context that guide further learning. After all, each of the discussed technologies is complex enough to mandate its own book (for example, you won't learn all the details of how to design a REST API to integrate different services). One aspect that stands out, though, is all the valuable tips Sam shares throughout the book. My favorite example is the section Beware Too Much Convenience, where Sam discusses how technologies that simplify the implementation of a service (e.g. frameworks that automatically map database operations to an external API) may make the service harder to evolve since its API gets coupled to the mechanics of the implementation. These tips are all shared as small stories, and it's clear that the recommendations come from hard-won experience. To me, these stories are the highlights of the book. Brilliant!
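
To make that coupling concrete, here is a minimal sketch of my own (not from the book), assuming a hypothetical CustomerRecord storage model:

    # Hypothetical illustration of "Beware Too Much Convenience": if a
    # framework auto-exposes the storage model, the published API mirrors
    # the database schema and evolves in lockstep with it.
    from dataclasses import dataclass, asdict

    @dataclass
    class CustomerRecord:          # internal storage model
        id: int
        full_name: str
        discount_rate: float       # implementation detail we may want to change

    def expose_record_directly(record: CustomerRecord) -> dict:
        # Convenient, but every schema change leaks straight into the API.
        return asdict(record)

    def expose_explicit_view(record: CustomerRecord) -> dict:
        # A hand-crafted representation keeps the API stable while the
        # storage model is free to evolve behind it.
        return {"id": record.id, "name": record.full_name}

The convenient version saves a few lines today, but it ties every consumer of the API to the current shape of the database.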

Design to Evolve

Now, let's go back to where I started. What has happened in the past 10 years? What did I learn? Well, we did several things right in our 2005 system. We had an extensive suite of automated integration checks. Our deployment was automated and we had natural team boundaries aligned with the way the system grew. However, we didn't do any continuous integration back then, which would have allowed us to move forward faster. My most important takeaway, however, is how weak we were on the monitoring aspect. In particular, a technique like synthetic monitoring would have saved us a lot of pain. The idea behind synthetic monitoring is that you generate fake events in the live system, measure the response time, and trigger a warning in case certain thresholds are exceeded. Of course, you also need a way to ensure the fake events don't result in actual side effects. I wish I'd known about synthetic monitoring ten years ago. It's an interesting approach that pretty much addresses the challenge of measuring performance across multiple services.
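
To illustrate the idea, here is a minimal sketch in Python; the place_order call, the synthetic flag, and the two-second threshold are all assumptions of mine, not taken from the book or from our 2005 system:

    import time
    import uuid

    RESPONSE_TIME_THRESHOLD_SECONDS = 2.0

    def send_synthetic_event(system):
        # The synthetic flag lets downstream services recognize the fake
        # order and skip real side effects such as billing or shipping.
        event = {"order_id": str(uuid.uuid4()), "synthetic": True}
        start = time.monotonic()
        system.place_order(event)
        elapsed = time.monotonic() - start
        if elapsed > RESPONSE_TIME_THRESHOLD_SECONDS:
            alert(f"synthetic order took {elapsed:.2f}s, threshold exceeded")

    def alert(message):
        print("WARNING:", message)   # stand-in for a real alerting channel

Run periodically against the live system, a probe like this measures what your users actually experience across all the services involved in the flow.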

One of the reasons I chose a microservice architecture for the system I worked on back in 2005 was fault tolerance. In case a service failed, we could often just restart it. Sure, we'd lose some data, but for most parts of the system that wasn't an issue. In the few cases where it was, we failed hard by forcing a restart of the entire system and expected a redundant secondary system to take over without any operational disturbance. Even if such events were rare, it was still an availability concern. That took me on the path to learning Erlang, which I think is great not only as a language but also as the glue in a microservice-style system. I got to design an Erlang-based architecture a few years after my initial microservices experience. It was striking how much simpler that was. Back in 2005 we had unknowingly re-implemented several key aspects of Erlang without gaining all of its benefits.

See the Trade-Offs behind the Hype

So, do microservices allow us to respond faster to change? My experience is that certain aspects of development get cheaper. Part of that comes from the isolation of change and the increased testability. When I've worked with microservices, we always had a way to launch just the services impacted by a specific change and use custom test services to simulate the rest of the system. We also took advantage of another selling point behind microservices: polyglot codebases. The 2005 system I worked on served a safety-critical sub-system. While our microservices part wasn't safety-critical itself, the overall safety principles still mandated that we write it in C++ (yes, sweet irony). However, the safety requirements only applied to the application code itself. That meant we were free to use whatever we wanted for our automated tests and diagnostics. This turned out to be the aspect of the architecture where we really reaped the benefits of microservices. We did most of the automated system-level tests in Expect (a dead language these days, but beautiful at its job of automating interactive sessions). We used shell scripts to inject messages and to inspect the system's reaction. Each of those test and diagnostics utilities was implemented as its own microservice without any impact on the other parts of the system. To me, that kind of growth pattern is exactly what makes a software system truly maintainable: the less code we need to modify in order to extend a system's responsibilities, the better off we are.
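
As a small illustration of what such a custom test service can look like, here is a sketch in Python (rather than the Expect and shell scripts we actually used); the inventory endpoint and its canned answer are hypothetical:

    # A stand-in service that answers with canned responses so the services
    # under change can be launched and tested in isolation.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class FakeInventoryService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Always report the requested item as in stock; good enough for
            # the scenario under test, and no other real service needs to run.
            body = json.dumps({"item": self.path.strip("/"), "in_stock": True})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), FakeInventoryService).serve_forever()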

However, while each individual change got easier, the behavior of the system as a whole became more complex. One trade-off we make as we focus on small, independent pieces of code is that we lose the holistic picture. We can balance that to a certain degree with automated system-level tests, but you'll probably still find it hard to understand how the different services interact. The total complexity you'd find in a traditional architecture is still there, only now it's distributed as well.

The microservice hype over recent years worries me. When I wrote Your Code as a Crime Scene, I included a section on analyzing and identifying early warnings in how your microservices evolve. While I think I've captured the essence, the increasing hype around this architectural style makes me regret that I didn't evolve that section into a full chapter. Remember, today's hype is tomorrow's legacy code.

This is also one of the reasons I enjoyed this book. Sam Newman makes it clear that a microservice architecture isn't simple. Right from the start he warns us that microservices are a poor choice as a golden hammer. He makes it clear that a distributed system is complex in itself and that microservices shouldn't be your default choice. Sam also explains that the potential advantages don't come for free: you need to invest a lot in automation to make them happen (perhaps even to the degree that your development focus shifts to supporting code rather than the application itself). This critical eye is part of what makes the book pragmatic, and it's also the reason Building Microservices has lessons that go way beyond its topic and apply to software architecture in general. Combine that with the accessible writing style and you have a highly recommended read for everyone working on large-scale software, microservices or not.

Reviewed April 2016