Over decades of working in the transport industry, we have observed that developers like to build either monolithic services or something like a “distributed monolith”. A distributed monolith can look like it was split in a way that made sense during the initial rollout (a user portal web UI coupled to the user portal backend), but it becomes a problem when you want a different web UI in a different project (for example, integrating into an existing city’s portal).
The galaxy far far away
Unsurprisingly, this is not unique to the transport industry. This was (and often still is) a pattern with many legacy enterprise systems as well. What it meant for businesses is that each time they wanted to add new functionality or upgrade certain components, they had to embark on a multi-year project to rewrite the whole thing. These projects would often get cancelled mid-way due to lack of funds or too many issues with the new service.
Enter Microservices
With microservices, the promise is that you can build a service that can be reused by many different people and many different systems. For example, a single self-service portal backend can serve multiple web frontends or mobile apps.
What’s important with microservices is to keep their number under control. Each new service will mean that:
- You need to deploy and configure this new service
- Implement a server and clients for inter-service communication
- Configure a database connection (if needed), which grows the total number of connections your database has to handle
- Have an upgrade strategy for this new service
- Collect logs, make sure it’s easy to debug and audit this service
As you can see, having one to three services is fine; however, if your count climbs to ten or more, you probably want to look into ways to merge some of your microservices together.
Dos And Don’ts
While it’s easy to just fully embrace the microservices approach and split per process, doing so often leads to catastrophic results. Some semi-obvious points:
Dos:
- Split into a separate service when you are exposing a new external API group. This will allow you to upgrade the services or even run multiple versions of the same service in parallel.
- When possible, minimize direct database access. Ideally, have a centralized control-plane style server that exposes database access via an internal gRPC service (see the sketch after these lists). You can then run automated database migrations without worrying about updating all your services at once.
- Use either gRPC or an OpenAPI spec for inter-service communication. Ideally, both servers and clients are auto-generated.
Don’ts:
- Don’t try to split by a process. It’s totally okay to have a service that runs tens, hundreds or thousands of concurrent processes.
- Avoid using multiple languages. It might look fun to have one service in Go and another in Python, but eventually you will run into problems with keeping a consistent testing strategy, connectivity quirks, build/deployment issues, etc.
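To make the “centralized database access” point above concrete, here is a minimal sketch in Go. The `UserStoreClient` interface and its message types are hypothetical stand-ins for what protoc would generate from a proto file; the point is simply that application services talk to this internal gRPC API instead of opening their own database connections.

```go
package portal

import (
	"context"
	"fmt"
)

// GetUserRequest/GetUserResponse stand in for protoc-generated message types
// (hypothetical; in a real project these come from the generated *.pb.go files).
type GetUserRequest struct{ Id string }
type GetUserResponse struct{ Name, Email string }

// UserStoreClient stands in for the protoc-generated client of the internal
// control-plane service that owns the database.
type UserStoreClient interface {
	GetUser(ctx context.Context, req *GetUserRequest) (*GetUserResponse, error)
}

// PortalService is an ordinary application service. It never touches the
// database directly; it only depends on the internal gRPC API, so schema
// migrations stay contained inside the control-plane service.
type PortalService struct {
	users UserStoreClient
}

func (p *PortalService) Greeting(ctx context.Context, userID string) (string, error) {
	u, err := p.users.GetUser(ctx, &GetUserRequest{Id: userID})
	if err != nil {
		return "", fmt.Errorf("fetching user %s: %w", userID, err)
	}
	return "Hello, " + u.Name, nil
}
```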
Protocols
With microservices, the idea is to split logical parts of the system into modules that can operate independently of the rest of the components, as long as their dependencies implement the correct APIs.
Internally, Deliust chose gRPC for communication between components, while relying on OpenAPI for external communication.
Some advantages of using gRPC internally:
- All server and client code is directly generated from the protocol buffers.
- Static types on both ends leave no room for miscommunication.
- High performance compared to other encoding types like JSON (let’s not even mention XML here :))
- Supported by most languages
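As an illustration of the code-generation point, here is a hedged sketch of what the server side of such an internal service can look like in Go. The proto definition and the message types below are hypothetical; with real protoc output you would implement the generated server interface in the same way and register it on a `grpc.Server`, and the only hand-written part is the business logic.

```go
package userstore

import (
	"context"
	"database/sql"
)

// Hypothetical proto, shown here as a comment for context:
//
//   service UserStore {
//     rpc GetUser(GetUserRequest) returns (GetUserResponse);
//   }
//
// protoc would generate the request/response structs, the client, and the
// server interface; only the implementation below is written by hand.

type GetUserRequest struct{ Id string }
type GetUserResponse struct{ Name, Email string }

// server is the single component that is allowed to talk to the database.
type server struct {
	db *sql.DB
}

// GetUser implements the (hypothetical) generated UserStore server interface.
func (s *server) GetUser(ctx context.Context, req *GetUserRequest) (*GetUserResponse, error) {
	row := s.db.QueryRowContext(ctx,
		"SELECT name, email FROM users WHERE id = $1", req.Id)
	var resp GetUserResponse
	if err := row.Scan(&resp.Name, &resp.Email); err != nil {
		return nil, err
	}
	return &resp, nil
}
```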
However, when it comes to external connectivity from end-user devices, we chose not to use it, mostly because:
- gRPC traffic is harder to route from external sources; not all Kubernetes ingress controllers support it
- Because gRPC relies on long-lived connections, you need extra work to load-balance requests (Envoy has a solution, but then it’s an additional component)
Hence, we chose to rely on a more traditional REST API for mobile apps and just generate TypeScript clients from the OpenAPI spec to use in the web UIs.
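A hedged sketch of how the external REST edge can sit in front of the internal gRPC services, reusing the same hypothetical `UserStoreClient` stand-in as above. The handler shape is what an OpenAPI spec for a `GET /users/{id}` endpoint would describe, and the JSON schema is what the TypeScript clients would be generated from.

```go
package gateway

import (
	"context"
	"encoding/json"
	"net/http"
)

// Hypothetical stand-ins for the protoc-generated internal client.
type GetUserRequest struct{ Id string }
type GetUserResponse struct{ Name, Email string }
type UserStoreClient interface {
	GetUser(ctx context.Context, req *GetUserRequest) (*GetUserResponse, error)
}

// userJSON is the external REST representation, matching the OpenAPI schema
// that the web and mobile clients are generated from.
type userJSON struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// GetUserHandler exposes GET /users/{id}: plain HTTP+JSON to the outside,
// gRPC to the internal control-plane service.
func GetUserHandler(users UserStoreClient) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		id := r.PathValue("id") // Go 1.22+ path parameter
		u, err := users.GetUser(r.Context(), &GetUserRequest{Id: id})
		if err != nil {
			http.Error(w, "user not found", http.StatusNotFound)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(userJSON{Name: u.Name, Email: u.Email})
	}
}
```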
Same same but different
When deploying the service to multiple locations, the only difference is the Docker image name carrying the specific customer’s branded website.
All the internals are the same, and updates become fast, simple, and uneventful.
Integrating with hardware providers
Integrating with card reader hardware can be painful, as manufacturers often provide SDKs only in C or C++. Even then you would still need to integrate with the screen and sound separately.
Instead of writing our own SDK, we made an agreement with the manufacturer: they implement a gRPC server that performs the basic functions, and we implement a client that subscribes to card tap events and issues RPCs to play sounds and control what is displayed.
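A minimal Go sketch of the shape of that integration. The `CardReaderClient` interface, its streaming method, and the RPC names are hypothetical stand-ins for whatever the manufacturer’s proto actually defines; the point is that one generated client covers every reader that implements the same service.

```go
package reader

import (
	"context"
	"log"
)

// Hypothetical stand-ins for the manufacturer's protoc-generated types.
type TapEvent struct{ CardUID string }
type PlaySoundRequest struct{ Name string }
type DisplayRequest struct{ Text string }

// TapStream mimics a generated server-streaming client: each Recv blocks
// until the reader reports the next card tap.
type TapStream interface {
	Recv() (*TapEvent, error)
}

// CardReaderClient is the single client interface we code against,
// regardless of which manufacturer's device is on the other end.
type CardReaderClient interface {
	SubscribeTaps(ctx context.Context) (TapStream, error)
	PlaySound(ctx context.Context, req *PlaySoundRequest) error
	Display(ctx context.Context, req *DisplayRequest) error
}

// Run subscribes to tap events and reacts with sound and display RPCs.
func Run(ctx context.Context, reader CardReaderClient) error {
	stream, err := reader.SubscribeTaps(ctx)
	if err != nil {
		return err
	}
	for {
		tap, err := stream.Recv()
		if err != nil {
			return err // stream closed or reader disconnected
		}
		log.Printf("card tapped: %s", tap.CardUID)
		if err := reader.PlaySound(ctx, &PlaySoundRequest{Name: "beep"}); err != nil {
			return err
		}
		if err := reader.Display(ctx, &DisplayRequest{Text: "Ticket accepted"}); err != nil {
			return err
		}
	}
}
```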
Integration becomes much simpler, as the client does not change across different hardware manufacturers.
Wrapping up
As you can see, there’s no magic here. Everything relies on being explicit when designing interfaces between components. Ideally, you would have only a single service; however, in the real world that is often quite hard, and you need to iterate on specific components depending on where the development focus is.
We also like to give our customers complete freedom to simply hand us a Docker image that we run as a distributor portal or an end-user self-service portal.