
New trends in the Microservices world (as of 2024)

By Oleh Kasian, Systems Architect II at EPAM Systems, Inc.

It has been more than 10 years since I first encountered the term "microservice." It no longer sounds like a buzzword. However, our understanding of microservices has changed over time, with patterns and frameworks continually being developed. For instance, back in 2014, an article about microservices would most likely mention Docker and CI/CD. Fast forward to 2024, and industry thought leaders are talking about observability, scalability, resilience, and operational efficiency. I think this shift is a fantastic indicator of how the paradigm has matured.

In 2020, I was working for a client on a modernization strategy for a decade-old enterprise architecture. The aim was to shift towards a more modern, distributed approach, with microservices on Kubernetes being the primary technical focus. Given my prior involvement in implementing microservices for smaller companies, I started the project with a substantial 'golden hammer' bias, which proved problematic.

Modernizing to microservices has become standard practice for software engineers over the past decade. It involves dividing the monolith into logical pieces and moving them out one by one until you are left with microservices. It is a simple process, unless you end up with 500+ microservices, as in the example below.

[Picture 1: “AWS re:Invent 2015: A Day in the Life of a Netflix Engineer”]

As I proceeded with this modernization, I identified and documented a list of problems that needed solutions. At that point, some of the issues did not have mature solutions. Let's delve into each of them and examine the shifts that have occurred over the past four years.

Event management, support for legacy event brokers

Messaging systems like JMS were introduced long before the concept of microservices. They have evolved over time, changing protocols and the way you communicate with message brokers. Now imagine a large enterprise with hundreds of systems developed by various teams. Whenever a new messaging technology like RabbitMQ or Kafka was introduced, some teams were eager to try it, while others, more conservative, resisted the idea. In other cases, features had to be rushed out, or an application was no longer under active development, so its broker was never upgraded. In the end, you are typically left with an assortment of old and new messaging technologies.

[Picture 2: “Serverless Workflow”]

Another problem is managing event schemas. JMS is Java-centric, so you exchange serialized Java objects. AMQP allows even unstructured data to be exchanged. Kafka (with Confluent) adds a Schema Registry. The list goes on. Commonly, efforts are made to arrange messages and events into a cohesive, reusable format, but the larger the company, the messier it gets.

These two challenges become far more visible when you contemplate migrating to microservices, which are likely to exchange more events than a monolith. Not only do you have an increased number of events, but you must also maintain configurations for your assortment of old and new messaging technologies. You must also manage the networking infrastructure, authorization, etc., multiplied by the number of microservices. Moreover, ownership of each event will transfer to each microservice team and become less centralized, amplifying the effect of schema changes.

A solution for supporting legacy systems and complex routing came in the form of an 'event mesh' pattern, even though it was not very mature at that time. If you're interested in learning more, I recommend reading about the Event Mesh Pattern.

The second problem, however, had no real solution. As an architect, I love standards. I always wondered why standards like WSDL and OpenAPI exist for web services, while nothing similar exists for event-driven communication. Of course, I wasn’t the only one pondering this. Around 2020, CloudEvents started gaining popularity, and in 2024 it finally graduated from the CNCF incubator and is now considered mature. You can look at the specification on GitHub (CloudEvents specification), but in general, an event looks like this:
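A minimal CloudEvents 1.0 event in its JSON format; the type, source, and payload values below are illustrative, not taken from the specification:

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders/service",
  "id": "A234-1234-1234",
  "time": "2024-04-05T17:31:00Z",
  "datacontenttype": "application/json",
  "data": { "orderId": 12345, "status": "CREATED" }
}
```

The `specversion`, `id`, `source`, and `type` context attributes are required; everything else, including the `data` payload, is optional, which is what lets the same envelope ride over different brokers and protocols.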

Additionally, the project ships with a good number of SDKs for different languages and bindings for different protocols, the idea being that you don’t have to change all your code when you migrate to a newer version.
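As a rough illustration of what those SDKs handle for you, here is a minimal, SDK-free Python sketch that produces and checks a CloudEvents 1.0 JSON envelope (the event type and payload are made up; a real project would use one of the official SDKs instead):

```python
import json

# Required context attributes per the CloudEvents 1.0 specification.
REQUIRED_ATTRIBUTES = {"specversion", "id", "source", "type"}


def make_event(event_type: str, source: str, event_id: str, data: dict) -> str:
    """Serialize a CloudEvents 1.0 envelope in the JSON format."""
    envelope = {
        "specversion": "1.0",
        "type": event_type,
        "source": source,
        "id": event_id,
        "datacontenttype": "application/json",
        "data": data,
    }
    return json.dumps(envelope)


def validate(raw: str) -> bool:
    """Check that all required CloudEvents context attributes are present."""
    event = json.loads(raw)
    return REQUIRED_ATTRIBUTES.issubset(event)


# Hypothetical producer usage: build an event and verify it before publishing.
raw = make_event("com.example.order.created", "/orders/service", "42", {"orderId": 1})
```

The real SDKs add protocol bindings (HTTP, Kafka, AMQP) on top of this, so the producer code stays the same when the transport changes.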

Scheduling, batch processing, workflow management solutions

Another challenge I encountered was the need to schedule processes: for instance, a process that connects a couple of systems or carries out some batch processing. In a monolith, you typically have a built-in scheduler (like Quartz) to solve this. So how are microservices different? The answer lies in scalability.

The jobs that used to live inside your monolith now have to be duplicated across multiple services. The data is heavily distributed and replicated to keep it closer to each service (at the cost of consistency). To make things even more interesting, due to the distributed nature of microservices, the decision on how to perform scheduled jobs will be made several times by different people. Without firm rules and standards, you will end up with a big mess.

As with API management and event management, solutions to this problem have existed for a long time. Each cloud platform handles it in its own way; Kubernetes, for instance, natively supports cron jobs, and the market offers various specialized tools. The challenge is that each of these tools has its own domain-specific language (DSL). If different tools are used on different platforms, support becomes increasingly complex.
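The Kubernetes flavor of such a DSL is the CronJob manifest; a minimal example (the job name and container image below are hypothetical):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-reconciliation
spec:
  schedule: "0 2 * * *"        # Kubernetes-specific cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: reconcile
              image: registry.example.com/reconcile:1.0
          restartPolicy: OnFailure
```

This works well on Kubernetes, but it is tied to Kubernetes; a team running the same job on a managed cloud scheduler or a workflow engine would express it in an entirely different DSL.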

Although it is not yet fully mature, I see the solution to this issue in Serverless Workflow. This is a relatively new open-source specification, but it is quite promising. Let's break down its benefits:

- It doesn't impose the use of any particular tool: any tool supporting the specification can be used, much like OpenAPI is supported by multiple API management solutions.

- It integrates with other specification standards such as OpenAPI, AsyncAPI, CloudEvents, and GraphQL.

- It supports a wide variety of use cases, including scheduled jobs and event handling.

The schema for a workflow might resemble the image provided below.

[Picture 4: “CNCF Landscape”]
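As a sketch of what such a definition involves, here is a minimal workflow for a nightly job, written against version 0.8 of the Serverless Workflow specification (the workflow name, function name, and endpoint are hypothetical):

```yaml
id: nightly-reconciliation
version: '1.0'
specVersion: '0.8'
start:
  stateName: Reconcile
  schedule:
    cron: "0 2 * * *"          # same cron expression, tool-agnostic DSL
states:
  - name: Reconcile
    type: operation
    actions:
      - functionRef: reconcileAccounts
    end: true
functions:
  - name: reconcileAccounts
    # Functions can be bound to OpenAPI operations, among other things
    operation: https://example.com/api/openapi.json#reconcile
```

The same definition can, in principle, run on any conformant runtime, which is exactly the portability that per-platform cron DSLs lack.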

Project bootstrapping. Microservice Chassis.

Let us say you have finally built your microservices platform. You have observability, logging, tracing, a service discovery solution, and an API gateway. The problem is, it is useless unless developers actually integrate their services with it. How do you make them do it? Surely, writing a tutorial is a good way, but then each developer will not only have to read and understand it but also implement it each time they start a new project.

There is a better solution for this: the Microservice Chassis pattern.

[Picture 5: “Microservice Chassis pattern”]

In 2020, I started with a couple of template repositories, which sufficed at the time. But the issue lies in the resulting code: it often requires alterations, mostly configuration-related. Additionally, managing all those repositories can be daunting, and promoting their use and alerting people to changes becomes more challenging as they multiply.

The microservice chassis pattern is still employed in 2024 but has evolved into something even more appealing. Instead of merely creating template repositories, companies now integrate them into their developer portals. These allow you to set up all the infrastructure for a new project from one place, and the configuration is written into your code repository (or a config management solution like Config Server).
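For instance, in a developer portal such as Backstage, a chassis can be exposed as a software template. A trimmed-down, hypothetical template descriptor might look like this (the owner, repository, and skeleton path are placeholders):

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: java-microservice-chassis
  title: Java Microservice (Chassis)
spec:
  owner: platform-team
  type: service
  steps:
    # Copy the chassis skeleton: build files, observability config, CI/CD pipeline
    - id: fetch
      action: fetch:template
      input:
        url: ./skeleton
    # Create the repository and push the generated code
    - id: publish
      action: publish:github
      input:
        repoUrl: github.com?owner=my-org&repo=my-new-service
```

The chassis itself lives in the skeleton, so updating it in one place propagates to every project created from the template afterwards.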

Future vision

[Picture 6: Microservice architecture by Oleh Kasian]


Here is how I envision the foreseeable future of the microservice architecture:

- It all starts with a developer creating a new project in the developer portal.

- All the necessary infrastructure for the new project is provisioned through the portal, and the configuration is embedded within your code repository (or Config Server).

- As part of the template, you have SDKs for OpenAPI, CloudEvents, and Serverless Workflow, along with all mandatory observability tools, such as OpenTelemetry, OpenTracing, and a logging framework configured with a proper log format.

- CI/CD provisioning is integral to the template, allowing your application to be deployed immediately.

- The only part left for you is to write some code.

Of course, this is only my vision. With rapidly evolving technology, it is hard to predict what will happen next, but I am certain it will be intriguing.


EPAM Systems, Inc., a leading digital transformation services and product engineering company, helps its clients navigate the waves of digital transformation, building solutions that help them level the playing field and stay competitive through constant market disruption.

EPAM extended its competencies to Romania in 2020 and, for the past few years, has embraced a remote-working format, employing people from several cities in Romania and opening offices in Bucharest, Iasi, and Cluj. The company focuses on building long-term partnerships with its clients, enabling them to reimagine their businesses through a digital lens. EPAM helps its clients become faster, more agile, and more adaptive enterprises by delivering solutions through best-in-class engineering, strategy, design, consulting, education, and innovation services.


Learn more at and follow us on LinkedIn.


Oleh Kasian has just under 10 years of experience as a Systems Architect. Highly adept at designing reliable, scalable, and highly available infrastructure solutions built on public clouds and container technologies, Oleh focuses on AWS, Azure, Kubernetes, cloud-native Java, and Python development.


