The Future is Serverless – with MicroStream

This is an impressive prediction by Adam Bien and was the title of his keynote session at our first MicroStream Summit. The recorded session is now ready to watch in your MicroStream account. Log in or just create your free account to get access to all premium videos and on-demand courses.

What is Serverless actually?

In the serverless computing model, the cloud provider allocates machine resources on demand. Developers are not concerned with capacity planning, configuration, management, maintenance, fault tolerance, or scaling of containers or VMs. Unlike VMs and conventional containers managed with Kubernetes, the serverless model incurs no costs while an app is not in use. Pricing is based on the actual amount of resources an app consumes: no activity, no charges. You can even measure and calculate the cost of every single business transaction, for instance how expensive a particular order, report, or backup is, and you will see it on your provider's monthly invoice.
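To make per-transaction pricing concrete, here is a minimal sketch in Java that derives the cost of a single invocation from memory size and execution time. The rate constant is an assumed example value for illustration, not a current price list of any provider:

```java
public class ServerlessCostSketch {

    // Assumed example rate in USD per GB-second; real provider pricing
    // varies by region and changes over time.
    static final double RATE_PER_GB_SECOND = 0.0000166667;

    /** Cost of one invocation, given allocated memory in MB and duration in ms. */
    static double invocationCost(int memoryMb, long durationMs) {
        double gbSeconds = (memoryMb / 1024.0) * (durationMs / 1000.0);
        return gbSeconds * RATE_PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // A 512 MB function running 200 ms per order, extrapolated to a million orders:
        double perOrder = invocationCost(512, 200);
        System.out.printf("per order: $%.10f, per million orders: $%.2f%n",
                perOrder, perOrder * 1_000_000);
    }
}
```

Because every business transaction maps to one or more invocations, summing such figures per use case is exactly what ends up on the monthly invoice.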

Cloud-native can be cheaper

Today, Kubernetes is the state of the art for running containerized microservices in the cloud. Unfortunately, Kubernetes is complex to use and expensive. Whether you run a container or not, you pay base costs for the Kubernetes cluster itself, and not only for your production system: you should also have numerous development and testing environments. That's why many cloud projects run out of money. The only thing you really need in the cloud is something that starts and stops your containers. For your Java code there is no difference whether it runs on Kubernetes or on something Docker-based.

In Amazon Web Services (AWS), the most obvious choice is AWS Fargate. Fargate is a serverless option of Amazon Elastic Container Service (ECS), a container management service that runs containers without you having to manage servers or clusters of Amazon EC2 instances. Fargate is simple to use: the configuration is about 20 lines of JSON, and no YAML is needed. It is way simpler than Kubernetes, and there are no base costs for your environment.
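Such a task definition really is compact. A hedged sketch is shown below; the family name, image URI, and sizing values are illustrative assumptions, not a tested deployment:

```json
{
  "family": "orders-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/orders:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Registering this with ECS and pointing a service at it is essentially all the orchestration configuration there is; the cluster management that Kubernetes makes you own stays on Amazon's side.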

Running microservices as Lambdas

Even more interesting is AWS Lambda. It is an event-driven, serverless computing service that runs code in response to events and automatically manages all resources that code requires. When an event occurs, the related Lambda is invoked, executes the Java code, and falls asleep again. You pay only for the execution time; Lambdas that are not running are not charged. The cold start of a Lambda is slow and can take seconds, but all subsequent starts are much faster (only some milliseconds). Astonishingly, Java code running within a Lambda is then even faster than the other supported languages such as Python or TypeScript, because of Java's JIT compiler. A single Lambda usually comprises about 10 classes, so microservices can be executed as Lambdas; even "fat functions" are still tiny. To reduce startup time you can optionally use GraalVM. Thus, by using Lambda with Java you can save money, and in addition, Java lets you write significantly larger and more complex apps within a Lambda. Microsoft Azure provides a similar architecture.

If you use synchronous Lambdas you can use JAX-RS, Bean Validation, and even CDI. But if you use asynchronous Lambdas you don't need MicroProfile or any framework at all: the JSON objects from the outside world arrive already deserialized, and you can focus on business logic.
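A Lambda-sized function of this kind can be sketched in a few lines. In a real deployment the class would implement the RequestHandler interface from the aws-lambda-java-core library; to keep the sketch self-contained and runnable, the AWS types are elided here and only the handler shape is shown. The class and field names are illustrative:

```java
import java.util.Map;

// Sketch of a "fat function": in a real deployment this class would implement
// com.amazonaws.services.lambda.runtime.RequestHandler<Map<String, Object>, Map<String, Object>>
// from aws-lambda-java-core; the AWS types are elided so the sketch compiles standalone.
public class OrderHandler {

    /** Invoked per event; the JSON payload arrives already deserialized into a Map. */
    public Map<String, Object> handleRequest(Map<String, Object> event) {
        String orderId = String.valueOf(event.get("orderId"));
        // ... business logic only, no web framework required ...
        return Map.of("status", "ACCEPTED", "orderId", orderId);
    }
}
```

Note that nothing framework-specific appears in the method body; this is why asynchronous Lambdas can get by without MicroProfile entirely.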

Lambdas using MicroStream

With MicroStream you can add state to this stateless world. The problem with Lambdas is that they are stateless: in all serverless runtimes, functions are invoked, execute code, and go back to sleep, losing their state in the process. However, you usually have to work with state. To solve this problem you can persist the state of your Lambda as JSON in AWS S3 or Amazon DynamoDB. S3 is an object storage service; DynamoDB is a NoSQL database. S3 is Amazon's cheapest data storage, about 10x cheaper than DynamoDB. However, serializing objects into JSON and back requires ugly boilerplate code, and more complex object graphs with circular references cannot be serialized into JSON at all.
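The circular-reference problem can be demonstrated with a few lines of self-contained Java. The domain classes and the hand-rolled toJson methods below are illustrative assumptions standing in for typical serialization boilerplate; the mutual reference sends the naive serializer into infinite recursion:

```java
// Illustrative domain classes with a circular reference: a Customer holds
// its last Order, and that Order points back to the Customer.
class Customer {
    String name;
    Order lastOrder;

    // Hand-rolled JSON boilerplate, as one would write it without a framework.
    String toJson() {
        return "{\"name\":\"" + name + "\",\"lastOrder\":"
                + (lastOrder == null ? "null" : lastOrder.toJson()) + "}";
    }
}

class Order {
    String id;
    Customer customer;

    String toJson() {
        return "{\"id\":\"" + id + "\",\"customer\":"
                + (customer == null ? "null" : customer.toJson()) + "}";
    }
}
```

Calling toJson() on such a graph recurses between the two objects until the stack overflows; JSON frameworks fail on such cycles in comparable ways unless the cycle is manually broken or annotated away.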

MicroStream basically enables you to store any Java object as a blob in a very simple, object-oriented, convenient way, for instance in a plain file or in AWS S3. So MicroStream and S3 are a great fit. With MicroStream you can read the state of a Lambda from S3, work with it, and finally write it back to S3. This is an optimization that will be visible on your monthly invoice, and the code looks remarkably clean. There are various serialization frameworks out there, but MicroStream has decisive benefits: implementing Java's Serializable interface is not mandatory, so any Java object, even objects from third-party APIs, can be serialized. Object graphs of any size and complexity can be serialized, and circular references are trouble-free; the depth of an object graph is not limited, because there is no stack-based recursion. Last but not least, you can serialize any POJO: MicroStream requires no specific interfaces, superclasses, or annotations.
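A minimal sketch of this read, work, write-back cycle with MicroStream's embedded storage API is shown below. Local file storage is used to keep the sketch simple; targeting an S3 bucket instead is done by configuring MicroStream's AWS S3 file-system connector, which is only indicated in a comment here. The root class and field names are illustrative assumptions:

```java
import java.nio.file.Paths;

import one.microstream.storage.embedded.types.EmbeddedStorage;
import one.microstream.storage.embedded.types.EmbeddedStorageManager;

// Illustrative root of the object graph to persist. Any POJO works:
// no interface, superclass, or annotation is required.
class OrderBook {
    // ... orders, customers, an arbitrary object graph ...
}

public class LambdaStateSketch {
    public static void main(String[] args) {
        OrderBook root = new OrderBook();

        // Load the persisted graph (or start empty on first run). With the
        // MicroStream AWS S3 connector configured, the storage target would be
        // an S3 bucket instead of this local directory (assumption: connector
        // is set up separately via MicroStream's AFS configuration).
        EmbeddedStorageManager storage = EmbeddedStorage.start(root, Paths.get("order-book-storage"));

        // ... work with the graph: the Lambda's business logic ...

        storage.storeRoot();  // write the changed state back
        storage.shutdown();   // the Lambda can now go back to sleep
    }
}
```

The business logic only ever touches plain Java objects; persistence is the two calls at the edges of the handler.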


Watch now:


About Adam Bien:

Adam Bien is a Java Champion, freelancer, book author, keynote speaker, consultant, architect, trainer, podcaster, and Java enthusiast who has been using Java since JDK 1.0. He regularly organizes Java / Web / Architecture online live workshops and a monthly Q&A live streaming show.
