
Decoupling microservices with Apache Camel and Debezium

The rise of microservices-oriented architecture brought us new development paradigms and mantras about independent development and decoupling. In this world, we aim for independence, yet we still need to react to state changes happening in other enterprise domains.

I’ll use a simple and typical example in order to show what we’re talking about. Imagine the development of two independent microservices: Order and User. We designed them to expose a REST interface and to each use a separate database, as shown in Figure 1:


Figure 1: Order and User microservices.

We must notify the User domain about any change happening in the Order domain; in this example, that means updating the user’s order_list. For this reason, we’ve modeled the User REST service with addOrder and deleteOrder operations.

Solution 1: Queue decoupling

The first solution to consider is adding a queue between the services. Order will publish events that User will eventually process, as shown in Figure 2:


Figure 2: Decoupling with a queue.

This is a fair design. However, if you don’t use the right middleware, you will mix a lot of infrastructure code into your domain logic. Now that you have queues, you must develop producer and consumer logic. You also have to take care of transactions: the problem is making sure that every event ends up correctly in both the Order database and the queue, as the hypothetical sketch below illustrates.
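
To make the coupling concrete, here is a hypothetical sketch (not taken from the demo) of what the Order service could end up looking like with a queue bolted on; OrderRepository and Order are assumed domain types, and Spring’s JmsTemplate stands in for whatever messaging client you would actually use:

import org.springframework.jms.core.JmsTemplate;

public class OrderService {

    private final OrderRepository orderRepository; // assumed domain repository
    private final JmsTemplate jmsTemplate;         // messaging infrastructure

    public OrderService(OrderRepository orderRepository, JmsTemplate jmsTemplate) {
        this.orderRepository = orderRepository;
        this.jmsTemplate = jmsTemplate;
    }

    public void createOrder(Order order) {
        // Write #1: the Order database.
        orderRepository.save(order);

        // Write #2: the queue. If this send fails after the database commit,
        // or the process crashes between the two writes, the systems diverge;
        // keeping the two writes consistent is exactly the transaction
        // problem described above.
        jmsTemplate.convertAndSend("order-events", order);
    }
}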

Solution 2: Change data capture decoupling

Let me introduce an alternative solution that handles all of that work without touching a single line of your microservices code. I’ll use Debezium and Apache Camel to capture data changes on Order and trigger actions on User. Debezium is a log-based change data capture (CDC) middleware. Camel is an integration framework that simplifies the integration between a source (Order) and a destination (User), as shown in Figure 3:


Figure 3: Decoupling with Debezium and Camel.

Debezium is in charge of capturing any data change happening in the Order domain and publishing it to a topic. A Camel consumer can then pick up that event and make a REST call to the User API to perform the action expected by its domain (in our simple case, updating the order list).

Decoupling with Debezium and Camel

I’ve prepared a simple demo with all of the components we need to run the example above; you can find it in this GitHub repo. The only part we need to develop is the following source code:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.debezium.DebeziumConstants;

public class MyRouteBuilder extends RouteBuilder {

    public void configure() {
        // Consume change events straight from the MySQL binlog through the
        // Debezium embedded engine.
        from("debezium:mysql?name=my-sql-connector"
                + "&databaseServerId=1"
                + "&databaseHostName=localhost"
                + "&databaseUser=debezium"
                + "&databasePassword=dbz"
                + "&databaseServerName=my-app-connector"
                + "&databaseHistoryFileName=/tmp/dbhistory.dat"
                + "&databaseWhitelist=debezium"
                + "&tableWhitelist=debezium._order"
                + "&offsetStorageFileName=/tmp/offset.dat")
            .choice()
                // "c" marks a create (INSERT) event
                .when(header(DebeziumConstants.HEADER_OPERATION).isEqualTo("c"))
                    .process(new AfterStructToOrderTranslator())
                    .to("rest-swagger:http://localhost:8082/v2/api-docs#addOrderUsingPOST")
                // "d" marks a delete event
                .when(header(DebeziumConstants.HEADER_OPERATION).isEqualTo("d"))
                    .process(new BeforeStructToOrderTranslator())
                    .to("rest-swagger:http://localhost:8082/v2/api-docs#deleteOrderUsingDELETE")
            .log("Response : ${body}");
    }
}

That’s it. Really. We don’t need anything else.

Apache Camel has a Debezium component that can hook into a MySQL database using the Debezium embedded engine. The source endpoint configuration provides the parameters Debezium needs to capture any change happening in the debezium._order table. Debezium streams the events in a well-defined JSON format, so you know what kind of information to expect: for each event, you get the row’s state before and after the change, plus a few useful pieces of metadata.
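
As an illustration, a create event for the _order table looks roughly like this (values invented; the schema section and most source metadata trimmed for brevity):

{
  "before": null,
  "after": { "order_id": 10, "user_id": 1 },
  "source": { "connector": "mysql", "db": "debezium", "table": "_order" },
  "op": "c",
  "ts_ms": 1574090400000
}

For a delete, op is "d" and the roles are reversed: before carries the last known row state and after is null. That is why the route above reads the after struct for creations and the before struct for deletions.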

Thanks to Camel’s content-based router, we can call either the addOrderUsingPOST or the deleteOrderUsingDELETE operation. The only code you have to develop is a message translator that converts the message coming from Debezium:

import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class AfterStructToOrderTranslator implements Processor {

    private static final String EXPECTED_BODY_FORMAT = "{\"userId\":%d,\"orderId\":%d}";

    public void process(Exchange exchange) throws Exception {
        final Map value = exchange.getMessage().getBody(Map.class);

        // Extract the row fields from the Debezium struct...
        int userId = (int) value.get("user_id");
        int orderId = (int) value.get("order_id");

        // ...and rebuild the JSON body expected by the User REST API.
        exchange.getIn().setHeader("userId", userId);
        exchange.getIn().setHeader("orderId", orderId);
        exchange.getIn().setBody(String.format(EXPECTED_BODY_FORMAT, userId, orderId));
    }
}

Notice that we did not touch any of the base code for Order or User. Now, turn off the Debezium process to simulate downtime: thanks to the offsets persisted in offsetStorageFileName, it recovers all missed events as soon as it comes back online!

You can run the example provided by following the instructions on this GitHub repo.

Caveat

The example illustrated here uses Debezium’s embedded mode. For more robust, fault-tolerant solutions, consider using the Kafka Connect mode instead, or tune the embedded engine accordingly.


How Quarkus brings imperative and reactive programming together

The supersonic subatomic Java singularity has expanded!

42 releases, 8 months of community participation, and 177 amazing contributors led up to the release of Quarkus 1.0. This release is a significant milestone with a lot of cool features behind it. You can read more in the release announcement.

Building on that awesome news, we want to delve into how Quarkus unifies both imperative and reactive programming models and its reactive core. We’ll start with a brief history and then take a deep dive into what makes up this dual-faceted reactive core and how Java developers can take advantage of it.

Microservices, event-driven architectures, and serverless functions are on the rise. Creating a cloud-native architecture has become more accessible in the recent past; however, challenges remain, especially for Java developers. Serverless functions and microservices need faster startup times, lower memory consumption, and, above all, developer joy. Java has made some improvements in that regard in recent years (e.g., ergonomics enhancements for containers), but getting performant, container-native Java still hasn’t been easy. Let’s first take a look at some of the inherent issues of developing container-native Java applications.

Let’s start with a bit of history.

Threads, CPUs, Java, and containers

Threads and containers

As of version 8u131, Java is more container-aware thanks to ergonomics enhancements: the JVM now knows the number of cores it’s running on and can size thread pools (typically the fork/join pool) accordingly. That’s all great, but let’s say we have a traditional web application that uses HTTP servlets or similar on Tomcat, Jetty, or the like. In effect, this application assigns a thread to each request and lets that thread block while waiting for IO, such as access to databases, files, or other services. The sizing of such an application depends on the number of concurrent requests rather than the number of available cores; this also means that Kubernetes quotas or limits on the number of cores will not be of great help and will eventually result in throttling.
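
As a hypothetical illustration of that thread-per-request model (this servlet is invented for the example, and the blocking query is simulated):

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OrdersServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The container parks one thread on this method for the whole
        // request; it sits idle during the blocking call below.
        String orders = loadOrdersFromDatabase();
        resp.getWriter().write(orders);
    }

    private String loadOrdersFromDatabase() {
        // Stand-in for a blocking JDBC query; while it waits, the request
        // thread cannot serve anyone else.
        return "[]";
    }
}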

Memory exhaustion

Threads also cost memory, and memory constraints inside a container do not necessarily help. Spreading the load over many applications and many threads causes more context switching and, in some cases, performance degradation. Also, if an application uses traditional microservices frameworks, creates database connections, uses caching, and perhaps needs some more memory, then one straightaway has to look into JVM memory management to keep the application from getting killed (e.g., -XX:+UseCGroupMemoryLimitForHeap). Even though the JVM understands cgroups as of Java 9 and can adapt its memory accordingly, managing and sizing memory can still get quite complex.

Quotas and limits

With Java 11, we now have support for CPU quotas (e.g., -XX:+PreferContainerQuotaForCPUCount), and Kubernetes also provides support for limits and quotas. This could make sense; however, if the application uses more than its quota, we again end up sizing based on cores, which, for traditional Java applications using one thread per request, is not helpful at all.

Also, if we were to use quotas and limits, or the scale-out features of the underlying Kubernetes platform, the problem wouldn’t solve itself; we would just be throwing more capacity at the underlying issue or end up over-committing resources. And if we were running under high load in a public cloud, we would certainly end up using more resources than necessary.

What can solve this?

A straightforward solution to these problems is to use asynchronous, non-blocking IO libraries and frameworks such as Netty, Vert.x, or Akka; their reactive nature makes them especially useful in containers. By embracing non-blocking IO, the same thread can handle multiple concurrent requests: while one request’s processing is waiting for IO, the thread is released and can handle another request; when the IO response for the first request finally arrives, its processing continues. Interleaving request processing on the same threads drastically reduces the number of threads, and with them the resources, needed to handle the load.
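
A minimal sketch of that interleaving, assuming Vert.x as the toolkit (the timer stands in for a real non-blocking IO call such as a database query):

import io.vertx.core.Vertx;

public class NonBlockingServer {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer()
             .requestHandler(request -> {
                 // The event loop is released immediately; the callback runs
                 // when the simulated IO completes 100 ms later. In between,
                 // the same thread keeps serving other requests.
                 vertx.setTimer(100, id -> request.response().end("done"));
             })
             .listen(8080);
    }
}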

With non-blocking IO, the number of cores becomes the essential setting, as it defines the number of IO threads you can run in parallel. Used properly, this efficiently dispatches the load onto the different cores, handling more load with fewer resources.

Is that all?

There’s more, though. Reactive programming improves resource usage, but it does not come for free: it requires the application code to embrace non-blocking IO and avoid ever blocking the IO thread. This is a different development and execution model, and although there are many libraries to help you, it’s still a mind shift.

First, you need to learn how to write code that executes asynchronously, because as soon as you start using non-blocking IO, you must express what happens once a response is received; you cannot block and wait anymore. To do this, you can pass callbacks, use reactive programming, or use continuations. But that’s not all: you need non-blocking IO all the way down, and so you need access to non-blocking servers and clients for everything you use. HTTP is the simple case, but think about database access, file systems, and so on.

Although end-to-end reactive provides the best efficiency, the shift can be hard to comprehend. Having the ability to mix both reactive and imperative code is becoming essential to:

  1. Use resources efficiently on hot paths, and
  2. Provide a simpler code style for the rest of the application.

Enter Quarkus

This is what Quarkus is all about: unifying reactive and imperative in a single runtime.

Quarkus uses Vert.x and Netty at its core. And, it uses a bunch of reactive frameworks and extensions on top to help developers. Quarkus is not just for HTTP microservices, but also for event-driven architecture. Its reactive nature makes it very efficient when dealing with messages (e.g., Apache Kafka or AMQP).

The secret behind this is to use a single reactive engine for both imperative and reactive code.

Quarkus does this quite brilliantly. Between imperative and reactive, the obvious choice is a reactive core: it enables fast, non-blocking code that handles almost everything via the event-loop thread (the IO thread). But if you are creating a typical REST application or a client-side application, Quarkus also gives you the imperative programming model. For example, Quarkus’s HTTP support is based on a non-blocking, reactive engine (Eclipse Vert.x and Netty). All the HTTP requests your application receives are handled by event loops (IO threads) and then routed to the code that manages each request. Depending on the destination, that code is invoked either on a worker thread (servlets, JAX-RS) or on the IO thread itself (reactive routes), as the sketch below shows.
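
A hedged sketch of the two models living side by side (class names and paths are invented, the reactive route assumes the quarkus-vertx-web extension, and each class would sit in its own source file):

// HelloResource.java
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// Imperative model: Quarkus invokes this JAX-RS method on a worker thread,
// so ordinary blocking code is fine here.
@Path("/hello")
public class HelloResource {

    @GET
    public String hello() {
        return "hello from a worker thread";
    }
}

// HelloRoute.java
import javax.enterprise.context.ApplicationScoped;

import io.quarkus.vertx.web.Route;
import io.vertx.ext.web.RoutingContext;

// Reactive model: this route is called directly on the event loop
// (IO thread), so it must never block.
@ApplicationScoped
public class HelloRoute {

    @Route(path = "/reactive-hello")
    public void reactiveHello(RoutingContext context) {
        context.response().end("hello from the IO thread");
    }
}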

For messaging connectors, non-blocking clients are used and run on top of the Vert.x engine. So, you can efficiently send, receive, and process messages from various messaging middleware.
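
For instance, with the MicroProfile Reactive Messaging support (SmallRye) in Quarkus, a processing step can be as small as the sketch below; the channel names and the conversion are invented, and each channel would be mapped to a concrete destination (such as a Kafka topic) in configuration:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class PriceConverter {

    // Consume each payload from the "prices" channel, transform it, and
    // publish the result to "converted-prices"; the reactive engine does
    // the plumbing without blocking IO threads.
    @Incoming("prices")
    @Outgoing("converted-prices")
    public double convert(int priceInCents) {
        return priceInCents / 100.0;
    }
}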

To help you get started with reactive on Quarkus, there are some well-articulated guides on Quarkus.io.

There are also reactive demo scenarios that you can try online; nothing to install, no IDE needed, just give them a go in your browser. You can try them out here.



Open Liberty Java runtime now available to Red Hat Runtimes subscribers

Open Liberty is a lightweight, production-ready Java runtime for containerizing and deploying microservices to the cloud, and is now available as part of a Red Hat Runtimes subscription. If you are a Red Hat Runtimes subscriber, you can write your Eclipse MicroProfile and Jakarta EE apps on Open Liberty and then run them in containers on Red Hat OpenShift, with commercial support from Red Hat and IBM.

Develop cloud-native Java microservices

Open Liberty is designed to provide a smooth developer experience with a one-second startup time, a low memory footprint, and our new dev mode.


Open Liberty provides a full implementation of MicroProfile 3 and Jakarta EE 8. MicroProfile is a collaborative project between multiple vendors (including Red Hat and IBM) and the Java community that aims to optimize enterprise Java for writing microservices. With a four-week release schedule, Liberty usually has the latest MicroProfile release available soon after the spec is published.

Also, Open Liberty is supported in common developer tools, including VS Code, Eclipse, Maven, and Gradle. Server configuration (e.g., adding a capability, or “feature,” to your app, or removing one) is done through a single XML file. Open Liberty’s zero-migration policy means that you can focus on what’s important (writing your app!) without worrying about APIs changing under you.
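
For illustration, a minimal server.xml along these lines enables a capability and defines the HTTP endpoint (the feature name here is just an example; pick the ones your app actually needs):

<server description="sample Open Liberty server">
    <!-- Each feature switches on one capability for the app. -->
    <featureManager>
        <feature>microProfile-3.0</feature>
    </featureManager>

    <!-- The HTTP endpoint the application is served on. -->
    <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
</server>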

Deploy in containers to any cloud

When you’re ready to deploy your app, you can just containerize it and deploy it to OpenShift. The zero migration principle means that new versions of Open Liberty features will not break your app, and you can control which version of the feature your app uses.

Monitoring live microservices is enabled by MicroProfile Metrics, Health, and OpenTracing, which add observability to your apps. The metrics emitted by your apps and by the Open Liberty runtime can be consolidated using Prometheus and presented in Grafana.
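
As a small example of those APIs, a MicroProfile Health liveness check looks roughly like this (the class name is invented; a real check would probe a dependency such as a database connection):

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Exposed automatically by the runtime under /health/live.
@Liveness
@ApplicationScoped
public class ServiceLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report "up" unconditionally; a real check would test a dependency.
        return HealthCheckResponse.named("service").up().build();
    }
}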

Learn with the Open Liberty developer guides

Our Open Liberty developer guides are available with runnable code and explanations to help you learn how to write microservices with MicroProfile and Jakarta EE, and then to deploy them to Red Hat OpenShift.

Get started

To get started with Open Liberty, try the Packaging and deploying applications guide and the Deploying microservices to OpenShift guide.
