The InfoQ eMag / Issue #96 / July 2021

Building Microservices in Java

FACILITATING THE SPREAD OF KNOWLEDGE AND INNOVATION IN PROFESSIONAL SOFTWARE DEVELOPMENT

IN THIS ISSUE
Spring Boot Tutorial: Building Microservices Deployed to Google Cloud
Getting Started with Quarkus
Project Helidon Tutorial: Building Microservices with Oracle's Lightweight Java Framework
Virtual Panel: The MicroProfile Influence on Microservices Frameworks

PRODUCTION EDITOR Ana Ciobotaru / COPY EDITORS Susan Conant
DESIGN Dragos Balasoiu / Ana Ciobotaru
GENERAL FEEDBACK feedback@infoq.com / ADVERTISING sales@infoq.com / EDITORIAL editors@infoq.com

CONTRIBUTORS

Sergio Felix is a Software Engineer in Google Cloud, where he works in Cloud Engineering Productivity, an organization within Google Cloud that focuses on making development frictionless and improving Product & Eng Excellence.

Roberto Cortez is a passionate Java Developer with more than 10 years of experience. He is involved in the Open Source community, helping other individuals spread knowledge about Java technologies. He is a regular speaker at conferences like JavaOne, Devoxx, Devnexus, JFokus, and others. He leads the Coimbra JUG and founded the JNation Conference in Portugal. When he is not working, he hangs out with friends, plays computer games, and spends time with family.

Cesar Hernandez is a Senior Software Engineer at Tomitribe with over 14 years of experience in Enterprise Java applications. He is a Java Champion, Duke's Choice Award winner, Oracle Groundbreaker Ambassador, Open Source advocate, Apache and Eclipse committer, teacher, and public speaker.
When Cesar is away from a computer, he enjoys spending time with his family, traveling, and playing music with the Java Community Band, The Null Pointers. Follow Cesar on Twitter.

Emily Jiang is a Java Champion. She is Liberty Microservices Architect and Advocate, and a Senior Technical Staff Member (STSM) at IBM, based at the Hursley Lab in the UK. Emily is a MicroProfile guru who has been working on MicroProfile since 2016 and leads the MicroProfile Config, Fault Tolerance, and Service Mesh specifications. She was a CDI Expert Group member. She is passionate about MicroProfile and Jakarta EE.

Otavio Santana is a passionate software engineer focused on Cloud and Java technology. He has experience mainly in polyglot persistence and high-performance applications in finance, social media, and e-commerce. Otavio is a member of both Expert Groups and an Expert Leader in several JSRs and the JCP executive committee. He works on several Apache and Eclipse Foundation projects such as Apache Tamaya, MicroProfile, and Jakarta EE.

Erin Schnabel is a Senior Principal Software Engineer and maker of things at Red Hat. She is a Java Champion with 20 years under her belt as a developer, technical leader, architect, and evangelist, and she strongly prefers being up to her elbows in code.

A LETTER FROM THE EDITOR

Michael Redlich is a Senior Research Technician at ExxonMobil Research & Engineering in Clinton, New Jersey (views are his own) with experience in developing custom scientific laboratory and web applications for the past 30 years. He also has experience as a Technical Support Engineer at AiLogix, Inc. (now AudioCodes), where he provided technical support and developed telephony applications for customers. His technical expertise includes object-oriented design and analysis, relational database design and development, computer security, C/C++, Java, Python, and other programming/scripting languages.
His latest passions include MicroProfile, Jakarta EE, Helidon, Micronaut, and MongoDB.

Over the past few years, the Java community has been offered a wide variety of microservices-based frameworks to build enterprise, cloud-native, and serverless applications. Perhaps you've been asking yourself questions such as: What are the benefits of building and maintaining a microservices-based application? Should I migrate my existing monolith-based application to microservices? Is it worth the effort to migrate? Which microservices framework should I commit to using? What are MicroProfile and Jakarta EE? What happened to Java EE? How does Spring Boot fit into all of this? What is GraalVM?

For those of us who may be old enough to remember, the concept of microservices emerged from the service-oriented architecture (SOA) that was introduced nearly 20 years ago. SOA applications used technologies such as the Web Services Description Language (WSDL) and Simple Object Access Protocol (SOAP) to build enterprise applications. Today, however, the Representational State Transfer (REST) architectural style is the primary way microservices communicate with each other over HTTP.

Since 2018, we've seen three new open-source frameworks - Micronaut, Helidon, and Quarkus - emerge to complement the already existing Java middleware open-source products such as Open Liberty, WildFly, Payara, and Tomitribe. We have also seen the emergence of GraalVM, a polyglot virtual machine and platform created by Oracle Labs that, among other things, can convert applications to native code.

In this eMag, you'll be introduced to some of these microservices frameworks, to MicroProfile, a set of APIs that optimizes enterprise Java for a microservices architecture, and to GraalVM. We've hand-picked three full-length articles and facilitated a virtual panel to explore these frameworks.
In the first article, Sergio Felix, senior software engineer at Google, provides a step-by-step tutorial on deploying applications to the Google Cloud Platform. Starting with a basic Spring Boot application, Sergio will then containerize the application and deploy it to Google Kubernetes Engine using Skaffold and the Cloud Code IntelliJ plugin.

In the second article, Roberto Cortez, principal software engineer at Red Hat, introduces you to Quarkus, explains the motivation behind its creation, and demonstrates how it differs from other frameworks. Dubbed "supersonic subatomic Java," Quarkus has received a significant amount of attention within the Java community since its initial release in 2019.

In the third article, I will introduce you to Project Helidon, Oracle's lightweight Java microservices framework. I will explain the differences between Helidon SE and Helidon MP, explore the core components of Helidon SE, show you how to get started, and introduce a movie application built on top of Helidon MP. I also demonstrate how to convert a Helidon application to native code with GraalVM.

And finally, we present a virtual panel featuring an all-star cast of Java luminaries. Cesar Hernandez, senior software engineer at Tomitribe, Emily Jiang, Liberty microservice architect and advocate at IBM, Otavio Santana, staff software engineer at xgeeks, and Erin Schnabel, senior principal software engineer at Red Hat, discuss the MicroProfile influence on microservices frameworks. They also discuss how some developers and organizations are reverting to monolith-based application development.

We hope you enjoy this edition of the InfoQ eMag. Please share your feedback via editors@infoq.com or on Twitter.
Spring Boot Tutorial: Building Microservices Deployed to Google Cloud

by Sergio Felix, Software Engineer

With the increasing popularity of microservices in the industry, there's been a boom in technologies and platforms from which to choose to build applications. Sometimes it's hard to pick something to get started. In this article, I'll show you how to create a Spring Boot-based application that leverages some of the services offered by Google Cloud. This is the approach we've been using in our team at Google for quite some time. I hope you find it useful.

The Basics

Let's start by defining what we will build. We'll begin with a very basic Spring Boot-based application written in Java. Spring is a mature framework that allows us to quickly create very powerful and feature-rich applications.

We'll then make a few changes to containerize the application using Jib (which builds optimized Docker and OCI images for your Java applications without a Docker daemon) and a distroless version of Java 11. Jib works with both Maven and Gradle. We'll use Maven for this example.

Next, we will create a Google Cloud Platform (GCP) project and use Spring Cloud GCP to leverage Cloud Firestore. Spring Cloud GCP allows Spring-based applications to easily consume Google services like databases (Cloud Firestore, Cloud Spanner, or even Cloud SQL), Google Cloud Pub/Sub, Stackdriver for logging and tracing, etc.

After that, we'll make changes in our application to deploy it to Google Kubernetes Engine (GKE). GKE is a managed, production-ready environment for deploying containerized Kubernetes-based applications.

Finally, we will use Skaffold and Cloud Code to make development easier. Skaffold handles the workflow for building, pushing, and deploying your application. Cloud Code is a plugin for VS Code and IntelliJ that works with Skaffold and your IDE so that you can do things like deploy to GKE with the click of a button. In this article, I'll be using IntelliJ with Cloud Code.
Setting up our Tools

Before we write any code, let's make sure we have a Google Cloud project and all the tools installed.

Creating a Google Cloud Project

Setting up a GCP project is easy; you can accomplish this by following these instructions. This new project will allow us to deploy our application to GKE, get access to a database (Cloud Firestore), and give us a place to push our images when we containerize the application.

In addition, we'll run some commands to make sure that the application running on your machine can communicate with the services running in your project in Google Cloud. Let's make sure we are pointing to the correct project and authenticate using:

gcloud config set project <YOUR PROJECT ID>
gcloud auth login

Next, we'll make sure your machine has application credentials to run your application locally:

gcloud auth application-default login

Install Cloud Code

Next, we'll install Cloud Code. You can follow these instructions on how to install Cloud Code in IntelliJ. Cloud Code manages the installation of Skaffold and the Google Cloud SDK that we'll use later in the article. Cloud Code also allows us to inspect our GKE deployments and services. Most importantly, it also has a clever GKE development mode that continuously listens for changes in your code; when it detects a change, it builds the app, builds the image, pushes the image to your registry, deploys the application to your GKE cluster, starts streaming logs, and opens a localhost tunnel so you can test your service locally. It's like magic!

In order to use Cloud Code and proceed with our application, make sure that you log in using the Cloud Code plugin by clicking on the icon that should show up at the top right of your IntelliJ window.

Enabling the APIs

Now that we have everything set up, we need to enable the APIs we will be using in our application:

• Google Container Registry API - This will give us a registry where we can privately push our images.
• Cloud Firestore in Datastore mode - This will allow us to store entities in a NoSQL database. Make sure to select Datastore mode so that we can use Spring Cloud GCP's support for it.

You can manage the APIs that are enabled in your project by visiting your project's API Dashboard.

Creating our Dog Service

First things first! We need to get started with a simple application we can run locally. We'll create something important like a Dog microservice. Since I'm using IntelliJ Ultimate, I'll go to `File -> New -> Project…` and select "Spring Initializr". I'll select Maven, Jar, Java 11 and change the name to something important like `dog` as shown below. Click next and add: Lombok, Spring Web, and GCP Support.

If all went well, you should now have an application that you can run. If you don't want to use IntelliJ for this, use the equivalent in your IDE or use Spring's Initializr.

Next, we'll add a POJO for our Dog service and a couple of REST endpoints to test our application. Our Dog object will have a name and an age, and we'll use Lombok's @Data annotation to save us from writing setters, getters, etc. We'll create a controller class for the Dog and the REST endpoints:

@RestController
@Slf4j
public class DogController {

    @GetMapping("/api/v1/dogs")
    public List<Dog> getAllDogs() {
        log.debug("->getAllDogs");
        return ImmutableList.of(new Dog("Fluffy", 5),
                new Dog("Bob", 6), new Dog("Cupcake", 11));
    }

    @PostMapping("/api/v1/dogs")
    public Dog saveDog(@RequestBody Dog dog) {
        log.debug("->saveDog {}", dog);
        return dog;
    }
}

The endpoints return a list of predefined dogs, and the saveDog endpoint doesn't really do much, but this is enough for us to get started.
We'll also use the @AllArgsConstructor annotation to create a constructor for us. We'll use this later when we are creating Dogs.

@Data
@AllArgsConstructor
public class Dog {
    private String name;
    private int age;
}

Using Cloud Firestore

Now that we have a skeleton app, let's try to use some of the services in GCP. Spring Cloud GCP adds Spring Data support for Google Cloud Firestore in Datastore mode. We'll use this to store our Dogs instead of using a simple list. Users will now also be able to actually save a Dog in our database.

To start, we'll add the Spring Cloud GCP Data Datastore dependency to our POM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-data-datastore</artifactId>
</dependency>

Now, we can modify our Dog class so that it can be stored. We'll add an @Entity annotation and an @Id annotation on a field of type Long to act as an identifier for the entity. With that done, we can create a regular Spring Repository class as follows:

@Repository
public interface DogRepository extends DatastoreRepository<Dog, Long> {}

As usual with Spring Repositories, there is no need to write implementations for this interface since we'll be using very basic methods.

We can now modify the controller class. We'll inject the DogRepository into the DogController and then modify the class to use the repository as follows:

@RestController
@Slf4j
@RequiredArgsConstructor
public class DogController {

    private final DogRepository dogRepository;

    @GetMapping("/api/v1/dogs")
    public Iterable<Dog> getAllDogs() {
        log.debug("->getAllDogs");
        return dogRepository.findAll();
    }

    @PostMapping("/api/v1/dogs")
    public Dog saveDog(@RequestBody Dog dog) {
        log.debug("->saveDog {}", dog);
        return dogRepository.save(dog);
    }
}

Note that we are using Lombok's @RequiredArgsConstructor to create a constructor to inject our DogRepository. When you run your application, the endpoints will call your Dog service, which will attempt to use Cloud Firestore to retrieve or store the Dogs.
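If you have not used Lombok before, it may help to see what these annotations save us from writing. The class below is a rough, hand-written equivalent of what @Data and @AllArgsConstructor generate for the original Dog POJO (a sketch for illustration, not Lombok's exact output):

```java
import java.util.Objects;

// Hand-written equivalent of Lombok's @Data + @AllArgsConstructor on Dog:
// an all-args constructor, getters/setters, equals/hashCode, and toString.
public class Dog {
    private String name;
    private int age;

    // Generated by @AllArgsConstructor
    public Dog(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // Generated by @Data: getters and setters for every field
    public String getName() { return name; }
    public int getAge() { return age; }
    public void setName(String name) { this.name = name; }
    public void setAge(int age) { this.age = age; }

    // Generated by @Data: equals/hashCode over all fields
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Dog)) return false;
        Dog other = (Dog) o;
        return age == other.age && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, age);
    }

    // Generated by @Data: toString listing all fields
    @Override
    public String toString() {
        return "Dog(name=" + name + ", age=" + age + ")";
    }

    public static void main(String[] args) {
        System.out.println(new Dog("Fluffy", 5)); // prints Dog(name=Fluffy, age=5)
    }
}
```

With Lombok on the classpath, the annotated three-field class compiles to essentially this, which is why the controller can create, compare, and log Dogs without any of the boilerplate appearing in the source.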
The updated Dog class looks like this:

@Entity
@Data
@AllArgsConstructor
public class Dog {
    @Id
    private Long id;
    private String name;
    private int age;
}

TIP: To quickly test this, you can create an HTTP request in IntelliJ with the following:

POST http://localhost:8080/api/v1/dogs
Content-Type: application/json

{
    "name": "bob",
    "age": 5
}

In only a few steps, we now have the application up and running and consuming services from GCP. Awesome! Now, let's turn this into a container and deploy it!

Containerizing the Dog Service

At this point, you could start writing a Dockerfile to containerize the application we created above. Let's instead use Jib. One of the things I like about Jib is that it separates your application into multiple layers, splitting dependencies from classes. This allows us to have faster builds, so that you don't have to wait for Docker to rebuild your entire Java application - just deploy the layers that changed. In addition, Jib has a Maven plugin that makes it easy to set up by just modifying the POM file in your application. To start using the plugin, we'll need to modify our POM file to add the following:

<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <version>1.8.0</version>
    <configuration>
        <from>
            <image>gcr.io/distroless/java:11</image>
        </from>
        <to>
            <image>gcr.io/<YOUR_GCP_REGISTRY>/${project.artifactId}</image>
        </to>
    </configuration>
</plugin>

Notice we are using Google's distroless image for Java 11. "Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells, or any other programs you would expect to find in a standard Linux distribution. Restricting what's in your runtime container to precisely what's necessary for your app is a best practice employed by Google and other tech giants that have used containers in production for many years.
It improves the signal-to-noise ratio of scanners (e.g., CVE scanners) and reduces the burden of establishing provenance to just what you need.

Make sure to replace the registry in the code above to match the name of your project. After doing this, you can attempt to build and push the image of the app by running:

$ ./mvnw install jib:build

This will build and test the application, create the image, and then push the newly created image to your registry.

NOTE: It's usually good practice to use a base image with a specific digest instead of using "latest". I'll leave it up to the reader to decide what base image and digest to use depending on the version of Java you are using.

Deploying the Dog Service

At this point, we are almost ready to deploy our application. In order to do this, let's first create a GKE cluster where we will deploy our application.

Creating a GKE Cluster

To create a GKE cluster, follow these instructions. You'll basically want to visit the GKE page, wait for the API to get enabled, and then click on the button to create a cluster. You may use the default settings, but just make sure that you click on the "More options" button to enable full access to all the Cloud APIs. This allows your GKE nodes to have permissions to access the rest of the Google Cloud services. After a few moments the cluster will be created (please check the image on page 11).

Applications living inside of Kubernetes

Kubernetes likes to monitor your application to ensure that it's up and running. In the event of a failure, Kubernetes knows that your application is down and that it needs to spin up a new instance. To do this, we need to make sure that our application is able to respond when Kubernetes pokes it. Let's add the Spring Boot Actuator and Spring Cloud Kubernetes.
Add the following dependency to your POM file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

If your application has an application.properties file inside the src/main/resources directory, remove it and create an application.yaml file with the following contents:

spring:
  application:
    name: dog-service
management:
  endpoint:
    health:
      enabled: true

This adds a name to our application and exposes the health endpoint mentioned above. To verify that this is working, you may visit your application at localhost:8080/actuator/health. You should see something like:

{
    "status": "UP"
}

Configuring to run in Kubernetes

In order for us to deploy our application to our new GKE cluster, we need to write some additional YAML. We need to create a deployment and a service. Use the deployment that follows; just remember to replace the GCR name with the one from your project:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dog-service
spec:
  selector:
    matchLabels:
      app: dog-service
  replicas: 1
  template:
    metadata:
      labels:
        app: dog-service
    spec:
      containers:
      - name: dog-service
        image: gcr.io/<YOUR GCR REGISTRY NAME>/dog
        ports:
        - containerPort: 8080
        livenessProbe:
          initialDelaySeconds: 20
          httpGet:
            port: 8080
            path: /actuator/health
        readinessProbe:
          initialDelaySeconds: 30
          httpGet:
            port: 8080
            path: /actuator/health

Add a service.yaml file with the following:

apiVersion: v1
kind: Service
metadata:
  name: dog-service
spec:
  type: NodePort
  selector:
    app: dog-service
  ports:
  - port: 8080
    targetPort: 8080

The deployment configures readiness and liveness probes so that Kubernetes uses these endpoints to poke the app and see if it's alive. The service exposes the deployment so that other services can consume it. After doing this, we can now start using the Cloud Code plugin we installed at the beginning of this article.
From the Tools menu, select: Cloud Code -> Kubernetes -> Add Kubernetes Support. This will automatically add a Skaffold YAML to your application and set up a few things for you so that you can deploy to your cluster by clicking a button.

To confirm that all this worked, you can inspect the configuration from the Run/Debug Configurations section in IntelliJ. If you click on the "Develop on Kubernetes" run configuration, it should have automatically picked up your GKE cluster and Kubernetes configuration files and should look something like this:

Click OK and then click on the green "Play" button at the top right. After that, Cloud Code will build the app, create the image, deploy the application to your GKE cluster, and stream the logs from Stackdriver into your local machine. It will also open a tunnel so that you can consume your service via localhost:8080. You can also peek at the workloads page in the Google Cloud Console (Figure 8).

Figure 8

Conclusions

Congratulations if you made it this far! The application we built in this article showcases some key technologies that most microservices-based applications would use: Cloud Firestore, a fast, fully managed, serverless, cloud-native NoSQL document database; GKE, a managed, production-ready environment for deploying containerized Kubernetes-based applications; and finally, a simple cloud-native microservice built with Spring Boot.

Along the way, we also learned how to use a few tools like Cloud Code to streamline your development workflow and Jib to build containerized applications using common Java patterns. I hope you've found the article helpful and that you give these technologies a try. If you found this interesting, have a look at the codelabs that Google offers, where you can learn about Spring and Google Cloud products.

TL;DR

• Using Google Kubernetes Engine (GKE) along with Spring Boot allows you to quickly and easily set up microservices.
• Jib is a great way to containerize your Java application. It allows you to create optimized images without Docker using Maven or Gradle.
• Google's Spring Cloud GCP implementation allows developers to leverage Google Cloud Platform (GCP) services with little configuration, using some of Spring's patterns.
• Setting up Skaffold with Cloud Code gives developers a nice development cycle. This is especially useful when starting to prototype a new service.

Getting Started with Quarkus

by Roberto Cortez, Java Developer

Quarkus created quite a buzz in the enterprise Java ecosystem in 2019. Like all other developers, I was curious about this new technology and saw a lot of potential in it. What exactly is Quarkus? How is it different from other technologies established in the market? How can Quarkus help me or my organization? Let's find out.

What is Quarkus?

The Quarkus project dubbed itself Supersonic Subatomic Java. Is this actually real? What does this mean? To better explain the motivation behind the Quarkus project, we need to look into the current state of software development.

From On-Premises to Cloud

The old way to deploy applications was to use physical hardware. With the purchase of a physical box, we paid upfront for the hardware requirements. Since we had already made the investment, it wouldn't matter if we used all the machine's resources or just a small amount. In most cases, we wouldn't care that much as long as we could run the application. However, the Cloud is now changing the way we develop and deploy applications. In the Cloud, we pay exactly for what we use, so we have become pickier with our hardware usage. If the application takes 10 seconds to start, we have to pay for those 10 seconds even if the application is not yet ready for others to consume.

Java and the Cloud

Do you remember when the first Java version was released?
Allow me to refresh your memory: it was in 1996. There was no Cloud back then; in fact, it only came into existence several years later. Java was definitely not tailored for this new paradigm and had to adjust. But how could we change a paradigm after so many years tied to a physical box, where costs didn't matter as much as they do in the Cloud?

It's All About the Runtime

The way that many Java libraries and frameworks evolved over the years was to perform a set of enhancements during runtime. This was a convenient way to add capabilities to your code in a safe and declarative way. Do you need dependency injection? Sure! Use annotations. Do you need a transaction? Of course! Use an annotation. In fact, you can code a lot of things by using these annotations, which the runtime will pick up and handle for you. But there is always a catch: the runtime requires a scan of your classpath and classes for metadata. This is an expensive operation that consumes time and memory.

Quarkus Paradigm Shift

Quarkus addressed this challenge by moving expensive operations like bytecode enhancement, dynamic class-loading, proxying, and more to compile time. The result is an environment that consumes less memory and less CPU, and starts faster. This is perfect for the Cloud use case, but also useful for other use cases: everyone will benefit from lower resource consumption overall, no matter the environment.

Maybe Quarkus is Not So New

Have you heard of or used technologies such as CDI, JAX-RS, or JPA? If so, you're in luck: the Quarkus stack is composed of these technologies that have been around for several years. If you know how to develop with these technologies, then you will know how to develop a Quarkus application. Do you recognize the following code?
@Path("books")
@Consumes(APPLICATION_JSON)
@Produces(APPLICATION_JSON)
public class BookApi {

    @Inject
    BookRepository bookRepository;

    @GET
    @Path("/{id}")
    Response get(@PathParam("id") Long id) {
        return bookRepository.find(id)
                             .map(Response::ok)
                             .orElse(Response.status(NOT_FOUND))
                             .build();
    }
}

Congratulations, you have your first Quarkus app!

Best of Breed Frameworks and Standards

The Quarkus programming model is built on top of proven standards, be they official standards or de facto standards. Right now, Quarkus has first-class support for technologies like Hibernate, CDI, Eclipse MicroProfile, Kafka, Camel, Vert.x, Spring, Flyway, Kubernetes, and Vault, just to name a few. When you adopt Quarkus, you will be productive from day one, since you don't really need to learn new technologies: you just use what has been out there for the past 10 years.

Are you looking to use a library that isn't yet in the Quarkus ecosystem? There is a good chance that it will work out of the box without any additional setup, unless you want to run it in GraalVM Native mode. If you want to go one step further, you could easily implement your own Quarkus extension to provide support for a particular technology and enrich the Quarkus ecosystem.

Quarkus Setup

So, you may be asking if there is something hiding under the covers. In fact, yes, there is. You are required to use a specific set of dependencies in your project that are provided by Quarkus. Don't worry: Quarkus supports both Maven and Gradle. For convenience, you can generate a skeleton project on the Quarkus starter page and select which technologies you would like to use. Just import it into your favorite IDE and you are ready to go. Here is a sample Maven project using JAX-RS with RESTEasy and JPA with Hibernate:

<?xml version="1.0"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.acme</groupId>
  <artifactId>code-with-quarkus</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <properties>
    <compiler-plugin.version>3.8.1</compiler-plugin.version>
    <maven.compiler.parameters>true</maven.compiler.parameters>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <quarkus-plugin.version>1.3.0.Final</quarkus-plugin.version>
    <quarkus.platform.artifact-id>quarkus-universe-bom</quarkus.platform.artifact-id>
    <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
    <quarkus.platform.version>1.3.0.Final</quarkus.platform.version>
    <surefire-plugin.version>2.22.1</surefire-plugin.version>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>${quarkus.platform.artifact-id}</artifactId>
        <version>${quarkus.platform.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-junit5</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.rest-assured</groupId>
      <artifactId>rest-assured</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy-jsonb</artifactId>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-maven-plugin</artifactId>
        <version>${quarkus-plugin.version}</version>
        <executions>
          <execution>
            <goals>
              <goal>build</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${compiler-plugin.version}</version>
      </plugin>
      <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${surefire-plugin.version}</version>
        <configuration>
          <systemProperties>
            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
          </systemProperties>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

You might have noticed that most of the dependencies start with the groupId io.quarkus, and that they are not the usual dependencies that you might find for Hibernate, RESTEasy, or JUnit.

Quarkus Dependencies

Now, you may be wondering why Quarkus supplies its own wrapper versions around these popular libraries. The reason is to provide a bridge between the library and Quarkus that resolves the runtime dependencies at compile time. This is where the magic of Quarkus happens, giving projects fast start times and smaller memory footprints. Does this mean that you are constrained to use only Quarkus-specific libraries? Absolutely not. You can use any library you wish.
You run Quarkus applications on the JVM as usual, where you don't have such limitations.

GraalVM and Native Images

Perhaps you have already heard about this project called GraalVM by Oracle Labs? In essence, GraalVM is a universal virtual machine to run applications in multiple languages. One of its most interesting features is the ability to build your application into a native image and run it even faster! In practice, this means that you just have an executable to run, with all the required dependencies of your application resolved at compile time. This does not run on the JVM; it is a plain executable binary file, but it includes all necessary components like memory management and thread scheduling from a different virtual machine, called Substrate VM, to run your application.

For convenience, the sample Maven project already has the required setup to build your project as a native image. You do need to have GraalVM on your system with the native-image tool installed; follow these instructions on how to do so. After that, just build as any other Maven project, but with the native profile: mvn verify -Pnative. This will generate a binary runner in the target folder that you can run like any other binary, with ./projectname-runner. The following is a sample output of the runner on my machine:

[io.quarkus] (main) code-with-quarkus 1.0.0-SNAPSHOT (powered by Quarkus 1.3.0.Final) started in 0.023s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (main) Profile prod activated.
[io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, narayana-jta, resteasy, resteasy-jsonb]

Did you notice the startup time? Only 0.023s. Yes, our application doesn't do much yet, but this is still pretty impressive. Even for real applications, you will see startup times on the order of milliseconds. You can learn more about GraalVM on their website.
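The startup cost that Quarkus and GraalVM work to eliminate is easy to see in plain Java. The toy sketch below (a hypothetical @Route annotation; no Quarkus or JAX-RS code involved) performs the kind of reflective metadata scan that traditional frameworks run against the whole classpath at every startup, and that Quarkus instead records once at build time:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for an annotation like JAX-RS @Path (illustrative only).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Route {
    String value();
}

class BookEndpoints {
    @Route("/books")
    public String list() { return "all books"; }

    @Route("/books/{id}")
    public String byId() { return "one book"; }

    public String helper() { return "not an endpoint"; }
}

public class RuntimeScanDemo {
    // A traditional runtime performs a scan like this for every candidate
    // class on the classpath before serving its first request.
    static List<String> discoverRoutes(Class<?> clazz) {
        List<String> routes = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            Route route = m.getAnnotation(Route.class);
            if (route != null) {
                routes.add(route.value());
            }
        }
        routes.sort(String::compareTo); // reflection gives no ordering guarantee
        return routes;
    }

    public static void main(String[] args) {
        System.out.println(discoverRoutes(BookEndpoints.class)); // prints [/books, /books/{id}]
    }
}
```

Multiplied across thousands of classes and dozens of frameworks, this reflective walk dominates traditional startup time; doing it during the build is a large part of why a native runner can start in milliseconds.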
Developer Productivity

We have seen that Quarkus can help your company become cloud native. Awesome. But what about the developer? We all like shiny new things, and we are also super lazy. What does Quarkus do for the developer that cannot be done with other technologies? Well, how about hot reloading that actually works, without external tools or complicated tricks? Yes, it is true. Twenty-five years after Java was born, we finally have a reliable way to change our code and see those changes with a simple refresh. Again, this is accomplished by the way Quarkus works internally. Everything is just code, so you no longer have to worry about the things that made hot reloading difficult. It is a trivial operation.

To accomplish this, you have to run Quarkus in development mode. Just run mvn quarkus:dev and you are good to go. Quarkus will start up, and you are free to make changes to your code and see them immediately. For instance, you can change your REST endpoint parameters, add new methods, and change paths. Once you invoke them, they will be updated to reflect your code changes. How cool is that?

Is Quarkus Production Ready?

All of this seems too good to be true, but is Quarkus actually ready for production environments? Yes, it is. A lot of companies are already adopting Quarkus as their development/runtime environment. Quarkus has a very fast release cadence (every few weeks) and a strong open-source community that helps every developer in the Java community, whether they are just getting started with Quarkus or are advanced users. Check out this sample application that you can download or clone. You can also read some of the adoption stories in a few blog posts to get a better idea of user experiences with Quarkus.

Conclusion

A year after its official announcement, Quarkus is already on version 1.3.1.Final.
A lot of effort is being put into the project to help companies and developers write applications that they can run natively in the cloud. We don't know how far Quarkus can go, but one thing is certain: Quarkus has shaken up the entire Java ecosystem, a space dominated by Spring. I think the Java ecosystem can only win by having multiple offerings that push each other to innovate and stay competitive.

Resources
• Quarkus Website
• Quarkus Start Page
• Sample Github Repo

TL;DR
• Quarkus is a new technology aimed at cloud development.
• With Quarkus, you can take advantage of smaller runtimes optimized for the cloud.
• You don't need to relearn new APIs. Quarkus is built on top of best-of-breed technologies from the last decade, like Hibernate, RESTEasy, Vert.x, and MicroProfile.
• Quarkus is productive from day one.
• Quarkus is production ready.

Feature-Driven Development: A Brief Overview

Feature-driven development (FDD) is a five-step Agile framework that organizes software development around making progress on features in one- to two-week sprints. The five steps are:
• Develop an overall model
• Build a features list
• Plan by feature
• Design by feature
• Build by feature

A feature in FDD does not always refer to a product feature; it could refer to a small task or process a client or user wishes to complete, for example, "View all open tasks", "Pay online bill", or "Chat with my friends in the game." Features in FDD are similar to user stories in Scrum. As with other Agile software development frameworks, the goal of feature-driven development is to iterate quickly to satisfy the needs of the customer. The five-step process of FDD assigns roles and utilizes a set of project management best practices to ensure consistency, making it easier for new team members to onboard.

Roles

Before we can dive into the methodology, it is important to understand the six primary roles involved in a feature-driven development team.
Each role serves a specific function throughout the development process, and there may be more than one person in a given role on a team.

The Project Manager is the leader of the whole project and coordinates all the moving parts: ensuring deadlines are hit, identifying gaps in the workflow, and so on.

The Chief Architect creates the blueprint for the overall system. Part of their job is to educate the people on the team about the system's design so each person can effectively fit their individual tasks within the context of the whole project. The Chief Architect approaches the project from a holistic point of view.

The Development Manager coordinates all the teams, ensuring they complete their tasks on time, and provides mentoring and leadership of programming activities.

The Chief Programmer is an experienced programmer who leads a small development team, helping with analysis and design to keep the project moving in the right direction.

The Class Owners are individual developers creating features on smaller development teams. Their responsibilities can include designing, coding, testing, and documenting the features or classes.

The Domain Expert has detailed knowledge of the user requirements and understands the problem customers want solved.

In addition to the six primary roles, there are eight supporting roles that may be needed:
• Release Manager
• Language guru
• Build engineer
• Tool-smith
• System administrator
• Tester
• Deployer
• Technical writer

Please read the full-length version of this article.

Project Helidon Tutorial: Building Microservices with Oracle's Lightweight Java Framework
by Michael Redlich, Senior Research Technician at ExxonMobil Research & Engineering

Oracle introduced its new open-source framework, Project Helidon, in September 2018.
Originally named J4C (Java for Cloud), Helidon is a collection of Java libraries for creating microservices-based applications. Within six months of its introduction, Helidon 1.0 was released in February 2019. The current stable release is Helidon 1.4.4, but Oracle is well on its way to releasing Helidon 2.0, planned for late Spring 2020.

This tutorial will introduce Helidon SE and Helidon MP, explore the three core components of Helidon SE, show how to get started, and introduce a movie application built on top of Helidon MP. There will also be a discussion of GraalVM and what you can expect with the upcoming release of Helidon 2.0.

Helidon Landscape

Helidon, designed to be simple and fast, is unique because it ships with two programming models: Helidon SE and Helidon MP. In the graph below, you can see where Helidon SE and Helidon MP align with other popular microservices frameworks.

Helidon SE

Helidon SE is a microframework that features the three core components required to create a microservice -- a web server, configuration, and security -- for building microservices-based applications. It is a small, functional-style API that is reactive, simple, and transparent, and it does not require an application server. Let's take a look at the functional style of Helidon SE with this very simple example that starts the Helidon web server using the WebServer interface:

WebServer.create(
    Routing.builder()
        .get("/greet", (req, res) -> res.send("Hello World!"))
        .build())
    .start();

Using this example as a starting point, we will incrementally build a formal startServer() method, part of a server application for you to download, to explore the three core Helidon SE components.

Web Server Component

Inspired by NodeJS and other Java frameworks, Helidon's web server component is an asynchronous and reactive API that runs on top of Netty. The WebServer interface provides basic server lifecycle and monitoring, enhanced by configuration, routing, error handling, and metrics and health endpoints.

Let's start with the first version of the startServer() method, which starts a Helidon web server on a random available port:

private static void startServer() throws Exception {
    Routing routing = Routing.builder()
        .any((request, response) -> response.send("Greetings from the web server!" + "\n"))
        .build();

    WebServer webServer = WebServer
        .create(routing)
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS);

    System.out.println("INFO: Server started at: http://localhost:" + webServer.port() + "\n");
}

First, we need to build an instance of the Routing interface, which serves as an HTTP request-response handler with routing rules. In this example, we use the any() method to route any request to the defined server response, "Greetings from the web server!", which will be displayed in the browser or via the curl command. In building the web server, we invoke the overloaded create() method, designed to accept various server configurations. The simplest one, shown above, accepts the instance variable, routing, that we just created, and provides a default server configuration.

The Helidon web server was designed to be reactive, which means the start() method returns an instance of the CompletionStage<WebServer> interface when starting the web server. This allows us to invoke the toCompletableFuture() method. Since a specific server port wasn't defined, the server will find a random available port upon startup.

Let's build and run our server application with Maven:

$ mvn clean package
$ java -jar target/helidon-server.jar

When the server starts, you should see the following in your terminal window:

Apr 15, 2020 1:14:46 PM io.helidon.webserver.NettyWebServer <init>
INFO: Version: 1.4.4
Apr 15, 2020 1:14:46 PM io.helidon.
webserver.NettyWebServer lambda$start$8
INFO: Channel '@default' started: [id: 0xcba440a6, L:/0:0:0:0:0:0:0:0:52535]
INFO: Server started at: http://localhost:52535

As shown on the last line, the Helidon web server selected port 52535. While the server is running, enter this URL in your browser or execute it with the following curl command in a separate terminal window:

$ curl -X GET http://localhost:52535

You should see "Greetings from the web server!" To shut down the web server, simply add this line of code:

webServer.shutdown()
    .thenRun(() -> System.out.println("INFO: Server is shutting down... Good bye!"))
    .toCompletableFuture();

Configuration Component

The configuration component loads and processes configuration properties. Helidon's Config interface will read configuration properties from a defined properties file, usually, but not limited to, YAML format. Let's create an application.yaml file that will provide configuration for the application, server, and security:

app:
  greeting: "Greetings from the web server!"

server:
  port: 8080
  host: 0.0.0.0

security:
  config:
    require-encryption: false
  providers:
    - http-basic-auth:
        realm: "helidon"
        users:
          - login: "ben"
            password: "${CLEAR=password}"
            roles: ["user", "admin"]
          - login: "mike"
            password: "${CLEAR=password}"
            roles: ["user"]
    - http-digest-auth:

There are three main sections, or nodes, within this application.yaml file: app, server, and security. The first two nodes are straightforward. The greeting subnode defines the server response that we hard-coded in the previous example. The port subnode defines port 8080 for the web server to use upon startup. However, you may have noticed that the security node is a bit more complex, utilizing YAML's sequence of mappings to define multiple entries. Separated by the '-' character, two security providers, http-basic-auth and http-digest-auth, and two users, ben and mike, have been defined.
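The reactive start-up pattern described earlier, where start() returns a CompletionStage that the caller converts to a CompletableFuture and blocks on with a timeout, can be sketched with plain JDK types. This is a toy stand-in, not Helidon's WebServer; the start() method here is a hypothetical placeholder:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;

// Toy stand-in for an asynchronous server: start() returns a CompletionStage,
// and the caller converts it to a CompletableFuture to block with a timeout,
// mirroring start().toCompletableFuture().get(10, TimeUnit.SECONDS) above.
public class AsyncStartSketch {
    static CompletionStage<String> start() {
        // Simulates work happening on another thread during startup
        return CompletableFuture.supplyAsync(() -> "started");
    }

    public static void main(String[] args) throws Exception {
        String state = start().toCompletableFuture().get(10, TimeUnit.SECONDS);
        System.out.println(state); // prints "started"
    }
}
```

The same shape applies to any CompletionStage-returning API: stay non-blocking by chaining thenAccept()/thenRun(), or opt into blocking, as the tutorial does, via toCompletableFuture().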
We will discuss this in more detail in the Security Component section of this tutorial. Also note that this configuration allows clear-text passwords, as the config.require-encryption subnode is set to false. You would obviously set this value to true in a production environment so that any attempt to pass a clear-text password would throw an exception.

Now that we have a viable configuration file, let's update our startServer() method to take advantage of the configuration we just defined:

private static void startServer() throws Exception {
    Config config = Config.create();
    ServerConfiguration serverConfig = ServerConfiguration.create(config.get("server"));

    Routing routing = Routing.builder()
        .any((request, response) -> response.send(config.get("app.greeting").asString().get() + "\n"))
        .build();

    WebServer webServer = WebServer
        .create(serverConfig, routing)
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS);

    System.out.println("INFO: Server started at: http://localhost:" + webServer.port() + "\n");
}

First, we need to build an instance of the Config interface by invoking its create() method to read our configuration file. The get(String key) method, provided by Config, returns a node, or a specific subnode, from the configuration file specified by key. For example, config.get("server") will return the content under the server node, and config.get("app.greeting") will return "Greetings from the web server!". Next, we create an instance of ServerConfiguration, providing immutable web server information, by invoking its create() method and passing in config.get("server"). The instance variable, routing, is built like the previous example, except we eliminate the hard-coded server response by calling config.get("app.greeting").asString().get(). The web server is created as in the previous example, except we use a different version of the create() method that accepts the two instance variables, serverConfig and routing.

We can now build and run this version of our web server application using the same Maven and Java commands. Executing the same curl command:

$ curl -X GET http://localhost:8080

You should see "Greetings from the web server!"

Security Component

Helidon's security component provides authentication, authorization, audit, and outbound security. There is support for a number of implemented security providers for use in Helidon applications:
• HTTP Basic Authentication
• HTTP Digest Authentication
• HTTP Signatures
• Attribute Based Access Control (ABAC) Authorization
• JWT Provider
• Header Assertion
• Google Login Authentication
• OpenID Connect
• IDCS Role Mapping

You can use one of three approaches to implement security in your Helidon application:
• a builder pattern, where you manually provide configuration
• a configuration pattern, where you provide configuration via a configuration file
• a hybrid of the builder and configuration patterns

We will be using the hybrid approach to implement security in our application, but we need to do some housekeeping first. Let's review how to reference the users defined under the security node of our configuration file. Consider the following string:

security.providers.0.http-basic-auth.users.0.login

When the parser comes across a number in the string, it indicates there are one or more subnodes in the configuration file. In this example, the 0 right after providers directs the parser to move into the first provider subnode, http-basic-auth. The 0 right after users directs the parser to move into the first user subnode, containing login, password, and roles. Therefore, the above string will return the login for the user, ben, when passed into the config.get() method; the analogous strings ending in password and roles return the rest of that user's information.
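The numeric-index traversal just described can be illustrated with a plain-Java toy. This is not Helidon's Config implementation; the nested Map/List structure and the get() helper below are assumptions made purely for illustration:

```java
import java.util.List;
import java.util.Map;

// Toy illustration (not Helidon's implementation) of how a dotted key with
// numeric segments, e.g. "security.providers.0.http-basic-auth.users.0.login",
// walks a tree of nested maps (named nodes) and lists (indexed subnodes).
public class ConfigPathSketch {
    @SuppressWarnings("unchecked")
    public static Object get(Object node, String key) {
        for (String part : key.split("\\.")) {
            if (node instanceof List) {
                // numeric segment selects an element of a YAML sequence
                node = ((List<Object>) node).get(Integer.parseInt(part));
            } else {
                // name segment selects an entry of a YAML mapping
                node = ((Map<String, Object>) node).get(part);
            }
        }
        return node;
    }

    public static void main(String[] args) {
        Map<String, Object> ben = Map.of("login", "ben", "roles", List.of("user", "admin"));
        Map<String, Object> root = Map.of("security", Map.of("providers",
                List.of(Map.of("http-basic-auth", Map.of("users", List.of(ben))))));
        System.out.println(get(root, "security.providers.0.http-basic-auth.users.0.login")); // prints "ben"
    }
}
```

The key point the toy captures: a numeric segment switches the lookup from a mapping to a sequence, which is exactly why the 0 after providers and the 0 after users land on http-basic-auth and on ben, respectively.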
Similarly, the login for the user, mike, would be returned with this string:

security.providers.0.http-basic-auth.users.1.login

Next, let's add a new class to our web server application, AppUser, that implements the SecureUserStore.User interface:

public class AppUser implements SecureUserStore.User {
    private String login;
    private char[] password;
    private Collection<String> roles;

    public AppUser(String login, char[] password, Collection<String> roles) {
        this.login = login;
        this.password = password;
        this.roles = roles;
    }

    @Override
    public String login() {
        return login;
    }

    @Override
    public boolean isPasswordValid(char[] chars) {
        return false;
    }

    @Override
    public Collection<String> roles() {
        return roles;
    }

    @Override
    public Optional<String> digestHa1(String realm, HttpDigest.Algorithm algorithm) {
        return Optional.empty();
    }
}

We will use this class to build a map of roles to users, that is:

Map<String, AppUser> users = new HashMap<>();

To accomplish this, we add a new method, getUsers(), to our web server application that populates the map using the configuration from the http-basic-auth subsection of the configuration file:

private static Map<String, AppUser> getUsers(Config config) {
    Map<String, AppUser> users = new HashMap<>();

    ConfigValue<String> ben = config.get("security.providers.0.http-basic-auth.users.0.login").asString();
    ConfigValue<String> benPassword = config.get("security.providers.0.http-basic-auth.users.0.password").asString();
    ConfigValue<List<Config>> benRoles = config.get("security.providers.0.http-basic-auth.users.0.roles").asNodeList();

    ConfigValue<String> mike = config.get("security.providers.0.http-basic-auth.users.1.login").asString();
    ConfigValue<String> mikePassword = config.get("security.providers.0.http-basic-auth.users.1.password").asString();
    ConfigValue<List<Config>> mikeRoles = config.get("security.providers.0.http-basic-auth.users.1.roles").asNodeList();

    users.put("admin", new AppUser(ben.get(), benPassword.get().toCharArray(), Arrays.asList("user", "admin")));
    users.put("user", new AppUser(mike.get(), mikePassword.get().toCharArray(), Arrays.asList("user")));

    return users;
}

Now that we have this new functionality built into our web server application, let's update the startServer() method to add security with Helidon's implementation of HTTP Basic Authentication:

private static void startServer() throws Exception {
    Config config = Config.create();
    ServerConfiguration serverConfig = ServerConfiguration.create(config.get("server"));

    Map<String, AppUser> users = getUsers(config);
    displayAuthorizedUsers(users);

    SecureUserStore store = user -> Optional.ofNullable(users.get(user));

    HttpBasicAuthProvider provider = HttpBasicAuthProvider.builder()
        .realm(config.get("security.providers.0.http-basic-auth.realm").asString().get())
        .subjectType(SubjectType.USER)
        .userStore(store)
        .build();

    Security security = Security.builder()
        .config(config.get("security"))
        .addAuthenticationProvider(provider)
        .build();

    WebSecurity webSecurity = WebSecurity.create(security)
        .securityDefaults(WebSecurity.authenticate());

    Routing routing = Routing.builder()
        .register(webSecurity)
        .get("/", (request, response) -> response.send(config.get("app.greeting").asString().get() + "\n"))
        .get("/admin", (request, response) -> response.send("Greetings from the admin, " + users.get("admin").login() + "!\n"))
        .get("/user", (request, response) -> response.send("Greetings from the user, " + users.get("user").login() + "!\n"))
        .build();

    WebServer webServer = WebServer
        .create(serverConfig, routing)
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS);

    System.out.println("INFO: Server started at: http://localhost:" + webServer.port() + "\n");
}

As we did in the previous example, we build the instance variables, config and serverConfig. We then build our map of roles to users, users, with the getUsers() method shown above. Using Optional for null type-safety, the store instance variable is built from the SecureUserStore interface, as shown with the lambda expression. A secure user store is used for both HTTP Basic Authentication and HTTP Digest Authentication. Please keep in mind that HTTP Basic Authentication can be unsafe, even when used with SSL, as passwords are not required.

We are now ready to build an instance of HttpBasicAuthProvider, one of the implementing classes of the SecurityProvider interface. The realm() method defines the security realm name that is sent to the browser (or any other client) when unauthenticated. Since we have a realm defined in our configuration file, it is passed into the method. The subjectType() method defines the principal type a security provider would extract or propagate. It accepts one of two SubjectType enumerations, namely USER or SERVICE. The userStore() method accepts the store instance variable we just built to validate users in our application.

With our provider instance variable, we can now build an instance of the Security class, used to bootstrap security and integrate it with other frameworks. We use the config() and addAuthenticationProvider() methods to accomplish this. Please note that more than one security provider may be registered by chaining together additional addAuthenticationProvider() methods. For example, let's assume we defined instance variables, basicProvider and digestProvider, to represent the HttpBasicAuthProvider and HttpDigestAuthProvider classes, respectively.
Our security instance variable may be built as follows:

Security security = Security.builder()
    .config(config.get("security"))
    .addAuthenticationProvider(basicProvider)
    .addAuthenticationProvider(digestProvider)
    .build();

The WebSecurity class implements the Service interface, which encapsulates a set of routing rules and related logic. The instance variable, webSecurity, is built using the create() method by passing in the security instance variable, and the WebSecurity.authenticate() method, passed into the securityDefaults() method, ensures the request will go through the authentication process.

Our familiar instance variable, routing, built in the previous two examples, looks much different now. It registers the webSecurity instance variable and defines the endpoints, '/', '/admin', and '/user', by chaining together get() methods. Notice that the /admin and /user endpoints are tied to the users, ben and mike, respectively.

Finally, our web server can be started! After all the machinery we just implemented, building the web server looks exactly like the previous example. We can now build and run this version of our web server application using the same Maven and Java commands and execute the following curl commands:
• $ curl -X GET http://localhost:8080/ will return "Greetings from the web server!"
• $ curl -X GET http://localhost:8080/admin will return "Greetings from the admin, ben!"
• $ curl -X GET http://localhost:8080/user will return "Greetings from the user, mike!"

You can find a comprehensive server application that demonstrates all three versions of the startServer() method related to the three core Helidon SE components we just explored. You can also find more extensive Helidon security examples that will show you how to implement some of the other security providers.
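The Optional-based lookup used for the secure user store above (store = user -> Optional.ofNullable(users.get(user))) is a pattern worth isolating. The sketch below reproduces it stand-alone; the UserStore interface is a hypothetical stand-in, not Helidon's SecureUserStore:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Self-contained sketch of the Optional-based user lookup pattern: the store is
// just a single-method interface, so a lambda over a Map implements it, and
// absent users surface as Optional.empty() instead of null.
public class UserStoreSketch {
    interface UserStore {
        Optional<String> user(String login);
    }

    public static UserStore inMemoryStore() {
        Map<String, String> users = new HashMap<>();
        users.put("ben", "admin");
        users.put("mike", "user");
        // Same shape as the tutorial's lambda: Optional.ofNullable(users.get(user))
        return login -> Optional.ofNullable(users.get(login));
    }

    public static void main(String[] args) {
        UserStore store = inMemoryStore();
        System.out.println(store.user("ben").orElse("unknown"));  // prints "admin"
        System.out.println(store.user("eve").orElse("unknown"));  // prints "unknown"
    }
}
```

Because missing users are represented as Optional.empty() rather than null, the authentication layer can treat "user not found" as an ordinary value and never risks a NullPointerException on an unknown login.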
Helidon MP

Built on top of Helidon SE, Helidon MP is a small, declarative-style API that is an implementation of the MicroProfile specification, a platform that optimizes enterprise Java for a microservices architecture. The MicroProfile initiative, formed in 2016 as a collaboration of IBM, Red Hat, Payara, and Tomitribe, specified three original APIs -- CDI (JSR 365), JSON-P (JSR 374), and JAX-RS (JSR 370) -- considered the minimal set of APIs for creating a microservices application. Since then, MicroProfile has grown to 12 core APIs along with four standalone APIs to support reactive streams and GraphQL. MicroProfile 3.3, released in February 2020, is the latest version. Helidon MP currently supports MicroProfile 3.2.

For Java EE/Jakarta EE developers, Helidon MP is an excellent choice due to its familiar declarative approach with the use of annotations. There is no deployment model and no additional Java EE packaging required. Let's take a look at the declarative style of Helidon MP with this very simple example of starting the Helidon web server, and how it compares to the functional style of Helidon SE:

public class GreetService {
    @GET
    @Path("/greet")
    public String getMsg() {
        return "Hello World!";
    }
}

Notice the difference in this style compared to the Helidon SE functional style.

Helidon Architecture

Now that you have been introduced to Helidon SE and Helidon MP, let's see how they fit together. Helidon's architecture is described in the diagram shown below. Helidon MP is built on top of Helidon SE, and the CDI extensions, explained in the next section, extend the cloud-native capabilities of Helidon MP.

CDI Extensions

Helidon ships with portable Contexts and Dependency Injection (CDI) extensions that support integration of various data sources, transactions, and clients to extend the cloud-native functionality of Helidon MP applications. The following extensions are provided:
• HikariCP, a "zero-overhead" production-ready JDBC connection pool data source
• Oracle Universal Connection Pool (UCP) data sources
• Jedis, a small Redis Java client
• Oracle Cloud Infrastructure (OCI) object storage clients
• Java Transaction API (JTA) transactions

Helidon Quick Start Guides

Helidon provides quick start guides for both Helidon SE and Helidon MP. Simply visit these pages and follow the instructions. For example, you can quickly build a Helidon SE application by executing the following Maven command in your terminal window:

$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.4.4 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-se \
    -Dpackage=io.helidon.examples.quickstart.se

This will generate a small, yet working, application in the folder, helidon-quickstart-se, that includes a test and configuration files for the application (application.yaml), logging (logging.properties), building a native image with GraalVM (native-image.properties), containerizing the application with Docker (Dockerfile and Dockerfile.native), and orchestrating with Kubernetes (app.yaml).

Similarly, you can quickly build a Helidon MP application:

$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-mp \
    -DarchetypeVersion=1.4.4 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-mp \
    -Dpackage=io.helidon.examples.quickstart.mp

This is a great starting point for building more complex Helidon applications, as we will discuss in the next section.
Movie Application

Using a generated Helidon MP quickstart application, additional classes -- a POJO, a resource, a repository, a custom exception, and an implementation of ExceptionMapper -- were added to build a complete movie application that maintains a list of Quentin Tarantino movies. The HelidonApplication class, shown below, registers the required classes:

@ApplicationScoped
@ApplicationPath("/")
public class HelidonApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> set = new HashSet<>();
        set.add(MovieResource.class);
        set.add(MovieNotFoundExceptionMapper.class);
        return Collections.unmodifiableSet(set);
    }
}

You can clone the GitHub repository to learn more about the application.

GraalVM

Helidon supports GraalVM, a polyglot virtual machine and platform that converts applications to native executable code. GraalVM, created by Oracle Labs, is comprised of Graal, a just-in-time compiler written in Java; SubstrateVM, a framework that allows ahead-of-time compilation of Java applications into executable images; and Truffle, an open-source toolkit and API for building language interpreters. The latest version is 20.1.0.

You can convert Helidon SE applications to native executable code using GraalVM's native-image utility, which is a separate installation via GraalVM's gu utility:

$ gu install native-image
$ export GRAALVM_HOME=/usr/local/bin/graalvm-ce-java11-20.1.0/Contents/Home

Once installed, you can return to the helidon-quickstart-se directory and execute the following command:

$ mvn package -Pnative-image

This operation will take a few minutes, but once complete, your application will be converted to native code. The executable file will be found in the /target directory.

The Road to Helidon 2.0

Helidon 2.0.0 is scheduled to be released in late Spring 2020, with Helidon 2.0.0.RC1 available to developers at this time.
Significant new features include support for GraalVM in Helidon MP applications, new Web Client and DB Client components, a new CLI tool, and implementations of the standalone MicroProfile Reactive Messaging and Reactive Streams Operators APIs. Until recently, only Helidon SE applications were able to take advantage of GraalVM, due to the use of reflection in CDI 2.0 (JSR 365), a core MicroProfile API. However, due to customer demand, Helidon 2.0.0 will allow Helidon MP applications to be converted to a native image. Oracle has created this demo application for the Java community to preview this new feature.

To complement the original three core Helidon SE APIs -- Web Server, Configuration, and Security -- a new Web Client API completes the set for Helidon SE. Building an instance of the WebClient interface allows you to process HTTP requests and responses related to a specified endpoint. Just like the Web Server API, the Web Client may also be configured via a configuration file. You can learn more details on what developers can expect in the upcoming GA release of Helidon 2.0.0.

Virtual Panel: the MicroProfile Influence on Microservices Frameworks
by Michael Redlich, Senior Research Technician at ExxonMobil Research & Engineering

Since 2018, several new microservices frameworks -- including Micronaut, Helidon, and Quarkus -- have been introduced to the Java community and have made an impact on microservices-based and cloud-native application development. The MicroProfile community and specification were created to enable more effective delivery of microservices by enterprise Java developers. This effort has influenced how developers currently design and build applications. MicroProfile will continue to evolve with changes to its current APIs and, most likely, the creation of new APIs.
Developers should familiarize themselves with Heroku's "Twelve-Factor App," a set of guiding principles that can be applied with any language or framework to create cloud-ready applications. When it comes to the decision to build an application in either a microservices or monolithic style, developers should analyze the business requirements and technical context before choosing the tools and architecture to use.

In mid-2016, two new initiatives, MicroProfile and the Java EE Guardians (now the Jakarta EE Ambassadors), formed as a direct response to Oracle having stagnated its efforts on the release of Java EE 8. The Java community felt that enterprise Java had fallen behind with the emergence of web services technologies for building microservices-based applications.

Introduced at Red Hat's DevNation conference on June 27, 2016, the MicroProfile initiative was created as a collaboration of vendors -- IBM, Red Hat, Tomitribe, and Payara -- to deliver microservices for enterprise Java. The release of MicroProfile 1.0, announced at JavaOne 2016, consisted of three JSR-based APIs considered minimal for creating microservices: JSR 346, Contexts and Dependency Injection (CDI); JSR 353, Java API for JSON Processing (JSON-P); and JSR 339, Java API for RESTful Web Services (JAX-RS). By the time MicroProfile 1.3 was released in February 2018, eight community-based APIs, complementing the original three JSR-based APIs, had been created for building more robust microservices-based applications. A fourth JSR-based API, JSR 367, the Java API for JSON Binding (JSON-B), was added with the release of MicroProfile 2.0.

Originally scheduled for a June 2020 release, MicroProfile 4.0 was delayed so that the MicroProfile Working Group could be established, as mandated by the Eclipse Foundation.
The working group defines the MicroProfile Specification Process and a formal Steering Committee composed of organizations and Java User Groups (JUGs), namely Atlanta JUG, IBM, Jelastic, Red Hat and Tomitribe. Other organizations and JUGs are expected to join in 2021. The MicroProfile Working Group was able to release MicroProfile 4.0 on December 23, 2020, featuring updates to all 12 core APIs and alignment with Jakarta EE 8. The founding vendors of MicroProfile offered their own microservices frameworks, namely Open Liberty (IBM), WildFly Swarm/Thorntail (Red Hat), TomEE (Tomitribe) and Payara Micro (Payara), that ultimately supported the MicroProfile initiative.

In mid-2018, Red Hat renamed WildFly Swarm, an extension of Red Hat's core application server, WildFly, to Thorntail to give their microservices framework its own identity. However, less than a year later, Red Hat released Quarkus, a "Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best-of-breed Java libraries and standards." Dubbed "Supersonic Subatomic Java," Quarkus quickly gained popularity in the Java community, to the point that Red Hat announced Thorntail's end-of-life in July 2020. Quarkus joined the relatively new frameworks, Micronaut and Helidon, that had been introduced to the Java community less than a year earlier. With the exception of Micronaut, all of these microservices-based frameworks support the MicroProfile initiative.

The core topics for this virtual panel are threefold: first, to discuss how microservices frameworks and building cloud-native applications have been influenced by the MicroProfile initiative; second, to explore approaches to developing cloud-native applications with microservices and monoliths, along with the recent trend of reverting to monolith-based application development; and third, to discuss several best practices for building microservices-based and cloud-native applications.
Panelists
• Cesar Hernandez, senior software engineer at Tomitribe.
• Emily Jiang, Liberty microservices architect and advocate at IBM.
• Otavio Santana, developer relations engineer at Platform.sh.
• Erin Schnabel, senior principal software engineer at Red Hat.

InfoQ: How has the MicroProfile initiative, first introduced in 2016, influenced the way developers are building today's microservices-based and cloud-native applications?

Hernandez: MicroProfile has allowed Java developers to increase their productivity when creating new distributed applications and, at the same time, has allowed them to boost their existing Jakarta EE (formerly known as Java EE) architectures.

Jiang: Thanks to the APIs published by MicroProfile, cloud-native applications using MicroProfile have become slim and portable. With these standard APIs, cloud-native application developers can focus on their business logic, and their productivity increases significantly. The applications not only work on the runtime they were originally developed against; they also work on other runtimes that support MicroProfile, such as Open Liberty, Quarkus, Helidon, Payara, TomEE, etc. Developers learn the APIs once and never need to re-learn a complete set of APIs in order to achieve the same goal.

Santana: The term cloud-native is still a large gray area, and its concept is still under discussion. If you, for example, read ten articles and books on the subject, each will describe a different concept. However, what these concepts have in common is the same objective - get the most out of technologies within the cloud computing model. MicroProfile popularized this discussion and created a place for companies and communities to share successful and unsuccessful cases. In addition, it promotes good practices with APIs, such as MicroProfile Config and the third factor of The Twelve-Factor App.

Schnabel: The MicroProfile initiative has done a good job of providing Java developers, especially those used to Java EE concepts, a path forward. Specific specs, like Config, Rest Client, and Fault Tolerance, define APIs that are essential for microservices applications. That's a good thing.

InfoQ: Since 2018, the Java community has been introduced to Micronaut, Helidon and Quarkus. Do you see a continuation of this trend? Will there be more microservices frameworks introduced as we move forward?

Hernandez: Yes, I think new frameworks and platforms contribute to the synergy that produces innovation within the ecosystem. Over time, these innovations can lead to new standards.

Jiang: It is great to see different frameworks introduced to help Java developers develop microservices. It clearly means Java remains attractive and is still a top choice for developing microservices. The other trend I see is that newly emerging frameworks adopt MicroProfile as the out-of-the-box solution for developing cloud-native microservices, as demonstrated by Quarkus, Helidon, etc. Personally, I think there will be more microservices frameworks introduced, and I predict they might wisely adopt MicroProfile as their cloud-native solution.

Santana: Yes, I strongly believe there is a big trend toward this type of framework, especially to explore ahead-of-time (AOT) compilation and the benefits of faster application cold starts. The frameworks' use of reflection has its trade-offs. For example, at application start and in memory consumption, the framework usually touches the ReflectionData inner class within java.lang.Class. It is held as a SoftReference, which takes a certain time to be released from memory. So, I feel that in the future, some frameworks will generate reflection metadata ahead of time, and other frameworks will generate this type of information at compile time, using the Annotation Processing API or similar.
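Santana's point about reflective lookup versus build-time resolution can be illustrated with plain JDK reflection. This is a minimal sketch (the class and method names are invented for the example): the direct call is resolved at compile time, while the reflective call forces the JVM to build and cache per-class reflection metadata.

```java
import java.lang.reflect.Method;

public class ReflectionCost {
    public static String greet() {
        return "hello";
    }

    public static void main(String[] args) throws Exception {
        // Direct call: resolved at compile time and trivially inlinable by the JIT.
        String direct = greet();

        // Reflective call: the JVM lazily populates per-class reflection metadata
        // (the ReflectionData inner class of java.lang.Class, held via a
        // SoftReference), adding lookup cost and memory pressure -- the overhead
        // that compile-time metadata generation avoids.
        Method m = ReflectionCost.class.getMethod("greet");
        String reflective = (String) m.invoke(null);

        System.out.println(direct.equals(reflective));
    }
}
```

Both paths return the same value; the difference is when the work happens, which is exactly what AOT-oriented frameworks exploit by moving the reflective bookkeeping to build time.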
We can see this kind of evolution already happening in CDI Lite, for example. Another trend in this type of framework is support for native images with GraalVM. This approach is very interesting when working with serverless; after all, if you have code that will run only once, runtime optimizations like JIT compilation and garbage collection add little value.

Schnabel: I don't see the trend stopping. As microservices become more specialized, there is a lot more room to question what is being included in the application. There will continue to be a push to remove unnecessary dependencies and reduce application size and memory footprint for microservices, which will lead either to new Java frameworks, or to more microservices in other languages that aren't carrying 20+ years of conventions in their libraries.

InfoQ: With a well-rounded set of 12 core MicroProfile APIs, do you see the need for additional APIs (apart from the standalone APIs) to further improve on building more robust microservices-based and cloud-native applications?

Hernandez: Eventually, we will need more than the 12 core APIs. The ecosystem goes beyond MicroProfile and Java; the tooling, infrastructure, and other stakeholders greatly influence the creation of new APIs.

Jiang: The MicroProfile community adapts itself and stays agile. It transforms together with the underlying framework or cloud infrastructure. For example, due to the newly established CNCF project OpenTelemetry (OpenTracing + OpenCensus), MicroProfile will need to realign MicroProfile OpenTracing with OpenTelemetry. Similarly, previously adopted technologies might not be mainstream anymore, and MicroProfile will need to align with the new trend.
For example, with the wide adoption of the metrics framework Micrometer, which provides a simple facade over the instrumentation clients of the most popular monitoring systems, the MicroProfile community is willing to work with Micrometer. Other areas where I think MicroProfile can improve are to provide guidance, such as how to do logging, and to require all MicroProfile implementations to support CORS. If any readers have other suggestions, please start a conversation on the MicroProfile Google Group. Personally, I would like to open up the conversation in early 2021 on the new initiatives and gaps MicroProfile should focus on.

Santana: Yes, and there are also improvements that need to, and can, be made to the existing APIs. Many things need to be worked on together with the Jakarta EE team. The first point would be to embrace The Twelve-Factor App even more in the APIs - for example, making it easier for those who need credentials, such as a username and password, in an application through the MicroProfile Config API; JPA and JMS are good examples. Another point would be to think about integrating CDI events with event-driven design and to increasingly explore the functional world within databases and microservices.

Schnabel: There is still a lot of evolution to come in the reactive programming space. We all understand REST, but even that space is seeing some change as pressure grows to streamline capabilities. There is work to do to align with changing industry practices around tracing and metrics, and I think there will continue to be changes as the capabilities that used to be provided by an application server move out into the infrastructure, be that raw Kubernetes, a serverless PaaS-esque environment, or whatever comes next as we try to make Kubernetes more friendly.

InfoQ: How would building cloud-native applications be different if it weren't for microservices? For example, would it be more difficult to build a monolith-based cloud-native application?
Hernandez: When we remove microservices from the cloud-native equation, I immediately think of monolith uber-jar deployments. I feel that the early adopters of container-based infrastructures started taking advantage of cloud-native features using SOAP architectures instead of JAX-RS-based microservices.

Jiang: I don't see a huge difference between building cloud-native applications and microservices. Cloud-native applications include both modular monoliths and microservices. Microservices are normally small, while monoliths are traditionally large. MicroProfile APIs can be used by either monoliths or microservices. The important bit is that cloud-native applications might contain a few modules that share the same release cadence and are interconnected. Cloud-native applications should have their own dedicated database.

Santana: Just as Java didn't die, monoliths won't die anytime soon either! Many of the best practices we use today in the cloud-native environment with microservices can be applied to monoliths. It is worth remembering that The Twelve-Factor App, for example, is based on the book Patterns of Enterprise Application Architecture, which dates from 2003, while the popularization of the cloud environment occurred mainly in 2006 with Amazon AWS. In general, the best practices you need to build a microservices architecture are also needed in the monolith environment - for example, CI/CD, test coverage, The Twelve-Factor App, etc. Since monoliths need fewer physical layers, usually two or three, such as a database and an application, they tend to be easier to configure, monitor and deploy in the cloud environment. The only caveat is that their scalability is generally vertical, which can run into the limits of what your cloud provider is able to offer.

Schnabel: In my opinion, the key characteristics for cloud-native applications are deployment flexibility and frequency: as long as your application does not depend on the specifics of its environment (e.g., it has credentials for backing services injected and avoids special code paths), you're in good shape. If your monolith can be built, tested, and deployed frequently and consistently (hello, automation!), you're ok.

InfoQ: Despite the success of microservices, recent publications have been discussing the pitfalls of microservices, with recommendations to return to monolith-based application development. What are your thoughts on this subject? Should developers seriously consider a move back to monolith-based application development?

Hernandez: Developers should analyze the business requirements and technical context before choosing the tools and architectures to use. As developers, we get excited to test new frameworks and tools. Still, we need to understand and analyze the architecture for a particular scenario. The challenge then becomes finding the balance between the pros and cons of adopting new architectures or moving back to existing ones.

Jiang: Monolith is not evil. Sometimes a modular monolith performs as well as microservices. Microservices are not the only choice for cloud-native solutions. Choosing a monolith or microservices depends on the company's team structure and culture. If you have a small team and all of the applications release at the same cadence, a monolith is the right choice. On the other hand, if you have distributed teams that operate independently, microservices fit well with that team structure and culture. If moving towards microservices causes the team to move slower, moving back to a monolith-based application is wise. In summary, there is no default answer. You have to make your own choice based on your company's setting and culture. Choose whatever suits the company best. You should not measure success based on whether you have microservices or on the number of microservices.
Santana: As developers, we need to understand that there are no silver bullets in any architectural solution. All solutions have their respective trade-offs. The problem was the herd effect, which led the community to think that you, as a developer, are wrong if you are not on microservices. In technology, this was neither the first nor the last buzzword to flood the community. Overall, I view this positively; I feel that the community is mature enough to understand that microservices cannot be applied to everything - that is, common sense and pragmatism are still the best tools for the developer/architect.

Schnabel: I agree with the sentiment: there is no need to start with microservices, especially if you're starting something new. I've found that new applications are pretty fluid in the beginning. What you thought you were going to build isn't quite what you needed in the end. Using a monolithic approach for that initial phase saves a lot of effort: coordinating the development and deployment of many services in the early days of an application is more complicated, and there isn't a compelling reason to start there. Sound software practices inside the monolith can set you up for later refactoring into microservices once your application starts to stabilize. I've found that obvious candidates emerge pretty naturally.

InfoQ: What are some of your recommended best practices for building microservices-based and cloud-native applications?

Hernandez: Analyze first whether the problem to solve fits the microservices approach and the constraints or opportunities of the particular project. All the benefits of the cloud-native approach depend on how well you apply the base architecture.

Jiang: The basic practice for building microservices-based and cloud-native applications is to adopt The Twelve-Factor App, which ensures your microservices perform well in the cloud.
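Factor III of The Twelve-Factor App - storing config in the environment - is also the model that MicroProfile Config standardizes. A minimal plain-Java sketch of the principle, using only the JDK (the variable name PANEL_DB_URL and the fallback URL are illustrative, not from any panelist's project):

```java
public class EnvConfig {
    // Twelve-Factor III: read configuration from the environment so the same
    // build artifact can run unchanged in dev, staging, and production.
    static String get(String key, String fallback) {
        String value = System.getenv(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // MicroProfile Config expresses the same idea declaratively, e.g. via
        // @Inject @ConfigProperty(name = "db.url"); here we show only the
        // underlying principle with the JDK.
        String dbUrl = get("PANEL_DB_URL", "jdbc:postgresql://localhost:5432/dev");
        System.out.println(dbUrl);
    }
}
```

With PANEL_DB_URL unset, the fallback is used; in production, the operator injects the real value through the environment rather than baking it into the artifact.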
Check out my 12 Factor App talk at QCon London 2020 on how to use MicroProfile and Kubernetes to develop cloud-native applications. The golden tip is to use an open and well-adopted standard for your cloud-native applications; then you can rest assured they will work well in the cloud.

Santana: Beyond the popular practices, such as DDD, The Twelve-Factor App, etc., I consider good sense and pragmatism the marks of an excellent practitioner. You don't need to be afraid to apply something simple if it meets the requirements of your business, so escape the herd effect in technology. In the book I'm currently reading, 97 Things Every Cloud Engineer Should Know: Collective Wisdom from the Experts, among several tips I will quote two. The first: it's no problem if you don't use Kubernetes; as I mentioned, don't feel bad about not using a technology when it's not needed. The second is the possibility of using services that increase abstraction and decrease risk, such as PaaS and DBaaS. It is important to understand that this type of service brings the cloud provider's whole know-how and makes operations such as backup/restore and updating services such as databases much simpler. For example, I can mention Platform.sh, which allows deploying cloud-native applications straightforwardly in a GitOps-centric way.

Schnabel: If you're going to use microservices, use microservices. Understand that they are completely decoupled, independent entities with their own evolutionary lifecycle. APIs and contract testing should be on your radar. Do your best to wrap your head around the ramifications of microservices: if you try to ensure that everything works together in a staging environment and then deploy a set of versioned services that work together, you've gone back to a monolithic deployment model. If you have to consistently release two or more services at the same time because they depend on each other, you are dealing with a monolith that has unnecessary moving parts.

Avoid special code paths. They are hard to test and can be harder to debug. Make sure environment-specific dependencies (hosts, ports, credentials for backing services) are injected in a consistent way regardless of the environment your application runs in.

Think about application security up front. Every exposed API is a risk. Every container you build will need to be maintained to mitigate discovered vulnerabilities. Plan for that. When you're moving to a cloud-native environment, you should take it as a given that "making an application work and then forgetting about it and letting it run forever" is no longer an option, and ensure you have the tools in place for ongoing maintenance.

Conclusions

In this virtual panel, we asked the expert practitioners to discuss MicroProfile and microservices frameworks. We even asked for their thoughts on the recent trend to return to monolith-based application development.

Our experts agreed that MicroProfile has influenced the way developers are building microservices-based and cloud-native applications. The Java community should expect new microservices frameworks to emerge as MicroProfile itself will evolve with either new APIs or changes to existing ones. Our panelists also recommended that all developers familiarize themselves with Heroku's "Twelve-Factor App," a set of guiding principles that can be applied with any language or framework in order to create cloud-ready applications.

With the advent of microservices, monolith-based application development had seemingly been characterized as "evil." However, our experts warned against subscribing to such a characterization. Along with describing the pros and cons, they explained situations in which monolith-based application development would be beneficial over microservices-based application development. The challenging part now is for you to take this wisdom and apply it to your specific context and set of challenges.