Effective Kubernetes for Jakarta EE and MicroProfile Developers

Best practices to get your deployment up and running on Kubernetes and Azure

David Minkovski
Oct 17, 2023
Image generated by the author using Bing AI Image Creator with free icons

Motivation

Kubernetes has become one of the most loved solutions for managing containers. But can you tell me why people love it?

Because it runs your containers (essentially whatever you want) wherever you want, as if by magic. Whether on-premises or in the cloud (Azure), Kubernetes enables engineering teams to ship containers and to scale and manage deployments and clusters easily.

Let’s look at some of those benefits some other time, OK? I am sure you will find plenty of resources out there explaining why K8S is great.

Many customers I have enjoyed working with are very fond of Java.
And that makes perfect sense, especially if you consider Jakarta EE and MicroProfile.

These are widely adopted, industry-standard specifications maintained by the Eclipse Foundation for high-quality enterprise software.

Java is almost 30 years old. There is a huge amount of software and legacy projects out there that want to — no, need to be migrated onto Kubernetes.

Well, how do you go about that?

This is precisely what this article aims to help you with.

For this, we will be using the repository by Reza Rahman, which you can find here, and my humble fork.

This article will help you set up deployment best practices, covering the topics listed below:

  1. Auto-Scaling for Efficiency
  2. Auto-Discovery for Seamless Integration
  3. Load-Balancing for Even Workloads
  4. Self-Healing for Resilience
  5. Monitoring for Insights
  6. Operators for Application Management
  7. CI/CD Pipelines
  8. Running It on Azure

First Things First — Cluster Time

Image generated by the author using Bing AI image Creator

Let's get Kubernetes running on your machine.

I am using Windows, so I will be using Docker Desktop.

However, you can use something like minikube.

Feel free to also check out Azure Kubernetes Service — so you can easily get your cluster up and running in the cloud! Any of these options will give you a working cluster.

Next, we need to install the Kubernetes CLI. Please follow these instructions.

In Docker Desktop (v1.27.2), you can spin up a cluster by checking the “Enable Kubernetes” option in the settings and then hitting “Apply.”

Here’s what that looks like:

Docker Desktop Settings for Kubernetes

Yes. It is that easy!

After some time, you should see a couple of good old Kubernetes default containers spin up (or not, if you did not tick the “show system containers (advanced)” option). Here’s what your screen should look like now:

Kubernetes Containers running on Docker Desktop

Congrats! Now that our cluster is running — let’s see if it’s working by using the following command:

# Display all nodes in our cluster
kubectl get node

What Are We Working With? Coffee Time!

I forked the original repository and added another service to complete the following setup. Here’s what’s included:

  1. Cafe. This is our main Jakarta EE application running on IBM WebSphere Liberty. The service offers a simple CRUD interface, connects to a PostgreSQL database, and we use it to manage the types of coffee we want to offer.
  2. Coffee House. This Quarkus application runs our Coffee House. It orders from the Cafe every 45 seconds, using a scheduled task that calls the REST API to GET the available types of coffee.
Cafe and Coffee House

Let’s get rolling…

The best way to manage resources in Kubernetes is to group them in namespaces. If you prefer to keep things in order, do that.

kubectl create namespace cafe
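If you do create the namespace, remember to target it explicitly in later commands. A quick sketch (the rest of this article sticks to the default namespace):

# Deploy a manifest into the cafe namespace with the -n flag
kubectl apply -f .\database.yml -n cafe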

Database setup

Let’s start with creating our PostgreSQL database service using the database.yml, shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cafe-database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cafe-database
  template:
    metadata:
      name: cafe-database
      labels:
        app: cafe-database
    spec:
      containers:
      - name: cafe-database
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: "trust"
        image: postgres:latest
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: db-volume
      volumes:
      - name: db-volume
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: cafe-database
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 5432
    targetPort: 5432
  selector:
    app: cafe-database

The database will store our coffees. Let's apply the manifest:

#cd jakartaee-kubernetes\localk8s
kubectl apply -f .\database.yml

This command spins up a Deployment with the official Postgres Docker image, exposes port 5432 (the default port for Postgres), and makes it available to the cluster through a ClusterIP service.

What is the ClusterIP service?

The ClusterIP Service gives a consistent, stable address to a group of related applications (Pods) so they can easily talk to each other. This makes communication between them smooth and efficient. It’s handy because even if the Pods change, they can still be reached using this special address.

Or simply said:

You can think of it as an email address for a team of people. When someone wants to talk to the team, they send a message to that email, and the email gets answered by one team member.
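You can see this name resolution in action from a throwaway pod. A quick sketch, assuming the busybox image is pullable from your cluster:

# Resolve the database service name from inside the cluster
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup cafe-database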

Let’s see if our database is running with the following command:

# List all our pods in the default namespace
kubectl get po

Please read, this is highly important!

When it comes to production deployments or important workloads, please use something that is actually production ready.

A service I highly recommend is Azure Database for PostgreSQL.

It’s the simplest way to get a highly reliable and scalable PostgreSQL database with no headaches. I warned you. You will remember this.

Cafe time with Jakarta

One thing I emphasize here is the true power of containerization.

You see — anything that can be containerized can run on Kubernetes.
Lucky for us, there are many ready-made Dockerfiles out there.
So we do not even have to worry about how to containerize properly.

After building the application using Maven, you should have a target folder with the .war file. We can now build the container in the clustering folder.
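In case you need it, the build itself is plain Maven:

# Build the application; the .war ends up in the target folder
mvn clean package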

Be sure to add the .war file to the folder for the Dockerfile. Take a look at the Dockerfile, and you will see where the .war has to be placed.

# Build a docker container based on the current directory
docker build -t localhost:5000/jakartaee-cafe .
Running the docker build

We should be able to see our new docker image in the list:

# List all docker images
docker images

Let’s push our image to a local docker image registry.

We can run a local registry using the following command:

# Run local image registry on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Push the image we created into the registry
docker push localhost:5000/jakartaee-cafe

Now, it’s time! We can run our cafe using the jakartaee-cafe.yml, shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jakartaee-cafe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jakartaee-cafe
  template:
    metadata:
      name: jakartaee-cafe
      labels:
        app: jakartaee-cafe
    spec:
      containers:
      - name: jakartaee-cafe
        env:
        - name: POSTGRES_SERVER
          value: "cafe-database"
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "password"
        image: localhost:5000/jakartaee-cafe
        ports:
        - containerPort: 9080
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 9080
          initialDelaySeconds: 15
          periodSeconds: 30
          timeoutSeconds: 5
          failureThreshold: 2
        livenessProbe:
          httpGet:
            path: /health/live
            port: 9080
          initialDelaySeconds: 15
          periodSeconds: 60
          timeoutSeconds: 10
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: jakartaee-cafe
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 9080
    targetPort: 9080
  selector:
    app: jakartaee-cafe
---
apiVersion: v1
kind: Service
metadata:
  name: jakartaee-cafe-lb
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9080
    targetPort: 9080
  selector:
    app: jakartaee-cafe

Have you seen how we can connect to the database service by using only its name?!

env:
- name: POSTGRES_SERVER
  value: "cafe-database"

I find this mind-blowing. Right? No static IP referencing needed.

This is made possible by Kubernetes' service discovery: the cluster DNS automatically maps each Service name to the Service and, in turn, to the running pods behind it.

This is a huuuge advantage in comparison to hardcoded endpoint IPs.
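For completeness: the short name works because both services live in the same namespace. Across namespaces, you would use the fully qualified service DNS name, which follows a fixed pattern:

# Short form, resolvable within the same namespace
cafe-database:5432

# Fully qualified form, resolvable from any namespace
cafe-database.default.svc.cluster.local:5432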

# Run jakartaee-cafe deployment and service
kubectl apply -f .\jakartaee-cafe.yml

Take a look at the service declaration.

A LoadBalancer Service in Kubernetes is like a public face for your applications. It distributes incoming requests from the outside world to the right places inside your cluster.

Imagine it like being a receptionist at a busy office building. The receptionist directs visitors to the correct office or department when they arrive.

In technical terms, a LoadBalancer Service automatically assigns a public IP address to your applications, making them accessible from the internet. It spreads out the incoming traffic evenly among a group of Pods, ensuring they don’t get overwhelmed.

This is especially useful for applications that need to handle a lot of users at once, like a popular website.

# Get all services 
kubectl get services

Yeah! Right there is our jakartaee-cafe LoadBalancer.

It serves at http://localhost:9080, and our Jakarta EE Cafe application should now be running on your machine inside your cluster.

Let’s have some fun!

Add some coffee types to our application.

Cafe CRUD interface

Let’s check if our REST interface works fine, too.

I like to use Postman for that. Here’s what it looks like:

Postman Request
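If you prefer the command line, curl works just as well. A sketch, assuming the context root and the Coffee schema from the OpenAPI document below:

# Create a coffee
curl -X POST http://localhost:9080/jakartaee-cafe/rest/coffees -H "Content-Type: application/json" -d '{"name":"Espresso","price":2.5}'

# List all coffees
curl http://localhost:9080/jakartaee-cafe/rest/coffees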

Finally, you can also use the browser and open the OpenAPI endpoint. It will look like this:

openapi: 3.0.0
info:
  title: Deployed APIs
  version: 1.0.0
servers:
- url: http://localhost:9080/jakartaee-cafe
- url: https://localhost:9443/jakartaee-cafe
paths:
  /rest/coffees:
    get:
      operationId: getAllCoffees
      responses:
        default:
          description: default response
          content:
            application/xml:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Coffee'
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Coffee'
    post:
      operationId: createCoffee
      requestBody:
        content:
          application/xml:
            schema:
              $ref: '#/components/schemas/Coffee'
          application/json:
            schema:
              $ref: '#/components/schemas/Coffee'
      responses:
        default:
          description: default response
  /rest/coffees/{id}:
    get:
      operationId: getCoffeeById
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: integer
          format: int64
      responses:
        default:
          description: default response
          content:
            application/xml:
              schema:
                $ref: '#/components/schemas/Coffee'
            application/json:
              schema:
                $ref: '#/components/schemas/Coffee'
    delete:
      operationId: deleteCoffee
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: integer
          format: int64
      responses:
        default:
          description: default response
components:
  schemas:
    Coffee:
      type: object
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        price:
          type: number
          format: double

Good, now let’s get to ordering those coffees!

Quarkus Coffeehouse

After building the Quarkus application with Maven, we can build the Docker container using the prepackaged Dockerfiles.

# Build quarkus application docker image
docker build -t localhost:5000/coffeehouse -f .\src\main\docker\Dockerfile.jvm .

# Push it to the registry too
docker push localhost:5000/coffeehouse

We can now run it on our cluster using the coffeehouse.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quarkus-coffeehouse
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quarkus-coffeehouse
  template:
    metadata:
      name: quarkus-coffeehouse
      labels:
        app: quarkus-coffeehouse
    spec:
      containers:
      - name: quarkus-coffeehouse
        image: localhost:5000/coffeehouse
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /q/health/ready
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 30
          timeoutSeconds: 5
          failureThreshold: 2
        livenessProbe:
          httpGet:
            path: /q/health/live
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 60
          timeoutSeconds: 10
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: quarkus-coffeehouse-lb
spec:
  type: LoadBalancer
  selector:
    app: quarkus-coffeehouse
  ports:
  - port: 8080
    name: http
    targetPort: 8080

Here’s the command to run it:

# Start the coffeehouse deployment and service
kubectl apply -f .\coffeehouse.yml

Congrats! Now, our coffeehouse service runs on your machine on localhost:8080.

You might need to wait and refresh after a minute to see the new coffees ordered.

All services running

Scaling our cluster — with great power comes great responsibility

Auto-scaling in Kubernetes allows you to dynamically adjust the number of running pods in a deployment based on the current load or resource utilization. This helps ensure that your applications can handle varying levels of traffic and demand.

In Kubernetes, you can create a Horizontal Pod Autoscaler (HPA) to adjust the number of pods in a deployment automatically.

This is done based on observed CPU utilization or other custom metrics.
To create an HPA for your Jakarta EE application, you can use the following command:

# Autoscale deployment based on cpu-percentage threshold
kubectl autoscale deployment jakartaee-cafe --cpu-percent=80 --min=1 --max=10

This command creates an HPA for the jakartaee-cafe deployment, ensuring CPU utilization across all pods remains below 80%. It also sets a minimum of one pod and a maximum of ten pods.
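If you prefer to keep the autoscaler in version control, here is the declarative equivalent as a sketch (note that the HPA needs a metrics source, such as metrics-server, to observe CPU usage):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jakartaee-cafe
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jakartaee-cafe
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80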

Or we can manually scale it up using the following command:

# Scale deployment to 3 replicas
kubectl scale deployment jakartaee-cafe --replicas=3

Now, you can see three pods running. I know… like I promised: MAGIC.
The load balancer takes care of the rest.

Self-healing for resilience

Image generated by the author using Bing AI image Creator

Self-healing refers to the ability of the system to detect and recover from failures automatically. It ensures that your applications remain available and operational even in the face of pod or node failures.

Here’s how you can set up self-healing in your Kubernetes cluster using so-called Pod Health Probes. Kubernetes allows you to define health checks for your pods. There are two types of probes:

  • Liveness probe: It determines if a pod is running. If the liveness probe fails, Kubernetes will restart the pod.
  • Readiness probe: It determines if a pod is ready to serve traffic. If a readiness probe fails, the pod will be removed from service endpoints.

Why does this play so well with MicroProfile? You guessed it.

Because MicroProfile Health comes with all the specifications needed to match these requirements. If you have not noticed already, it’s part of our pod definition, shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jakartaee-cafe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: jakartaee-cafe
  template:
    metadata:
      name: jakartaee-cafe
      labels:
        app: jakartaee-cafe
    spec:
      containers:
      - name: jakartaee-cafe
        env:
        - name: POSTGRES_SERVER
          value: "db-coffee-demo.postgres.database.azure.com"
        - name: POSTGRES_USER
          value: "jakarta@db-coffee-demo"
        - name: POSTGRES_PASSWORD
          value: "ILoveCafe123!"
        image: acrcoffeedemo.azurecr.io/jakartaee-cafe:v1
        ports:
        - containerPort: 9080
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 9080
          initialDelaySeconds: 15
          periodSeconds: 30
          timeoutSeconds: 5
          failureThreshold: 2
        livenessProbe:
          httpGet:
            path: /health/live
            port: 9080
          initialDelaySeconds: 15
          periodSeconds: 60
          timeoutSeconds: 10
          failureThreshold: 3
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: jakartaee-cafe
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 9080
  - name: https
    port: 443
    targetPort: 9443
  selector:
    app: jakartaee-cafe
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jakartaee-cafe
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jakartaee-cafe
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jakartaee-cafe
            port:
              number: 443

Here is the relevant probe section in isolation:

...
spec:
  containers:
  - name: jakartaee-cafe
    ...
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 9080
      initialDelaySeconds: 15
      periodSeconds: 30
      timeoutSeconds: 5
      failureThreshold: 2
    livenessProbe:
      httpGet:
        path: /health/live
        port: 9080
      initialDelaySeconds: 15
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3
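On the application side, a MicroProfile Health check can be as small as this. A minimal sketch (the class name and check logic are illustrative; Liberty serves the result under /health/live):

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Answers the livenessProbe above via GET /health/live
@Liveness
@ApplicationScoped
public class CafeLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report "up" as long as the application can respond
        return HealthCheckResponse.up("jakartaee-cafe-liveness");
    }
}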

You can observe self-healing in action by deleting a pod and watching Kubernetes automatically create a new pod to replace the one you deleted.

That means no more headaches when your application fails.

However, self-healing mechanisms do not inherently address application-level failures or bugs. Self-healing alone may be insufficient if there are internal issues within the application code, such as logic errors, database connection problems, or other application-specific issues.

Monitoring for Insights

Image generated by the author using Bing AI image Creator

Monitoring your Kubernetes cluster using Grafana and Prometheus is a powerful way to gain insights into the health and performance of your applications and infrastructure.

How does that work?

Our applications expose their metrics on their respective endpoints, which Prometheus scrapes regularly, as configured in the prometheus.yml:

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'jakartaee-cafe-monitor'
scrape_configs:
- job_name: 'jakartaee-cafe-metric'
  metrics_path: /metrics/
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - default
    selectors:
    - role: "pod"
      label: "app=jakartaee-cafe"
- job_name: 'quarkus-coffeehouse'
  metrics_path: /q/metrics/
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names:
      - default
    selectors:
    - role: "pod"
      label: "app=quarkus-coffeehouse"
  scheme: http

Metrics for the Jakarta app can be found at localhost:9080/metrics

For Quarkus, they are served at localhost:8080/q/metrics/

This again plays hand in hand with the MicroProfile Metrics Specification.
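On the Liberty side, MicroProfile Metrics can expose a custom counter with a single annotation. A minimal sketch (the class and metric names are illustrative):

import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.metrics.annotation.Counted;

@ApplicationScoped
public class CoffeeServingCounter {

    // Every invocation increments the coffees_served counter exposed on /metrics
    @Counted(name = "coffees_served", description = "Number of coffees served")
    public void serveCoffee() {
        // business logic goes here
    }
}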

Let’s build the Grafana and Prometheus containers using the Dockerfiles provided in the monitoring folder:

# Build grafana docker image
docker build -t localhost:5000/grafana -f .\Dockerfile-grafana .

# Build prometheus docker image
docker build -t localhost:5000/prometheus -f .\Dockerfile-prometheus .

# Push images to registry
docker push localhost:5000/grafana
docker push localhost:5000/prometheus

# cd jakartaee-cafe/monitoring
# Run the monitoring deployment
kubectl apply -f .\jakartaee-cafe-dashboard.yml

Our Prometheus and Grafana services should be up and running:

All services of the cluster

Now, you can access the Grafana dashboard:

Grafana Dashboard

Be sure to check out Prometheus, too:

Prometheus

We can also add custom metrics about the application (business logic) in our demo code. Below, you can see that our OrderTask uses a Micrometer counter to track the coffees ordered:

package org.coffeehouse.tasks;

import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.coffeehouse.model.Coffee;
import org.coffeehouse.service.CoffeeService;
import org.eclipse.microprofile.rest.client.inject.RestClient;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;
import org.jboss.logging.Logger;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;

@ApplicationScoped
public class OrderTask {

    private static final Logger log = Logger.getLogger(OrderTask.class);

    @RestClient
    CoffeeService coffeeService;

    @Inject
    MeterRegistry registry;

    private List<Coffee> coffeesOrdered = new ArrayList<Coffee>();
    private AtomicInteger coffeesOrderedCount = new AtomicInteger();
    private Random rand = new Random();

    @Scheduled(every = "45s")
    void orderCoffeePeriodically() {
        try {
            log.info("Trying to order coffee...");
            List<Coffee> coffees = coffeeService.getCoffees();
            log.info("Received Coffees to choose from: " + coffees.toString());
            int nextCoffeeIndex = rand.nextInt(coffees.size());
            Coffee coffeeToOrder = coffees.get(nextCoffeeIndex);
            coffeesOrdered.add(coffeeToOrder);
            coffeesOrderedCount.incrementAndGet();
            log.info("Coffee ordered: " + coffeeToOrder.name);
            System.out.println("Coffee to order:" + coffeeToOrder.name + " for: " + coffeeToOrder.price);
            registry.counter("coffee_orders_counter", Tags.of("coffee", coffeeToOrder.name)).increment();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public int getCoffeesOrderedCount() {
        return coffeesOrderedCount.get();
    }

    public List<Coffee> getCoffesOrdered() {
        return coffeesOrdered;
    }
}

We can then visualize the ordered coffees as a pie chart in Grafana. How cool is that?!

Coffee Piechart

Have complex stuff? Kubernetes operators to the rescue

Imagine you have a complex database system that you want to run on Kubernetes. This database system requires specific configurations, replication, backup, and scaling mechanisms.

Things you would need to consider are:

  1. Deployment
    You could manually deploy the database by creating multiple pods, services, and persistent volumes. However, this is a complex process; if not done correctly, it could lead to misconfigurations, downtime, or data loss.
  2. Scaling
    As your application grows, you might need to scale your database by adding more nodes or increasing the resources allocated to each node. Manually managing this scaling process can be error-prone and time-consuming — YIKES.
  3. Backups and recovery
    You need a robust backup and recovery strategy to ensure that you can restore your database in case of data corruption or hardware failures. Setting up and managing regular backups can be a tedious task. You don’t want to lose all your data suddenly, right?
  4. Upgrades
    Periodically, you’ll want to upgrade your database to the latest version to take advantage of new features and security patches.
    This process can be complex and may involve steps like schema updates and data migrations.
  5. Monitoring and alerts
    You need to monitor the health and performance of your database and set up alerts for any anomalies or issues. This requires configuring monitoring tools and creating alerting rules.
  6. High availability and failover
    Achieving high availability for your database involves setting up replication and failover mechanisms and ensuring there is no single point of failure.

Yup, quite a list, I agree. Painful, isn’t it?

Now, imagine you have a so-called Kubernetes Operator tailored for your specific database. This operator understands the intricacies of deploying, managing, and maintaining that database.

  • When you create a custom resource for your database, the operator automatically handles the deployment, including setting up the required pods, services, and volumes.
  • When you want to scale, you update the custom resource, and the operator adds or removes nodes.
  • Backups and recovery are managed by the operator, ensuring regular backups are taken and providing mechanisms for easy recovery.
  • When upgrading the database, you can modify the custom resource to specify the new version. The operator handles the upgrade process, including any necessary data migrations.
  • The operator continuously monitors the database’s health and performance. If any issues are detected, it can take automated actions or trigger alerts.
  • The operator manages replication and failover for high availability, ensuring that your database remains accessible even in a node failure.

In summary, the operator streamlines the deployment, management, and maintenance of complex applications like databases.

It encapsulates domain-specific knowledge and automates tasks that would otherwise require manual intervention, making running such applications in a Kubernetes environment easier.

Lucky for us, here is an example of such an operator for Open Liberty:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: default

resources:
- open-liberty-crd.yaml
- open-liberty-operator.yaml
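Once the operator and CRD are installed, deploying the application can shrink to a single custom resource. A sketch based on the OpenLibertyApplication CRD (field names may vary between operator versions, so check the operator's documentation):

apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: jakartaee-cafe
spec:
  applicationImage: localhost:5000/jakartaee-cafe
  replicas: 2
  service:
    port: 9080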

Be sure to check out other operators on OperatorHub.io.

CI/CD Pipeline

Continuous Integration (CI) is a software development practice where developers regularly merge code changes into a shared repository (like GitHub). Automated tests are run to ensure that the code integrates seamlessly.

Continuous Deployment/Delivery (CD) takes CI a step further. It automates releasing code changes to production environments after passing tests, allowing faster and more reliable software deployment.

In a nutshell: automation, automation, automation!

Let’s do this:

Waaaaait.

CI/CD makes no sense locally. We need to climb up to the cloud.
Hello, Azure, my old friend.

I know I have said it already, but once again:

If you have not tried Azure, you should.

It’s FREE to get started.

After creating your Azure subscription, we will get back to the CI/CD pipeline once we are in the cloud.

Running the Java Kubernetes Cluster on Azure

Image generated by the author using Bing AI image Creator

To interact with Azure and the resource manager, we need the Azure CLI.
Already there? Let’s log in first and check in PowerShell that we get our account details back, to make sure the CLI is working as expected:

# Login to Azure
az login

After you have successfully logged in via the magically appearing browser window, the response should look something like this:

Azure Login Success — With a lot of black

Now, let us create a resource group

Azure resource groups are a way to organize your Azure resources.
This is useful if you have multiple projects running and need to organize your resources accordingly.

Here’s what the code looks like:

# Set Variables: Location & Resource Group Name
$LOCATION = "westeurope"
$RESOURCE_GROUP = "CAFE-RG"

# Create a Resource Group within our Subscription
az group create -l $LOCATION -n $RESOURCE_GROUP

# See all Resource Groups to verify
az group list

Once this is successful, you should be able to see the Resource Group in the Azure Portal as well.

Azure Database for PostgreSQL

If you remember, we had our own database container that we hosted on K8S. But now we are going into the cloud, and like I mentioned… don’t host the database yourself. Please. Azure Database for PostgreSQL is the perfect service to have a fully managed database up and running without hassle.

You will see just how easy it is. Let’s create the server (please edit the username and password):

# Create a postgres server
az postgres server create -l $LOCATION -g $RESOURCE_GROUP -n cafedbsvr -u powerranger -p Secret123!

# Create a database
az postgres db create --name jakartaee-cafe-db-dm --server-name cafedbsvr --resource-group $RESOURCE_GROUP
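Depending on your network setup, you may also need to open the server firewall so the cluster can reach the database. A sketch using the same single-server CLI (the 0.0.0.0 rule allows traffic from Azure services; tighten it for real workloads):

# Allow Azure services to reach the database server
az postgres server firewall-rule create -g $RESOURCE_GROUP -s cafedbsvr -n AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0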

Done! I told you it was easy.

Kubernetes cluster

Wait until you see how easy creating an AKS cluster with two nodes is. Give it a try with the following command:

$AKS_CLUSTER = "cafeakscluster"

# Create cluster
az aks create -g $RESOURCE_GROUP -n $AKS_CLUSTER --enable-managed-identity --node-count 2 --generate-ssh-keys

# Connect to Kubernetes Context
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER

# Get cluster nodes
kubectl get nodes

Container Registry — Our Image Manager in the Cloud

Container Registry on Azure

Remember our local docker image registry running on localhost:5000?

Now we need one in the cloud.

The Azure Container Registry is the place where we will store our container images. Our services can then pull the image we want and deploy it. So, let’s create one!

# Name for our registry
$REGISTRY_NAME = "registrycafeacr"

# Create the Container Registry
$ACR = az acr create --resource-group $RESOURCE_GROUP --name $REGISTRY_NAME --sku Basic

# This is the registry server address
$ACR_SERVER = "$REGISTRY_NAME.azurecr.io"

# Log into our Container Registry and see that "Login succeeded"
az acr login --name $REGISTRY_NAME

Good. Now, we need to tag our local images with our new registry to push them to the cloud.

For that, we can use the following commands:

# Tag and push the image to our registry
docker tag localhost:5000/jakartaee-cafe $ACR_SERVER/jakartaee-cafe
docker push $ACR_SERVER/jakartaee-cafe

# Tag and push the image to our registry
docker tag localhost:5000/coffeehouse $ACR_SERVER/coffeehouse
docker push $ACR_SERVER/coffeehouse

You will need to reference the proper container registry in your .yml files.
But to fetch the images, we must first authenticate to our registry using an imagePullSecret and a registry admin user.

You would not want anyone to be able to just pull your images, right?

Enable the admin user for our registry and retrieve its credentials, so the cluster can pull images from our Azure Container Registry.

# Create an Admin User for our registry
az acr update -n $REGISTRY_NAME --admin-enabled true

# Retrieve the ACR Credentials for our Admin User
$ACR_USERNAME = az acr credential show --name $REGISTRY_NAME --query username
$ACR_PASSWORD = az acr credential show --name $REGISTRY_NAME --query passwords[0].value

Time to set up our imagePullSecret using a Kubernetes secret:

# Create a secret to pull the images
kubectl create secret docker-registry myregistrykey --docker-server=$REGISTRY_NAME.azurecr.io --docker-username=$ACR_USERNAME --docker-password=$ACR_PASSWORD

Let’s spin up the services! We can now use the same .yml files we used before to spin up our applications.

But we need to make one adjustment. Can you guess which one?

It’s the database environment configuration in our jakartaee-cafe.yml. Make sure to fill in the values you used for your Azure database service.
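Here is a sketch of the adjusted section, using the server and credentials created earlier (replace them with your own values, and note the imagePullSecrets reference so the cluster can pull from our registry):

spec:
  containers:
  - name: jakartaee-cafe
    image: registrycafeacr.azurecr.io/jakartaee-cafe
    env:
    - name: POSTGRES_SERVER
      value: "cafedbsvr.postgres.database.azure.com"
    - name: POSTGRES_USER
      value: "powerranger@cafedbsvr"
    - name: POSTGRES_PASSWORD
      value: "Secret123!"
  imagePullSecrets:
  - name: myregistrykey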

Ingress controller

An Ingress Controller is a key component in Kubernetes that manages external access to the services in your cluster. It acts as a reverse proxy and load balancer, directing traffic from the outside world to the appropriate services inside your Kubernetes cluster.

We can install one using this command:

# Install Ingress Controller
helm install ingress-nginx ingress-nginx/ingress-nginx --set controller.replicaCount=2 --set controller.nodeSelector."kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
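If Helm does not know the ingress-nginx chart yet, register its repository first:

# Register the official ingress-nginx chart repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update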

And so we have our external endpoint where our app is running:

kubectl get service

If you prefer to use a managed gateway service — check out Azure Application Gateway Ingress Controller.

The running application is now available here: http://EXTERNAL_IP.

CI/CD pipeline

Ah, right, I knew there was something. Let us add the cherry on top.

Now it’s time to configure GitHub to have a pipeline that will automate the deployment for us. In this workflow file, you will find an example of how you can set up the pipeline:

name: Main Build

on:
workflow_dispatch:

jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2

- name: Login to Azure
uses: azure/login@v1
with:
creds: '${{ secrets.AZURE_CREDENTIALS }}'
- uses: azure/aks-set-context@v3
with:
resource-group: CAFE-RG
cluster-name: cafeakscluster
- uses: azure/setup-kubectl@v3

- name: Delete Jakarta EE Cafe Deployment
run: kubectl delete -f devops/jakartaee-cafe.yml
continue-on-error: true

- name: Set up Java
uses: actions/setup-java@v1
with:
java-version: '1.8'

- name: Cache Maven packages
uses: actions/cache@v1
with:
path: ~/.m2
key: ${{ runner.os }}-m2-${{ hashFiles('**/pom.xml') }}
restore-keys: ${{ runner.os }}-m2

- name: Build with Maven
run: mvn clean package --file devops/jakartaee-cafe/pom.xml

- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}

- name: Build and push Docker image
uses: docker/build-push-action@v2
with:
context: devops
push: true
tags: ${{ secrets.DOCKERHUB_USERNAME }}/jakartaee-cafe:v4

- name: Create Azure Cafe Deployment
run: kubectl create -f devops/jakartaee-cafe.yml

However, we will also need a so-called service principal so the pipeline can access our Azure resources. We must store its credentials as secrets in our GitHub repository to authenticate.
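A sketch of how such a service principal is typically created (the name and scope are illustrative; the JSON output is what goes into the AZURE_CREDENTIALS secret):

# Create a service principal and emit credentials for azure/login
az ad sp create-for-rbac --name cafe-pipeline --role contributor --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/CAFE-RG --sdk-auth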

You can find anything about this setup in this Readme.

That’s it! We are in the cloud. Now, how easy was that?!

Summary

In this article, we have explored how Kubernetes, MicroProfile, and Jakarta EE applications can work together to create cloud-native applications that are scalable, resilient, and portable.

We have seen how Kubernetes provides a platform for deploying and managing containers and how MicroProfile adds specifications for developing microservices.

I hope this article has given you some insights and inspiration for your own projects. As the cloud computing landscape evolves, so do the technologies and practices that enable us to build better applications.

Kubernetes, MicroProfile, and Jakarta EE are not static but rather dynamic and adaptive. They are constantly improving and adding new capabilities to meet the needs and challenges of the modern world.

By combining these technologies, we can leverage their strengths and benefits to create applications that are not only functional but also enjoyable and satisfying.

Enjoy building your own applications, and try it out yourself!

Curious about more?

My newsletter is a burst of tech inspiration, problem-solving hacks, and entrepreneurial spirit.
Subscribe for your weekly dose of innovation and mind-freeing insights:
https://davidthetechie.substack.com/

Want to see more?

Go to my GitHub page for more cool projects!
