Jeff Mesnil
Weblog · About

WildFly - The GitOps Way

March 5, 2024

We have improved the Maven plug-in for WildFly to be able to provision and configure the WildFly application server directly from the application source code. This makes it very simple to control the server configuration and management model and to make sure it is tailor-made for the application requirements.

This is a good model for DevOps teams, where a single team is responsible for both the development and the deployment of the application.

However, some of our users are in a different organisational structure, where the Development team and the Operations team work in silos.

In this article, we will show how it is possible to leverage the WildFly Maven plugin to handle the configuration and deployment of WildFly separately from the application development in a loose GitOps manner.

Provision the WildFly Server

We will use a Maven project to control the application server installation and configuration.

mkdir wildfly-gitops
cd wildfly-gitops
touch pom.xml

The pom.xml will configure the provisioning and configuration of WildFly:
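A minimal pom.xml along these lines does the job (the project coordinates and the plugin version shown here are illustrative; use the latest release of the WildFly Maven plugin):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>wildfly-gitops</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <properties>
        <!-- Specify the version of WildFly to provision -->
        <version.wildfly>31.0.0.Final</version.wildfly>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <version>4.2.2.Final</version>
                <configuration>
                    <feature-packs>
                        <feature-pack>
                            <location>org.wildfly:wildfly-galleon-pack:${version.wildfly}</location>
                        </feature-pack>
                    </feature-packs>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>provision</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
```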


This pom.xml will provision (install and configure) WildFly. The version of WildFly is configured with the version.wildfly property (set to 31.0.0.Final in the snippet above).

Let's now install it with:

mvn clean package

Once the execution is finished, you have a WildFly server ready to run in target/server and you can run it with the command:

cd target/server
./bin/standalone.sh

The last log will show that we indeed installed WildFly 31.0.0.Final:

13:21:52,651 INFO  [] (Controller Boot Thread) WFLYSRV0025: WildFly Full 31.0.0.Final (WildFly Core 23.0.1.Final) started in 1229ms - Started 280 of 522 services (317 services are lazy, passive or on-demand) - Server configuration file in use: standalone.xml

At this point you can initialize a Git repository from this wildfly-gitops directory and you have the foundation to manage WildFly in a GitOps way.

The Maven Plugin for WildFly provides rich features to configure WildFly including:

  • using Galleon Layers to trim the server according to the deployment capabilities
  • Running CLI scripts to configure its subsystems (for example, the Logging Guide illustrates how you can add a Logging category for your own deployments)

[Aside] Create Application Deployments

To illustrate how to manage the deployment of applications in this server without direct control of the application source code, we must first create these deployments.

When Dev and Ops teams are separate, the Dev team will have already done these steps and the Ops team only needs to know the Maven coordinates of the deployments.

For this purpose, we will compile and install two quickstart examples from WildFly in our local Maven repository:

cd /tmp
git clone --depth 1 --branch 31.0.0.Final https://github.com/wildfly/quickstart.git
cd quickstart
mvn clean install -pl helloworld,microprofile-config

We have only built the helloworld and microprofile-config quickstarts and put them in our local Maven repository.

We now have two deployments that we want to deploy in our WildFly Server with the Maven coordinates:

  • org.wildfly.quickstarts:helloworld:31.0.0.Final
  • org.wildfly.quickstarts:microprofile-config:31.0.0.Final

Assemble The WildFly Server With Deployments

Now that we have deployments to work with, let's see how we can include them in our WildFly server in a GitOps manner.

We will use a Maven assembly to control the deployments in our server. To do so, we will create an assembly.xml file in the wildfly-gitops directory:
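A descriptor along these lines does the trick (the assembly id and output directory are chosen so that the assembled server lands in target/wildfly-gitops-server/wildfly; the namespace version and include pattern are illustrative):

```xml
<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.1.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/ASSEMBLY/2.1.0 http://maven.apache.org/xsd/assembly-2.1.0.xsd">
    <id>server</id>
    <formats>
        <format>dir</format>
    </formats>
    <includeBaseDirectory>false</includeBaseDirectory>
    <fileSets>
        <!-- copy the provisioned WildFly server -->
        <fileSet>
            <directory>target/server</directory>
            <outputDirectory>wildfly</outputDirectory>
        </fileSet>
    </fileSets>
    <dependencySets>
        <!-- copy any war dependency to standalone/deployments -->
        <dependencySet>
            <useProjectArtifact>false</useProjectArtifact>
            <includes>
                <include>*:*:war</include>
            </includes>
            <outputDirectory>wildfly/standalone/deployments</outputDirectory>
            <!-- rename the deployment to xxx.war instead of the full Maven coordinates -->
            <outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping>
        </dependencySet>
    </dependencySets>
</assembly>
```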



All this verbose file does is:

  • create a directory that is composed of:
    • the content of target/server (that contains the WildFly Server)
    • any war dependency, copied to the standalone/deployments directory
      • and renamed to xxx.war (instead of the full Maven coordinates)

We also need to update the pom.xml to use this assembly:
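In the build section of the pom.xml, a maven-assembly-plugin execution along these lines wires the descriptor in (the plugin version and finalName are illustrative; with the assembly id "server", the output lands in target/wildfly-gitops-server):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.6.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <descriptors>
                    <descriptor>assembly.xml</descriptor>
                </descriptors>
                <finalName>wildfly-gitops</finalName>
                <appendAssemblyId>true</appendAssemblyId>
            </configuration>
        </execution>
    </executions>
</plugin>
```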


We can now run a Maven command to assemble our server:

mvn clean package

When the command is finished, we now have an assembled server in target/wildfly-gitops-server/wildfly:

cd target/wildfly-gitops-server/wildfly

NOTE: There are 2 different "servers" after mvn package is executed:

  • target/server contains the provisioned WildFly Server
  • target/wildfly-gitops-server/wildfly contains the WildFly server (copied from the previous directory) with any additional deployments.

But we did not add any deployment! Let's do it now.

In the wildfly-gitops/pom.xml, we will add a dependency to specify that we want to include the helloworld quickstart:
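Given the Maven coordinates above, the dependency looks like this (the war type matters, as this is what the assembly copies to standalone/deployments):

```xml
<dependencies>
    <dependency>
        <groupId>org.wildfly.quickstarts</groupId>
        <artifactId>helloworld</artifactId>
        <version>31.0.0.Final</version>
        <type>war</type>
    </dependency>
</dependencies>
```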


And that's it!

Let's now run once more mvn clean package.

If we now list the standalone/deployments directory of the assembled server, the helloworld.war deployment is listed:

ls target/wildfly-gitops-server/wildfly/standalone/deployments
README.txt                      helloworld.war

When we run the assembled server, the HelloWorld application is deployed and ready to run:

cd target/wildfly-gitops-server/wildfly
./bin/standalone.sh

14:01:25,307 INFO  [] (ServerService Thread Pool -- 45) WFLYSRV0010: Deployed "helloworld.war" (runtime-name : "helloworld.war")

We can access the application by opening our browser at localhost:8080/helloworld/

At this stage, we have complete control of the WildFly server and the application(s) we want to deploy on it from this wildfly-gitops Git repository.

Let's see what we could do from here.

Add Another Deployment to The Server

We can now add the microprofile-config deployment to the assembled server by adding it as a dependency in the pom.xml:
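Given its Maven coordinates, the dependency looks like this:

```xml
<dependency>
    <groupId>org.wildfly.quickstarts</groupId>
    <artifactId>microprofile-config</artifactId>
    <version>31.0.0.Final</version>
    <type>war</type>
</dependency>
```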


Let's package the server again and start it:

mvn clean package
cd target/wildfly-gitops-server/wildfly
CONFIG_PROP="Welcome to GitOps" ./bin/standalone.sh

The microprofile-config application is deployed and can be accessed from localhost:8080/microprofile-config/config/value

We have added deployments using Maven dependencies, but it is also possible to include them in the assembled server by other means (copying them from a local directory, fetching them from the Internet, etc.). The Assembly Plugin documentation provides additional information for this.

Update The WildFly Server

The version of WildFly that we are provisioning is specified in the pom.xml with the version.wildfly property. Let's change it to use a more recent version of WildFly, 31.0.1.Final:

    <!-- Specify the version of WildFly to provision -->
    <version.wildfly>31.0.1.Final</version.wildfly>

We can repackage the server and see that it is now running WildFly 31.0.1.Final:

mvn clean package
cd target/wildfly-gitops-server/wildfly
./bin/standalone.sh
14:15:23,909 INFO  [] (Controller Boot Thread) WFLYSRV0025: WildFly Full 31.0.1.Final (WildFly Core 23.0.3.Final) started in 1938ms - Started 458 of 678 services (327 services are lazy, passive or on-demand) - Server configuration file in use: standalone.xml

Use Dependabot to Be Notified of WildFly Updates

WildFly provisioning uses Maven artifacts. We can take advantage of this and add a "symbolic" dependency to one of these artifacts in our pom.xml so that Dependabot will periodically check for and propose updates when new versions of WildFly are available:

      <!-- We add the WildFly Galleon Pack as a provided POM dependency
           to be able to use dependabot to be notified of updates -->
      <dependency>
          <groupId>org.wildfly</groupId>
          <artifactId>wildfly-galleon-pack</artifactId>
          <version>${version.wildfly}</version>
          <type>pom</type>
          <scope>provided</scope>
      </dependency>

We use a provided scope as we don't want to pull this dependency into the server, but this ensures that Dependabot is aware of it and triggers updates when a new version is available.


In this article, we showed how you can leverage the WildFly Maven Plug-in to manage WildFly in a GitOps way that is not directly tied to the development of the applications that are deployed to the server.

The code snippets used in this article are available on GitHub.

WildFly and the Twelve-factor App Methodology

September 13, 2023

In my daily job at Red Hat, I'm focused these days on making WildFly run great on container platforms such as Kubernetes.

"Traditional" way to develop and run applications with WildFly

WildFly is a "traditional" application server for Enterprise Java applications. To use it on the cloud, we are making it more flexible and closer to "cloud-native" applications so that it can be used to develop and run a 12-Factor App.

Traditionally, if you were using WildFly on your own machine, the (simplified) steps would be:

  1. Download WildFly archive and unzip it
  2. Edit its XML configuration to match your application requirements
  3. In your code repository, build your deployment (WAR, EAR, etc.) with Maven
  4. Start WildFly
  5. Copy your deployment
  6. Run tests

At this point, your application is verified and ready to use.

There are a few caveats to be mindful of.

  • Whenever a new version of WildFly is available, you have to re-apply your configuration change and verify that the resulting configuration is valid.
  • You run tests against your local download of WildFly with your local modifications. Are you sure that these changes are up to date with the production servers?
  • If you are developing multiple applications, are you using different WildFly downloads to test them separately?

"Cloudy" way to develop and run applications with WildFly

When you want to operate such an application on the cloud, you want to automate all these steps in a reproducible manner.

To achieve this, we inverted the traditional application server paradigm.

Before, WildFly was the top-level entity and you were deploying your applications (i.e. WARs and EARs) on it. Now, your application is the top-level entity and you are in control of the WildFly runtime.

With that new paradigm, the steps to use WildFly on the cloud are now:

  1. In your code repository, configure WildFly runtime (using a Maven plugin)
  2. Use Maven to build your application
  3. Deploy your application in your target container platform

Step (2) is the key as it automates and centralizes most of the "plumbing" that was previously achieved by hand.

If we decompose this step, it actually achieves the following:

  1. Compile your code and generate the deployment
  2. "Provision" WildFly: download it and change its configuration to match the application requirements
  3. Deploy the deployment in the provisioned WildFly server
  4. Run integration tests against the actual runtime (WildFly + your deployments) that will be used in production
  5. Optionally create a container image using Docker
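Concretely, in the application's pom.xml this boils down to a WildFly Maven Plugin execution along these lines (the plugin version is illustrative):

```xml
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>4.2.2.Final</version>
    <executions>
        <execution>
            <goals>
                <!-- provision WildFly and deploy the application in it -->
                <goal>package</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```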

Comparing the two ways to develop and run WildFly can be deceiving: at first glance, they look similar. However, a closer examination shows that the "Cloudy" way unlocks many improvements in terms of productivity, automation, testing and, at least in my opinion, developer joy.

What does WildFly provide for 12-Factor App development?

The key difference is that your Maven project (and its pom.xml) is the single 12-factor's Codebase to track your application. Everything (your application dependencies, WildFly version and configuration changes) is tracked in this repo. You are sure that what is built from this repository will always be consistent. You are also sure that the WildFly configuration is up to date with the production servers because your project is where its configuration is updated. You are not at risk of deploying your WAR in a different version of the server, or in a server that has not been properly configured for your application.

Using the WildFly Maven Plugin to provision WildFly ensures that all your 12-factor's Dependencies are explicitly declared. Whenever a new version of WildFly is released, you can be notified with something like Dependabot and automatically test your application with this new release.

We have enhanced WildFly configuration capabilities so that you can store your 12-factor's Config in your environment. WildFly can now use environment variables to change any of its management attributes or resolve their expressions. Eclipse MicroProfile Config is also available to store any of your application config in the environment.

Connecting to 12-factor's Backing Services is straightforward: with the datasources feature pack, WildFly is able to connect to a database with a few env vars representing its URL and credentials.

Using the WildFly Maven Plugin in your pom.xml, you can simply have different stages for 12-factor's Build, release, run and make sure you build your release artifact (the application image) once and run it on your container platform as needed.

Enterprise Java applications are traditionally stateful, so they do not adhere to 12-factor's Processes unless you refactor your Java application to make it stateless.

WildFly complies with 12-factor's Port binding and you can rely on accessing its HTTP port on 8080 and its management interface on 9990.

Scaling out your application to handle 12-factor's Concurrency via the process model depends on your application architecture. However, WildFly can be provisioned in such a way that its runtime exactly fits your application requirements, "trimming" any capability that is not needed. You can also split a monolithic Enterprise Java application into multiple applications to scale the parts that need it.

12-factor's Disposability is achieved by having WildFly fast booting time as well as graceful shutdown capabilities to let applications finish their tasks before shutting down.

12-factor's Dev/prod parity is enabled when we use continuous deployment and keep a single codebase to reduce the gap between what we develop and what we operate. Using WildFly with container-based testing tools (such as Testcontainers) ensures that what we test is very similar (if not identical) to what is operated.

WildFly has extensive logging capabilities (for its own runtime as well as your application code) and works out of the box with 12-factor's Logs by outputting its content on the standard output. For advanced use cases, you can change its output to use a JSON formatter to query and monitor its logs.

12-factor's Admin processes have been there from the start: WildFly provides an extensive CLI tool to run management operations on a server (running or not). The same management operations can be executed when WildFly is provisioned by the WildFly Maven Plugin to adapt its configuration to your application.
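For example, a hypothetical configuration change run with the standard CLI could look like this (the logger name is illustrative):

```
$ ./bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=logging/logger=com.example:add(level=DEBUG)
```

The same operation can be stored in a .cli script in the codebase and executed at provisioning time by the WildFly Maven Plugin.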


We can develop and operate Enterprise Java applications with a modern software methodology. Some of these principles resonate more if you are targeting a cloud environment but most of them are still beneficial for traditional "bare metal" deployments.

I. Codebase

  • Your Maven project (and its pom.xml) is the single codebase tracking your application and its WildFly configuration.

II. Dependencies

  • All dependencies (including WildFly) are managed by your application pom.xml.

III. Config

  • Use Eclipse MicroProfile Config and WildFly capabilities to read configuration from the environment.

IV. Backing Services

  • Jakarta EE is designed on this principle (e.g. JDBC, JMS, JCA, etc.).

V. Build, release, run

  • With a single mvn package, you can build your release artifact and deploy it wherever you want. The WildFly Maven Plugin can generate a directory or an application image to suit either bare-metal or container-based platforms.

VI. Processes

  • WildFly can run stateless applications but you will have to design them this way :)

VII. Port Binding

  • 8080 for the application, 9990 for the management interface :)

VIII. Concurrency

  • Enterprise Java applications have traditionally scaled up, so some architecture and application changes are needed to make them scale out instead. The lightweight runtime from WildFly is definitely a good opportunity for scaling out Enterprise Java applications.

IX. Disposability

  • WildFly boots fast and gracefully shuts down.

X. Dev/prod parity

  • Use the WildFly Maven Plugin to control WildFly, container-based testing to reduce the integration disparity and keep changes between dev, staging and production to a minimum.

XI. Logs

  • WildFly outputs everything on the standard output. Optionally, you can use a JSON formatter to query and monitor your application logs.

XII. Admin processes

  • WildFly tooling provides CLI scripts to run management operations. You can store them in your codebase to handle configuration changes, migration operations, maintenance operations.


Using the "cloudy" way to develop and operate enterprise applications unlocks many benefits regardless of the deployment platform (container-based or bare metal).

It automates most of the mundane tasks that reduce developer joy & efficiency, while improving the operation of WildFly, which increases operator joy & efficiency.

TLS certificate on

September 13, 2023

Web browsers now treat sites served over HTTP as "Not secure". I finally caved in and added a TLS certificate to this site.

Displayed Padlock achievement: completed

If you are visiting, you can now browse safely and be sure that your credit card numbers will not be stolen. That's progress I suppose...

I host my site on Amazon AWS and use a bunch of their services (S3, Route 53, CloudFront, Certificate Manager) to be able to redirect the HTTP traffic to HTTPS. I will see how much this increases the AWS bill...

More interestingly, I used Let's Encrypt to generate the certificates. It took me less than 5 minutes to generate one (including the acme challenge to verify the ownership of the domain name). This project is a great example of simplifying a complex technology and making it accessible to web publishers.

Health Update

July 3, 2023

On October 17th of last year, while playing basketball, I suffered a ruptured Achilles tendon.

Unfortunately, an initial misdiagnosis and a lengthy waitlist for necessary medical examinations resulted in me having to postpone surgery until December 9th. The tendon rupture measured approximately 6cm, necessitating the use of tissue from adjacent areas of my foot to construct a completely new tendon.

This led to a period of immobilization lasting 45 days. Although the Christmas break was not particularly enjoyable, I consider myself fortunate to have an incredible wife and children who provided unwavering support, showering me with love and kindness throughout the ordeal. My managers and colleagues at Red Hat were also very supportive so that I could focus on my health during that period.

When my boot was finally removed, I caught sight of my foot for the first time, revealing a 15cm scar that I could proudly boast about if I were on the "Jaws" boat :)

Scar of my Achilles tendon
Scar of my Achilles tendon © Jeff Mesnil

By the end of January, I cautiously began walking again, albeit with a noticeable limp. Since then, my rehabilitation has been a gradual journey with its fair share of ups and downs. Yesterday, I was able to run 5 km, but today climbing stairs causes discomfort. I am hopeful that I will achieve a full recovery. As a symbolic "endpoint" to my rehab, I have set a goal to participate in a semi-marathon next year.

Although my competitive basketball days are over, I am still enthusiastic about playing with my kids and continuing to enjoy the sport. I'll play it less and watch it more :) Walking, running, and hiking at my own pace have become my main physical activities, whether I'm by myself or accompanied by friends and family. They provide me energy, focus and a deep appreciation of a functional body.

⇨ DPReview to close

March 22, 2023

Over the years, DPReview has been an invaluable help when I was searching for camera gear.

I seldom browse it now as I am mostly using my phone to take pictures but I am still subscribed to their newsfeed.

If I were to look for a replacement for my aging camera, I'm not sure which site would provide the same quality of technical reviews...

⇨ Writing Greener Java Applications

March 1, 2023

Holly Cummins, a colleague at Red Hat and Java Champion, is making a lot of great content about making software greener.

Her presentation focuses on Quarkus which is a greenfield 1 Java framework focused on Cloud-native applications while my job is on WildFly and JBoss EAP, Java application servers that predate the Cloud.

Her presentation resonates with me professionally and personally.

Professionally, my focus these days is on making WildFly and EAP run great on the Cloud. Doing so in a sustainable manner is important as it has direct impact on our users' costs and performance.

Personally, and more importantly, it resonates with my vision of my work and my life. All my career has been built around Open Source projects. They fit my idea of a good humane society with core values of Openness, Meritocracy and Accountability. Developing Open Source code is now a given for me and I don't envision other ways to perform my job.

The next frontier is now to develop Sustainable code that reduces our impact on the planet. I deeply believe now that any advancements we are making, whether it's the Cloud, Cryptocurrency or Machine Learning, cannot be at the expense of the planet and our future generations.

It's not simple to see how that translates in my daily tasks but we are past the point where we should include sustainability into our guiding principles and I'm making a deliberate conscious choice to do that now at my own level.

  1. Pun intended 

Storm Over Lake

July 31, 2017

Storm Over Lake
 Storm Over Lake © Jeff Mesnil

Yesterday, we made a 3-hour cruise on the Lac du Bourget and the storm was coming right before we left the boat.

Eclipse MicroProfile Config 1.0 and WildFly implementation

July 27, 2017

Eclipse MicroProfile Config 1.0 has been released.

It is quite a milestone as it is the first specification released by the Eclipse MicroProfile project. It covers a simple need: unify the configuration of Java applications from various sources with a simple API:

@ConfigProperty(name = "app.timeout", defaultValue = "5000")
long timeout;

The developer no longer needs to check for configuration files, System properties, etc. He or she just specifies the name of the configuration property (and an optional default value). The Eclipse MicroProfile Config specification ensures that several sources will be queried in a consistent order to find the most relevant value for the property.

With Eclipse MicroProfile Config 1.0 API available, I have released wildfly-microprofile-config 1.0.1.

This project contains an implementation of the specification:


This implementation passes the MicroProfile Config 1.0 TCK. It can be used by any CDI-aware container/server (i.e. that are able to load CDI extensions).

This project also contains a WildFly extension so that any application deployed in WildFly can use the MicroProfile Config API. The microprofile-config subsystem can be used to configure various config sources, such as a directory-based one for OpenShift/Kubernetes config maps (as described in a previous post); the properties can also be stored in the microprofile-config subsystem itself:

<subsystem xmlns="urn:wildfly:microprofile-config:1.0">
    <config-source name="appConfigSource">
        <property name="app.timeout" value="2500" />
    </config-source>
    <config-source name="configSourceFromDir" dir="/etc/config/numbers-app" />
</subsystem>

Finally, a Fraction is available for WildFly Swarm so that any Swarm application can use the Config API as long as it depends on the appropriate Maven artifact:
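In the application's pom.xml, that dependency looks like this (version omitted here; align it with your Swarm release):

```xml
<dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>microprofile-config</artifactId>
</dependency>
```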


It is planned that this org.wildfly.swarm:microprofile-config Fraction will eventually move to Swarm's own Git repository so that Swarm will be able to autodetect applications using the Config API and load the dependency automatically. But, for the time being, the dependency must be added explicitly.

If you have any issues or enhancements you want to propose for the WildFly MicroProfile Config implementation, do not hesitate to open issues or propose contributions for it.

A Look At Eclipse MicroProfile Healthcheck

July 7, 2017

I recently looked at the Eclipse MicroProfile Healthcheck API to investigate its support in WildFly.
WildFly Swarm is providing the sample implementation, so I am very interested in making sure that WildFly and Swarm can both benefit from this specification.

This specification and its API are still being designed and anything written in this post will likely be obsolete when the final version is released. But without further ado...

Eclipse MicroProfile Healthcheck

The Eclipse MicroProfile Healthcheck is a specification to determine the healthiness of an application. It defines a HealthCheckProcedure interface that can be implemented by an application developer. It contains a single method that returns a HealthStatus: either UP or DOWN (plus some optional metadata relevant to the health check). Typically, an application would provide one or more health check procedures to check healthiness of its parts. The overall healthiness of the application is then determined by the aggregation of all the procedures provided by the application. If any procedure is DOWN, the overall outcome is DOWN. Else the application is considered as UP.

The specification has a companion document that specifies an HTTP endpoint and JSON format to check the healthiness of an application.

Using the HTTP endpoint, a container can ask the application whether it is healthy. If it is not healthy, the container can take actions to deal with it. It can decide to stop the application and eventually respin a new instance. The canonical example is Kubernetes that can configure a liveness probe to check this HTTP health URL (OpenShift also exposes this liveness probe).
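As a sketch, a Kubernetes liveness probe pointing to such an HTTP health endpoint looks like this (the path, port and timings are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /health/
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
```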

WildFly Extension prototype

I have written a prototype of a WildFly extension to support health checks for applications deployed in WildFly and some provided directly by WildFly:

The microprofile-health subsystem supports an operation to check the health of the app server:

[standalone@localhost:9990 /] /subsystem=microprofile-health:check
{
    "outcome" => "success",
    "result" => {
        "checks" => [{
            "id" => "heap-memory",
            "result" => "UP",
            "data" => {
                "max" => "477626368",
                "used" => "156216336"
            }
        }],
        "outcome" => "UP"
    }
}

It also exposes an (unauthenticated) HTTP endpoint: http://localhost:8080/health/

$ curl http://localhost:8080/health/

This HTTP endpoint can be used to configure OpenShift/Kubernetes liveness probe.

Any deployment that defines Health Check Procedures will have them registered to determine the overall healthiness of the process. The prototype has a simple example of a Web app that adds a health check procedure that randomly returns DOWN (which is not very useful ;).

WildFly Health Check Procedures

The Healthcheck specification mainly targets user applications that can apply application logic to determine their healthiness. However I wonder if we could reuse the concepts inside the application server (WildFly in my case). There are "things" that we could check to determine if the server runtime is healthy, e.g.:

  • The amount of heap memory is close to the max
  • some deployments have failed
  • Excessive GC
  • Running out of disk space
  • Some threads are deadlocked

These procedures are relevant regardless of the type of applications deployed on the server.

Subsystems inside WildFly could provide Health check procedures that would be queried to check the overall healthiness. We could for example provide a health check verifying that the used heap memory is less than 90% of the max:

HealthCheck.install(context, "heap-memory", () -> {
   MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
   long memUsed = memoryBean.getHeapMemoryUsage().getUsed();
   long memMax = memoryBean.getHeapMemoryUsage().getMax();
   HealthResponse response = HealthResponse.named("heap-memory")
      .withAttribute("used", memUsed)
      .withAttribute("max", memMax);
   // status is down if used memory is greater than 90% of max memory.
   HealthStatus status = (memUsed < memMax * 0.9) ? response.up() : response.down();
   return status;
});


To better integrate WildFly with Cloud containers such as OpenShift (or Docker/Kubernetes), it should provide a way to let the container check the healthiness of WildFly. The Healthcheck specification is a good candidate to provide such a feature. It is worth exploring how we could leverage it for user deployments and also for WildFly internals (when that makes sense).

Eclipse MicroProfile Config in OpenShift

June 16, 2017

This is another post that covers some work I have been doing around the Eclipse MicroProfile Config as a part of my job working on WildFly and Swarm (first post is here).

This post is about some updates of the project status and work being done to leverage the Config API in OpenShift (or other Docker/Kubernetes-based environment).

Project update

Since last post, WildFly and Swarm projects agreed to host the initial work I did in their GitHub projects and the Maven coordinates have been changed to reflect this. For the time being, everything is hosted at wildfly-extras/wildfly-microprofile-config.

The Eclipse MicroProfile Config 1.0 API should be released soon. Once it is released, we can then release the 1.0 version of the WildFly implementation and subsystem. The Swarm fraction will be moved to Swarm's own Git repo and will be available with future Swarm releases.

Until the Eclipse MicroProfile Config 1.0 API is released, you still have to build everything from wildfly-extras/wildfly-microprofile-config and use the Maven coordinates:


Directory-based Config Source

We have added a new type of ConfigSource, DirConfigSource, that takes a File as a parameter.

When this ConfigSource is created, it scans the file (if it is a directory) and creates a property for each file in the directory. The key of the property is the name of the file and its value is the content of the file.

For example, if you create a directory named /etc/config/numbers-app and add a file in it named num.size with its content being 5, it can be used to configure the following property:

@ConfigProperty(name = "num.size")
int numSize;

There are different ways to use the corresponding DirConfigSource depending on the type of applications.

WildFly Application

If you are deploying your application in WildFly, you can add this config source to the microprofile-config subsystem:

<subsystem xmlns="urn:wildfly:microprofile-config:1.0">
    <config-source name="numbers-config-source" dir="/etc/config/numbers-app" />
</subsystem>

Swarm Application

If you are using Swarm, you can add it to the MicroProfileFraction from your main method:

swarm.fraction(new MicroProfileConfigFraction()
    .configSource("numbers-config-source", (cs) -> {
        // configure the config source, e.g. point it to the /etc/config/numbers-app directory
    }));

Plain Java Application

If you are using the WildFly implementation of the Config API outside of WildFly or Swarm, you can add it to a custom-made Config using the Eclipse MicroProfile ConfigBuilder API.

OpenShift/Kubernetes Config Maps

What is the use case for this new type of ConfigSource?

It maps to the concept of OpenShift/Kubernetes Config Maps so that an application that uses the Eclipse MicroProfile Config API can be deployed in a container and use its config maps as a source of its configuration.

I have added an OpenShift example that shows a simple Java application running in a variety of deployment and configuration use cases.

The application uses two properties to configure its behaviour. The application returns a list of random positive integers (the number of generated integers is controlled by the num.size property and their maximum value by the num.max property):

@ConfigProperty(name = "num.size", defaultValue = "3")
int numSize;

@ConfigProperty(name = "num.max", defaultValue = "" + Integer.MAX_VALUE)
int numMax;

The application can be run as a standalone Java application configured with System Properties:

$ java -Dnum.size=5 -Dnum.max=10 -jar numbers-app-swarm.jar

It can also be run in Docker configured with environment variables:

$ docker run -e "num.size=2" -e "num.max=10" -p 8080:8080 numbers/numbers-app

It can also be run in OpenShift configured with Config Maps:

apiVersion: v1
kind: ConfigMap
metadata:
  name: numbers-config
  namespace: numbers
data:
  num.size: '5'
  num.max: '100'

This highlights the benefit of using the Eclipse MicroProfile Config API to configure a Java application: the application code remains simple and uses the injected values from the Config API. The implementation then figures out all the sources the values can come from (System properties, properties files, environment variables, and container config maps) and injects the appropriate ones.

Eclipse MicroProfile Config (Part I)

April 25, 2017

This is the first post that will cover some work I have been doing around the Eclipse MicroProfile Config as a part of my job working on WildFly and Swarm.

In this post, I will show how to use the Config API from a Java application. The remaining posts will be about developing such new features within the WildFly and Swarm ecosystem.

Eclipse MicroProfile

As stated on its Web site, the mission of the Eclipse MicroProfile is to define:

An open forum to optimize Enterprise Java for a microservices architecture by innovating across multiple implementations and collaborating on common areas of interest with a goal of standardization.

One of the first new APIs they are defining is the Config API, which provides a common way to retrieve configuration coming from a variety of sources (properties files, system properties, environment variables, databases, etc.). The API is very simple and consists mainly of two things:

  • a Config interface that can be used to retrieve (possibly optional) values identified by a name from many config sources
  • a @ConfigProperty annotation to directly inject a configuration value using CDI

The API provides a way to add different config sources from which the properties are fetched. By default, they can come from:

  • the JVM System properties (backed by System.getProperties())
  • the OS environment (backed by System.getenv())
  • properties files (stored in META-INF/microprofile-config.properties)
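As an example, a properties file bundled with the application can provide default values (the FOO value below matches the one displayed by the example later in this post):

```properties
# META-INF/microprofile-config.properties
FOO=My FOO property comes from the file
```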

Sample code using the Config API looks like this:

@Inject
Config config;

@Inject
@ConfigProperty(name = "BAR", defaultValue = "my BAR property comes from the code")
String bar;

@Inject
@ConfigProperty(name = "BOOL_PROP", defaultValue = "no")
boolean boolProp;

Optional<String> foo = config.getOptionalValue("FOO", String.class);

There is really not much to the Config API. It is a simple API that hides all the complexity of gathering configuration from various places so that the application can focus on using the values.

One important feature of the API is that you can define the importance of the config sources. If a property is defined in several sources, the value from the config source with the higher importance is used. This allows, for example, having default values in the code or in a properties file (with low importance) that are used when the application is tested locally. When the application is deployed in a container, environment variables defined by the container have higher importance and are used instead of the defaults.


The code above comes from the example that I wrote as a part of my work on the Config API.

To run the example, you need to first install my project and then run the example project:

$ cd example
$ mvn wildfly-swarm:run
2017-04-14 10:35:24,416 WARN  [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction: Eclipse MicroProfile Config - UNSTABLE        net.jmesnil:microprofile-config-fraction:1.0-SNAPSHOT
2017-04-14 10:35:30,676 INFO  [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm is Ready

It is a simple Web application that returns the values of some fields that are configured using the Config API:

$ curl http://localhost:8080/hello
FOO property = Optional[My FOO property comes from the file]
BAR property = my BAR property comes from the code
BOOL_PROP property = false

We then run the application again with environment variables:

$ BOOL_PROP="yes" FOO="my FOO property comes from the env" BAR="my BAR property comes from the env" mvn wildfly-swarm:run

If we call the application, we see that the environment variables are now used to configure the application:

$ curl http://localhost:8080/hello
FOO property = Optional[my FOO property comes from the env]
BAR property = my BAR property comes from the env
BOOL_PROP property = true
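This override is the importance rule at work: the environment config source has a higher importance than the properties file, so its values win. The self-contained sketch below illustrates that resolution rule (source names and ordinal values are illustrative, loosely following the convention that the environment outranks a properties file):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class OrdinalDemo {

    // A config source: a name, an importance (ordinal), and its properties.
    record Source(String name, int ordinal, Map<String, String> props) {}

    // Resolution rule: among the sources that define the key,
    // pick the value from the one with the highest ordinal.
    static Optional<String> lookup(String key, List<Source> sources) {
        return sources.stream()
                .filter(s -> s.props().containsKey(key))
                .max(Comparator.comparingInt(Source::ordinal))
                .map(s -> s.props().get(key));
    }

    public static void main(String[] args) {
        List<Source> sources = List.of(
            new Source("properties-file", 100, Map.of("BAR", "from file")),
            new Source("environment", 300, Map.of("BAR", "from env")));

        // The environment (ordinal 300) outranks the file (ordinal 100)
        System.out.println(lookup("BAR", sources).orElse("<unset>"));
    }
}
```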

The example uses Swarm and, for those familiar with it, only requires adding two fractions to use the Config API.


I have not yet released a version of my implementation as it is not clear yet where it will actually be hosted (and which Maven coordinates will be used).


This first post is a gentle introduction to the Config API. The specification is not final and I have left out some nice features (such as converters) that I will cover later.

The next post will be about my experience of writing an implementation of this API and the way to make it available to Java EE applications deployed in the WildFly application server or to MicroServices built with WildFly Swarm.

Dockerization of My Web Site

March 9, 2016

This web site is composed of static files generated by Awestruct. I have extended the basic Awestruct project to provide some additional features (link posts, article templates, photo thumbnails, etc.). Unfortunately, some of these extensions use Rubygems that depend on native libraries or uglier spaghetti dependencies.

I recently wanted to write an article but was unable to generate the web site because some Rubygems were no longer working with native libraries on my Mac. Using bundler to keep the gems local to this project does not solve the issue of native libraries being upgraded by either the OS or the package system (such as Homebrew).

This was the perfect opportunity to play with Docker and create an image that I could use to generate the web site independently of my OS.

I created this jmesnil/ image by starting from the vanilla awestruct image created by my colleague, Marek. I tweaked it because he was only installing the Awestruct gem from the Dockerfile while I have a lot of other gems to install for my extensions.

I prefer to keep the gems listed in the Gemfile so that the project can also work outside of Docker, so I added the Gemfile to the Dockerfile before calling bundle install.

This web site is hosted on Amazon S3 and uses the s3cmd tool to push the generated files to the S3 bucket. The s3cmd configuration is stored in my home directory and I need to pass it to the Docker image so that when s3cmd is run inside it, it can use my secret credentials. This is done in a script that I use to start the Docker image:

# Read the s3cmd private keys from my own s3cmd config...
AWS_ACCESS_KEY_ID=`s3cmd --dump-config | grep access_key | grep -oE '[^ ]+$'`
AWS_SECRET_ACCESS_KEY=`s3cmd --dump-config | grep secret_key | grep -oE '[^ ]+$'`

# ... and pass it to the s3cmd inside Docker using env variables
docker run -it --rm \
  -v `pwd`/:/home/jmesnil/ \
  -p 4242:4242 \
  -w /home/jmesnil/ \

Once the Docker image is started by this script, I can then use regular Rake tasks to run a local Web server to write articles (rake dev) or publish to Amazon S3 (rake production).

This is a bit overkill to use a 1.144 GB Docker image to generate a 6MB web site (that only contains text, all the photos are stored in a different S3 bucket) but it is worthwhile as it will no longer be broken every time I upgrade the OS or Brew.

The image is generic enough that it could serve as the basis of any Ruby project using Bundler (as long as required native libs are added to the yum install at the beginning of the Dockerfile).

⇨ Austin Mann's iPhone 6s Camera Review

October 9, 2015

This is a great review of the iPhone 6s camera and summary of its new features.

Marion has bought an iPhone 6s and the camera is a definite improvement when I compare her photos to the ones from my iPhone 6.

Better low light performance and higher resolution are always a plus, but the new feature I prefer is Live Photos. When Apple announced it, I found it superfluous but I changed my mind after watching photos of Raphaël and hearing him babbling...

Stepping Out From Personal Open Source Projects

September 4, 2015

Our first child, Raphaël, was born last September and we will soon celebrate his first birthday.

This year has been the happiest of my life. However, having a baby means that I have less free time than before, and I want to spend this time with my family or doing personal projects I feel passionate about.

These days I feel more passionate about making photography (mostly of Raphaël and Marion) than writing software (I already have a full time job at Red Hat where I enjoy coding on WildFly).

Before our baby's birth, I could spend evenings and weekends working on personal Open Source projects. After one year, it is time to admit that I do not want to do that anymore and act accordingly.

I have decided to flag my personal Open Source projects as no longer maintained. Some of these projects are still quite used; the three main ones all deal with messaging clients:

  • stomp.js - a JavaScript library to write web app/node.js clients for the STOMP protocol
  • StompKit - an Objective-C client for the STOMP protocol
  • MQTTKit - an Objective-C client for the MQTT protocol

I'll modify these projects' READMEs to warn that they are no longer maintained (with a link to this post to give some context).

It's not fair for users to spend time using them and reporting bugs with no warning that their issues will be ignored.

If you are using these projects, I understand that you may be upset that your bugs are not fixed or that the enhancement you request will not be fixed in the original project. Fortunately, these projects are licensed under the Apache License V2. You can fork them (they are hosted on GitHub and use Git as their version control system) and modify them to suit your needs.

I have also had some discussions about donating stomp.js to Apache ActiveMQ.

It is a tough decision to stop maintaining these projects, but it is a decision that I subconsciously made months ago. Now I just have to acknowledge it.

I may revisit this decision when my child is older or when I feel passionate about these projects again. Or I may create other Open Source projects, who knows?

The key thing is that by releasing these projects under an Open Source license, I ensured that their use could outlast my initial contributions to them.

Civilian Sponsorship of Raphaël

July 21, 2015

Earlier this month, we celebrated the civilian sponsorship1 of our son, Raphaël. It was a great sunny day full of laughs and emotion.

Now our son has a godfather and godmother that will be there to take care of him.

Raphaël with his Godmother & Godfather © Jeff Mesnil

I took this opportunity to create a story on Exposure. I learnt about this service from Scott Kelby and this celebration was a good candidate to try this service.

Baptême Civil de Raphaël on Exposure

You can read the story (in French) here.

The Exposure service is really good (I only used their free offer). The image layout options are basic but good enough for such an image-oriented story. There is only one place where I would have preferred a grid layout with the same size for all images. I also would have liked more typography options to better distinguish the (sparse) text.

Scott Kelby's page provides some great examples of Exposure features.

I don't plan to upgrade to their pro offer but I'll keep playing with it as their offer improves.

  1. A civilian sponsorship is not religious but republican. It is performed at the town hall and is a moral commitment. 

Marion & Raphaël

November 17, 2014

Over the weekend I experimented with portraits of Marion holding our baby, Raphaël. I wanted to capture an intimate moment between a mother and her baby. The idea was to have a soft portrait of Raphaël and Marion wrapped in shadows.

Regardless of the darkness that surrounds them, their love is a bright light that will not be dimmed.

Marion & Raphaël © Jeff Mesnil

The end result looks like a chiaroscuro painting of La Vierge à l'Enfant.

Technical (boring) corner

Technically, this is a simple photo with a soft light to preserve this private moment.

I used a one-light setup with a Yongnuo YN-560 III flash reflected by a 34" white umbrella. I positioned the flash at their right, at 45° above them. The light is feathered so they only catch the edge of the light (retrospectively, I should have feathered it even more to increase the intimacy).

The background is a 40"x60" reflector with its black surface. Since the light is feathered, almost none reaches the black background and it remains pure black.

I positioned Marion so that Raphaël's face could catch most of the light and only her right side would be lit... plus a tiny bit of reflection on her left cheek thanks to the almost bald skull of our baby ;)

I used my Fuji X-E2 with its awesome 56mm ƒ/1.2 lens, stopped down to ƒ/5.6 to have enough depth of field for both of them.

Raphaël was intrigued by the flash tests and I just had to make a few pictures to capture this one.

Simple stuff, and a lovely moment that all three of us shared.

⇨ O'Reilly Webcast: Using Messaging Protocols to Build Mobile and Web Applications

November 17, 2014

Last week, I gave a webcast for O'Reilly about using messaging protocols to build mobile and web applications, to promote my book, Mobile & Web Messaging.

This webcast can be watched for free and you can also read the slides I used for the presentation. They contain most of the information I talked about during this 1-hour long webcast.

Thanks to the people at O'Reilly (and especially Yasmina Greco) for setting up this presentation; it was a great and fun experience.

Raphaël Mesnil

October 16, 2014

Our first child, Raphaël Mesnil, was born on September 28th, measuring 52cm and weighing 3.230kg.

Marion and the baby are fine and we enjoy every moment with him.

Raphaël Mesnil © Jeff Mesnil

I am now a father who will make many photographs of his baby :)

I am already making good use of the Fujinon 56mm ƒ/1.2 lens that I bought a few months ago when we learnt that Marion was pregnant.

I am not accustomed to baby photography (or portraits in general) but this will change now that Raphaël is here. We were back at home for one day and I was already setting up a studio to shoot his portrait. Next step is to make photographs of Marion and the baby together.

So much fun, so much love.

Rainy Venezia

September 9, 2014

We went to Venezia last May and had a rainy day during our trip. We spent almost the whole afternoon at a coffee shop and I was shooting the soaked people rushing to cross the street.

This old man stood out with his red umbrella and his leisurely pace.

Rainy Venezia © Jeff Mesnil