Why it’s important to log using slf4j

September 7th, 2016 | code quality, hadoop, java, logging, scala |

You are a Java or Scala programmer. You are logging stuff with different levels of severity. And you probably already used slf4j even without noticing.

This post is a global overview of its ecosystem: why it exists and how it works. It’s not because you use something every day that you know its details, right?

Why does slf4j even exist?

Why do we need something complicated like a logging framework to do something as simple as putting a message on stdout? Because not everybody wants to use only stdout, and because your dependencies have their own logging logic too.

slf4j needs love

slf4j is an API that exposes logging methods (logger.info, logger.error and so on). It’s just a facade, an abstraction, an interface. By itself, it can’t log anything. It needs an implementation, a binding, something that truly logs the message somewhere. slf4j is just the entry point, it needs an exit.

slf4j breathes in logs

But it can also serve as an exit for other logging systems, thanks to the logging adapters/bridges that redirect other logging frameworks to slf4j. Hence, you can make all your application logs go through the same pipe even if the originating logging systems are different.

slf4j is magic

The magic in all that? You can do all this and swap the implementation without altering the existing code.

 

We are going to see several logging implementations slf4j can be bound to.

I’m going to use Scala code because it’s more concise, but that’s exactly the same in Java.

Simple logging using JUL

JUL stands for java.util.logging. This package has existed since JDK 1.4 (JSR 47). It’s quite simple to use and does the job:

val l = java.util.logging.Logger.getLogger("My JUL")
l.info("coucou")

Output:

Aug 18, 2016 11:41:00 PM App$ delayedEndpoint$App$1
INFO: coucou

App is my class, delayedEndpoint is the method.

It’s configurable through its API:

import java.util.logging._

// we create a logger that accepts ALL levels
val l = Logger.getLogger("My JUL")
l.setLevel(Level.ALL)
// we output ALL the logs to the console
val h = new ConsoleHandler
h.setLevel(Level.ALL)

// and to a file, but only records greater or equal to WARNING
val f = new FileHandler("warn.log", true)
f.setLevel(Level.WARNING)
f.setFormatter(new SimpleFormatter)

l.addHandler(h)
l.addHandler(f)

// log stuff
l.entering(classOf[App].toString, "myMethod")
l.info("hello there")
l.severe("badaboom")
l.exiting(classOf[App].toString, "myMethod")

That can output something like:

sept. 07, 2016 11:16:53 PM interface scala.App myMethod
FINER: ENTRY
sept. 07, 2016 11:16:53 PM com.App$ myMethod
INFO: hello there
sept. 07, 2016 11:16:53 PM com.App$ myMethod
INFO: hello there
sept. 07, 2016 11:16:53 PM com.App$ myMethod
SEVERE: badaboom
sept. 07, 2016 11:16:53 PM com.App$ myMethod
SEVERE: badaboom
sept. 07, 2016 11:16:53 PM interface scala.App myMethod
FINER: RETURN

The default format is horrible but we can see our logs. You’ll notice the INFO and SEVERE lines appear twice, but not the FINER ones. That’s because, by default, there is already a console handler attached to the root logger that logs everything from INFO up.
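If the duplicates bother you, a common fix (a small sketch using only the JDK) is to detach the logger from the root logger’s handlers:

```scala
import java.util.logging.{ConsoleHandler, Level, Logger}

val l = Logger.getLogger("My JUL")
// Stop delegating to the root logger's default console handler,
// so records are only published by the handlers we attach ourselves.
l.setUseParentHandlers(false)

val h = new ConsoleHandler
h.setLevel(Level.ALL)
l.addHandler(h)

l.info("logged once") // no duplicated line anymore
```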

It’s also configurable through a properties file often named “logging.properties”.

For instance, on OSX, you can find the JVM global JUL configuration here (that contains the default console handler we just talked about):

/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre/lib/logging.properties

You can use a file of yours by specifying its path in the system properties:

-Djava.util.logging.config.file=src/main/resources/logging.properties

Some values inside must be references (FQCN) that will be loaded dynamically; the others are simple properties (think beans).

.level = INFO
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%4$s: %5$s [%1$tc]%n

We can get a reference to the global logger to change its minimum level:

java.util.logging.Logger.getGlobal.setLevel(Level.ALL)

The output is better:

FINER: ENTRY [Wed Sep 07 23:32:48 CEST 2016]
INFO: hello there [Wed Sep 07 23:32:48 CEST 2016]
SEVERE: badaboom [Wed Sep 07 23:32:48 CEST 2016]
FINER: RETURN [Wed Sep 07 23:32:48 CEST 2016]

Be careful: a custom configuration file is not an override of the default one, it replaces it! If you forget something (especially handlers=), you might not see any logs at all.

Note that we used the handler java.util.logging.ConsoleHandler, but a FileHandler is also available (if unconfigured, it logs into $HOME/java0.log).
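For instance, here is a sketch of a logging.properties sending everything both to the console and to a rotating file (the pattern, limit and count values are illustrative):

```properties
handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
.level=ALL
java.util.logging.ConsoleHandler.level=ALL
# %h = user home directory, %u = unique number to avoid conflicts
java.util.logging.FileHandler.pattern=%h/myapp%u.log
java.util.logging.FileHandler.limit=1000000
java.util.logging.FileHandler.count=3
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
```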

LogManagers

All the Loggers created in the application are managed by a LogManager.

By default, an instance is created on startup. It’s possible to provide another one by specifying the property java.util.logging.manager.

It’s often used along with log4j2, which implements a custom LogManager (available in the package org.apache.logging.log4j:log4j-jul):

-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

This way, the custom manager can get a hand on any Logger created in the application.

It can change their behavior and where they read their configuration, for instance. This is what we call a Logging Adapter or a bridge: you can log using JUL in the code and use log4j features to manipulate and save the logs. We’ll go into more details later in this post.

A smarter logging with slf4j-api

Let’s go into the main subject: slf4j.

The API

First, we need to add a dependency to its API:

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
val sl: Logger = LoggerFactory.getLogger("My App")
sl.info("hello")

We are getting some logs, but not what we expect:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

slf4j is using a org.slf4j.helpers.NOPLogger to log, but unfortunately, as the name says, all methods are empty shells:

// org.slf4j.helpers.NOPLogger.java
final public void info(String msg, Throwable t) {
    // NOP
}

The application still works, but without logs. slf4j looks for a class “org.slf4j.impl.StaticLoggerBinder” in the classpath. If it can’t find one, it falls back to the NOPLogger.

A simple slf4j binding

Fortunately, there is a simple implementation of slf4j:

libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.7.21"

Now it can find a org.slf4j.impl.StaticLoggerBinder to create a Logger (a SimpleLogger in this case).

By default, this logger publishes messages to System.err, but it can actually write to System.out or any file.

val sl: Logger = LoggerFactory.getLogger("My App")
sl.info("message from {}", "slf4j!")

Output:

[main] INFO My App - message from slf4j!

The style and destination can be configured using system properties or via a properties file.

-Dorg.slf4j.simpleLogger.showDateTime=true
-Dorg.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss"
-Dorg.slf4j.simpleLogger.levelInBrackets=true
-Dorg.slf4j.simpleLogger.logFile=simpleLogger.log

Here, we say we want to log into a file “simpleLogger.log”.

For the sake of clarity and organization, it’s preferable to put those props in a dedicated file such as src/main/resources/simplelogger.properties:

org.slf4j.simpleLogger.showDateTime=true
org.slf4j.simpleLogger.dateTimeFormat="yyyy-MM-dd HH:mm:ss"
org.slf4j.simpleLogger.levelInBrackets=true
org.slf4j.simpleLogger.logFile=simpleLogger.log

This was our first slf4j logging implementation. But we already saw another one: JUL!

slf4j to JUL

slf4j can redirect its logs to JUL, which provides the actual “writing” part, as we already saw.

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-jdk14" % "1.7.21"

The name “slf4j-jdk14” refers to the fact that the JUL package appeared in JDK 1.4, as we said. Strange name to pick, but well.

Output:

INFO: message from slf4j! [Thu Aug 18 23:45:15 CEST 2016]

The code is the same as previously, we just changed the implementation. Notice the output is different from the SimpleLogger’s.

This logger is actually an instance of JDK14LoggerAdapter. It’s using the style we defined at the beginning in logging.properties, used by JUL, remember?

Note that you don’t have full control over the Logger via the API, as we had when using java.util.logging.Logger directly, which exposes more methods. We just have access to slf4j’s ones. This is why the configuration files come in handy.

Multiple implementations

If we have multiple implementations available, slf4j will have to pick between them, and it will leave you a small warning about that.

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-jdk14" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.7.21"

Output:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [.../slf4j-simple/jars/slf4j-simple-1.7.21.jar!...]
SLF4J: Found binding in [.../org.slf4j/slf4j-jdk14/jars/slf4j-jdk14-1.7.21.jar!...]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
"2016-08-18 23:53:54" [main] [INFO] My App - message from slf4j!

As we said, org.slf4j.impl.StaticLoggerBinder is the class slf4j-api looks for in the classpath to find an implementation. This is the class that must exist in an slf4j implementation jar.

This message is just a warning: the logging will work, slf4j simply picks one of the available implementations and goes with it. But it’s a bad smell that should be fixed, because it might not pick the one you want.

It often happens when pom.xml or build.sbt imports dependencies that themselves depend on one of the slf4j implementations.

They have to be excluded, and your own program should import an slf4j implementation itself. If you don’t, you could run into a no-logging issue.
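In Maven, such an exclusion would look like this (a sketch, shown on the hadoop-client dependency used below):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.3.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```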

A real case causing logs loss

For a real case, let’s import the hadoop client lib:

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0"

If we restart our program, it’s getting more verbose and we’re getting a surprise:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [.../org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.5.jar!...]
SLF4J: Found binding in [.../org.slf4j/slf4j-jdk14/jars/slf4j-jdk14-1.7.21.jar!...]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (My App).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

We can see some log4j warnings, although we never imported log4j, and we don’t even see our own message! Where did it go?

It went into log4j, which is not configured, meaning into a black hole.

One way is to exclude the log4j impl from the dependencies:

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-jdk14" % "1.7.21"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0" exclude("org.slf4j", "slf4j-log4j12")

If we restart our program, we can see that our JUL console logs are back.

Note that the hadoop logging will still be voided, because it still relies on a log4j configuration we never provided.

One way to fix this and get the hadoop logs would be to redirect the log4j API to the slf4j API. It’s possible: we simply need to add a dependency to org.slf4j:log4j-over-slf4j.

Again, we’ll see that in detail later in this article, but the point is: you shouldn’t have multiple logging implementations available in one program.

slf4j implementations should be declared as optional

A best practice when writing a library, or any module that can be imported somewhere, is to declare the slf4j implementation dependency as “optional”:

libraryDependencies += "org.slf4j" % "slf4j-jdk14" % "1.7.21" % "optional"

The Maven equivalent:

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-jdk14</artifactId>
  <optional>true</optional>
</dependency>

With optional, the dependency won’t be pulled transitively.

The program which depends on it can then use any implementation, no need to exclude anything. More details here: https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html.

JCL/ACL

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-jcl" % "1.7.21"

JCL stands for Jakarta Commons Logging.

Jakarta is an old, retired Apache project; basically, it’s known as ACL now: Apache Commons Logging. It’s not maintained anymore (since 2014), but we can find it in old projects.

It serves the same purpose as slf4j, meaning it’s an abstraction over different logging frameworks such as log4j or JUL.

slf4j’s getLogger() will return a JCLLoggerAdapter that looks for a specific “Log” implementation set by the system property “org.apache.commons.logging.Log”.

If not set, it will try to fall back on any implementation it can find in the classpath (log4j, JUL..).

New projects should forget about it. But if they depend on an old project that relies on JCL, adding a bridge to redirect the JCL logs to the project’s own implementation should be considered.
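For the record, the bridge in question is jcl-over-slf4j; a sketch of the sbt dependency:

```scala
// redirects any Jakarta/Apache Commons Logging calls to slf4j
libraryDependencies += "org.slf4j" % "jcl-over-slf4j" % "1.7.21"
```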

log4j

log4j is a widely-used logging framework. v1.x was refactored and improved a lot to create v2.x, called log4j2.

Again, it can be used as an abstraction over a logging implementation, but it can also be used as an implementation itself.

log4j1.2

log4j1.2 reached its end of life in 2015.

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-log4j12" % "1.7.21"

Note that this will pull in the log4j1.2 library too. Here is the dependency tree:


[info] +-org.slf4j:slf4j-log4j12:1.7.21
[info]   +-log4j:log4j:1.2.17
[info]   +-org.slf4j:slf4j-api:1.7.21

When calling slf4j’s getLogger(“My App”), it will use log4j API to create the logger:

org.apache.log4j.LogManager.getLogger(name);

Note that this LogManager has nothing to do with the JUL’s one.

When you don’t have slf4j but just log4j, this is the method you call to get a Logger. slf4j-log4j12 just does the same.

Anyway, that’s not enough:

log4j:WARN No appenders could be found for logger (My App).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

log4j needs a configuration file. We can create a simple properties file “src/main/resources/log4j.properties”:

log4j.rootLogger=DEBUG, STDOUT
log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

If we restart our program, we can see our message:

0 [main] INFO My App - message from slf4j!

Or if we like xml (nobody?), we can create a file “log4j.xml” (notice the lowercase tags):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <appender name="STDOUT" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p [%t] %c (%F:%L) - %m%n"/>
    </layout>
  </appender>

  <root>
    <priority value="debug"/>
    <appender-ref ref="STDOUT"/>
  </root>

</log4j:configuration>
Output:

2016-08-22 01:06:38,194 INFO [main] App$ (App.scala:11) - message from slf4j!

But you shouldn’t use it anymore: log4j1.2 reached its end of life, as we said.

log4j2

Now, let’s say we want to use the latest version of log4j. It may be the most popular slf4j binding nowadays.

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.apache.logging.log4j" % "log4j-slf4j-impl" % "2.6.2"

Notice the organization of the binding is “org.apache.logging.log4j”, and not “org.slf4j” like log4j12’s.

Adding only this dependency is not enough:

Failed to instantiate SLF4J LoggerFactory
Reported exception:
java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/AbstractLoggerAdapter
...

We need to add log4j-api dependency ourselves:

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.apache.logging.log4j" % "log4j-slf4j-impl" % "2.6.2"
libraryDependencies += "org.apache.logging.log4j" % "log4j-api" % "2.6.2"

Not enough yet!

ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console…

We need to add the log4j-core dependency too:

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.apache.logging.log4j" % "log4j-slf4j-impl" % "2.6.2"
libraryDependencies += "org.apache.logging.log4j" % "log4j-api" % "2.6.2"
libraryDependencies += "org.apache.logging.log4j" % "log4j-core" % "2.6.2"

We get another error message (!) :

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

That’s better: we just need a configuration file. That’s the last step.

Let’s create a sample log4j2.xml (notice the caps):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">

  <Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %p [%t] %c (%F:%L) - %m%n"/>
    </Console>
    <File name="A1" fileName="A1.log">
      <PatternLayout pattern="%d %p [%t] %c (%F:%L) - %m%n"/>
    </File>
  </Appenders>

  <Loggers>
    <Root level="debug">
      <AppenderRef ref="STDOUT"/>
      <AppenderRef ref="A1"/>
    </Root>
  </Loggers>

</Configuration>
Our message is finally back and a file A1.log is created too:

2016-08-22 01:51:49,912 INFO [run-main-a] App$ (App.scala:8) - message from slf4j!

log4j2 is excellent because it has a vast collection of Appenders to write the logs to: https://logging.apache.org/log4j/log4j-2.4/manual/appenders.html

  • Console, File, RollingFile, MemoryMappedFile
  • Flume, Kafka, JDBC, JMS, Socket
  • SMTP (emails on errors, woo!)
  • Any Appender can be wrapped in an Async one too (the logging is done in another thread, so the main thread is not blocked by the I/O)
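For instance, here is a sketch of an Async wrapper around a file appender in log4j2.xml (names and pattern are illustrative):

```xml
<Appenders>
  <File name="FILE" fileName="app.log">
    <PatternLayout pattern="%d %p [%t] %c - %m%n"/>
  </File>
  <!-- the Async appender queues the events and writes them from another thread -->
  <Async name="ASYNC">
    <AppenderRef ref="FILE"/>
  </Async>
</Appenders>
```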

logback

logback has the same father as log4j; it was meant to be log4j’s successor.

The syntax of the configuration is therefore quite similar.

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.7"

“logback-classic” will pull in “logback-core” as a dependency, no need to add it.

It runs without any configuration (finally!):

02:17:43.032 [run-main-1f] INFO My App - message from slf4j!

But of course, you can create a logback.xml to customize its behavior:

<configuration debug="true" scan="true">

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="debug">
    <appender-ref ref="STDOUT"/>
  </root>

</configuration>
  • debug: displays some info about the logging system creation on startup
  • scan: modifications to the file are taken into account live. This is particularly useful in production when you just want to get debug messages for a short amount of time.
  • notice the xml style is log4j1.2’s

It’s also possible to use a custom config file:

-Dlogback.configurationFile=src/main/resources/logback.xml

logback has roughly the same collection of appenders as log4j, and some more are available outside the official package.

TLDR

Add a dependency to slf4j, which is a logging interface: “org.slf4j” % “slf4j-api”, and add a logging implementation:

  • to the console: “org.slf4j” % “slf4j-simple” (simplelogger.properties)
  • to java.util.logging (JUL): “org.slf4j” % “slf4j-jdk14” (logging.properties)
  • to JCL/ACL: “org.slf4j” % “slf4j-jcl” (deprecated)
  • to log4j1.2: “org.slf4j” % “slf4j-log4j12” (deprecated, log4j.[properties|xml])
  • to log4j2: “org.apache.logging.log4j” % “log4j-[slf4j-impl|api|core]” (log4j2.xml)
  • to logback: “ch.qos.logback” % “logback-classic” (logback.xml)

A very nice picture to summarize what we just saw (we didn’t talk about slf4j-nop, it’s just a black hole):


http://www.slf4j.org/manual.html

 

So, we learned about multiple implementations/bindings of the slf4j API.

But if your project depends on other projects that are not using slf4j but directly JUL or log4j, it’s possible to redirect them to your own slf4j implementation, thanks to the bridges.

Bridges

Previously, we imported hadoop-client and our logs disappeared because it was using a log4j logger we never configured.

We excluded its implementation from the program and could see our logs again, but the hadoop-client library was still using log4j, and therefore its logs went into the void.

To avoid that, it’s possible to set up a bridge that sends the log4j messages to slf4j, which will then dispatch them where we want.

The bridge package generally contains both sides in its name, as in “org.apache.logging.log4j” % “log4j-to-slf4j” % “2.6.2”.

For instance, with those dependencies :

libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.21"
libraryDependencies += "org.slf4j" % "slf4j-jdk14" % "1.7.21"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0"
libraryDependencies += "org.apache.logging.log4j" % "log4j-to-slf4j" % "2.6.2"

The path of the logs is:
hadoop’s log(…) → ACL → log4j → bridge → slf4j → JUL → System.err
Phew!

val sl: Logger = LoggerFactory.getLogger("My App")
sl.info("message from {}", "slf4j!")
// generate some hadoop logs
new DFSClient(new InetSocketAddress(1337), new Configuration)

We are actually “lucky” because 2 implementations were available for slf4j: log4j (provided by hadoop-client) and “slf4j-jdk14”.

Fortunately for us, slf4j picked “slf4j-jdk14”. Otherwise we would have been trapped in an infinite loop:

hadoop’s log(…) → ACL → log4j → bridge → slf4j → log4j → bridge → slf4j → log4j → bridge → slf4j → …

Output:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [...slf4j-jdk14-1.7.21.jar!...]
SLF4J: Found binding in [...slf4j-log4j12-1.7.5.jar!...]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
INFO: message from slf4j! [Fri Aug 19 01:08:46 CEST 2016]
FINE: dfs.client.use.legacy.blockreader.local = false [Fri Aug 19 01:08:46 CEST 2016]
FINE: dfs.client.read.shortcircuit = false [Fri Aug 19 01:08:46 CEST 2016]
FINE: dfs.client.domain.socket.data.traffic = false [Fri Aug 19 01:08:46 CEST 2016]
FINE: dfs.domain.socket.path = [Fri Aug 19 01:08:46 CEST 2016]
…

Another bridge supposedly doing the same exists: “org.slf4j” % “log4j-over-slf4j” % “1.7.21”. Unfortunately, it creates the infinite loop in our case, because slf4j picks “slf4j-log4j12”:

SLF4J: Found binding in [...slf4j-log4j12-1.7.5.jar!...]
SLF4J: Found binding in [...slf4j-jdk14-1.7.21.jar!...]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.StackOverflowError

But we can explicitly exclude the other implementation:

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0" exclude("org.slf4j", "slf4j-log4j12")

If we do, both bridges are working as expected.

As you can see, without altering anything in the hadoop library, we made it generate its logs where we wanted and in the format we wanted.

Bridges between the common implementations are available (they couldn’t agree on the naming, it seems..):

  • jcl-over-slf4j
  • log4j-over-slf4j
  • jul-to-slf4j
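Note that jul-to-slf4j is a bit special: since JUL lives in the JDK, its classes can’t be replaced on the classpath. You have to install a handler that forwards the JUL records to slf4j, e.g. via logging.properties (a sketch, assuming org.slf4j:jul-to-slf4j is on the classpath):

```properties
# route every JUL record to slf4j
handlers = org.slf4j.bridge.SLF4JBridgeHandler
```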

That’s the power of slf4j and its implementations. It’s completely decoupled from the source.

TLDR

Here’s a picture summarizing the available bridges to slf4j:


http://www.slf4j.org/legacy.html

Performance

Some applications can generate a tremendous amount of logs. Some precautions should be taken:

  • async logging should always be preferred (another thread does the logging, not the caller’s). This is often available in the logging configuration itself.
  • you should not need level guards (if (logger.isDebugEnabled) …) before logging, which brings us to the next point:
  • do not concatenate strings yourself in the message: use the placeholder syntax, such as log.info(“the values are {} and {}”, item, item2). The arguments’ .toString() won’t be computed if it’s not needed (it can be cpu intensive, and it’s simply useless if the log level is not enough).
  • in Scala, you can use https://github.com/Log4s/log4s to avoid this and just use classic string interpolation. It’s based on macros and automatically adds the guard conditions.
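To see why deferring the message matters, here is a toy sketch (not slf4j’s actual mechanism) using a Scala by-name parameter: the message is only built when the level is enabled.

```scala
// Toy logger: `msg` is a by-name parameter (=> String), so building the
// (potentially expensive) string is deferred until the level check passes.
class ToyLogger(debugEnabled: Boolean) {
  var built = 0 // counts how many messages were actually constructed
  def debug(msg: => String): Unit =
    if (debugEnabled) { built += 1; println(msg) }
}

val quiet = new ToyLogger(debugEnabled = false)
quiet.debug { val s = "expensive " * 1000; s } // body is never evaluated
// quiet.built stays at 0: the expensive message was never built
```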

Some benchmarks and comparisons: https://logging.apache.org/log4j/2.x/performance.html

Last notes

slf4j is useful combined with a powerful implementation such as log4j2 or logback.

But be careful when the application is managed by another application, like supervisor, because it can also handle parts of the logging itself, such as file rolling or shipping to logstash. Often, keeping the logging configuration simple (stdout) is enough.

A lot of frameworks have traits, abstract classes, or globals to provide logging directly:

  • Akka: provides LoggingReceive, ActorLogging, akka.event.Logging, akka-slf4j.jar
  • Spark: it uses log4j and had a trait org.apache.spark.Logging (removed in 2.0.0)
  • Play Framework: it uses logback and provides a Logger object/class on top of slf4j’s Logger

 

 

JNA—Java Native Access: enjoy the native functions

August 3rd, 2016 | java, jna, jni, native, scala |

Before JNA: JNI

If you’re into the Java world, you’ve probably heard of JNI: Java Native Interface.
It’s used to call the native functions of the system or of any native library.
Some good JNI explanations and examples here: http://www.ibm.com/developerworks/java/tutorials/j-jni/j-jni.html

Most developers will never use it, because it’s not often necessary to access the system resources, the windows, the volumes, etc. That really depends on your business.

Sometimes, you want to use a library that’s not written in Java but in C, because it’s very performant and battle-tested; then you need to create a bridge. This is where JNI and JNA come into play.

About resources, Java already provides some high-level APIs for certain system aspects (memory, disks), such as:

  • Runtime.getRuntime().maxMemory()
  • Runtime.getRuntime().availableProcessors()
  • File.listRoots()(0).getFreeSpace()

But it’s pretty limited. Behind the scenes, those are declared as native and rely on JNI.

You can use some projects that offer more options, such as oshi (Operating System & Hardware Information). It makes all possible information on the OS and hardware of the machine available (all memory and cpu metrics, network, battery, usb, sensors..).

It’s not using JNI: it’s using JNA!
JNA is JNI’s cousin: created to be simpler to use, writing only Java code (Scala in our case :). Note that there is a slight call overhead compared to JNI, because of the dynamic bindings.

JNA

Basically, it dynamically binds the functions of the native library to functions declared in a Java/Scala interface/trait. Nothing more.

The difficulty comes with the signatures of the functions you want to “import”.
You can easily find their native signatures (Google is our friend), but it’s not always obvious how to translate them into Java/Scala types.

Fortunately, the documentation of JNA is pretty good at explaining the subtle cases: Using the library, FAQ.

 

Let’s review how to use it with Scala and SBT (instead of Java).

How to use it

First, SBT:

libraryDependencies ++= Seq(
  "net.java.dev.jna" % "jna" % "4.2.2",
  "net.java.dev.jna" % "jna-platform" % "4.2.2")

The “jna” dependency is the core.

“jna-platform” is optional. It contains a lot of ready-made interfaces to access standard libraries on several systems: Windows (kernel32, user32, COM..), Linux (X11), Mac. If you plan to use any system library, check out this package first.

Then, the Scala part.

Use the existing platform bindings

With jna-platform, you can use the existing bindings:

import com.sun.jna.Native
import com.sun.jna.platform.win32.Kernel32
import com.sun.jna.ptr.IntByReference

val cn = new Array[Char](256)
val success: Boolean = Kernel32.INSTANCE.GetComputerName(cn, new IntByReference(256))
println(if (success) Native.toString(cn) else Kernel32.INSTANCE.GetLastError())

You can feel the native way of doing things when calling this function (most native functions follow this style):

  • you provide a buffer and its length
  • you get a boolean as result to indicate success/failure
  • in case of a failure, you call GetLastError() to retrieve the error code
  • in case of a success, the buffer contains the name

That’s very manual, but that’s the way. (nowadays, we would rather return the String and throw an exception on failure)

For information, the native signature is:

BOOL WINAPI GetComputerName(
  _Out_   LPTSTR  lpBuffer,
  _Inout_ LPDWORD lpnSize);

A pointer to a buffer to write into, and its size (used both as input and output).

Listing the opened windows

Another, more complex, example to retrieve the list of opened windows:

import com.sun.jna.{Native, Pointer}
import com.sun.jna.platform.win32.{User32, WinUser}
import com.sun.jna.platform.win32.WinDef.HWND

User32.INSTANCE.EnumWindows(new WinUser.WNDENUMPROC {
  override def callback(hWnd: HWND, arg: Pointer): Boolean = {
    val buffer = new Array[Char](256)
    User32.INSTANCE.GetWindowText(hWnd, buffer, 256)
    println(s"$hWnd: ${Native.toString(buffer)}")
    true // return true to continue the enumeration
  }
}, null)

Output:

native@0xb0274: JavaUpdate SysTray Icon 
native@0x10342: GDI+ Window 
native@0x10180: Windows Push Notifications Platform 
(a lot more)...

The native signature of EnumWindows is:

BOOL WINAPI EnumWindows(
  _In_ WNDENUMPROC lpEnumFunc,
  _In_ LPARAM      lParam);
  • we use User32 because it contains the windows-related functions of Windows
  • a WNDENUMPROC is a pointer to a callback; JNA already has an interface of the same name to be able to create this type in the JVM
  • we call another function of User32 to get the title of each window

Create a custom binding

It’s time to fly with our own wings.

Let’s call a famous function of the Windows API: MessageBox. You know, the popups? It’s in User32.lib, but JNA did not implement it. Let’s do it ourselves.

First, we create an interface with the Java/Scala signature of MessageBox, whose native signature is:

int WINAPI MessageBox(
  _In_opt_ HWND    hWnd,
  _In_opt_ LPCTSTR lpText,
  _In_opt_ LPCTSTR lpCaption,
  _In_     UINT    uType);

The Scala equivalence could be:

import com.sun.jna.Pointer
import com.sun.jna.win32.StdCallLibrary

trait MyUser32 extends StdCallLibrary {
  def MessageBox(hWnd: Pointer, lpText: String, lpCaption: String, uType: Int): Int
}

  • We use simple Strings and not Array[Char] because they are only used as inputs (_In_).
  • The name of the function must be exactly the native one (with caps).

Now, we need to instantiate the interface with JNA and call our function:

val u32 = Native.loadLibrary("user32", classOf[MyUser32], W32APIOptions.UNICODE_OPTIONS).asInstanceOf[MyUser32]
val MB_YESNO = 0x00000004
val MB_ICONEXCLAMATION = 0x00000030
u32.MessageBox(null, "Hello there!", "Hi", MB_YESNO | MB_ICONEXCLAMATION)

  • Always use W32APIOptions.UNICODE_OPTIONS or you’ll get into trouble when calling functions (it automatically converts the inputs/outputs of the calls)

It was quite simple, right? That’s the purpose of JNA: you just need an interface with the native method declarations, and you can call them.

The difficulty could be writing the Java signatures, but a tool can help: JNAerator. From the native code, it can generate the Java signatures. Pretty cool!

 

More examples of JNA usage on their github’s: https://github.com/java-native-access/jna/tree/master/contrib

 

Java CLI, GC, memory, and tools overview

January 10th, 2016 | gc, java, performance |

Back in the Java world, I made up my mind and knew that I didn’t know enough, that I wasn’t confident enough.

Therefore I’ve looked at some “simple” aspects of Java (CLI, GC, tools) to consolidate my knowledge, and made this post to give a global overview of what the Java CLI has to offer, how to configure the memory heap, what the GC principles are and their most useful options, and to introduce some tools to debug and profile the JVM.

I assume you already know Java and know what the Garbage Collector does with the Young Generation and the Old Generation. Hopefully, this post will teach you some new tricks.

I won’t talk about frameworks here, just about the basics:

  • Java command line and options
  • Concise summary of the Garbage Collectors and their logs
  • Memory tuning and its limits
  • UI tools to debug and profile a JVM
  • CLI tools shipped with the JVM

java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b18)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b18, mixed mode)

Java flags overview

I’ll introduce some of the most useful flags we can enable with java to get more info and understand a bit more what’s going on under the hood.

Quick note :

  • -XX:+[option] : enables the option
  • -XX:-[option] : disables the option
  • -XX:[property]= : gives a value to the property

-XX:+PrintCommandLineFlags

First, it’s interesting to know what the default options of the JVM are.

$ java -XX:+PrintCommandLineFlags -version
-XX:InitialHeapSize=268055680 -XX:MaxHeapSize=4288890880 -XX:+PrintCommandLineFlags
-XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation
-XX:+UseParallelGC
  • -XX:InitialHeapSize=268055680 : 256m, default to 1/64 of the RAM (alias for -Xms)
  • -XX:MaxHeapSize=4288890880 : 4g, default to 1/4 of the RAM (alias for -Xmx)
  • -XX:+UseParallelGC : the Parallel GC (PSYoungGen (Parallel Scavenge) + ParOldGen). Check out the GC chapter if that sounds scary.
  • the others are for misc optimisations

-XX:+PrintFlagsFinal

We can list every existing flag and its value. There are hundreds of them.
Below is just the list of all the Print* flags we can use to display more info in the logs, plus some GC-related ones.

$ java -XX:+PrintFlagsFinal -version
[Global flags]
...
bool PrintAdaptiveSizePolicy             = false      {product}
bool PrintCMSInitiationStatistics        = false      {product}
intx PrintCMSStatistics                  = 0          {product}
bool PrintClassHistogram                 = false      {manageable}
bool PrintClassHistogramAfterFullGC      = false      {manageable}
bool PrintClassHistogramBeforeFullGC     = false      {manageable}
bool PrintCodeCache                      = false      {product}
bool PrintCodeCacheOnCompilation         = false      {product}
bool PrintCommandLineFlags               = false      {product}
bool PrintCompilation                    = false      {product}
bool PrintConcurrentLocks                = false      {manageable}
intx PrintFLSCensus                      = 0          {product}
intx PrintFLSStatistics                  = 0          {product}
bool PrintFlagsFinal                    := true       {product}
bool PrintFlagsInitial                   = false      {product}
bool PrintGC                             = false      {manageable}
bool PrintGCApplicationConcurrentTime    = false      {product}
bool PrintGCApplicationStoppedTime       = false      {product}
bool PrintGCCause                        = true       {product}
bool PrintGCDateStamps                   = false      {manageable}
bool PrintGCDetails                      = false      {manageable}
bool PrintGCID                           = false      {manageable}
bool PrintGCTaskTimeStamps               = false      {product}
bool PrintGCTimeStamps                   = false      {manageable}
bool PrintHeapAtGC                       = false      {product rw}
bool PrintHeapAtGCExtended               = false      {product rw}
bool PrintHeapAtSIGBREAK                 = true       {product}
bool PrintJNIGCStalls                    = false      {product}
bool PrintJNIResolving                   = false      {product}
bool PrintOldPLAB                        = false      {product}
bool PrintOopAddress                     = false      {product}
bool PrintPLAB                           = false      {product}
bool PrintParallelOldGCPhaseTimes        = false      {product}
bool PrintPromotionFailure               = false      {product}
bool PrintReferenceGC                    = false      {product}
bool PrintSafepointStatistics            = false      {product}
intx PrintSafepointStatisticsCount       = 300        {product}
intx PrintSafepointStatisticsTimeout     = -1         {product}
bool PrintSharedArchiveAndExit           = false      {product}
bool PrintSharedDictionary               = false      {product}
bool PrintSharedSpaces                   = false      {product}
bool PrintStringDeduplicationStatistics  = false      {product}
bool PrintStringTableStatistics          = false      {product}
bool PrintTLAB                           = false      {product}
bool PrintTenuringDistribution           = false      {product}
bool PrintTieredEvents                   = false      {product}
bool PrintVMOptions                      = false      {product}
bool PrintVMQWaitTime                    = false      {product}
bool PrintWarnings                       = true       {product}
...
bool UseParNewGC                         = false      {product}
bool UseParallelGC                      := true       {product}
bool UseParallelOldGC                    = true       {product}
...

The := means that the default value was overridden by something (you or the JVM Ergonomics).
You can see that the JVM Ergonomics decided that java should use the Parallel GC on my PC.

Moreover, you can know what is the value of any flag the JVM handles.
For instance, you can find out the Young Generation size (“NewSize”) with a | grep NewSize :

uintx MaxNewSize                       := 1430257664  {product}
uintx NewSize                          := 89128960    {product}

More details on how to read that on javaworld or codecentric.

Get more details in the logs

As a reminder :

-XX:+PrintGC / -verbose:gc

This is the first step to know what’s going on with your program and its GC.

[GC (Allocation Failure)  954K->896K(1536K), 0.0008951 secs]
[Full GC (Ergonomics)  896K->290K(1536K), 0.0026976 secs]
[GC (Allocation Failure)  778K->290K(1536K), 0.0006170 secs]

We can see the total heap going from 954K to 896K, with a max at 1536K for instance.

  • Allocation Failure : the JVM couldn’t find any more space in the Young Generation and had to clean it up. This is a normal behavior.
  • Ergonomics : the JVM decided to start a Full GC on its own.
  • Metadata GC Threshold : Metaspace size is exhausted. Raise the default MetaspaceSize and maybe the max MaxMetaspaceSize.
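You can reproduce the Allocation Failure entries yourself with a small allocation loop. Below is a minimal sketch (the class name and sizes are mine, not from the post): run it with -XX:+PrintGC and you should see minor collections as Eden fills up.

```java
import java.util.ArrayList;
import java.util.List;

public class AllocationPressure {
    // Allocates `count` blocks of 1 MB, keeping only the last few alive,
    // so most blocks become garbage and the Young Generation has to be
    // cleaned regularly (the "Allocation Failure" cause in the GC logs).
    public static int allocate(int count) {
        List<byte[]> keep = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            keep.add(new byte[1024 * 1024]); // 1 MB
            if (keep.size() > 4) {
                keep.remove(0); // drop the oldest block, making it collectable
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(allocate(200) + " blocks allocated");
    }
}
```

Run it with java -XX:+PrintGC AllocationPressure and watch the log lines appear between your own output.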

-XX:+PrintGCDetails

This is more interesting: you still see the heap size changes, but you also see the young generation PSYoungGen, the old generation ParOldGen, and the Metaspace changes (I was running with the Parallel GC; the output differs according to which GC is used).

[GC (Allocation Failure)
  [PSYoungGen: 465K->400K(1024K)]
  954K->896K(1536K), 0.0011948 secs]
  [Times: user=0.00 sys=0.00, real=0.00 secs]
[Full GC (Ergonomics)
  [PSYoungGen: 400K->0K(1024K)]
  [ParOldGen: 496K->290K(512K)]
  896K->290K(1536K),
  [Metaspace: 2520K->2520K(1056768K)], 0.0032720 secs]
  [Times: user=0.00 sys=0.00, real=0.00 secs]
[GC (Allocation Failure)
  [PSYoungGen: 488K->0K(1024K)]
  778K->290K(1536K), 0.0010046 secs]
  [Times: user=0.00 sys=0.00, real=0.00 secs]

-XX:+PrintReferenceGC

This option works together with the previous one.
It adds information about the different *Reference variables (Soft, Weak, Final, Phantom, JNI) the program might use.

PhantomReferences are quite tricky with regard to the GC, be careful. But if you’re using them, I’m pretty sure you already know that, right? plumbr has some nice tips about them.

[GC (Allocation Failure)
  [SoftReference, 0 refs, 0.0003665 secs]
  [WeakReference, 9 refs, 0.0001271 secs]
  [FinalReference, 7 refs, 0.0001104 secs]
  [PhantomReference, 0 refs, 0 refs, 0.0001707 secs]
  [JNI Weak Reference, 0.0002208 secs]
  [PSYoungGen: 465K->400K(1024K)]
  954K->896K(1536K), 0.0026939 secs]
  [Times: user=0.00 sys=0.00, real=0.00 secs]
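As a quick illustration of what those [WeakReference, N refs] lines count, here is a small sketch (my own example, not from the log above) showing how a WeakReference behaves around a GC:

```java
import java.lang.ref.WeakReference;

public class References {
    // While an object is strongly reachable, a WeakReference still resolves it.
    public static boolean reachableWhileStrong() {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        return weak.get() == strong;
    }

    public static void main(String[] args) {
        System.out.println(reachableWhileStrong()); // true

        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        strong = null;
        // After the strong reference is dropped and a GC runs, the referent
        // is usually cleared (System.gc() is only a hint, nothing guaranteed).
        System.gc();
        System.out.println(weak.get());
    }
}
```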

-XX:+PrintGCTimeStamps / -XX:+PrintGCDateStamps

It’s useful to know when things happen and how often.
The date is useful to be able to match easily with other logs.

2016-01-11T01:12:48.878+0100: 1.071: [GC (Allocation Failure)  954K->928K(1536K), 0.0020453 secs]
2016-01-11T01:12:48.878+0100: 1.072: [Full GC (Ergonomics)  928K->290K(1536K), 0.0031099 secs]
2016-01-11T01:12:49.883+0100: 2.075: [GC (Allocation Failure)  778K->290K(1536K), 0.0012529 secs]

-XX:+PrintGCApplicationStoppedTime

It’s useful to know how much time your application didn’t do anything, because the World was Stopped.
You really want to minimize those times.

Total time for which application threads were stopped: 0.0000492 seconds,
  Stopping threads took: 0.0000179 seconds
Total time for which application threads were stopped: 0.0033140 seconds,
  Stopping threads took: 0.0000130 seconds
Total time for which application threads were stopped: 0.0004002 seconds,
  Stopping threads took: 0.0000161 seconds

-XX:+PrintAdaptiveSizePolicy

This displays some metrics about survivals and promotions that the JVM Ergonomics is using to tune and optimize the GC behavior (by modifying space sizes).

[GC (Allocation Failure)
  AdaptiveSizePolicy::update_averages:  survived: 409616  promoted: 8192  overflow: false
  AdaptiveSizeStart: 1.087 collection: 1
  PSAdaptiveSizePolicy::compute_eden_space_size: costs minor_time: 0.000377 major_cost: 0.000000
    mutator_cost: 0.999623 throughput_goal: 0.990000 live_space: 268845056 free_space: 1048576
    old_eden_size: 524288 desired_eden_size: 524288
  AdaptiveSizeStop: collection: 1
 954K->896K(1536K), 0.0022971 secs]

-XX:-UseAdaptiveSizePolicy

The JVM Ergonomics tries to enhance the latency and the throughput of your application by tuning the GC behavior such as modifying the space sizes.
You can disable this behavior if you know you don’t need it.

And you can still have the details about the survivors and promotions if combined with the previous flag.

$ java -XX:+PrintAdaptiveSizePolicy -XX:-UseAdaptiveSizePolicy -XX:+PrintGC ...
[GC (Allocation Failure)
  AdaptiveSizePolicy::update_averages:  survived: 442384  promoted: 8192  overflow: false
  954K->928K(1536K), 0.0027480 secs]

Memory tuning

Heap size

Heap = Young Generation (Eden + Survivors) + Old Generation (Tenured)

This is the big part that you can impact for the better or for the worse.
If you think you need to change it, you need to be sure it’s necessary, know the existing GC cycles, know that you have reached the limit (or not).
Or you can just give a try and check the behavior, latency, and throughput of your application. ;-)

  • -Xms / -XX:InitialHeapSize : initial heap size
  • -Xmx / -XX:MaxHeapSize : maximum heap size

The MaxHeapSize influences the InitialHeapSize up until 256m.

if MaxHeapSize=2m   then InitialHeapSize=2m   (max)
if MaxHeapSize=256m then InitialHeapSize=256m (max)
if MaxHeapSize=512m then InitialHeapSize=256m (half)
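You can also check from inside the program which limits the JVM actually picked, through Runtime (a sketch; note that maxMemory() is roughly MaxHeapSize, it can be slightly lower because one survivor space is excluded):

```java
public class HeapInfo {
    // maxMemory()   ~ MaxHeapSize (-Xmx)
    // totalMemory() = currently committed heap (starts near InitialHeapSize)
    // freeMemory()  = unused part of the committed heap
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("max:   %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("total: %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("free:  %d MB%n", rt.freeMemory() / (1024 * 1024));
    }
}
```

Try it with java -Xmx512m HeapInfo to see the flags reflected in the values.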

Default size

As we already said, the default MaxHeapSize is 1/4 of the machine RAM, and the InitialHeapSize is 1/64.

For instance, on my machine, I have 16GB of RAM, that gives :

InitialHeapSize = 268435456  = 256m
MaxHeapSize     = 4290772992 = 4092m

Be careful with big numbers and PrintFlagsFinal: it won’t display values greater than 4094m properly, because it prints them as uint, whose limit is 4,294,967,295.

$ java -Xmx4g -XX:+PrintFlagsFinal -version |  grep "MaxHeapSize"
 uintx MaxHeapSize                              := 0                                   {product}

Minimum size

The minimum heap size you can set is 1m (your program won’t be able to do much with it, but it’s still possible!).

If you try to put less, you’ll end up with :

Too small initial heap

But actually, even if you ask for 1m, you’ll end up with 2m :

$ java -Xmx1m -XX:+PrintFlagsFinal -version |  grep HeapSize
    uintx InitialHeapSize                          := 2097152                             {product}
    uintx MaxHeapSize                              := 2097152                             {product}

You will always get a MaxHeapSize divisible by 2.

Not enough heap ?

And if your program needs more heap and can’t find any, it will die with a lovely OOM :

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

Young generation (in heap)

Young Generation = Eden + Survivors

This part of the heap is where all objects start their lifecycle. They are born here, then will likely evolve into survivors, and end up in the old generation if they stay alive long enough.

  • -XX:NewSize : young generation initial size
  • -XX:MaxNewSize : young generation maximum size
  • -Xmn : shortcut for both

The MaxHeapSize and InitialHeapSize influence the MaxNewSize and NewSize.

if MaxHeapSize=1g (InitialHeapSize=256m) then MaxNewSize=341m       and NewSize=85m
if MaxHeapSize=4g (InitialHeapSize=256m) then MaxNewSize=1365m (x4) and NewSize=85m
if MaxHeapSize=4g and InitialHeapSize=1g then MaxNewSize=1365m      and NewSize=341m (x4)

By default, the ratio is 3:1 both between MaxHeapSize and MaxNewSize, and between InitialHeapSize and NewSize.

Default size

We just saw that NewSize/MaxNewSize are linked to InitialHeapSize/MaxHeapSize.

The default of MaxHeapSize is 1/4 of the machine RAM, and the InitialHeapSize is 1/64.
Therefore, the default MaxNewSize is (1/4)/3 of the RAM, and NewSize is 1/3 of InitialHeapSize.

On my machine, I have 16GB of RAM, that gives :

InitialHeapSize = 256m
MaxHeapSize     = 4092m

MaxNewSize = 1430257664 = 1364m (= 4092m/3)
NewSize    = 89128960   = 85m   (= 256m/3)

Minimum size

You can’t have MaxNewSize < NewSize :

Java HotSpot(TM) 64-Bit Server VM warning: NewSize (1536k) is greater than the MaxNewSize (1024k). A new max generation size of 1536k will be used.

The 1536k will be equally split between the Eden space, the from survivor space, and the to survivor space (512k each).

Neither can you have MaxNewSize >= HeapSize (the young generation can’t be larger than the whole heap) :

$ java -Xmx2m -XX:MaxNewSize=2m -XX:+PrintFlagsFinal -version | grep NewSize
Java HotSpot(TM) 64-Bit Server VM warning:
MaxNewSize (2048k) is equal to or greater than the entire heap (2048k).
A new max generation size of 1536k will be used.
    uintx MaxNewSize                               := 1572864                             {product}
    uintx NewSize                                  := 1572864                             {product}

Not enough space ?

Even if you have a MaxNewSize of 1m and your program tries to allocate 1g, it will work as long as the heap is big enough: the allocation will simply go directly into the old generation space.
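You can observe this with a single oversized allocation. A sketch (my own example; 32 MB is typically much larger than a small Eden): run it with -XX:+PrintGCDetails and a tiny -XX:MaxNewSize and you should see the old generation usage jump while the young generation barely moves.

```java
public class BigAllocation {
    // A 32 MB array is typically far larger than a small Eden,
    // so HotSpot allocates it directly in the Old Generation.
    public static byte[] allocateBig() {
        return new byte[32 * 1024 * 1024];
    }

    public static void main(String[] args) {
        byte[] big = allocateBig();
        System.out.println(big.length / (1024 * 1024) + " MB allocated");
    }
}
```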

Thread Stack (off heap)

Each and every thread in the program will allocate this size for its stack.

This is where the parameter values of the functions currently executing are stored (and removed when the function exits). The deeper your calls go, the deeper you go into the stack. (FILO)

Recursive calls can go very deep by their intrinsic nature. This is where you have to be careful in your logic, and maybe increase the default ThreadStackSize.

  • -Xss / -XX:ThreadStackSize : thread stack size

Default size

If you look for yourself, you’ll find out it’s 0 :

$ java -XX:+PrintFlagsFinal -version | grep ThreadStackSize
intx ThreadStackSize                           = 0                                   {pd product}

0 means it falls back to the OS default.
Check out the Oracle website for unix or Windows; it’s between 320kB and 1MB.

Minimum size

The usage message says you must specify at least 108k, but a simple program can actually run with only 65k.

The stack size specified is too small, Specify at least 108k
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

StackOverflow

Be especially careful with recursion and its stop condition :

Exception in thread "main" java.lang.StackOverflowError

A simple program with a recursive function taking 2 int parameters can be called up to :

ThreadStackSize at 65k  : 888 times
ThreadStackSize at 130k : 1580 times
ThreadStackSize at 260k : 2944 times

But the more parameters you add to the function, the fewer times you’ll be able to call it.
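Those depths can be measured empirically by recursing until the StackOverflowError. A sketch (my own program, presumably similar to the one used for the numbers above; the exact depth depends on -Xss and on the frame size):

```java
public class StackDepth {
    private static int depth;

    // Each call adds one frame to the thread stack
    // (two int parameters, as in the example above).
    private static void recurse(int a, int b) {
        depth++;
        recurse(a, b);
    }

    // Returns how deep we got before the stack blew up.
    public static int measure() {
        depth = 0;
        try {
            recurse(1, 2);
        } catch (StackOverflowError e) {
            // expected: the stack is exhausted
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println("max depth: " + measure());
    }
}
```

Run it with java -Xss130k StackDepth, then double the stack size, and compare the depths.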

Metaspace (off heap)

This is where the class definitions are. It’s a special space because class definitions are not mutable, they are loaded once and for all.

You will probably never touch the default configuration.

  • (Java < 8) -XX:MaxPermSize (a fixed size, difficult to estimate)
  • (Java >= 8) -XX:MaxMetaspaceSize (unlimited by default)

Default size

As we said, it’s unlimited.
Well, if we look closely, it’s actually capped at 4GB for me :

$ java -XX:+PrintFlagsFinal -version | grep Metaspace
uintx MaxMetaspaceSize                          = 4294901760                          {product}
uintx MetaspaceSize                             = 21807104                            {pd product}

MaxHeapSize has no impact on it, since it’s off-heap memory.

Minimum size

You can’t set a size that is too small (< 5m), otherwise you’ll end up with one of these errors :

Error occurred during initialization of VM
OutOfMemoryError: Metaspace

Exception in thread "main" java.lang.OutOfMemoryError: Metaspace

But it’s a good idea to set a max size, to be sure the JVM will never consume an “unlimited” amount of memory (and break the other apps on the server) in case of a bug.

But you can mess with it

I have a program that creates classes dynamically on the fly and loads them up. (https://gist.github.com/chtefi/018493089f4c75f36662)

$ java -XX:MaxMetaspaceSize=10m
-Djava.home="C:\Program Files\Java\jdk1.8.0_60\jre"
-classpath "C:\wip\out\production\test"
com.company.TestMetaspaceLimit

With 10m of metaspace, it crashes after around 300 classes are loaded (not a lot in the Java world if you’re using some frameworks).

If you enable the GCDetails logs, you’ll see a lot of cool errors :

[Full GC (Metadata GC Threshold)
  [PSYoungGen: 64K->0K(1396224K)]
  [PSOldGen: 6548K->6548K(2793472K)] 6612K->6548K(4189696K),
  [Metaspace: 9954K->9954K(1058816K)], 0.0174232 secs]
  [Times: user=0.02 sys=0.00, real=0.02 secs]
[GC (Last ditch collection)
  [PSYoungGen: 0K->0K(1396224K)] 6548K->6548K(4189696K), 0.0006371 secs]
  [Times: user=0.00 sys=0.00, real=0.00 secs]
[Full GC (Last ditch collection)
  [PSYoungGen: 0K->0K(1396224K)]
  [PSOldGen: 6548K->6548K(2793472K)] 6548K->6548K(4189696K),
  [Metaspace: 9954K->9954K(1058816K)], 0.0183340 secs]
  [Times: user=0.01 sys=0.00, real=0.02 secs]

Garbage Collectors

Each GC deals differently with the Young Generation space (new objects) and the Old Generation space (objects referenced for a while), because the Young is a very fast-paced space, unlike the Old.

The Young Generation space should never be too big; 2GB seems like a good limit. Beyond that, the algorithms may not be as performant when processing it.

-XX:+UseSerialGC

It’s the basic GC : the Serial GC.

It uses a single core and Stops the World while processing.

  • Young Generation : Mark + Copy (using survivor spaces) / in the logs : DefNew
  • Old Generation : Mark + Sweep + Compact / in the logs : Tenured

Example of output :

[GC (Allocation Failure)
  [DefNew: 652352K->81514K(733888K), 0.2248788 secs]
  1630766K->1620630K(2364580K), 0.2255284 secs]
  [Times: user=0.19 sys=0.03, real=0.22 secs]
[GC (Allocation Failure)
  [DefNew: 733839K->81489K(733888K), 0.2495329 secs]
  [Tenured: 2180251K->1993562K(2180276K), 0.3855474 secs]
  2272954K->1993562K(2914164K),
  [Metaspace: 2765K->2765K(1056768K)], 0.6373276 secs]
  [Times: user=0.55 sys=0.09, real=0.64 secs]

-XX:+UseParallelGC -XX:+UseOldParallelGC

The Parallel GC.

It’s an evolution of the Serial one.
It’s doing the same, but faster because it’s using multiple cores to do the job.
And again, it’s Stopping the World when processing.

With Java 8, specifying -XX:+UseParallelGC automatically sets -XX:+UseOldParallelGC.

  • Young Generation : Parallel Mark + Copy (using survivor spaces) / in the logs : PSYoungGen
  • Old Generation : Parallel Mark + Sweep + Compact / in the logs : ParOldGen

Example of output :

[GC (Allocation Failure)
  [PSYoungGen: 76221K->10729K(141824K)]
  127345K->126994K(316928K), 0.0173292 secs]
  [Times: user=0.05 sys=0.02, real=0.02 secs]
[Full GC (Ergonomics)
  [PSYoungGen: 10729K->0K(141824K)]
  [ParOldGen: 116265K->126876K(287744K)]
  126994K->126876K(429568K),
  [Metaspace: 2742K->2742K(1056768K)], 0.0224399 secs]
  [Times: user=0.03 sys=0.00, real=0.02 secs]
  • GC (Allocation Failure) : a minor GC (Young generation) was done because space was not available
  • Full GC (Ergonomics) : the JVM decided to do a Full GC (Young + Old generations) because of some thresholds

But you can force-disable it with -XX:-UseOldParallelGC : you’ll end up using the PSOldGen old generation collector. It’s not parallel anymore but serial (like the SerialGC). You should probably not use it.

You can control how many threads the parallel phases use with -XX:ParallelGCThreads=N.
By default it’s the number of cores the computer has. (must be at least 1)
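That default comes from what the JVM sees as available cores, which you can query yourself. A sketch (note, as an aside: on machines with many cores the JVM actually scales the GC thread count below the core count, so treat this as an approximation):

```java
public class GcThreads {
    // The JVM derives ParallelGCThreads from this value
    // (equal to it for small core counts, scaled down above that).
    public static int cores() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("available processors: " + cores());
    }
}
```

Compare its output with jinfo -flag ParallelGCThreads <pid> on a running process.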

-XX:+UseParNewGC -XX:+UseConcMarkSweepGC

The Concurrent Mark and Sweep GC.

It’s an evolution of the Parallel GC. This time, it’s not a Stop-The-World algorithm everywhere.
It can collect the old generation concurrently while the application is still running, meaning you should get better latency.

ParNewGC, while collecting the young generation, sends some stats to the ConcMarkSweepGC, which estimates whether it should run a GC (according to the trend of the promotion rates in the young generation). This is why the CMS works with this collector and not with the classic parallel UseParallelGC.

Moreover, while being mostly concurrent, it has just a few phases where it still must Stop The World, but they are very short periods, contrary to the previous algorithms.

With Java 8, specifying -XX:+UseConcMarkSweepGC automatically sets -XX:+UseParNewGC.

  • Young Generation : Mark + Copy (using survivor spaces) / in the logs : ParNew
  • Old Generation : Mark + Sweep : do NOT Stop the World (mostly) / in the logs : CMS Initial Mark, CMS Final Remark

Example of output (times were removed for the sake of clarity) :

[GC (CMS Initial Mark) [1 CMS-initial-mark: 1446700K(1716408K)] 1456064K(1795064K), 0.0006139 secs]
[CMS-concurrent-mark-start]
[CMS-concurrent-mark: 0.014/0.014 secs]
[CMS-concurrent-preclean-start]
[CMS-concurrent-preclean: 0.003/0.003 secs]
[CMS-concurrent-abortable-preclean-start]
[CMS-concurrent-abortable-preclean: 0.021/0.381 secs]
[GC (CMS Final Remark)
  [YG occupancy: 14224 K (78656 K)]
  [Rescan (parallel) , 0.0013603 secs]
  [1 CMS-remark: 1585968K(1716408K)] 1600193K(1795064K), 0.0032058 secs]
[CMS-concurrent-sweep-start]
[CMS-concurrent-sweep: 0.003/0.003 secs]
[CMS-concurrent-reset-start]
[CMS-concurrent-reset: 0.004/0.004 secs]

The Stop The World events happen during the CMS Initial Mark and CMS Final Remark.

You may have noticed that the Old Generation is not compacted at the end, meaning holes of unused memory can remain because they’re too small for new allocations.

If Java can’t find enough memory because of that, it will trigger a Full GC using the Parallel GC (a GC that does compact, but Stops The World). Moreover, this can also happen when a CMS collection is in progress (concurrently) and suddenly a lot of survivors are promoted to the old generation and boom, no more space.
This is why the CMS must be triggered well before the space is filled.

That is the role of the flag -XX:CMSInitiatingOccupancyFraction; by default, it’s around 92% according to Oracle.

Moreover, you can control how many threads to use for the concurrent part with -XX:ConcGCThreads=N. (measure before changing it)

-XX:+UseG1GC

The latest Java HotSpot VM GC.

It handles the space differently compared to its predecessors (being closer to the ConcMarkSweepGC).

There are no longer just Young and Old regions. There is a bunch of regions of different sizes (some are automatically resized on the fly by the GC to enhance performance), each of them dealing with only one generation type: Eden, Survivor, or Old (and some with Humongous objects: they are so big they span several regions). It targets around 2000 regions, each between 1MB and 32MB.

It is aimed at quite big heaps (> 4GB) and low-latency environments: you specify the maximum pause time you desire for GCs (default: 200ms).

It is mostly concurrent (it does not affect the latency of the application too much) and parallel (for the Stop-The-World phases), but it is a bit more computing-intensive (it computes stats to enhance its behavior and predict what to clean, in order to reach the desired pause time).

It’s a bit more complicated than the others, you can refer to those two great resources to get more details : Getting Started with the G1 Garbage Collector, and Garbage First Garbage Collector Tuning.

It’s a bit like the CMS GC :

  • you have a STW Mark
  • then a concurrent scan from the marked references
  • then a STW Remark (to take into account the updates since the Mark)
  • then the cleaning and copy of regions

Example of output :

[GC pause (G1 Evacuation Pause) (young) 1478M->1475M(3764M), 0.0540170 secs]
[GC pause (G1 Evacuation Pause) (young) 1767M->1766M(3830M), 0.0581689 secs]
[GC pause (G1 Evacuation Pause) (young) (initial-mark) 2105M->2106M(3830M), 0.0674928 secs]
[GC concurrent-root-region-scan-start]
[GC concurrent-root-region-scan-end, 0.0004460 secs]
[GC concurrent-mark-start]
[GC concurrent-mark-end, 0.0153593 secs]
[GC remark, 0.0065189 secs]
[GC cleanup 2126M->2114M(3830M), 0.0021820 secs]
[GC concurrent-cleanup-start]
[GC concurrent-cleanup-end, 0.0001478 secs]
[GC pause (G1 Evacuation Pause) (young) 2483M->2484M(3830M), 0.0773962 secs]
[GC pause (G1 Evacuation Pause) (mixed) 2620M->2586M(3830M), 0.0467784 secs]
[GC pause (G1 Evacuation Pause) (young) 3029M->3023M(3830M), 0.0782551 secs]
[GC pause (G1 Evacuation Pause) (young) (initial-mark) 3248M->3237M(3830M), 0.0752451 secs]
[GC concurrent-root-region-scan-start]
[GC concurrent-root-region-scan-end, 0.0003445 secs]
[GC concurrent-mark-start]
[GC concurrent-mark-end, 0.0189316 secs]
[GC remark, 0.0083292 secs]
[GC cleanup 3278M->2968M(3830M), 0.0026447 secs]
[GC concurrent-cleanup-start]
[GC concurrent-cleanup-end, 0.0004819 secs]
[GC pause (G1 Evacuation Pause) (young) 3082M->3078M(3830M), 0.0309070 secs]
[GC pause (G1 Evacuation Pause) (mixed) 3245M->3078M(3830M), 0.0408398 secs]
  • G1 Evacuation Pause : copy alive objects (Eden or Survivors) to another region(s) compacting them and promoting them if old enough (to an Old Generation region). It’s a Stop The World process
  • concurrent-* : marks and scan alive objects and do some cleaning while the application is still running
  • (mixed) : both young and old generations copied (“evacuated”) elsewhere at the same time

Profiling

ASCII profiling

If you’re a hardcore player, you can use the Java agent hprof to retrieve a human-readable heap dump with the Java profile of your application (when it ends).
It’s bundled by default in the HotSpot JVM.

$ java -agentlib:hprof=heap=sites com.company.MyApp

That will generate a file java.hprof.txt where you can easily find out the most expensive allocation sites :

SITES BEGIN (ordered by live bytes) Tue Jan 12 22:38:06 2016
          percent          live          alloc'ed  stack class
 rank   self  accum     bytes objs     bytes  objs trace name
    1 14.87% 14.87%   2103552 30499   2103552 30499 302579 char[]
    2 10.35% 25.21%   1463952 30499   1463952 30499 302580 com.sun.tools.javac.file.ZipFileIndex$Entry
    3  9.27% 34.48%   1311424   11   1311424    11 301304 com.sun.tools.javac.util.SharedNameTable$NameImpl[]

So, it seems I’ve allocated a ton of char[] (2MB across 30,499 objects).
To know the call stack, find the trace value in the file; you’ll end up with something like this :

TRACE 302579:
        java.lang.StringCoding$StringDecoder.decode(:Unknown line)
        java.lang.StringCoding.decode(:Unknown line)
        java.lang.String.&lt;init&gt;(:Unknown line)
        com.sun.tools.javac.file.ZipFileIndex$ZipDirectory.readEntry(ZipFileIndex.java:665)

Et voilà, this is it. (it was not my fault!)

Another option is to collect function call counts and CPU usage using cpu=times :

$ java -agentlib:hprof=cpu=times com.company.MyApp
...
TRACE 312480:
        com.sun.tools.javac.file.ZipFileIndex$ZipDirectory.readEntry(ZipFileIndex.java:Unknown line)
        com.sun.tools.javac.file.ZipFileIndex$ZipDirectory.buildIndex(ZipFileIndex.java:Unknown line)
        com.sun.tools.javac.file.ZipFileIndex$ZipDirectory.access$000(ZipFileIndex.java:Unknown line)
        com.sun.tools.javac.file.ZipFileIndex.checkIndex(ZipFileIndex.java:Unknown line)
...
CPU TIME (ms) BEGIN (total = 17046) Tue Jan 12 22:52:08 2016
rank   self  accum   count trace method
   1  3.64%  3.64%   30711 312480 com.sun.tools.javac.file.ZipFileIndex$ZipDirectory.readEntry
   2  2.53%  6.17%    7392 312914 java.io.WinNTFileSystem.normalize
   3  2.38%  8.54%    3984 301205 java.lang.String$CaseInsensitiveComparator.compare
   4  2.09% 10.64%  324312 301204 java.lang.String.charAt

In a few seconds :

  • 30711 calls to ZipDirectory.readEntry
  • 324312 calls to String.charAt

That’s quite straightforward, can be processed by third-party tools, or gathered for comparisons.

If you want live stats, this is not your tool; an IDE with a true profiler will be a better option.
But it can still come in handy!

There are a few more options, check out hprof.

JMX

A nice and easy way to get into the internals of your live program (local or remote) is to enable JMX when starting the application. JMX can be secured, but if you don’t want to be bothered with that, start the JVM with these settings :

  • -Dcom.sun.management.jmxremote.port=5432
  • -Dcom.sun.management.jmxremote.authenticate=false
  • -Dcom.sun.management.jmxremote.ssl=false

It will expose its internals through the JMX protocol on port 5432.

You need a program to read from it. Fortunately, there is one installed by default : jvisualvm.
Just start your Java program somewhere, then start jvisualvm.

If it’s on the same computer, jvisualvm will find it automatically.
Install the VisualGC plugin if you don’t have it: monitoring the GC in detail is a win.
You can even do live CPU and memory profiling.


Alternatives exist (of course), such as JProfiler and YourKit.

You can also use jconsole (shipped with Java). You don’t even need to start your process with JMX; jconsole can just attach itself.

Java CLI tools

The HotSpot JVM has some useful console tools shipped within too.

If you encounter any odd errors, ensure you have access to the folder /tmp/hsperfdata_ as the user that started the Java process.

jps

Lists the Java processes running on the machine. (remember doing ps aux | grep java ?)

$ jps
11080 Launcher
11144 Jps
12140 TestMetaspaceLimit
$ jps -lvV
11080 org.jetbrains.jps.cmdline.Launcher -Xmx700m -D...
12140 com.company.TestMetaspaceLimit -Djava.home=C:\Program Files\Java\jdk1.8.0_60\jre -D...
6028 sun.tools.jps.Jps -Dapplication.home=C:\Program Files\Java\jdk1.8.0_60 -Xms8m

Official documentation.

jstat

Monitor some aspects of a running JVM (no JMX needed).

List of aspects :

-class
-compiler
-gc
-gccapacity
-gccause
-gcmetacapacity
-gcnew
-gcnewcapacity
-gcold
-gcoldcapacity
-gcutil
-printcompilation

Monitor the GC, show the timestamp in front, pull every 1s :

$ jstat -gc -t 7844 1s
Timestamp        S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT
           14,0 40960,0 53248,0 40947,8  0,0   506880,0 343724,3  175104,0   62801,2   13440,0 12979,7 1664,0 1552,3      8    0,144   0      0,000    0,144
           15,1 40960,0 53248,0 40947,8  0,0   506880,0 454765,2  175104,0   62801,2   13440,0 12979,7 1664,0 1552,3      8    0,144   0      0,000    0,144
           16,1 77824,0 53248,0  0,0   53240,9 506880,0 40423,7   175104,0   104781,8  13952,0 13581,6 1664,0 1596,0      9    0,203   0      0,000    0,203

Official documentation.

jinfo

Get the value of any flag of a running Java process.

$ jinfo -flag MaxHeapSize 5044
-XX:MaxHeapSize=4290772992

Official documentation.

jstack

Gets the current stack traces of all the threads of a running Java process.
Useful when you wonder what a process is doing.

$ jstack 1204
...
"main" #1 prio=5 os_prio=0 tid=0x0000000002c9e000 nid=0x2d88 runnable [0x000000000347e000]
   java.lang.Thread.State: RUNNABLE
        at java.io.RandomAccessFile.length(Native Method)
        at java.io.RandomAccessFile.skipBytes(Unknown Source)
        at com.sun.tools.javac.file.ZipFileIndex.readBytes(ZipFileIndex.java:381)
        ...
        at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)
        at com.company.TestMetaspaceLimit.createClass(TestMetaspaceLimit.java:42)
        at com.company.TestMetaspaceLimit.main(TestMetaspaceLimit.java:28)
...

Official documentation.
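From inside the JVM, the closest equivalent to jstack is Thread.getAllStackTraces(). A minimal sketch (the class name is mine):

```java
import java.util.Map;

// Dumps the name, state, and stack trace of every live thread,
// similar to what jstack prints from the outside.
public class StackDump {
    public static void main(String[] args) {
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.println("\"" + t.getName() + "\" " + t.getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```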

jmap

Displays the configuration and usage of the whole heap.
It's useful, but I think a UI such as jvisualvm or jconsole is more convenient for monitoring this.

$ jmap -heap 11080
Attaching to process ID 11080, please wait...          
Debugger attached successfully.                        
Server compiler detected.                              
JVM version is 25.60-b23                               
                                                       
using thread-local object allocation.                  
Parallel GC with 4 thread(s)                           
                                                       
Heap Configuration:                                    
   MinHeapFreeRatio         = 0                        
   MaxHeapFreeRatio         = 100                      
   MaxHeapSize              = 734003200 (700.0MB)      
   NewSize                  = 89128960 (85.0MB)        
   MaxNewSize               = 244318208 (233.0MB)      
   OldSize                  = 179306496 (171.0MB)      
   NewRatio                 = 2                        
   SurvivorRatio            = 8                        
   MetaspaceSize            = 21807104 (20.796875MB)   
   CompressedClassSpaceSize = 1073741824 (1024.0MB)    
   MaxMetaspaceSize         = 17592186044415 MB        
   G1HeapRegionSize         = 0 (0.0MB)                
                                                       
Heap Usage:                                            
PS Young Generation                                    
Eden Space:                                            
   capacity = 67108864 (64.0MB)                        
   used     = 8111152 (7.7353973388671875MB)           
   free     = 58997712 (56.26460266113281MB)           
   12.08655834197998% used                             
From Space:                                            
   capacity = 11010048 (10.5MB)                        
   used     = 6575688 (6.271064758300781MB)            
   free     = 4434360 (4.228935241699219MB)            
   59.72442626953125% used                             
To Space:                                              
   capacity = 11010048 (10.5MB)                        
   used     = 0 (0.0MB)                                
   free     = 11010048 (10.5MB)                        
   0.0% used                                           
PS Old Generation                                      
   capacity = 179306496 (171.0MB)                      
   used     = 81936 (0.0781402587890625MB)             
   free     = 179224560 (170.92185974121094MB)         
   0.04569605777138158% used                           
                                                       
6521 interned Strings occupying 524504 bytes.          

Official documentation.
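The coarse heap figures at the top of jmap -heap are also available in-process through Runtime; a quick sketch (the class name is mine):

```java
// Prints the coarse heap figures also shown by jmap -heap:
// maxMemory() corresponds to MaxHeapSize, totalMemory() to the
// currently committed heap, freeMemory() to the unused part of it.
public class HeapFigures {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max:   " + rt.maxMemory());
        System.out.println("total: " + rt.totalMemory());
        System.out.println("free:  " + rt.freeMemory());
    }
}
```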

Resources

I hope this overview was clear and broad enough to strengthen your grasp of the Java basics, and that you learned some new tricks. I did.