Posts Tagged ‘java’
Cross-posted from Zolo Labs
In the previous part of this series, we learnt how we could store log data so it’s easy to get insights from it later. For instance, we use the GELF log format on Zolodeck, our side project. In this part, we’ll look at how to actually get insights from our logs, using a bunch of open source tools.
Here are the three simple steps to getting insights from our logs:
1) Write Logs
2) Transport Logs
3) Process Logs
In part 3, we saw how logging in a standard JSON format is beneficial. Some of my readers asked me why not use Clojure or Ruby data structures instead of JSON. Here’s why it’s better to use JSON format:
- JSON is accessible from all languages
- There are already a bunch of tools available to transport and process logs that accept JSON
Always write logs to local disk. It is tempting to have a log4j appender send logs directly to a remote server over UDP or HTTP, but you can’t guarantee that those servers will be consistently reachable. So it’s better to write to local disk first, and then transport your logs to wherever they’re going to be processed. There are many open-source tools available for transporting logs, and your choice will depend on what tool you end up using to process them. Some tools you can use for transporting logs are:
- Scribe - Facebook open-sourced this tool; it does more than just transport logs.
- Logstash - this tool also does a lot more than transport logs.
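The write-to-local-disk-first approach can be sketched like this (a minimal illustration; the class name and record format are made up, and a shipper such as Scribe or Logstash would tail the file and forward it):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LocalFirstLogger {
    private final Path logFile;

    public LocalFirstLogger(Path logFile) {
        this.logFile = logFile;
    }

    // Append each record to local disk; a separate shipper tails this
    // file and forwards it, so a network outage never loses log lines.
    public void log(String jsonRecord) throws IOException {
        Files.write(logFile,
                (jsonRecord + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("app", ".log");
        LocalFirstLogger logger = new LocalFirstLogger(tmp);
        logger.log("{\"level\":\"INFO\",\"message\":\"started\"}");
        logger.log("{\"level\":\"WARN\",\"message\":\"slow query\"}");
        System.out.println(Files.readAllLines(tmp).size()); // 2
    }
}
```

The application never blocks on (or fails because of) the network; shipping is someone else’s job.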
We need to be able to collect, index, and search through our log data for anything we care to find. Two open-source tools for processing logs are Logstash and Graylog2. Both Logstash (via Kibana) and Graylog2 provide web interfaces that make life easy when analyzing and searching the underlying logs.
As you can see, there are many options for managing and analyzing logs. Here’s what we currently do on our project: it’s simple for now, and we’re hoping to keep it that way.
Logs are useful. Logs that provide us with insights are more useful, and if they do so easily, more useful still. When we start a new project, we need to spend some time thinking about logging up front, since it’s a crucial part of managing growth. Thanks to a variety of open-source tools and libraries, it isn’t expensive to try out different logging strategies. A properly thought-out logging architecture will save you a lot of time later on. I hope this series has shed some light on logging and why it’s important in this age of distributed computing.
Please do share your experiences: how are you handling logging on your projects? What went well? What didn’t? I’d love to see what folks are doing out there and document it here, to make this knowledge available for others. Onward!
Cross-posted from Zolo Labs
In parts 1 and 2, we looked at the history of logging and at SLF4J (a library I’m using with Zolodeck). In this part, we’re going to learn about the different formats we can use for our logs. We need to choose the right log format so we can get insights from our logs when we want them. If we cannot easily process the logs to get at the insights we need, then it doesn’t matter what logging framework we use or how many gigabytes of logs we collect every day.
Purpose of logs:
- Debugging issues
- Historic analysis
- Business and Operational Analysis
If the logs are not adequate for these purposes, it means we’re doing something wrong. Unfortunately, I’ve seen this happen on many projects.
Consumer of logs:
Before we look into what format to log in, we need to know who is going to consume our logs for insights. Most logging implementations assume that humans will be consuming log statements, so they’re essentially formatted strings (think printf) that humans can easily read. In these situations, what we’re really doing is creating too much log data for humans to consume to get any particularly useful insights. People then try to solve this overload problem by being cautious about what they log, the idea being that less information will be easier for humans to handle. Unfortunately, we can’t know beforehand what information we may need to debug an issue, so some important piece of information always ends up being left out.
Remember how we can program machines to consume lots of data and provide better insights? So instead of creating log files for humans to consume, we need to create them for machines.
Format of logs:
Now that we know machines will be consuming our logs, we need to decide what format our logs should be in. Optimizing for machine readability makes sense, of course.
We could easily write a program that uses regular expressions to parse formatted-string log messages. Formatted strings, however, are still not a good fit, for the following reasons:
- Logging Java stack traces can break our parser, thanks to newline characters
- Developers can’t remove or add fields without breaking the parser
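To see the first problem concretely, here’s a sketch (the log layout, module name, and class names are made up) of a line-oriented regex parser choking on a stack-trace continuation line:

```java
import java.util.regex.Pattern;

public class LogLineParser {
    // Hypothetical formatted-string layout: "LEVEL [module] message"
    static final Pattern LINE = Pattern.compile("^(\\w+) \\[(\\w+)\\] (.*)$");

    public static boolean parses(String line) {
        return LINE.matcher(line).matches();
    }

    public static void main(String[] args) {
        // A normal one-line message parses fine...
        System.out.println(parses("ERROR [billing] card declined"));
        // ...but a stack-trace continuation line doesn't match the
        // layout, so a line-per-record parser falls over on it.
        System.out.println(parses("\tat com.example.Billing.charge(Billing.java:42)"));
    }
}
```

The second print is `false`: every multi-line stack trace produces lines the parser cannot place, and any change to the layout breaks the regex too.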
What’s a better way, then?
JSON objects aren’t particularly human-readable, but machines love them. We can use any JSON library to parse our logs. Developers can add and remove fields, and our parser will still work fine. We can also log Java stack traces without breaking our parser, by simply treating each one as a field of data.
JSON log object fields:
Now that it makes sense to use JSON objects as logs, the question is what basic fields ought to be included. Obviously, this will depend on the application and business requirements. But at a minimum, we’d need the following fields:
- Log Level
- Module / Facility
- Line Number
- Trace ID
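As a sketch of a log entry carrying these fields (the JSON is hand-rolled here and the field names are illustrative; a real project would use a JSON library):

```java
public class JsonLogEntry {
    // Build one JSON log record with the minimum fields discussed above:
    // log level, module/facility, line number, and trace id.
    public static String entry(String level, String module, int line,
                               String traceId, String message) {
        return String.format(
            "{\"level\":\"%s\",\"module\":\"%s\",\"line\":%d,"
            + "\"trace-id\":\"%s\",\"message\":\"%s\"}",
            level, module, line, traceId, message);
    }

    public static void main(String[] args) {
        System.out.println(
            entry("INFO", "billing", 42, "abc-123", "card charged"));
    }
}
```

A parser reading such records keys off field names, so adding or dropping a field no longer breaks anything downstream.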
Standard JSON log format:
Instead of coming up with a custom JSON log format, we ought to just use a standard one. One option is GELF (Graylog Extended Log Format), which is understood by many log-analysis tools. There are a lot of open-source log appenders that emit logs in GELF. On my side project Zolodeck, we use logback-gelf.
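As a rough sketch (field names from the GELF spec; the underscore-prefixed entries are the custom fields an application adds, and the values here are made up), a GELF message looks something like this:

```json
{
  "version": "1.1",
  "host": "app-server-1",
  "short_message": "card declined",
  "full_message": "card declined\n<stack trace goes here as a single field>",
  "timestamp": 1355677600.241,
  "level": 3,
  "_module": "billing",
  "_trace_id": "abc-123"
}
```

Note how the stack trace rides along as one field of data, which is exactly the property we wanted from JSON logs.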
In this part of this blog series, we learnt why we need to think about machine readable logs, and why we ought to use JSON as the log format. In the next part, we will look at how to get insights from logs, using a bunch of open source tools.
Cross-posted to Zolo Labs
In part 1, we looked at the history of logging in Java. This time, we’ll learn more about SLF4J (Simple Logging Facade for Java).
In part 1, we saw that SLF4J is not a proxy to other logging frameworks; rather, it is an API that allows end users to inject their desired logging framework at deployment time. SLF4J comes with adapters for many commonly used logging frameworks.
For my side project, I’m using the SLF4J API. Like many other projects, mine depends on many libraries. Unfortunately, not all libraries use SLF4J; indeed, some of them use the log4j API directly. You’d be surprised how many newly written libraries use log4j (even though log4j is old and horrid). Even if their authors see the benefits of changing to SLF4J, it probably won’t happen soon. To consolidate logging, SLF4J comes with bridging modules for JCL, JUL, and log4j. These bridging modules redirect calls made to log4j, JCL, and JUL to SLF4J instead. The image below explains the idea.
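With Maven, for instance, the bridges are just extra dependencies (artifact names from the SLF4J distribution; the version shown is illustrative):

```xml
<!-- Redirect log4j, JCL, and JUL calls into SLF4J -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>log4j-over-slf4j</artifactId>
  <version>1.7.2</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>1.7.2</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jul-to-slf4j</artifactId>
  <version>1.7.2</version>
</dependency>
```

Note that the JUL bridge additionally requires installing its handler (SLF4JBridgeHandler) at startup, since JUL calls cannot be redirected by the classpath alone.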
Mapped Diagnostic Context
Another awesome feature of SLF4J is MDC (Mapped Diagnostic Context). Even though it sounds complicated, it is simple to use and yet very powerful. MDC is essentially a hash map, maintained by the logging framework, whose contents can be inserted into log messages. Applications update this hash map through SLF4J. Currently only log4j and logback offer MDC functionality, and SLF4J simply delegates to them. If you use some other logging framework, SLF4J will still maintain the hash map, but you’ll need to write custom code to retrieve the information from it.
What’s the use of MDC?
One of the main goals of logging is to audit and debug complex, real-world, distributed systems. These systems handle multiple clients simultaneously, so log messages end up interleaved. This makes it very important to be able to consolidate the log messages of a single client or a single API call. The simplest way is to tag all log messages with client info and a trace ID (we’ll discuss this more in the next part of this series). Without MDC, we’d need to put this information in every logging call. With MDC, all we have to do is set up the context (client info, trace ID, etc.) and all our log messages will automatically carry it. This transforms our log messages into an amazing resource for learning about the system and its users.
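Here is a minimal sketch of the idea (this is not the real SLF4J MDC API; a hypothetical stand-in class with a ThreadLocal map plays the role of the MDC):

```java
import java.util.HashMap;
import java.util.Map;

public class MdcSketch {
    // MDC is essentially a per-thread hash map maintained by the
    // logging framework; this ThreadLocal stands in for it.
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    // Every log call automatically picks up the context, so individual
    // call sites never have to pass client info or trace id around.
    static String log(String message) {
        return CONTEXT.get() + " " + message;
    }

    public static void main(String[] args) {
        put("trace-id", "abc-123");
        put("client", "acme");
        System.out.println(log("charging credit card"));
        System.out.println(log("payment accepted"));
    }
}
```

Set the context once per request, and every subsequent log line on that thread is tagged for free.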
Using MDC in Clojure:
Unfortunately, as of now, clojure.tools.logging does not support MDC. I think this is a big hole in clojure.tools.logging, given that Clojure is known for building complex systems. Luckily, the Clojure community is vibrant, and there’s an open-source project by Malcolm Sparks called clj-logging-config. The main purpose of this library is to set up logging configuration programmatically, but it has one function, with-logging-context, that lets us set up the MDC. Even though I’m not programmatically setting up logging config in my project right now, I’m using this library just for with-logging-context. I strongly believe this library (or at least the with-logging-context function) should be part of clojure.tools.logging.
So there! In this part we learnt more about SLF4J and MDC. In the next part we will learn more about structured logging.
(Via Zolo Labs)
OK, this title is a little misleading. This blog post is not just about logging in Clojure (or on the JVM), but about the whole amazing world of log management and analysis. I am working on a side project during nights and weekends, and since it’s a green-field project, I wanted to get logging right. After all, at the end of all my past projects, I always came to the conclusion that I could have done logging better. It isn’t just me: when I speak with other software developers, they also feel that logging could use more attention. This is my attempt to help myself, and in the process help others, get logging right.
Logging libraries in Clojure:
It is a very simple ecosystem: we have clojure.tools.logging, a set of logging macros that delegate to a specific logging implementation. OK, and what does “specific logging implementation” mean? To understand that, we need to understand the logging ecosystem in Java.
History of logging in Java:
Log4j was the first well-known Java logging library. In fact, it is still used in many projects, and it’s probably the most popular.
When Sun realized that logging is important, instead of incorporating log4j, they went ahead and created another logging framework: JUL (short for java.util.logging). Honestly, I do not see any benefit to using JUL over log4j. They probably went “hey, log4j wasn’t invented here, so let’s create something else to do the same thing”. That pretty much created the first split, with some Java libraries using log4j and some using JUL. Already, it was getting difficult to make different libraries work together.
Since libraries should not impose a particular logging implementation on their users, another project, called Commons Logging, came to be. It was advertised as an ultra-thin bridge between different logging implementations: a library using commons logging could switch the logging implementation (log4j or JUL) at runtime. It used class-loader magic and dynamic binding to load the specific logging implementation. Of course, this made issues complex to debug. In the end, it created more problems than it solved, and people were not particularly happy with commons logging.
So the creator of log4j, Ceki Gulcu, created a logging facade called SLF4J (short for “Simple Logging Facade for Java”). SLF4J, unlike commons logging, is an API that allows end users to plug in their desired logging system at deployment time. It has none of the class-loader magic or dynamic binding of commons logging; the SLF4J binding is hardwired at compile time to use a specific framework. Since SLF4J is just an API, it needs an adapter layer on top of each implementation, and it already ships with adapters for commonly used logging frameworks such as log4j, JUL, and commons logging.
[image from http://www.slf4j.org/manual.html]
Ceki Gulcu also felt it was time to improve on log4j, so he created logback. You can go through the reasons to switch from log4j (or JUL) to logback. For now, logback is considered the best logging implementation we have for the JVM. The best part of logback is that it natively implements the SLF4J API, so no adapter layer is needed to use logback with SLF4J.
Now that we have seen the logging ecosystem on the JVM, in the next part of this series we’ll take a closer look at SLF4J.
Importing Java Class
In the REPL, when you want to import a single Java class, you can do:

(import 'java.util.Date)
(def today (new Date)) ;; or (def today (Date.))

When you want to import more Java classes from the same package, you can do:

(import '[java.util Date HashMap])

In a namespace declaration, imports go in the ns form instead:

(ns com.techbehindtech.java
  (:import [java.util Date HashMap]))
Calling Java instance methods
user> (import 'java.util.Date)
java.util.Date
user> (let [today (Date.)]
        (.getTime today))
1286749020847
Calling Java static methods
user> (System/currentTimeMillis)
1286847946813
You want to write a function that will return UTC Java Calendar object set at a specific time.
user> (import [java.util Calendar TimeZone Date])
user> (defn utc-time [d]
        (let [cal (Calendar/getInstance)]
          (.setTimeZone cal (TimeZone/getTimeZone "UTC"))
          (.setTime cal d)
          cal))
#'user/utc-time
The let block is ugly. We could use doto to make this code better.
user> (import [java.util Calendar TimeZone Date])
user> (defn utc-time [d]
        (doto (Calendar/getInstance)
          (.setTimeZone (TimeZone/getTimeZone "UTC"))
          (.setTime d)))
#'user/utc-time
Sometimes in Java interop you want to chain method calls. Here are a few equivalent ways to do it:
user> (.length (.getProperty (System/getProperties) "user.country"))
2

user> (. (. (System/getProperties) getProperty "user.country") length)
2

user> (.. (System/getProperties) (getProperty "user.country") (length))
2
By default, Clojure uses reflection to resolve the types involved in Java interop calls. Reflection is slow, but we can add type hints so that reflection isn’t needed. For example:
user> (set! *warn-on-reflection* true)
true
user> (defn str-length [s] (.length s))
Reflection warning, NO_SOURCE_FILE:1 - reference to field length can't be resolved.
#'user/str-length
user> (defn str-length [#^String s] (.length s))
#'user/str-length
Implementing interfaces and extending classes
Let’s implement the Java Runnable interface:
user> (proxy [Runnable] []
        (run []
          (println "running ...")))
#<Object$Runnable$36fc6471 user.proxy$java.lang.Object$Runnable$36fc6471@6dd33544>
In Clojure 1.2, you can use the reify macro instead; in fact, it is better than using proxy.
user> (reify Runnable
        (run [this]
          (println "running ...")))
#<user$eval1664$reify__1665 user$eval1664$reify__1665@574f7121>