Logging Standards: Objective
1 Objective:
2 Centralized Logging.
2.1 What are we using for centralized logging?
3 Use a standard interface across different projects.
3.1 Java Projects:
3.1.1 How do I add slf4j to my project?
3.1.2 How do I use the slf4j interface in Java?
3.2 How do I log in Apache Camel?
3.3 How do I log in BPM processes?
3.4 How do I log in SNOW?
4 Categorization of logs into levels
5 Standard, easy-to-read format.
5.1 The multiple log lines per logging operation problem.
5.2 The 1 to 1 logging format.
5.3 The JSON log fields.
6 Logging patterns and anti-patterns.
6.1 Log Flooding:
6.1.1 How to avoid log flooding.
6.1.1.1 Anti-pattern:
6.1.1.2 Pattern:
6.2 Senseless Logging.
6.2.1 How to avoid senseless logging.
6.2.1.1 Anti-pattern Example
6.2.1.2 Pattern:
6.3 Log and Throw
6.3.1 Anti-pattern:
6.3.2 Pattern:
6.4 Withhold the exception
6.4.1 Anti-pattern:
6.4.2 Pattern:
6.5 Use of System.out.println, System.err.println, e.printStackTrace
6.5.1 Anti-pattern
6.5.2 Pattern
Objective:
Define a set of standards and patterns that meet the following criteria:
1. Centralized logging.
2. Use a standard interface across different projects.
3. Categorization of logs.
4. Standard, easy-to-read format.
5. Logging patterns and anti-patterns.
Centralized Logging.
To enable management of logs across several projects/applications, it is imperative that we use a centralized logging solution. Centralized logging comes
with several benefits, which are listed below:
Java Projects:
For Java projects, all logging must happen via the Simple Logging Facade for Java (slf4j) interface. This interface abstracts the underlying logging implementation so
that developers can simply log using a standard interface. It also allows us to choose or change the logging implementation without developers having to
retrain.
Interface first, then implementation details.
Note that the instructions below are not complete: they only cover the interfaces, and slf4j cannot run by itself. For the sake of sanity we briefly outline
the implementation we are using at the end.
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
package com.ventia.wms.opti.api;
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
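Building on the imports above, a minimal usage sketch of the slf4j interface follows. The class and method names here are illustrative only, not taken from a real project:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FooApiClient {

    // One logger per class, named after the class so log entries
    // can be traced back to their source.
    private static final Logger LOG = LoggerFactory.getLogger(FooApiClient.class);

    public String fetchFoo(String id) {
        // Parameterized messages ({} placeholders) avoid the cost of
        // string concatenation when the level is disabled.
        LOG.debug("fetching foo for id {}", id);
        try {
            String result = callRemoteService(id);
            LOG.info("fetched foo for id {}", id);
            return result;
        } catch (RuntimeException e) {
            // Pass the exception as the last argument so the full
            // stack trace is attached to the log entry.
            LOG.error("failed to fetch foo for id {}", id, e);
            throw e;
        }
    }

    private String callRemoteService(String id) {
        return "foo-" + id; // placeholder for the real HTTP call
    }
}
```

Note the logger is declared once as a static final field; creating a new logger per call defeats the naming and filtering benefits.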
Maven Dependencies
Since Apache Camel is essentially a Java project, you need to add the same dependencies to the Maven POM as outlined in the Java section.
<route id="foo_service">
    <from id="foo_service_from" uri="direct:foo_service"/>
    <log logName="com.ventia.wms.fuse.foo_service" loggingLevel="INFO" message="foo_service request header: ${headers}"/>
    <log logName="com.ventia.wms.fuse.foo_service" loggingLevel="INFO" message="foo_service request body: ${body}"/>
    <setHeader headerName="CamelHttpMethod" id="foo_service_set_http_method">
        <constant>GET</constant>
    </setHeader>
    <setHeader headerName="CamelHttpPath" id="foo_service_set_http_path">
        <constant/>
    </setHeader>
    <inOut id="foo_service_call" uri="http4://fooservice.com"/>
    <unmarshal id="foo_service_unmarshal">
        <json library="Jackson"/>
    </unmarshal>
    <log logName="com.ventia.wms.fuse.foo_service" loggingLevel="INFO" message="foo_service response header: ${headers}"/>
    <log logName="com.ventia.wms.fuse.foo_service" loggingLevel="INFO" message="foo_service response body: ${body}"/>
</route>
The Pulse class exposes the following logging methods:
logger
logDebug
logInfo
logError
logger Method
The logger method in the Pulse class internally calls logInfo with the provided string.
Include the line below in your Script task to log in a standard fashion:
com.ventia.Pulse.logger(data_task_id, 'I like logging at INFO, but really I should call LogInfo');
com.ventia.Pulse.logDebug(data_task_id, 'I like logging at DEBUG');
com.ventia.Pulse.logInfo(data_task_id, 'I like logging at INFO');
com.ventia.Pulse.logError(data_task_id, 'I like logging at ERROR');
Instead, SNOW provides the script include GSLog() out of the box.
GSLog provides the following benefits:
It can log at the following levels: Debug, Info, Notice, Warning, Error & Critical.
It will tag the log entry with a caller label so that the source of the entry can easily be identified.
The level of logging can be set simply by modifying a system property.
// definition
var gl = new GSLog('ventia.log.level', '<script include / business rule name>');
// usage
gl.debug('<Say debug/testing stuff here>');
// usage when the logger is stored on a script include as this.logger
this.logger.debug('<Say debug/testing stuff here>');
Level | Environments | Usage
INFO | Local Development, Development Servers, Production Servers | Use to log inputs and outputs only.
ERROR | Local Development, Development Servers, Production Servers | Use to log exceptions when they are handled.
WARN | Local Development, Development Servers, Production Servers | Use to log when normal logic flow is not working. Use to log exceptions that you throw up the stack.
DEBUG | Local Development, Development Servers | Use to log normal logic flow and any supporting variables.
TRACE | Local Development, Development Servers | Use to log everything but the kitchen sink. Beware: this can cause analysis paralysis.
For example:
Log A Item
log.info("hello world");
Log Output
However, when logging exceptions this assumption falls apart, as stack traces are logged as several log lines; see the example below:
Here you can see the logging format fall apart, as one log entry no longer means one log line. This makes the standard logging format unsuitable for
centralized logging: exceptions get spread over several lines and force the user to trawl through them to assemble the complete log entry. This format
is a pain to deal with.
{"@timestamp":"2019-09-25T10:33:28.964+10:00","@version":"1","message":"hello world","thread_name":"main","level":"INFO","level_value":20000,"appName":"foo","env":"dev","mvnVersion":"1.16"}
Here you can see the structure of the log very clearly. Even when you log an exception that is multiple lines long, you get one log line per log operation, as
shown below:
Error Log In One Line
Notice that the stack trace is now logged in a stack_trace element of the JSON structure. The stack trace still has its old format with line breaks and tabs;
however, it is now captured as a whole and not as individual lines.
Field | Description
@version | The JSON schema version for logstash. Auto-generated by our implementation.
message | The value that was logged by the logging operation. Make it descriptive, but try to stay away from overly verbose messages.
logger_name | The name of the class that logged this message. Auto-generated by our implementation.
thread_name | The name of the thread that logged this message. Auto-generated by our implementation.
level | The logging level for this logging operation. Valid values are INFO, WARN, DEBUG, ERROR and TRACE. These can be used in filters.
level_value | The numeric value of the logging level. Auto-generated by our implementation.
HOSTNAME | The host name of the machine the application was running on. Auto-generated by our implementation.
appName | The application name injected from the Maven information in the pom file. Auto-generated by our implementation.
env | The environment the application is running in. Valid values are dev, test, uat and prod. Auto-generated by our implementation.
stack_trace | This will contain the multi-line stack trace when there is an error. Auto-generated by our implementation when an exception occurs.
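The field names above match what the logstash-logback-encoder library produces. Assuming that is the implementation in use (this document does not confirm it), a minimal logback.xml sketch would look like the following; the appName and env values are placeholders:

```xml
<configuration>
    <!-- Assumed implementation: logback with the logstash-logback-encoder,
         which emits one JSON object per logging operation. -->
    <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- Static fields such as appName/env can be injected here. -->
            <customFields>{"appName":"foo","env":"dev"}</customFields>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="JSON"/>
    </root>
</configuration>
```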
Log Flooding:
This is where logs are flooded with messages from a loop/iteration.
1. Log all iterations at DEBUG level. Logging normal execution flow is not needed at INFO level.
2. Only log the exceptions with supporting information.
Anti-pattern:
Pattern:
How Not To Flood The Log
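A sketch of the two approaches, with class and method names invented for illustration:

```java
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BatchProcessor {

    private static final Logger LOG = LoggerFactory.getLogger(BatchProcessor.class);

    // Anti-pattern: one INFO line per record floods the log.
    public void processNoisy(List<String> records) {
        for (String record : records) {
            LOG.info("processing record {}", record);
            process(record);
        }
    }

    // Pattern: per-record detail at DEBUG, exceptions with supporting
    // information at ERROR, and a single INFO summary line.
    public void processQuietly(List<String> records) {
        int failures = 0;
        for (String record : records) {
            LOG.debug("processing record {}", record);
            try {
                process(record);
            } catch (RuntimeException e) {
                failures++;
                LOG.error("failed to process record {}", record, e);
            }
        }
        LOG.info("processed {} records with {} failures", records.size(), failures);
    }

    private void process(String record) {
        // placeholder for the real per-record work
    }
}
```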
Senseless Logging.
Logging normal execution of logic is senseless.
if(value==1){
    LOG.info("value 1 found");
}
else if(value==2){
    LOG.info("value 2 found");
}
Pattern:
if(value==1){
...
}
else if(value==2){
...
}
else{
LOG.warn("Houston we have a problem the value was not 1 or 2");
}
Imagine the following problem with the anti-pattern "log and throw": if you log your exception and then throw it, the calling method might log that
exception too, and the next one up the stack may do the same. Your log files end up holding the same information several times, which does not improve
the logs' readability.
Anti-pattern:
Pattern:
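The code examples appear to be missing here; a minimal sketch of the idea, with invented names, is below. The anti-pattern both logs and rethrows, so the same exception can appear in the log once per caller; the pattern rethrows with added context and lets exactly one handler at the top of the stack log it:

```java
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ConfigLoader {

    private static final Logger LOG = LoggerFactory.getLogger(ConfigLoader.class);

    // Anti-pattern: the exception is logged here AND rethrown, so every
    // caller that also logs it duplicates the entry in the log file.
    public void loadAndThrow() throws IOException {
        try {
            read();
        } catch (IOException e) {
            LOG.error("could not read config", e);
            throw e;
        }
    }

    // Pattern: add context and rethrow; log nothing here and let a
    // single handler at the top of the call stack log the exception.
    public void loadOnly() throws IOException {
        try {
            read();
        } catch (IOException e) {
            throw new IOException("could not read config", e);
        }
    }

    private void read() throws IOException {
        // placeholder for the real file access
    }
}
```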
Anti-pattern:
Pattern:
Show Exception Pattern
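The examples for this section also appear to be missing. Presumably they contrast discarding the exception object with passing it to the logger; a sketch with invented names:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {

    private static final Logger LOG = LoggerFactory.getLogger(PaymentService.class);

    public void payBadly() {
        try {
            charge();
        } catch (RuntimeException e) {
            // Anti-pattern: only the message is kept; the stack trace
            // (and any nested cause) is withheld from the log.
            LOG.error("payment failed: " + e.getMessage());
        }
    }

    public void payWell() {
        try {
            charge();
        } catch (RuntimeException e) {
            // Pattern: pass the exception itself so the full stack
            // trace lands in the stack_trace field of the log entry.
            LOG.error("payment failed", e);
        }
    }

    private void charge() {
        // placeholder for the real payment call
    }
}
```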
Anti-pattern
Pattern
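A sketch of this last pair, with invented names. System.out.println and e.printStackTrace bypass the logging framework entirely: no level, no logger name, no JSON structure, and the output never reaches the centralized logging solution:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReportJob {

    private static final Logger LOG = LoggerFactory.getLogger(ReportJob.class);

    public void runBadly() {
        // Anti-pattern: bypasses the framework, so there is no level,
        // no JSON format, and nothing reaches centralized logging.
        System.out.println("starting report");
        try {
            generate();
        } catch (RuntimeException e) {
            // Anti-pattern: goes straight to stderr, outside the log.
            e.printStackTrace();
        }
    }

    public void runWell() {
        // Pattern: everything goes through the slf4j logger.
        LOG.info("starting report");
        try {
            generate();
        } catch (RuntimeException e) {
            LOG.error("report generation failed", e);
        }
    }

    private void generate() {
        // placeholder for the real report work
    }
}
```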