Java
kirawa, 2016-10-26 14:09:01

How to set up log4j for a multi-threaded environment?

There is a service that receives phone calls, and all data is logged to files using log4j.
There are many sessions and they overlap each other. I tried to add a session ID through MDC to make some sense of the logs, but since MDC is a static class, the data was overwritten and the logs became a mess again. What should I do?
One of the layout patterns is:

# ERROR
#log4j.appender.ERROR=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ERROR=org.apache.log4j.RollingFileAppender
log4j.appender.ERROR.File=${folder_real}error.log
#log4j.appender.ERROR.DatePattern='.'yyyy-MM-dd
log4j.appender.ERROR.MaxFileSize=4096MB
log4j.appender.ERROR.MaxBackupIndex=1
log4j.appender.ERROR.layout=org.apache.log4j.PatternLayout
log4j.appender.ERROR.layout.ConversionPattern=%d{dd/MM/yyyy HH:mm:ss} %-5p - %X{SCESession}:%C{1}:%L %m%n 
log4j.appender.ERROR.Encoding=UTF-8

I'm looking at this appender: log4j.appender.asyncLog=com.log.AsyncAppenderHelper
Will it help? The thing is that rolling out a new patch to production takes me a lot of time.


1 answer
Andrew, 2016-11-05
@AndreiLED

since MDC is a static class, the data was overwritten and the logs became a mess again

That's not so: MDC uses ThreadLocal, so each thread has its own set of values in it. Therefore, every message in the log file shows exactly the id that was set in the MDC by the thread that generated that message.
The mess most likely arises simply because parallel threads write in parallel, so the records of each particular session do not appear in the log contiguously but are interleaved with records from other sessions.
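A minimal plain-JDK sketch of this mechanism (no log4j required): the per-thread storage below stands in for log4j's MDC, the SESSION_ID set/remove calls correspond to MDC.put("SCESession", …)/MDC.remove("SCESession"), and the "call-A"/"call-B" ids are made up for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MdcIsolationDemo {
    // Simplified stand-in for log4j's MDC: one value per thread.
    private static final ThreadLocal<String> SESSION_ID = new ThreadLocal<>();

    // Simulates handling one phone call on the current thread:
    // store the session id (like MDC.put) and format a log line
    // the way the %X{SCESession}: %m pattern would.
    static String handleCall(String sessionId, String message) {
        SESSION_ID.set(sessionId);                     // MDC.put(...)
        try {
            return SESSION_ID.get() + ": " + message;  // %X{SCESession}: %m
        } finally {
            SESSION_ID.remove();                       // MDC.remove(...)
        }
    }

    public static void main(String[] args) throws Exception {
        // Two overlapping calls handled by a thread pool:
        // each one reads back its own id, never the other call's.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> a = pool.submit(() -> handleCall("call-A", "ringing"));
        Future<String> b = pool.submit(() -> handleCall("call-B", "ringing"));
        System.out.println(a.get()); // call-A: ringing
        System.out.println(b.get()); // call-B: ringing
        pool.shutdown();
    }
}
```

The real MDC behaves the same way: a put from one thread is invisible to every other thread, so ids cannot overwrite each other across sessions.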
But this is perfectly normal for logs in a concurrent environment. The session id is added to the logs not for ordering at all, but so that the log lines belonging to a specific session can later be extracted easily. In the simplest case this is done with grep %session% file.log. If the system generates a lot of logs, systems like Splunk are used to simplify searching across many files and to organize all the logs into a single database.
log4j.appender.asyncLog=com.log.AsyncAppenderHelper
will it help?

I wonder why a third-party class (judging by the package) and not AsyncAppender from log4j itself.
But in any case, this will not help: the task AsyncAppender solves is accumulating messages in memory and then writing them to disk in batches (RollingFileAppender is a synchronous appender and writes every message to the file immediately). The order of messages does not change in any way when AsyncAppender is used; they still appear in the order in which they were generated.
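Incidentally, one plausible reason for the helper class: in log4j 1.2, AsyncAppender cannot be wired up through a log4j.properties file, because the PropertyConfigurator does not support the appender references AsyncAppender needs; it has to be configured via log4j.xml (DOMConfigurator) or programmatically. A sketch of the XML equivalent of the question's config, assuming the same appender names (ERROR, asyncLog); the file path is an example:

```xml
<!-- log4j.xml sketch: wrap the existing file appender in an AsyncAppender. -->
<appender name="ERROR" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="error.log"/>
  <param name="MaxFileSize" value="4096MB"/>
  <param name="MaxBackupIndex" value="1"/>
  <param name="Encoding" value="UTF-8"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern"
           value="%d{dd/MM/yyyy HH:mm:ss} %-5p - %X{SCESession}:%C{1}:%L %m%n"/>
  </layout>
</appender>

<appender name="asyncLog" class="org.apache.log4j.AsyncAppender">
  <param name="BufferSize" value="512"/>
  <appender-ref ref="ERROR"/>
</appender>
```

Again, this only moves the disk write off the calling thread and batches it; it does not change message order or untangle interleaved sessions.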
