Concurrency pattern for a logger in a multithreaded application

Posted by Dipan Mehta on Programmers, 2012-10-11.

The context: We are working on a multi-threaded (Linux-C) application that follows a pipeline model.

Each module has a private thread and encapsulated objects that process data, and each stage has a standard mechanism for exchanging data with the next unit.

The application is free of memory leaks and is thread-safe, using locks at the points where data is exchanged. There are about 15 threads in total, and each thread can own 1 to 4 objects, making roughly 25 to 30 objects, all of which have critical logging to do.
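Concretely, the locked hand-off between stages looks something like this (a minimal sketch; stage_queue_t, the names, and the fixed capacity are illustrative, not our actual code):

    #include <pthread.h>

    #define QUEUE_CAP 16

    typedef struct {
        void           *items[QUEUE_CAP];
        int             head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t  not_empty, not_full;
    } stage_queue_t;

    /* A queue can be statically initialized; remaining fields start at zero:
     * stage_queue_t q = { .lock = PTHREAD_MUTEX_INITIALIZER,
     *                     .not_empty = PTHREAD_COND_INITIALIZER,
     *                     .not_full  = PTHREAD_COND_INITIALIZER };        */

    /* Producer side: the upstream stage pushes a finished buffer. */
    void stage_push(stage_queue_t *q, void *item)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == QUEUE_CAP)
            pthread_cond_wait(&q->not_full, &q->lock);
        q->items[q->tail] = item;
        q->tail = (q->tail + 1) % QUEUE_CAP;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    /* Consumer side: the downstream stage pops the next buffer. */
    void *stage_pop(stage_queue_t *q)
    {
        void *item;
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->not_empty, &q->lock);
        item = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return item;
    }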

Most of the discussion I have seen is about log levels, as in Log4j and its ports to other languages. The real question is: how should the overall logging actually happen?

One approach is for all local logging to fprintf to stderr, with stderr redirected to a file. This approach breaks down when the logs become too big.
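In code this amounts to nothing more than the following (LOG is an illustrative macro name, not an existing API):

    #include <stdio.h>

    /* Every object logs straight to stderr; the shell redirects it:
     *   ./pipeline 2> app.log
     * (##__VA_ARGS__ is a GNU extension, fine for Linux-C.)        */
    #define LOG(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)

    /* e.g. inside a stage:
     * LOG("decoder: dropped frame %d", frame_no);                  */

On POSIX systems each fprintf call locks the stream internally, so lines don't interleave mid-line, but all threads still funnel into one stream.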

If every object instantiates its own logger (about 30 to 40 of them), there will be too many files, and unlike the approach above, one loses the true order of events. Timestamping each entry is one possibility, but collating the files is still a mess.
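Such a per-object logger would look roughly like this (a sketch; obj_logger_t and the one-file-per-object naming are made up for illustration). Even with nanosecond CLOCK_REALTIME stamps on every line, merging 30 to 40 files back into one ordered trace is the messy part:

    #include <stdio.h>
    #include <time.h>

    typedef struct {
        FILE *fp;
    } obj_logger_t;

    int obj_logger_open(obj_logger_t *lg, const char *name)
    {
        char path[256];
        snprintf(path, sizeof path, "%s.log", name);  /* one file per object */
        lg->fp = fopen(path, "a");
        return lg->fp ? 0 : -1;
    }

    void obj_logger_log(obj_logger_t *lg, const char *msg)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);           /* timestamp for later collation */
        fprintf(lg->fp, "%ld.%09ld %s\n", (long)ts.tv_sec, ts.tv_nsec, msg);
    }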

With a single global logger (the singleton pattern), many threads are indirectly blocked while one is busy writing a log entry. This is unacceptable when the threads' processing load is heavy.
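In other words, a naive singleton serializes every caller on one mutex around the actual I/O, something like this (sketch; global_log and log_lock are illustrative names):

    #include <stdio.h>
    #include <pthread.h>

    static FILE           *global_log;  /* assume fopen()ed once at startup */
    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    void log_write(const char *msg)
    {
        pthread_mutex_lock(&log_lock);    /* every thread contends here...          */
        fprintf(global_log, "%s\n", msg);
        fflush(global_log);               /* ...while disk I/O happens under the lock */
        pthread_mutex_unlock(&log_lock);
    }

The real cost is not the mutex itself but the fprintf/fflush disk I/O being performed while it is held.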

So what is the ideal way to structure the logging objects? What are some best practices from actual large-scale applications?

I would also love to learn from the real designs of large-scale applications for inspiration!

