Best practice for handling memory leaks in large Java projects?

Posted by knorv on Stack Overflow, 2010-06-02.

In almost all of the larger Java projects I've been involved with, I've noticed that the application's quality of service degrades with the uptime of the container. This is most probably due to memory leaks in the code.

The correct way to solve this is obviously to trace the problem back to its root cause and fix the leaks in the code. The quick and dirty way is simply to restart Tomcat (or whichever servlet container you're using) at regular intervals.

These are my three questions:

  • Assuming you choose to fix the root cause of the problem (the memory leaks), how would you collect data to zoom in on the problem? (See the first sketch below the list for the kind of data collection I have in mind.)

  • Assuming you choose the quick and dirty way of speeding things up by simply restarting the container, how would you collect data to choose the optimal restart cycle? (See the second sketch below the list.)

  • Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness? Or is an occasional container restart something one simply has to accept?
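
To make the first question concrete: what I have in mind is capturing heap dumps from the running container at intervals and comparing them in a profiler or a tool like Eclipse MAT to see which objects keep accumulating. A minimal sketch of such a dump trigger (Sun/Oracle JVMs only; the class and file names are purely illustrative):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Illustrative helper that captures a heap dump from inside the JVM.
    // Comparing successive .hprof files shows which objects accumulate.
    public class HeapDumper {

        public static void dumpHeap(String fileName) throws Exception {
            HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // "true" dumps only live (reachable) objects, keeping the file smaller
            diagnostic.dumpHeap(fileName, true);
        }

        public static void main(String[] args) throws Exception {
            dumpHeap("heap-" + System.currentTimeMillis() + ".hprof");
        }
    }

The same data can be collected from outside the process with jmap -dump:live,format=b,file=heap.hprof <pid>.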

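For the second question, the data I imagine collecting is simply heap usage against container uptime, logged at a fixed interval so that the growth trend (and hence a sensible restart cycle) becomes visible. Again just a sketch, with made-up names:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;
    import java.util.Timer;
    import java.util.TimerTask;

    // Illustrative monitor that logs heap usage against JVM uptime.
    // Graphing the output over a few days should show whether usage grows
    // without bound and roughly how long it takes to become a problem.
    public class HeapTrendLogger {

        public static void start(long periodMillis) {
            final MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            Timer timer = new Timer("heap-trend-logger", true); // daemon thread
            timer.scheduleAtFixedRate(new TimerTask() {
                @Override
                public void run() {
                    MemoryUsage heap = memory.getHeapMemoryUsage();
                    System.out.printf("uptime=%dms used=%d committed=%d max=%d%n",
                            ManagementFactory.getRuntimeMXBean().getUptime(),
                            heap.getUsed(), heap.getCommitted(), heap.getMax());
                }
            }, 0L, periodMillis);
        }

        public static void main(String[] args) throws Exception {
            start(5 * 60 * 1000L); // every five minutes
            Thread.sleep(Long.MAX_VALUE); // keep this standalone demo alive
        }
    }

The same numbers could of course also come from verbose GC logs (-verbose:gc, -XX:+PrintGCDetails) or from watching the heap remotely over JMX with JConsole or VisualVM.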