Programmatic resource monitoring per process in Linux

Posted by tuxx on Stack Overflow
Published on 2009-11-02T21:19:29Z Indexed on 2010/05/30 21:32 UTC

Hi,

I want to know if there is an efficient way to monitor a process's resource consumption (CPU, memory, network bandwidth) on Linux. I want to write a daemon in C++ that performs this monitoring for a given set of PIDs. As far as I know, the classic solution is to periodically read the information from /proc, but this doesn't seem to be the most efficient approach, since it involves many system calls. For example, to monitor the memory usage of 50 processes every second, I have to open, read and close 50 files under /proc (that is, 150 system calls) every second, not to mention the parsing involved when reading these files.

Another problem is network bandwidth consumption: it cannot easily be computed per process. The solution adopted by NetHogs involves pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, and for each packet it determines the local port and searches /proc to find the corresponding process.
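The /proc side of that lookup works by reading /proc/net/tcp, which maps each local port to a socket inode; the inode must then be matched against the "socket:[inode]" links under /proc/&lt;pid&gt;/fd. A minimal sketch of the line parsing, with a helper name (`parse_tcp_line`) of my own invention:

```cpp
// Sketch of the per-packet lookup a NetHogs-style tool performs:
// parse one data line of /proc/net/tcp to get (local port, socket inode).
// Mapping inode -> PID then requires scanning /proc/<pid>/fd/* links.
#include <cassert>
#include <sstream>
#include <string>

struct SocketEntry {
    int local_port = -1;  // host byte order
    long inode = -1;      // kernel socket inode
};

// Hypothetical helper. Field layout of a /proc/net/tcp data line:
// sl local_address rem_address st tx:rx tr:when retrnsmt uid timeout inode
// Addresses are hex "IP:PORT" pairs.
SocketEntry parse_tcp_line(const std::string& line) {
    SocketEntry e;
    std::istringstream in(line);
    std::string sl, local, rem, st, queues, timers, retrnsmt;
    long uid = 0, timeout = 0, inode = 0;
    if (in >> sl >> local >> rem >> st >> queues >> timers >> retrnsmt
           >> uid >> timeout >> inode) {
        auto colon = local.find(':');
        if (colon != std::string::npos)
            e.local_port = std::stoi(local.substr(colon + 1), nullptr, 16);
        e.inode = inode;
    }
    return e;
}
```

Doing this parse plus an fd-directory scan for every captured packet is exactly the overhead I am worried about.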

Do you know of more efficient alternatives to these methods, or of any libraries that deal with these problems?

