mmap() for large file I/O?

I'm writing a utility in C++ to run on Linux that converts videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses zlib to compress each frame individually and appends the compressed data to the file. Once all frames have been written, a journal containing metadata for each frame (including its file offset and compressed size) is appended to the end of the file.
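
For context, the writer side currently looks roughly like this (a simplified sketch, not our exact code; `FrameEntry`, `appendFrame`, and the trailer layout are illustrative):

```cpp
#include <zlib.h>
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <vector>

struct FrameEntry {
    uint64_t offset;  // byte offset of the compressed frame in the file
    uint64_t size;    // compressed size in bytes
};

void appendFrame(std::ofstream& out, std::vector<FrameEntry>& journal,
                 const unsigned char* frame, uLong frameBytes) {
    // Compress the frame on its own so any frame can be decoded in isolation.
    uLongf destLen = compressBound(frameBytes);
    std::vector<unsigned char> buf(destLen);
    if (compress2(buf.data(), &destLen, frame, frameBytes,
                  Z_DEFAULT_COMPRESSION) != Z_OK)
        throw std::runtime_error("compress2 failed");

    FrameEntry e;
    e.offset = static_cast<uint64_t>(out.tellp());
    e.size = destLen;
    out.write(reinterpret_cast<const char*>(buf.data()), destLen);
    journal.push_back(e);
}

void writeJournal(std::ofstream& out, const std::vector<FrameEntry>& journal) {
    // Naive raw-struct dump (no endianness handling) for illustration only.
    uint64_t journalOffset = static_cast<uint64_t>(out.tellp());
    out.write(reinterpret_cast<const char*>(journal.data()),
              journal.size() * sizeof(FrameEntry));
    // Fixed-size trailer so a reader can find the journal from the file end.
    uint64_t count = journal.size();
    out.write(reinterpret_cast<const char*>(&journalOffset), sizeof(journalOffset));
    out.write(reinterpret_cast<const char*>(&count), sizeof(count));
}
```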

I'm currently using ifstream and ofstream for the file I/O, but I'd like to optimize as much as possible. I've heard that mmap() can improve performance in many cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and while writing is always sequential, random-access reads of individual frames need to be constant time. Is this worth investigating further, and if so, are there pitfalls to watch out for?
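
What I have in mind for the read side is something like the sketch below (a rough sketch assuming 64-bit Linux; `readFrame` and the map-the-whole-file approach are just for illustration, and in practice the mapping would be created once and reused rather than per call):

```cpp
#include <zlib.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <stdexcept>
#include <vector>

std::vector<unsigned char> readFrame(const char* path, uint64_t offset,
                                     uint64_t compressedSize,
                                     uint64_t uncompressedSize) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) throw std::runtime_error("open failed");

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); throw std::runtime_error("fstat failed"); }

    // Map the whole file read-only; on a 64-bit machine this only reserves
    // address space, so it works even for files of hundreds of gigabytes.
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { close(fd); throw std::runtime_error("mmap failed"); }

    // Hint that access is random so the kernel doesn't read ahead aggressively.
    madvise(base, st.st_size, MADV_RANDOM);

    // Decompress straight out of the mapping: no intermediate read buffer.
    std::vector<unsigned char> out(uncompressedSize);
    uLongf destLen = uncompressedSize;
    const Bytef* src = static_cast<const Bytef*>(base) + offset;
    int rc = uncompress(out.data(), &destLen, src, compressedSize);

    munmap(base, st.st_size);
    close(fd);
    if (rc != Z_OK) throw std::runtime_error("uncompress failed");
    return out;
}
```

The appeal over ifstream is that a frame read becomes a page fault plus a memcpy-free decompress, rather than a seekg/read pair through a userspace buffer; whether that actually wins for frames this large is exactly what I'm unsure about.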

Thanks!
