Failure remediation strategy for File I/O

Posted by Brett on Stack Overflow, 2013-11-03.

I'm doing buffered I/O on a file, both reading and writing, using fopen(), fseeko(), and the other standard ANSI C file I/O functions. In all cases I'm working with an ordinary local file on disk. How often do these file I/O operations fail, and what should the strategy be when they do? I'm not looking for statistics so much as a general-purpose statement on how far I should go in handling error conditions.
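
For concreteness, here is a minimal sketch of the kind of per-call checking I have in mind, assuming a POSIX-style system where fseeko() and off_t are available; the function name and parameters are placeholders of my own, not from any library guide:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/types.h>   /* off_t, for systems where <stdio.h> doesn't define it */

    /* Minimal sketch: check every call and report via errno.  The path,
     * offset, and length are whatever the caller supplies. */
    int read_record(const char *path, off_t offset, void *buf, size_t len)
    {
        FILE *fp = fopen(path, "rb");
        if (fp == NULL) {
            fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
            return -1;
        }

        if (fseeko(fp, offset, SEEK_SET) != 0) {
            fprintf(stderr, "fseeko: %s\n", strerror(errno));
            fclose(fp);
            return -1;
        }

        if (fread(buf, 1, len, fp) != len) {
            /* Distinguish a genuine I/O error from hitting end of file. */
            if (ferror(fp))
                fprintf(stderr, "fread: %s\n", strerror(errno));
            else
                fprintf(stderr, "fread: unexpected end of file\n");
            fclose(fp);
            return -1;
        }

        if (fclose(fp) != 0) {
            fprintf(stderr, "fclose: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }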

For instance, I think everyone recognizes that malloc() could, and probably will, fail someday on some user's machine, and that the developer should check for a NULL return, but there is no great remediation strategy, since failure probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems; embedded systems are different.
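
By way of example, the malloc() convention I'm referring to looks something like this (the buffer size is just a placeholder):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t buf_size = 64 * 1024;   /* placeholder size */

        char *buf = malloc(buf_size);
        if (buf == NULL) {
            /* Not much remediation is possible: report it and stop. */
            fprintf(stderr, "out of memory allocating %zu bytes\n", buf_size);
            return EXIT_FAILURE;
        }

        /* ... use buf ... */
        free(buf);
        return 0;
    }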

Likewise, is it worth retrying a failed file I/O operation, or should I treat a failure as essentially unrecoverable?
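
For instance, one retry policy I can imagine, purely as an illustration rather than something I've seen recommended, would be a bounded retry wrapper around fwrite():

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    /* Purely illustrative: re-attempt a short or failed fwrite() a bounded
     * number of times, then treat the error as unrecoverable.  The retry
     * count is an arbitrary assumption, not a recommendation. */
    #define MAX_RETRIES 3

    int write_all(FILE *fp, const void *data, size_t len)
    {
        const char *p = data;
        size_t remaining = len;
        int attempts = 0;

        while (remaining > 0) {
            size_t n = fwrite(p, 1, remaining, fp);
            p += n;
            remaining -= n;

            if (remaining == 0)
                break;

            if (++attempts > MAX_RETRIES) {
                fprintf(stderr, "fwrite: giving up after %d retries: %s\n",
                        MAX_RETRIES, strerror(errno));
                return -1;
            }
            /* Clear the stream's error flag before trying again. */
            clearerr(fp);
        }
        return 0;
    }

Whether a bounded retry like this is worthwhile, or whether the first failure should simply abort the whole operation, is exactly what I'm unsure about.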

I would appreciate code samples demonstrating proper usage, or a reference to a library guide that describes how this should be handled. Any other information is, of course, welcome.
