Archive software for big files and fast index

Posted by AkiRoss on Super User, 2012-10-05

I'm currently using tar to archive some files. The problem is that the archives are pretty big, contain a lot of data, and tar is very slow at listing and extracting.

I often need to extract single files or folders from the archive, but I don't currently have an external index of files.

So, is there an alternative for Linux that lets me build uncompressed archive files, preserving file attributes AND providing a fast index for listing and access?

I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file.

Anyway, any trick to solve this problem is welcome (but a single archive file is a requirement, so no rsync or similar).

Thanks in advance!

EDIT: I'm not compressing the archives, and even so I find tar too slow. To be precise about "slow", I'd like that:

  • listing the archive contents should take time linear in the number of files in the archive, but with a very small constant factor (e.g. if a list of all the files is stored at the head of the archive, it could be very fast).
  • extracting a target file/directory should (filesystem permitting) take time linear in the target's size (e.g. if I'm extracting a 2 MB PDF file from a 40 GB directory, I'd really like it to take less than a few minutes, if not seconds).

Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g. a tree structure).
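
For what it's worth, here is a minimal sketch of that idea applied to an uncompressed tar archive, using only Python's standard tarfile and json modules (all file names and paths below are made up for illustration): one linear pass records each member's data offset and size in an external index, after which extracting a single file is just a seek plus a copy of that many bytes.

    import json
    import tarfile

    def build_index(archive_path, index_path):
        """One linear pass over the tar: record {name: [data_offset, size]} as JSON."""
        index = {}
        with tarfile.open(archive_path, mode="r:") as tar:  # "r:" = uncompressed tar only
            for member in tar:
                if member.isfile():
                    # offset_data is the byte position where the member's payload starts
                    index[member.name] = [member.offset_data, member.size]
        with open(index_path, "w") as f:
            json.dump(index, f)

    def extract_one(archive_path, index_path, name, out_path):
        """Seek straight to the payload and copy it out; no scan of the archive."""
        with open(index_path) as f:
            index = json.load(f)
        offset, size = index[name]
        with open(archive_path, "rb") as archive, open(out_path, "wb") as out:
            archive.seek(offset)
            remaining = size
            while remaining > 0:
                chunk = archive.read(min(1 << 20, remaining))
                if not chunk:
                    break
                out.write(chunk)
                remaining -= len(chunk)

    # Hypothetical usage:
    # build_index("backup.tar", "backup.index.json")
    # extract_one("backup.tar", "backup.index.json", "docs/report.pdf", "report.pdf")

This only copies the raw payload, so ownership/permission attributes are not restored; a fuller version could also record each member's header offset and let tarfile recreate the metadata. The point is just that a precomputed offset index makes the lookup cost independent of the archive size.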
