Bash: how to simply parallelize tasks?
Posted by NoozNooz42 on Stack Overflow
Published on 2010-06-09T01:12:00Z
Tags: bash, concurrency
I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this:
find "$BASEDIR" -iname "*png" -exec pngout {} \;
And then I looked at my CPU monitor and noticed that only one of the cores was being used, which is quite sad.
In this day and age of dual-, quad-, hexa-, and octo-core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need, since quite a lot of these utilities are single-threaded... I already ran into this with MP3 encoders.)
Would simply running all the pngout invocations in the background do? What would my find command look like then? (I'm not too sure how to mix find and the '&' character.)
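Something like this is what I have in mind for the background approach (a rough sketch, assuming GNU find for -print0; it backgrounds one pngout per file with no limit):

#!/bin/bash
# Naive idea: background one pngout per file, then wait for them all.
# No throttling: with 300 files this launches 300 processes at once.
while IFS= read -r -d '' f; do
    pngout "$f" &               # each compression runs concurrently
done < <(find "$BASEDIR" -iname "*png" -print0)
wait                            # block until every background job is done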
If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!?
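I suppose what I really want is to cap the number of simultaneous jobs at the number of cores. A sketch with GNU xargs' -P option (assuming GNU findutils, and GNU coreutils' nproc for the core count) would be:

# Run at most $(nproc) pngout processes at any one time.
find "$BASEDIR" -iname "*png" -print0 |
    xargs -0 -n 1 -P "$(nproc)" pngout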
Or should I copy my three hundred or so files into "nb dirs" directories, where "nb dirs" is the number of cores, and then run "nb dirs" concurrent finds? (Which would be close enough.)
But how would I do this?
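The best sketch I can come up with for that idea skips the copying and just deals the files out round-robin to one background worker per core (again assuming GNU coreutils' nproc, and file names without embedded newlines):

#!/bin/bash
# Sketch: distribute the files round-robin across one worker per core,
# instead of physically copying them into "nb dirs" directories.
ncores=$(nproc)                        # core count, GNU coreutils
i=0
while IFS= read -r -d '' f; do
    bucket[i++ % ncores]+="$f"$'\n'    # append file to a worker's list
done < <(find "$BASEDIR" -iname "*png" -print0)

for ((w = 0; w < ncores; w++)); do
    # Each worker chews through its own list sequentially, in the background.
    printf '%s' "${bucket[w]}" | while IFS= read -r f; do
        pngout "$f"
    done &
done
wait                                   # wait for all the workers to finish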