Optimize shell and awk script

Posted by bryan on Super User
Published on 2011-02-16T21:46:34Z

I am using a combination of a shell script, an awk script and a find command to perform multiple text replacements in hundreds of files. The file sizes vary between a few hundred bytes and 20 KB.

I am looking for a way to speed up this script.

I am using Cygwin.

The shell script -

#!/bin/bash

if [ $# -eq 0 ]; then
    echo "Argument expected"
    exit 1
fi

while [ $# -ge 1 ]
do
    if [ ! -f "$1" ]; then
        echo "No such file as $1"
        exit 1
    fi

    awk -f ~/scripts/parse.awk "$1" > "${1}.$$"

    if [ $? -ne 0 ]; then
        echo "Something went wrong with the script"
        rm "${1}.$$"
        exit 1
    fi

    mv "${1}.$$" "$1"
    shift
done
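Most files may not contain the pattern at all, so one possible refinement (my sketch, not the original script) is to skip the rewrite when awk produced identical output; `cmp -s` compares the two files silently:

```shell
#!/bin/bash
# Sketch, assuming the parse.awk path from the post: replace a file with the
# awk output only when the content actually changed.
process_one() {
    awk -f ~/scripts/parse.awk "$1" > "${1}.$$" || { rm -f "${1}.$$"; return 1; }
    if cmp -s "$1" "${1}.$$"; then
        rm "${1}.$$"               # unchanged: avoid rewriting the file
    else
        mv "${1}.$$" "$1"
    fi
}
```

This avoids an unnecessary write (and the mv) for every file the awk script leaves untouched.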

The awk script (simplified) -

#! /usr/bin/awk -f

/HHH.Web/{
    if ( index($0,"Email") == 0)  {
        sub(/HHH.Web/,"HHH.Web.Email");
    }
    printf("%s\r\n",$0); 
    next;
}
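To illustrate the rule above: a line containing HHH.Web but not Email is rewritten to HHH.Web.Email and re-emitted with a CRLF ending (the input line here is a made-up example):

```shell
# Feed a sample line through the same awk rule; the substitution fires
# because the line lacks "Email".
printf 'using HHH.Web;\n' |
awk '/HHH.Web/{ if (index($0,"Email") == 0) sub(/HHH.Web/,"HHH.Web.Email"); printf("%s\r\n",$0); next; }'
# prints: using HHH.Web.Email; (terminated with \r\n)
```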

The command line

find . -type f | xargs ~/scripts/run_parser.sh
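Note that the newline-delimited pipe above breaks on filenames containing spaces; a null-delimited variant (a sketch, assuming the GNU find and xargs shipped with Cygwin) is safer:

```shell
# -print0 / -0 delimit paths with NUL bytes, so spaces and newlines in
# filenames are passed through intact.
find . -type f -print0 | xargs -0 ~/scripts/run_parser.sh
```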

© Super User or respective owner
