Search Results

Search found 5140 results on 206 pages for 'crazy bash'.

  • How do I fix my Ruby installation

    - by Robin Fisher
    Hi all, I rather cleverly (or not, in hindsight) installed RVM, which kept hanging while compiling Rubies. I have removed the .rvm directory, but now my system has reverted to Ruby 1.8.7, i.e. when I type:

        ruby -v
        which ruby

    they both point to 1.8.7. How do I get the ruby command to point to my 1.9.1 installation, which is located in /usr/local/lib/ruby/1.9.1? I'm on OS X 10.6. Thanks, Robin
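
    A minimal sketch of the usual fix, assuming the 1.9.1 binary was installed under /usr/local/bin (the customary prefix for a /usr/local/lib/ruby tree):

        ls /usr/local/bin/ruby             # confirm where the 1.9.1 interpreter lives
        export PATH=/usr/local/bin:$PATH   # put it ahead of /usr/bin; add to ~/.bash_profile to make it stick
        hash -r                            # clear bash's cached location of `ruby`
        ruby -v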

  • shell scripting: nested subshell ++

    - by jhon
    Hi guys, more than a problem, this is a request for "another way to do this". If I want to use the result of a previous command in another one, I use:

        R1=$(cat somefile | awk '{ print $1 }')
        myScript -c $R1 -h123

    Then a "better way" is:

        myScript -c $(cat somefile | awk '{ print $1 }') -h123

    But what if I have to use the result several times, let's say using $R1 several times? Well, the two options:

    Option 1:

        R1=$(cat somefile | awk '{ print $1 }')
        myScript -c $R1 -h123 -x $R1

    Option 2:

        myScript -c $(cat somefile | awk '{ print $1 }') -h123 -x $(cat somefile | awk '{ print $1 }')

    Do you know another way to "store" the result of a previous command/script and use it as an argument in another command/script? Thanks
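
    Option 1 is the idiomatic answer: capture once into a variable and reuse it. A minimal sketch (quoting the expansions guards against word splitting, and awk can read the file itself, so cat is unnecessary):

        r1=$(awk '{ print $1 }' somefile)
        myScript -c "$r1" -h123 -x "$r1"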

  • Accessing variable from ARGV

    - by snaken
    I'm writing a cPanel postwwwact script; if you're not familiar with it, the script is run after a new account is created. It relies on the user account variable being passed to the script, which I then use for various things (creating databases etc.). However, I can't seem to find the right way to access the variable I want. I'm not that good with shell scripts, so I'd appreciate some advice. I had read somewhere that the value I wanted would be included in $ARGV{'user'}, but this simply gives "root" as opposed to the value I need. I've tried looping through all the arguments (list of arguments here) like this:

        #!/bin/sh
        for var
        do
            touch /root/testvars/$var
        done

    and the value I want is in there; I'm just not sure how to accurately target it. There's info here on doing this with PHP or Perl, but I have to do this as a shell script. EDIT: Ideally I would like to be able to call the variable by something other than $1 or $2 etc., as this would create issues if an argument is added or removed. Any ideas?
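
    A minimal sketch, assuming the arguments arrive as key=value pairs (which the touch-file experiment above should confirm); the key name "user" is an assumption taken from the question:

        #!/bin/sh
        # Pick out a named argument instead of relying on $1/$2 positions.
        for arg in "$@"; do
            case $arg in
                user=*) user=${arg#user=} ;;   # strip the "user=" prefix, keep the value
            esac
        done
        echo "account user is: $user"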

  • Capture log4J output with grep

    - by Fork
    Hi, I know that log4j by default outputs to stderr. I have been capturing the output of my application with the following command:

        application_to_run 2> log ; cat log | grep FATAL

    Is there a way to capture the output without the auxiliary file?
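
    A minimal sketch: redirect stderr into the pipe directly, no temporary file needed.

        # 2>&1 points stderr at the pipe; >/dev/null then discards the program's
        # own stdout, so grep sees only the log4j output. (Drop >/dev/null to
        # search both streams.)
        application_to_run 2>&1 >/dev/null | grep FATAL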

  • Using terminal to record/save a data stream

    - by jonhurlock
    I want to be able to save a data stream which I am returning using the curl command. I have tried using the cat command and piping the curl command into it; however, I'm doing it wrong. The code I'm currently using is:

        cat > file.txt | curl http://datastream.com/data

    Any help would be appreciated.
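
    A minimal sketch: the pipe has to carry curl's output into the file, not the other way around.

        curl http://datastream.com/data > file.txt        # plain redirection
        curl -o file.txt http://datastream.com/data       # or let curl write the file
        curl http://datastream.com/data | tee file.txt    # or save while still watching the stream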

  • Find directories not containing a specific directory

    - by Morgan ARR Allen
    Been searching around for a bit and cannot find a solution for this one. I guess I'm looking for a leaf directory by name. In this example I'd like to get a list of directories called 'modules' that do NOT have a subdirectory called 'modules':

        modules/package1/modules/spackage1
        modules/package1/modules/spackage2
        modules/package1/modules/spackage3/modules
        modules/package1/modules/spackage3/modules/spackage1
        modules/package2/modules/

    The list I desire would contain:

        modules/package1/modules/spackage3/modules/
        modules/package2/modules/

    i.e. all the directories named 'modules' that do not have a subdirectory called 'modules'. I started with trying something like this, with no luck:

        find . -name modules \! -exec sh -c 'find -name modules' \;

    -exec works on the exit code, so okay, let's pass the count as the exit code:

        find . -name modules -exec sh -c 'exit $(find {} -name modules | grep -n "" | tail -n1 | cut -d: -f1)' \;

    This should take the count of 'modules' subdirectories under each match and exit with it. No such love.
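
    A minimal sketch along the same lines, assuming GNU find: use the inner find as a test, starting at -mindepth 1 so a directory does not count itself as a match.

        # Print each 'modules' directory whose subtree contains no other
        # directory named 'modules'.
        find . -type d -name modules \
            -exec sh -c '[ "$(find "$1" -mindepth 1 -type d -name modules | wc -l)" -eq 0 ]' _ {} \; \
            -print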

  • cd Terminal at a given directory after running a Python script?

    - by Dave Everitt
    I'm working on a simple Python script that can use subprocess and/or os to execute some commands, which is working fine. However, when the script exits I'd like to cd the actual Terminal (in this case OS X), so that on exit the new files are ready to use in the directory where they have been created. All the following (subprocess.Popen, os.system, os.chdir) can do what I want from within the script (i.e. they execute stuff in the target directory), but on exit they leave the Terminal at the script's own directory, not the target directory. I'd like to avoid writing a shell script to a temporary file just to achieve this, if this is at all possible anyway?
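
    A child process can never change its parent shell's working directory, so the usual workaround is a wrapper function in the shell itself. A hypothetical sketch, assuming the Python script is changed to print only the target directory on stdout (the function and script names are placeholders):

        # In ~/.bashrc (or ~/.bash_profile on OS X):
        runscript() {
            cd "$(python /path/to/script.py "$@")"
        }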

  • escape from a linux cli for loop

    - by aidan
    I'm doing something like this:

        for f in `find -iname '*.html'`; do scp $f remoteserver:$f; done

    I've got through about 3 of the 1000 files and I've decided I want to abort the operation. Ctrl+C only escapes the scp login prompt and takes me to the next one, rather than escaping the for loop. Is there a better way than hitting Ctrl+C 997 times? Thanks!
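
    A minimal sketch of the usual escape: suspend the whole loop with Ctrl+Z, then kill the job.

        # Press Ctrl+Z to stop the foreground job, then:
        kill %1            # or kill -9 %1 if it won't die

    For next time, set up SSH keys (so scp never prompts), or let a script version abort cleanly:

        trap 'exit 130' INT    # in a script: one Ctrl+C exits the whole loop
        for f in $(find . -iname '*.html'); do
            scp "$f" remoteserver:"$f" || break   # also stops on the first scp failure
        done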

  • Running an array of processes

    - by User1
    I have the following array:

        procs=( 'one a b c' 'two d e f' 'three g h i' )

    I try to run these processes from a loop (using echo instead of eval so I can debug):

        for proc in ${procs[@]}
        do
            echo $proc
        done

    I get:

        one
        a
        b
        c
        two
        d
        e
        f
        three
        g
        h
        i

    I wanted:

        one a b c
        two d e f
        three g h i

    What went wrong?
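
    A minimal sketch of the fix: quote the expansion, so each array element stays one word.

        procs=( 'one a b c' 'two d e f' 'three g h i' )
        for proc in "${procs[@]}"; do    # quotes preserve element boundaries
            echo "$proc"
        done
        # prints one element per line: one a b c / two d e f / three g h i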

  • sed replacement does not work

    - by Robin Hood
    Hello, I have trouble using sed. I need to replace some lines in some very outdated HTML sites consisting of many files. My script does not work and I do not know why. When I tried to find the exact pattern with NetBeans, it worked.

        find . -type f -name "*.htm?" -exec sed -i -r 's/ing\. Šuhajda Dušan\, Mírová 767\, 518 01 Dobruška\, \+420 737 980 333\,/REPLACEMENT/g' {} \;

    Where is the mistake? Is there an alternative that replaces plain text without interpreting it as a regular expression? Thanks for any response.
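
    A minimal sketch of a literal (non-regex) replacement using perl's \Q...\E, which quotes every metacharacter in the search string; the pattern text is copied from the question:

        find . -type f -name "*.htm?" -exec perl -i -pe \
            's/\Qing. Šuhajda Dušan, Mírová 767, 518 01 Dobruška, +420 737 980 333,\E/REPLACEMENT/g' {} \;

    If nothing matches, also check the files' encoding: a pattern typed in UTF-8 will not match text stored as ISO-8859-2.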

  • Sort command not working as expected

    - by user964689
    If anybody can help me write a loop to iterate over files in a folder, it would save me a huge amount of time. I think it must be quite a simple solution, but I currently don't know how to nest a loop within a loop. So far I have this script:

        cd /folderlocation/
        for i in `</textfile_containing_lines_to_iterate_through`
        do
            #size=`echo $i | perl -nE '/:([\d-]+)/ && say abs(eval $1)'`
            #echo "$size"
            zcat dataset | head -n 18 > temp"$i".vcf
            tabix dataset $i >> temp"$i".vcf
            vcftools --window-pi 1000000 --vcf temp10individuals"$i".vcf >> run_summary.txt
            cat out.windowed.pi >> outputfile_2
            #rm temp*
        done
        grep -v "PI" outputfile_2 > outputfile
        rm outputfile_2

    I need to expand this so that the script runs multiple times, through all of the 'textfiles_containing_lines_to_iterate_through'. Currently I change the name of the text file manually each time and re-run the script. So I'd need a loop that does this for each file in the folder, and one that also uses the name of the file as part of the output file name, so that I can match an output file to an input file. Any help would be really useful and greatly appreciated! Many thanks in advance.
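
    A minimal sketch of the nesting, assuming the list files share a common prefix (the glob pattern and file names are placeholders); the outer loop's file name is reused in the output names:

        cd /folderlocation/
        for listfile in textfile_*; do
            while read -r i; do
                zcat dataset | head -n 18 > "temp${i}.vcf"
                tabix dataset "$i" >> "temp${i}.vcf"
                vcftools --window-pi 1000000 --vcf "temp${i}.vcf" >> "run_summary_${listfile}.txt"
                cat out.windowed.pi >> "tmp_out_${listfile}"
            done < "$listfile"
            grep -v "PI" "tmp_out_${listfile}" > "outputfile_${listfile}"
            rm "tmp_out_${listfile}"
        done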

  • Backup of folder + database - Python

    - by RadiantHex
    Hi there, I feel like this is quite delicate. I have various folders with projects I would like to back up into a zip/tar file, but I would like to avoid backing up files such as .pyc files and temporary files. I also have a Postgres DB I need to back up. Any tips for running this operation as a Python script? Also, would there be any way to stop the process from hogging resources while it runs? Help would be very much appreciated.
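
    Whatever drives it (a Python script can shell these out via subprocess), the moving parts look like this minimal sketch; the paths, database name, and exclude patterns are placeholders, and nice/ionice are the Linux way to keep the job from hogging the machine:

        nice -n 19 ionice -c 3 tar czf projects_backup.tar.gz \
            --exclude='*.pyc' --exclude='*~' --exclude='*.tmp' \
            /path/to/projects
        nice -n 19 pg_dump mydatabase | gzip > mydatabase.sql.gz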

  • Redirect output from sed 's/c/d/' myFile to myFile

    - by sixtyfootersdude
    I am using sed in a script to do a replace, and I want the replaced output to overwrite the file. Normally I think that you would use this:

        % sed -i 's/cat/dog/' manipulate
        sed: illegal option -- i

    However, as you can see, my sed does not have that option. I tried this:

        % sed 's/cat/dog/' manipulate > manipulate

    But this just turns manipulate into an empty file (makes sense). This works:

        % sed 's/cat/dog/' manipulate > tmp; mv tmp manipulate

    But I was wondering if there was a standard way to redirect output into the same file that input was taken from.
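
    A minimal sketch of two common alternatives when sed lacks -i (the temp-file dance is otherwise the standard answer):

        # 1) POSIX ed edits the file in place (it prints "?" if the pattern
        #    never matches):
        printf ',s/cat/dog/g\nw\nq\n' | ed -s manipulate

        # 2) sponge, from moreutils, soaks up all input before writing, so
        #    reading and writing the same file is safe:
        sed 's/cat/dog/' manipulate | sponge manipulate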

  • Shell Script- each unique user

    - by Dinis Monteiro
    Hi guys, I need to "for each unique user, report which group they are a member of and when they last logged in", so I have:

        #!/bin/sh
        echo "Your initial login:"
        who | cut -d' ' -f1 | sort | uniq
        echo "Now is logged:"
        whoami
        echo "Group ID:"
        id -G $whoami
        case $1 in
            "-l") last -Fn 10 | tr -s " " ;;
            *) last -Fn 10 | tr -s " " | egrep -v '(^reboot)|(^$)|(^wtmp a)|(^ftp)' | cut -d" " -f1,5,7 | sort -uM | uniq -c
        esac

    My question is: how can I show each unique user? The script above only shows the most recently logged-in user, but I need all unique users. Can anyone help? Thanks
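
    A minimal sketch of the per-user loop, reusing the question's last/egrep filtering to build the unique user list (last's output format varies between systems, so treat this as a starting point):

        for user in $(last | awk 'NF { print $1 }' | egrep -v '^(reboot|wtmp|ftp)' | sort -u); do
            echo "$user  groups: $(id -Gn "$user" 2>/dev/null)  last login: $(last -n 1 "$user" | head -n 1)"
        done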

  • Basic Google search using a shell script

    - by Lri
    Something like this, but using just basic shell scripting:

        #!/usr/bin/env python
        import urllib
        import json

        base = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&'
        query = urllib.urlencode({'q': "something"})
        response = urllib.urlopen(base + query).read()
        data = json.loads(response)
        print data['responseData']['results'][0]['url']

    Any more convenient alternatives to ajax.googleapis.com? If not, how should you encode the URL and parse JSON?
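
    A minimal sketch with curl doing the encoding and jq doing the JSON; jq isn't "basic" shell, but it is the usual tool for this job (a sed hack on the raw JSON is the fragile alternative):

        q="something"
        curl -s -G 'http://ajax.googleapis.com/ajax/services/search/web' \
            --data-urlencode 'v=1.0' \
            --data-urlencode "q=$q" \
        | jq -r '.responseData.results[0].url'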

  • Is there a reasonable way to attach new path to PATH in bashrc?

    - by Ripley
    Guys, I constantly need to attach new paths to the PATH environment variable in .bashrc, like below:

        export PATH=/usr/local/bin:$PATH

    Then, to make it take effect, I always do 'source ~/.bashrc' or '. ~/.bashrc', but I found one shortcoming of doing so that makes me uncomfortable. If I keep doing so, PATH gets longer and longer with many duplicated entries. For example, with the previous command, if I source it repeatedly, the value of PATH becomes

        PATH=/usr/local/bin:/usr/local/bin:/usr/local/bin:$PATH   (<- the original path)

    Is there a more decent way to attach a new path to PATH in .bashrc without making it ugly?
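
    A minimal sketch: guard the prepend so re-sourcing .bashrc is harmless. (The helper name is borrowed from the pathmunge function in Red Hat's /etc/profile.)

        pathmunge() {
            case ":$PATH:" in
                *":$1:"*) ;;              # already present, do nothing
                *) PATH="$1:$PATH" ;;     # otherwise prepend
            esac
        }
        pathmunge /usr/local/bin
        export PATH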

  • run multiple programs in linux

    - by Betamoo
    I am trying to write a .sh file that runs many programs simultaneously. I tried this:

        prog1
        prog2

    But that runs prog1, then waits until prog1 ends, and then starts prog2... So how can I run them in parallel? Thanks
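
    A minimal sketch: & sends each program to the background, and wait blocks until all of them have finished.

        prog1 &
        prog2 &
        wait    # omit if the script need not wait for them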

  • Add zip files from one archive to another using command line

    - by Curious2learn
    I have two zip archives. Say set1 has 10 CSV files created using the Mac OS X 10.5.8 Compress option, and set2 has 4 CSV files similarly created. I want to take the 4 files from zipped archive set2 and add them to the list of files in archive set1. Is there a way I can do that? I tried the following in Terminal:

        zip set1.zip set2.zip

    This adds the whole archive set2.zip to set1.zip, i.e., in set1.zip I now have:

        file1.csv, file2.csv, ..., file10.csv, set2.zip

    What I instead want is:

        file1.csv, file2.csv, ..., file10.csv, file11.csv, ..., file14.csv

    where set2.zip is the archive containing file11.csv, ..., file14.csv. Thanks.
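
    A minimal sketch: unpack set2 into a scratch directory, then add its files (rather than the archive itself) to set1; zip's -j option stores them without the directory prefix.

        tmpdir=$(mktemp -d)
        unzip set2.zip -d "$tmpdir"
        zip -j set1.zip "$tmpdir"/*.csv
        rm -rf "$tmpdir"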

  • search for a string, and add if it matches

    - by Sharat Chandra
    I have a file that has 2 columns, as given below:

        101 6
        102 23
        103 45
        109 36
        101 42
        108 21
        102 24
        109 67

    and so on. I want to write a script that adds the values from the 2nd column wherever their corresponding first column matches. For example:

        add all 2nd column values whose 1st column is 101
        add all 2nd column values whose 1st column is 102
        add all 2nd column values whose 1st column is 103

    and so on. I wrote my script like this, but I'm not getting the correct result:

        awk '{print $1}' data.txt > col1.txt
        while read line
        do
            awk ' if [$1 == $line]
                sum+=$2;
            END {print "Sum for time stamp", $line"=", sum};
            sum=0' data.txt
        done < col1.txt
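
    A minimal sketch: one awk pass with an array keyed on the first column does the whole job. (Note that awk never sees shell variables like $line unless they are passed in with -v, which is one reason the original fails.)

        awk '{ sum[$1] += $2 } END { for (k in sum) print "Sum for time stamp", k, "=", sum[k] }' data.txt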

  • How to pass a variable to an awk print parameter...

    - by Jamie
    I'm trying to extract the nth+1 and nth+3 columns from a file. This is what I tried, which is useful pseudocode:

        for i in {1..100}; do
            awk -F "," " { printf \"%3d, %12.3f, %12.3f\\n\", \$1, \$($i+1), \$($i+3) } " All_Runs.csv > Run-$i.csv
        done

    which obviously doesn't work (but it seemed reasonable to hope). How can I do this?
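
    A minimal sketch: pass the shell variable in with awk's -v switch, so the program can stay in single quotes with no escaping:

        for i in {1..100}; do
            awk -F, -v n="$i" '{ printf "%3d, %12.3f, %12.3f\n", $1, $(n+1), $(n+3) }' All_Runs.csv > "Run-$i.csv"
        done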
