Search Results

Search found 22 results on 1 page for 'subshell'.

Page 1/1

  • How to pass bash script arguments to a subshell

    - by Ralf Holly
    I have a wrapper script that does some work and then passes the original parameters on to another tool:

        #!/bin/bash
        # ...
        other_tool -a -b "$@"

    This works fine, unless the "other tool" is run in a subshell:

        #!/bin/bash
        # ...
        bash -c "other_tool -a -b $@"

    If I call my wrapper script like this:

        wrapper.sh -x "blah blup"

    then only the first original argument (-x) is handed to "other_tool". In reality, I do not create a subshell, but pass the original arguments to a shell on an Android phone, which shouldn't make any difference:

        #!/bin/bash
        # ...
        adb sh -c "other_tool -a -b $@"
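
    A minimal sketch (not from the original post) for the bash -c case: pass the wrapper's arguments as positional parameters of the inner shell instead of splicing them into the command string, so each argument survives as one word (the "_" only fills the inner shell's $0). Whether adb's remote sh accepts the same trick is an assumption worth testing:

        bash -c 'other_tool -a -b "$@"' _ "$@"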

    Read the article

  • Why does subshell not inherit exported variable (PS1)?

    - by amn
    After some debugging I finally narrowed the problem down to why my X session xterm prompt does not appear according to my PS1 setting. If I run sh -c env, it doesn't even show PS1 in the list. Why?

        export PS1='test'
        sh -c env   # No PS1 in the list, default prompt appearance (shell name + version)

    Substituting sh with bash yields the same result, so the behavior appears to be the same for both shells/modes. As far as I understood from man bash, the environment resulting from a command run by a shell with -c should include the exported variables. And it does - exporting FOOBAR results in FOOBAR being listed in the env run by the subshell. It appears the story is different if the variable is PS1, however. What is going on? I want my prompt propagated throughout the process tree and system. For the record, it is set in /etc/profile.d/user.sh (a file I created myself) with the following:

        PS1='\u@\H \w \$ '
        export PS1

    I am running Arch Linux (updated yesterday).
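
    A sketch of the usual approach (not from the original post): non-interactive shells never display a prompt and, as the experiment above suggests, commonly drop or ignore PS1, so exporting it is not a reliable way to propagate it. Setting it where interactive shells read their config gets it into every xterm instead (the exact file is distro-dependent; /etc/bash.bashrc or ~/.bashrc is an assumption):

        # in /etc/bash.bashrc or ~/.bashrc
        PS1='\u@\H \w \$ '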

    Read the article

  • How to Detect that Current (Bash) Shell is a (Vi/Vim) Subshell?

    - by Jeet
    From inside Vi/Vim, I can type :shell to drop into a shell. Is there any way to detect that I am in a Vi-spawned subshell? The environment variable SHLVL is 2, but that does not tell me explicitly that I am in a Vi/Vim-spawned subshell. On OS X, the following variables are also set: MYVIMRC, VIMRUNTIME, VIM. How universal are these? Can I count on them being set on any system, if and only if I am in a Vi/Vim subshell? If not, is there a portable, robust and hopefully efficient way to tell that I am in a Vi/Vim subshell? Thanks.
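
    A minimal sketch (not from the original post), leaning on the variables the poster already sees Vim export; whether every Vi/Vim build sets them is not guaranteed, so treat it as a heuristic:

        if [ -n "$VIMRUNTIME" ] || [ -n "$MYVIMRC" ]; then
            echo "looks like a Vim-spawned subshell"
        fi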

    Read the article

  • bash/ksh/scripting eval subshell quotes

    - by jhon
    Hi there, I'm using ksh and am having some trouble, could you help me? Why doesn't this code run?

        [root]$ CMD="ls -ltr"
        [root]$ eval "W=$( $CMD )"
        [root]$ ksh: ls -ltr: not found.
        [root]$ echo $W

    And this works fine:

        [root]$ CMD="ls -ltr"
        [root]$ eval 'W=$('$CMD')'
        [root]$ echo $W

    Thanks :-)
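
    A sketch (not from the original post) that sidesteps eval entirely: keep the command in an array so it expands back into separate words, and let an ordinary command substitution do the assignment:

        set -A cmd ls -ltr      # ksh array syntax; in bash: cmd=(ls -ltr)
        W=$("${cmd[@]}")
        echo "$W"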

    Read the article

  • shell scripting: nested subshell ++

    - by jhon
    Hi guys, more than a problem, this is a request for "another way to do this". If I want to use the result of a previous command in another one, I use:

        R1=$("cat somefile | awk '{ print $1 }'")
        myScript -c $R1 -h123

    Then a "better way" is:

        myScript -c $("cat somefile | awk '{ print $1 }'") -h123

    But what if I have to use the result several times, let's say using $R1 several times? Well, the 2 options:

    option 1

        R1=$("cat somefile | awk '{ print $1}'")
        myScript -c $R1 -h123 -x$R1

    option 2

        myScript -c $("cat somefile | awk '{ print $1 }'") -h123 -x $("cat somefile | awk '{ print $1 }'")

    Do you know another way to "store" the result of a previous command/script and use it as an argument in another command/script? Thanks
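
    A minimal sketch (not from the original post): capture the output once, without wrapping the whole pipeline in quotes (quoting it makes the shell look for a single command literally named "cat somefile | awk ..."), and reuse the variable as often as needed. somefile and myScript are the poster's own names:

        R1=$(awk '{ print $1 }' somefile)
        myScript -c "$R1" -h123 -x "$R1"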

    Read the article

  • Data in linux FIFO seems lost

    - by Utoah
    Hi, I have a bash script which wants to do some work in parallel; I did this by putting each job in a subshell which is run in the background. The number of jobs running simultaneously should stay under some limit, which I achieve by first putting some lines in a FIFO; then, just before forking a subshell, the parent script is required to read a line from this FIFO. Only after it gets a line can it fork the subshell. Up to now, everything works fine. But when I tried to read a line from the FIFO in the subshell, it seems that only one subshell can get a line, even if there are apparently more lines in the FIFO. So I wonder why the other subshell(s) cannot read a line even when there are more lines in the FIFO. My testing code looks something like this:

        #!/bin/sh
        fifo_path="/tmp/fy_u_test2.fifo"
        mkfifo $fifo_path
        # open fifo for r/w at fd 6
        exec 6<>$fifo_path
        process_num=5
        # put $process_num lines in the FIFO
        for ((i=0; i<${process_num}; i++)); do
            echo "$i"
        done >&6

        delay_some(){
            local index="$1"
            echo "This is what u can see. $index \n"
            sleep 20;
        }

        # In each iteration, try to read 2 lines from FIFO, one from this shell,
        # the other from the subshell
        for i in 1 2
        do
            date >> /tmp/fy_date
            # If a line can be read from FIFO, run a subshell in bk, otherwise, block.
            read -u6
            echo " $$ Read --- $REPLY --- from 6 \n" >> /tmp/fy_date
            {
                delay_some $i
                # Try to read a line from FIFO
                read -u6
                echo " $$ This is in child # $i, read --- $REPLY --- from 6 \n" >> /tmp/fy_date
            } &
        done

    And the output file /tmp/fy_date has content of:

        Mon Apr 26 16:02:18 CST 2010
         32561 Read --- 0 --- from 6 \n
        Mon Apr 26 16:02:18 CST 2010
         32561 Read --- 1 --- from 6 \n
         32561 This is in child # 1, read --- 2 --- from 6 \n

    Read the article

  • How to prevent command/script from changing global environment

    - by guillermooo
    I need to run scriptblocks/scripts from the current top-level shell and I want them to leave the global environment unmodified. So far, I've only been able to think of the following possibilities:

        powershell -file <script>
        powershell -noprofile -command <scriptblock>

    The problem is that they are very slow. For instance, I would like to be able to do:

        mkdir newdir
        cd newdir
        $env:NEW_VAR = 100
        ni -item f 'newfile.txt'

    ...so that my shell's working dir wouldn't change and $env:NEW_VAR wouldn't be set in the global environment. Are there any more alternatives to accomplish this?

    Read the article

  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc)
                export BIN=${ABC_BIN}
                ;;
            def)
                export BIN=${DEF_BIN}
                ;;
            *)
                export BIN=${BASE_BIN}
                ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Now these VARs are export'ed only in a subshell, but I want them to be set in my parent shell as well, so that when I am at the prompt those vars are still set correctly. I know about ". .myscript.sh", but is there a way to do it without 'sourcing'? My users often forget to 'source'.

    EDIT1: removed the "exit 0" part - this was just me typing without thinking first.

    EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) 2 apps: ABC & DEF. Each app is run in production by a separate user, usrabc and usrdef, which has set up its own $BIN, $CFG, $ORA_HOME, whatever - specific to its app. So:

        ABC's $BIN = /opt/abc/bin   # $ABC_BIN in the above script
        DEF's $BIN = /opt/def/bin   # $DEF_BIN
        etc.

    Now, on the dev box developers can develop both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file (above) so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc., which makes me do it interactively, asking for a sandbox name, etc.:

        /home/justin_case/sandbox_abc_beta2
        /home/justin_case/sandbox_abc_r1
        /home/justin_case/sandbox_def_r1

    The other option I have considered is writing aliases and adding them to every user's profile:

        alias 'setup_env=. .myscript.sh'

    and running it with "setup_env parameter1 ... parameterX". This makes more sense to me now.
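
    A minimal sketch (not from the original post) of the alias/function approach the poster mentions, so users never have to remember to source the file themselves; the path is illustrative:

        # in /etc/profile or each user's ~/.profile
        setup_env() { . /path/to/.myscript.sh "$@"; }
        # users then just run:  setup_env abc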

    Read the article

  • bash and flock (file lock) - Doesn't seem to be locking....

    - by Rory
    I am playing with flock, a command for file locking, to prevent 2 different instances of the code from running at the same time. I am using this testing code:

        ( ( flock -x 200 ; sleep 10 ; echo "original finished" ; ) 200>./test.lock ) &
        ( sleep 2 ; ( flock -x -w 2 200 ; echo "a finished" ) 200>./test.lock ) &

    I am running 2 subshells (backgrounded). The (flock NUM; ...) NUM>FILE syntax is from flock's man page. I expect that the first subshell will get an exclusive lock on test.lock, then wait 10 seconds, then print "original finished", all the time holding the lock. The second subshell will start at more or less the same time, wait 2 seconds, then try to get a lock on test.lock, but time out after 2 seconds. If it gets a lock, it'll print "a finished". If it doesn't get the lock, that subshell should stop, and nothing should be printed. Since the first subshell is waiting longer, it will keep the lock for 10 seconds, so the second subshell should not get the lock, and shouldn't finish; i.e. one should see "original finished" printed and not both. What actually happens is that "a finished" is printed, then "original finished" is printed. This implies that the second subshell is either (a) not using the same lock as the first subshell, (b) failing to get the lock but continuing to execute, or (c) something else. Why don't those locks work?
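
    A likely culprit and a minimal sketch (not from the original post): flock -w 2 does fail here, but failure only sets a non-zero exit status; the subshell then carries on and prints "a finished" anyway. Aborting when the lock is not acquired makes the test behave as expected:

        ( sleep 2 ; ( flock -x -w 2 200 || exit 1 ; echo "a finished" ) 200>./test.lock ) &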

    Read the article

  • Is it possible to get the exit code from a subshell?

    - by Geo
    Let's imagine I have a bash script where I call this:

        bash -c "some_command"
        # do something with the exit code of some_command here

    Is it possible to obtain the exit code of some_command? I'm not executing some_command directly in the shell running the script because I don't want to alter its environment.
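
    A minimal sketch (not from the original post): the exit status of bash -c is the exit status of the last command it ran, so $? right after the call is what you want:

        bash -c "some_command"
        status=$?
        if [ "$status" -ne 0 ]; then
            echo "some_command failed with exit code $status"
        fi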

    Read the article

  • why is $0 set to -bash?

    - by James Shimer
    On first login, the process name seems to be set to "-bash", but if I start a subshell it becomes "bash". For example:

        root@nowere:~# echo $0
        -bash
        root@nowere:~# bash
        root@nowere:~# echo $0
        bash

    -bash is causing some scripts to fail, such as:

        . /usr/share/debconf/confmodule
        exec /usr/share/debconf/frontend -bash
        Can't exec "-bash": No such file or directory at /usr/share/perl/5.14/IPC/Open3.pm line 186.
        open2: exec of -bash failed at /usr/share/perl5/Debconf/ConfModule.pm line 59

    Anyone know the reason $0 is set to -bash?
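
    A short sketch (not from the original post): a login shell is conventionally started with a "-" prepended to argv[0], which is exactly what shows up in $0; a plain "bash" subshell has no such prefix. Stripping the dash before handing $0 to anything that will exec it avoids the error:

        prog=${0#-}                 # "-bash" -> "bash"
        # bash can also report this directly:
        shopt -q login_shell && echo "this is a login shell"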

    Read the article

  • /proc/pid/environ missing variables

    - by Josh Arenberg
    google is giving no love on this one today, so I turn to the experts... I'm currently hacking together a script that relies on the /proc/pid/environ feature in Linux (RHEL 4) to check for a particular environment variable. Trouble is, it seems certain environment variables aren't showing up in there for some reason. Example: create some test vars: $ export T_1=testval TEST_1=testval T=testval TESTING_LONGEST=testval open a subshell: $bash $ cat /proc/self/environ|tr "\0" "\n"|grep testval TESTVARIABLE_LONGEST=testval T=testval hmm... where did T_1 and TEST_1 go?? what rules govern this strange universe? Thanks in advance, Josh

    Read the article

  • Add directory to $PATH if it's not already there

    - by Doug Harris
    Has anybody written a bash function to add a directory to $PATH only if it's not already there? I typically add to PATH using something like: export PATH=/usr/local/mysql/bin:$PATH If I construct my PATH in .bash_profile, then it's not read unless the session I'm in is a login session -- which isn't always true. If I construct my PATH in .bashrc, then it runs with each subshell. So if I launch a Terminal window and then run screen and then run a shell script, I get: $ echo $PATH /usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:.... I'm going to try building a bash function called add_to_path() which only adds the directory if it's not there. But, if anybody has already written (or found) such a thing, I won't spend the time on it.
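
    A minimal sketch (not from the original post) of such a function; it checks the colon-delimited PATH for an exact match before prepending:

        add_to_path() {
            case ":$PATH:" in
                *":$1:"*) ;;                  # already present, do nothing
                *) PATH="$1:$PATH" ;;
            esac
        }
        # usage:
        add_to_path /usr/local/mysql/bin
        export PATH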

    Read the article

  • Change present working directory of a calling shell from a ruby script

    - by Erik Kastman
    I'm writing a simple ruby sandbox command-line utility that copies directories from a remote filesystem to a local scratch directory and unzips them so users can edit the files. I'm using Dir.mktmpdir as the default scratch directory, which gives a really ugly path (for example: /var/folders/zz/zzzivhrRnAmviuee+++1vE+++yo/-Tmp-/d20100311-70034-abz5zj). I'd like the last action of the copy-and-unzip script to cd the calling shell into the new scratch directory so people can access it easily, but I can't figure out how to change the PWD of the calling shell. One possibility is to have the utility print the new path to stdout and run the script inside a command substitution (i.e. cd $(sandbox my_dir)), but I want to print progress on the copy-and-unzipping since it can take up to 10 minutes, so this won't work. Should I just have it go to a pre-determined, easy-to-find scratch directory? Does anyone have a better suggestion? Thanks in advance for your help. -Erik
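
    A sketch of the usual workaround (not from the original post): a child process can never change its parent shell's working directory, so the shell itself has to do the cd. Assuming the utility writes its progress to stderr and only the final path to stdout, a small wrapper function in the user's shell config can do the rest (the function and command names are illustrative):

        sandbox() {
            local dir
            dir=$(command sandbox "$@") && cd "$dir"   # progress still shows, via stderr
        }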

    Read the article

  • How prevalent is the use of Emacs' eshell in multi-platform development?

    - by pajato0
    I've only recently become aware of Emacs' eshell tool. It looks quite powerful in that it is entirely written in Emacs Lisp and does not require native subshell support. The Emacs info documentation is a bit sparse but EmacsWiki has pretty decent information, at least on a first glance. Given the potential value of eshell as a scripting tool/programmer's aid that works equally well on multiple platforms I'm wondering how prevalent the use of eshell versus the normal (bash) shell is among software developers. Would those of you who have taken the time to learn it recommend it or is it one of those many interesting ideas that did not really pan out?

    Read the article

  • How do I redirect stdin/stdout when I have a sequence of commands in Bash?

    - by Tom
    I've currently got a Bash command being executed (via Python's subprocess.Popen) which reads from stdin, does something, and outputs to stdout. Something along the lines of:

        pid = subprocess.Popen(["-c", "cmd1 | cmd2"],
                               stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE,
                               shell=True)
        output_data = pid.communicate("input data\n")

    Now, what I want to do is change that to execute another command in the same subshell that will alter the state before the next commands execute, so my shell command line will now (conceptually) be:

        cmd0; cmd1 | cmd2

    Is there any way to have the input sent to cmd1 instead of cmd0 in this scenario? I'm assuming the output will include cmd0's output (which will be empty) followed by cmd2's output. cmd0 shouldn't actually read anything from stdin; does that make a difference in this situation? I know this is probably just a dumb way of doing this - I'm trying to patch in cmd0 without altering the other code too significantly. That said, I'm open to suggestions if there's a much cleaner way to approach this.
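
    A minimal sketch (not from the original post) of the shell side: explicitly starving cmd0 of stdin guarantees it cannot consume the data meant for cmd1, even if it does try to read:

        cmd0 < /dev/null; cmd1 | cmd2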

    Read the article

  • Killing a script launched in a Process via os.system()

    - by L.J.
    I have a python script which launches several processes. Each process basically just calls a shell script:

        from multiprocessing import Process
        import os
        import logging

        def thread_method(n = 4):
            global logger
            command = "~/Scripts/run.sh " + str(n) + " >> /var/log/mylog.log"
            if (debug):
                logger.debug(command)
            os.system(command)

    I launch several of these threads, which are meant to run in the background. I want to have a timeout on these threads, such that if it is exceeded, they are killed:

        t = []
        for x in range(10):
            try:
                t.append(Process(target=thread_method, args=(x,)))
                t[-1].start()
            except Exception as e:
                logger.error("Error: unable to start thread")
                logger.error("Error message: " + str(e))
        logger.info("Waiting up to 60 seconds to allow threads to finish")
        t[0].join(60)
        for n in range(len(t)):
            if t[n].is_alive():
                logger.info(str(n) + " is still alive after 60 seconds, forcibly terminating")
                t[n].terminate()

    The problem is that calling terminate() on the process threads isn't killing the launched run.sh script - it continues running in the background until I either force-kill it from the command line or it finishes on its own. Is there a way to have terminate also kill the subshell created by os.system()?

    Read the article

  • While loop read multiple lines from a grep

    - by Basil
    I'm writing a script in AIX 5.3 that will loop through the output of a df and check each volume against another config file. If the volume appears in the config file, it will set a flag which is needed later in the script. If my config file only has a single column and I use a for loop, this works perfectly. My problem, however, is that if I use a while read loop to populate more than one variable per line, any variables I set between the while and the done are discarded. For example, assuming the contents of /netapp/conf/ExcludeFile.conf are a bunch of lines containing two fields each:

        volName="myVolume"
        utilization=70
        thresholdFlag=0
        grep volName /netapp/conf/ExcludeFile.conf | while read vol threshold; do
            if [ $utilization -ge $threshold ] ; then
                thresholdFlag=1
            fi
        done
        echo "$thresholdFlag"

    In this example, thresholdFlag will always be 0, even if the volume appears in the file and its utilization is greater than the threshold. I could have added an echo "setting thresholdFlag to 1" in there, see the echo, and it'll still echo a 0 at the end. Is there a clean way to do this? I think my while loop is being done in a subshell, and changes I make to variables in there are actually being made to local variables that are discarded after the done.
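
    A minimal sketch (not from the original post) of the usual fix: feed the loop through a redirection instead of a pipe so the while runs in the current shell and thresholdFlag survives. Process substitution needs bash or ksh93 (on a plain POSIX sh, redirect from a temporary file instead); quoting $volName in the grep is also an assumption about the intent:

        thresholdFlag=0
        while read vol threshold; do
            if [ "$utilization" -ge "$threshold" ]; then
                thresholdFlag=1
            fi
        done < <(grep "$volName" /netapp/conf/ExcludeFile.conf)
        echo "$thresholdFlag"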

    Read the article

  • How to use a common library of environment variables among different languages?

    - by JDS
    We have three main languages with which we perform system tasks: Bash, Ruby, and PHP, and Perl. Four, four main languages. We use managed environment variables to provide authorization info that automated scripts need. For example, a mysql user account and password. We'd like to use one single managed file to maintain these variables. In some instances, for example, in cron, these environment variables are not available. They are made available in CLI scripts because we source the env file in everyone's profile. But something like cron doesn't do that. On the CLI, when the env file is sourced, any given script can access those variables. Bash has them directly, PHP in $_ENV, ruby in ENV, etc. We can't source the file into non-Bash scripts, because most languages implement shell commands by running them in a subshell. We considered parsing the Bash, converting to the script's lang, and running the equivalent of "exec(parsed_output)" on the resulting strings. What is a good solution to providing managed environment vars to scripts running in cron, or similar?
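
    A minimal sketch (not from the original post), assuming the managed file can be kept to plain KEY=value lines with no shell logic: such a file can be sourced by Bash and trivially parsed by PHP, Ruby, or Perl, and cron jobs can source it per entry, since cron itself provides almost no environment. The path is illustrative:

        # interactive/CLI use: export everything the file assigns
        set -a
        . /etc/profile.d/managed_env.sh
        set +a

        # crontab entry: source it explicitly before the job
        # 0 2 * * *  . /etc/profile.d/managed_env.sh; /usr/local/bin/nightly_job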

    Read the article

  • removing a case clause: bash expansion in sed regexp: X='a\.b' ; Y=';;' sed -n '/${X}/,/${Y}/d'

    - by ChrisSM
    I'm trying to remove a case clause from a bash script. The clause will vary, but will always have backslashes as part of the case-match string. I was trying sed, but could use awk or a perl one-liner within the bash script. The target of the edit is straightforward and resembles:

        $ cat t.sh
        case N in
        a\.b);
            #[..etc., varies]
        ;;
        esac

    I am running afoul of the variable expansion escaping backslashes, semicolons, or both. If I 'eval', I strip my backslash escapes. If I don't, the semicolons catch me up. So I tried subshell expansion within the sed; this fouls the interpreter as I've written it. Escaping the semicolons further doesn't seem to help:

        X='a\.b' ; Y=';;'
        sed -i '/$(echo ${X} | sed -n 's/\\/\\\\/g')/,/$(echo ${Y} | sed -n s/\;/\\;/g')/d t.sh

    And this:

        perl -i.bak -ne 'print unless /${X}/ .. /{$Y}/' t.sh        # which empties t.sh

    and eval:

        perl -i.bak -ne \'print unless /${X}/ .. /{$Y}/' t.sh       # which does nothing
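
    A sketch (not from the original post), assuming GNU sed and that the match strings appear nowhere else in the script: set the variables with single quotes so the shell leaves them alone, and double-quote the sed program so both expand without eval. The only extra work is escaping the literal backslash for the regex (\\ in the pattern matches one backslash in the file, \. matches the literal dot):

        X='a\\\.b'      # sed regex matching the literal text a\.b
        Y=';;'          # semicolons are not special in a sed regex
        sed -i "/${X})/,/${Y}/d" t.sh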

    Read the article

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it, and in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution here. I'm getting a configure: error: newly created file is older than distributed files! error when trying to compile Ruby Enterprise like below when I try to run the installer, and the solutions offered to on the forums (of checking the tine, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out what the cause of this problem? [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer Welcome to the Ruby Enterprise Edition installer This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10. Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up. You can expect this from the installation process: 1. Ruby Enterprise Edition will be compiled and optimized for speed for this system. 2. Ruby on Rails will be installed for Ruby Enterprise Edition. 3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby. Press Enter to continue, or Ctrl-C to abort. Checking for required software... * C compiler... found at /usr/bin/gcc * C++ compiler... found at /usr/bin/g++ * The 'make' tool... found at /usr/bin/make * Zlib development headers... found * OpenSSL development headers... found * GNU Readline development headers... found -------------------------------------------- Target directory Where would you like to install Ruby Enterprise Edition to? (All Ruby Enterprise Edition files will be put inside that directory.) [/opt/ruby-enterprise] : -------------------------------------------- Compiling and optimizing the memory allocator for Ruby Enterprise Edition In the mean time, feel free to grab a cup of coffee. ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... configure: error: newly created file is older than distributed files! Check your system clock This is a virtual machine running on virtualbox, and the time of the host and the virtual machine are identical, and up to date. I've also tried running this after updating time with an ntp-client, so no avail. I tried this after reading this post here of someone having a similar problem [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date Tue Apr 27 08:09:05 BST 2010 The other approach I've tried is to touch the top level the files in the build folder like suggested here, but this hasn't worked either (an to be honest, I'm not sure why it would have worked either) [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/* I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error error: newly created file is older than distributed files!, at line :2214 { echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... 
$ECHO_C" >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi ### PROBLEM LINE #### # this line is the problem line - this is returned true, sometimes it isn't and I can't # see a pattern that that determines when this will test will pass or not. test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi the thing that makes this really frustrating is that this script works sometimes, when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks
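&6; } # Just in">

    A sketch of the usual workarounds (not from the original post): the check compares a file configure just created against the timestamps already in the source tree, so it fails whenever the tree's files look newer than "now" to the guest, e.g. when a VirtualBox guest clock is behind at boot and only catches up later. Syncing the clock before building, and/or touching the whole tree (not just its top level), usually clears it:

        sudo ntpdate pool.ntp.org                             # or restart the guest's ntpd first
        find ruby-enterprise-1.8.7-2009.10 -exec touch {} +   # recursive, unlike touch dir/*
        cd ruby-enterprise-1.8.7-2009.10 && sudo ./installer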

    Read the article

  • Cross-platform, human-readable, du on root partition that truly ignores other filesystems

    - by nice_line
    I hate this so much: Linux builtsowell 2.6.18-274.7.1.el5 #1 SMP Mon Oct 17 11:57:14 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux df -kh Filesystem Size Used Avail Use% Mounted on /dev/mapper/mpath0p2 8.8G 8.7G 90M 99% / /dev/mapper/mpath0p6 2.0G 37M 1.9G 2% /tmp /dev/mapper/mpath0p3 5.9G 670M 4.9G 12% /var /dev/mapper/mpath0p1 494M 86M 384M 19% /boot /dev/mapper/mpath0p7 7.3G 187M 6.7G 3% /home tmpfs 48G 6.2G 42G 14% /dev/shm /dev/mapper/o10g.bin 25G 7.4G 17G 32% /app/SIP/logs /dev/mapper/o11g.bin 25G 11G 14G 43% /o11g tmpfs 4.0K 0 4.0K 0% /dev/vx lunmonster1q:/vol/oradb_backup/epmxs1q1 686G 507G 180G 74% /rpmqa/backup lunmonster1q:/vol/oradb_redo/bisxs1q1 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl1 lunmonster1q:/vol/oradb_backup/bisxs1q1 686G 507G 180G 74% /bisxs1q/backup lunmonster1q:/vol/oradb_exp/bisxs1q1 2.0T 1.1T 984G 52% /bisxs1q/exp lunmonster2q:/vol/oradb_home/bisxs1q1 10G 174M 9.9G 2% /bisxs1q/home lunmonster2q:/vol/oradb_data/bisxs1q1 52G 5.2G 47G 10% /bisxs1q/oradata lunmonster1q:/vol/oradb_redo/bisxs1q2 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl2 ip-address1:/vol/oradb_home/cspxs1q1 10G 184M 9.9G 2% /cspxs1q/home ip-address2:/vol/oradb_backup/cspxs1q1 674G 314G 360G 47% /cspxs1q/backup ip-address2:/vol/oradb_redo/cspxs1q1 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl1 ip-address2:/vol/oradb_exp/cspxs1q1 4.1T 1.5T 2.6T 37% /cspxs1q/exp ip-address2:/vol/oradb_redo/cspxs1q2 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl2 ip-address1:/vol/oradb_data/cspxs1q1 160G 23G 138G 15% /cspxs1q/oradata lunmonster1q:/vol/oradb_exp/epmxs1q1 2.0T 1.1T 984G 52% /epmxs1q/exp lunmonster2q:/vol/oradb_home/epmxs1q1 10G 80M 10G 1% /epmxs1q/home lunmonster2q:/vol/oradb_data/epmxs1q1 330G 249G 82G 76% /epmxs1q/oradata lunmonster1q:/vol/oradb_redo/epmxs1q2 5.0G 609M 4.5G 12% /epmxs1q/rdoctl2 lunmonster1q:/vol/oradb_redo/epmxs1q1 5.0G 609M 4.5G 12% /epmxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol1 183G 17G 157G 10% /slaxs1q/backup /dev/vx/dsk/slaxs1q/slaxs1q-vol4 173G 58G 106G 36% /slaxs1q/oradata /dev/vx/dsk/slaxs1q/slaxs1q-vol5 75G 952M 71G 2% /slaxs1q/exp /dev/vx/dsk/slaxs1q/slaxs1q-vol2 9.8G 381M 8.9G 5% /slaxs1q/home /dev/vx/dsk/slaxs1q/slaxs1q-vol6 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol3 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl2 /dev/mapper/appoem 30G 1.3G 27G 5% /app/em Yet, I equally, if not quite a bit more, also hate this: SunOS solarious 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise Filesystem size used avail capacity Mounted on kiddie001Q_rpool/ROOT/s10s_u8wos_08a 8G 7.7G 1.3G 96% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 15G 1.8M 15G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object sharefs 0K 0K 0K 0% /etc/dfs/sharetab fd 0K 0K 0K 0% /dev/fd kiddie001Q_rpool/ROOT/s10s_u8wos_08a/var 31G 8.3G 6.6G 56% /var swap 512M 4.6M 507M 1% /tmp swap 15G 88K 15G 1% /var/run swap 15G 0K 15G 0% /dev/vx/dmp swap 15G 0K 15G 0% /dev/vx/rdmp /dev/dsk/c3t4d4s0 3 20G 279G 41G 88% /fs_storage /dev/vx/dsk/oracle/ora10g-vol1 292G 214G 73G 75% /o10g /dev/vx/dsk/oec/oec-vol1 64G 33G 31G 52% /oec/runway /dev/vx/dsk/oracle/ora9i-vol1 64G 33G 31G 59% /o9i /dev/vx/dsk/home 23G 18G 4.7G 80% /export/home /dev/vx/dsk/dbwork/dbwork-vol1 292G 214G 73G 92% /db03/wk01 /dev/vx/dsk/oradg/ebusredovol 2.0G 475M 1.5G 24% /u21 /dev/vx/dsk/oradg/ebusbckupvol 200G 32G 166G 17% /u31 /dev/vx/dsk/oradg/ebuscrtlvol 2.0G 475M 1.5G 24% /u20 kiddie001Q_rpool 31G 97K 6.6G 1% /kiddie001Q_rpool monsterfiler002q:/vol/ebiz_patches_nfs/NSA0304 203G 173G 29G 86% 
/oracle/patches /dev/odm 0K 0K 0K 0% /dev/odm The people with the authority don't rotate logs or delete packages after install in my environment. Standards, remediation, cohesion...all fancy foreign words to me. ============== How am I supposed to deal with / filesystem full issues across multiple platforms that have a devastating number of mounts? On Red Hat el5, du -x apparently avoids traversal into other filesystems. While this may be so, it does not appear to do anything if run from the / directory. On Solaris 10, the equivalent flag is du -d, which apparently packs no surprises, allowing Sun to uphold its legacy of inconvenience effortlessly. (I'm hoping I've just been doing it wrong.) I offer up for sacrifice my Frankenstein's monster. Tell me how ugly it is. Tell me I should download forbidden 3rd party software. Tell me I should perform unauthorized coreutils updates, piecemeal, across 2000 systems, with no single sign-on, no authorized keys, and no network update capability. Then, please help me make this bastard better: pwd / du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \ cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \ sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \ cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shx | \ egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M" My biggest failure and regret is that it still requires a single character edit for Solaris: pwd / du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \ cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \ sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \ cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shd | \ egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M" This will exclude all non / filesystems in a du search from the / directory by basically munging an egrepped df from a second pipe-delimited egrep regex subshell exclusion that is naturally further excluded upon by a third egrep in what I would like to refer to as "the whale." The munge-fest frantically escalates into some xargs du recycling where -x/-d is actually useful, and a final, gratuitous egrep spits out a list of directories that almost feels like an accomplishment: Linux: 54M etc/gconf 61M opt/quest 77M opt 118M usr/ ##===\ 149M etc 154M root 303M lib/modules 313M usr/java ##====\ 331M lib 357M usr/lib64 ##=====\ 433M usr/lib ##========\ 1.1G usr/share ##=======\ 3.2G usr/local ##========\ 5.4G usr ##<=============Ascending order to parent 94M app/SIP ##<==\ 94M app ##<=======Were reported as 7gb and then corrected by second du with -x. Solaris: 63M etc 490M bb 570M root/cores.ric.20100415 1.7G oec/archive 1.1G root/packages 2.2G root 1.7G oec Guess what? It's really slow. Edit: Are there any bash one-liner heroes out there than can turn my bloated abomination into divine intervention, or at least something resembling gingerly copypasta?
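
    A short sketch (not from the original post), hedged because it hinges on running du against "/" as a single argument rather than against "*": with "*", every mount point becomes its own starting point and is sized on its own filesystem, which is why -x seemed to do nothing. Given "/" alone, -x (GNU) and -d (Solaris, as noted above) stay on the root filesystem:

        # Linux (GNU coreutils): root filesystem only, largest directories last
        du -xm / 2>/dev/null | sort -n | tail -30

        # Solaris 10: -d plays the same role as GNU's -x
        du -dk / 2>/dev/null | sort -n | tail -30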

    Read the article
