Search Results

Search found 721 results on 29 pages for 'stdout'.

Page 4/29 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands go to a logfile, in case I need it later--- I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe it to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
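    One way to get "everything to the log, errors also on the screen, exit codes intact" is to drive the commands from a small wrapper instead of pure shell redirection. Below is a minimal Python sketch of that idea; it is not taken from the linked article, and the command list and log filename are just placeholders for illustration.

        import subprocess
        import sys

        # Hypothetical commands and log path, standing in for the question's command1..command3.
        commands = ["./configure", "make --keep-going", "make install"]

        with open("build.log", "w") as log:
            for cmd in commands:
                proc = subprocess.Popen(cmd, shell=True, universal_newlines=True,
                                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                out, err = proc.communicate()
                log.write(out)            # STDOUT goes to the logfile only
                log.write(err)            # STDERR goes to the logfile...
                sys.stderr.write(err)     # ...and is also echoed to the screen
                if proc.returncode != 0:  # return codes stay usable for error handling
                    sys.exit(proc.returncode)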

    Read the article

  • read subprocess stdout line by line

    - by Caspin
    My python script uses subprocess to call a linux utility that is very noisy. I want to store all of the output to a log file, but only show some of it to the user. I thought the following would work, but the output does not show up in my application until the utility has produced a significant amount of output. #fake_utility.py, just generates lots of output over time import time i = 0 while True: print hex(i)*512 i += 1 time.sleep(0.5) #filters output import subprocess proc = subprocess.Popen(['python','fake_utility.py'], stdout=subprocess.PIPE) for line in proc.stdout: #the real code does filtering here print "test:", line.rstrip() The behavior I really want is for the filter script to print each line as it is received from the subprocess. Sorta like what tee does, but with python code. What am I missing? Is this even possible?
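    A common workaround (offered here as a sketch, not necessarily the fix from the linked article) is to bypass the file iterator's read-ahead buffer and pull lines with readline() instead; the fake_utility.py name is taken from the question.

        import subprocess

        proc = subprocess.Popen(['python', 'fake_utility.py'],
                                stdout=subprocess.PIPE,
                                universal_newlines=True,
                                bufsize=1)  # line-buffered pipe on the parent side
        # iter(readline, '') hands each line over as soon as readline() returns,
        # instead of waiting for the iterator's internal read-ahead buffer to fill.
        for line in iter(proc.stdout.readline, ''):
            print("test:", line.rstrip())   # the real code would do its filtering here

    Note that the utility itself may also block-buffer its own stdout when writing to a pipe rather than a terminal, which no amount of careful reading on the parent side can undo.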

    Read the article

  • Gzip and subprocess' stdout in python

    - by pythonic metaphor
    I'm using python 2.6.4 and discovered that I can't use gzip with subprocess the way I might hope. This illustrates the problem: May 17 18:05:36> python Python 2.6.4 (r264:75706, Mar 10 2010, 14:41:19) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gzip >>> import subprocess >>> fh = gzip.open("tmp","wb") >>> subprocess.Popen("echo HI", shell=True, stdout=fh).wait() 0 >>> fh.close() >>> [2]+ Stopped python May 17 18:17:49> file tmp tmp: data May 17 18:17:53> less tmp "tmp" may be a binary file. See it anyway? May 17 18:17:58> zcat tmp zcat: tmp: not in gzip format Here's what it looks like inside less HI ^_<8B>^H^Hh<C0><F1>K^B<FF>tmp^@^C^@^@^@^@^@^@^@^@^@ which looks like it put in the stdout as text and then put in an empty gzip file. Indeed, if I remove the "Hi\n", then I get this: May 17 18:22:34> file tmp tmp: gzip compressed data, was "tmp", last modified: Mon May 17 18:17:12 2010, max compression What is going on here?
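    What appears to be happening is that subprocess only passes the underlying file descriptor (fh.fileno()) to the child, so echo writes straight to the plain file and bypasses the GzipFile compression layer; the short gzip stream after the "HI" is just what gzip.open wrote out when the handle was closed. A minimal sketch of one workaround, assuming the same command and filename as above, is to pipe the child's output and push it through the gzip object yourself:

        import gzip
        import shutil
        import subprocess

        fh = gzip.open("tmp", "wb")
        proc = subprocess.Popen("echo HI", shell=True, stdout=subprocess.PIPE)
        # Copy the raw bytes from the pipe through the GzipFile object,
        # so they actually pass through the compression layer.
        shutil.copyfileobj(proc.stdout, fh)
        proc.wait()
        fh.close()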

    Read the article

  • About redirected stdout in System.Diagnostics.Process

    - by sforester
    I've been recently working on a program that converts FLAC files to MP3 in C# using flac.exe and lame.exe. Here is the code that does the job: ProcessStartInfo piFlac = new ProcessStartInfo( "flac.exe" ); piFlac.CreateNoWindow = true; piFlac.UseShellExecute = false; piFlac.RedirectStandardOutput = true; piFlac.Arguments = string.Format( flacParam, SourceFile ); ProcessStartInfo piLame = new ProcessStartInfo( "lame.exe" ); piLame.CreateNoWindow = true; piLame.UseShellExecute = false; piLame.RedirectStandardInput = true; piLame.RedirectStandardOutput = true; piLame.Arguments = string.Format( lameParam, QualitySetting, ExtractTag( SourceFile ) ); Process flacp = null, lamep = null; byte[] buffer = BufferPool.RequestBuffer(); flacp = Process.Start( piFlac ); lamep = new Process(); lamep.StartInfo = piLame; lamep.OutputDataReceived += new DataReceivedEventHandler( this.ReadStdout ); lamep.Start(); lamep.BeginOutputReadLine(); int count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length ); while ( count != 0 ) { lamep.StandardInput.BaseStream.Write( buffer, 0, count ); count = flacp.StandardOutput.BaseStream.Read( buffer, 0, buffer.Length ); } Here I set the command line parameters to tell lame.exe to write its output to stdout, and make use of the Process.OutputDataReceived event to gather the output data, which is mostly binary data, but the DataReceivedEventArgs.Data is of type "string" and I have to convert it to byte[] before putting it in the cache. I think this is ugly, and I tried this approach but the result is incorrect. Is there any way that I can read the raw redirected stdout stream, either synchronously or asynchronously, bypassing the OutputDataReceived event? PS: the reason why I don't use lame to write to disk directly is that I'm trying to convert several files in parallel, and direct writing to disk will cause severe fragmentation. Thanks a lot!

    Read the article

  • user crontabs don't work, only /etc/crontab

    - by gletscher
    Hi, when I use the command crontab -e to set up cron, the commands get triggered (according to syslog) but nothing happens. The same happens if I run sudo crontab -e. The only way to actually get cron working is to manually edit /etc/crontab. I'm confused, since syslog gives me the same output for both methods. Any ideas for tracking down this bug? Thanks a lot!

    Read the article

  • Suppress EXT3-fs warning on mount

    - by STM
    I am familiar with output suppression on Unix machines, i.e.: cat /file/that/doesnt/exist > /dev/null 2>&1 However, I can't seem to suppress the output of mount when an ext3 filesystem is mounted for the nth time, and it recommends an fsck. As it happens, fscks are run regularly by another machine, so these warning messages are needlessly interrupting the flow of output to my pretty bash script. These are the errors: # mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1 kjournald starting. Commit interval 5 seconds EXT3-fs warning: maximal mount count reached, running e2fsck is recommended EXT3 FS 2.4-0.9.19, 19 August 2002 on sd(8,1), internal journal EXT3-fs: mounted filesystem with ordered data mode. Can anyone shed some light on this? I'm clearly blocking both fd's, but somehow output is still getting through. This is GNU Bash v2.05a

    Read the article

  • Capture STDOUT/STDERR of Windows Application

    - by Mark Roddy
    I am using an open source application (conkeror) which is configured via a file containing javascript. Any issues with the javascript are printed to the console. This is fine under Linux, as all the information is shown as long as it is started from a shell. However, under Windows none of the information is shown, even when started from a shell. Is there a way to have this information displayed in the shell, or to capture it in, say, a file?

    Read the article

  • Use procmail to deliver to stdout and a second server

    - by Halfgaar
    I would like a Postfix server to deliver each message to a certain transport as well as relay to a second server. In master.cf, I have the following transport: zarafa unix - n n - 10 pipe flags= user=vmail argv=/usr/bin/zarafa-dagent ${user} Because I can't get Postfix to deliver to two transports, what I probably need is a wrapper transport, maybe using procmail, that delivers to zarafa-dagent and relays to a second server (not just forwarding to an address; relaying to a second server). It can also be a script that calls sendmail or whatever, but at the moment, I don't know how to proceed.

    Read the article

  • linux output show only right of ':'

    - by acidzombie24
    I forgot a lot of my command line. I am doing cat file | grep "error" and I would like it to show everything to the right of G:/, including G:/ if possible. I figure it's an awk command but I don't know what. I tried awk '{print $8+}' but + does not work like I hoped and guessed.

    Read the article

  • gzip: stdout: File too large when running customized backup script

    - by Roland
    I've created a plain and simple backup script that only backs up certain files and folders. tar -zcf $DIRECTORY/var.www.tar.gz /var/www tar -zcf $DIRECTORY/development.tar.gz /development tar -zcf $DIRECTORY/home.tar.gz /home Now this script runs for about 30 mins and then gives me the following error: gzip: stdout: File too large Are there any other solutions that I can use to back up my files using shell scripting, or a way to solve this error? I'm grateful for any help.

    Read the article

  • Directing stdout of system command in Perl to filehandle

    - by syker
    With the open command in Perl, you can use a filehandle. However I have trouble getting back the exit code with the open command in Perl. With the system command in Perl, I can get back the exit code of the program I'm running. However I want to just redirect the STDOUT to some filehandle (no stderr). Is that possible?

    Read the article

  • Perl Test::More, get stdout

    - by Mike
    Is there a way, inside a Perl test case using Test::More, to get the program's stdout? For instance, if I do use Test::More; ok(foo()); #in the code I am testing sub foo() { print "hello" } "hello" will not be visible. I would like to see it. Edit: I know I can use diag(); however, this would not work if the print is inside the code I am testing.

    Read the article

  • stdout and stderr anomalies

    - by momo
    From the interactive prompt: >>> import sys >>> sys.stdout.write('is the') is the6 What is the '6' doing there? Another example: >>> for i in range(3): ... sys.stderr.write('new black') ... 9 9 9 new blacknew blacknew black Where are the numbers coming from?
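    Those numbers are almost certainly the return values of write(): in Python 3, file.write() returns the number of characters written (6 for 'is the', 9 for 'new black'), and the interactive prompt echoes any expression value that isn't None; because 'new black' has no trailing newline, the stderr text can also show up out of order relative to the echoed counts. A small sketch of how to confirm this and silence the echo:

        import sys

        n = sys.stdout.write('is the\n')  # assign the result so the REPL has nothing to echo
        print(n)                          # 7: the six characters plus the newline

        for i in range(3):
            _ = sys.stderr.write('new black\n')  # the throwaway assignment swallows the 9s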

    Read the article

  • find: Prevent .git folders from printing to STDOUT

    - by Nathan Neff
    Hello, I have a find command that I run to find files named 'foo' in a directory. I want to skip the ".git" directory. The command below works, EXCEPT it prints an annoying ".git" any time it skips a .git directory: find . \( -name .git \) -prune -o -name 'foo' How can I prevent the skipped ".git" directories from printing to STDOUT? Thanks, --Nate

    Read the article

  • Python stdout, \r progress bar and sshd with Putty not updating regularly

    - by Kyle MacFarlane
    I have a dead simple progress "bar" using something like the following: import sys from time import sleep current = 0 limit = 50 while current <= limit: sys.stdout.write('\rSynced %s/%s orders' % (current, limit)) current += 1 sleep(1) Works fine, except over ssh with Putty. Putty only updates every 3 minutes or if a line ends with \n. Is this a Putty setting, sshd_config, or can I code around it?
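    Whatever the terminal is doing, the usual first thing to try is flushing stdout explicitly after each carriage-return update, since '\r' on its own does not force Python's output buffer to drain. A minimal sketch of the same loop with an explicit flush (the 'Synced ... orders' wording is taken from the question):

        import sys
        from time import sleep

        current = 0
        limit = 50
        while current <= limit:
            sys.stdout.write('\rSynced %s/%s orders' % (current, limit))
            sys.stdout.flush()  # push the partial line out now, even without a '\n'
            current += 1
            sleep(1)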

    Read the article

  • vim filters and stdout/stderr

    - by ahe
    When I use :%! to run the contents of a file through a filter and the filter fails (it returns a code other than 0) and prints an error message to stderr, I get my file replaced with this error message. Is there a way to tell vim to skip the filtering if the filter returns a status code that indicates an error, and/or to ignore output the filter program writes to stderr? There are cases where you want your file to be replaced with the output of the filter, but most often this behavior is wrong. Of course I can just undo the filtering with one keypress, but it isn't optimal. I also have a similar problem when writing a custom vim script to do the filtering. I have a script that calls a filter program with system() and replaces the file in the buffer with its output, but there doesn't seem to be a way to detect whether the lines returned by system() were written to stdout or to stderr. Is there a way to tell them apart in vim script?

    Read the article

  • Python: Converting a tuple to a string with 'err'

    - by skylarking
    Given this : import os import subprocess def check_server(): cl = subprocess.Popen(["nmap","10.7.1.71"], stdout=subprocess.PIPE) result = cl.communicate() print result check_server() check_server() returns this tuple: ('\nStarting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:26 EDT\nInteresting ports on 10.7.1.71:\nNot shown: 1711 closed ports\nPORT STATE SERVICE\n21/tcp open ftp\n22/tcp open ssh\n80/tcp open http\n\nNmap done: 1 IP address (1 host up) scanned in 0.293 seconds\n', None) Changing the second line in the method to result, err = cl.communicate() results in check_server() returning : Starting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:27 EDT Interesting ports on 10.7.1.71: Not shown: 1711 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 80/tcp open http Nmap done: 1 IP address (1 host up) scanned in 0.319 seconds Looks to be the case that the tuple is converted to a string, and the \n's are being stripped.... but how? What is 'err' and what exactly is it doing?
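    Nothing is actually being converted or stripped: communicate() always returns a (stdout_data, stderr_data) pair, and err is None here because stderr was never redirected to a pipe. Printing the whole tuple shows the repr of the string (with the \n escapes visible), while printing the unpacked string renders the newlines. A small sketch of the difference, with a trivial printf command standing in for nmap:

        import subprocess

        cl = subprocess.Popen(["printf", "line one\nline two\n"],
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE)  # pipe stderr too, so err is not None
        out, err = cl.communicate()

        print((out, err))    # the tuple prints each element's repr, \n escapes and all
        print(out.decode())  # the decoded string prints with real line breaks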

    Read the article

  • Capturing stdout within the same process in Python

    - by danben
    I've got a python script that calls a bunch of functions, each of which writes output to stdout. Sometimes when I run it, I'd like to send the output in an e-mail (along with a generated file). I'd like to know how I can capture the output in memory so I can use the email module to build the e-mail. My ideas so far were: use a memory-mapped file (but it seems like I have to reserve space on disk for this, and I don't know how long the output will be) bypass all this and pipe the output to sendmail (but this may be difficult if I also want to attach the file)
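    A common in-process approach (a sketch, not necessarily what the linked article suggests) is to temporarily point sys.stdout at an in-memory StringIO buffer, which needs no disk space and whose contents can be handed straight to the email module. The noisy() function below is a made-up stand-in for the real ones:

        import io
        import sys

        def noisy():
            print("hello from a function that writes to stdout")

        buffer = io.StringIO()
        old_stdout = sys.stdout
        sys.stdout = buffer          # everything print()ed now lands in the buffer
        try:
            noisy()
        finally:
            sys.stdout = old_stdout  # always restore the real stdout

        captured = buffer.getvalue() # the text to drop into the e-mail body
        print("captured %d characters" % len(captured))

    On Python 3.4+ the same thing can be written more compactly with contextlib.redirect_stdout.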

    Read the article

  • Strange behaviour with fputs and a loop.

    - by Jonathan
    When running the following code I get no output but I cannot work out why. # include <stdio.h> int main() { fputs("hello", stdout); while (1); return 0; } Without the while loop it works perfectly but as soon as I add it in I get no output. Surely it should output before starting the loop? Is it just on my system? Do I have to flush some sort of buffer or something? Thanks in advance.

    Read the article

  • Maintaining Logging and/or stdout/stderr in Python Daemon

    - by dave mankoff
    Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example). This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting silently, since all open file descriptors were closed. I am having a tricky time debugging the issue currently and am wondering what the proper way to catch and log these errors is. What is the right way to set up logging such that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.
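    One common pattern (offered as a sketch, not the recipe from the linked article) is to do the double fork and descriptor cleanup first, and only then configure logging against a real file, optionally pointing sys.stdout and sys.stderr at the same place so stray prints and unhandled tracebacks survive too. daemonize() and the log path below are hypothetical placeholders.

        import logging
        import sys

        def setup_daemon_logging(logfile="/var/log/mydaemon.log"):
            # Called *after* daemonizing, so the handler's file descriptor is
            # opened fresh and is never among the ones closed by the double fork.
            logging.basicConfig(filename=logfile,
                                level=logging.INFO,
                                format="%(asctime)s %(levelname)s %(message)s")
            # Route anything still written to stdout/stderr (prints, tracebacks)
            # into the same file instead of a closed descriptor.
            log_fh = open(logfile, "a", buffering=1)
            sys.stdout = log_fh
            sys.stderr = log_fh

        # Hypothetical usage:
        # daemonize(pid_file)        # double fork + close descriptors, from your recipe
        # setup_daemon_logging()
        # logging.info("daemon started")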

    Read the article

  • Shell script to emulate warnings-as-errors?

    - by talkaboutquality
    Some compilers let you set warnings as errors, so that you'll never leave any compiler warnings behind, because if you do, the code won't build. This is a Good Thing. Unfortunately, some compilers don't have a flag for warnings-as-errors. I need to write a shell script or wrapper that provides the feature. Presumably it parses the compilation console output and returns failure if there were any compiler warnings (or errors), and success otherwise. "Failure" also means (I think) that object code should not be produced. What's the shortest, simplest UNIX/Linux shell script you can write that meets the explicit requirements above, as well as the following implicit requirements of otherwise behaving just like the compiler: - accepts all flags, options, arguments - supports redirection of stdout and stderr - produces object code and links as directed Key words: elegant, meets all requirements. Extra credit: easy to incorporate into a GNU makefile. Thanks for your help. === Clues === This solution to a different problem, using shell functions (?), Append text to stderr redirects in bash, might figure in. Wonder how to invite litb's friend "who knows bash quite well" to address my question? === Answer status === Thanks to Charlie Martin for the short answer, but that, unfortunately, is what I started out with. A while back I used that, released it for office use, and, within a few hours, had its most severe drawback pointed out to me: it will PASS a compilation that produces no warnings but does produce errors. That's really bad because then we're delivering object code that the compiler is sure won't work. The simple solution also doesn't meet the other requirements listed. Thanks to Adam Rosenfield for the shorthand, and Chris Dodd for introducing pipefail to the solution. Chris' answer looks closest, because I think the pipefail should ensure that if compilation actually fails on error, we'll get failure as we should. Chris, does pipefail work in all shells? And have any ideas on the rest of the implicit requirements listed above?

    Read the article
