Search Results

Search found 406 results on 17 pages for 'subprocess'.


  • Properly using subprocess.PIPE in python?

    - by Gordon Fontenot
    I'm trying to use subprocess.Popen to construct a sequence of commands to grab the duration of a video file. I've been searching for 3 days and can't find any reason online as to why this code isn't working, but it keeps giving me a blank result:

        import sys
        import os
        import subprocess

        def main():
            the_file = "/Volumes/Footage/Acura/MDX/2001/Crash Test/01 Acura MDX Front Crash.mov"
            ffmpeg = subprocess.Popen(['/opt/local/bin/ffmpeg', '-i', the_file],
                                      stdout=subprocess.PIPE)
            grep = subprocess.Popen(['grep', 'Duration'],
                                    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
            cut = subprocess.Popen(['cut', '-d', ' ', '-f', '4'],
                                   stdin=subprocess.PIPE, stdout=subprocess.PIPE)
            sed = subprocess.Popen(['sed', 's/,//'],
                                   stdin=subprocess.PIPE, stdout=subprocess.PIPE)
            duration = sed.communicate()
            print duration

        if __name__ == '__main__':
            main()
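
    A minimal sketch of a working pipeline, assuming the same ffmpeg path: the four processes above are never actually connected to each other (each stdin=subprocess.PIPE is a fresh, empty pipe), and ffmpeg prints its Duration line to stderr, not stdout.

        import subprocess

        the_file = "/path/to/video.mov"  # hypothetical path
        # ffmpeg writes its info banner, including Duration, to stderr
        ffmpeg = subprocess.Popen(['/opt/local/bin/ffmpeg', '-i', the_file],
                                  stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        # wire each stage's stdin to the previous stage's stdout
        grep = subprocess.Popen(['grep', 'Duration'],
                                stdin=ffmpeg.stdout, stdout=subprocess.PIPE)
        cut = subprocess.Popen(['cut', '-d', ' ', '-f', '4'],
                               stdin=grep.stdout, stdout=subprocess.PIPE)
        sed = subprocess.Popen(['sed', 's/,//'],
                               stdin=cut.stdout, stdout=subprocess.PIPE)
        duration = sed.communicate()[0]
        print duration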

    Read the article

  • Why doesn't Apache2::SubProcess spawn my subprocess?

    - by codeholic
    The following script works without errors, but /tmp/test.touch is not being created (even when checked later from the command line). It seems to me as if $r->spawn_proc_prog doesn't spawn a process. What may cause the problem?

        #!/usr/bin/perl
        use strict;
        use warnings;

        use Apache2::RequestUtil;
        use Apache2::SubProcess ();

        my $r = Apache2::RequestUtil->request;
        print "Content-Type: text/plain\n\n";
        print eval { $r->spawn_proc_prog('/usr/bin/touch', ['/tmp/test.touch']) }
            ? `ls -l /tmp/test.touch`
            : $@;

    Read the article

  • subprocess.Popen doesn't work when args is sequence

    - by pero
    I'm having a problem with subprocess.Popen when the args parameter is given as a sequence. For example:

        import subprocess
        maildir = "/home/support/Maildir"

    This works (it prints the correct size of the /home/support/Maildir dir):

        size = subprocess.Popen(["du -s -b " + maildir], shell=True,
                                stdout=subprocess.PIPE).communicate()[0].split()[0]
        print size

    But this doesn't work (try it):

        size = subprocess.Popen(["du", "-s -b", maildir], shell=True,
                                stdout=subprocess.PIPE).communicate()[0].split()[0]
        print size

    What's wrong?
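
    With shell=True on POSIX, only the first element of the list is run by the shell; the remaining elements become arguments to the shell itself, not to du. A minimal sketch of the usual fix, dropping the shell and giving each flag its own element:

        import subprocess

        maildir = "/home/support/Maildir"
        # no shell: the list maps directly onto du's argv, one element per flag
        p = subprocess.Popen(["du", "-s", "-b", maildir], stdout=subprocess.PIPE)
        size = p.communicate()[0].split()[0]
        print size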

    Read the article

  • Python 3 - Module: subprocess

    - by Rhys
    Hi Stack Overflow users, I've encountered a frustrating problem and can't find the answer to it. Yesterday I was trying to find a way to HIDE the window of a subprocess.Popen. So for example, if I was opening cmd, I would like it to be hidden, permanently. I found this code:

        kwargs = {}
        if subprocess.mswindows:
            su = subprocess.STARTUPINFO()
            su.dwFlags |= subprocess.STARTF_USESHOWWINDOW
            su.wShowWindow = subprocess.SW_HIDE
            kwargs['startupinfo'] = su
        subprocess.Popen("cmd.exe", **kwargs)

    It worked like a charm! But today, for reasons I don't need to get into, I had to reinstall Python 3 (32-bit). Now, when I run my program I get this error:

        Traceback (most recent call last):
          File "C:\Python31\hello.py", line 7, in <module>
            su.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        AttributeError: 'module' object has no attribute 'STARTF_USESHOWWINDOW'

    I'm using 32-bit Python 3.1.3, just like before. If you have any clues/alternatives PLEASE post, thanks. NOTE: I am looking for a SHORT method to hide the app, not two pages of code please.
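
    These flags are plain Win32 API constants that not every build of the subprocess module re-exports. A short sketch that defines them locally instead (the values 0x01 and 0 come from the Windows headers):

        import subprocess

        # Win32 constants, defined here in case subprocess doesn't export them
        STARTF_USESHOWWINDOW = 0x00000001
        SW_HIDE = 0

        su = subprocess.STARTUPINFO()
        su.dwFlags |= STARTF_USESHOWWINDOW
        su.wShowWindow = SW_HIDE
        subprocess.Popen("cmd.exe", startupinfo=su)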

    Read the article

  • How to determine subprocess.Popen() failed when shell=True

    - by Malcolm
    Windows version of Python 2.6.4: Is there any way to determine if subprocess.Popen() fails when using shell=True? Popen() successfully fails when shell=False:

        >>> import subprocess
        >>> p = subprocess.Popen( 'Nonsense.application', shell=False )
        Traceback (most recent call last):
          File "<pyshell#258>", line 1, in <module>
            p = subprocess.Popen( 'Nonsense.application' )
          File "C:\Python26\lib\subprocess.py", line 621, in __init__
            errread, errwrite)
          File "C:\Python26\lib\subprocess.py", line 830, in _execute_child
            startupinfo)
        WindowsError: [Error 2] The system cannot find the file specified

    But when shell=True, there appears to be no way to determine if a Popen() call was successful or not:

        >>> p = subprocess.Popen( 'Nonsense.application', shell=True )
        >>> p
        <subprocess.Popen object at 0x0275FF90>
        >>> p.pid
        6620
        >>> p.returncode
        >>>

    Ideas appreciated. Regards, Malcolm
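
    With shell=True, the process Popen starts is the shell itself, and the shell starts fine; the failure only shows up later in the shell's exit status. A sketch of one way to detect it, waiting for the shell and checking returncode:

        import subprocess

        p = subprocess.Popen('Nonsense.application', shell=True,
                             stderr=subprocess.PIPE)
        out, err = p.communicate()  # wait for the shell to finish
        if p.returncode != 0:
            # the shell reports the failed command through its exit status
            print 'command failed with status', p.returncode, ':', err.strip()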

    Read the article

  • Unable to install anything, getting error: subprocess installed post-installation script returned error exit status 1

    - by soum
        dpkg: error processing mono-4.0-gac (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for mousetweaks ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing mousetweaks (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mozilla-plugin-vlc ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mozilla-plugin-vlc (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for mtools ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Executing python subprocess via git hook

    - by aljesco
    I'm running Gitolite over the Git repository, and I have a post-receive hook there written in Python. I need to execute the "git" command in the git repository directory. There are a few lines of code:

        proc = subprocess.Popen(['git', 'log', '-n1'],
                                cwd='/home/git/repos/testing.git',
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        proc.communicate()

    After I make a new commit and push to the repository, the script executes and says:

        fatal: Not a git repository: '.'

    If I run

        proc = subprocess.Popen(['pwd'],
                                cwd='/home/git/repos/testing.git',
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    it prints, as expected, the correct path to the git repository (/home/git/repos/testing.git). If I run this script manually from bash, it works correctly and shows the correct output of "git log". What am I doing wrong?
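
    Git runs hooks with GIT_DIR set in the environment (for a bare repository, GIT_DIR=.), and that inherited variable overrides the cwd argument. A sketch of the usual fix, assuming the same repository path, is to strip GIT_DIR from the child's environment:

        import os
        import subprocess

        env = os.environ.copy()
        env.pop('GIT_DIR', None)  # hooks export GIT_DIR=. which overrides cwd
        proc = subprocess.Popen(['git', 'log', '-n1'],
                                cwd='/home/git/repos/testing.git', env=env,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        print out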

    Read the article

  • Asynchronous subprocess on Windows

    - by Stigma
    First of all, the overall problem I am solving is a bit more complicated than I am showing here, so please do not tell me 'use threads with blocking', as it would not solve my actual situation without a fair, FAIR bit of rewriting and refactoring. I have several applications which are not mine to modify, which take data from stdin and poop it out on stdout after doing their magic. My task is to chain several of these programs. Problem is, sometimes they choke, and as such I need to track their progress, which is outputted on STDERR.

        pA = subprocess.Popen(CommandA, shell=False,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # ... some more processes make up the chain, but that is irrelevant to the problem
        pB = subprocess.Popen(CommandB, shell=False,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              stdin=pA.stdout)

    Now, reading directly through pA.stdout.readline() and pB.stdout.readline(), or the plain read() functions, is a blocking matter. Since different applications output at different paces and in different formats, blocking is not an option. (And as I wrote above, threading is not an option unless as a last, last resort.) pA.communicate() is deadlock safe, but since I need the information live, that is not an option either. Thus google brought me to this asynchronous subprocess snippet on ActiveState. All good at first, until I implement it. Comparing the cmd.exe output of pA.exe | pB.exe, ignoring the fact both output to the same window making for a mess, I see very instantaneous updates. However, I implement the same thing using the above snippet and the read_some() function declared there, and it takes over 10 seconds to notify updates of a single pipe. But when it does, it has updates leading all the way up to 40% progress, for example. Thus I do some more research, and see numerous subjects concerning PeekNamedPipe, anonymous handles, and returning 0 bytes available even though there is information available in the pipe. As the subject has proven quite a bit beyond my expertise to fix or code around, I come to Stack Overflow to look for guidance. :) My platform is W7 64-bit with Python 2.6, the applications are 32-bit in case it matters, and compatibility with Unix is not a concern. I can even deal with a full ctypes or pywin32 solution that subverts subprocess entirely if it is the only solution, as long as I can read from every stderr pipe asynchronously with immediate performance and no deadlocks. :)
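
    For reference, a hedged sketch of the usual pywin32 polling pattern (essentially what the ActiveState recipe does internally); whether it avoids the delay described above depends on the child actually flushing its stderr:

        import msvcrt
        from win32pipe import PeekNamedPipe
        from win32file import ReadFile

        def read_available(pipe):
            """Return whatever the child has written to `pipe` so far, without blocking."""
            handle = msvcrt.get_osfhandle(pipe.fileno())
            # PeekNamedPipe reports how many bytes are waiting without consuming them
            _, avail, _ = PeekNamedPipe(handle, 0)
            if avail == 0:
                return ''
            _, data = ReadFile(handle, avail)
            return data

        # e.g. poll each chained process in a loop:
        # progress_a = read_available(pA.stderr)
        # progress_b = read_available(pB.stderr)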

    Read the article

  • how to kill (or avoid) zombie processes with subprocess module

    - by Dave
    When I kick off a python script from within another python script using the subprocess module, a zombie process is created when the subprocess "completes". I am unable to kill this subprocess unless I kill my parent python process. Is there a way to kill the subprocess without killing the parent? I know I can do this by using wait(), but I need to run my script with no_wait().
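
    A zombie is just an exit record waiting for the parent to collect it, and collecting it does not require blocking. A sketch of two common approaches:

        import signal
        import subprocess

        p = subprocess.Popen(['python', 'child_script.py'])  # hypothetical child

        # Option 1: reap without waiting; call this from the parent's main loop.
        # poll() collects the zombie as soon as the child has exited.
        if p.poll() is not None:
            print 'child finished with status', p.returncode

        # Option 2 (POSIX): never create zombies at all; note the parent then
        # cannot retrieve the exit statuses of its children.
        signal.signal(signal.SIGCHLD, signal.SIG_IGN)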

    Read the article

  • Running Subprocess from Python

    - by Rohit
    I want to run a command-line exe using a python script. I have the following code:

        def run_command(command):
            p = subprocess.Popen(command, shell=True,
                                 stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            return p.communicate()

    Then I use:

        run_command(r"C:\Users\user\Desktop\application\uploader.exe")

    This returns the option menu, where I need to specify additional parameters for the exe to run. So I need to pass those additional parameters in. How do I accomplish this? I've looked at subprocess.communicate, but I was unable to understand it.
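
    communicate() can feed the program's prompts through stdin in one shot. A sketch, assuming the menu accepts its choices as typed lines (the exact responses below are made up):

        import subprocess

        p = subprocess.Popen(r"C:\Users\user\Desktop\application\uploader.exe",
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        # send the same keystrokes you would type at the menu, one per line
        out, _ = p.communicate(input="2\nmyfile.txt\n")  # hypothetical menu answers
        print out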

    Read the article

  • Python's subprocess.Popen object hangs gathering child output when child process does not exit

    - by Daniel Miles
    When a process exits abnormally or not at all, I still want to be able to gather what output it may have generated up until that point. The obvious solution to this example code is to kill the child process with an os.kill, but in my real code the child is hung waiting for NFS and does not respond to a SIGKILL:

        #!/usr/bin/python
        import subprocess
        import os
        import time
        import signal
        import sys

        child_script = """
        #!/bin/bash
        i=0
        while [ 1 ]; do
            echo "output line $i"
            i=$(expr $i \+ 1)
            sleep 1
        done
        """

        childFile = open("/tmp/childProc.sh", 'w')
        childFile.write(child_script)
        childFile.close()

        cmd = ["bash", "/tmp/childProc.sh"]
        finish = time.time() + 3
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             stdin=subprocess.PIPE)
        while p.poll() is None:
            time.sleep(0.05)
            if finish < time.time():
                print "timed out and killed child, collecting what output exists so far"
                out, err = p.communicate()
                print "got it"
                sys.exit(0)

    In this case, the print statement about timing out appears and the python script never exits or progresses. Does anybody know how I can do this differently and still get output from my child process?
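
    communicate() waits for EOF on the pipes, which never arrives while the child is hung. A sketch (POSIX) that collects only what is already readable, using select with a timeout instead:

        import os
        import select

        def drain(p, timeout=0.1):
            """Read whatever the child has produced so far, without waiting for exit."""
            chunks = []
            while True:
                ready, _, _ = select.select([p.stdout], [], [], timeout)
                if not ready:
                    break
                data = os.read(p.stdout.fileno(), 4096)
                if not data:  # EOF: the child closed its end
                    break
                chunks.append(data)
            return ''.join(chunks)

        # out = drain(p)  # partial output, even though the child never exits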

    Read the article

  • Cleaning up temp folder after long-running subprocess exits

    - by dbr
    I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these. When the image-viewing process exits, I want to remove the temporary images. I can't do this from Python, as the Python process may have exited before the subprocess completes. I.e. I cannot do the following:

        p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
        p.communicate()
        os.unlink("/example/image1.jpg")
        os.unlink("/example/image2.jpg")

    ..as this blocks the main thread, nor could I check for the pid exiting in a thread, etc. The only solution I can think of means I have to use shell=True, which I would rather avoid:

        cmd = ['imgviewer']
        cmd.append("/example/image2.jpg")
        for x in cleanup:
            cmd.extend(["&&", "rm", x])
        cmdstr = " ".join(cmd)
        subprocess.Popen(cmdstr, shell = True)

    This works, but is hardly elegant, and will fail with filenames containing spaces etc. Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
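
    One shell-free sketch: spawn a tiny detached Python helper that launches the viewer, waits for it, and deletes the files. The helper is its own process, so the main script can exit immediately, and passing the names as an argument list avoids any quoting problems with spaces:

        import subprocess
        import sys

        images = ["/example/image1.jpg", "/example/image2.jpg"]

        # helper source: run the viewer, wait, then remove the temp files
        watcher = (
            "import os, subprocess, sys\n"
            "files = sys.argv[1:]\n"
            "subprocess.call(['imgviewer'] + files)\n"
            "for f in files:\n"
            "    try:\n"
            "        os.unlink(f)\n"
            "    except OSError:\n"
            "        pass\n"
        )
        subprocess.Popen([sys.executable, "-c", watcher] + images)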

    Read the article

  • read subprocess stdout line by line

    - by Caspin
    My python script uses subprocess to call a linux utility that is very noisy. I want to store all of the output in a log file and show only some of it to the user. I thought the following would work, but the output doesn't show up in my application until the utility has produced a significant amount of output:

        #fake_utility.py, just generates lots of output over time
        import time
        i = 0
        while True:
            print hex(i)*512
            i += 1
            time.sleep(0.5)

        #filters output
        import subprocess
        proc = subprocess.Popen(['python', 'fake_utility.py'], stdout=subprocess.PIPE)
        for line in proc.stdout:
            #the real code does filtering here
            print "test:", line.rstrip()

    The behavior I really want is for the filter script to print each line as it is received from the subprocess, sorta like what tee does but with python code. What am I missing? Is this even possible?
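
    Iterating over a file object reads ahead into a large internal buffer, which is what introduces the lag. A sketch of the usual fix, looping on readline() instead:

        import subprocess

        proc = subprocess.Popen(['python', 'fake_utility.py'],
                                stdout=subprocess.PIPE, bufsize=1)
        # readline() returns as soon as a full line arrives; iterating the file
        # object directly would buffer ahead and lag behind the child
        for line in iter(proc.stdout.readline, ''):
            print "test:", line.rstrip()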

    Read the article

  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with python 2.6.6 on ubuntu Maverick, with Oracle 11gR2 Enterprise edition. I am able to connect to my oracle db just fine, but once I do that, the subprocess module does not work anymore. Here is an iPython session that reproduces the problem:

        In [1]: import subprocess as sp, cx_Oracle as dbh

        In [2]: sp.call(['whoami'])
        sharat
        Out[2]: 0

        In [3]: con = dbh.connect('system', 'password')

        In [4]: con.close()

        In [5]: sp.call(['whomai'])
        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        /home/sharat/desk/calypso-launcher/<ipython console> in <module>()
        /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs)
            468     retcode = call(["ls", "-l"])
            469     """
        --> 470     return Popen(*popenargs, **kwargs).wait()
            471
            472
        /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
            621                                 p2cread, p2cwrite,
            622                                 c2pread, c2pwrite,
        --> 623                                 errread, errwrite)
            624
            625         if mswindows:
        /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
           1134
           1135                 if data != "":
        -> 1136                     _eintr_retry_call(os.waitpid, self.pid, 0)
           1137                     child_exception = pickle.loads(data)
           1138                     for fd in (p2cwrite, c2pread, errread):
        /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args)
            453     while True:
            454         try:
        --> 455             return func(*args)
            456         except OSError, e:
            457             if e.errno == errno.EINTR:
        OSError: [Errno 10] No child processes

    So, the call to sp.call works fine before connecting to oracle, but breaks after that, even though I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here. I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but it didn't help me. Any help appreciated. Thanks.
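
    The ECHILD error usually means something has already reaped the child, and the Oracle client libraries are known to install their own SIGCHLD handler when a connection is made. A hedged sketch of the common workaround, restoring the default handler after connecting:

        import signal
        import subprocess as sp
        import cx_Oracle as dbh

        con = dbh.connect('system', 'password')
        # the Oracle client may have replaced the SIGCHLD handler; put it back
        # so subprocess can wait on its own children again
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)
        sp.call(['whoami'])
        con.close()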

    Read the article

  • Python: nonblocking read from stdout of threaded subprocess

    - by sberry2A
    I have a script (worker.py) that prints unbuffered output in the form...

        1
        2
        3
        .
        .
        .
        n

    where n is some constant number of iterations a loop in this script will make. In another script (service_controller.py) I start a number of threads, each of which starts a subprocess using subprocess.Popen(stdout=subprocess.PIPE, ...). Now, in my main thread (service_controller.py) I want to read the output of each thread's worker.py subprocess and use it to calculate an estimate of the time remaining till completion. I have all of the logic working that reads the stdout from worker.py and determines the last printed number. The problem is that I can not figure out how to do this in a non-blocking way. If I read a constant bufsize then each read will end up waiting for the same data from each of the workers. I have tried numerous ways including using fcntl, select + os.read, etc. What is my best option here? I can post my source if needed, but I figured the explanation describes the problem well enough. Thanks for any help here. EDIT: Adding sample code. I have a worker that starts a subprocess:

        class WorkerThread(threading.Thread):
            def __init__(self):
                self.completed = 0
                self.process = None
                self.lock = threading.RLock()
                threading.Thread.__init__(self)

            def run(self):
                cmd = ["/path/to/script", "arg1", "arg2"]
                self.process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                                bufsize=1, shell=False)
                #flags = fcntl.fcntl(self.process.stdout, fcntl.F_GETFL)
                #fcntl.fcntl(self.process.stdout.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)

            def get_completed(self):
                self.lock.acquire()
                fd = select.select([self.process.stdout.fileno()], [], [], 5)[0]
                if fd:
                    self.data += os.read(fd, 1)
                    try:
                        self.completed = int(self.data.split("\n")[-2])
                    except IndexError:
                        pass
                self.lock.release()
                return self.completed

    I then have a ThreadManager:

        class ThreadManager():
            def __init__(self):
                self.pool = []
                self.running = []
                self.lock = threading.Lock()

            def clean_pool(self, pool):
                for worker in [x for x in pool if not x.isAlive()]:
                    worker.join()
                    pool.remove(worker)
                    del worker
                return pool

            def run(self, concurrent=5):
                while len(self.running) + len(self.pool) > 0:
                    self.clean_pool(self.running)
                    n = min(max(concurrent - len(self.running), 0), len(self.pool))
                    if n > 0:
                        for worker in self.pool[0:n]:
                            worker.start()
                        self.running.extend(self.pool[0:n])
                        del self.pool[0:n]
                    time.sleep(.01)
                for worker in self.running + self.pool:
                    worker.join()

    and some code to run it:

        threadManager = ThreadManager()
        for i in xrange(0, 5):
            threadManager.pool.append(WorkerThread())
        threadManager.run()

    I have stripped out a lot of the other code in hopes of pinpointing the issue.

    Read the article

  • xterm python subprocess

    - by Quacked Python
    If using subprocess to execute an xterm on linux, which in turn executes some other process, it seems that Python (2.6.5) will never recognize that the process (xterm) has completed execution. Consider the following code:

        import subprocess
        import shlex
        import time

        proc = subprocess.Popen(shlex.split('xterm -iconic -title "FOO_BAR" -e sleep 5'))
        while True:
            if proc.poll():
                print 'Process completed'
            time.sleep(0.1)

    This will loop infinitely until you terminate the Python interpreter. I'm guessing that this is probably caused by some oddity with xterm, and not a direct cause of the Python subprocess module, but maybe there are some other smart people out there that could shed some light on the situation. Note: Calling proc.communicate() will in fact return when the xterm completes, but for some reason the poll method will not work.
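
    The likely culprit is the truthiness test rather than xterm: poll() returns None while the process is running and the exit status afterwards, and a clean xterm exit returns 0, which is falsy. A sketch of the fix, comparing against None:

        import shlex
        import subprocess
        import time

        proc = subprocess.Popen(shlex.split('xterm -iconic -title "FOO_BAR" -e sleep 5'))
        while True:
            # poll() is None while running; afterwards it is the exit code, often 0
            if proc.poll() is not None:
                print 'Process completed'
                break
            time.sleep(0.1)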

    Read the article

  • python subprocess block

    - by MW
    Hello. I'm having a problem with the subprocess module. I'm running a script from python through:

        subprocess.Popen('./run_pythia.sh', shell=True).communicate()

    and sometimes it just blocks and never finishes executing the script. Before, I was using .wait() instead of .communicate(), but because of this: http://dcreager.net/2009/08/06/subprocess-communicate-drawbacks/ I changed to .communicate(). Nevertheless, the problem continues. Can anyone help me?

    Read the article

  • Subprocess fails to catch the standard output

    - by user343934
    I am trying to generate a tree from a fasta file input, with an alignment from MuscleCommandline:

        import sys, os, subprocess
        from Bio import AlignIO
        from Bio.Align.Applications import MuscleCommandline

        cline = MuscleCommandline(input="c:\Python26\opuntia.fasta")
        child = subprocess.Popen(str(cline),
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                                 shell=(sys.platform != "win32"))
        align = AlignIO.read(child.stdout, "fasta")
        outfile = open('c:\Python26\opuntia.phy', 'w')
        AlignIO.write([align], outfile, 'phylip')
        outfile.close()

    I always encounter these problems:

        Traceback (most recent call last):
          File "<string>", line 244, in run_nodebug
          File "C:\Python26\muscleIO.py", line 11, in <module>
            align = AlignIO.read(child.stdout, "fasta")
          File "C:\Python26\Lib\site-packages\Bio\AlignIO\__init__.py", line 423, in read
            raise ValueError("No records found in handle")
        ValueError: No records found in handle

    Read the article

  • Error setting env thru subprocess.call to run a python script on a remote linux machine

    - by John Smith
    I am running a python script on a windows machine to invoke another python script on a remote linux machine. I am using subprocess.call with ssh to do this, like below:

        subprocess.call('ssh -i <identity file> username@hostname python <script_on_linux_machine>')

    and this works fine. However, if I want to set some environment variables, like below:

        subprocess.call('ssh -i <identity file> username@hostname python <script_on_linux_machine>',
                        env={key1: value1})

    it fails. I get the following error:

        ssh_connect: getnameinfo failed
        ssh: connect to host <hostname> port 22: Operation not permitted
        255

    I've tried splitting the ssh command into a list and passing that. It didn't help. I've tried running other 'local' (windows) commands through subprocess.call() while setting the env, and that works fine. I've tried running other commands (such as ls) on the remote linux machine. Again, subprocess.call() works fine, as long as I don't try to set the environment. What am I doing wrong? Would I be able to set the environment for a python script on a remote machine? Any help will be appreciated.
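
    env= replaces the child's entire environment, and on Windows ssh needs inherited variables (SYSTEMROOT among them) to do name resolution, which is consistent with the getnameinfo failure. A sketch of the usual fix, merging new variables into a copy of the current environment; note that env= only affects the local ssh client, so a variable meant for the remote script has to travel inside the remote command (the question's placeholders are kept as-is):

        import os
        import subprocess

        env = os.environ.copy()   # keep everything ssh itself needs
        env['KEY1'] = 'value1'    # then add your own variables
        subprocess.call('ssh -i <identity file> username@hostname python <script>',
                        env=env)

        # to set a variable for the *remote* script, prefix the remote command:
        subprocess.call('ssh -i <identity file> username@hostname '
                        '"KEY1=value1 python <script>"')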

    Read the article

  • why does setting stderr=subprocess.STDOUT fix a subprocess.check_output call?

    - by ShankarG
    I have a python script running on a small server that is called in three different ways: from within another python script, by cron, or by gammu-smsd (an SMS daemon that goes with the wonderful mobile utility gammu). The script is for maintenance and contained the following kludge to measure used space on the system (presumably this is possible from within Python, but this was quick and dirty):

        reportdict['Used Space'] = subprocess.check_output(
            ["df / | tail -1 | awk '{ print $5; }'"], shell=True)[0:-1]

    Oddly enough, this line would fail only when the script was called by a shell script running from gammu-smsd. The line would fail with a CalledProcessError exception saying "returned exit status 2", even though the output attribute of the CalledProcessError object contained the correct output. The only command in the sequence of shell commands that would give such an error status is awk, with status 2 indicating a fatal error. If the python script with this line was called by cron, by another python script, or from the command line, the line would work fine. I broke my head trying to fix the environment for the script, thinking this must be the problem. Finally I put in stderr=subprocess.STDOUT, like so:

        reportdict['Used Space'] = subprocess.check_output(
            ["df / | tail -1 | awk '{ print $5; }'"],
            stderr=subprocess.STDOUT, shell=True)[0:-1]

    This was a debug measure to help me figure out if some output was coming on stderr. But after this the script started working, even when called from gammu-smsd! Why might this be the case? I ask for future reference when using subprocess...
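
    One plausible explanation: a daemon may start its children with stderr closed, and awk treats a write failure on its diagnostic stream as a fatal error (status 2); stderr=subprocess.STDOUT hands it a valid descriptor again. Since the question suspects this is possible in pure Python anyway, a sketch using os.statvfs that sidesteps the shell pipeline entirely:

        import os

        st = os.statvfs('/')
        used = st.f_blocks - st.f_bfree
        # percentage in use, roughly as df computes it: used / (used + available)
        pct = 100.0 * used / (used + st.f_bavail)
        print '%d%%' % round(pct)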

    Read the article

  • Multiple calls to /dev/stdin using python subprocess (*nix)

    - by Alex Leach
    Hi, I have a python subprocess call which I would like to link up to three pipes (two standard in and one standard out). I know that there is only one /dev/stdin, but there are all those other devices in /dev I don't know about, and I don't know of anything in the python os, sys or subprocess modules that will utilise them in a manner which allows me to give the device path to subprocess.Popen. The reason I ask is because I would like to pipe information from a mysql database or tar archive rather than the directory structure I currently have, which contains 28,000 directories. The directory names alone use a LOT of space! The alternative is to tar / gunzip the entire directory structure and manoeuvre through the compressed archive. With either solution, mysql or tar, I would still like to have two pipes into subprocess.Popen and one out, so that I can bypass the HDD. Any need for an example??
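
    A sketch of the usual Linux trick: create extra pipes with os.pipe(), let the child inherit them (close_fds=False), and pass the /dev/fd/N paths on its command line; the tool name below is hypothetical. For large payloads the writes would need to be interleaved or done from separate processes, so the pipe buffers don't fill up:

        import os
        import subprocess

        r1, w1 = os.pipe()  # first extra input stream
        r2, w2 = os.pipe()  # second extra input stream

        # /dev/fd/N names an inherited file descriptor, so the child can
        # open the extra streams by path like ordinary files
        proc = subprocess.Popen(
            ['some_tool', '/dev/fd/%d' % r1, '/dev/fd/%d' % r2],
            stdout=subprocess.PIPE, close_fds=False)

        os.close(r1)
        os.close(r2)
        os.write(w1, 'data for stream one\n')
        os.write(w2, 'data for stream two\n')
        os.close(w1)
        os.close(w2)
        print proc.stdout.read()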

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic hadoop master slave cluster setup and I am able to run mapreduce programs (including python) on the cluster. Now I am trying to run a python code which accesses a C binary, and so I am using the subprocess module. I am able to use hadoop streaming for a normal python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still fails to run:

        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        ...
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!

    The command I am trying is:

        hadoop jar contrib/streaming/hadoop-*streaming*.jar \
            -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py \
            -input /user/hduser/mars_inputt -output /user/hduser/mars-output \
            -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple helloworld program which I am using to check the basic functioning. My Python code is:

        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])

    Any help with how to get the executable to run with Python in hadoop streaming, or help with debugging this, will get me forward in this. Thanks, Ganesh
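
    One guess worth trying: files shipped with -file are unpacked into the task's working directory, but they do not always keep their execute bit, in which case subprocess.call(["./hello"]) fails inside the mapper. A hedged sketch that restores the bit before calling:

        #!/usr/bin/env python
        import os
        import stat
        import subprocess

        # -file ships the binary into the task's working directory;
        # restore the execute bit in case packaging dropped it
        os.chmod('./hello', os.stat('./hello').st_mode | stat.S_IEXEC)
        subprocess.call(['./hello'])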

    Read the article

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected python scripts and runs them in a cmd window next to the GUI window. I am able to get my launcher to work on my (windows xp 32 bit) laptop, but when I upload it to the server (64bit windows IIS7) I run into some issues. The script runs, to my knowledge, but spits back no information into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together, and I am fairly certain that this is a very bad example of the subprocess module. Just wondering if I could get a hand :). My question is: how do I have to alter my code to work on a 64bit windows server? :)

        from Tkinter import *
        import pickle, subprocess, errno, time, sys, os

        PIPE = subprocess.PIPE

        if subprocess.mswindows:
            from win32file import ReadFile, WriteFile
            from win32pipe import PeekNamedPipe
            import msvcrt
        else:
            import select
            import fcntl

        def recv_some(p, t=.1, e=1, tr=5, stderr=0):
            if tr < 1:
                tr = 1
            x = time.time() + t
            y = []
            r = ''
            pr = p.recv
            if stderr:
                pr = p.recv_err
            while time.time() < x or r:
                r = pr()
                if r is None:
                    if e:
                        raise Exception(message)
                    else:
                        break
                elif r:
                    y.append(r)
                else:
                    time.sleep(max((x - time.time()) / tr, 0))
            return ''.join(y)

        def send_all(p, data):
            while len(data):
                sent = p.send(data)
                if sent is None:
                    raise Exception(message)
                data = buffer(data, sent)

    The code above isn't mine.

        def Run():
            print filebox.get(0)
            location = filebox.get(0)
            location = location.__str__().replace(listbox.get(ANCHOR).__str__(), "")
            theTime = time.asctime(time.localtime(time.time()))
            lastbox.delete(0, END)
            lastbox.insert(END, theTime)
            for line in CookieCont:
                if listbox.get(ANCHOR) in line and len(line) > 4:
                    line[4] = theTime
                else:
                    "Fill In the rip Details to record the time"

        if __name__ == '__main__':
            if sys.platform == 'win32' or sys.platform == 'win64':
                shell, commands, tail = ('cmd', ('cd "' + location + '"',
                                                 listbox.get(ANCHOR).__str__()), '\r\n')
            else:
                return "Please use contact admin"

            a = Popen(shell, stdin=PIPE, stdout=PIPE)
            print recv_some(a)
            for cmd in commands:
                send_all(a, cmd + tail)
                print recv_some(a)
            send_all(a, 'exit' + tail)
            print recv_some(a, e=0)

    The code above is mine :)

    Read the article

  • Clean stop of Python bottle webserver when started from subprocess

    - by luc
    Hello all, I would like to embed the great Bottle web framework into a small application (the 1st target is Windows OS). This app starts the bottle webserver thanks to the subprocess module:

        import subprocess
        p = subprocess.Popen('python websrv.py')

    The bottle app is quite simple:

        @route("/")
        def index():
            return template('index')

        run(reloader=True)

    It starts the default webserver in a Windows console. All seems OK, except that I must press Ctrl-C to close the bottle webserver. I would like the master app to terminate the webserver when it shuts down. I can't find a way to do that (p.terminate() doesn't work in this case, unfortunately). Any idea? Thanks in advance
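
    A likely reason p.terminate() appears not to work: with reloader=True, bottle relaunches itself in a child process that does the actual serving, so terminating p only kills the wrapper. A sketch of the simplest fix, assuming the reloader isn't needed when embedded:

        # websrv.py: run without the reloader so this process is the real server
        # run(reloader=False)

        # master app:
        import subprocess

        p = subprocess.Popen(['python', 'websrv.py'])
        # ... later, on shutdown, this now stops the actual server:
        p.terminate()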

    Read the article

  • why does python.subprocess hang after proc.communicate()?

    - by ccfenix
    I've got an interactive program called my_own_exe. First, it prints out "alive", then you input S\n, and then it prints out "alive" again. Finally you input L\n. It does some processing and exits. However, when I call it from the following python script, the program seems to hang after printing out the first 'alive'. Can anyone here tell me why this is happening? Thanks

        proc2 = subprocess.Popen("my_own_exe", shell=True,
                                 stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        print proc2.communicate()[0]
        time.sleep(2)
        print "alive"

        # 'hang' after printing this line
        proc2.communicate('S\n')[0]
        print "alive"
        print proc2.communicate()[0]
        time.sleep(6)
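
    communicate() waits for the process to terminate and then closes the pipes, so the first call blocks forever on a program that is waiting for input, and it cannot be called again afterwards. A sketch of a prompted exchange using the pipes directly, assuming the program emits exactly one "alive" line at each step:

        import subprocess

        proc = subprocess.Popen("my_own_exe", shell=True,
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        print proc.stdout.readline()   # first "alive"
        proc.stdin.write('S\n')
        proc.stdin.flush()
        print proc.stdout.readline()   # second "alive"
        proc.stdin.write('L\n')
        proc.stdin.flush()
        out, _ = proc.communicate()    # let it finish and collect the rest
        print out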

    Read the article
