Search Results

Search found 24629 results on 986 pages for 'python c api'.


  • Python web development framework for Python 3.1 users

    - by iama
    I have been learning Python for some time now. While starting this "learning Python" endeavor I decided to learn the latest and greatest version, 3.1. I regret this decision now, because I wanted to try my hand at some of the Python web development frameworks, and it looks like many of them do not support 3.1 yet; it might take them years to support the new version of Python, especially Django and TurboGears. This is really disappointing. Therefore, SO users, do you have any recommendation for a web framework that runs on 3.1 and supports some of the modern (I guess I will never learn ;-)) web framework features like MVC/ORM/URL routing/caching etc.? Many thanks for your response.

    Read the article

  • How can I sandbox Python in pure Python?

    - by Blixt
    I'm developing a web game in pure Python, and want some simple scripting available to allow for more dynamic game content. Game content can be added live by privileged users. It would be nice if the scripting language could be Python. However, it can't run with access to the environment the game runs on since a malicious user could wreak havoc which would be bad. Is it possible to run sandboxed Python in pure Python? If not, are there any open source script interpreters written in pure Python that I could use? The requirements are support for variables, basic conditionals and function calls (not definitions).
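
    One direction sometimes suggested for this kind of restriction -- sketched here only as an illustration on Python 3.8+, not as a vetted sandbox, and certainly not secure against a determined attacker -- is to parse the script with the standard ast module and reject any syntax outside a small whitelist before executing it with an empty __builtins__. The node list and the run_sandboxed helper below are made up for the example.

    import ast

    # Node types the mini-language is allowed to use: assignments, names,
    # literals, arithmetic/comparison expressions, if-statements and calls.
    ALLOWED_NODES = (
        ast.Module, ast.Expr, ast.Assign, ast.Name, ast.Store, ast.Load,
        ast.If, ast.Compare, ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE,
        ast.BoolOp, ast.And, ast.Or, ast.UnaryOp, ast.Not,
        ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.Div,
        ast.Call, ast.Constant,
    )

    def run_sandboxed(source, exposed_functions):
        """Run untrusted source after rejecting any non-whitelisted syntax."""
        tree = ast.parse(source, mode="exec")
        for node in ast.walk(tree):
            if not isinstance(node, ALLOWED_NODES):
                raise ValueError("disallowed syntax: %s" % type(node).__name__)
        env = {"__builtins__": {}}          # no builtins at all
        env.update(exposed_functions)       # only what the game chooses to expose
        exec(compile(tree, "<script>", "exec"), env)
        return env

    # The script below may set variables, branch and call spawn(), nothing else.
    env = run_sandboxed("x = 3\nif x > 2:\n    spawn('goblin', x)",
                        {"spawn": lambda kind, n: print("spawn", kind, n)})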

    Read the article

  • How to create markers on the Google local search API?

    - by cheesebunz
    As the question says, I do not want to use it from the API, and instead want to combine it into my own code, but I can't seem to implement it with the code I have now. The markers do not come out, and the search completely disappears if I try implementing it with the code. This is a section of my code: http://www.mediafire.com/?0minqxgwzmx

    Read the article

  • When is a Google Maps API key required?

    - by Thomas
    Recently Google changed its policy on the use of API keys: you're now supposed to no longer need an API key to place Google Maps on your website. And this worked perfectly. But now I have this map (without an API key) running on my localhost, which works fine, yet as soon as I place it online I get a popup saying that I need another API key. And on another page on that website, Google Maps does work. Could it maybe have something to do with the fact that the map that doesn't work has a lot (30+) of markers on it? Actually, using an API key wouldn't be a very nice solution for me, as this is part of a Wordpress plugin used on many websites.

    Read the article

  • How to choose the Python version to install in Gentoo

    - by Shamanu4
    Hello, I'm using Gentoo Linux and I want to install python2.5, but it's a problem. emerge -av python shows:

    These are the packages that would be merged, in order:
    Calculating dependencies... done!
    [ebuild U ] dev-lang/python-3.1.2-r3 [3.1.1-r1] USE="gdbm ipv6 ncurses readline ssl threads (wide-unicode%*) xml -build -doc -examples -sqlite* -tk -wininst (-ucs2%)" 9,558 kB
    [ebuild U ] app-admin/python-updater-0.8 [0.7] 8 kB

    and there are ebuilds for more versions:

    # ls /usr/portage/dev-lang/python
    ChangeLog  files  Manifest  metadata.xml  python-2.4.6.ebuild  python-2.5.4-r4.ebuild  python-2.6.4-r1.ebuild  python-2.6.5-r2.ebuild  python-3.1.2-r3.ebuild

    How do I choose the ebuild that I want? (python-2.5.4-r4)

    Read the article

  • What could cause a script to fail to find Python when it has `#!/usr/bin/env python` in the first line?

    - by jcollum
    Trying to get casperjs running on Ubuntu 12.04. After installing it, when I run it I get:

    09:20 $ ll /usr/local/bin/casperjs
    lrwxrwxrwx 1 root root 26 Nov 6 16:49 /usr/local/bin/casperjs -> /opt/casperjs/bin/casperjs
    09:20 $ /usr/bin/env python --version
    Python 2.7.3
    09:20 $ cat /opt/casperjs/bin/casperjs | head -4
    #!/usr/bin/env python
    import os
    import sys
    09:20 $ casperjs
    : No such file or directory
    09:22 $ python
    Python 2.7.3 (default, Sep 26 2013, 20:03:06) [GCC 4.6.3] on linux2

    So Python is present and runnable, casperjs is pointing to the right place, and it is a Python script. But when I run it I get "No such file". I can fix it by changing the first line of the casperjs Python file from:

    #!/usr/bin/env python

    to:

    #!/usr/bin/python

    Result:

    $ casperjs --version
    1.1.0-DEV

    I managed to fix it, but I'm wondering why it didn't work with #!/usr/bin/env python, since that seems to be a normal interpreter line. Do I have something configured wrong? Here are the steps to get casperjs:

    $ git clone git://github.com/n1k0/casperjs.git
    $ cd casperjs
    $ ln -sf `pwd`/bin/casperjs /usr/local/bin/casperjs
    $ casperjs
    : No such file or directory
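
    One cause often reported for exactly this symptom -- purely an assumption here, not confirmed from the question -- is a Windows-style carriage return at the end of the shebang line, which makes the kernel look for an interpreter literally named "python\r". A quick check from Python:

    # Inspect the raw bytes of the first line of the script (path taken from
    # the question); b'#!/usr/bin/env python\r\n' would explain the error.
    with open("/opt/casperjs/bin/casperjs", "rb") as f:
        print(repr(f.readline()))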

    Read the article

  • Odd difference between Python 2.5 and Python 2.6 on MacOS 10.6 using ctypes and libproc proc_pidinfo

    - by cemasoniv
    I'm trying to determine the current working directory of a process given its PID. The command-line utility lsof does something similar. Here's the source to the Python script:

    import ctypes
    from ctypes import util
    import sys

    PROC_PIDVNODEPATHINFO = 9

    proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))
    print(proc.proc_pidinfo)

    class vnode_info(ctypes.Structure):
        _fields_ = [('data', ctypes.c_ubyte * 152)]

    class vnode_info_path(ctypes.Structure):
        _fields_ = [('vip_vi', vnode_info), ('vip_path', ctypes.c_char * 1024)]

    class proc_vnodepathinfo(ctypes.Structure):
        _fields_ = [('pvi_cdir', vnode_info_path), ('pvi_rdir', vnode_info_path)]

    inst = proc_vnodepathinfo()
    pid = int(sys.argv[1])
    ret = proc.proc_pidinfo(pid, PROC_PIDVNODEPATHINFO, 0,
                            ctypes.byref(inst), ctypes.sizeof(inst))
    print(ret, inst.pvi_cdir.vip_path)

    However, even though this script behaves as expected on Python 2.6, it does not work in Python 2.5:

    host:dir user$ sudo /usr/bin/python2.6 script.py 2698
    <_FuncPtr object at 0x100419ae0>
    (2352, '/')
    host:dir user$ sudo /usr/bin/python2.5 script.py 2698
    <_FuncPtr object at 0x19fdc0>
    (0, '')

    (PID 2698 is "Activity Monitor.app".) Note the different return values. Since this program is strongly based on ctypes, I can't imagine any difference in Python itself that would cause this. The same behavior (as Python 2.5) occurs with my self-built Python 3.2. I'm not sure what versioning information I can give to help track down the weirdness -- or even come up with a solution for 2.5 -- but here's some stuff:

    host:dir user$ otool -L /usr/bin/python2.6
    /usr/bin/python2.6:
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
    host:dir user$ otool -L /usr/bin/python2.5
    /usr/bin/python2.5 (architecture i386):
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
    /usr/bin/python2.5 (architecture ppc7400):
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
    host:dir user$ uname -a
    Darwin host.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

    Thanks to anyone who has a clue about what's going on here :)

    Read the article

  • Facebook Graph API authentication in canvas app and track session

    - by cdpnet
    The short question is: how can I use the Graph API OAuth redirect mechanism to authenticate the user and save the retrieved access_token, and also use the JavaScript SDK when needed (the problem is that the JavaScript SDK will have a different access_token when initialized)? I initially set up my Facebook iframe canvas app with single sign-on. This works well with the Graph API, as I am able to use the access_token saved by Facebook's JavaScript when it detects a session change (user logged in). But I would rather not do single sign-on, and instead use the Graph API redirect and send the user to a permissions dialog. However, if he has already given permissions, I shouldn't redirect the user. How do I handle this? Another question: I have done the Graph API redirects for authentication and have retrieved the access_token as well. But then, what if I want to use the JavaScript call FB.ui to do stream.Publish? I think it will use its own access_token, which is set during FB.init when it detects the session. So, I am looking for some path here: how to use the Graph API for authentication and also use Facebook's JavaScript SDK when needed. P.S. I'm using ASP.NET MVC 2. I have an authentication filter developed, which needs to detect the user's authentication state and redirect (currently it does this to the Graph API authorize URL).

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master/slave cluster setup and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run a Python code which accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for a normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still does not run.

    packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
    JarBuilder.addNamedStream hello
    ...
    12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
    12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
    12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
    12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
    12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
    12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
    12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
    12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
    12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
    12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
    12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
    12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
    12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
    Streaming Job Failed!

    The command I am trying is:

    hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple "hello world" program which I am using to check the basic functioning. My Python code is:

    #!/usr/bin/env python
    import subprocess
    subprocess.call(["./hello"])

    Any help with getting the executable to run with Python in Hadoop streaming, or help with debugging this, will get me forward in this. Thanks, Ganesh
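
    As a purely speculative variation (not from the question), a mapper along these lines is sometimes used to rule out path and permission problems: files shipped with -file land in the task's working directory, and the executable bit can be lost on the way, so the sketch below builds an absolute path and re-sets that bit before calling the binary.

    #!/usr/bin/env python
    # Hypothetical mapper sketch; "hello" is assumed to have been shipped with
    # -file and to print a single line on stdout for each invocation.
    import os
    import stat
    import subprocess
    import sys

    binary = os.path.join(os.getcwd(), "hello")
    os.chmod(binary, os.stat(binary).st_mode | stat.S_IEXEC)   # restore exec bit

    for line in sys.stdin:
        out = subprocess.Popen([binary], stdout=subprocess.PIPE).communicate()[0]
        sys.stdout.write("%s\t%s\n" % (line.strip(),
                                       out.decode("ascii", "replace").strip()))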

    Read the article

  • How to do asynchronous HTTP requests with epoll and Python 3.1

    - by flow
    There is an interesting page http://scotdoyle.com/python-epoll-howto.html about how to do asynchronous / non-blocking / AIO HTTP serving in Python 3. There is the Tornado web server, which does include a non-blocking HTTP client. I have managed to port parts of the server to Python 3.1, but the implementation of the client requires pyCurl and seems to have problems (with one participant stating how "Libcurl is such a pain in the neck", and looking at the incredibly ugly pyCurl page I doubt pyCurl will arrive in py3+ any time soon). Now that epoll is available in the standard library, it should be possible to do asynchronous HTTP requests out of the box with Python. I really do not want to use asyncore or whatnot; epoll has a reputation for being the ideal tool for the task, and it is part of the Python distribution, so using anything but epoll for non-blocking HTTP is highly counterintuitive (prove me wrong if you feel like it). Oh, and I feel threading is horrible. No threading. I use Stackless. People further interested in the topic of asynchronous HTTP should not miss this talk by Peter Portante at PyCon 2010; also of interest is the keynote, where speaker Antonio Rodriguez at one point emphasizes the importance of having up-to-date web technology libraries right in the standard library.
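
    For reference, a deliberately bare-bones sketch (not from the linked page or the question) of a single epoll-driven HTTP/1.0 GET using only the standard library on Linux: DNS and connect are left blocking, and there is no redirect or chunked-transfer handling, so it is a starting point rather than a client.

    import select
    import socket

    def fetch(host, path="/", port=80):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((host, port))              # blocking connect, for brevity
        sock.setblocking(False)
        request = ("GET {0} HTTP/1.0\r\nHost: {1}\r\n\r\n"
                   .format(path, host).encode("ascii"))
        ep = select.epoll()
        ep.register(sock.fileno(), select.EPOLLOUT)
        response = b""
        try:
            while True:
                for fileno, event in ep.poll(10):
                    if event & select.EPOLLOUT:
                        sock.send(request)                  # writable: send request
                        ep.modify(fileno, select.EPOLLIN)   # then wait for the reply
                    elif event & select.EPOLLIN:
                        chunk = sock.recv(4096)
                        if not chunk:                       # server closed: done
                            return response
                        response += chunk
                    elif event & select.EPOLLHUP:
                        return response
        finally:
            ep.unregister(sock.fileno())
            ep.close()
            sock.close()

    print(fetch("example.com")[:120])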

    Read the article

  • Pass by reference in Boost::Python

    - by Fabzter
    Hi everybody. Consider something like:

    struct Parameter {
        int a;
        Parameter() { a = 0; }
        void setA(int newA) { a = newA; }
    };

    struct MyClass {
        void changeParameter(Parameter &p) { p.setA(-1); }
    };

    Well, let's fast forward and imagine I have already wrapped those classes, exposing everything to Python. Imagine also that I instantiate an object of Parameter in the C++ code, which I pass to the Python script, and that the Python script uses a MyClass object to modify the instance of Parameter I created at the beginning in the C++ code. After that code executes, the C++ Parameter instance is unchanged!!! This means it was passed by value (or something alike :S), not by reference. But I thought I declared it to be passed by reference... I can't seem to find Boost::Python documentation about passing by reference (although there seems to be enough doc about returning by reference...). Can anyone give some hint or pointer please?

    Read the article

  • Making Python 2.6 exception handling backward compatible

    - by m2o
    I have the following Python code:

    try:
        pr.update()
    except ConfigurationException as e:
        returnString = e.line + ' ' + e.errormsg

    This works under Python 2.6, but the "as e" syntax fails under previous versions. How can I resolve this? Or in other words, how do I catch user-defined exceptions (and use their instance variables) under python 2.6. Thank you!
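
    A sketch of the usual version-neutral workaround (with made-up stand-ins for pr and ConfigurationException so it runs on its own): sys.exc_info() returns the active exception instance on old and new interpreters alike, so neither the 2.6-only "except ... as e" nor the pre-3.0-only "except ..., e" spelling is needed.

    import sys

    class ConfigurationException(Exception):
        def __init__(self, line, errormsg):
            Exception.__init__(self, errormsg)
            self.line = line
            self.errormsg = errormsg

    def update():
        raise ConfigurationException('line 3', 'bad value')

    try:
        update()
    except ConfigurationException:
        e = sys.exc_info()[1]                  # the exception instance, any version
        returnString = e.line + ' ' + e.errormsg
    print(returnString)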

    Read the article

  • mod_python with Python 2.6 on Windows

    - by Vulcan Eager
    How do I install mod_python to run with Python 2.6 on a Windows machine? I could not find an installer for Python 2.6. I downloaded this installer for (mod_python on Python 2.5): mod_python-3.3.1.win32-py2.5-Apache2.2.exe and extracted it to get PLATLIB and SCRIPTS folders. Where do I go from here?

    Read the article

  • Python HTTPS requests (urllib2) fail on Ubuntu 12.04 without proxy

    - by Pablo
    I have a little app I wrote in Python and it used to work... until yesterday, when it suddenly started giving me an error in an HTTPS connection. I don't remember if there was an update, but both Python 2.7.3rc2 and Python 3.2 are failing just the same. I googled it and found out that this happens when people are behind a proxy, but I'm not (and nothing has changed in my network since the last time it worked). My sister's computer, running Windows and Python 2.7.2, has no problems (on the same network).

      response = urllib2.urlopen(url).read()
    File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
      return _opener.open(url, data, timeout)
    File "/usr/lib/python2.7/urllib2.py", line 400, in open
      response = self._open(req, data)
    File "/usr/lib/python2.7/urllib2.py", line 418, in _open
      '_open', req)
    File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
      result = func(*args)
    File "/usr/lib/python2.7/urllib2.py", line 1215, in https_open
      return self.do_open(httplib.HTTPSConnection, req)
    File "/usr/lib/python2.7/urllib2.py", line 1177, in do_open
      raise URLError(err)
    urllib2.URLError: <urlopen error [Errno 8] _ssl.c:504: EOF occurred in violation of protocol>

    What's wrong? Any help is appreciated. PS.: Older Python versions don't work either, not in my system and not in a live session from USB, but they DO work in an Ubuntu 11.10 live session.
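
    Not a diagnosis of the question above, but this error text often comes up when a server dislikes the default SSLv23 handshake, and a commonly posted workaround is to force TLSv1 through a custom handler. The class names here are invented for the sketch, and it assumes Python 2.x with the ssl module available.

    import httplib
    import socket
    import ssl
    import urllib2

    class TLSv1Connection(httplib.HTTPSConnection):
        """HTTPSConnection that pins the handshake to TLSv1."""
        def connect(self):
            sock = socket.create_connection((self.host, self.port), self.timeout)
            self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file,
                                        ssl_version=ssl.PROTOCOL_TLSv1)

    class TLSv1Handler(urllib2.HTTPSHandler):
        def https_open(self, req):
            return self.do_open(TLSv1Connection, req)

    opener = urllib2.build_opener(TLSv1Handler())
    response = opener.open('https://example.com/').read()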

    Read the article

  • Python bindings for C++ code using OpenCV giving segmentation fault

    - by lightalchemist
    I'm trying to write a Python wrapper for some C++ code that makes use of OpenCV, but I'm having difficulties returning the result, which is an OpenCV C++ Mat object, to the Python interpreter. I've looked at OpenCV's source and found the file cv2.cpp, which has conversion functions to perform conversions to and fro between PyObject* and OpenCV's Mat. I made use of those conversion functions but got a segmentation fault when I tried to use them. I basically need some suggestions/sample code/online references on how to interface Python and C++ code that makes use of OpenCV, specifically with the ability to return OpenCV's C++ Mat to the Python interpreter, or perhaps suggestions on how/where to start investigating the cause of the segmentation fault. Currently I'm using Boost Python to wrap the code. Thanks in advance for any replies. The relevant code:

    // This is the function that is giving the segmentation fault.
    PyObject* ABC::doSomething(PyObject* image)
    {
        Mat m;
        pyopencv_to(image, m); // This line gives segmentation fault.

        // Some code to create cppObj from CPP library that uses OpenCV
        cv::Mat processedImage = cppObj->align(m);

        return pyopencv_from(processedImage);
    }

    The conversion functions taken from OpenCV's source follow. The conversion code gives a segmentation fault at the commented line with "if (!PyArray_Check(o)) ...".

    static int pyopencv_to(const PyObject* o, Mat& m, const char* name = "<unknown>", bool allowND=true)
    {
        if(!o || o == Py_None)
        {
            if( !m.data )
                m.allocator = &g_numpyAllocator;
            return true;
        }

        if( !PyArray_Check(o) ) // Segmentation fault inside PyArray_Check(o)
        {
            failmsg("%s is not a numpy array", name);
            return false;
        }

        int typenum = PyArray_TYPE(o);
        int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S :
                   typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S :
                   typenum == NPY_INT || typenum == NPY_LONG ? CV_32S :
                   typenum == NPY_FLOAT ? CV_32F :
                   typenum == NPY_DOUBLE ? CV_64F : -1;

        if( type < 0 )
        {
            failmsg("%s data type = %d is not supported", name, typenum);
            return false;
        }

        int ndims = PyArray_NDIM(o);
        if(ndims >= CV_MAX_DIM)
        {
            failmsg("%s dimensionality (=%d) is too high", name, ndims);
            return false;
        }

        int size[CV_MAX_DIM+1];
        size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type);
        const npy_intp* _sizes = PyArray_DIMS(o);
        const npy_intp* _strides = PyArray_STRIDES(o);
        bool transposed = false;

        for(int i = 0; i < ndims; i++)
        {
            size[i] = (int)_sizes[i];
            step[i] = (size_t)_strides[i];
        }

        if( ndims == 0 || step[ndims-1] > elemsize )
        {
            size[ndims] = 1;
            step[ndims] = elemsize;
            ndims++;
        }

        if( ndims >= 2 && step[0] < step[1] )
        {
            std::swap(size[0], size[1]);
            std::swap(step[0], step[1]);
            transposed = true;
        }

        if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] )
        {
            ndims--;
            type |= CV_MAKETYPE(0, size[2]);
        }

        if( ndims > 2 && !allowND )
        {
            failmsg("%s has more than 2 dimensions", name);
            return false;
        }

        m = Mat(ndims, size, type, PyArray_DATA(o), step);

        if( m.data )
        {
            m.refcount = refcountFromPyObject(o);
            m.addref(); // protect the original numpy array from deallocation
                        // (since Mat destructor will decrement the reference counter)
        };
        m.allocator = &g_numpyAllocator;

        if( transposed )
        {
            Mat tmp;
            tmp.allocator = &g_numpyAllocator;
            transpose(m, tmp);
            m = tmp;
        }
        return true;
    }

    static PyObject* pyopencv_from(const Mat& m)
    {
        if( !m.data )
            Py_RETURN_NONE;
        Mat temp, *p = (Mat*)&m;
        if(!p->refcount || p->allocator != &g_numpyAllocator)
        {
            temp.allocator = &g_numpyAllocator;
            m.copyTo(temp);
            p = &temp;
        }
        p->addref();
        return pyObjectFromRefcount(p->refcount);
    }

    My Python test program:

    import pysomemodule # My python wrapped library.
    import cv2

    def main():
        myobj = pysomemodule.ABC("faces.train") # Create python object. This works.
        image = cv2.imread('61.jpg')
        processedImage = myobj.doSomething(image)
        cv2.imshow("test", processedImage)
        cv2.waitKey()

    if __name__ == "__main__":
        main()

    Read the article

  • Segmentation fault when creating a Phonon MediaObject

    - by Luke Hansford
    I have a music playing program made using PySide which uses Phonon to play back audio. I updated to MacOS X Mavericks a few days ago, which meant I needed to reinstall PySide. I'm not sure which of these actions has caused this, but now whenever I try to create a Phonon MediaObject I get a Segmentation Fault: 11 from Python. It's not just in my program, it happens when trying to create a MediaObject in Python without any other actions. I'm getting the following error message from my Mac whenever it crashes:

    Process: Python [13711]
    Path: /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
    Identifier: org.python.python
    Version: 2.7.5 (2.7.5)
    Code Type: X86-64 (Native)
    Parent Process: bash [13707]
    Responsible: Terminal [13704]
    User ID: 501
    Date/Time: 2013-11-01 19:47:53.164 +1000
    OS Version: Mac OS X 10.9 (13A603)
    Report Version: 11
    Anonymous UUID: C2686854-18CA-9D37-26E9-60050E3C4DA6
    Sleep/Wake UUID: BB983BF6-CCE2-44D1-82A0-1C73382DFFE4
    Crashed Thread: 0 Dispatch queue: com.apple.main-thread
    Exception Type: EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008
    VM Regions Near 0x8:
    --> __TEXT 00000001082e8000-00000001082e9000 [ 4K] r-x/rwx SM=COW /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python

    Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
    0 QtCore 0x000000010a1b34cb QObject::moveToThread(QThread*) + 17
    1 QtDBus 0x000000010d55f98b QDBusDefaultConnection::QDBusDefaultConnection(QDBusConnection::BusType, char const*) + 171
    2 QtDBus 0x000000010d55ebdf QDBusConnection::sessionBus() + 71
    3 phonon 0x000000010d50228d Phonon::FactoryPrivate::FactoryPrivate() + 189
    4 phonon 0x000000010d5024d5 Phonon::$_249::operator->() + 99
    5 phonon 0x000000010d502991 Phonon::Factory::registerFrontendObject(Phonon::MediaNodePrivate*) + 17
    6 phonon 0x000000010d50b27e Phonon::MediaNodePrivate::MediaNodePrivate(Phonon::MediaNodePrivate::CastId) + 80
    7 phonon 0x000000010d50f570 Phonon::MediaObjectPrivate::MediaObjectPrivate() + 24
    8 phonon 0x000000010d50be9d Phonon::MediaObject::MediaObject(QObject*) + 45
    9 phonon.so 0x000000010d42f24a Sbk_Phonon_MediaObject_Init + 458
    10 org.python.python 0x0000000108338707 type_call + 189
    11 org.python.python 0x00000001082f74fd PyObject_Call + 101
    12 org.python.python 0x00000001083714f0 PyEval_EvalFrameEx + 15525
    13 org.python.python 0x0000000108373aaf fast_function + 182
    14 org.python.python 0x0000000108370919 PyEval_EvalFrameEx + 12494
    15 org.python.python 0x000000010836d721 PyEval_EvalCodeEx + 1638
    16 org.python.python 0x000000010836d0b5 PyEval_EvalCode + 54
    17 org.python.python 0x000000010838beb8 run_mod + 53
    18 org.python.python 0x000000010838bf5f PyRun_FileExFlags + 137
    19 org.python.python 0x000000010838baad PyRun_SimpleFileExFlags + 718
    20 org.python.python 0x000000010839c58b Py_Main + 3039
    21 libdyld.dylib 0x00007fff8e4fb5fd start + 1

    Thread 1:: Dispatch queue: com.apple.libdispatch-manager
    0 libsystem_kernel.dylib 0x00007fff8c938662 kevent64 + 10
    1 libdispatch.dylib 0x00007fff923e743d _dispatch_mgr_invoke + 239
    2 libdispatch.dylib 0x00007fff923e7152 _dispatch_mgr_thread + 52

    Thread 2:
    0 libsystem_kernel.dylib 0x00007fff8c937e6a __workq_kernreturn + 10
    1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330
    2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13

    Thread 3:
    0 libsystem_kernel.dylib 0x00007fff8c937e6a __workq_kernreturn + 10
    1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330
    2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13

    Thread 4:
    0 libsystem_kernel.dylib 0x00007fff8c937e6a __workq_kernreturn + 10
    1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330
    2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13

    Thread 0 crashed with X86 Thread State (64-bit):
    rax: 0x00007feba0d19700 rbx: 0x000000010d5b7098 rcx: 0x00000000002f4180 rdx: 0x000000000012c040
    rdi: 0x0000000000000000 rsi: 0x00007feba0d19700 rbp: 0x00007fff57917210 rsp: 0x00007fff579171d0
    r8: 0x00007feba0fd5d10 r9: 0x00007feba0ff5310 r10: 0x0000000019c04cbe r11: 0x0000000070769b38
    r12: 0x00007fff57917220 r13: 0x00007feba0c07190 r14: 0x0000000000000000 r15: 0x00007feba0fe1430
    rip: 0x000000010a1b34cb rfl: 0x0000000000010202 cr2: 0x0000000000000008
    Logical CPU: 0
    Error Code: 0x00000004
    Trap Number: 14

    Anyone have any ideas about what is happening?

    Read the article

  • Python script won't write data when run from cron

    - by Ruud
    When I run a Python script in a terminal it runs as expected; it downloads the file and saves it in the desired spot.

    sudo python script.py

    I've added the Python script to the root crontab, but then it runs as it is supposed to except it does not write the file.

    $ sudo crontab -l
    * * * * * python /home/test/script.py >> /var/log/test.log 2>&1

    Below is a simplified script that still has the problem:

    #!/usr/bin/python
    scheduleUrl = 'http://test.com/schedule.xml'
    schedule = '/var/test/schedule.xml'

    # Download url and save as filename
    def wget(url, filename):
        import urllib2
        try:
            response = urllib2.urlopen(url)
        except Exception:
            import logging, traceback
            logging.exception('generic exception: ' + traceback.format_exc())
        else:
            print('writing:' + filename + ';')
            output = open(filename, 'wb')
            output.write(response.read())
            output.close()

    # Download the schedule
    wget(scheduleUrl, schedule)

    I do get the message "writing:name of file;" inside the log to which the cron entry outputs. But the actual file is nowhere to be found... The dir /var/test is chmodded to 777, and using whatever user, I am allowed to add and change files as I please.

    Read the article

  • Emacs: methods for debugging Python

    - by Tom Willis
    I use Emacs for all my code editing needs. Typically, I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to do to keep the code on track. However, lately I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest that this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + appengine that might be making it more difficult, but when I try to do something like M-x pdb, Run pdb (like this):

    /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current Python interpreter:

    HellooKitty:hydrant twillis$ cat ~/bin/pdb
    #! /usr/bin/env python
    if __name__ == "__main__":
        import sys
        sys.version_info
        import pdb
        pdb.main()
    HellooKitty:hydrant twillis$

    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for the GAE project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:

    Current directory is /Users/twillis/bin/

    C-c C-f Nothing happens, but

    HellooKitty:hydrant twillis$ ps aux | grep pdb
    twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
    twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
    HellooKitty:hydrant twillis$

    something is happening. C-x [space] will report that a breakpoint has been set. But I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in Emacs worthwhile? Is interactive debugging of a Google App Engine app possible? Any suggestions on how I might get this working?
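
    Not an answer to the M-x pdb setup itself, but a low-tech fallback often used in this situation is to set the breakpoint from inside the code and run the dev server in an interactive Emacs buffer (M-x shell, or a comint-enabled compile buffer); handler() below is just a stand-in for whatever request handler is being debugged.

    import pdb

    def handler(request):
        pdb.set_trace()      # execution stops here with a (Pdb) prompt in the buffer
        # ... the rest of the handler runs under the debugger ...
        return "ok"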

    Read the article

  • Resolving overloads in Boost.Python

    - by swarfrat
    I have a C++ class like this:

    class ConnectionBase {
    public:
        ConnectionBase();

        template <class T>
        void Publish(const T&);

    private:
        virtual void OnEvent(const Overload_a&) {}
        virtual void OnEvent(const Overload_b&) {}
    };

    My templates & overloads are a known fixed set of types at compile time. The application code derives from ConnectionBase and overrides OnEvent for the events it cares about. I can do this because the set of types is known. OnEvent is private because the user never calls it; the class creates a thread that calls it as a callback. The C++ code works. I have wrapped this in Boost.Python, I can import it and publish from Python. I want to create the equivalent of the following in Python:

    class ConnectionDerived : public ConnectionBase {
    public:
        ConnectionDerived();

    private:
        virtual void OnEvent(const Overload_b&) {
            // application code
        }
    };

    But... since Python isn't typed, and all the Boost.Python examples I've seen dealing with internals are on the C++ side, I'm a little puzzled as to how to do this. How do I override specific overloads?

    Read the article

  • Explain Python extensions multithreading

    - by Checkers
    The Python interpreter has a Global Interpreter Lock (GIL), and it is my understanding that extensions must acquire it in a multi-threaded environment. But the Boost.Python HOWTO page says the extension function must release the GIL and reacquire it on exit. I want to resist the temptation to guess here, so I would like to know what the GIL locking patterns should be in the following scenarios: (1) the extension is called from Python (presumably running in a Python thread), and (2) the extension's background thread calls back into Py_* functions. And a final question: why does the linked document say the GIL should be released and re-acquired?

    Read the article

  • Architecting Python application consisting of many small scripts

    - by Duke Dougal
    I am building an application which, at the moment, consists of many small Python scripts. Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script; typically the script will do a small unit of processing (for example, parse the email and store some database fields), then an item will be placed on the next queue for further processing, until eventually the email has finished going through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should implement it live. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts but one large application? Specific questions: How should I run each of these scripts -- as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this? Or is the idea of many small scripts not a good one -- would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution of functionality for each queue? What is the current preferred approach to daemonising Python scripts? Broadly, I would welcome any comments or opinions on any aspect of this. Thanks.
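
    To make the shape of one of these stages concrete, here is a rough sketch of a single queue worker using boto3; the queue names and the parse_email step are placeholders, and the loop is exactly the kind of long-polling daemon the question is asking how to supervise.

    import json
    import boto3

    sqs = boto3.resource("sqs")
    inbound = sqs.get_queue_by_name(QueueName="incoming-email")
    outbound = sqs.get_queue_by_name(QueueName="store-db-fields")

    def parse_email(raw):
        # Placeholder for this stage's real unit of work.
        return {"subject": raw.splitlines()[0] if raw else ""}

    while True:
        # Long-poll the inbound queue; returns at most 10 messages or after 20s.
        for message in inbound.receive_messages(MaxNumberOfMessages=10,
                                                WaitTimeSeconds=20):
            fields = parse_email(message.body)
            outbound.send_message(MessageBody=json.dumps(fields))  # hand off to next stage
            message.delete()                                       # ack only after success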

    Read the article

  • Implementing PyMyType_Check methods with Python C API?

    - by Paul D.
    All the Python-provided types have a check method (i.e., PyList_Check) that allows you to check if an arbitrary PyObject* is actually a specific type. How can I implement this for my own types? I haven't found anything good online for this, though it seems like a pretty normal thing to want to do. Also, maybe I'm just terrible at looking through large source trees, but I cannot for the life of me find the implementation of PyList_Check or any of its companions in the Python (2.5) source.

    Read the article

  • How does whitespace affect Python code?

    - by Codereview
    I started programming about a year ago; I've learned the C and C++ languages and bits of Java. Recently I started to learn the Python language (note: I'm using the Eclipse IDE). I'm used to formatting my code with whitespace, placing statements a bit to the right for easier readability. Since I started working with Python it seems whitespace is a problem: I get some unnecessary-whitespace warnings, and my code gets underlined (in Eclipse). After a while I figured Python is very restrictive about whitespace for some reason, so I've been looking for the effects of whitespace on Python code. How does it affect the code? Does the code work differently with unnecessary whitespace?
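
    Since the question is really about what the interpreter does with whitespace, a small illustration (not taken from the question): leading whitespace is the block structure in Python, so moving a line one indentation level changes its meaning, while trailing or extra internal whitespace is generally harmless and only triggers style warnings in IDEs such as Eclipse/PyDev.

    def sum_positive(numbers):
        total = 0
        for n in numbers:
            if n > 0:
                total += n
        return total            # dedented: returns after the whole loop

    def sum_until_first_positive(numbers):
        total = 0
        for n in numbers:
            if n > 0:
                total += n
                return total    # indented one level more: returns inside the loop

    print(sum_positive([1, -2, 3]))              # 4
    print(sum_until_first_positive([1, -2, 3]))  # 1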

    Read the article

  • Pass FORTRAN variable to Python [migrated]

    - by Matthew Bilskie
    I have a FORTRAN program that is called from a Python script (as an ArcGIS tool). I need to pass an array, Raster_Z(i,j), from FORTRAN to python. I have read about the Python subprocess module; however, I have not had much luck in understanding how to do this with FORTRAN. All examples I have found involve simple unix command line calls and not actual programs. Has anyone had any experience in passing a variable from FORTRAN to Python via a memory PIPE? Thank you for your time.
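
    A sketch of what the Python side of a pipe-based handoff can look like (the executable name raster_model.exe and the output format are assumptions; the FORTRAN side would have to WRITE each i, j, Raster_Z(i,j) triple to standard output):

    import subprocess

    proc = subprocess.Popen(["raster_model.exe"], stdout=subprocess.PIPE,
                            universal_newlines=True)

    raster_z = {}
    for line in proc.stdout:                 # one "i j value" triple per line
        i, j, value = line.split()
        raster_z[(int(i), int(j))] = float(value)

    proc.wait()
    print(len(raster_z), "cells read")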

    Read the article
