Search Results

Search found 15224 results on 609 pages for 'parallel python'.

Page 359/609 | < Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >

  • Making HTTP POST request

    - by infrared
    I'm trying to make a POST request to retrieve information about a book. Here is the code that returns HTTP code: 302, Moved:

        import httplib, urllib

        params = urllib.urlencode({'isbn': '9780131185838',
                                   'catalogId': '10001',
                                   'schoolStoreId': '15828',
                                   'search': 'Search'})
        headers = {"Content-type": "application/x-www-form-urlencoded",
                   "Accept": "text/plain"}
        conn = httplib.HTTPConnection("bkstr.com:80")
        conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch",
                     params, headers)
        response = conn.getresponse()
        print response.status, response.reason
        data = response.read()
        conn.close()

    When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code? Thanks.

    EDIT: Here's what I get when I call print response.msg:

        302 Moved
        Date: Tue, 07 Sep 2010 16:54:29 GMT
        Vary: Host,Accept-Encoding,User-Agent
        Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Content-Type: text/plain; charset=utf-8

    Seems that the location points to the same url I'm trying to access in the first place?

    EDIT2: I've tried using urllib2 as suggested here. Here is the code:

        import urllib, urllib2

        url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch'
        values = {'isbn': '9780131185838',
                  'catalogId': '10001',
                  'schoolStoreId': '15828',
                  'search': 'Search'}
        data = urllib.urlencode(values)
        req = urllib2.Request(url, data)
        response = urllib2.urlopen(req)
        print response.geturl()
        print response.info()
        the_page = response.read()
        print the_page

    And here is the output:

        http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch
        Date: Tue, 07 Sep 2010 16:58:35 GMT
        Pragma: No-cache
        Cache-Control: no-cache
        Expires: Thu, 01 Jan 1970 00:00:00 GMT
        Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/
        Vary: Accept-Encoding,User-Agent
        X-UA-Compatible: IE=EmulateIE7
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=utf-8
        Content-Language: en-US
        Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
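
    A possible angle, offered only as a hedged sketch: the servlet may require the session cookies (and possibly a storeId field) that a browser picks up from the search page before posting. The initial GET and the extra 'storeId' field below are assumptions, not confirmed requirements of the site:

        import cookielib, urllib, urllib2

        # Share cookies between requests so the JSESSIONID set by the first
        # page is sent back with the POST.
        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        # Visit the search form first to pick up session cookies.
        opener.open('http://www.bkstr.com/webapp/wcs/stores/servlet/'
                    'BuybackMaterialsView?langId=-1&catalogId=10001'
                    '&storeId=10051&schoolStoreId=15828')

        data = urllib.urlencode({'isbn': '9780131185838',
                                 'catalogId': '10001',
                                 'storeId': '10051',  # hypothetical extra field
                                 'schoolStoreId': '15828',
                                 'search': 'Search'})
        response = opener.open('http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch',
                               data)
        print response.geturl()
        print response.read()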

    Read the article

  • Is this the correct way to convert a UTC datetime string into localtime?

    - by Steve
    Is this the correct way to convert a UTC string into local time allowing for daylight savings? It looks ok to me but you never know :)

        import time

        UTC_STRING = "2010-03-25 02:00:00"
        stamp = time.mktime(time.strptime(UTC_STRING, "%Y-%m-%d %H:%M:%S"))
        stamp -= time.timezone
        now = time.localtime()
        if now[8] == 1:
            stamp += 60*60
        elif now[8] == -1:
            stamp -= 60*60
        print 'UTC: ', time.gmtime(stamp)
        print 'Local: ', time.localtime(stamp)

    Results from New Zealand (GMT+12, dst=1):

        UTC:   (2010, 3, 25, 2, 0, 0, 3, 84, 0)
        Local: (2010, 3, 25, 15, 0, 0, 3, 84, 1)
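
    For comparison, a minimal sketch of the more common route: treat the parsed tuple as UTC and convert it straight to a POSIX timestamp with calendar.timegm, which avoids the manual timezone/DST arithmetic entirely (localtime() then applies the local rules, including DST, on its own):

        import time, calendar

        UTC_STRING = "2010-03-25 02:00:00"
        # timegm interprets the struct_time as UTC, so no offset juggling.
        stamp = calendar.timegm(time.strptime(UTC_STRING, "%Y-%m-%d %H:%M:%S"))
        print 'UTC: ', time.gmtime(stamp)
        print 'Local: ', time.localtime(stamp)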

    Read the article

  • openerp client customization

    - by iamgopal
    The OpenERP client seems to be nice and working. I would like to hack it and use it as a front end to my OpenERP solution, but the documentation regarding client-side design and customization on the OpenERP site is poor. Is there any good reference or documentation available for digging further into OpenERP client-side coding? Alternatively, is there a similar client solution (i.e. a rich internet client) that can be plugged into any back-end system?

    Read the article

  • 404 not found in telnet, works fine in browser

    - by Viranch Mehta
    I am having a very irritating problem: when I open a URL ( http://celebs.widewallpapers.net/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg ) in a browser, it works fine, but when I try to access it by telnet from bash, I get 404 Not Found. My exact terminal session:

        $ telnet celebs.widewallpapers.net 80
        HEAD /md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg HTTP/1.0
        [enter]
        [enter]
        HTTP/1.1 404 Not Found
        Server: nginx
        Date: Sun, 23 May 2010 21:36:05 GMT
        Content-Type: text/html; charset=windows-1251
        Content-Length: 166
        Connection: close

    Please help me with this, as I'm trying to make a C batch-downloader that works much the same way as the telnet session.
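
    A likely cause, though it is an assumption since the server configuration isn't visible: the host does name-based virtual hosting, and a request without a Host header falls through to a default site that doesn't have the file. Adding "Host: celebs.widewallpapers.net" after the request line in the telnet session (or in the C downloader) is worth trying first; httplib adds it automatically, so a quick Python check looks like this:

        import httplib

        conn = httplib.HTTPConnection("celebs.widewallpapers.net")
        # httplib sends the Host header for us; with raw telnet it has to be
        # typed by hand after the HEAD line.
        conn.request("HEAD",
                     "/md/a/adriana-lima/1440/Adriana-Lima-1440x900-002.jpg")
        response = conn.getresponse()
        print response.status, response.reason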

    Read the article

  • Unexplained file not found for an existing file

    - by knishua
    Following is the error that occurs in this part of the code. Although the path is valid, a RuntimeError occurs, which is strange. What is happening, and how can I get this to work?

        for root, dirs, files in os.walk(self.path):
            for f in files:
                if (f.split('.')[1] == "mb"):
                    z = utils.executeInMainThreadWithResult(self.contains, (f.split('.')[0]))
                    if not (isinstance(z, NoneType)):
                        cmds.symbolButton(self.arSubCategory + f.split('.')[0],
                                          image=(z[1].replace("\\", "/")),
                                          width=35, height=70,
                                          c="h.imp_file(" + "\"" + root.replace("\\", "/") + "/" + f + "\"" + ")")

        def contains(self, imageName):
            print 'imageName : ', imageName, '\n'
            for root, dirs, files in os.walk(self.path + "images"):
                for g in files:
                    x = re.search(imageName, g)
                    if not (isinstance(x, NoneType)):
                        print 'g ', root + "/" + g.replace("\\", "/"), '\n'
                        return (1, (root + "/" + g))

    Error:

        # z is (1, 'T:/Reference_Library/Reference_work/Char_models/Workfiles/images\\rboxdisk1\\female\\highpoly/granny01_highpoly.jpg')
        Error: File not found: T:/Reference_Library/Reference_work/Char_models/Workfiles/images/rboxdisk1/female/highpoly/granny01_highpoly.jpg
        Traceback (most recent call last):
          File "<maya console>", line 115, in <module>
          File "<maya console>", line 65, in showWindowanimLibrary
        RuntimeError: File not found: T:/Reference_Library/Reference_work/Char_models/Workfiles/images/rboxdisk1/female/highpoly/granny01_highpoly.jpg

    Read the article

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a followup from "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right. (The SparseDataSet is broken, so attempting to save/load with that dataset container type will fail no matter what. Also, PyML is inconsistent about whether labels should be numbers or strings; it turns out that the oneAgainstRest function is not good enough, because the labels need to be strings and at the same time convertible to floats, since in some places a label is assumed to be a string and elsewhere it is converted to float.) After a great deal of hacking I was finally able to figure out a way to save and load my multi-class classifier without it blowing up with an error. However, although it no longer gives me an error message, it is still not quite right: the accuracy of the classifier drops significantly when it is saved and then reloaded, so I'm still missing a piece of the puzzle. I am currently using the following custom multi-class classifier for training, saving, and loading:

        class SVM(object):
            def __init__(self, features_or_filename, labels=None, kernel=None):
                if isinstance(features_or_filename, str):
                    filename = features_or_filename;
                    if labels != None:
                        raise ValueError, "Labels must be None if loading from a file.";
                    with open(os.path.join(filename, "uniquelabels.list"), "rb") as uniquelabelsfile:
                        self.uniquelabels = sorted(list(set(pickle.load(uniquelabelsfile))));
                    self.labeltoindex = {};
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx;
                    self.classifiers = [];
                    for classidx, classname in enumerate(self.uniquelabels):
                        self.classifiers.append(PyML.classifiers.svm.loadSVM(
                            os.path.join(filename, str(classname) + ".pyml.svm"),
                            datasetClass=PyML.VectorDataSet));
                else:
                    features = features_or_filename;
                    if labels == None:
                        raise ValueError, "Labels must not be None when training.";
                    self.uniquelabels = sorted(list(set(labels)));
                    self.labeltoindex = {};
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx;
                    points = [[float(xij) for xij in xi] for xi in features];
                    self.classifiers = [PyML.SVM(kernel) for label in self.uniquelabels];
                    for i in xrange(len(self.uniquelabels)):
                        currentlabel = self.uniquelabels[i];
                        currentlabels = ['+1' if k == currentlabel else '-1' for k in labels];
                        currentdataset = PyML.VectorDataSet(points, L=currentlabels, positiveClass='+1');
                        self.classifiers[i].train(currentdataset, saveSpace=False);

            def accuracy(self, pts, labels):
                logger = logging.getLogger("ml");
                correct = 0;
                total = 0;
                classindexes = [self.labeltoindex[label] for label in labels];
                h = self.hypotheses(pts);
                for idx in xrange(len(pts)):
                    if h[idx] == classindexes[idx]:
                        logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\"" % (
                            self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]));
                        correct += 1;
                    else:
                        logger.info("WRONG: Actual \"%s\" != Predicted \"%s\"" % (
                            self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                    total += 1;
                return float(correct) / float(total);

            def prediction(self, pt):
                h = self.hypothesis(pt);
                if h != None:
                    return self.uniquelabels[h];
                return h;

            def predictions(self, pts):
                h = self.hypotheses(self, pts);
                return [self.uniquelabels[x] if x != None else None for x in h];

            def hypothesis(self, pt):
                bestvalue = None;
                bestclass = None;
                dataset = PyML.VectorDataSet([pt]);
                for classidx, classifier in enumerate(self.classifiers):
                    val = classifier.decisionFunc(dataset, 0);
                    if (bestvalue == None) or (val > bestvalue):
                        bestvalue = val;
                        bestclass = classidx;
                return bestclass;

            def hypotheses(self, pts):
                bestvalues = [None for pt in pts];
                bestclasses = [None for pt in pts];
                dataset = PyML.VectorDataSet(pts);
                for classidx, classifier in enumerate(self.classifiers):
                    for ptidx in xrange(len(pts)):
                        val = classifier.decisionFunc(dataset, ptidx);
                        if (bestvalues[ptidx] == None) or (val > bestvalues[ptidx]):
                            bestvalues[ptidx] = val;
                            bestclasses[ptidx] = classidx;
                return bestclasses;

            def save(self, filename):
                if not os.path.exists(filename):
                    os.makedirs(filename);
                with open(os.path.join(filename, "uniquelabels.list"), "wb") as uniquelabelsfile:
                    pickle.dump(self.uniquelabels, uniquelabelsfile, pickle.HIGHEST_PROTOCOL);
                for classidx, classname in enumerate(self.uniquelabels):
                    self.classifiers[classidx].save(os.path.join(filename, str(classname) + ".pyml.svm"));

    I am using the latest version of PyML (0.7.2, although PyML.__version__ reports 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001. So there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what that is?

    Read the article

  • PyOpenGL - passing transformation matrix into shader

    - by M-V
    I am having trouble passing projection and modelview matrices into the GLSL shader from my PyOpenGL code. My understanding is that OpenGL matrices are column major, but when I pass in the projection and modelview matrices as shown, I don't see anything. I tried the transpose of the matrices; it worked for the modelview matrix, but the projection matrix doesn't work either way. Here is the code:

        import OpenGL
        from OpenGL.GL import *
        from OpenGL.GL.shaders import *
        from OpenGL.GLU import *
        from OpenGL.GLUT import *
        from OpenGL.GLUT.freeglut import *
        from OpenGL.arrays import vbo
        import numpy, math, sys

        strVS = """
        attribute vec3 aVert;
        uniform mat4 uMVMatrix;
        uniform mat4 uPMatrix;
        uniform vec4 uColor;
        varying vec4 vCol;
        void main() {
            // option #1 - fails
            gl_Position = uPMatrix * uMVMatrix * vec4(aVert, 1.0);
            // option #2 - works
            gl_Position = vec4(aVert, 1.0);
            // set color
            vCol = vec4(uColor.rgb, 1.0);
        }
        """
        strFS = """
        varying vec4 vCol;
        void main() {
            // use vertex color
            gl_FragColor = vCol;
        }
        """

        # particle system class
        class Scene:
            # initialization
            def __init__(self):
                # create shader
                self.program = compileProgram(compileShader(strVS, GL_VERTEX_SHADER),
                                              compileShader(strFS, GL_FRAGMENT_SHADER))
                glUseProgram(self.program)
                self.pMatrixUniform = glGetUniformLocation(self.program, 'uPMatrix')
                self.mvMatrixUniform = glGetUniformLocation(self.program, "uMVMatrix")
                self.colorU = glGetUniformLocation(self.program, "uColor")
                # attributes
                self.vertIndex = glGetAttribLocation(self.program, "aVert")
                # color
                self.col0 = [1.0, 1.0, 0.0, 1.0]
                # define quad vertices
                s = 0.2
                quadV = [
                    -s, s, 0.0,
                    -s, -s, 0.0,
                    s, s, 0.0,
                    s, s, 0.0,
                    -s, -s, 0.0,
                    s, -s, 0.0
                ]
                # vertices
                self.vertexBuffer = glGenBuffers(1)
                glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
                vertexData = numpy.array(quadV, numpy.float32)
                glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData, GL_STATIC_DRAW)

            # render
            def render(self, pMatrix, mvMatrix):
                # use shader
                glUseProgram(self.program)
                # set proj matrix
                glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix)
                # set modelview matrix
                glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix)
                # set color
                glUniform4fv(self.colorU, 1, self.col0)
                # enable arrays
                glEnableVertexAttribArray(self.vertIndex)
                # set buffers
                glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
                glVertexAttribPointer(self.vertIndex, 3, GL_FLOAT, GL_FALSE, 0, None)
                # draw
                glDrawArrays(GL_TRIANGLES, 0, 6)
                # disable arrays
                glDisableVertexAttribArray(self.vertIndex)

        class Renderer:
            def __init__(self):
                pass

            def reshape(self, width, height):
                self.width = width
                self.height = height
                self.aspect = width/float(height)
                glViewport(0, 0, self.width, self.height)
                glEnable(GL_DEPTH_TEST)
                glDisable(GL_CULL_FACE)
                glClearColor(0.8, 0.8, 0.8, 1.0)
                glutPostRedisplay()

            def keyPressed(self, *args):
                sys.exit()

            def draw(self):
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
                # build projection matrix
                fov = math.radians(45.0)
                f = 1.0/math.tan(fov/2.0)
                zN, zF = (0.1, 100.0)
                a = self.aspect
                pMatrix = numpy.array([f/a, 0.0, 0.0, 0.0,
                                       0.0, f, 0.0, 0.0,
                                       0.0, 0.0, (zF+zN)/(zN-zF), -1.0,
                                       0.0, 0.0, 2.0*zF*zN/(zN-zF), 0.0], numpy.float32)
                # modelview matrix
                mvMatrix = numpy.array([1.0, 0.0, 0.0, 0.0,
                                        0.0, 1.0, 0.0, 0.0,
                                        0.0, 0.0, 1.0, 0.0,
                                        0.5, 0.0, -5.0, 1.0], numpy.float32)
                # render
                self.scene.render(pMatrix, mvMatrix)
                # swap buffers
                glutSwapBuffers()

            def run(self):
                glutInitDisplayMode(GLUT_RGBA)
                glutInitWindowSize(400, 400)
                self.window = glutCreateWindow("Minimal")
                glutReshapeFunc(self.reshape)
                glutDisplayFunc(self.draw)
                glutKeyboardFunc(self.keyPressed)  # Checks for key strokes
                self.scene = Scene()
                glutMainLoop()

        glutInit(sys.argv)
        prog = Renderer()
        prog.run()

    When I use option #2 in the shader, without either matrix, I get the following output (screenshot not reproduced here). What am I doing wrong?

    Read the article

  • Issue reading packets from a pcap file. dpkt module. What gives?

    - by Chris
    I am running the following test script to try to read packets from a sample .pcap file I have downloaded, but it won't run. I have all of the modules installed, yet none of the examples seem to work.

        import socket
        import dpkt
        import sys

        pcapReader = dpkt.pcap.Reader(file("test1.pcap", "rb"))
        for ts, data in pcapReader:
            ether = dpkt.ethernet.Ethernet(data)
            if ether.type != dpkt.ethernet.ETH_TYPE_IP:
                raise
            ip = ether.data
            src = socket.inet_ntoa(ip.src)
            dst = socket.inet_ntoa(ip.dst)
            print "%s -> %s" % (src, dst)

    For some reason, this is not being interpreted properly. When running it, I get:

        KeyError: 138
        module body in test.py at line 4
        function __init__ in pcap.py at line 105
        Program exited.

    Why is this? What's wrong?
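
    The traceback points into dpkt's Reader constructor, so a plausible reading (an assumption from the error, not verified against this exact file) is that the capture's link-layer type, 138, simply isn't one dpkt knows how to decode, and re-capturing or converting the file to plain Ethernet would sidestep it. A small sketch that reads the pcap global header with struct to confirm what the file declares:

        import struct

        with open("test1.pcap", "rb") as f:
            header = f.read(24)                      # pcap global header
            # The magic number determines the byte order of the header.
            fmt = "<IHHiIII" if header[:4] == "\xd4\xc3\xb2\xa1" else ">IHHiIII"
            fields = struct.unpack(fmt, header)
            print "link-layer type:", fields[6]      # the last header field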

    Read the article

  • pycurl script can't login to website

    - by The Jug
    I'm currently trying to get a grasp on pycurl. I'm attempting to log in to my own website. After logging in, the site should redirect to the main page; however, when trying this script I just get returned to the login page. What might I be doing wrong?

        import urllib
        import StringIO
        import pycurl

        pf = {'username': 'user', 'password': 'pass'}
        fields = urllib.urlencode(pf)
        pageContents = StringIO.StringIO()

        p = pycurl.Curl()
        p.setopt(pycurl.FOLLOWLOCATION, 1)
        p.setopt(pycurl.COOKIEFILE, './cookie_test.txt')
        p.setopt(pycurl.COOKIEJAR, './cookie_test.txt')
        p.setopt(pycurl.POST, 1)
        p.setopt(pycurl.POSTFIELDS, fields)
        p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
        p.setopt(pycurl.URL, 'http://localhost')
        p.perform()

        pageContents.seek(0)
        print pageContents.readlines()
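
    Two things worth checking, both assumptions since the login form itself isn't shown: the POST should go to the form's actual action URL rather than the site root, and any hidden fields the form carries (a CSRF token, for instance) need to be included. A sketch along those lines, with the URL and field names as placeholders:

        import urllib
        import StringIO
        import pycurl

        # Hypothetical form action and hidden token -- take the real values
        # from the login form's HTML.
        login_url = 'http://localhost/login.php'
        pf = {'username': 'user',
              'password': 'pass',
              'token': 'value-of-hidden-field'}

        pageContents = StringIO.StringIO()
        p = pycurl.Curl()
        p.setopt(pycurl.URL, login_url)
        p.setopt(pycurl.FOLLOWLOCATION, 1)
        p.setopt(pycurl.COOKIEFILE, './cookie_test.txt')
        p.setopt(pycurl.COOKIEJAR, './cookie_test.txt')
        p.setopt(pycurl.POSTFIELDS, urllib.urlencode(pf))  # implies a POST
        p.setopt(pycurl.WRITEFUNCTION, pageContents.write)
        p.perform()
        print p.getinfo(pycurl.EFFECTIVE_URL)  # where we actually ended up
        p.close()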

    Read the article

  • SQL Alchemy related Objects Error

    - by alex
        from sqlalchemy.orm import relation, backref
        from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey, Date, Sequence
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class GUI_SCENARIO(Base):
            __tablename__ = 'GUI_SCENARIO'
            Scenario_ID = Column(Integer, primary_key=True)
            Definition_Date = Column(Date)
            guiScenarioDefinition = relation('GUI_SCENARIO_DEFINITION',
                                             order_by='GUI_SCENARIO_DEFINITION.Scenario_Definition_ID',
                                             backref='guiScenario')

            def __init__(self, Scenario_ID=None, Definition_Date=None):
                self.Scenario_ID = Scenario_ID
                self.Definition_Date = Definition_Date

        class GUI_SCENARIO_DEFINITION(Base):
            __tablename__ = 'GUI_SCENARIO_DEFINITION'
            Scenario_Definition_ID = Column(Integer, Sequence('Scenario_Definition_ID_SEQ'), primary_key=True)
            Scenario_FK = Column(Integer, ForeignKey('GUI_SCENARIO.Scenario_ID'))
            Definition_Date = Column(Date)
            guiScenario = relation(GUI_SCENARIO,
                                   backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID))

            def __init__(self, Scenario_FK, Definition_Date):
                self.Scenario_FK = Scenario_FK
                self.Definition_Date = Definition_Date

            guiScenario = relation(GUI_SCENARIO,
                                   backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID))

        tableNameScenario = "GUI_SCENARIO"
        scenarioClass = getattr(MappingTablesScenario, tableNameScenario)
        tableScenario = Table(tableNameScenario, meta, autoload=True)
        mapper(scenarioClass, tableScenario)

        scenarioName = scenarioDefinition.name
        scenarioDefinitionDate = datetime.today()
        newScenario = MappingTablesScenario.GUI_SCENARIO(scenarioName, scenarioDefinitionDate)
        print newScenario.guiScenarioDefinition

    If I try to get the objects related to a scenario object, I always get this error:

        AttributeError: 'GUI_SCENARIO' object has no attribute 'guiScenarioDefinition'

    Does anyone know why I get this error?
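
    A possible cause, offered as a guess since MappingTablesScenario isn't shown: the guiScenarioDefinition relation/backref pair is declared more than once (explicitly on GUI_SCENARIO and again via the backref on GUI_SCENARIO_DEFINITION, which itself appears twice), and the class is then re-mapped with a classical mapper() against an autoloaded Table, which replaces the declarative mapping along with its relations. A sketch that declares the link exactly once and lets declarative do the mapping:

        from sqlalchemy import Column, Integer, Date, ForeignKey, Sequence
        from sqlalchemy.orm import relation, backref
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class GUI_SCENARIO(Base):
            __tablename__ = 'GUI_SCENARIO'
            Scenario_ID = Column(Integer, primary_key=True)
            Definition_Date = Column(Date)

        class GUI_SCENARIO_DEFINITION(Base):
            __tablename__ = 'GUI_SCENARIO_DEFINITION'
            Scenario_Definition_ID = Column(Integer, Sequence('Scenario_Definition_ID_SEQ'),
                                            primary_key=True)
            Scenario_FK = Column(Integer, ForeignKey('GUI_SCENARIO.Scenario_ID'))
            Definition_Date = Column(Date)

            # One relation, one backref: GUI_SCENARIO.guiScenarioDefinition is
            # created automatically, so it is not declared on GUI_SCENARIO too.
            guiScenario = relation(GUI_SCENARIO,
                                   backref=backref('guiScenarioDefinition',
                                                   order_by=Scenario_Definition_ID))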

    Read the article

  • Django TestCase testing order

    - by ziang
    If there are several test methods in a test class, I found that they are executed in alphabetical order. But I want to customize the order of execution. How can I define the execution order? For example, I want testTestA to run before testTestB.

        class Test(TestCase):
            def setUp(self):
                ...

            def testTestB(self):
                # test code

            def testTestA(self):
                # test code
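
    Test methods are normally meant to be order-independent, so rather than a documented ordering hook, a common workaround is to keep the ordered steps as plain helper methods and drive them from a single test method; a sketch:

        from django.test import TestCase

        class Test(TestCase):
            def setUp(self):
                pass  # fixtures go here

            def _step_a(self):
                pass  # code that must run first

            def _step_b(self):
                pass  # code that must run second

            def test_scenario(self):
                # Helpers without the test prefix are not collected by the
                # runner, so the order here is exactly the order they run in.
                self._step_a()
                self._step_b()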

    Read the article

  • pyODBC and Unicode Problem

    - by Aviv Giladi
    Hey guys, I'm working with pyODBC to communicate with an MS SQL 2005 Express server. The table to which I'm trying to save the data consists of nvarchar columns.

        query = u"INSERT INTO tblPersons (name, birthday, gender) VALUES('"
        query = query + name + u"', '"
        query = query + birthday + u"', '"
        query = query + gender + u"')"
        cur.execute(query)

    The variables name, birthday and gender are read from an Excel file and they are Unicode strings. When I execute the query and either look at the table with SQL Server Management Studio or run a query that fetches the data that was just inserted, all the data that was written in a non-English language turns into question marks. The data that was written in English is preserved and appears in the table correctly. I tried adding CHARSET=UTF16 to my connection string, but had no luck with that. I can use UTF-8, which works fine, but as a working convention I need all the data saved in my DB to be UTF-16. Thanks!
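
    One thing that may help, hedged because it depends on the ODBC driver in use: let pyODBC bind the values as parameters instead of splicing them into the SQL text. Bound Unicode parameters are sent as nvarchar, whereas plain '...' literals inside the statement are treated as varchar and lose anything outside the current code page (the N'...' prefix is the T-SQL equivalent when literals are unavoidable). A sketch:

        # Parameter binding keeps the values as Unicode end to end.
        cur.execute(
            "INSERT INTO tblPersons (name, birthday, gender) VALUES (?, ?, ?)",
            (name, birthday, gender))
        cnxn.commit()  # assumes the connection object is called cnxn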

    Read the article

  • Bug when drawing a QImage on a widget with PIL and PyQt

    - by oulipo
    I'm trying to write a small graphics application, and I need to construct an image with PIL that I then show in a widget. The image is correctly constructed (I can check it with im.show()), and I can convert it to a QImage that saves to disk normally (using QImage.save), but if I try to draw it directly on my QWidget, it only shows a white square. Here I commented out the code that is not working (converting the Image into a QImage and then a QPixmap results in a white square), and I made a dirty hack that saves the image to a temporary file and loads it directly into a QPixmap, which works but is not what I want to do: https://gist.github.com/f6d479f286ad75bf72b7 Does someone have an idea? If it helps: when I try to save my QImage as BMP, I can access its content, but if I save it as PNG it is completely white.
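
    A frequent cause of exactly this symptom, offered as a guess since the gist isn't reproduced here: a QImage built from a string buffer does not copy the data, so if the temporary string is garbage-collected before the image is used, it reads back blank. Keeping the buffer alive until the pixmap copy is made, roughly like this, may be enough:

        from PyQt4 import QtGui

        def pil_to_pixmap(im):
            # Convert to a predictable channel layout first.
            im = im.convert("RGBA")
            data = im.tostring("raw", "BGRA")
            # QImage does not copy 'data', so it must stay referenced at least
            # until QPixmap.fromImage() has made its own copy below.
            image = QtGui.QImage(data, im.size[0], im.size[1],
                                 QtGui.QImage.Format_ARGB32)
            return QtGui.QPixmap.fromImage(image)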

    Read the article

  • PyQt4: Hide widget and resize window

    - by masterLoki
    Hi everyone: I'm working with several widgets, but the solution just won't come out. What I have is a series of buttons in a series of QHBoxLayouts. Some buttons are hidden by default, but they appear when needed. To solve space issues, all buttons have a minimum and maximum size so they always look well packed. I also have a QTextEdit, visible by default, which sits in a QVBoxLayout together with the QHBoxLayouts that hold the buttons. The problem is this: when I hide the QTextEdit and show the other buttons, the window won't resize. After searching I found that self.ui.layout().setSizeConstraint(QtGui.QLayout.SetFixedSize) will do the trick, but it takes the maximum size from all widgets, so I end up with a huge window. Using self.ui.layout().setSizeConstraint(QtGui.QLayout.SetMinAndMaxSize) won't resize the window either. I already tried self.ui.resize(0,0); when calling self.ui.layout().update() I got False (which I find odd, see http://doc.trolltech.com/4.6/qlayout.html#activate), and I also tried to override sizeHint(), but it keeps using the max size for all widgets. Is there a way to resize the window while taking care of the min and max size of a widget? Thanks in advance.
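
    A pattern that often works for this, sketched under the assumption that self.ui is the top-level window: invalidate the layout after hiding the widget, then shrink the window to its new size hint once control has returned to the event loop:

        from PyQt4 import QtCore

        def shrink_after_hide(window, widget_to_hide):
            widget_to_hide.hide()
            window.layout().activate()  # recompute the layout now
            # adjustSize() resizes to the new sizeHint(); deferring it one
            # event-loop pass lets the pending hide/resize events settle first.
            QtCore.QTimer.singleShot(0, window.adjustSize)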

    Read the article

  • Django snippet with logic

    - by etam
    Hi, is there a way to create a Django snippet that has logic? I'm thinking of something like a contact-form template tag: {% contact_form %} with the template: <form action="send_contact_form" method="POST">...</form> and the logic: def send_contact_form(): ... I want to be able to use it anywhere in my projects, and it should work just by specifying one template tag. Do you know what I mean? Is it possible? Thanks in advance, Etam.
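
    This sounds like what Django's inclusion tags are for: the tag function holds the logic and renders its own template fragment wherever the tag appears. A hedged sketch, where the file names and the ContactForm class are illustrative rather than taken from the question:

        # myapp/templatetags/contact_tags.py
        from django import template
        from myapp.forms import ContactForm   # hypothetical form class

        register = template.Library()

        @register.inclusion_tag('myapp/contact_form.html')
        def contact_form():
            # Whatever logic the snippet needs goes here; the returned dict
            # becomes the context for myapp/contact_form.html.
            return {'form': ContactForm()}

    The fragment template then posts to whichever view handles the submission, and {% load contact_tags %}{% contact_form %} drops the form into any page.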

    Read the article

  • How do I automatically rebuild the Sphinx index under django-sphinx?

    - by Apreche
    I just set up django-sphinx, and it is working beautifully; I am now able to search my model and get amazing results. The one problem is that I have to build the index by hand using the indexer command. That means every time I add new content, I have to manually hit the command line to rebuild the search index, which is just not acceptable. I could make a cron job that automatically runs the indexer command every so often, but that's far from optimal: new data won't be indexed until the cron runs again, and the indexer will usually run unnecessarily, as my site doesn't have data added very often. How do I set it up so that the Sphinx index automatically rebuilds itself whenever data is added to or modified in a searchable Django model?
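
    One common approach, sketched here with hypothetical names since django-sphinx itself doesn't ship such a hook: connect Django's post_save/post_delete signals for the searchable models and kick off the indexer from there, so a rebuild only happens when something actually changes:

        import subprocess
        from django.db.models.signals import post_save, post_delete
        from myapp.models import MyModel  # hypothetical searchable model

        def rebuild_sphinx_index(sender, **kwargs):
            # --rotate swaps the fresh index in without stopping searchd.
            subprocess.Popen(['indexer', '--all', '--rotate'])

        post_save.connect(rebuild_sphinx_index, sender=MyModel)
        post_delete.connect(rebuild_sphinx_index, sender=MyModel)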

    Read the article

  • Rebuilding website from Django 0.96 to Django 1.2

    - by Neytiri
    I've got a website done in Django 0.96 (built in 2007), and now we are thinking about rebuilding it (not just migrating it) for Django 1.2. Can anyone point me to the new (and worthwhile) widgets, plugins and other tools for Django 1.2 (released in April 2010)? I've heard of "South" and of a widget for debugging (can't remember the name), but I'm a little lost here.

    Read the article

  • Loading SQL dump before running Django tests

    - by knutin
    I have a fairly complex Django project which makes it hard or impossible to use fixtures for loading data. What I would like to do is load a database dump from the production database server after all tables have been created by the test runner and before the actual tests start running. I've tried various "magic" in MyTestCase.setUp(), but with no luck. Any suggestions would be most welcome. Thanks.
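
    One way to get a hook at exactly that point, sketched against Django 1.2's DjangoTestSuiteRunner (the psql call and the dump path are placeholders for whatever loads the dump in your environment): a custom test runner that loads the dump right after the test databases are created, wired up with TEST_RUNNER = 'myproject.testrunner.DumpLoadingRunner' in settings.

        # myproject/testrunner.py
        import subprocess
        from django.db import connection
        from django.test.simple import DjangoTestSuiteRunner

        class DumpLoadingRunner(DjangoTestSuiteRunner):
            def setup_databases(self, **kwargs):
                old_config = super(DumpLoadingRunner, self).setup_databases(**kwargs)
                # At this point the connection already points at the freshly
                # created test database, so its name can be read back here.
                test_db = connection.settings_dict['NAME']
                subprocess.check_call(['psql', test_db, '-f', '/path/to/dump.sql'])
                return old_config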

    Read the article

  • Downloading a Directory Tree with FTPLIB

    - by Anthony Lemmer
    I'd like to download a directory and all of its contents to the local HD. Here's the code I have thus far (it crashes if there's a sub-directory, else it grabs all the files):

        import ftplib
        import configparser
        import os

        def runBackups():
            # Load INI
            filename = 'connections.ini'
            config = configparser.SafeConfigParser()
            config.read(filename)
            connections = config.sections()
            i = 0
            while i < len(connections):
                # Load Settings
                uri = config.get(connections[i], "uri")
                username = config.get(connections[i], "username")
                password = config.get(connections[i], "password")
                backupPath = config.get(connections[i], "backuppath")
                archiveTo = config.get(connections[i], "archiveto")
                # Start Back-ups
                ftp = ftplib.FTP(uri)
                ftp.login(username, password)
                ftp.set_debuglevel(2)
                ftp.cwd(backupPath)
                files = ftp.nlst()
                for filename in files:
                    ftp.retrbinary('RETR %s' % filename,
                                   open(os.path.join(archiveTo, filename), 'wb').write)
                ftp.quit()
                i += 1
            print()
            print("Back-ups complete.")
            print()
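
    A hedged sketch of the recursive case: NLST alone doesn't say whether an entry is a file or a directory, so one workaround is to try to cwd() into each name and recurse when that succeeds (this assumes the server permits cwd into every directory it lists; a more robust version would parse LIST/MLSD output instead):

        import ftplib
        import os

        def _is_dir(ftp, name):
            # Heuristic: a name we can cwd into is a directory.
            try:
                ftp.cwd(name)
                ftp.cwd('..')
                return True
            except ftplib.error_perm:
                return False

        def download_tree(ftp, remote_dir, local_dir):
            """Mirror remote_dir and everything below it into local_dir."""
            if not os.path.isdir(local_dir):
                os.makedirs(local_dir)
            ftp.cwd(remote_dir)
            for name in ftp.nlst():
                if name in ('.', '..'):
                    continue
                if _is_dir(ftp, name):
                    download_tree(ftp, name, os.path.join(local_dir, name))
                else:
                    with open(os.path.join(local_dir, name), 'wb') as fh:
                        ftp.retrbinary('RETR %s' % name, fh.write)
            ftp.cwd('..')

    Inside runBackups() this would replace the flat nlst() loop with a single download_tree(ftp, backupPath, archiveTo) call.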

    Read the article

  • stopping a cherrypy server over http

    - by d.c
    I have a CherryPy app that I'm controlling over HTTP with a wxPython UI. I want to kill the server when the UI closes, but I don't know how to do that. Right now I'm just doing a sys.exit() on the window close event, but that results in:

        Traceback (most recent call last):
          File "ui.py", line 67, in exitevent
            urllib.urlopen("http://"+server+"/?sigkill=1")
          File "c:\python26\lib\urllib.py", line 87, in urlopen
            return opener.open(url)
          File "c:\python26\lib\urllib.py", line 206, in open
            return getattr(self, name)(url)
          File "c:\python26\lib\urllib.py", line 354, in open_http
            'got a bad status line', None)
        IOError: ('http protocol error', 0, 'got a bad status line', None)

    Is that because I'm not stopping CherryPy properly?
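
    A hedged sketch of a cleaner shutdown path: expose a handler in the CherryPy app that asks the engine to stop, and have the UI call that URL when it closes. The /shutdown path is arbitrary, and cherrypy.engine.exit() assumes CherryPy 3.1 or later:

        import cherrypy

        class Root(object):
            @cherrypy.expose
            def shutdown(self):
                # Stop the server once this response has been sent.
                cherrypy.engine.exit()
                return "shutting down"

        # In the wxPython close handler, something like:
        #     urllib.urlopen("http://%s/shutdown" % server).read()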

    Read the article

  • Best way to detect IronPython

    - by Adal
    I need to write a module which will be used from both CPython and IronPython. What's the best way to detect IronPython, since I need a slightly different behaviour in that case? I noticed that sys.platform is "win32" on CPython, but "cli" on IronPython. Is there another preferred/standard way of detecting it?
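
    For what it's worth, a small sketch of the usual checks: platform.python_implementation() where it exists, with the sys.platform value as a fallback on older interpreters:

        import sys

        def is_ironpython():
            try:
                import platform
                return platform.python_implementation() == 'IronPython'
            except AttributeError:
                # Older interpreters lack python_implementation(); fall back
                # to the platform string IronPython reports.
                return sys.platform == 'cli'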

    Read the article

  • Installing PySide - OSX

    - by jeremynealbrown
    Has anyone had success installing and using PySide on OS X? I am following the install instructions on the PySide site, but I'm running into issues building the API Extractor. I run cmake on the CMakeLists.txt file inside the API Extractor directory, and this error is thrown:

        CMake Error at /Applications/CMake 2.8-0.app/Contents/share/cmake-2.8/Modules/FindBoost.cmake:894 (message):
          Unable to find the requested Boost libraries.
          Unable to find the Boost header files. Please set BOOST_ROOT to the root
          directory containing Boost or BOOST_INCLUDEDIR to the directory containing
          Boost's headers.
        Call Stack (most recent call first):
          CMakeLists.txt:5 (find_package)

    I am new to building source with cmake, and I'm not even really sure what Boost is. Any light you might shed on the set-up process would be great. Thanks.

    Read the article

  • is there a way to generate pdf containing non-ascii symbols with pisa from django template?

    - by mihailt
    Hi, I'm trying to generate a PDF from a template using this snippet:

        def write_pdf(template_src, context_dict):
            template = get_template(template_src)
            context = Context(context_dict)
            html = template.render(context)
            result = StringIO.StringIO()
            pdf = pisa.pisaDocument(StringIO.StringIO(html.encode("UTF-8")), result)
            if not pdf.err:
                return http.HttpResponse(result.getvalue(), mimetype='application/pdf')
            raise Exception('PDF error')

    but all non-Latin symbols are not shown correctly. The template and view are saved using UTF-8 encoding. I've tried saving the view as ANSI and then using unicode(html, "UTF-8"), but it throws TypeError. I also thought that maybe the default fonts somehow do not support UTF-8, so following the pisa documentation I tried to set the font face in the style section of the template body; that still gave no results. Does anyone have ideas on how to solve this issue?

    Read the article

  • Is it possible to use Regex through Hexadecimal to find email addresses

    - by LukeJenx
    Not sure if this is even possible, but I have been looking at using a regex to find an email address that is stored as hex. Basically this is for building up some of my automated forensic tools, but I am having problems coming up with a suitable regex.

    Regex for an email: /^([a-z0-9_.-]+)@([\da-z.-]+).([a-z.]{2,6})$/

    Hex values:

        @    = 40
        .    = 2e
        .com = 636f6d
        _    = 5f
        A/a  = 41/61 [1]
        Z/z  = 5a/7a
        -    = 2d

    This is what I have got at the moment (it only takes lower case and .com into account), but it doesn't work. Have I messed something simple up?

        /^([61-7a]+)40([61-7a]+)23(636f6d)$/

    [1] I know email addresses can only be lower case, but I need to take upper case into account too.
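
    One hedged way around the core problem -- [61-7a] is a character class over single characters, so it can't express a range of two-digit hex pairs -- is either to decode the hex back into text and run an ordinary email regex over it, or, if the data has to stay hex-encoded, to group the pattern per byte pair. A small sketch of both ideas (the sample dump is made up for illustration):

        import re

        hexdump = "6a6f686e5f646f65406578616d706c652e636f6d"  # hypothetical data

        # Option 1: decode, then use a normal email regex (Python 2).
        text = hexdump.decode("hex")
        print re.findall(r'[a-z0-9_.-]+@[a-z0-9.-]+\.[a-z.]{2,6}', text, re.I)

        # Option 2: match directly on the hex, two digits per character.
        # (?:...) groups one byte pair so the + applies per character:
        # 2d/2e/5f are - . _ , 3[0-9] is 0-9, and the two letter alternatives
        # cover A-O/a-o (41-4f, 61-6f) and P-Z/p-z (50-5a, 70-7a).
        char = r'(?:2d|2e|5f|3[0-9]|[46][1-9a-f]|[57][0-9a])'
        letter = r'(?:[46][1-9a-f]|[57][0-9a])'
        hex_email = char + '+40' + char + '+2e' + letter + '+'
        print re.findall(hex_email, hexdump, re.I)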

    Read the article
