Search Results

Search found 15087 results on 604 pages for 'python multithreading'.


  • Python Post Upload JPEG to Server?

    - by iJames
It seems like this question has been answered many times, but I'm still getting errors from the server and I'm sure the problem is in my code. I've tried both HTTP and HTTPConnection from httplib, and the two produce quite different terminal output in terms of formatting/encoding, so I'm not sure where the problem lies. Does anything stand out here, or is there simply a better way? The code was pieced together from an old recipe because I really needed to understand the basics of building the POST: http://code.activestate.com/recipes/146306-http-client-to-post-using-multipartform-data/ Note that the JPEG is supposed to be sent "unformatted". The pseudocode:

        boundary = "somerandomsetofchars"
        BOUNDARY = '--' + boundary
        CRLF = '\r\n'
        fields = [('aspecialkey', 'thevalueofthekey')]
        files = [('Image.Data', 'mypicture.jpg', '/users/home/me/mypicture.jpg')]

        bodylines = []
        for (key, value) in fields:
            bodylines.append(BOUNDARY)
            bodylines.append('Content-Disposition: form-data; name="%s"' % key)
            bodylines.append('')
            bodylines.append(value)
        for (key, filename, fileloc) in files:
            bodylines.append(BOUNDARY)
            bodylines.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename))
            bodylines.append('Content-Type: %s' % self.get_content_type(fileloc))
            bodylines.append('')
            bodylines.append(open(fileloc, 'r').read())
        bodylines.append(BOUNDARY + '--')
        bodylines.append('')
        #print bodylines

        content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
        body = CRLF.join(bodylines)

        #conn = httplib.HTTP("www.ahost.com")
        # In both this and below, the file part was garbling the rest of the body?!?
        conn = httplib.HTTPConnection("www.ahost.com")
        conn.putrequest('POST', "/myuploadlocation/uploadimage")
        headers = {
            'content-length': str(len(body)),
            'Content-Type': content_type,
            'User-Agent': 'myagent'
        }
        for headerkey in headers:
            conn.putheader(headerkey, headers[headerkey])
        conn.endheaders()
        conn.send(body)
        response = conn.getresponse()
        result = response.read()
        responseheaders = response.getheaders()

    Interestingly, the real code I've implemented seems to work and gets back valid responses, but the server tells me it can't find the image data. Maybe that is particular to the server, but I'm just trying to rule out that I'm doing something exceptionally stupid here, or that there is a more efficient way of doing this. I haven't tried poster yet because I want to make sure I'm formatting the POST correctly first; I figure I can switch to poster once this works, yes?
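
    For what it's worth, two things in the pseudocode commonly cause "can't find the image data" symptoms: per the multipart/form-data rules the boundary parameter in the Content-Type header must not carry the leading '--' (only the delimiters inside the body do), and the file should be read in binary mode so the JPEG bytes are not mangled. Below is a hedged rework of the question's code, keeping its placeholder host, path and field names; it is a sketch, not a drop-in fix:

        import httplib
        import mimetypes

        def post_image(host, path, fields, files):
            # fields: list of (name, value); files: list of (name, filename, filepath)
            boundary = 'somerandomsetofchars'   # must not occur anywhere in the payload
            delimiter = '--' + boundary
            CRLF = '\r\n'
            lines = []
            for name, value in fields:
                lines.append(delimiter)
                lines.append('Content-Disposition: form-data; name="%s"' % name)
                lines.append('')
                lines.append(value)
            for name, filename, filepath in files:
                lines.append(delimiter)
                lines.append('Content-Disposition: form-data; name="%s"; filename="%s"'
                             % (name, filename))
                content_type = mimetypes.guess_type(filepath)[0] or 'application/octet-stream'
                lines.append('Content-Type: %s' % content_type)
                lines.append('')
                lines.append(open(filepath, 'rb').read())   # binary mode keeps the JPEG intact
            lines.append(delimiter + '--')
            lines.append('')
            body = CRLF.join(lines)

            conn = httplib.HTTPConnection(host)
            conn.putrequest('POST', path)
            # note: the header boundary has no leading dashes; the body delimiters do
            conn.putheader('Content-Type', 'multipart/form-data; boundary=%s' % boundary)
            conn.putheader('Content-Length', str(len(body)))
            conn.putheader('User-Agent', 'myagent')
            conn.endheaders()
            conn.send(body)
            return conn.getresponse()

        response = post_image('www.ahost.com', '/myuploadlocation/uploadimage',
                              [('aspecialkey', 'thevalueofthekey')],
                              [('Image.Data', 'mypicture.jpg', '/users/home/me/mypicture.jpg')])
        print response.status, response.read()

    If the server still rejects the upload, the poster library mentioned in the question builds essentially the same body and is a reasonable next step.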

    Read the article

  • Python - excel - xlwt: colouring every second row

    - by konjo
Hi, I've just finished a MySQL-to-Excel script with xlwt and I need to colour every second row for easier reading. I have tried this:

        row = easyxf('pattern: pattern solid, fore_colour blue')
        for i in range(0, 10, 2):
            ws0.row(i).set_style(row)

    On its own this colouring is fine, but when I write my data the rows are white again. Can someone please show me an example, because I'm lost in the code. Best regards.
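
    The row style tends to be overridden as soon as each cell is written, so one workaround is to pass a style with every ws.write() call instead of styling the row up front. A minimal sketch, assuming the MySQL results are already available as a list of tuples (the data and file name here are placeholders):

        import xlwt

        striped = xlwt.easyxf('pattern: pattern solid, fore_colour blue')
        plain = xlwt.easyxf('')

        wb = xlwt.Workbook()
        ws0 = wb.add_sheet('data')

        rows = [('a', 1), ('b', 2), ('c', 3)]   # placeholder for the MySQL result set
        for r, record in enumerate(rows):
            style = striped if r % 2 == 0 else plain
            for c, value in enumerate(record):
                ws0.write(r, c, value, style)   # the style travels with each cell write
        wb.save('report.xls')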

    Read the article

  • Reading Python Documentation for 3rd party modules

    - by Shadyabhi
I recently downloaded the IMDbPY module. When I do

        import imdb
        help(imdb)

    I don't get the full documentation; I have to do

        im = imdb.IMDb()
        help(im)

    to see the available methods. I don't like this console interface. Is there a better way of reading the docs, i.e. all the documentation for the imdb module on one page?
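
    One option is pydoc, which can render a module's documentation as a single HTML page; a small sketch (the output file lands in the current directory):

        import pydoc
        pydoc.writedoc('imdb')   # writes imdb.html with the module's documentation

    Alternatively, running python -m pydoc -p 8080 from a shell serves browsable HTML documentation for every installed module over HTTP.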

    Read the article

  • Windows path in python

    - by Gareth
Hi all. What is the best way to represent a Windows directory, for example "C:\meshes\as"? I have been trying to modify a script, but it never works because I can't seem to get the directory right; I assume that's because the '\' acts as an escape character? Thanks, Gareth
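
    A few equivalent ways to write that path, shown as a short sketch:

        path = "C:\\meshes\\as"      # escape each backslash
        path = r"C:\meshes\as"       # or use a raw string, where '\' is literal
        path = "C:/meshes/as"        # forward slashes also work with most Windows APIs

        import os
        path = os.path.join("C:\\", "meshes", "as")   # builds the separators for you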

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
Hi, I have a set of data similar to this:

        No  Start Time     End Time       CallType  Info
        1   13:14:37.236   13:14:53.700   Ping1     RTT(Avr):160ms
        2   13:14:58.955   13:15:29.984   Ping2     RTT(Avr):40ms
        3   13:19:12.754   13:19:14.757   Ping3_1   RTT(Avr):620ms
        3   13:19:12.754                  Ping3_2   RTT(Avr):210ms
        4   13:14:58.955   13:15:29.984   Ping4     RTT(Avr):360ms
        5   13:19:12.754   13:19:14.757   Ping1     RTT(Avr):40ms
        6   13:19:59.862   13:20:01.522   Ping2     RTT(Avr):163ms
        ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, take the average of those two rows and export them as one row, so the end result would look like this:

        No  Start Time     End Time       CallType  Info
        1   13:14:37.236   13:14:53.700   Ping1     RTT(Avr):160ms
        2   13:14:58.955   13:15:29.984   Ping2     RTT(Avr):40ms
        3   13:19:12.754   13:19:14.757   Ping3     RTT(Avr):415ms
        4   13:14:58.955   13:15:29.984   Ping4     RTT(Avr):360ms
        5   13:19:12.754   13:19:14.757   Ping1     RTT(Avr):40ms
        6   13:19:59.862   13:20:01.522   Ping2     RTT(Avr):163ms

    Currently I concatenate columns 0 and 1 to make a unique key, find the duplicates there, and then apply the rest of the special treatment to those parallel pings. It is not elegant at all; I'm just wondering what a better way to do it would be. Thanks!
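
    A hedged sketch of one way to do the merge once the rows are parsed into lists: group on (No, Start Time) and average the RTT values. It assumes the Info column always ends in "<n>ms" and that the rows are already split into fields:

        import re
        from collections import defaultdict

        def merge_pings(rows):
            # rows: lists of [no, start, end, calltype, info], already parsed from the log
            groups = defaultdict(list)
            order = []
            for row in rows:
                key = (row[0], row[1])          # No + Start Time marks the duplicates
                if key not in groups:
                    order.append(key)
                groups[key].append(row)

            merged = []
            for key in order:
                dupes = groups[key]
                if len(dupes) == 1:
                    merged.append(dupes[0])
                    continue
                # average the RTTs and drop the _1/_2 suffix from the call type
                rtts = [int(re.search(r'(\d+)ms', r[4]).group(1)) for r in dupes]
                avg = sum(rtts) / len(rtts)
                end_times = [r[2] for r in dupes if r[2]]
                merged.append([dupes[0][0], dupes[0][1],
                               end_times[0] if end_times else '',
                               re.sub(r'_\d+$', '', dupes[0][3]),
                               'RTT(Avr):%dms' % avg])
            return merged

    On the sample above, the two Ping3 rows (620ms and 210ms) collapse into one Ping3 row with RTT(Avr):415ms.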

    Read the article

  • what's faster: merging lists or dicts in python?

    - by tipu
I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two collections, which could be either sets or dicts. The thing is I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), with n being the size of the smaller set? The reason I'm asking about dicts rather than sets is that I can't convert a set to JSON: that results in {key1, key2, key3}, and JSON needs key/value pairs, so I'm using a dict so that json.dumps returns {key1:1, key2:1, key3:1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.
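
    For what it's worth, both set union and dict update are roughly linear in the number of elements merged, and if JSON output is the only reason for the dict, a set can simply be serialized as a list. A small sketch:

        import json

        a = set(['key1', 'key2'])
        b = set(['key2', 'key3'])

        merged = a | b                     # set union, roughly O(len(a) + len(b))
        print json.dumps(sorted(merged))   # serialize the set as a JSON list instead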

    Read the article

  • Parsing specific numeric data from csv file using python

    - by KJ Lim
Good morning. I have a series of data in a CSV file like below:

        1,,,
        1,137.1,1198,1.6
        2,159,300,0.4
        3,176,253,0.3
        4,197,231,0.3
        5,198,525,0.7
        6,199,326,0.4
        7,215,183,0.2
        8,217.1,178,0.2
        9,244.2,416,0.5
        10,245.1,316,0.4

    I want to extract specific data from the second column, for example 217.1 and 245.1, and have those rows concatenated into a new file like:

        8,217.1,178,0.2
        10,245.1,316,0.4

    I use the csv module to read my CSV file, but I can't extract the specific data I want. Could anyone kindly help me? Thank you.
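
    A sketch of the filtering step, assuming the input and output file names and that the second-column values to keep are known in advance as strings:

        import csv

        wanted = set(['217.1', '245.1'])          # second-column values to keep

        reader = csv.reader(open('input.csv', 'rb'))
        writer = csv.writer(open('output.csv', 'wb'))
        for row in reader:
            if len(row) > 1 and row[1] in wanted:
                writer.writerow(row)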

    Read the article

  • python appengine form-posted utf8 file issue

    - by khany
Hi, I am trying to form-post an SQL file that consists of many INSERTs, e.g.

        INSERT INTO `TABLE` VALUES ('abcdé', 2759);

    Then I use re.search to parse it and extract the fields to put into my own datastore. The problem is that although the file contains accented characters (see the é in 'abcdé'), once uploaded it loses them and either errors out or stores a bytestring representation of them. Here's what I am currently using (and I have tried loads of alternatives):

        form = cgi.FieldStorage()
        uFile = form['sql']
        uSql = uFile.file.read()
        lineX = uSql.split("\n")  # to get each line, and so on

    Has anyone got a robust way of making this work? Remember I am on App Engine, so access to some libraries is restricted/forbidden.
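
    A hedged sketch of one approach: read the upload as bytes, decode it explicitly (UTF-8 first, Latin-1 as a fallback, since MySQL dumps commonly use either), and run the regex extraction on unicode strings rather than raw bytes:

        import cgi

        form = cgi.FieldStorage()
        raw = form['sql'].file.read()            # bytes exactly as uploaded

        # decode explicitly so accented characters survive the round trip
        try:
            sql_text = raw.decode('utf-8')
        except UnicodeDecodeError:
            sql_text = raw.decode('latin-1')

        lines = sql_text.split('\n')             # now unicode; run re.search on these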

    Read the article

  • Multiple levels of 'collections.defaultdict' in Python

    - by Morlock
Thanks to some great folks on SO, I discovered the possibilities offered by collections.defaultdict, notably its readability and speed, and I have put it to use with success. Now I would like to implement three levels: the two upper ones being defaultdicts and the lowest one being int. I can't find the appropriate way to do this. Here is my attempt:

        from collections import defaultdict

        d = defaultdict(defaultdict)
        a = [("key1", {"a1": 22, "a2": 33}),
             ("key2", {"a1": 32, "a2": 55}),
             ("key3", {"a1": 43, "a2": 44})]
        for i in a:
            d[i[0]] = i[1]

    This works, but the following, which is the desired behaviour, doesn't:

        d["key4"]["a1"] + 1

    I suspect I should have declared somewhere that the second-level defaultdict is of type int, but I didn't find where or how to do so. The reason I am using defaultdict in the first place is to avoid having to initialize the dictionary for each new key. Any more elegant suggestions? Thanks, Pythoneers!
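
    One common pattern is to hand the outer defaultdict a factory that itself builds a defaultdict(int); a minimal sketch:

        from collections import defaultdict

        d = defaultdict(lambda: defaultdict(int))   # missing leaves default to 0

        d["key1"]["a1"] = 22
        d["key4"]["a1"] += 1       # creates d["key4"] on the fly, counter starts at 0
        print d["key4"]["a1"]      # -> 1

    The same factory trick nests for a genuinely third level, e.g. defaultdict(lambda: defaultdict(lambda: defaultdict(int))).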

    Read the article

  • writing header in csv python with DictWriter

    - by user248237
Assume I have a csv.DictReader object and I want to write it out as a CSV file. How can I do this? I thought of the following:

        dr = csv.DictReader(open(f), delimiter='\t')
        # process my dr object
        # ...
        # write out object
        output = csv.DictWriter(open(f2, 'w'), delimiter='\t')
        for item in dr:
            output.writerow(item)

    Is that the best way? More importantly, how can I make it write a header as well, in this case the object dr's .fieldnames property? Thanks.
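
    csv.DictWriter needs the fieldnames up front, and those same names can be reused to emit the header row. The writeheader() method only exists from Python 2.7 on, so on older versions the header can be written by hand. A sketch reusing the question's file names f and f2:

        import csv

        dr = csv.DictReader(open(f), delimiter='\t')
        output = csv.DictWriter(open(f2, 'w'), fieldnames=dr.fieldnames, delimiter='\t')

        # header row: use output.writeheader() on Python >= 2.7, otherwise:
        output.writerow(dict(zip(dr.fieldnames, dr.fieldnames)))

        for item in dr:
            output.writerow(item)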

    Read the article

  • Python Threading

    - by anteater7171
I'm trying to make a simple program that continually displays and updates a label showing the CPU usage while other, unrelated things are going on. I've done enough research to know that threading is likely to be involved, but I'm having trouble applying what I've seen in simple threading examples to what I'm trying to do. What I currently have:

        import Tkinter
        import psutil, time
        from PIL import Image, ImageTk

        class simpleapp_tk(Tkinter.Tk):
            def __init__(self, parent):
                Tkinter.Tk.__init__(self, parent)
                self.parent = parent
                self.initialize()

            def initialize(self):
                self.labelVariable = Tkinter.StringVar()
                self.label = Tkinter.Label(self, textvariable=self.labelVariable)
                self.label.pack()
                self.button = Tkinter.Button(self, text='button', command=self.A)
                self.button.pack()

            def A(self):
                G = str(round(psutil.cpu_percent(), 1)) + '%'
                print G
                self.labelVariable.set(G)

            def B(self):
                print "hello"

        if __name__ == "__main__":
            app = simpleapp_tk(None)
            app.mainloop()

    In the code above I'm basically trying to get method A running continually, while still allowing method B to run whenever the user presses the button.
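
    Tkinter's after() method is often enough here and sidesteps threading entirely: it schedules a callback on the main loop, so the label keeps updating while buttons stay responsive. A minimal sketch based on the code above (the button is wired to B to match the stated intent):

        import Tkinter
        import psutil

        class CpuApp(Tkinter.Tk):
            def __init__(self, parent):
                Tkinter.Tk.__init__(self, parent)
                self.labelVariable = Tkinter.StringVar()
                Tkinter.Label(self, textvariable=self.labelVariable).pack()
                Tkinter.Button(self, text='button', command=self.B).pack()
                self.update_cpu()                      # kick off the periodic update

            def update_cpu(self):
                self.labelVariable.set('%.1f%%' % psutil.cpu_percent())
                self.after(1000, self.update_cpu)      # run again in 1 s; mainloop stays free

            def B(self):
                print "hello"

        if __name__ == "__main__":
            CpuApp(None).mainloop()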

    Read the article

  • Please explain this python behavior

    - by StackUnderflow
        class SomeClass(object):
            def __init__(self, key_text_pairs=None):
                .....
                for key, text in key_text_pairs:
                    ......
                ......

        x = SomeClass([1, 2, 3])

    The value of key_text_pairs inside __init__ is None even though I pass a list, as in the statement above. Why is that? I want to write a generic __init__ that can take any iterable object... Thanks

    Read the article

  • python unittest howto

    - by zubin71
I'd like to know how I can unit-test the following function:

        def download_distribution(url, tempdir):
            """ Method which downloads the distribution from PyPI """
            print "Attempting to download from %s" % (url,)
            try:
                url_handler = urllib2.urlopen(url)
                distribution_contents = url_handler.read()
                url_handler.close()
                filename = get_file_name(url)
                file_handler = open(os.path.join(tempdir, filename), "w")
                file_handler.write(distribution_contents)
                file_handler.close()
                return True
            except (ValueError, IOError):   # a tuple is needed to catch both exceptions
                return False
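
    A sketch of one way to unit-test it without touching the network: stub out urllib2.urlopen in the module under test. Here mymodule is a placeholder for wherever download_distribution lives (it is assumed to import urllib2 itself), and get_file_name is assumed to return the URL's basename:

        import os
        import shutil
        import StringIO
        import tempfile
        import unittest

        import mymodule   # placeholder: the module containing download_distribution

        class DownloadDistributionTest(unittest.TestCase):
            def setUp(self):
                self.tempdir = tempfile.mkdtemp()
                self.real_urlopen = mymodule.urllib2.urlopen

            def tearDown(self):
                mymodule.urllib2.urlopen = self.real_urlopen
                shutil.rmtree(self.tempdir)

            def test_successful_download_writes_file(self):
                # stub the network call so the test never reaches PyPI
                mymodule.urllib2.urlopen = lambda url: StringIO.StringIO("fake contents")
                ok = mymodule.download_distribution(
                    "http://example.com/pkg-1.0.tar.gz", self.tempdir)
                self.assertTrue(ok)
                # assumes get_file_name() returns the URL's basename
                path = os.path.join(self.tempdir, "pkg-1.0.tar.gz")
                self.assertEqual(open(path).read(), "fake contents")

            def test_failed_download_returns_false(self):
                def broken_urlopen(url):
                    raise IOError("network down")
                mymodule.urllib2.urlopen = broken_urlopen
                self.assertFalse(mymodule.download_distribution(
                    "http://example.com/pkg-1.0.tar.gz", self.tempdir))

        if __name__ == "__main__":
            unittest.main()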

    Read the article

  • Python: Figure out local timezone

    - by Adam Matan
I want to compare UTC timestamps from a log file with local timestamps. When creating the local datetime object, I use something like:

        >>> local_time = datetime.datetime(2010, 4, 27, 12, 0, 0, 0, tzinfo=pytz.timezone('Israel'))

    I want an automatic way to replace the tzinfo=pytz.timezone('Israel') part with the current local time zone. Any ideas?
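
    One option, assuming the python-dateutil package is available: its tz.tzlocal() picks up the machine's current time zone without naming it explicitly. A sketch:

        import datetime
        from dateutil import tz

        local_time = datetime.datetime(2010, 4, 27, 12, 0, 0, 0, tzinfo=tz.tzlocal())

        # convert to UTC for comparison with the log file's UTC timestamps
        utc_time = local_time.astimezone(tz.tzutc())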

    Read the article

  • getting expat to use .dtd for entity replacement in python

    - by nicolas78
I'm trying to read in an XML file which looks like this:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!DOCTYPE dblp SYSTEM "dblp.dtd">
        <dblp>
            <incollection>
                <author>Jos&eacute; A. Blakeley</author>
            </incollection>
        </dblp>

    The part that creates the problem is Jos&eacute; A. Blakeley: the parser calls its character handler twice, once with "Jos" and once with " A. Blakeley". Now, I understand this may be the correct behaviour if it doesn't know the eacute entity; however, that entity is defined in dblp.dtd, which I have. I don't seem to be able to convince expat to use this file, though. All I have is:

        p = xml.parsers.expat.ParserCreate()
        # tried with and without the following line
        p.SetParamEntityParsing(xml.parsers.expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
        p.UseForeignDTD(True)
        f = open(dblp_file, "r")
        p.ParseFile(f)

    but expat still doesn't recognize my entity. Why is there no way to tell expat which DTD to use? I've tried putting the file in the same directory as the XML, putting it in the program's working directory, and replacing the reference in the XML file with an absolute path. What am I missing? Thanks.
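
    For what it's worth, expat can be pointed at the DTD through an ExternalEntityRefHandler: with parameter-entity parsing enabled, the handler fires for the SYSTEM "dblp.dtd" reference, and a sub-parser created with ExternalEntityParserCreate can read the DTD so its entity definitions become known to the main parser. A hedged sketch (the character handler and file locations are placeholders):

        import xml.parsers.expat

        def handle_char_data(data):
            print repr(data)               # stand-in for the real character handler

        def handle_external_entity(context, base, system_id, public_id):
            # called for the external DTD reference; parsing dblp.dtd here makes
            # its entity definitions (e.g. &eacute;) known to the main parser
            ext = p.ExternalEntityParserCreate(context)
            ext.ParseFile(open(system_id, "rb"))   # system_id is "dblp.dtd" here
            return 1

        p = xml.parsers.expat.ParserCreate()
        p.SetParamEntityParsing(xml.parsers.expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
        p.ExternalEntityRefHandler = handle_external_entity
        p.CharacterDataHandler = handle_char_data

        p.ParseFile(open(dblp_file, "rb"))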

    Read the article

  • Using Python's ConfigParser to read a file without section name

    - by Arrieta
Hello: I am using ConfigParser to read the runtime configuration of a script. I would like the flexibility of not providing a section name (some scripts are simple enough that they don't need a 'section'). ConfigParser throws the NoSectionError exception and refuses to accept the file. How can I make ConfigParser simply retrieve the (key, value) tuples of a config file without section names? For instance:

        key1=val1
        key2:val2

    I would rather not write to the config file.
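
    One common workaround is to prepend a dummy section header in memory, so the file on disk never changes. A minimal sketch (the file and section names are placeholders):

        import ConfigParser
        import StringIO

        def parse_sectionless(path, section='defaults'):
            # prepend a fake [defaults] header before handing the text to ConfigParser
            contents = '[%s]\n' % section + open(path).read()
            parser = ConfigParser.ConfigParser()
            parser.readfp(StringIO.StringIO(contents))
            return dict(parser.items(section))

        # e.g. parse_sectionless('script.cfg') -> {'key1': 'val1', 'key2': 'val2'}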

    Read the article

  • Reading Binary Plist files with Python

    - by Zeki Turedi
I am currently using the plistlib module to read plist files, but I am having an issue when it comes to binary plist files. I want to read the data into a string to be analysed/printed etc. later. Is there any way of reading a binary plist file without using the plutil tool to convert the binary file to XML first? Thank you in advance for your help and time.
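
    One possibility, assuming the third-party biplist package is acceptable (the readPlist call below follows that project's documented API, not the standard library):

        import biplist

        data = biplist.readPlist('Example.plist')   # reads binary as well as XML plists
        print data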

    Read the article
