Search Results

Search found 34110 results on 1365 pages for 'gdata python client'.

Page 164/1365 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • Python nested function scopes

    - by Thomas O
    I have code like this:

        def write_postcodes(self):
            """Write postcodes database. Write data to file pointer. Data is ordered.
            Initially index pages are written, grouping postcodes by the first three
            characters, allowing for faster searching."""
            status("POSTCODE", "Preparing to sort...", 0, 1)
            # This function returns the key of x whilst updating the displayed
            # status of the sort.
            ctr = 0
            def keyfunc(x):
                ctr += 1
                status("POSTCODE", "Sorting postcodes", ctr, len(self.postcodes))
                return x
            sort_res = self.postcodes[:]
            sort_res.sort(key=keyfunc)

    But ctr fails with:

        Traceback (most recent call last):
          File "PostcodeWriter.py", line 53, in <module>
            w.write_postcodes()
          File "PostcodeWriter.py", line 47, in write_postcodes
            sort_res.sort(key=keyfunc)
          File "PostcodeWriter.py", line 43, in keyfunc
            ctr += 1
        UnboundLocalError: local variable 'ctr' referenced before assignment

    How can I fix this? I thought nested scopes would have allowed me to do this. I've tried 'global', but it still doesn't work.
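
    The assignment ctr += 1 makes ctr local to keyfunc, hence the UnboundLocalError; a closure can read an enclosing name but cannot rebind it without extra syntax. Below is a minimal sketch of the two usual workarounds on a toy counter (the question's status() and self.postcodes are omitted):

        # Python 3: declare the enclosing name nonlocal so it can be rebound.
        def make_key_py3():
            ctr = 0
            def keyfunc(x):
                nonlocal ctr
                ctr += 1
                return x
            return keyfunc

        # Python 2 (also fine on 3): mutate a container instead of rebinding a name.
        def make_key_py2():
            ctr = [0]
            def keyfunc(x):
                ctr[0] += 1
                return x
            return keyfunc

        print(sorted(["b", "a", "c"], key=make_key_py2()))   # ['a', 'b', 'c']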

  • Reading numeric Excel data as text using xlrd in Python

    - by Brian
    Hi guys, I am trying to read in an Excel file using xlrd, and I am wondering if there is a way to ignore the cell formatting used in the Excel file and just import all data as text. Here is the code I am using so far:

        import xlrd

        xls_file = 'xltest.xls'
        xls_workbook = xlrd.open_workbook(xls_file)
        xls_sheet = xls_workbook.sheet_by_index(0)

        raw_data = [[''] * xls_sheet.ncols for _ in range(xls_sheet.nrows)]
        raw_str = ''
        feild_delim = ','
        text_delim = '"'

        for rnum in range(xls_sheet.nrows):
            for cnum in range(xls_sheet.ncols):
                raw_data[rnum][cnum] = str(xls_sheet.cell(rnum, cnum).value)

        for rnum in range(len(raw_data)):
            for cnum in range(len(raw_data[rnum])):
                if cnum == len(raw_data[rnum]) - 1:
                    feild_delim = '\n'
                else:
                    feild_delim = ','
                raw_str += text_delim + raw_data[rnum][cnum] + text_delim + feild_delim

        final_csv = open('FINAL.csv', 'w')
        final_csv.write(raw_str)
        final_csv.close()

    This code is functional, but there are certain fields, such as a zip code, that are imported as numbers, so they carry a trailing decimal zero. For example, if there is a zip code of '79854' in the Excel file, it will be imported as '79854.0'. I have tried finding a solution in the xlrd spec, but was unsuccessful.
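
    One possible fix (a sketch, assuming the xls_sheet from the question): inspect each cell's type and render integer-valued number cells without the trailing '.0'.

        import xlrd

        def cell_as_text(sheet, rnum, cnum):
            """Return the cell value as text, dropping the '.0' that float
            conversion adds to integer-valued number cells (zip codes, IDs, ...)."""
            cell = sheet.cell(rnum, cnum)
            if cell.ctype == xlrd.XL_CELL_NUMBER and cell.value == int(cell.value):
                return str(int(cell.value))
            return str(cell.value)

        # in the question's loop:
        #     raw_data[rnum][cnum] = cell_as_text(xls_sheet, rnum, cnum)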

  • Most efficient way to search the last x lines of a file in python

    - by Harley
    I have a file and I don't know how big it's going to be (it could be quite large, but the size will vary greatly). I want to search the last 10 lines or so to see if any of them match a string. I need to do this as quickly and efficiently as possible and was wondering if there's anything better than:

        s = "foo"
        last_bit = fileObj.readlines()[-10:]
        for line in last_bit:
            if line == s:
                print "FOUND"
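
    A sketch of one common alternative: seek to a fixed-size block before the end of the file and split only that tail, so the whole file is never read (the 4096-byte block size is an arbitrary choice; grow it if lines can be very long).

        import os

        def tail_lines(path, n=10, blocksize=4096):
            """Return roughly the last n lines without reading the whole file."""
            with open(path, 'rb') as f:
                f.seek(0, os.SEEK_END)
                size = f.tell()
                f.seek(max(0, size - blocksize))
                lines = f.read().splitlines()
            return [l.decode('utf-8', 'replace') for l in lines[-n:]]

        def found_in_tail(path, s, n=10):
            return any(line == s for line in tail_lines(path, n))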

  • Importing files in Python from __init__.py

    - by Federico Builes
    Suppose I have the following structure:

        app/
          __init__.py
          foo/
            a.py
            b.py
            c.py
            __init__.py

    a.py, b.py and c.py share some common imports (logging, os, re, etc.). Is it possible to import these three or four common modules from the __init__.py file so I don't have to import them in every one of the files?

    Edit: My goal is to avoid having to import 5-6 modules in each file, and it's not related to performance reasons.
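
    Names imported in foo/__init__.py become attributes of the package (foo.logging, foo.os), but they do not appear automatically in the namespace of a.py, b.py or c.py; each module still resolves its own names. One workaround, sketched below with a hypothetical common.py (explicit per-file imports remain the more idiomatic choice):

        # app/foo/common.py  -- hypothetical shared-imports module
        import logging
        import os
        import re

        # app/foo/a.py
        from app.foo.common import logging, os, re   # re-use the module objects

        log = logging.getLogger(__name__)
        pattern = re.compile(r"\d+")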

  • Extract strings in python

    - by shadyabhi
    Basically, I want to extract the strings "AAA", "BBB", "CCC", "DDD" from a text file:

        ...... (other text goes here).....
        <TD align="left" class=texttd><font class='textfont'>AAA</font></TD>
        ..... (useless text here).....
        <TD align="left" class=texttd><font class='textfont'>BBB</font></TD>
        ....(more text).....
        <TD align="left" class=texttd><font class='textfont'>CCC</font></TD>
        <TD align="left" class=texttd><font class='textfont'>DDD</font></TD>
        ......(more text).....

    I want something like: if I do

        data = foo("file.txt")

    I get

        data = ['AAA', 'BBB', 'CCC', 'DDD']

    What is the best possible way? My file is not big.
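
    Since the target lines share one fixed pattern, a regular expression is probably the shortest route; BeautifulSoup would be more robust if the markup varies. A sketch (foo and the class names are taken from the question):

        import re

        def foo(path):
            """Collect the text wrapped in <font class='textfont'>...</font> tags."""
            with open(path) as fh:
                return re.findall(r"<font class='textfont'>(.*?)</font>", fh.read())

        # data = foo("file.txt")   ->   ['AAA', 'BBB', 'CCC', 'DDD']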

  • How should I use random.jumpahead in Python

    - by Peter Smit
    I have an application that does a certain experiment 1000 times (multi-threaded, so that multiple experiments are done at the same time). Every experiment needs approximately 50,000 random.random() calls. What is the best approach to get this really random? I could copy a random object to every experiment and then do a jumpahead of 50,000 * expid. The documentation suggests that jumpahead(1) already scrambles the state, but is that really true? Or is there another way to do this 'the best way'? (No, the random numbers are not used for security, but for a Metropolis-Hastings algorithm. The only requirement is that the experiments are independent, not that the random sequence is somehow unpredictable.)
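
    A Python 2 sketch of one reasonable setup (jumpahead was removed in Python 3): give every experiment its own random.Random instance, then separate the streams with distinct jumpahead arguments; since Python 2.3, jumpahead(n) jumps to a far-away state that depends on n rather than stepping forward n values, so different n give well-separated sequences. The base seed and offsets below are illustrative, not prescribed values.

        import random

        def make_generators(n_experiments, base_seed=12345):
            """One private Random instance per experiment, so threads never share state."""
            gens = []
            for expid in range(n_experiments):
                r = random.Random(base_seed)   # identical starting state...
                r.jumpahead(expid + 1)         # ...scrambled differently per experiment
                gens.append(r)
            return gens

        generators = make_generators(1000)
        # inside experiment expid:  generators[expid].random()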

  • Proper way to assert type of variable in Python

    - by Morlock
    In using a function, I wish to ensure that the types of the variables are as expected. How do I do it right? Here is an example fake function trying to do just this before going on with its role:

        def my_print(text, begin, end):
            """Print text in UPPER between 'begin' and 'end' in lower."""
            for i in (text, begin, end):
                assert type(i) == type("")
            out = begin.lower() + text.upper() + end.lower()
            print out

    Is this approach valid? Should I use something else than type(i) == type("")? Should I use try/except instead? Thanks pythoneers
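
    A sketch of the more conventional alternative: isinstance() plus an explicit TypeError. assert statements disappear under python -O, and isinstance also accepts subclasses of str, which type(i) == type("") does not.

        def my_print(text, begin, end):
            """Print text in UPPER between 'begin' and 'end' in lower."""
            for name, value in (("text", text), ("begin", begin), ("end", end)):
                if not isinstance(value, str):          # basestring on Python 2
                    raise TypeError("%s must be a string, got %r" % (name, type(value)))
            print(begin.lower() + text.upper() + end.lower())

        my_print("hello", "<< ", " >>")    # << HELLO >>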

  • Reading Python Documentation for 3rd party modules

    - by Shadyabhi
    I recently downloaded the IMDbPY module. When I do

        import imdb
        help(imdb)

    I don't get the full documentation. I have to do

        im = imdb.IMDb()
        help(im)

    to see the available methods. I don't like this console interface. Is there any better way of reading the docs, I mean all the documentation related to the module imdb on one page?
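
    One possible route, using the standard-library pydoc module to render everything it can find about the package as a single HTML page (imdb.html is the file pydoc writes into the current directory):

        import pydoc
        import imdb

        pydoc.writedoc(imdb)       # writes imdb.html in the current directory

        # Or from a shell:
        #   python -m pydoc -w imdb     # same HTML page, without opening Python
        #   python -m pydoc -p 8080     # serve docs for all importable modules at
        #                               # http://localhost:8080/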

  • strange behavior in python

    - by fsm
    The tags might not be accurate since I am not sure where the problem is. I have a module where I am trying to read some data from a socket and write the results to a file (append). It looks something like this (only relevant parts included):

        if __name__ == "__main__":
            # <some init code>
            for line in file:
                t = Thread(target=foo, args=(line,))
                t.start()
            while nThreads > 0:
                time.sleep(1)

    Here are the other functions:

        def foo(text):
            global countLock, nThreads
            countLock.acquire()
            nThreads += 1
            countLock.release()
            """connect to socket, send data, read response"""
            writeResults(text, result)
            countLock.acquire()
            nThreads -= 1
            countLock.release()

        def writeResults(text, result):
            """acquire file lock"""
            """append to file"""
            """release file lock"""

    Now here's the problem. Initially, I had a typo in the function foo, where I was passing the variable 'line' to writeResults instead of 'text'. 'line' is not defined in the function foo; it's defined in the main block, so I should have seen an error. Instead, it worked fine, except that the data was appended to the file multiple times instead of being written just once, which is the required behaviour, and which I got when I fixed the typo. My questions are: 1) Why didn't I get an error? 2) Why was the writeResults function being called multiple times?
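
    On question 1: the if __name__ == "__main__" block runs at module level, so the loop variable line is a module global and is visible from inside foo; no NameError is raised, but every thread sees whatever line holds when it happens to run, which is why the same (usually the latest) data gets written repeatedly. A toy sketch of the effect, with the socket work stripped out:

        from threading import Thread

        def worker(text):
            # 'line' is not a parameter here, but it is a module-level global,
            # so this resolves to whatever the main loop most recently assigned.
            print("param=%r  global line=%r" % (text, line))

        if __name__ == "__main__":
            threads = []
            for line in ["a", "b", "c"]:        # 'line' becomes a module global
                t = Thread(target=worker, args=(line,))
                threads.append(t)
                t.start()
            for t in threads:
                t.join()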

  • Python - excel - xlwt: colouring every second row

    - by konjo
    Hi, I just finished a MySQL-to-Excel script with xlwt and I need to colour every second row for easier reading. I have tried this:

        row = easyxf('pattern: pattern solid, fore_colour blue')
        for i in range(0, 10, 2):
            ws0.row(i).set_style(row)

    On its own the colouring is fine, but when I write my data the rows are white again. Can someone please show me an example, because I'm lost in the coding. Best regards.
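
    ws0.row(i).set_style() colours the row, but a cell written afterwards carries its own style, which overrides the row style, so the data comes out on the default background. One sketch of a fix is to pass the style to write() for every cell of the shaded rows (colour name, sheet name and data below are illustrative):

        from xlwt import Workbook, easyxf

        rows = [("alice", 1), ("bob", 2), ("carol", 3), ("dave", 4)]   # stand-in for the MySQL rows

        wb = Workbook()
        ws0 = wb.add_sheet('data')
        shaded = easyxf('pattern: pattern solid, fore_colour light_blue')

        for r, record in enumerate(rows):
            for c, value in enumerate(record):
                if r % 2 == 0:
                    ws0.write(r, c, value, shaded)   # style passed with the data wins
                else:
                    ws0.write(r, c, value)

        wb.save('report.xls')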

  • writing header in csv python with DictWriter

    - by user248237
    Assume I have a csv.DictReader object and I want to write it out as a CSV file. How can I do this? I thought of the following:

        dr = csv.DictReader(open(f), delimiter='\t')
        # process my dr object
        # ...
        # write out object
        output = csv.DictWriter(open(f2, 'w'), delimiter='\t')
        for item in dr:
            output.writerow(item)

    Is that the best way? More importantly, how can I make it so a header is written out too, in this case the "dr" object's .fieldnames property? Thanks.
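
    A sketch of one way to carry the header across (Python 2.7+/3.x, where DictWriter has writeheader(); the file names stand in for f and f2):

        import csv

        src_path, dst_path = "in.tsv", "out.tsv"       # illustrative names for f and f2

        with open(src_path) as src, open(dst_path, 'w') as dst:
            dr = csv.DictReader(src, delimiter='\t')
            output = csv.DictWriter(dst, fieldnames=dr.fieldnames, delimiter='\t')
            output.writeheader()
            # pre-2.7 equivalent: output.writerow(dict(zip(dr.fieldnames, dr.fieldnames)))
            for item in dr:
                output.writerow(item)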

  • Windows path in python

    - by Gareth
    Hi all. What is the best way to represent a Windows directory, for example "C:\meshes\as"? I have been trying to modify a script but it never works because I can't seem to get the directory right; I assume because of the '\' acting as an escape character? Thanks, Gareth
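
    A few equivalent ways to spell the same directory, sketched below; in an ordinary string literal "C:\meshes\as" the \a is an escape (an ASCII bell character), which is why the path misbehaves.

        import os

        path1 = r"C:\meshes\as"                        # raw string: backslashes kept as-is
        path2 = "C:/meshes/as"                         # forward slashes work on Windows too
        path3 = os.path.join("C:\\", "meshes", "as")   # built from pieces

        print(path1, path2, path3)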

  • Parsing specific numeric data from csv file using python

    - by KJ Lim
    Good morning. I have a series of data in a CSV file like below:

        1,,,
        1,137.1,1198,1.6
        2,159,300,0.4
        3,176,253,0.3
        4,197,231,0.3
        5,198,525,0.7
        6,199,326,0.4
        7,215,183,0.2
        8,217.1,178,0.2
        9,244.2,416,0.5
        10,245.1,316,0.4

    I want to extract rows whose second column holds specific values, for example 217.1 and 245.1, and have them concatenated into a new file like:

        8,217.1,178,0.2
        10,245.1,316,0.4

    I use the csv module to read my CSV file, but I can't extract the specific data I want. Could anyone kindly please help me. Thank you.
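
    A Python 3 sketch of one way to do the filtering with the csv module (file names are illustrative; the values 217.1 and 245.1 come from the question and are matched as the text that appears in the file):

        import csv

        wanted = {"217.1", "245.1"}          # second-column values to keep

        with open("input.csv", newline="") as src, \
             open("extracted.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                if len(row) > 1 and row[1] in wanted:
                    writer.writerow(row)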

  • python unittest howto

    - by zubin71
    I'd like to know how I could unit-test the following module:

        def download_distribution(url, tempdir):
            """Method which downloads the distribution from PyPI."""
            print "Attempting to download from %s" % (url,)
            try:
                url_handler = urllib2.urlopen(url)
                distribution_contents = url_handler.read()
                url_handler.close()
                filename = get_file_name(url)
                file_handler = open(os.path.join(tempdir, filename), "w")
                file_handler.write(distribution_contents)
                file_handler.close()
                return True
            except (ValueError, IOError):
                return False
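
    A Python 2 sketch (matching the print statement and urllib2 in the code) of one way to test both branches without touching the network: swap urllib2.urlopen for a stub inside the module under test. mymodule is a placeholder for whatever module actually holds download_distribution and get_file_name.

        import shutil
        import tempfile
        import unittest
        from StringIO import StringIO

        import mymodule    # placeholder: the module containing download_distribution


        class DownloadDistributionTest(unittest.TestCase):
            def setUp(self):
                self.tempdir = tempfile.mkdtemp()
                self.orig_urlopen = mymodule.urllib2.urlopen

            def tearDown(self):
                mymodule.urllib2.urlopen = self.orig_urlopen
                shutil.rmtree(self.tempdir)

            def test_successful_download_returns_true(self):
                mymodule.urllib2.urlopen = lambda url: StringIO("fake contents")
                self.assertTrue(mymodule.download_distribution(
                    "http://example.com/pkg-1.0.tar.gz", self.tempdir))

            def test_failed_download_returns_false(self):
                def boom(url):
                    raise IOError("network down")
                mymodule.urllib2.urlopen = boom
                self.assertFalse(mymodule.download_distribution(
                    "http://example.com/pkg-1.0.tar.gz", self.tempdir))


        if __name__ == "__main__":
            unittest.main()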

  • Python Threading

    - by anteater7171
    I'm trying to make a simple program that continually displays and updates a label showing the CPU usage, while having other unrelated things going on. I've done enough research to know that threading is likely going to be involved. However, I'm having trouble applying what I've seen in simple examples of threading to what I'm trying to do. What I currently have:

        import Tkinter
        import psutil, time
        from PIL import Image, ImageTk

        class simpleapp_tk(Tkinter.Tk):
            def __init__(self, parent):
                Tkinter.Tk.__init__(self, parent)
                self.parent = parent
                self.initialize()

            def initialize(self):
                self.labelVariable = Tkinter.StringVar()
                self.label = Tkinter.Label(self, textvariable=self.labelVariable)
                self.label.pack()
                self.button = Tkinter.Button(self, text='button', command=self.A)
                self.button.pack()

            def A(self):
                G = str(round(psutil.cpu_percent(), 1)) + '%'
                print G
                self.labelVariable.set(G)

            def B(self):
                print "hello"

        if __name__ == "__main__":
            app = simpleapp_tk(None)
            app.mainloop()

    In the above code I'm basically trying to get command A continually running, while allowing command B to be done when the user presses the button.
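
    A sketch of one common Tkinter pattern that avoids threads altogether: after() schedules a callback on the event loop, so A can reschedule itself once a second while the button stays responsive and can be wired to B. The snippet below rewrites only the three methods from the class above.

            def initialize(self):
                self.labelVariable = Tkinter.StringVar()
                self.label = Tkinter.Label(self, textvariable=self.labelVariable)
                self.label.pack()
                self.button = Tkinter.Button(self, text='button', command=self.B)
                self.button.pack()
                self.A()                             # start the periodic update

            def A(self):
                self.labelVariable.set('%.1f%%' % psutil.cpu_percent())
                self.after(1000, self.A)             # run again in 1000 ms

            def B(self):
                print "hello"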

  • Multiple levels of 'collection.defaultdict' in Python

    - by Morlock
    Thanks to some great folks on SO, I discovered the possibilities offered by collections.defaultdict, notably its readability and speed. I have put it to use with success. Now I would like to implement three levels of dictionaries, the top two being defaultdict and the lowest one being int. I can't find the appropriate way to do this. Here is my attempt:

        from collections import defaultdict

        d = defaultdict(defaultdict)
        a = [("key1", {"a1": 22, "a2": 33}),
             ("key2", {"a1": 32, "a2": 55}),
             ("key3", {"a1": 43, "a2": 44})]
        for i in a:
            d[i[0]] = i[1]

    Now this works, but the following, which is the desired behavior, doesn't:

        d["key4"]["a1"] + 1

    I suspect that I should have declared somewhere that the second-level defaultdict is of type int, but I didn't find where or how to do so. The reason I am using defaultdict in the first place is to avoid having to initialize the dictionary for each new key. Any more elegant suggestion? Thanks pythoneers!
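
    A sketch of the usual fix: make the outer default_factory a lambda that builds the inner defaultdict(int), so missing keys at both levels spring into existence (a third dictionary level just nests the lambda once more).

        from collections import defaultdict

        d = defaultdict(lambda: defaultdict(int))

        d["key1"]["a1"] = 22
        d["key1"]["a2"] = 33

        print(d["key4"]["a1"] + 1)    # 1 -- missing keys default to int() == 0

        # three levels:
        # d3 = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))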

  • python appengine form-posted utf8 file issue

    - by khany
    Hi, I am trying to form-post an SQL file that consists of many INSERTs, e.g.

        INSERT INTO `TABLE` VALUES ('abcdé', 2759);

    Then I use re.search to parse it and extract the fields to put into my own datastore. The problem is that, although the file contains accented characters (note the é), once uploaded it loses them and either errors or stores a byte-string representation of them. Here's what I am currently using (and I have tried loads of alternatives):

        form = cgi.FieldStorage()
        uFile = form['sql']
        uSql = uFile.file.read()
        lineX = uSql.split("\n")  # to get each line and so on.

    Has anyone got a robust way of making this work? Remember I am on App Engine, so access to some libraries is restricted/forbidden.
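
    A sketch of the usual first step: decode the uploaded bytes to unicode before splitting or running re.search, and keep the values as unicode until they go into the datastore. The 'utf-8' guess depends on how the SQL dump was produced; 'latin-1' is a common fallback for mysqldump output.

        import cgi

        form = cgi.FieldStorage()
        uFile = form['sql']
        raw = uFile.file.read()               # bytes exactly as uploaded

        try:
            uSql = raw.decode('utf-8')        # assumed encoding of the dump
        except UnicodeDecodeError:
            uSql = raw.decode('latin-1')      # fallback; decodes any byte sequence

        for lineX in uSql.splitlines():
            # re.search against unicode text keeps 'abcdé' intact; App Engine
            # string properties accept unicode values directly.
            pass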
