Search Results

Search found 64010 results on 2561 pages for 'google app engine python'.

Page 60/2561 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • Python - CSV: Large file with rows of different lengths

    - by dassouki
    In short, I have a 20,000,000-line CSV file with rows of different lengths. This is due to archaic data loggers and proprietary formats; we get the end result as a CSV file in the following format. My goal is to insert this file into a Postgres database. How can I do the following: keep the first 8 columns and my last 2 columns, to get a consistent CSV file, and add a column to the file?
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0 img_id.jpg, -50
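
    A minimal sketch of one way to do the trimming (my own, not from the linked answers; the file names are placeholders): keep the first 8 and the last 2 fields of every row, prepend a running id as the extra column, and write out a consistent CSV that can then be loaded into Postgres.

        import csv

        with open("input.csv") as src, open("fixed.csv", "w") as dst:
            reader = csv.reader(src)
            writer = csv.writer(dst)
            for i, row in enumerate(reader, start=1):
                # first 8 columns + last 2 columns, plus a new id column up front
                writer.writerow([i] + row[:8] + row[-2:])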

    Read the article

  • Using httplib2 in python 3 properly? (Timeout problems)

    - by Sho Minamimoto
    Hey, first-time post; I'm really stuck on httplib2. I've been reading up on it from diveintopython3.org, but it mentions nothing about a timeout function. I looked up the documentation, but the only thing I see is the ability to pass a timeout int, and no units are specified (seconds? milliseconds? What's the default if None?). This is what I have (I also have code to check what the response is and try again, but it never tries more than once):
        h = httplib2.Http('.cache', timeout=None)
        for url in list:
            response, content = h.request(url)
            # more stuff...
    So the Http object stays around until some arbitrary time, but I'm downloading a ton of pages from the same server, and after a while it hangs on getting a page. No errors are thrown; the thing just hangs at a page. So then I try:
        h = httplib2.Http('.cache', timeout=None)
        for url in list:
            try:
                response, content = h.request(url)
            except:
                h = httplib2.Http('.cache', timeout=None)
            # more stuff...
    But then it recreates another Http object every time (goes down the 'except' path). I don't understand how to keep fetching with the same object until it expires and I make another. Also, is there a way to set a timeout on an individual request? Thanks for the help!
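
    A sketch of one way to handle this (my own; it assumes the timeout argument is in seconds, since it is handed to the underlying sockets, and that a stalled request surfaces as socket.timeout): keep a single Http object and retry only the request that timed out.

        import socket
        import httplib2

        h = httplib2.Http('.cache', timeout=10)   # one cached connection pool, 10-second socket timeout

        def fetch(url, retries=3):
            for attempt in range(retries):
                try:
                    return h.request(url)         # the same Http object is reused across calls
                except socket.timeout:
                    continue                      # retry the request, keep the object
            raise RuntimeError("gave up on %s after %d attempts" % (url, retries))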

    Read the article

  • Generating permutation in Python with specific rule

    - by twfx
    Let's say a = [A, B, C, D], where each element has a weight w and is set to 1 if selected, 0 otherwise. I'd like to generate the permutations in the order below:
        1,1,1,1
        1,1,1,0
        1,1,0,1
        1,1,0,0
        1,0,1,1
        1,0,1,0
        1,0,0,1
        1,0,0,0
    Let w = [1,2,3,4] for items A, B, C, D, and max_weight = 4. For each permutation, if the accumulated weight exceeds max_weight, stop the calculation for that permutation and move to the next one. For example:
        1,1,1   --> 6 > 4, exceeded, stop, move to next
        1,1,1   --> 6 > 4, exceeded, stop, move to next
        1,1,0,1 --> 7 > 4, finished, move to next
        1,1,0,0 --> 3, finished, move to next
        1,0,1,1 --> 8 > 4, finished, stop, move to next
        1,0,1,0 --> 4, finished, move to next
        1,0,0,1 --> 5 > 4, finished, move to next
        1,0,0,0 --> 1, finished, move to next
    [1,0,1,0] is the best combination that does not exceed max_weight = 4. My questions are: (1) what algorithm generates the required permutations, or how else could I generate them? (2) Since the number of elements can be up to 10000, and the calculation stops as soon as the accumulated weight for a branch exceeds max_weight, it is not necessary to generate all permutations before the calculation; how can the algorithm in (1) generate permutations on the fly?
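
    A sketch of a pruning generator that matches this ordering (my own suggestion, not from the post): recurse over the items, trying "selected" before "not selected", and abandon a branch as soon as its accumulated weight exceeds max_weight, so selections are produced lazily rather than all at once.

        def selections(weights, max_weight, prefix=(), total=0):
            if total > max_weight:            # prune this whole branch early
                return
            if len(prefix) == len(weights):
                yield prefix, total           # a complete selection within the limit
                return
            w = weights[len(prefix)]
            for sel in selections(weights, max_weight, prefix + (1,), total + w):
                yield sel                     # "1" (selected) branch first, as in the required order
            for sel in selections(weights, max_weight, prefix + (0,), total):
                yield sel

        best = max(selections((1, 2, 3, 4), 4), key=lambda pair: pair[1])
        print(best)   # ((1, 0, 1, 0), 4)

    For 10000 items a recursive generator would hit Python's recursion limit, so the same idea would need an explicit stack, but the pruning logic stays the same.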

    Read the article

  • How is the 'is' keyword implemented in Python?

    - by Srikanth
    ... the is keyword that can be used for equality in strings:
        >>> s = 'str'
        >>> s is 'str'
        True
        >>> s is 'st'
        False
    I tried both __is__() and __eq__(), but they didn't work:
        >>> class MyString:
        ...     def __init__(self):
        ...         self.s = 'string'
        ...     def __is__(self, s):
        ...         return self.s == s
        ...
        >>> m = MyString()
        >>> m is 'ss'
        False
        >>> m is 'string'  # <--- Expected to work
        False
        >>> class MyString:
        ...     def __init__(self):
        ...         self.s = 'string'
        ...     def __eq__(self, s):
        ...         return self.s == s
        ...
        >>> m = MyString()
        >>> m is 'ss'
        False
        >>> m is 'string'  # <--- Expected to work, but again failed
        False
    Thanks for your help!
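
    A short illustration of what is going on (my own note): is tests object identity and cannot be overridden; there is no __is__ hook, and == is the operator that dispatches to __eq__.

        class MyString:
            def __init__(self):
                self.s = 'string'
            def __eq__(self, other):      # invoked by ==, never by `is`
                return self.s == other

        m = MyString()
        s = 'string'
        print(m == s)   # True  -- == dispatches to __eq__
        print(m is s)   # False -- `is` compares object identity, __eq__ is never called
        print(s is s)   # True  -- the same object on both sides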

    Read the article

  • How to build sqlite for Python 2.4?

    - by Verrtex
    I would like to use the pysqlite interface between Python and an SQLite database. I already have Python and SQLite on my computer, but I have trouble with the installation of pysqlite. During the installation I get the following error message:
        error: command 'gcc' failed with exit status 1
    As far as I understand, the problem appears because my Python version is 2.4.3 and SQLite has been integrated into Python since 2.5. However, I also found out that it IS possible to build sqlite for Python 2.4 (using some tricks, probably). Does anybody know how to build sqlite for Python 2.4? As another option I could try to install a higher version of Python, but I do not have root privileges. Does anybody know which will be the easier way to solve the problem (build SQLite for Python 2.4, or install a newer version of Python)? I have to mention that I would not like to overwrite the old version of Python. Thank you in advance.
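
    One hedged note of my own: pysqlite can be installed into a user directory with distutils' alternate install schemes (for example python setup.py install --home=$HOME/local, then adding that directory to PYTHONPATH), which avoids the need for root. If the gcc step succeeds, the module is imported under the pysqlite2 name on Python 2.4, since the stdlib sqlite3 module only appeared in 2.5; a quick smoke test, with a placeholder database path:

        from pysqlite2 import dbapi2 as sqlite3

        conn = sqlite3.connect("test.db")   # "test.db" is just a placeholder path
        conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
        conn.close()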

    Read the article

  • how to: dynamically load google ajax api into chrome extension content script

    - by Hoff
    Hi there, I'm trying to make use of Google's AJAX APIs in a Chrome extension's "content script". On a regular HTML page, I would just do this:
        <script src="http://www.google.com/jsapi"></script>
        <script>
          google.load("language", "1");
        </script>
    But since I'm trying to load the translation library dynamically from JS code, I've tried:
        script = document.createElement("script");
        script.src = "http://www.google.com/jsapi";
        script.type = "text/javascript";
        document.getElementsByTagName("head")[0].appendChild(script);
        google.load('language', '1');
    but the last line throws the following error:
        Uncaught TypeError: Object # has no method 'load'
    Funny enough, when I enter the same "google.load('language','1')" in Chrome's JS console, it works as intended... I've also tried with jQuery's .getScript(), but the same problem persists... Does anybody have any clue what the problem might be and how it could be solved? Many thanks in advance! Martin

    Read the article

  • Python try...except comma vs 'as' in except

    - by peter
    What is the difference between ',' and 'as' in except statements, e.g.:
        try:
            pass
        except Exception, exception:
            pass
    and:
        try:
            pass
        except Exception as exception:
            pass
    Is the second syntax legal in 2.6? It works in CPython 2.6 on Windows, but the 2.5 interpreter in Cygwin complains that it is invalid. If they are both valid in 2.6, which should I use?
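
    A small note of my own: 'except ... as' was added in 2.6 and is the only spelling Python 3 accepts, which is why the 2.5 interpreter rejects it; the comma spelling works only in Python 2. A minimal example of the portable form:

        try:
            1 / 0
        except ZeroDivisionError as exc:   # valid in Python 2.6+ and in Python 3
            print("caught: %s" % exc)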

    Read the article

  • importing files in python

    - by Yosy
    I have this file structure:
        Blog\DataObjects\User.py
        Blog\index.py
    I want to import the function (say_hello) in User.py from index.py. I am trying this code:
        from Blog.DataObjects.User import say_hello
        say_hello()
    And I get this error:
        Traceback (most recent call last):
          File "index.py", line 1, in <module>
            from Blog.DataObjects import User
        ImportError: No module named Blog.DataObjects
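
    A likely fix, though it is a guess since the post doesn't show the directory contents: mark DataObjects as a package and import it relative to the directory the script runs from, because when you run "python index.py" inside Blog, it is Blog itself that sits on sys.path, not its parent.

        #   Blog/
        #       index.py
        #       DataObjects/
        #           __init__.py      <- empty file that marks the folder as a package
        #           User.py
        #
        # Inside Blog/index.py, drop the leading "Blog." prefix:
        from DataObjects.User import say_hello

        say_hello()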

    Read the article

  • Python with PIL and Libjpeg on Leopard

    - by thescreamingdrills
    I'm having trouble getting pictures supported with PIL; it throws me this: "IOError: decoder jpeg not available". I installed PIL from a binary, not realizing I needed libjpeg. I installed libjpeg and freetype2 through Fink. I tried to reinstall PIL using the instructions from http://timhatch.com/ (bottom of the page):
        * Download the PIL 1.1.6 source package and have the Developer Tools already installed.
        * Patch setup.py with this patch so it can find the Freetype you already have (patch -p0 < leopard_freetype2.diff).
        * sudo apt-get install libjpeg if you have fink (otherwise, build by hand and adjust paths).
    But I'm still getting the same error. I'm on Leopard PPC.
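
    A quick check of my own for whether a rebuilt PIL actually picked up libjpeg ("photo.jpg" stands in for any JPEG you have on disk): opening a JPEG is lazy, so forcing the decode with load() is what triggers the "decoder jpeg not available" error if JPEG support is still missing.

        import Image   # PIL 1.1.6 installs the top-level Image module

        img = Image.open("photo.jpg")
        img.load()     # forces decoding; raises IOError here if libjpeg support is absent
        print(img.size)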

    Read the article

  • python __getattr__ help

    - by Stefanos Tux Zacharakis
    Reading a book, I came across this code:
        # module person.py
        class Person:
            def __init__(self, name, job=None, pay=0):
                self.name = name
                self.job = job
                self.pay = pay
            def lastName(self):
                return self.name.split()[-1]
            def giveRaise(self, percent):
                self.pay = int(self.pay * (1 + percent))
            def __str__(self):
                return "[Person: %s, %s]" % (self.name, self.pay)

        class Manager():
            def __init__(self, name, pay):
                self.person = Person(name, "mgr", pay)
            def giveRaise(self, percent, bonus=.10):
                self.person.giveRaise(percent + bonus)
            def __getattr__(self, attr):
                return getattr(self.person, attr)
            def __str__(self):
                return str(self.person)
    It does what I want it to do, but I do not understand the __getattr__ function in the Manager class. I know that it delegates all other attributes from the Person class, but I do not understand the way it works. For example, why from the Person class, when I do not explicitly tell it to? (person, the module, is different from Person, the class.) Any help is highly appreciated :)
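
    A stripped-down illustration of the mechanism (mine, not from the book): __getattr__ is only called when normal attribute lookup on the instance and its class fails, and this Manager's version answers every such miss by looking the name up on the embedded Person object, which is why attributes appear to come "from Person" without being named anywhere.

        class Person:
            def __init__(self, name):
                self.name = name
            def lastName(self):
                return self.name.split()[-1]

        class Manager:
            def __init__(self, name):
                self.person = Person(name)
            def __getattr__(self, attr):           # fallback: only runs on failed lookups
                return getattr(self.person, attr)  # delegate to the wrapped Person

        m = Manager("Tom Jones")
        print(m.lastName())   # found via __getattr__ -> Person.lastName -> "Jones"
        print(m.name)         # plain data attributes are delegated the same way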

    Read the article

  • python appengine form-posted utf8 file issue

    - by khany
    Hi, I am trying to form-post an SQL file that consists of many INSERTs, e.g.:
        INSERT INTO `TABLE` VALUES ('abcdé', 2759);
    Then I use re.search to parse it and extract the fields to put into my own datastore. The problem is that, although the file contains accented characters (note the é in 'abcdé'), once uploaded it loses them and either errors or stores a bytestring representation of them. Here's what I am currently using (and I have tried loads of alternatives):
        form = cgi.FieldStorage()
        uFile = form['sql']
        uSql = uFile.file.read()
        lineX = uSql.split("\n")  # to get each line
    and so on. Has anyone got a robust way of making this work? Remember I am on App Engine, so access to some libraries is restricted/forbidden.
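
    A hedged sketch of the usual fix (my suggestion; it assumes the dump really is UTF-8, which the post doesn't state): decode the uploaded bytes into unicode before splitting and regex-matching, inside the existing handler.

        import cgi

        form = cgi.FieldStorage()
        raw = form['sql'].file.read()     # the bytes exactly as uploaded
        uSql = raw.decode('utf-8')        # try 'latin-1' here if the dump isn't UTF-8
        for lineX in uSql.split("\n"):
            pass                          # feed each unicode line to the existing re.search() parsing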

    Read the article

  • Get Chinese Romanization from Google Translate API

    - by krubo
    The Google language translate API works cleanly to translate into Chinese:
        <script type="text/javascript" src="http://www.google.com/jsapi"></script>
        <script>
        google.load('language', '1');
        function googletrans(text) {
          google.language.translate(text, 'en', 'zh', function(result) {
            alert(result.translation);
          });
        }
        </script>
        <input onchange="googletrans(this.value);">
    Example input: "Hello". Result: "你好". My problem is I can't get the Romanization (pronunciation using English letters). This is a known issue. Now the data is right there on translate.google.com (example input: "Hello", result: "Ni hao") and I can even see it by pointing my browser to:
        http://translate.google.com/translate_a/t?client=t&text=hello&hl=en&sl=en&tl=zh-CN&otf=2&pc=0
    Result:
        {"sentences":[{"trans":"你好","orig":"hello","translit":"Ni hao"}],
         "dict":[{"pos":"interjection","terms":["?"]}],"src":"en"}
    But somehow when I try to get this URL with ajax it fails (XMLHttpRequest Exception 101). Is there any way to retrieve this Romanization data with ajax?

    Read the article

  • Massive Crawling requests from Google Apps Engine useragent

    - by SilentPlayer
    Hi friends, I'm badly affected by the 'Google AppEngine-Google' user agent: I'm receiving 5-6 requests per second on my HTTP server. This bot is crawling my site just like GoogleBot does. The following is a sample URL from my access logs:
        72.14.192.3 - - [19/May/2010:01:27:06 +0000] "GET /some-url/etc-123.htm HTTP/1.1" 200 4707 "-" "AppEngine-Google; (+http://code.google.com/appengine; appid: harpy000)"
    I have checked the IP address; it is registered to Google Inc. Can anyone tell me where I can report abuse to Google Inc., or give me any information about this issue? Thank you!

    Read the article

  • insert multiple elements in string in python

    - by Anurag Sharma
    I have to build a string like this:
        { name: "john", url: "www.dkd.com", email: "[email protected]" }
    where john, www.dkd.com and [email protected] are to be supplied by variables. I tried the following:
        s1 = "{'name:' {0},'url:' {1},'emailid:' {2}}"
        s1.format("john", "www.dkd.com", "[email protected]")
    and I am getting this error:
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        KeyError: "'name"
    I'm not able to understand what I am doing wrong.
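
    A sketch of the usual fix (mine, not the accepted answer): literal braces in a str.format() template must be doubled as {{ and }} so they are not parsed as replacement fields; an unescaped brace is exactly what triggers the KeyError here. The email below is just a placeholder.

        template = '{{ name: "{0}", url: "{1}", email: "{2}" }}'
        print(template.format("john", "www.dkd.com", "user@example.com"))
        # -> { name: "john", url: "www.dkd.com", email: "user@example.com" }

    If the target format only needs to be JSON-like, building a dict and calling json.dumps on it avoids the quoting and escaping issues altogether.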

    Read the article

  • Parsing JSON file with Python -> google map api

    - by Hannes
    Hi all, I am trying to get started with JSON in Python, but it seems that I misunderstand something about the JSON concept. I followed the Google API example, which works fine. But when I change the code to go a level lower in the JSON response (as shown below, where I try to get access to the location), I get the following error message:
        Traceback (most recent call last):
          File "geoCode.py", line 11, in <module>
            test = json.dumps([s['location'] for s in jsonResponse['results']], indent=3)
        KeyError: 'location'
    How can I get access to a lower information level in the JSON file in Python? Do I have to go to a higher level and search the result string? That seems very weird to me. Here is the code I have tried to run:
        import urllib, json
        URL2 = "http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=false"
        googleResponse = urllib.urlopen(URL2)
        jsonResponse = json.loads(googleResponse.read())
        test = json.dumps([s['location'] for s in jsonResponse['results']], indent=3)
        print test
    Thank you for your responses.
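
    A hedged guess at the fix, following the post's Python 2 code: in the V3 geocoder JSON each result nests its coordinates under "geometry", so the list comprehension needs to index one level deeper than 'location'.

        import urllib, json

        URL2 = "http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=false"
        jsonResponse = json.loads(urllib.urlopen(URL2).read())
        locations = [s['geometry']['location'] for s in jsonResponse['results']]
        print json.dumps(locations, indent=3)   # each entry holds "lat" and "lng"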

    Read the article

  • Python Regular Expressions: Capture lookahead value (capturing text without consuming it)

    - by Lattyware
    I wish to use regular expressions to split words into groups of (vowels, not_vowels, more_vowels), using a marker to ensure every word begins and ends with a vowel.
        import re
        MARKER = "~"
        VOWELS = {"a", "e", "i", "o", "u", MARKER}
        word = "dog"
        if word[0] not in VOWELS:
            word = MARKER + word
        if word[-1] not in VOWELS:
            word += MARKER
        re.findall("([%]+)([^%]+)([%]+)".replace("%", "".join(VOWELS)), word)
    In this example we get:
        [('~', 'd', 'o')]
    The issue is that I wish the matches to overlap - the last set of vowels should become the first set of the next match. This appears possible with lookaheads, if we replace the regex as follows:
        re.findall("([%]+)([^%]+)(?=[%]+)".replace("%", "".join(VOWELS)), word)
    We get:
        [('~', 'd'), ('o', 'g')]
    which means we are matching what I want. However, it now doesn't return the last set of vowels. The output I want is:
        [('~', 'd', 'o'), ('o', 'g', '~')]
    I feel this should be possible (if the regex can check for the second set of vowels, I see no reason it can't return them), but I can't find any way of doing it beyond the brute-force method: looping through the results after I have them, appending the first character of the next match to the previous match, and the last character of the string to the last match. Is there a better way I can do this? The two things that would work are capturing the lookahead value, or not consuming the text on a match while still capturing the value; I can't find any way of doing either.
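
    A sketch of one way to get exactly that output (my own): keep the lookahead so the trailing vowels are not consumed, but put a capturing group inside it, since captures made inside a lookahead are still reported by findall.

        import re

        MARKER = "~"
        VOWELS = "aeiou" + MARKER

        word = "dog"
        if word[0] not in VOWELS:
            word = MARKER + word
        if word[-1] not in VOWELS:
            word += MARKER

        pattern = "([{0}]+)([^{0}]+)(?=([{0}]+))".format(VOWELS)
        print(re.findall(pattern, word))   # [('~', 'd', 'o'), ('o', 'g', '~')]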

    Read the article

  • Types in Python - Google Appengine

    - by Chris M
    Getting a bit peeved now; I have a model and a class that's just storing a GET request in the database; basic tracking.
        class SearchRec(db.Model):
            WebSite = db.StringProperty()  # required=True
            WebPage = db.StringProperty()
            CountryNM = db.StringProperty()
            PrefMailing = db.BooleanProperty()
            DateStamp = db.DateTimeProperty(auto_now_add=True)
            IP = db.StringProperty()

        class AddSearch(webapp.RequestHandler):
            def get(self):
                searchRec = SearchRec()
                searchRec.WebSite = self.request.get('WEBSITE')
                searchRec.WebPage = self.request.get('WEBPAGE')
                searchRec.CountryNM = self.request.get('COUNTRY')
                searchRec.PrefMailing = bool(self.request.get('MAIL'))
                searchRec.IP = self.request.get('IP')
    Bool has my biscuit: I thought that bool(self.request.get('MAIL')) would set the type from the string, but no matter what I pass it, it still stores TRUE in the database. I had the same issue using required=True on strings in the model; the damn thing kept saying that nothing was being passed... but it had been. Ta
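
    A short illustration of the gotcha (my own note): bool() on a string only checks whether the string is empty, so bool(self.request.get('MAIL')) is True for "false", "0", or any other non-empty value; the parameter has to be compared explicitly.

        print(bool("false"))   # True  -- any non-empty string is truthy
        print(bool(""))        # False -- only the empty string is falsy

        def to_bool(param):
            """Parse a query-string flag explicitly (a suggested helper, not a GAE API)."""
            return param.lower() in ("true", "1", "yes", "on")

        print(to_bool("false"))   # False
        print(to_bool("TRUE"))    # True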

    Read the article

  • How to get the equivalent of the accuracy in Google Map Geocoder V3

    - by Scorpi0
    Hi, I want to get geocodes from Google, and I used to do it with V2 of the API. Google sends a pretty useful piece of information in the JSON, the accuracy; reference here: http://code.google.com/intl/fr-FR/apis/maps/documentation/javascript/v2/reference.html#GGeoAddressAccuracy
    In V3, Google doesn't seem to send me exactly the same information. There is the array "address_components", which seems bigger when the accuracy is better, but not exactly. For example, one request is accurate to the street number and the array is of size 8; another query is accurate only to the route, so less accurate, but the array is still of size 8, because a 'sublocality' entry appears that did not appear in the first case. OK, for a result, Google sends a 'types' field, which holds the 'best' accuracy. These types are listed here: http://code.google.com/intl/fr-FR/apis/maps/documentation/geocoding/#Types
    But there is no real order, and if I want results better than postal_code, I have no clue how to do that. So how can I get the equivalent of the V2 accuracy, without some dumb and horrible code?
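
    A hedged workaround of my own, not an official mapping: V3 results also carry geometry.location_type, whose four values can be ranked from coarse to fine and used as a rough stand-in for the old V2 accuracy when filtering results.

        LOCATION_TYPE_RANK = {
            "APPROXIMATE": 0,          # e.g. locality or postal-code level matches
            "GEOMETRIC_CENTER": 1,
            "RANGE_INTERPOLATED": 2,
            "ROOFTOP": 3,              # precise street address
        }

        def at_least(result, minimum="RANGE_INTERPOLATED"):
            """True if a geocoder result is at least as precise as `minimum`."""
            rank = LOCATION_TYPE_RANK.get(result["geometry"]["location_type"], -1)
            return rank >= LOCATION_TYPE_RANK[minimum]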

    Read the article
