Search Results

Search found 34110 results on 1365 pages for 'gdata python client'.


  • Plotting a cumulative graph of python datetimes

    - by ventolin
    Say I have a list of datetimes, and we know each datetime to be the recorded time of an event happening. Is it possible in matplotlib to graph the frequency of this event occurring over time, showing this data in a cumulative graph (so that each point is greater than or equal to all of the points that went before it), without preprocessing this list? (e.g. passing datetime objects directly to some wonderful matplotlib function) Or do I need to turn this list of datetimes into a list of dictionary items, such as {"year": 1998, "month": 12, "date": 15, "events": 92}, and then generate a graph from this list? Sorry if this seems like a silly question - I'm not all that familiar with matplotlib, and would like to save myself the effort of doing this the latter way if matplotlib can already deal with datetime objects itself.
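
    A minimal sketch with hypothetical sample data: matplotlib accepts datetime objects directly on the x-axis, so the cumulative curve only needs the list sorted and paired with a running count.

        from datetime import datetime, timedelta
        import matplotlib.pyplot as plt

        events = [datetime(2024, 1, 1) + timedelta(hours=3 * i) for i in range(50)]  # sample data
        events.sort()
        counts = range(1, len(events) + 1)   # cumulative number of events so far

        plt.step(events, counts, where='post')
        plt.xlabel('time')
        plt.ylabel('cumulative events')
        plt.gcf().autofmt_xdate()            # tilt the date labels
        plt.show()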

    Read the article

  • Optimizing python link matching regular expression

    - by Matt
    I have a regular expression to find links in some HTML:

        links = re.compile('<a(.+?)href=(?:"|\')?((?:https?://|/)[^\'"]+)(?:"|\')?(.*?)>(.+?)</a>', re.I).findall(data)

    It takes a very long time on certain HTML - one page it chokes on is http://freeyourmindonline.net/Blog/ . Any optimization advice?
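
    A hedged alternative sketch: the nested (.+?) groups make the pattern backtrack heavily, so one option is to drop the regex and let a real parser collect the anchors (html.parser in Python 3; the module is called HTMLParser in Python 2).

        from html.parser import HTMLParser

        class LinkCollector(HTMLParser):
            def __init__(self):
                HTMLParser.__init__(self)
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == 'a':
                    href = dict(attrs).get('href')
                    if href and (href.startswith('http') or href.startswith('/')):
                        self.links.append(href)

        data = '<p><a href="http://example.com/">example</a></p>'  # stand-in for the page HTML
        collector = LinkCollector()
        collector.feed(data)
        print(collector.links)  # ['http://example.com/']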

    Read the article

  • python destructuring-bind dictionary contents

    - by Stephen
    Hi, I am trying to 'destructure' a dictionary and associate values with variables named after its keys. Something like:

        params = {'a': 1, 'b': 2}
        a, b = params.values()

    but since dictionaries are not ordered, there is no guarantee that params.values() will return values in the order of (a, b). Is there a nice way to do this? Thanks
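
    A minimal sketch: pulling the values out by key makes the order explicit instead of relying on dict ordering; operator.itemgetter keeps it to one line.

        from operator import itemgetter

        params = {'a': 1, 'b': 2}
        a, b = itemgetter('a', 'b')(params)
        # equivalently: a, b = (params[k] for k in ('a', 'b'))
        print(a, b)  # 1 2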

    Read the article

  • python function that returns a function from list of functions

    - by thkang
    I want to make the following function: 1) its input is a number, and 2) the functions are indexed, so it returns the function whose index matches the given number. Here's what I came up with:

        def foo_selector(whatfoo):
            def foo1():
                return
            def foo2():
                return
            def foo3():
                return
            ...
            def foo999():
                return
            # something like: return foo[whatfoo]

    The problem is, how can I index the functions (foo#)? I can see functions foo1 to foo999 with dir(); however, dir() returns the names of those functions, not the functions themselves. In the example the foo-functions aren't doing anything, but in my program they perform different tasks and I can't generate them automatically. I write them myself, and have to return them by their name.
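
    A hedged sketch along the lines of the snippet above (only three placeholder functions shown): inside the selector, locals() maps each nested function's name to the function object, so the number can be turned into a name.

        def foo_selector(whatfoo):
            def foo1():
                return 'one'
            def foo2():
                return 'two'
            def foo3():
                return 'three'
            return locals()['foo%d' % whatfoo]

        print(foo_selector(2)())  # 'two'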

    Read the article

  • How to find links and modify an Html using BeautifulSoup in Python

    - by systempuntoout
    Starting from HTML input like this:

        <p>
          <a href="http://www.foo.com">this if foo</a>
          <a href="http://www.bar.com">this if bar</a>
        </p>

    using BeautifulSoup, I would like to change this HTML to:

        <p>
          <a href="http://www.foo.com">this if foo[1]</a>
          <a href="http://www.bar.com">this if bar[2]</a>
        </p>

    saving the parsed links in a dictionary, with a result like this:

        links_dict = {"1": "http://www.foo.com", "2": "http://www.bar.com"}

    Is it possible to do this using BeautifulSoup? Any valid alternative?
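
    A hedged sketch with BeautifulSoup 4 (the older BeautifulSoup uses findAll instead of find_all, but the approach is the same): enumerate the anchors, record each href, and rewrite the link text in place.

        from bs4 import BeautifulSoup

        html = ('<p><a href="http://www.foo.com">this if foo</a> '
                '<a href="http://www.bar.com">this if bar</a></p>')
        soup = BeautifulSoup(html, 'html.parser')

        links_dict = {}
        for i, a in enumerate(soup.find_all('a'), start=1):
            links_dict[str(i)] = a['href']
            a.string = '%s[%d]' % (a.get_text(), i)

        print(links_dict)  # {'1': 'http://www.foo.com', '2': 'http://www.bar.com'}
        print(soup)        # the rewritten markup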

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if result is None:
            """Not Found in Cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # (2592000 seconds = 30 days)
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server      Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1   15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the templates I show them along with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries made from templates? Or do I have to build some big structure in my view (with all the related tables), cache it, and send it to the template?
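
    A hedged sketch of the usual read-through pattern (Regions is the model from the question); note that the first snippet above assigns cache.get() to regions but then tests a different name, result, so the cached value is never actually used.

        from django.core.cache import cache

        def get_regions():
            cache_key = 'regions'
            regions = cache.get(cache_key)
            if regions is None:                       # test the value we just fetched
                regions = list(Regions.objects.all())  # evaluate the queryset once
                cache.set(cache_key, regions, 2592000)  # 30 days
            return regions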

    Read the article

  • Python + Expat: Error on &#0; entities

    - by clacke
    I have written a small function which uses ElementTree and XPath to extract the text contents of certain elements in an XML file:

        #!/usr/bin/env python2.5
        import doctest
        from xml.etree import ElementTree
        from StringIO import StringIO

        def parse_xml_etree(sin, xpath):
            """
            Takes as input a stream containing XML and an XPath expression.
            Applies the XPath expression to the XML and returns a generator
            yielding the text contents of each element returned.

            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem1').next()
            'one'
            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem2').next()
            'two'
            >>> parse_xml_etree(
            ...     StringIO('<test><null>&#0;</null><elem3>three</elem3></test>'),
            ...     '//elem2').next()
            'three'
            """
            tree = ElementTree.parse(sin)
            for element in tree.findall(xpath):
                yield element.text

        if __name__ == '__main__':
            doctest.testmod(verbose=True)

    The third test fails with the following exception:

        ExpatError: reference to invalid character number: line 1, column 13

    Is the &#0; entity illegal XML? Regardless of whether it is or not, the files I want to parse contain it, and I need some way to parse them. Any suggestions for another parser than Expat, or settings for Expat, that would allow me to do that?
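
    A hedged workaround sketch: &#0; is not a legal XML 1.0 character reference, so one option is to strip such references from the stream before handing it to ElementTree. The helper name and regex are illustrative assumptions, not a complete sanitiser (Python 3 shown; under Python 2 use StringIO.StringIO).

        import re
        from io import StringIO
        from xml.etree import ElementTree

        def strip_null_refs(sin):
            """Remove &#0; / &#00; style references that Expat rejects."""
            return StringIO(re.sub(r'&#0+;', '', sin.read()))

        tree = ElementTree.parse(strip_null_refs(
            StringIO('<test><null>&#0;</null><elem3>three</elem3></test>')))
        print(tree.findall('.//elem3')[0].text)  # 'three'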

    Read the article

  • Convert a GTK python script to C

    - by Jessica
    The following script will take a screenshot on a GNOME desktop:

        import gtk.gdk

        w = gtk.gdk.get_default_root_window()
        sz = w.get_size()
        pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, sz[0], sz[1])
        pb = pb.get_from_drawable(w, w.get_colormap(), 0, 0, 0, 0, sz[0], sz[1])
        if (pb != None):
            pb.save("screenshot.png", "png")
            print "Screenshot saved to screenshot.png."
        else:
            print "Unable to get the screenshot."

    Now, I've been trying to convert this to C and use it in one of the apps I am writing, but so far I've been unsuccessful. Is there any way to do this in C (on Linux)? Thanks! Jess.

    Read the article

  • python list/dict property best practice

    - by jterrace
    I have a class object that stores some properties that are lists of other objects. Each of the items in the list has an identifier that can be accessed with the id property. I'd like to be able to read and write from these lists but also be able to access a dictionary keyed by their identifier. Let me illustrate with an example:

        class Child(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Teacher(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Classroom(object):
            def __init__(self, children, teachers):
                self.children = children
                self.teachers = teachers

        classroom = Classroom([Child('389','pete')], [Teacher('829','bob')])

    This is a silly example, but it illustrates what I'm trying to do. I'd like to be able to interact with the classroom object like this:

        #access like a list
        print classroom.children[0]
        #append like it's a list
        classroom.children.append(Child('2344','joe'))
        #delete from like it's a list
        classroom.children.pop(0)

    But I'd also like to be able to access it like it's a dictionary, and the dictionary should be automatically updated when I modify the list:

        #access like a dict
        print classroom.childrenById['389']

    I realize I could just make it a dict, but I want to avoid code like this:

        classroom.childrendict[child.id] = child

    I also might have several of these properties, so I don't want to add functions like addChild, which feels very un-pythonic anyway. Is there a way to somehow subclass dict and/or list and provide all of these functions easily with my class's properties? I'd also like to avoid as much code as possible.
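
    One hedged sketch (the property name children_by_id is my own choice): derive the by-id dictionary from the list on access, so only the list has to be kept up to date.

        class Child(object):                 # same shape as the question's Child
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Classroom(object):
            def __init__(self, children, teachers):
                self.children = children
                self.teachers = teachers

            @property
            def children_by_id(self):
                # derived view, rebuilt from the list on each access
                return {c.id: c for c in self.children}

        classroom = Classroom([Child('389', 'pete')], [])
        classroom.children.append(Child('2344', 'joe'))
        print(classroom.children_by_id['2344'].name)  # 'joe'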

    Read the article

  • Surface Area of a Spheroid in Python

    - by user3678321
    I'm trying to write a function that calculates the surface area of a prolate or oblate spheroid. Here's a link to where I got the formulas (http://en.wikipedia.org/wiki/Prolate_spheroid and http://en.wikipedia.org/wiki/Oblate_spheroid). I think I've written them wrong, but here is my code so far:

        from math import pi, sqrt, asin, degrees, tanh

        def checkio(height, width):
            height = float(height)
            width = float(width)
            lst = []
            if height == width:
                r = 0.5 * width
                surface_area = 4 * pi * r**2
                surface_area = round(surface_area, 2)
                lst.append(surface_area)
            elif height > width:  # If spheroid is prolate
                a = 0.5 * width
                b = 0.5 * height
                e = 1 - a / b
                surface_area = 2 * pi * a**2 * (1 + b / a * e * degrees(asin**-1(e)))
                surface_area = round(surface_area, 2)
                lst.append(surface_area)
            elif height < width:  # If spheroid is oblate
                a = 0.5 * height
                b = 0.5 * width
                e = 1 - b / a
                surface_area = 2 * pi * a**2 * (1 + 1 - e**2 / e * tanh**-1(e))
                surface_area = round(surface_area, 2)
                lst.append(surface_area, 2)
            return lst
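
    For reference, a hedged corrected sketch of the two formulas, with e the eccentricity, a the equatorial semi-axis and c the polar semi-axis; the function name is my own.

        from math import pi, sqrt, asin, atanh

        def spheroid_area(height, width):
            a = width / 2.0    # equatorial semi-axis
            c = height / 2.0   # polar semi-axis
            if a == c:                                  # sphere
                return 4 * pi * a ** 2
            if c > a:                                   # prolate: e = sqrt(1 - a^2/c^2)
                e = sqrt(1 - a ** 2 / c ** 2)
                return 2 * pi * a ** 2 * (1 + (c / (a * e)) * asin(e))
            e = sqrt(1 - c ** 2 / a ** 2)               # oblate:  e = sqrt(1 - c^2/a^2)
            return 2 * pi * a ** 2 * (1 + ((1 - e ** 2) / e) * atanh(e))

        print(round(spheroid_area(2.0, 2.0), 2))  # sphere of radius 1 -> 12.57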

    Read the article

  • Python: Comparing specific columns in two csv files

    - by coder999
    Say that I have two CSV files (file1 and file2) with contents as shown below:

        file1: fred,43,Male,"23,45",blue,"1, bedrock avenue"
        file2: fred,39,Male,"23,45",blue,"1, bedrock avenue"

    I would like to compare these two CSV records to see if columns 0, 2, 3, 4, and 5 are the same. I don't care about column 1. What's the most pythonic way of doing this? EDIT: Some example code would be appreciated.
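
    A minimal sketch using the csv module, so quoted fields like "23,45" are split correctly; the filenames and the row-by-row comparison are assumptions.

        import csv

        def rows_match(path1, path2, ignore=(1,)):
            with open(path1, newline='') as f1, open(path2, newline='') as f2:
                for row1, row2 in zip(csv.reader(f1), csv.reader(f2)):
                    keep1 = [v for i, v in enumerate(row1) if i not in ignore]
                    keep2 = [v for i, v in enumerate(row2) if i not in ignore]
                    if keep1 != keep2:
                        return False
            return True

        print(rows_match('file1', 'file2'))  # True for the sample records above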

    Read the article

  • Python: eliminating stack traces into library code?

    - by Mark Harrison
    When I get a runtime exception from the standard library, it's almost always a problem in my code and not in the library code. Is there a way to truncate the exception stack trace so that it doesn't show the guts of the library package?

    For example, I would like to get this:

        Traceback (most recent call last):
          File "./lmd3-mkhead.py", line 71, in <module>
            main()
          File "./lmd3-mkhead.py", line 66, in main
            create()
          File "./lmd3-mkhead.py", line 41, in create
            headver1[depotFile]=rev
        TypeError: Data values must be of type string or None.

    and not this:

        Traceback (most recent call last):
          File "./lmd3-mkhead.py", line 71, in <module>
            main()
          File "./lmd3-mkhead.py", line 66, in main
            create()
          File "./lmd3-mkhead.py", line 41, in create
            headver1[depotFile]=rev
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 276, in __setitem__
            _DeadlockWrap(wrapF)  # self.db[key] = value
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/dbutils.py", line 68, in DeadlockWrap
            return function(*_args, **_kwargs)
          File "/usr/anim/modsquad/oses/fc11/lib/python2.6/bsddb/__init__.py", line 275, in wrapF
            self.db[key] = value
        TypeError: Data values must be of type string or None.
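
    A hedged sketch of one way to do the filtering yourself: install an excepthook that drops frames whose source file lives under the interpreter's install prefix. The prefix test is a heuristic assumption, not an exact library detector.

        import sys
        import traceback

        def app_only_excepthook(exc_type, exc_value, tb, lib_prefix=sys.prefix):
            # keep only frames whose filename is outside the stdlib/site-packages tree
            frames = [f for f in traceback.extract_tb(tb)
                      if not f[0].startswith(lib_prefix)]
            sys.stderr.write('Traceback (most recent call last):\n')
            sys.stderr.write(''.join(traceback.format_list(frames)))
            sys.stderr.write(''.join(
                traceback.format_exception_only(exc_type, exc_value)))

        sys.excepthook = app_only_excepthook  # applies to uncaught exceptions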

    Read the article

  • Join a list of lists together into 1 list in Python

    - by dotty
    Hey all. I have a list which consists of many lists. Here is an example:

        [ [Obj, Obj, Obj, Obj], [Obj], [Obj], [ [Obj, Obj], [Obj, Obj, Obj] ] ]

    Is there a way to join all these items together into one list, so the output will be something like:

        [Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj]

    Thanks
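
    A minimal sketch with placeholder integers standing in for Obj: a recursive flatten, since the example nests more than one level deep (a single itertools.chain pass would only remove one level).

        def flatten(items):
            """Recursively flatten arbitrarily nested lists."""
            flat = []
            for item in items:
                if isinstance(item, list):
                    flat.extend(flatten(item))
                else:
                    flat.append(item)
            return flat

        nested = [[1, 2, 3, 4], [5], [6], [[7, 8], [9, 10, 11]]]
        print(flatten(nested))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]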

    Read the article

  • Working with a list, performing arithmetic logic in Python

    - by haea ohoh
    Suppose I have made a large list of numbers, and I want to make another one which I will add, pairwise, with the first list. Here's the first list, A:

        [109, 77, 57, 34, 94, 68, 96, 72, 39, 67, 49, 71, 121, 89, 61, 84, 45, 40, 104, 68, 54, 60, 68, 62, 91, 45, 41, 118, 44, 35, 53, 86, 41, 63, 111, 112, 54, 34, 52, 72, 111, 113, 47, 91, 107, 114, 105, 91, 57, 86, 32, 109, 84, 85, 114, 48, 105, 109, 68, 57, 78, 111, 64, 55, 97, 85, 40, 100, 74, 34, 94, 78, 57, 77, 94, 46, 95, 60, 42, 44, 68, 89, 113, 66, 112, 60, 40, 110, 89, 105, 113, 90, 73, 44, 39, 55, 108, 110, 64, 108]

    And here's B:

        [35, 106, 55, 61, 81, 109, 82, 85, 71, 55, 59, 38, 112, 92, 59, 37, 46, 55, 89, 63, 73, 119, 70, 76, 100, 49, 117, 77, 37, 62, 65, 115, 93, 34, 107, 102, 91, 58, 82, 119, 75, 117, 34, 112, 121, 58, 79, 69, 68, 72, 110, 43, 111, 51, 102, 39, 52, 62, 75, 118, 62, 46, 74, 77, 82, 81, 36, 87, 80, 56, 47, 41, 92, 102, 101, 66, 109, 108, 97, 49, 72, 74, 93, 114, 55, 116, 66, 93, 56, 56, 93, 99, 96, 115, 93, 111, 57, 105, 35, 99]

    How might I write the arithmetic addition logic, processing each pairwise value one by one (A[0] and B[0] through A[99] and B[99]) and producing the list C (A[0] + B[0] through A[99] + B[99])?
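
    A minimal sketch using the first few values of the lists above: zip pairs the elements and a list comprehension adds them.

        A = [109, 77, 57, 34]   # first few values of A above
        B = [35, 106, 55, 61]   # first few values of B above
        C = [a + b for a, b in zip(A, B)]
        print(C)  # [144, 183, 112, 95]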

    Read the article

  • Python finding substring between certain characters using regex and replace()

    - by jCuga
    Suppose I have a string with lots of random stuff in it, like the following:

        strJunk = "asdf2adsf29Value=five&lakl23ljk43asdldl"

    I'm interested in obtaining the substring sitting between 'Value=' and '&', which in this example would be 'five'. I can use a regex like the following:

        match = re.search(r'Value=?([^&>]+)', strJunk)
        >>> print match.group(0)
        Value=five
        >>> print match.group(1)
        five

    How come match.group(0) is the whole thing, 'Value=five', and group(1) is just 'five'? And is there a way for me to just get 'five' as the only result? (This question stems from me only having a tenuous grasp of regex.)

    I am also going to have to make a substitution in this string, such as the following:

        val1 = match.group(1)
        strJunk.replace(val1, "six", 1)

    which yields:

        'asdf2adsf29Value=six&lakl23ljk43asdldl'

    Considering that I plan on performing the above two tasks (finding the string between 'Value=' and '&', as well as replacing that value) over and over, I was wondering if there are any other more efficient ways of looking for the substring and replacing it in the original string. I'm fine sticking with what I've got, but I just want to make sure that I'm not taking up more time than I have to if better methods are out there.
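
    A hedged sketch: group(0) is always the entire text matched by the pattern, while group(1) is only what the first parenthesised group captured, so group(1) is the way to get just 'five'. Compiling the pattern once (slightly simplified below) and reusing it for both the search and the substitution avoids rebuilding it each time.

        import re

        strJunk = "asdf2adsf29Value=five&lakl23ljk43asdldl"
        pattern = re.compile(r'Value=([^&]+)')    # simplified variant of the question's regex

        value = pattern.search(strJunk).group(1)
        print(value)                              # 'five'

        strJunk = pattern.sub('Value=six', strJunk, count=1)
        print(strJunk)                            # 'asdf2adsf29Value=six&lakl23ljk43asdldl'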

    Read the article

  • optimize python code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving into a DB. If I comment out the save statement, it still takes a long time, so there is no problem with the database.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr)-1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        # and few others...
                        # ...
                    c = c + 1
                data.save()

    Any suggestions? Note: I already asked this question here but it was closed due to incomplete information.
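
    A hedged sketch (written against BeautifulSoup 4; the idea is the same with findAll in the older API): the expensive part above is re-parsing each tbody with a fresh BeautifulSoup via str(arr[i]); the already-parsed tags can be queried directly instead. The Data()/save logic is omitted here.

        from bs4 import BeautifulSoup

        def parse(text):
            soup = BeautifulSoup(text, 'html.parser')   # or 'lxml' if installed, which is faster
            for tbody in soup.find_all('tbody'):
                for td in tbody.find_all('td'):         # no second parse of str(tbody)
                    link = td.find('a', href=True)
                    if link is not None:
                        print(link['href'])             # the question stores this on Data()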

    Read the article

  • fetching from a specified index in a set using python

    - by tipu
    I'm using pagination on values from a set, so I need to get values from x to x + 20, which can be in the middle of a set with 50,000 entries. Is it possible to fetch these values by their position in the set? Or would it make more sense to do:

        result = []
        my_dict = dict(very_big_set)
        for i in range(30000, 30020):
            result.append(my_dict[i])
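
    A minimal sketch: itertools.islice walks to the requested window without building a dict; since a set has no defined order, sort (or otherwise fix an order) first if the page boundaries must be stable between requests. The sample set is a stand-in.

        from itertools import islice

        very_big_set = set(range(50000))                        # hypothetical stand-in
        page = list(islice(sorted(very_big_set), 30000, 30020))
        print(page[0], page[-1])                                # 30000 30019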

    Read the article

  • Custom constructors for models in Google App Engine (python)

    - by Nikhil Chelliah
    I'm getting back to programming for Google App Engine and I've found, in old, unused code, instances in which I wrote constructors for models. It seems like a good idea, but there's no mention of it online and I can't test to see if it works. Here's a contrived example, with no error-checking, etc.:

        class Dog(db.Model):
            name = db.StringProperty(required=True)
            breeds = db.StringListProperty()
            age = db.IntegerProperty(default=0)

            def __init__(self, name, breed_list, **kwargs):
                db.Model.__init__(**kwargs)
                self.name = name
                self.breeds = breed_list.split()

        rufus = Dog('Rufus', 'spaniel terrier labrador')
        rufus.put()

    The **kwargs are passed on to the Model constructor in case the model is constructed with a specified parent or key_name, or in case other properties (like age) are specified. This constructor differs from the default in that it requires that a name and breed_list be specified (although it can't ensure that they're strings), and it parses breed_list in a way that the default constructor could not. Is this a legitimate form of instantiation, or should I just use functions or static/class methods? And if it works, why aren't custom constructors used more often?
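
    A hedged sketch of the usual alternative, a classmethod factory, which leaves __init__ alone and so avoids interfering with how the framework instantiates model objects itself (the method name create is my own choice).

        from google.appengine.ext import db

        class Dog(db.Model):
            name = db.StringProperty(required=True)
            breeds = db.StringListProperty()
            age = db.IntegerProperty(default=0)

            @classmethod
            def create(cls, name, breed_list, **kwargs):
                # parent/key_name/age etc. still pass straight through to Model
                return cls(name=name, breeds=breed_list.split(), **kwargs)

        rufus = Dog.create('Rufus', 'spaniel terrier labrador')
        rufus.put()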

    Read the article

  • speed up calling lot of entities, and getting unique values, google app engine python

    - by user291071
    OK, this is a two-part question. I've seen and searched for several methods to get a list of unique values for a class, and haven't been particularly happy with any of them so far. Does anyone have a simple example of getting unique values, for instance for this code? Here is my super-slow example:

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        def uniqueLinkGet(tabl):
            start = time.time()
            dic = {}
            query = tabl.all()
            for obj in query:
                dic[obj.link] = 1
            end = time.time()
            print end-start
            return dic

    My second question: is calling, for instance, an iterator instead of fetch slower? Is there a faster method to do the code below, especially if the number of elements called is larger than 1000?

        query = LinkRating2.all()
        link1 = 'some random string'
        a = query.filter('link = ', link1)
        adic = {}
        for itema in a:
            adic[itema.user] = itema.rating2
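
    A hedged sketch for the first part, using LinkRating2 from the question: a set already gives unique values, and iterating the query lets the datastore fetch entities in batches behind the scenes rather than in one large fetch().

        def unique_links():
            links = set()
            for obj in LinkRating2.all():   # iterator: entities arrive in batches
                links.add(obj.link)
            return links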

    Read the article
