Search Results

Search found 20677 results on 828 pages for 'python team'.

  • Decorator that can take both init args and call args?

    - by digitala
    Is it possible to create a decorator which can be __init__'d with a set of
    arguments, then later have methods called with other arguments? For instance:

        from foo import MyDecorator

        bar = MyDecorator(debug=True)

        @bar.myfunc(a=100)
        def spam(): pass

        @bar.myotherfunc(x=False)
        def eggs(): pass

    If this is possible, can you provide a working example?
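
    A minimal sketch of one way this could work, reusing the names from the
    question (the implementation itself is an assumption, not an existing
    library):

        class MyDecorator(object):
            def __init__(self, debug=False):
                self.debug = debug  # configuration captured at __init__ time

            def myfunc(self, a=None):
                # a decorator factory: calling bar.myfunc(a=100) builds and
                # returns the real decorator, so both the __init__ arguments
                # and the call-time arguments are visible via closures
                def decorator(fn):
                    def wrapper(*args, **kwargs):
                        if self.debug:
                            print('calling %s with a=%r' % (fn.__name__, a))
                        return fn(*args, **kwargs)
                    return wrapper
                return decorator

            def myotherfunc(self, x=True):
                # analogous to myfunc, kept separate to mirror the question
                def decorator(fn):
                    def wrapper(*args, **kwargs):
                        if self.debug and x:
                            print('calling %s with x=%r' % (fn.__name__, x))
                        return fn(*args, **kwargs)
                    return wrapper
                return decorator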

  • How to calculate the cointegration of two lists?

    - by Damiano
    Hello everybody! Thank you in advance for your help! I have two lists with
    some stock prices, for example:

        a = [10.23, 11.65, 12.36, 12.96]
        b = [5.23, 6.10, 8.3, 4.98]

    I can calculate the correlation of these two lists with:

        import scipy.stats
        scipy.stats.pearsonr(a, b)[0]

    But I haven't found a method to calculate the cointegration of two lists.
    Could you give me some advice? Thank you very much!
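
    A sketch using the Engle-Granger cointegration test from statsmodels (an
    assumption: scipy itself has no cointegration routine, but
    statsmodels.tsa.stattools.coint does, and it needs far more than four
    observations to be meaningful):

        import numpy as np
        import statsmodels.tsa.stattools as ts

        np.random.seed(0)
        x = np.cumsum(np.random.normal(size=200))  # a random walk
        y = 2.0 * x + np.random.normal(size=200)   # x plus stationary noise

        t_stat, p_value, crit_values = ts.coint(x, y)
        print(p_value)  # a small p-value suggests the series are cointegrated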

  • Appengine backreferences - need composite index?

    - by davezor
    I have a query that has very recently started to throw: "The built-in
    indices are not efficient enough for this query and your data. Please add
    a composite index for this query." I checked the line on which this
    exception is being thrown, and the problem query is this one:

        count = self.vote_set.filter("direction =", 1).count()

    This is literally a one-filter operation using appengine's built-in
    backreferences. I have no idea how to optimize this query... anyone have
    any suggestions? I tried to add this index:

        - kind: Vote
          properties:
          - name: direction
            direction: desc

        - kind: Vote
          properties:
          - name: direction

    And I got a message (obviously) saying this was an unnecessary index.
    Thanks for your help in advance.
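
    One pattern worth sketching here (hypothetical model names; this sidesteps
    the index question rather than answering it): counting a large vote set at
    read time gets expensive on App Engine, so a denormalized counter updated
    transactionally at write time is a common alternative:

        from google.appengine.ext import db

        class VoteTarget(db.Model):
            upvote_count = db.IntegerProperty(default=0)

        def record_upvote(target_key):
            def txn():
                # read-modify-write inside a transaction keeps the count safe
                target = db.get(target_key)
                target.upvote_count += 1
                target.put()
            db.run_in_transaction(txn)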

  • Can I override a query in Django?

    - by stinkypyper
    I know you can override the delete and save methods in Django models, but
    can you override a select query somehow, to intercept it and change a
    parameter slightly? I have a hashed value I want to check for, and would
    like to keep the hashing internal to the model.
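
    A hedged sketch of one common pattern: a custom manager that hashes the
    lookup value before filtering, so callers never see the hashing (the model
    and field names here are hypothetical):

        import hashlib

        from django.db import models

        def _hash_value(raw):
            return hashlib.sha256(raw.encode('utf-8')).hexdigest()

        class TokenManager(models.Manager):
            def get_by_raw_value(self, raw):
                # callers pass the plain value; only the model knows the hash
                return self.get(hashed_value=_hash_value(raw))

        class Token(models.Model):
            hashed_value = models.CharField(max_length=64, unique=True)
            objects = TokenManager()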

  • gevent, sockets and synchronisation

    - by schlamar
    I have multiple greenlets sending on a common socket. Is it guaranteed
    that each packet sent via socket.sendall is kept separate, or do I have to
    acquire a lock before each call to sendall? The scenario I want to prevent:

        g1 sends ABCD
        g2 sends 1234
        received data is mixed up, for example AB1234CD
        expected is either ABCD1234 or 1234ABCD

    Update: After a look at the source code I think this scenario cannot
    happen. But I have to use a lock anyway, because g1 or g2 can crash on the
    sendall. Can someone confirm this?
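
    A minimal sketch of the locked variant (assuming gevent's lock module;
    older gevent versions expose a similar class from gevent.coros):

        from gevent.lock import Semaphore

        send_lock = Semaphore()

        def send_packet(sock, data):
            # one greenlet writes at a time, so a partial or failed sendall
            # in one greenlet can never interleave with another's bytes
            with send_lock:
                sock.sendall(data)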

  • List filtering: list comprehension vs. lambda + filter

    - by Agos
    I happened to find myself having a basic filtering need: I have a list and
    I have to filter it by an attribute of the items. My code looked like this:

        list = [i for i in list if i.attribute == value]

    But then I thought, wouldn't it be better to write it like this?

        filter(lambda x: x.attribute == value, list)

    It's more readable, and if needed for performance the lambda could be
    taken out to gain something. Question is: are there any caveats in using
    the second way? Any performance difference? Am I missing the Pythonic Way™
    entirely and should do it in yet another way (such as using itemgetter
    instead of the lambda)? Thanks in advance
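
    A quick, illustrative timeit comparison (numbers vary by machine; on
    Python 3, filter() returns an iterator, so it is wrapped in list() here
    for a fair comparison):

        import textwrap
        import timeit

        setup = textwrap.dedent("""
            class Item(object):
                def __init__(self, attribute):
                    self.attribute = attribute
            items = [Item(i % 5) for i in range(1000)]
            value = 3
        """)

        print(timeit.timeit('[i for i in items if i.attribute == value]',
                            setup=setup, number=1000))
        print(timeit.timeit('list(filter(lambda x: x.attribute == value, items))',
                            setup=setup, number=1000))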

  • Conditional CellRendererCombo in pyGTK TreeView

    - by Präriewolf
    I have a two-column TreeView attached to a ListStore. Both columns are
    CellRendererCombo combo boxes. When the user selects an entry in the first
    box, I need to dynamically load a set of options in the second. For
    example, the behavior I want is:

        On row 0, the user selects "Alphabet" in the first column box. The
        second column box is populated with the letters "A-Z".

        On row 1, the user selects "Numbers" in the first column box. The
        second column box is populated with the numbers "0-9".

        On row 2, the user selects "Alphabet" in the first column box. The
        second column box is populated with the letters "A-Z".

        etc.

    Does anyone know how to do this, or seen any open source pygtk or gtk
    projects with similar behavior that I can analyze?
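
    A hedged sketch of one approach (the surrounding TreeView setup is
    omitted): store a per-row model in a hidden ListStore column, bind it to
    the second renderer's "model" property, and swap it when the first column
    is edited:

        import gtk

        alphabet_model = gtk.ListStore(str)
        for code in range(ord('A'), ord('Z') + 1):
            alphabet_model.append([chr(code)])

        number_model = gtk.ListStore(str)
        for n in range(10):
            number_model.append([str(n)])

        submodels = {'Alphabet': alphabet_model, 'Numbers': number_model}

        # columns: first combo text, second combo text, second combo's model
        liststore = gtk.ListStore(str, str, gtk.ListStore)

        def on_first_combo_edited(renderer, path, new_text):
            liststore[path][0] = new_text
            liststore[path][1] = ''                    # reset stale choice
            liststore[path][2] = submodels[new_text]   # swap row's submodel

        # when building the second column, map its model from column 2:
        # second_column.add_attribute(second_renderer, 'model', 2)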

  • Actual SQL statement after bind variables specified

    - by bioffe
    I am trying to log every SQL statement executed from my scripts. However,
    there is one problem I haven't been able to overcome: is there a way to
    compute the actual SQL statement after the bind variables have been
    substituted? In SQLite I had to compute the statement to be executed
    manually, using the code below:

        def __sql_to_str__(self, value, args):
            # note: IntType comes from the (Python 2) types module
            for p in args:
                if type(p) is IntType or p is None:
                    value = value.replace("?", str(p), 1)
                else:
                    value = value.replace("?", "'" + p + "'", 1)
            return value

    It seems cx_Oracle has cursor.parse() facilities, but I can't figure out
    how to trick cx_Oracle into computing my query before its execution.
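
    A sketch of the analogous helper for cx_Oracle, which uses named binds
    such as :id rather than ? placeholders (a hypothetical logging-only
    helper; the real query should still be executed with binds):

        def render_oracle_sql(statement, params):
            # crude textual substitution, purely for log output
            for name, value in params.items():
                if isinstance(value, (int, float)) or value is None:
                    rendered = str(value)
                else:
                    rendered = "'%s'" % value
                statement = statement.replace(':' + name, rendered)
            return statement

        print(render_oracle_sql("SELECT * FROM users WHERE id = :id",
                                {'id': 42}))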

  • Mechanize Submit Form Error: Insufficient items with name '10427'

    - by maneh
    I'm trying to submit a form with Mechanize. I have tried different ways,
    but the problem persists. Can anyone help me with this? Thank you in
    advance! This is the form I want to submit: http://www.stpairways.st/
    This is the code that I'm using:

        def stp_airways(url):
            import re
            import mechanize
            br = mechanize.Browser()
            br.open(url)
            print br.title()
            br.select_form(name="frmbook")
            br.form['TypeTrajet'] = ["1"]
            br.form['id_depart'] = ["11967"]
            br.form['id_arrivee'] = ["10427"]
            br.form['txtDateAller'] = "5/7/2014"
            br.form['txtDateRetour'] = "12/7/2014"
            br.form['TypePassager1u1000r0b1'] = ["1"]
            br.form['TypePassager2u1000r0b1'] = ["0"]
            br.form['TypePassager3u1000r0b1'] = ["0"]
            br.form['CodeIsoDeviseClient'] = ["17,20,23,24,25,26,27,28,29,30,31,33,34,36,37,64,65,67,68,70,73,80,81,95,96,103,147,151,152,159,160,162,169,170TP1TPF"]
            br.form['CodeIsoDeviseClient'] = ["EUR"]
            # submit
            response1 = br.submit()
            print response1.read()
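
    One way to debug this (a sketch; the control name is taken from the code
    above): mechanize raises "insufficient items" when a value is not among a
    list control's options, so printing what it actually parsed can help:

        for control in br.form.controls:
            if control.name == 'id_arrivee':
                print [item.name for item in control.items]

    If '10427' is missing from that list, the option is probably added by
    JavaScript on the live page, which mechanize does not execute.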

  • Website/App on Dotcloud is down

    - by user1576866
    The website is nhslhs.tk. The last time I edited anything was four days
    ago. I tried to add a calendar to the Django data table, but deleted it
    all and never actually pushed it to the DotCloud server. Also, a few hours
    before that I was able to update HTML files, push them, and see the edits
    on the website. The link should take you to a log-in page (this is visible
    when you google "nhslhs.tk" and click the cached view), but instead it
    takes you to a search/advertisement-esque page. On a few sites, people
    claimed this kind of error was due to a Trojan horse or to the server
    being down. Do you know how to fix this? Thanks!

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time series Excel file with a tri-level column MultiIndex that I
    would like to parse successfully, if possible. There are some results on
    Stack Overflow on how to do this for an index, but not for the columns,
    and the parse function has a header argument that does not seem to take a
    list of rows. The ExcelFile looks like the following:

        Column A is all the time series dates, starting at A4
        Column B has top_level1 (B1), mid_level1 (B2), low_level1 (B3), data (B4-B100+)
        Column C has null (C1), null (C2), low_level2 (C3), data (C4-C100+)
        Column D has null (D1), mid_level2 (D2), low_level1 (D3), data (D4-D100+)
        Column E has null (E1), null (E2), low_level2 (E3), data (E4-E100+)
        ...

    So there are two low_level values, many mid_level values, and a few
    top_level values, but the trick is that the top and mid level values are
    null and are assumed to be the values to the left. For instance, all the
    columns above would have top_level1 as the top multi-index value. My best
    idea so far is to use transpose, but that fills Unnamed: # everywhere and
    doesn't seem to work. In Pandas 0.13 read_csv seems to have a header
    parameter that can take a list, but this doesn't seem to work with parse.
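
    A sketch of one approach, assuming a pandas recent enough that read_excel
    accepts a list for header (the filename is hypothetical): read the three
    header rows as a column MultiIndex, then forward-fill the blank upper
    levels so each column inherits the labels to its left:

        import pandas as pd

        df = pd.read_excel('timeseries.xlsx', header=[0, 1, 2], index_col=0)

        def ffill_labels(labels):
            # carry the last real label forward over blank/'Unnamed' slots
            filled, last = [], None
            for lab in labels:
                text = str(lab)
                if text not in ('', 'nan') and not text.startswith('Unnamed'):
                    last = lab
                filled.append(last)
            return filled

        df.columns = pd.MultiIndex.from_arrays(
            [ffill_labels(df.columns.get_level_values(i)) for i in range(3)])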

  • Indexing over the results returned by selenium

    - by Guy
    Hi, I'm trying to index over the results returned by an xpath. For
    example, the xpath:

        xpath = '//a[@id="someID"]'

    can return several results, and I want to get a list of them. I thought
    this would work:

        numOfResults = sel.get_xpath_count(xpath)
        l = []
        for i in range(1, numOfResults + 1):
            l.append(sel.get_text('(%s)[%d]' % (xpath, i)))

    because doing something similar with Firefox's XPath Checker works:
    (//a[@id='someID'])[2] returns the 2nd result. Any ideas why the behavior
    is different, and how to do such a thing with selenium? Thanks
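
    A sketch of one likely fix (an assumption worth testing): Selenium RC only
    auto-detects a locator as XPath when it starts with //, so a parenthesized
    expression may need an explicit xpath= prefix. Reusing sel and xpath from
    the question:

        results = []
        count = sel.get_xpath_count(xpath)
        for i in range(1, count + 1):
            results.append(sel.get_text('xpath=(%s)[%d]' % (xpath, i)))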

  • How to merge or copy anonymous session data into user data when user logs in?

    - by benhoyt
    This is a general question, or perhaps a request for pointers to other
    open source projects to look at: I'm wondering how people merge an
    anonymous user's session data into the authenticated user data when a
    user logs in. For example, someone is browsing around your website saving
    various items as favourites. He's not logged in, so they're saved to an
    anonymous user's data. Then he logs in, and we need to merge all that data
    into their (possibly existing) user data. Is this done in different,
    ad-hoc ways for different applications? Or are there best practices or
    other projects people can direct me to?
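
    A hedged sketch of one way this is commonly done in Django (the Favourite
    model and the session key are hypothetical): hook the user_logged_in
    signal and copy the session-stored items into the user's account:

        from django.contrib.auth.signals import user_logged_in
        from django.dispatch import receiver

        @receiver(user_logged_in)
        def merge_session_favourites(sender, request, user, **kwargs):
            session_favs = request.session.pop('favourites', [])
            for item_id in session_favs:
                # get_or_create keeps the merge idempotent for items the
                # user had already favourited while logged in
                Favourite.objects.get_or_create(user=user, item_id=item_id)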

  • Using adaptive step sizes with scipy.integrate.ode

    - by Mike
    The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have stepsize control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator. Namely, it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat. My problem has pretty variable timescales, so I'd like to just get the values at whatever time steps it needs to evaluate to achieve the required tolerances. That is, early on, things are changing slowly, so the output time steps can be big. But as things get interesting, the output time steps have to be smaller. I don't actually want dense output at equal intervals, I just want the time steps the adaptive function uses.
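
    A sketch of one route (an assumption: sufficiently recent scipy exposes
    set_solout for the dopri5/dop853 integrators, a callback invoked at every
    internal adaptive step, which captures exactly the time points the
    stepper chose):

        from scipy.integrate import ode

        def rhs(t, y):
            return -0.5 * y

        times, values = [], []

        def solout(t, y):
            times.append(t)
            values.append(y.copy())

        solver = ode(rhs).set_integrator('dopri5')
        solver.set_solout(solout)  # must be set right after set_integrator
        solver.set_initial_value([1.0], 0.0)
        solver.integrate(10.0)
        print(times)  # the adaptive steps, denser where things change fast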

  • List comprehension, map, and numpy.vectorize performance

    - by mcstrother
    I have a function foo(i) that takes an integer and takes a significant
    amount of time to execute. Will there be a significant performance
    difference between any of the following ways of initializing a:

        a = [foo(i) for i in xrange(100)]

        a = map(foo, range(100))

        vfoo = numpy.vectorize(foo)
        a = vfoo(range(100))

    (I don't care whether the output is a list or a numpy array.) Is there a
    better way?
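
    A hedged side note rather than a direct answer: when foo() itself
    dominates the runtime, the dispatch overhead of a comprehension, map, or
    numpy.vectorize is mostly noise, so parallelizing the calls can matter
    more (a sketch; it assumes foo is picklable and side-effect-free):

        from multiprocessing import Pool

        def foo(i):
            return i * i  # stand-in for the real, expensive function

        if __name__ == '__main__':
            pool = Pool()
            a = pool.map(foo, range(100))
            pool.close()
            pool.join()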

  • How do I serve a large file using Pylons?

    - by Chris R
    I am writing a Pylons-based download gateway. The gateway's client will
    address files by ID:

        /file_gw/download/1

    Internally, the file itself is accessed via HTTP from an internal file
    server:

        http://internal-srv/path/to/file_1.content

    The files may be quite large, so I want to stream the content. I store
    metadata about the file in a StoredFile model object:

        class StoredFile(Base):
            id = Column(Integer, primary_key=True)
            name = Column(String)
            size = Column(Integer)
            content_type = Column(String)
            url = Column(String)

    Given this, what's the best (ie: most architecturally-sound, performant,
    et al) way to write my file_gw controller?
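
    A sketch of one possible controller (Pylons conventions assumed:
    BaseController, Session, and response are the usual Pylons/SQLAlchemy
    names, and returning a generator relies on WSGI iterable responses):

        import urllib2

        class FileGwController(BaseController):
            def download(self, id):
                stored = Session.query(StoredFile).get(int(id))
                upstream = urllib2.urlopen(stored.url)
                response.content_type = stored.content_type
                response.headers['Content-Length'] = str(stored.size)

                def stream(chunk_size=64 * 1024):
                    # relay the upstream body chunk by chunk, so the
                    # whole file is never held in memory at once
                    while True:
                        chunk = upstream.read(chunk_size)
                        if not chunk:
                            break
                        yield chunk
                return stream()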

  • Object for storing strings captured from prints

    - by evg
        class MyWriter:
            def __init__(self, stdout):
                self.stdout = stdout
                self.dumps = []

            def write(self, text):
                self.stdout.write(smart_unicode(text).encode('cp1251'))
                self.dumps.append(text)

            def close(self):
                self.stdout.close()

        writer = MyWriter(sys.stdout)
        save = sys.stdout
        sys.stdout = writer

    I use the self.dumps list to store the data captured from prints. Is there
    a more convenient object for storing string lines in memory? Ideally I
    want to dump it to one big string; I can get that from the code above with
    "\n".join(self.dumps). Maybe it's better to just concatenate strings, as
    in self.dumps += text?
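
    A sketch of the standard-library alternative: an in-memory text buffer
    (cStringIO on Python 2, io.StringIO on Python 3) avoids the manual list
    bookkeeping and hands back one big string on demand:

        from cStringIO import StringIO  # io.StringIO on Python 3

        buf = StringIO()
        buf.write('captured line\n')
        buf.write('another line\n')
        big_string = buf.getvalue()  # everything written, as one string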

  • Aggregation over a few models - Django

    - by RadiantHex
    Hi folks, I'm trying to compute the average of a field over various
    subsets of a queryset. This works perfectly:

        Player.objects.order_by('-score').filter(sex='male').aggregate(Avg('level'))

    But if I try to compute it for the top 50 players, it does not work:

        Player.objects.order_by('-score').filter(sex='male')[:50].aggregate(Avg('level'))

    This last one returns the exact same result as the query above it, which
    is wrong. What am I doing wrong? Help would be very much appreciated!
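
    A sketch of one workaround (an assumption about Django versions of this
    era, where aggregate() ignored the slice): restrict the aggregate to the
    top-50 primary keys collected in a first query:

        from django.db.models import Avg

        top_ids = (Player.objects.order_by('-score')
                                 .filter(sex='male')
                                 .values_list('id', flat=True)[:50])
        avg = Player.objects.filter(id__in=list(top_ids)).aggregate(Avg('level'))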
