Search Results

Search found 19662 results on 787 pages for 'python module'.

Page 444/787

  • One-line expression to map dictionary to another

    - by No Such IP
    I have a dictionary like d = {'user_id': 1, 'user': 'user1', 'group_id': 3, 'group_name': 'ordinary users'} and a "mapping" dictionary like m = {'user_id': 'uid', 'group_id': 'gid', 'group_name': 'group'}. All I want is to "replace" the keys in the first dictionary with the corresponding values from the second (e.g. replace 'user_id' with 'uid', etc.). I know that keys are immutable and I know how to do it with an 'if/else' statement, but is there a way to do it as a one-line expression?
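
    A dict comprehension keeps this to one line (a sketch; m.get(k, k) falls back to the original key when it is not listed in the mapping):

        d = {'user_id': 1, 'user': 'user1', 'group_id': 3, 'group_name': 'ordinary users'}
        m = {'user_id': 'uid', 'group_id': 'gid', 'group_name': 'group'}

        renamed = {m.get(k, k): v for k, v in d.items()}
        print(renamed)  # {'uid': 1, 'user': 'user1', 'gid': 3, 'group': 'ordinary users'}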

    Read the article

  • Amazon S3 permissions

    - by Joe
    I'm trying to understand S3. How do you limit access to a file you upload to S3? For example, in a web application each user has files they can upload, but how do you restrict access so that only that user can read their files? It seems like query-string authentication requires an expiration date, and that won't work for me. Is there another way to do this?
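
    One common pattern, sketched below with boto3 (an assumption; the question doesn't name a client library): keep every object private, store it under a per-user key prefix, and generate a short-lived presigned URL for the owning user on each request, so nothing ever needs a public ACL. Bucket and key layout here are illustrative:

        import boto3

        s3 = boto3.client('s3')

        def url_for_user_file(user_id, filename, bucket='my-app-uploads'):
            key = 'users/{0}/{1}'.format(user_id, filename)  # per-user prefix
            return s3.generate_presigned_url(
                'get_object',
                Params={'Bucket': bucket, 'Key': key},
                ExpiresIn=300,  # expiry is inherent to query-string auth; regenerate per page view
            )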

    Read the article

  • Decorator that can take both init args and call args?

    - by digitala
    Is it possible to create a decorator which can be __init__'d with a set of arguments, then later have its methods called with other arguments? For instance:

        from foo import MyDecorator

        bar = MyDecorator(debug=True)

        @bar.myfunc(a=100)
        def spam():
            pass

        @bar.myotherfunc(x=False)
        def eggs():
            pass

    If this is possible, can you provide a working example?
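
    A minimal sketch of what such a MyDecorator could look like: __init__ stores the instance-level options, and each method is a decorator factory that captures its own call-time arguments before wrapping the target function:

        import functools

        class MyDecorator(object):
            def __init__(self, debug=False):
                self.debug = debug

            def myfunc(self, a=None):
                def decorate(fn):
                    @functools.wraps(fn)
                    def wrapper(*args, **kwargs):
                        if self.debug:
                            print("calling %s with a=%r" % (fn.__name__, a))
                        return fn(*args, **kwargs)
                    return wrapper
                return decorate

            # myotherfunc(x=...) would follow the same factory pattern.

        bar = MyDecorator(debug=True)

        @bar.myfunc(a=100)
        def spam():
            pass

        spam()  # prints: calling spam with a=100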

    Read the article

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time-series Excel file with a three-level column MultiIndex that I would like to parse if possible. There are some results on Stack Overflow on how to do this for an index, but not for the columns, and the parse function's header argument does not seem to take a list of rows. The ExcelFile looks like the following:

        Column A: all the time-series dates, starting at A4
        Column B: top_level1 (B1), mid_level1 (B2), low_level1 (B3), data (B4-B100+)
        Column C: null (C1), null (C2), low_level2 (C3), data (C4-C100+)
        Column D: null (D1), mid_level2 (D2), low_level1 (D3), data (D4-D100+)
        Column E: null (E1), null (E2), low_level2 (E3), data (E4-E100+)
        ...

    So there are two low_level values, many mid_level values, and a few top_level values, but the trick is that the top- and mid-level cells are null and are assumed to take the value to their left. For instance, all the columns above would have top_level1 as the top MultiIndex value. My best idea so far is to use transpose, but that fills everything with "Unnamed: #" and doesn't seem to work. In pandas 0.13, read_csv seems to have a header parameter that can take a list, but this doesn't seem to work with parse.
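
    A sketch, assuming a reasonably recent pandas: read_excel (unlike the older ExcelFile.parse) accepts a list for header, turning the first three rows into a column MultiIndex, and it generally forward-fills the blank upper-level cells left by merged headers (worth verifying on the actual file):

        import pandas as pd

        df = pd.read_excel(
            'timeseries.xlsx',   # hypothetical filename
            header=[0, 1, 2],    # rows 1-3 hold the top/mid/low level labels
            index_col=0,         # column A holds the dates
        )

        print(df.columns.nlevels)  # 3
        print(df['top_level1', 'mid_level1', 'low_level1'].head())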

    Read the article

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be: 2^6, 3^4, 4^3. My first attempt was to brute-force all combinations of A and X that make "sense". However, this becomes too slow when used for very large numbers in a big range, since these solutions are used as part of much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered whether using logarithms would be better, or whether there is a closed-form expression for this. I am open to any optimizations or alternatives for finding the solutions.
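
    A sketch of a root/logarithm-based alternative: iterate over the exponent only (its ceiling is log base 2 of the upper bound), and derive the candidate bases directly from integer roots instead of scanning every base:

        import math

        def mono_search(lo, hi):
            """Return all (base, power) with base >= 2, power >= 3 and lo <= base**power <= hi."""
            results = []
            max_power = int(math.log(hi, 2))  # the smallest base (2) allows the largest exponent
            for power in range(3, max_power + 1):
                # Start just below the integer root so floating-point error cannot skip a solution.
                base = max(2, int(lo ** (1.0 / power)))
                while base ** power <= hi:
                    if base ** power >= lo:
                        results.append((base, power))
                    base += 1
            return results

        print(mono_search(50, 100))  # [(4, 3), (3, 4), (2, 6)]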

    Read the article

  • File mode for creating+reading+appending+binary

    - by MihaiD
    I need to open a file for reading and writing. If the file is not found, it should be created. It should also be treated as binary on Windows. Can you tell me the file mode sequence I need to use for this? I tried 'r+ab' but that doesn't create the file if it is not found. Thanks
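
    One mode that appears to satisfy all four requirements is 'a+b': it opens for reading and appending, creates the file if it is missing, and is binary. The caveat is that writes always go to the end of the file. A small sketch:

        # 'a+b': binary, readable, appendable, created if missing
        with open('data.bin', 'a+b') as f:
            f.write(b'\x00\x01')
            f.seek(0)        # reposition before reading; later writes still land at EOF
            print(f.read())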

    Read the article

  • Django: How to write the reverse function for the following

    - by ninja123
    The urlconf and view are as follows:

        url(r'^register/$', register,
            {'backend': 'registration.backends.default.DefaultBackend'},
            name='registration_register'),

        def register(request, backend, success_url=None, form_class=None,
                     disallowed_url='registration_disallowed',
                     template_name='registration/registration_form.html',
                     extra_context=None):

    What I want to do is redirect users to the register page and specify a success_url. I tried reverse('registration.views.register', kwargs={'success_url': '/test/'}) but that doesn't seem to work. I've been trying for hours and can't get my head around it. Thanks
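
    A hedged sketch of why this fails and one workaround: reverse() can only fill groups captured in the URL pattern, and success_url here is a plain view keyword argument, so it has to be supplied per URL pattern through the extra-options dict rather than through reverse(). Names below are illustrative and assume django-registration's default backend:

        # urls.py
        url(r'^register/test/$', register,
            {'backend': 'registration.backends.default.DefaultBackend',
             'success_url': '/test/'},
            name='registration_register_test'),

        # anywhere else
        from django.core.urlresolvers import reverse
        reverse('registration_register_test')  # -> '/register/test/'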

    Read the article

  • Qt gstreamer problem

    - by ZolaKt
    Ptterb, can you post your full code please? I copied your code and added fvidscale_cap to the pipeline with:

        self.player.add(self.source, self.scaler, self.fvidscale_cap, self.sink)
        gst.element_link_many(self.source, self.scaler, self.fvidscale_cap, self.sink)

    From the main program I create a new QWidget and pass its winId() to the Vid constructor. The widget starts loading, but crashes. The output says:

        should be playing
        Segmentation fault

    Read the article

  • UnicodeDecodeError from a GET-parameter in webapp2

    - by Aneon
    I'm getting a UnicodeDecodeError when receiving a GET parameter from webapp2 that contains unicode characters and then using it in an NDB query. I get the same error message when manually running unicode() on the parameter in the handler, so either there is a problem in webapp2's URL routing or I've missed something. Preferably, all GET parameters would be converted to unicode before being passed into the handler, so I don't need to do manual conversions in all of my handlers. I actually think this worked in an earlier version. The full error message reads:

        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)

    The GET parameter contains the string göteborg. It looks fine when I raise an Exception on it, but gives an error when I (or NDB) call unicode() on it. EDIT: In NDB, it fails on the following code:

        File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\datastore_types.py", line 1562, in PackString
            pbvalue.set_stringvalue(unicode(value).encode('utf-8'))

    Thanks.
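
    A sketch of the usual workaround on Python 2: if the value reaches the handler as a UTF-8 byte string, unicode(value) implicitly decodes it as ASCII and fails on 'ö'; decoding it explicitly before the NDB query avoids that. Handler and parameter names here are illustrative:

        import webapp2

        class CityHandler(webapp2.RequestHandler):
            def get(self, city):
                # Route/GET arguments may arrive as UTF-8 byte strings on Python 2;
                # decode them explicitly instead of relying on unicode()'s ASCII default.
                if isinstance(city, str):
                    city = city.decode('utf-8')
                self.response.write(city)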

    Read the article

  • Launch an SWF full screen

    - by Geoff
    I have a swf file (a flash game). I want to run some script to open it in full-screen mode. I'm not attached to any browser, but I do run Linux, so a bash or otherwise generic answer is what I'm looking for. I'm also open to building a lightweight browser application if need be.

    Read the article

  • Why does my code show garbled text?

    - by zjm1126
        class sss(webapp.RequestHandler):
            def get(self):
                url = "http://www.google.com/"
                result = urlfetch.fetch(url)
                if result.status_code == 200:
                    self.response.out.write(result.content)

    and the view shows garbled text (the screenshot is not included in this excerpt). When I change the code to this:

        if result.status_code == 200:
            self.response.out.write(result.content.decode('utf-8').encode('gb2312'))

    it still shows garbled text (screenshot also not included). So what should I do? Thanks
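
    The raw bytes from urlfetch and the charset the browser assumes have to agree. A hedged sketch of one way to line them up: decode using whatever charset the upstream server declared, then emit UTF-8 and say so in the response header (handler name is illustrative):

        from google.appengine.api import urlfetch
        from google.appengine.ext import webapp

        class GoogleProxy(webapp.RequestHandler):
            def get(self):
                result = urlfetch.fetch("http://www.google.com/")
                if result.status_code == 200:
                    ctype = result.headers.get('content-type', '')
                    # Fall back to ISO-8859-1 if no charset is declared upstream.
                    charset = ctype.split('charset=')[-1] if 'charset=' in ctype else 'ISO-8859-1'
                    text = result.content.decode(charset, 'replace')
                    self.response.headers['Content-Type'] = 'text/html; charset=utf-8'
                    self.response.out.write(text)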

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. To do this I must take 64 tables in MS Access and merge them into one. The output must be a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access seems unable to perform a query this complex; it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives and would like to do the least time-consuming one, while also taking advantage of any opportunity to learn something new:

        1. Export each table to CSV, import into SQLite, and write a query there to do what Access fails to do (merge 64 tables).
        2. Export each table to CSV and write a script to read each one and merge them into a single CSV (see the sketch below).
        3. Somehow connect to the MS Access DB (API) and write a script to pull data from each table and merge it into a CSV file.

    QUESTION: What do you recommend?
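
    A minimal sketch of alternative 2, assuming every exported CSV has the same columns (file and folder names are illustrative):

        import csv
        import glob

        with open('merged.tsv', 'w', newline='') as out:
            writer = None
            for path in sorted(glob.glob('exported/*.csv')):
                with open(path, newline='') as f:
                    reader = csv.DictReader(f)
                    if writer is None:
                        writer = csv.DictWriter(out, fieldnames=reader.fieldnames, delimiter='\t')
                        writer.writeheader()
                    for row in reader:
                        writer.writerow(row)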

    Read the article

  • Mechanize Submit Form Error: Insufficient items with name '10427'

    - by maneh
    I'm trying to submit a form with Mechanize. I have tried different ways, but the problem persists. Can anyone help me with this? Thank you in advance! This is the form I want to submit: http://www.stpairways.st/ and this is the code that I'm using:

        def stp_airways(url):
            import re
            import mechanize
            br = mechanize.Browser()
            br.open(url)
            print br.title()
            br.select_form(name="frmbook")
            br.form['TypeTrajet'] = ["1"]
            br.form['id_depart'] = ["11967"]
            br.form['id_arrivee'] = ["10427"]
            br.form['txtDateAller'] = "5/7/2014"
            br.form['txtDateRetour'] = "12/7/2014"
            br.form['TypePassager1u1000r0b1'] = ["1"]
            br.form['TypePassager2u1000r0b1'] = ["0"]
            br.form['TypePassager3u1000r0b1'] = ["0"]
            br.form['CodeIsoDeviseClient'] = ["17,20,23,24,25,26,27,28,29,30,31,33,34,36,37,64,65,67,68,70,73,80,81,95,96,103,147,151,152,159,160,162,169,170TP1TPF"]
            br.form['CodeIsoDeviseClient'] = ["EUR"]
            # submit
            response1 = br.submit()
            print response1.read()

    Read the article

  • Django - Expression based model constraints

    - by rtmie
    Is it possible to set an expression-based constraint on a Django model? For example, I want to impose a constraint where an owner can have only one widget of a given type that is not in an expired state, but can have as many others as they like as long as those are expired. Obviously I can do this by overriding the save method, but I am wondering if it can be done by setting constraints, e.g. some derivative of the unique_together constraint:

        WIDGET_STATE_CHOICES = (
            ('NEW', 'NEW'),
            ('ACTIVE', 'ACTIVE'),
            ('EXPIRED', 'EXPIRED'),
        )

        class MyWidget(models.Model):
            owner = models.CharField(max_length=64)
            widget_type = models.CharField(max_length=10)
            widget_state = models.CharField(max_length=10, choices=WIDGET_STATE_CHOICES)

            # I'd like to be able to do something like:
            class Meta:
                unique_together = (("owner", "widget_type", "widget_state" != 'EXPIRED'),)
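
    A sketch of how this can be expressed declaratively in newer Django (the conditional UniqueConstraint arrived in 2.2); it reuses the model fields from the question:

        from django.db import models
        from django.db.models import Q

        WIDGET_STATE_CHOICES = (
            ('NEW', 'NEW'),
            ('ACTIVE', 'ACTIVE'),
            ('EXPIRED', 'EXPIRED'),
        )

        class MyWidget(models.Model):
            owner = models.CharField(max_length=64)
            widget_type = models.CharField(max_length=10)
            widget_state = models.CharField(max_length=10, choices=WIDGET_STATE_CHOICES)

            class Meta:
                constraints = [
                    # Only one non-expired widget per owner/type; expired rows are exempt.
                    models.UniqueConstraint(
                        fields=['owner', 'widget_type'],
                        condition=~Q(widget_state='EXPIRED'),
                        name='one_live_widget_per_owner_and_type',
                    ),
                ]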

    Read the article

  • Django admin, filter objects by ManyToMany reference

    - by Nick Z
    Hello! There's the photologue application, a simple photo gallery for Django, implementing Photo and Gallery objects. The Gallery object has a ManyToMany field which references Photo objects. I need to be able to get a list of all Photos for a given Gallery. Is it possible to add a Gallery filter to Photo's admin page? If so, what's the best way to do it?
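
    A sketch of a custom admin filter built on Django's SimpleListFilter. It assumes the reverse accessor from Photo to Gallery is 'galleries' (check the related_name on the M2M field in your photologue version):

        from django.contrib import admin
        from photologue.models import Gallery, Photo

        class GalleryListFilter(admin.SimpleListFilter):
            title = 'gallery'
            parameter_name = 'gallery'

            def lookups(self, request, model_admin):
                return [(g.pk, str(g)) for g in Gallery.objects.all()]

            def queryset(self, request, queryset):
                if self.value():
                    return queryset.filter(galleries__pk=self.value())
                return queryset

        class PhotoAdmin(admin.ModelAdmin):
            list_filter = (GalleryListFilter,)

        admin.site.unregister(Photo)   # photologue registers its own admin for Photo
        admin.site.register(Photo, PhotoAdmin)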

    Read the article

  • Do not match if word appears in regex

    - by David542
    I have a URL, and I want it to NOT match if the word 'season' is contained in the URL. Here are two examples:

        CONTAINS SEASON, DO NOT MATCH:
        'http://imdb.com/title/tt0285331/episodes?this=1&season=7&ref_=tt_eps_sn_7'

        DOES NOT CONTAIN SEASON, MATCH:
        'http://imdb.com/title/tt0285331/'

    Here is what I have so far, but I'm afraid the .+ will match everything until the end. What would be the correct regex to use here?

        r'http://imdb.com/title/tt(\d)+/.+^[season].+'
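
    For what it's worth, ^[season] is an anchor followed by a character class (any one of the letters s, e, a, o, n), not the literal word. A negative lookahead is the usual way to say "the rest of the URL must not contain 'season'"; a sketch:

        import re

        pattern = re.compile(r'http://imdb\.com/title/tt\d+/(?!.*season)\S*')

        print(bool(pattern.match('http://imdb.com/title/tt0285331/')))  # True
        print(bool(pattern.match(
            'http://imdb.com/title/tt0285331/episodes?this=1&season=7&ref_=tt_eps_sn_7')))  # False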

    Read the article

  • nested list comprehension using intermediate result

    - by KentH
    I am trying to grok the output of a function which doesn't have the courtesy of setting a result code. I can tell it failed by the "error:" string, which is mixed into the stderr stream, often in the middle of a different conversion status message. I have the following list comprehension, which works but scans for the "error:" string twice. Since it only rescans the actual error lines it works fine, but it annoys me that I can't figure out how to use a single scan. Here's the working code:

        errors = [e[e.find('error:'):] for e in err.splitlines() if 'error:' in e]

    The obvious (and wrong) way to simplify is to save the "find" result:

        errors = [e[i:] for i in e.find('error:') if i != -1 for e in err.splitlines()]

    However, I get "UnboundLocalError: local variable 'e' referenced before assignment". Blindly reversing the 'for's in the comprehension also fails. How is this done? Thanks. Kent
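
    Two single-scan sketches: the first binds the intermediate find() result through a one-element for clause (works on any Python version), the second uses an assignment expression (Python 3.8+):

        err = "ok line\nsomething error: disk full\nanother error: timeout"

        errors = [e[i:] for e in err.splitlines()
                  for i in [e.find('error:')] if i != -1]

        errors = [e[i:] for e in err.splitlines()
                  if (i := e.find('error:')) != -1]

        print(errors)  # ['error: disk full', 'error: timeout']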

    Read the article

  • What plug in or module to use with WordPress? [migrated]

    - by Qacro
    I am developing a travel website where users can search for and book travel deals. It goes like this: providers create their travel deals (much as a blogger creates posts in WordPress); users book the deals they want; and providers, who have an account where they can see whether users have booked their deals, are notified by email and SMS about each newly booked (sold) deal. The site is going to be developed using WordPress. Is there any plugin or module I can use to accomplish this, or at least something similar that I can reconfigure rather than building the process from scratch?

    Read the article

  • Website/App on Dotcloud is down

    - by user1576866
    The website is nhslhs.tk. The last time I edited something was four days ago: I tried to add a calendar to the Django datable, but deleted it all and never actually pushed it to the Dotcloud server. A few hours before that I was able to update HTML files, push them, and see the edits on the website. The link should take you to a log-in page (you can see it if you google "nhslhs.tk" and view the cached copy), but instead it takes you to a search/advertisement-style parking page. On a few sites, people claimed this kind of error was due to a Trojan horse or the server being down. Do you know how to fix this? Thanks!

    Read the article

  • Converting time period strings to value/unit pair

    - by randomtoor
    I need to parse the contents of a string that represents a time period. The format of the string is value/unit, e.g. 1s, 60min, 24h. I would like to separate the actual value (an int) and the unit (a str) into separate variables. At the moment I do it like this:

        def validate_time(time):
            binsize = time.strip()
            unit = re.sub('[0-9]', '', binsize)
            if unit not in ['s', 'm', 'min', 'h', 'l']:
                print "Error: unit {0} is not valid".format(unit)
                sys.exit(2)
            tmp = re.sub('[^0-9]', '', binsize)
            try:
                value = int(tmp)
            except ValueError:
                print "Error: {0} is not valid".format(time)
                sys.exit(2)
            return value, unit

    However, it is not ideal, as strings like 1m0 are also (wrongly) validated (value=10, unit=m). What is the best way to validate/parse this input?
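
    A sketch that validates and splits in one step with a single anchored regex; it rejects strings like 1m0 because the pattern has to match the whole input:

        import re

        _PERIOD_RE = re.compile(r'(\d+)\s*(s|min|m|h|l)$')

        def validate_time(time_str):
            match = _PERIOD_RE.match(time_str.strip())
            if not match:
                raise ValueError("invalid time period: {0!r}".format(time_str))
            return int(match.group(1)), match.group(2)

        print(validate_time('60min'))  # (60, 'min')
        print(validate_time('24h'))    # (24, 'h')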

    Read the article

  • Stopping long-running requests in Pylons

    - by Jack
    I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
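
    A sketch of one approach that is independent of Pylons itself: run the calculation in a separate process and abandon it after a deadline, so the web worker is freed for other requests. Names and the timeout value are illustrative, the callable must be picklable on platforms that spawn rather than fork, and error handling is omitted:

        import multiprocessing

        def _worker(queue, func, args):
            queue.put(func(*args))

        def run_with_timeout(func, args=(), timeout_s=30):
            queue = multiprocessing.Queue()
            proc = multiprocessing.Process(target=_worker, args=(queue, func, args))
            proc.start()
            proc.join(timeout_s)
            if proc.is_alive():
                proc.terminate()
                proc.join()
                raise RuntimeError("calculation exceeded {0}s".format(timeout_s))
            return queue.get()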

    Read the article
