Search Results

Search found 13815 results on 553 pages for 'gae python'.


  • How can I login to a website with Python?

    - by Shady
    How can I do it? I was trying to fetch a specific link (with urllib), but to do that I need to log in first. I have this source from the site:

      <form id="login-form" action="auth/login" method="post">
        <div>
          <!--label for="rememberme">Remember me</label><input type="checkbox" class="remember" checked="checked" name="remember me" /-->
          <label for="email" id="email-label" class="no-js">Email</label>
          <input id="email-email" type="text" name="handle" value="" autocomplete="off" />
          <label for="combination" id="combo-label" class="no-js">Combination</label>
          <input id="password-clear" type="text" value="Combination" autocomplete="off" />
          <input id="password-password" type="password" name="password" value="" autocomplete="off" />
          <input id="sumbitLogin" class="signin" type="submit" value="Sign In" />

    Is this possible?
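
    One way to do this, as a minimal sketch (not specific to this site): post the form fields through a session that keeps cookies. The field names "handle" and "password" are taken from the quoted HTML; the base URL, email, and password below are placeholders.

      import requests

      session = requests.Session()
      payload = {
          'handle': 'you@example.com',    # the email input is named "handle"
          'password': 'your-password',    # the real password input is named "password"
      }
      # "auth/login" is relative in the form, so it is resolved against the site root here
      resp = session.post('https://example.com/auth/login', data=payload)
      resp.raise_for_status()

      # the session now carries the login cookie, so protected pages can be fetched
      page = session.get('https://example.com/some/protected/page')
      print(page.status_code)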

    Read the article

  • Python library to detect if a file has changed between different runs?

    - by Stefano Borini
    Suppose I have a program A. I run it, and it performs some operation starting from a file foo.txt. Then A terminates. On a new run, A checks whether foo.txt has changed: if it has, A runs its operation again; otherwise it quits. Does a library function or external library exist for this? Of course it can be implemented with an md5 plus a file/db containing the md5; I just want to avoid reinventing the wheel.
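
    There is no dedicated stdlib helper for this, so the md5-plus-state-file approach mentioned above is about as simple as it gets. A minimal sketch (the state file name is only an illustration):

      import hashlib
      import os

      STATE_FILE = 'foo.txt.md5'   # hypothetical location for the stored digest

      def file_digest(path):
          with open(path, 'rb') as f:
              return hashlib.md5(f.read()).hexdigest()

      def has_changed(path, state_file=STATE_FILE):
          new = file_digest(path)
          old = None
          if os.path.exists(state_file):
              with open(state_file) as f:
                  old = f.read().strip()
          with open(state_file, 'w') as f:
              f.write(new)                 # remember the current digest for the next run
          return new != old

      if has_changed('foo.txt'):
          print('foo.txt changed since the last run; re-running the operation')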

    Read the article

  • What's the non brute force way to filter a Python dictionary?

    - by Thierry Lam
    I can filter the following dictionary like:

      data = {
          1: {'name': 'stackoverflow', 'traffic': 'high'},
          2: {'name': 'serverfault', 'traffic': 'low'},
          3: {'name': 'superuser', 'traffic': 'low'},
          4: {'name': 'mathoverflow', 'traffic': 'low'},
      }

      traffic = 'low'
      for k, v in data.items():
          if v['traffic'] == traffic:
              print k, v

    Is there an alternate way to do the above filtering?
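
    A comprehension is the usual compact alternative; any single lookup still has to scan every value, so pre-building an index keyed by traffic is the only way to avoid that when the same filter runs repeatedly. A sketch, assuming the data dict above is in scope:

      traffic = 'low'
      low_traffic = {k: v for k, v in data.items() if v['traffic'] == traffic}
      print(low_traffic)

      # if the same filter runs many times, an index keyed by traffic avoids rescanning
      from collections import defaultdict
      by_traffic = defaultdict(dict)
      for k, v in data.items():
          by_traffic[v['traffic']][k] = v
      print(by_traffic['low'])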

    Read the article

  • Can a python view template be made to be 'safe/secure' if I make it user editable?

    - by Blankman
    Say I need a templating system where a user can edit templates online using an online editor. They can use if tags, looping tags, etc., but ONLY against the specific objects that I inject into the template. Can this be made safe from security issues, i.e. the user somehow outputting SQL connection string information or scripting things outside of the allowed tags and injected objects?
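
    One possible approach (an assumption on my part, since the question names no particular engine): render the user-edited text with Jinja2's sandboxed environment, which restricts attribute access and calls, and pass in only the objects you deliberately expose. A sketch:

      from jinja2.sandbox import SandboxedEnvironment

      env = SandboxedEnvironment(autoescape=True)

      # user-edited template text; only "items" is made available to it
      user_template = "{% for item in items %}{{ item.name }} {% endfor %}"

      template = env.from_string(user_template)
      print(template.render(items=[{'name': 'widget'}, {'name': 'gadget'}]))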

    Read the article

  • Python: How to extract xml embedded in a html file?

    - by georgehu
    I have an html file with an xml snippet embedded; the source code is pasted on pastebin: http://pastebin.com/Hy0QaWk8 My task is to extract the text enclosed in the first textarea, which is an xml snippet, from the html, without any change to the original snippet. I'm able to get it by using BeautifulSoup, but it changes all the tag names to lower case.
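
    A minimal sketch of one workaround: since BeautifulSoup normalises tag names, grab the raw contents of the first textarea straight from the HTML source with a regex instead. This assumes there is no nested textarea; the file name is a placeholder, and any HTML entities inside the textarea would still need unescaping.

      import re

      with open('page.html', encoding='utf-8') as f:   # hypothetical file name
          html = f.read()

      match = re.search(r'<textarea[^>]*>(.*?)</textarea>', html,
                        flags=re.IGNORECASE | re.DOTALL)
      if match:
          xml_snippet = match.group(1)   # untouched text of the first textarea
          print(xml_snippet)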

    Read the article

  • In Python, are there builtin functions for elementwise boolean operators over boolean lists?

    - by bshanks
    For example, if you have n lists of bools of the same length, then elementwise boolean AND should return another list of that length that has True in those positions where all the input lists have True, and False everywhere else. It's pretty easy to write; I just would prefer to use a builtin if one exists (for the sake of standardization/readability). Here's an implementation of elementwise AND:

      def eAnd(*args):
          return [all(tuple) for tuple in zip(*args)]

    Example usage:

      >>> eAnd([True, False, True, False, True], [True, True, False, False, True], [True, True, False, False, True])
      [True, False, False, False, True]

    Thanks.
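
    Pure Python has no single dedicated builtin for this beyond combining zip/map with all or operator.and_ as above, but if NumPy is an option the elementwise operators come ready-made. A sketch:

      import numpy as np

      a = np.array([True, False, True, False, True])
      b = np.array([True, True, False, False, True])
      c = np.array([True, True, False, False, True])

      print(list(a & b & c))                          # elementwise AND of a few arrays
      print(list(np.logical_and.reduce([a, b, c])))   # same thing for arbitrarily many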

    Read the article

  • How do I find difference between times in different timezones in Python?

    - by JasonA
    Hi all, I am trying to calculate the difference (in seconds) between two date/times formatted as follows:

      2010-05-11 17:07:33 UTC
      2010-05-11 17:07:33 EDT

      time1 = '2010-05-11 17:07:33 UTC'
      time2 = '2010-05-11 17:07:33 EDT'
      delta = time.mktime(time.strptime(time1, "%Y-%m-%d %H:%M:%S %Z")) - \
              time.mktime(time.strptime(time2, "%Y-%m-%d %H:%M:%S %Z"))

    The problem I have is that EDT is not recognized; the specific error is "ValueError: time data '2010-05-11 17:07:33 EDT' does not match format '%Y-%m-%d %H:%M:%S %Z'". Thanks.
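
    %Z in strptime only recognises a small, platform-dependent set of names, which is why EDT fails. A sketch of one workaround, assuming the dateutil package is acceptable: give the parser an explicit mapping from abbreviations to timezones.

      from dateutil import parser, tz

      tzinfos = {'UTC': tz.tzutc(), 'EDT': tz.gettz('America/New_York')}

      time1 = parser.parse('2010-05-11 17:07:33 UTC', tzinfos=tzinfos)
      time2 = parser.parse('2010-05-11 17:07:33 EDT', tzinfos=tzinfos)

      delta_seconds = (time1 - time2).total_seconds()
      print(delta_seconds)   # -14400.0: the same wall-clock time in EDT is four hours later in UTC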

    Read the article

  • Python Logging across multiple classes and files; how to configure so as to be easily disabled?

    - by mellort
    Currently, I have something like this in all of my classes:

      # Import logging to log information
      import logging

      # Set up the logger
      LOG_FILENAME = 'log.txt'
      logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)

    This works well, and I get the output I want, but I would really like to have all of this configuration in one place, be able to just do something like import myLogger and then start logging, and then hopefully be able to just go into that file and turn off logging when I need an extra performance boost. Thanks in advance.
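
    A minimal sketch of the "import myLogger" idea (module and function names are only illustrative): keep the configuration in one module and hand out named loggers from it, so disabling logging becomes a one-line change in a single file.

      # mylogger.py -- the single place where logging is configured
      import logging

      LOG_FILENAME = 'log.txt'
      logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
      # logging.disable(logging.CRITICAL)   # uncomment to switch all logging off globally

      def get_logger(name):
          return logging.getLogger(name)

      # any other module or class then just does:
      #   from mylogger import get_logger
      #   log = get_logger(__name__)
      #   log.debug('something happened')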

    Read the article

  • Python: (sampling with replacement): efficient algorithm to extract the set of DISSIMILAR N-tuples from a set

    - by Homunculus Reticulli
    I have a set of items, from which I want to select DISSIMILAR tuples (more on the definition of dissimilar tuples later). The set could potentially contain several thousand items, although typically it would contain only a few hundred. I am trying to write a generic algorithm that will allow me to select N items to form an N-tuple, from the original set. The new set of selected N-tuples should be DISSIMILAR.

    An N-tuple A is said to be DISSIMILAR to another N-tuple B if and only if: every pair (2-tuple) that occurs in A DOES NOT appear in B.

    Note: for this algorithm, a 2-tuple (pair) is considered SIMILAR/IDENTICAL if it contains the same elements, i.e. (x,y) is considered the same as (y,x). This is a (possible variation on the) classic Urn Problem. A trivial (pseudocode) implementation of this algorithm would be something along the lines of:

      def fetch_unique_tuples(original_set, tuple_size):
          while True:
              # randomly select [tuple_size] items from the set to create first set
              # create a key or hash from the N elements and store in a set
              # store selected N-tuple in a container
              if end_condition_met:
                  break

    I don't think this is the most efficient way of doing this - and though I am no algorithm theorist, I suspect that the time for this algorithm to run is NOT O(n) - in fact, it's probably more likely to be O(n!). I am wondering if there is a more efficient way of implementing such an algorithm, and preferably of reducing the time to O(n). Actually, as Mark Byers pointed out, there is a second variable m, which is the size of the number of elements being selected. This (i.e. m) will typically be between 2 and 5.

    Regarding examples, here is a typical (albeit shortened) one:

      original_list = ['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG', 'CAAA', 'TGCC', 'ACTT', 'TAAT', 'CTTG', 'CGGC', 'GGCC', 'TCCT', 'ATCC', 'ACAG', 'TGAA', 'TTTG', 'ACAA', 'TGTC', 'TGGA', 'CTGC', 'GCTC', 'AGGA', 'TGCT', 'GCGC', 'GCGG', 'AAAG', 'GCTG', 'GCCG', 'ACCA', 'CTCC', 'CACG', 'CATA', 'GGGA', 'CGAG', 'CCCC', 'GGTG', 'AAGT', 'CCAC', 'AACA', 'AATA', 'CGAC', 'GGAA', 'TACC', 'AGTT', 'GTGG', 'CGCA', 'GGGG', 'GAGA', 'AGCC', 'ACCG', 'CCAT', 'AGAC', 'GGGT', 'CAGC', 'GATG', 'TTCG']

      # Selecting 3-tuples from the original list should produce a list (or set) similar to:
      [('CAGG', 'CTTC', 'ACCT')
       ('CAGG', 'TGCA', 'CCTG')
       ('CAGG', 'CAAA', 'TGCC')
       ('CAGG', 'ACTT', 'ACCT')
       ('CAGG', 'CTTG', 'CGGC')
       ....
       ('CTTC', 'TGCA', 'CAAA')
      ]

    [[Edit]] Actually, in constructing the example output, I realized that the earlier definition I gave for UNIQUENESS was incorrect. I have updated my definition and introduced a new metric of DISSIMILARITY instead, as a result of this finding.
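
    A sketch of one way to curb the repeated rejections: keep a set of already-used unordered pairs and accept a candidate N-tuple only if none of its pairs has been seen before. This is still a rejection-sampling loop, not a guaranteed-optimal or provably O(n) construction, and the try limit is arbitrary.

      import random
      from itertools import combinations

      def fetch_dissimilar_tuples(items, tuple_size, max_tuples, max_tries=100000):
          used_pairs = set()
          result = []
          tries = 0
          while len(result) < max_tuples and tries < max_tries:
              tries += 1
              candidate = random.sample(items, tuple_size)
              # frozenset makes (x, y) and (y, x) count as the same pair
              pairs = {frozenset(p) for p in combinations(candidate, 2)}
              if pairs & used_pairs:
                  continue               # candidate shares a pair with an earlier tuple
              used_pairs |= pairs
              result.append(tuple(candidate))
          return result

      print(fetch_dissimilar_tuples(['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG'], 3, 2))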

    Read the article

  • I'm doing a lot of lists and dictionary sorting...and this is causing memory errors in Python website

    - by alex
    I retrieved data from the log table in my database. Then I started finding unique users, comparing/sorting lists, etc. In the end I got down to this:

      stats = {'2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465},
               '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659},
               '2010-03-17': {'date': '2010-03-17', 'unique_users': 379, 'queries': 1845},
               '2010-03-16': {'date': '2010-03-16', 'unique_users': 434, 'queries': 2336},
               '2010-03-15': {'date': '2010-03-15', 'unique_users': 390, 'queries': 2138},
               '2010-03-14': {'date': '2010-03-14', 'unique_users': 460, 'queries': 2221},
               '2010-03-13': {'date': '2010-03-13', 'unique_users': 507, 'queries': 2242},
               '2010-03-12': {'date': '2010-03-12', 'unique_users': 629, 'queries': 3523},
               '2010-03-11': {'date': '2010-03-11', 'unique_users': 811, 'queries': 4274},
               '2010-03-10': {'date': '2010-03-10', 'unique_users': 171, 'queries': 1297},
               '2010-03-26': {'date': '2010-03-26', 'unique_users': 299, 'queries': 1617},
               '2010-03-27': {'date': '2010-03-27', 'unique_users': 323, 'queries': 1310},
               '2010-03-24': {'date': '2010-03-24', 'unique_users': 352, 'queries': 2112},
               '2010-03-25': {'date': '2010-03-25', 'unique_users': 330, 'queries': 1290},
               '2010-03-22': {'date': '2010-03-22', 'unique_users': 329, 'queries': 1798},
               '2010-03-23': {'date': '2010-03-23', 'unique_users': 329, 'queries': 1857},
               '2010-03-20': {'date': '2010-03-20', 'unique_users': 368, 'queries': 1693},
               '2010-03-21': {'date': '2010-03-21', 'unique_users': 329, 'queries': 1511},
               '2010-03-29': {'date': '2010-03-29', 'unique_users': 325, 'queries': 1718},
               '2010-03-28': {'date': '2010-03-28', 'unique_users': 340, 'queries': 1815},
               '2010-03-30': {'date': '2010-03-30', 'unique_users': 329, 'queries': 1891}}

    It's not a big dictionary. But when I try to do one last thing...it craps out on me:

      for k, v in stats:
          mylist.append(v)

      too many values to unpack

    What the heck does that mean??? TOO MANY VALUES TO UNPACK.
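
    The error comes from iterating the dict directly, which yields only the keys (10-character date strings), so unpacking one of them into k and v fails. A sketch of the fix, using a two-entry stand-in for the stats dict above:

      stats = {
          '2010-03-19': {'date': '2010-03-19', 'unique_users': 312, 'queries': 1465},
          '2010-03-18': {'date': '2010-03-18', 'unique_users': 329, 'queries': 1659},
      }

      mylist = []
      for k, v in stats.items():   # .items() yields (key, value) pairs
          mylist.append(v)

      print(mylist)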

    Read the article

  • How do I create a list of timedeltas in python?

    - by eunhealee
    I've been searching through this website and have seen multiple references to time deltas, but haven't quite found what I'm looking for. Basically, I have a list of messages that are received by a comms server, and I want to calculate the latency between each message out and in. It looks like this:

      161336.934072 - TMsg out: [O] enter order. RefID [123] OrdID [4568]
      161336.934159 - TMsg in: [A] accepted. ordID [456] RefNumber [123]

    Mixed in with these are other messages as well; however, I only want to capture the difference between the out messages and in messages with the same RefID. So far, to sort out from the main log which messages are T messages, I've been doing this, but it's really inefficient - I don't need to be making new files every time:

      big_file = open('C:/Users/kdalton/Documents/Minicomm.txt', 'r')
      small_file1 = open('small_file1.txt', 'w')
      for line in big_file:
          if 'T' in line:
              small_file1.write(line)
      big_file.close()
      small_file1.close()

    How do I calculate the time deltas between the two messages and sort out these messages from the main log?
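
    A sketch of one way to pair the out/in messages by RefID in memory, without writing intermediate files. The regular expressions assume the two line formats quoted above, and the leading number is treated as a plain float timestamp, as in the example.

      import re

      out_re = re.compile(r'^(?P<ts>\d+\.\d+) - TMsg out: .*RefID \[(?P<ref>\d+)\]')
      in_re = re.compile(r'^(?P<ts>\d+\.\d+) - TMsg in: .*RefNumber \[(?P<ref>\d+)\]')

      pending = {}      # RefID -> timestamp of the outgoing message
      latencies = []    # (RefID, difference between out and in timestamps)

      with open('C:/Users/kdalton/Documents/Minicomm.txt') as log:
          for line in log:
              m = out_re.match(line)
              if m:
                  pending[m.group('ref')] = float(m.group('ts'))
                  continue
              m = in_re.match(line)
              if m and m.group('ref') in pending:
                  latencies.append((m.group('ref'),
                                    float(m.group('ts')) - pending.pop(m.group('ref'))))

      print(latencies)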

    Read the article

  • How to make if-elif-else statement in python more space-saving?

    - by Neverland
    I have a lot of if-elif-else statements in my code:

      if message == '0' or message == '3' or message == '5' or message == '7':
          ...
      elif message == '1' or message == '2' or message == '4' or message == '6' or message == '8':
          ...
      else:
          ...

    Is it possible to format this in a more space-saving way? I tried it this way:

      if message == '0' or '3' or '5' or '7':
          ...
      elif message == '1' or '2' or '4' or '6' or '8':
          ...
      else:
          ...

    But without success.
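
    The second attempt always takes the first branch because or is applied to the bare strings, which are truthy, rather than to further comparisons. The usual compact form is a membership test; a small sketch:

      message = '4'   # example value

      if message in ('0', '3', '5', '7'):
          print('first group')
      elif message in ('1', '2', '4', '6', '8'):
          print('second group')
      else:
          print('something else')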

    Read the article

  • What would happen if the same file is being read and appended to at the same time (Python programming)?

    - by Shane
    I'm writing a script using two separate threads, one doing a file reading operation and the other appending; both threads run fairly frequently. My question is: if one thread happens to read the file while the other is just in the middle of appending a string such as "This is a test" to this file, what would happen?

    I know that if you are appending a smaller-than-buffer string, no matter how frequently you read the file in the other thread, there would never be an incomplete line such as "This i" appearing in the file you read. I mean the OS would either do:

      append "This is a test" - read info from the file

    or:

      read info from the file - append "This is a test" to the file

    and this would never happen:

      append "This i" - read info from the file - append "s a test"

    But if "This is a test" is big enough (assuming it's a bigger-than-buffer string), the OS can't do the append in one operation, so the appending job would be divided into two: first append "This i" to the file, then append "s a test". In that kind of situation, if I happen to read the file in the middle of the whole appending operation, would I get this result: append "This i" - read info from the file - append "s a test", which means I might read a file that includes an incomplete string?
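
    Since both threads live in the same script, one way to sidestep the buffering question entirely (a sketch, not a statement about what the OS guarantees) is to guard the file with a threading.Lock so a read can never land in the middle of an append:

      import threading

      file_lock = threading.Lock()
      LOG_PATH = 'shared.txt'   # hypothetical shared file

      def append_line(text):
          with file_lock:
              with open(LOG_PATH, 'a') as f:
                  f.write(text + '\n')

      def read_all():
          with file_lock:
              with open(LOG_PATH) as f:
                  return f.read()

      append_line('This is a test')
      print(read_all())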

    Read the article

  • Why are these strings escaping from my regular expression in python?

    - by dohkoxar
    In my code, I load an entire folder into a list and then try to get rid of every file in the list except the .mp3 files:

      import os
      import re

      path = '/home/user/mp3/'
      dirList = os.listdir(path)
      dirList.sort()
      i = 0
      for names in dirList:
          match = re.search(r'\.mp3', names)
          if match:
              i = i + 1
          else:
              dirList.remove(names)
      print dirList
      print i

    After I run the file, the code does get rid of some files in the list but keeps these two specifically:

      ['00. Various Artists - Indie Rock Playlist October 2008.m3u', '00. Various Artists - Indie Rock Playlist October 2008.pls']

    I can't understand what's going on - why are those two specifically escaping my search?
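
    The two leftovers survive because the list is modified while it is being iterated, which makes the loop skip the element after each removal. A sketch of the usual fix is to build a new list instead, using the path from the question (str.endswith also avoids matching ".mp3" elsewhere in a name):

      import os

      path = '/home/user/mp3/'
      dirList = sorted(os.listdir(path))

      mp3_files = [name for name in dirList if name.endswith('.mp3')]

      print(mp3_files)
      print(len(mp3_files))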

    Read the article
