Search Results

Search found 70655 results on 2827 pages for 'python time'.

  • Python: Hack to call a method on an object that isn't of its class

    - by cool-RR
    Assume you define a class which has a method that does some complicated processing:

        class A(object):
            def my_method(self):
                # Some complicated processing is done here
                return self

    Now you want to use that method on some object from another class entirely. Like, you want to do A.my_method(7). This is what you'd get:

        TypeError: unbound method my_method() must be called with A instance as first argument (got int instance instead)

    Now, is there any way to hack things so you could call that method on 7? I'd want to avoid moving the function or rewriting it. (Note that the method's logic does depend on self.) One note: I know that some people will want to say, "You're doing it wrong! You're abusing Python! You shouldn't do it!" So yes, I know, this is a terrible, terrible thing I want to do. I'm asking if someone knows how to do it, not how to preach to me that I shouldn't do it.
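
    Not part of the original question, but a sketch of the usual trick: in Python 2.6+ the unbound method exposes the underlying plain function as __func__ (im_func on older versions), and calling that function directly skips the instance type check. In Python 3 unbound methods are gone entirely, so A.my_method(7) just works there.

        class A(object):
            def my_method(self):
                # stand-in for the complicated processing
                return self

        # Reach past the unbound method to the raw function and pass any
        # object as 'self' -- no isinstance check is applied (Python 2.6+).
        result = A.my_method.__func__(7)
        print result   # -> 7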

  • Python - Using "Google AJAX Search" API's Local Search Objects

    - by user330739
    Hi! I've just started using Google's search API to find addresses and the distances between those addresses. I used geopy for this, but I often failed to get the correct addresses for my queries, so I decided to experiment with Google's "Local Search" (http://code.google.com/apis/ajaxsearch/local.html). Anyway, I wanted to ask if I could use the "Local Search" objects provided by the API from within Python. Something tells me that I can't, and that I have to use JSON. Does anyone know if there is a workaround? PS: I'm trying to make something like this: http://www.google.com/uds/samples/random/lead.html ... except a matrix-type deal where the cells are filled with the distances between the addresses. Thanks for reading!
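
    Not from the original post, but for context: the JavaScript Local Search objects can't be driven from Python directly; the same service was also exposed as a plain REST endpoint returning JSON, which Python can consume. A minimal sketch, assuming the old AJAX Search REST interface (the result field names here are from memory, so treat them as approximate):

        import json
        import urllib
        import urllib2

        params = urllib.urlencode({'v': '1.0', 'q': 'pizza near palo alto'})
        url = 'http://ajax.googleapis.com/ajax/services/search/local?' + params
        response = json.load(urllib2.urlopen(url))
        for result in response['responseData']['results']:
            # .get() because the exact keys vary by result type
            print result.get('titleNoFormatting'), result.get('addressLines')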

  • Group MySQL Data into Arbitrarily Sized Time Buckets

    - by Eric J.
    How do I count the number of records in a MySQL table, based on a timestamp column, per unit of time, where the unit of time is arbitrary? Specifically, I want to count how many records' timestamps fell into each 15-minute bucket during a given interval. I understand how to do this in buckets of 1 second, 1 minute, 1 hour, 1 day, etc. using MySQL date functions, e.g.

        SELECT YEAR(datefield) Y, MONTH(datefield) M, DAY(datefield) D, COUNT(*) Cnt
        FROM mytable
        GROUP BY YEAR(datefield), MONTH(datefield), DAY(datefield)

    but how can I group by 15-minute buckets?
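
    A sketch of one standard approach (not from the original post): convert the timestamp to epoch seconds and integer-divide by the bucket width, so every row in the same 900-second window gets the same group key:

        SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(datefield) / 900) * 900) AS bucket_start,
               COUNT(*) AS Cnt
        FROM mytable
        GROUP BY FLOOR(UNIX_TIMESTAMP(datefield) / 900)

    Swapping 900 for any other number of seconds gives arbitrarily sized buckets.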

  • Loading a DB table into nested dictionaries in Python

    - by Hossein
    Hi, I have a table in a MySQL DB which I want to load into a dictionary in Python. The table's columns are: id, url, tag, tagCount. tagCount is the number of times that a tag has been repeated for a certain url. So I need a nested dictionary (in other words, a dictionary of dictionaries) to load this table, because each url has several tags, each with a different tagCount. This is the code that I used (the whole table is about 22,000 records):

        from collections import defaultdict  # import needed for defaultdict below

        cursor.execute('''SELECT url, tag, tagCount FROM wtp''')
        urlTagCount = cursor.fetchall()
        d = defaultdict(defaultdict)
        for url, tag, tagCount in urlTagCount:
            d[url][tag] = tagCount
        print d

    First of all I want to know if this is correct, and if it is, why does it take so much time? Are there any faster solutions? I am loading this table into memory for fast access, to get rid of the hassle of slow database operations, but at this speed it has become a bottleneck itself; it is even much slower than DB access. Can anyone help? Thanks
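
    Not from the original post, but a quick way to see where the time actually goes is to clock each stage separately; building a dict from 22,000 rows should be nearly instant, which would point the finger at the fetch or at printing the whole structure:

        import time
        from collections import defaultdict

        t0 = time.time()
        cursor.execute('SELECT url, tag, tagCount FROM wtp')  # cursor from your existing connection
        rows = cursor.fetchall()
        t1 = time.time()

        d = defaultdict(dict)
        for url, tag, tagCount in rows:
            d[url][tag] = tagCount
        t2 = time.time()

        print 'fetch: %.2fs  build: %.2fs' % (t1 - t0, t2 - t1)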

  • how to send data to server using python

    - by Apache
    Hi experts, how can data be sent to the server? For example, I retrieve a MAC address, and I want to send it to the server (i.e. 211.21.24.43:8080/data?mac=00-0C-F1-56-98-AD). I found this snippet on the internet:

        from urllib2 import Request, urlopen
        from binascii import b2a_base64

        def b64open(url, postdata):
            req = Request(url, b2a_base64(postdata),
                          headers={'Content-Transfer-Encoding': 'base64'})
            return urlopen(req)

        conn = b64open("http://211.21.24.43:8080/data", "mac=00-0C-F1-56-98-AD")

    but when I run it:

        File "send2.py", line 8
        SyntaxError: Non-ASCII character '\xc3' in file send2.py on line 8, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Can anyone help me send data to the server? Thanks in advance
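
    Not from the original post: the SyntaxError itself just means a non-ASCII byte (often a smart quote picked up while copy-pasting) ended up in the source file, so retyping the quotes usually clears it. As for the sending, the URL shown suggests the server reads an ordinary GET query string, which needs none of the base64 machinery. A minimal sketch, assuming the server just reads the mac parameter:

        import urllib
        import urllib2

        mac = '00-0C-F1-56-98-AD'
        url = 'http://211.21.24.43:8080/data?' + urllib.urlencode({'mac': mac})
        response = urllib2.urlopen(url)
        print response.read()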

  • How to change font size using the Python ImageDraw Library

    - by Eldila
    I am trying to change the font size using Python's ImageDraw library. You can do something like this:

        import Image, ImageDraw, ImageFont  # PIL

        fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf"
        sans16 = ImageFont.truetype(fontPath, 16)
        im = Image.new("RGB", (200, 50), "#ddd")
        draw = ImageDraw.Draw(im)
        draw.text((10, 10), "Run awayyyy!", font=sans16, fill="red")

    The problem is that I don't want to specify a font. I want to use the default font and just change its size. This seems like it should be simple, but I can't find documentation on how to do it.
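
    Relevant background, not from the original post: PIL's default font is a fixed-size bitmap font, so it cannot be scaled; changing size means loading a scalable TrueType font at the size you want. A sketch contrasting the two (the .ttf name is illustrative):

        import ImageFont  # PIL

        default_font = ImageFont.load_default()            # bitmap font, one fixed size
        sans24 = ImageFont.truetype("DejaVuSans.ttf", 24)  # scalable; pick any size here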

  • Python 2.6, 3 abstract base class misunderstanding

    - by Aaron
    I'm not seeing what I expect when I use ABCMeta and abstractmethod. This works fine in Python 3:

        from abc import ABCMeta, abstractmethod

        class Super(metaclass=ABCMeta):
            @abstractmethod
            def method(self):
                pass

        a = Super()
        # TypeError: Can't instantiate abstract class Super ...

    And in 2.6:

        class Super():
            __metaclass__ = ABCMeta

            @abstractmethod
            def method(self):
                pass

        a = Super()
        # TypeError: Can't instantiate abstract class Super ...

    They both also work fine (I get the expected exception) if I derive Super from object in addition to ABCMeta. They both "fail" (no exception raised) if I derive Super from list. I want an abstract base class that is a list but abstract, and concrete in subclasses. Am I doing it wrong, or should I not want this in Python?
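
    For reference, a minimal sketch (Python 2.6 syntax) reproducing the list case described above:

        from abc import ABCMeta, abstractmethod

        class AbstractList(list):
            __metaclass__ = ABCMeta

            @abstractmethod
            def method(self):
                pass

        a = AbstractList()  # silently succeeds: no TypeError despite the abstract method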

  • itertools.product eliminating repeated reversed tuples

    - by genclik27
    I asked a question yesterday and, thanks to Tim Peters, it is solved. The question is here: itertools.product eliminating repeated elements. The new question is a further version of this. This time I will generate tuples inside of tuples. Here is an example:

        lis = [[(1,2), (3,4)], [(5,2), (1,2)], [(2,1), (1,2)]]

    When I use it in the itertools.product function, this is what I get:

        ((1, 2), (5, 2), (2, 1))
        ((1, 2), (5, 2), (1, 2))
        ((1, 2), (1, 2), (2, 1))
        ((1, 2), (1, 2), (1, 2))
        ((3, 4), (5, 2), (2, 1))
        ((3, 4), (5, 2), (1, 2))
        ((3, 4), (1, 2), (2, 1))
        ((3, 4), (1, 2), (1, 2))

    I want to change it in a way that if a sequence has (a,b) inside of it, then it cannot have (b,a). In this example, the sequence ((3, 4), (1, 2), (2, 1)) has both (1,2) and (2,1) inside of it, so it should not be considered in the results. As I said, I asked a similar question before; in that case it was not considering duplicate elements. I tried to adapt that code to my problem. Here is the modified version (parts changed from the old version are left in comments):

        def reverse_seq(seq):
            s = []
            for i in range(len(seq)):
                s.append(seq[-i-1])
            return tuple(s)

        def uprod(*seqs):
            def inner(i):
                if i == n:
                    yield tuple(result)
                    return
                for elt in sets[i] - reverse:
                    #seen.add(elt)
                    rvrs = reverse_seq(elt)
                    reverse.add(rvrs)
                    result[i] = elt
                    for t in inner(i+1):
                        yield t
                    #seen.remove(elt)
                    reverse.remove(rvrs)
            sets = [set(seq) for seq in seqs]
            n = len(sets)
            #seen = set()
            reverse = set()
            result = [None] * n
            for t in inner(0):
                yield t

    In my opinion this code should work, but I am getting an error for the input lis = [[(1,2), (3,4)], [(5,2), (1,2)], [(2,1), (1,2)]]. I could not understand where I am wrong.

        for i in uprod(*lis):
            print i

    The output is:

        ((1, 2), (1, 2), (1, 2))
        Traceback (most recent call last):
          File "D:\Users\SUUSER\workspace tree\sequence_covering _array\denemeler_buraya.py", line 39, in <module>
            for i in uprod(*lis):
          File "D:\Users\SUUSER\workspace tree\sequence_covering _array\denemeler_buraya.py", line 32, in uprod
            for t in inner(0):
          File "D:\Users\SUUSER\workspace tree\sequence_covering _array\denemeler_buraya.py", line 22, in inner
            for t in inner(i+1):
          File "D:\Users\SUUSER\workspace tree\sequence_covering _array\denemeler_buraya.py", line 25, in inner
            reverse.remove(rvrs)
        KeyError: (2, 1)

    Thanks,
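
    Not from the original post, but one plausible reading of that traceback: when the same pair, e.g. (1,2), is chosen at more than one level, its reversal (2,1) is added to the set several times, yet a set stores it only once, so the first backtracking level removes it and a later remove raises KeyError. A sketch of a guard that only removes what the current level actually added (and, as an aside, reverse_seq(seq) is just seq[::-1]):

        # inside inner(i), replacing the add/remove around the recursion
        rvrs = elt[::-1]
        added = rvrs not in reverse
        if added:
            reverse.add(rvrs)
        result[i] = elt
        for t in inner(i + 1):
            yield t
        if added:
            reverse.remove(rvrs)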

  • Random List of millions of elements in Python Efficiently

    - by eWizardII
    Hello, I have read that this answer is potentially the best way to randomize a list of strings in Python. I'm just wondering whether it's also the most efficient way to do it, because I have a list of about 30 million elements, built via the following code:

        import json
        from sets import Set
        from random import shuffle

        a = []
        for i in range(0, 193):
            json_data = open("C:/Twitter/user/user_" + str(i) + ".json")
            data = json.load(json_data)
            for j in range(0, len(data)):
                a.append(data[j]['su'])

        new = list(Set(a))
        print "Cleaned length is: " + str(len(new))

        ## Take Cleaned List and Randomize it for Analysis
        shuffle(new)

    If there is a more efficient way to do it, I'd greatly appreciate any advice on how to do it. Thanks,
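
    Not from the original post: random.shuffle() is already an O(n) Fisher-Yates shuffle, so the shuffle itself is hard to beat; the load and dedup stages are the more likely targets. A sketch using the builtin set (faster than the long-deprecated sets.Set) and a generator, for Python 2.6+:

        import json
        from random import shuffle

        seen = set()
        for i in range(193):
            with open("C:/Twitter/user/user_%d.json" % i) as f:
                seen.update(item['su'] for item in json.load(f))

        cleaned = list(seen)
        print "Cleaned length is: %d" % len(cleaned)
        shuffle(cleaned)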

  • python: multiline regular expression

    - by facha
    Hi, everyone. I have a piece of text and I've got to parse usernames and hashes out of it. Right now I'm doing it with two regular expressions. Could I do it with just one multiline regular expression?

        #!/usr/bin/env python
        import re

        test_str = """
        Hello, UserName. Please read this looooooooooooooooong text.
        hash
        Now, write down this hash: fdaf9399jef9qw0j. Then keep reading this loooooooooong text.
        Hello, UserName2. Please read this looooooooooooooooong text.
        hash
        Now, write down this hash: gtwnhton340gjr2g. Then keep reading this loooooooooong text.
        """

        logins = re.findall(r'Hello, (?P<login>.+?)\.', test_str)
        hashes = re.findall(r'hash: (?P<hash>.+?)\.', test_str)
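
    A sketch of one way to fold them into a single pattern (not from the original post): with re.DOTALL the pattern can span the intervening text, and each match yields a (login, hash) pair, assuming both are word characters:

        pairs = re.findall(
            r'Hello, (?P<login>\w+)\..*?hash: (?P<hash>\w+)\.',
            test_str,
            re.DOTALL,
        )
        # -> [('UserName', 'fdaf9399jef9qw0j'), ('UserName2', 'gtwnhton340gjr2g')]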

  • python: strange behavior about exec statement

    - by ifocus
    The exec statement:

        exec code [in globals[, locals]]

    When I executed the following code in Python, the result really confused me: some of the variables were set up in the globals, some in the locals.

        s = """
        # test var define
        int_v1 = 1
        list_v1 = [1, 2, 3]
        dict_v1 = {1: 'hello', 2: 'world', 3: '!'}

        # test built-in function
        list_v2 = [float(x) for x in list_v1]
        len_list_v1 = len(list_v1)

        # test function define
        def func():
            global g_var, list_v1, dict_v1
            print 'access var in globals:'
            print g_var
            print 'access var in locals:'
            for x in list_v1:
                print dict_v1[x]
        """

        g = {'__builtins__': __builtins__, 'g_var': 'global'}
        l = {}
        exec s in g, l
        print 'globals:', g
        print 'locals:', l
        exec 'func()' in g, l

    The result in Python 2.6.5:

        globals: {'__builtins__': <module '__builtin__' (built-in)>, 'dict_v1': {1: 'hello', 2: 'world', 3: '!'}, 'g_var': 'global', 'list_v1': [1, 2, 3]}
        locals: {'int_v1': 1, 'func': <function func at 0x00ACA270>, 'x': 3, 'len_list_v1': 3, 'list_v2': [1.0, 2.0, 3.0]}
        access var in globals:
        global
        access var in locals:
        hello
        world
        !

    If I want to set up all variables and functions in the locals, while keeping the right to access the globals, how do I do that?
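
    Not from the original post, but one common workaround is to pass exec a single dictionary that serves as both namespaces; then everything the code defines lands in that one dict and stays reachable from functions defined inside it. A sketch:

        env = {'__builtins__': __builtins__, 'g_var': 'global'}
        exec s in env          # env is used as globals *and* locals
        exec 'func()' in env   # func, list_v1, dict_v1 ... are all in env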

  • Problem with literal arguments in the PATTERN string for a python 2to3 fixer

    - by Zxaos
    Hi folks. I'm writing a fixer for the 2to3 tool in Python. In my pattern string, I have a section where I'd like to match an empty string as an argument, or an empty unicode string. The relevant chunk of my pattern looks like:

        (args='""' | args='u""')

    My issue is that the second alternative never matches, even when it is alone. However, if I simply say args=any and then output args, I can catch cases where args is exactly equal to the second alternative. Is there some weird unicode handling going on? Why won't the second literal alternative ever match?

  • Auto enter pass phrase in case of Python ssl Client/Server

    - by rauch
    I need to create a client/server application to send files from clients to a server. I use plain ssl sockets for that and authenticate with certificates.

        ms = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        ssl_sock = ssl.wrap_socket(ms,
                                   keyfile=".../newCA/my_client.key",
                                   certfile=".../newCA/my_client.crt",
                                   server_side=0,
                                   cert_reqs=ssl.CERT_REQUIRED,
                                   ca_certs=".../newCA/CA/my-ca.crt")
        ssl_sock.connect((HOST, MPORT))

    And the server side:

        msock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.ssl_sock = ssl.wrap_socket(msock,
                                        keyfile=".../newCA/my_server.key",
                                        certfile=".../newCA/my_server.crt",
                                        server_side=1,
                                        cert_reqs=ssl.CERT_REQUIRED,
                                        ca_certs=".../newCA/CA/my-ca.crt")
        self.ssl_sock.bind(('', self.PORT))
        self.ssl_sock.listen(self.QUEUE_MAX)

    The problem is the following: when a client tries to connect to the server, Python prompts for the pass phrase of the private key on both sides, server and client. In Java you would set the system property javax.net.ssl.keyStorePassword="" and it would be used automatically; what is the equivalent in Python? I can't type the pass phrase in every time a client connects.
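
    Not from the original post: Python 2's ssl.wrap_socket() has no parameter for supplying a key password (a password argument only arrived later, with SSLContext.load_cert_chain() in Python 3.3), so the usual workaround is to store a decrypted copy of the key. A sketch:

        # One-time shell step to strip the pass phrase from the key:
        #   openssl rsa -in my_client.key -out my_client_nopass.key
        ssl_sock = ssl.wrap_socket(ms,
                                   keyfile=".../newCA/my_client_nopass.key",
                                   certfile=".../newCA/my_client.crt",
                                   server_side=0,
                                   cert_reqs=ssl.CERT_REQUIRED,
                                   ca_certs=".../newCA/CA/my-ca.crt")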

  • Store time of the day in SQL

    - by nute
    How would you store a time or time range in SQL? It won't be a datetime, because it will just be, let's say, 4:30 PM (not January 3rd, 4:30 PM). These would be weekly or daily meetings. The queries I need are of course for display, but will later include complex queries such as avoiding conflicts in schedule, so I'd rather pick the best datatype for that now. I'm using MS SQL Server Express 2005. Thanks! Nathan
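
    Relevant background, not from the original post: SQL Server 2005 has no TIME datatype (it was introduced in SQL Server 2008), so a common workaround is to store minutes since midnight in an integer column, which also makes conflict checks simple integer comparisons. A sketch:

        -- 4:30 PM is stored as 16*60 + 30 = 990
        CREATE TABLE Meeting (
            MeetingId    INT IDENTITY PRIMARY KEY,
            StartMinutes SMALLINT NOT NULL,
            EndMinutes   SMALLINT NOT NULL
        );

        -- two meetings conflict when their [start, end) ranges overlap:
        --   a.StartMinutes < b.EndMinutes AND b.StartMinutes < a.EndMinutes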

  • Creating a unique key based on file content in python

    - by Cawas
    I've got many, many files to be uploaded to the server, and I just want a way to avoid duplicates. Generating a unique, small key from a big string seemed like exactly what a checksum was intended to do, and hashing seemed like the evolution of that. So I was going to use MD5 for this. But then I read somewhere that "MD5s are not meant to be unique keys" and I thought that was really weird. What's the right way of doing this? Edit: by the way, I took two sources to get to the following, which is how I'm currently doing it, and it's working just fine, with Python 2.5:

        import hashlib

        def md5_from_file(fileName, block_size=2**14):
            md5 = hashlib.md5()
            f = open(fileName, 'rb')  # binary mode, so the bytes hashed match the file exactly
            while True:
                data = f.read(block_size)
                if not data:
                    break
                md5.update(data)
            f.close()
            return md5.hexdigest()
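
    Not from the original post: for non-adversarial dedup an accidental MD5 collision is astronomically unlikely, but if crafted duplicates are a concern, the same streaming pattern works with SHA-256. A sketch (Python 2.6+, or add "from __future__ import with_statement" on 2.5):

        import hashlib

        def sha256_from_file(fileName, block_size=2**14):
            h = hashlib.sha256()
            with open(fileName, 'rb') as f:
                # iter() calls f.read(block_size) until it returns the sentinel ''
                for chunk in iter(lambda: f.read(block_size), ''):
                    h.update(chunk)
            return h.hexdigest()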

  • Generator speed in python 3

    - by Will
    Hello all, I am going through a link about generators that someone posted. In the beginning, the author compares the two functions below. On his setup he showed a speed increase of 5% with the generator. I'm running Windows XP and Python 3.1.1 and cannot seem to duplicate the results: I keep seeing the "old way" (logs1) as slightly faster when tested with the provided logs and up to 1 GB of duplicated data. Can someone help me understand what's happening differently? Thanks!

        def logs1():
            wwwlog = open("big-access-log")
            total = 0
            for line in wwwlog:
                bytestr = line.rsplit(None, 1)[1]
                if bytestr != '-':
                    total += int(bytestr)
            return total

        def logs2():
            wwwlog = open("big-access-log")
            bytecolumn = (line.rsplit(None, 1)[1] for line in wwwlog)
            getbytes = (int(x) for x in bytecolumn if x != '-')
            return sum(getbytes)
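
    Not from the original post, but when comparing the two it helps to average several runs of each so that OS file caching affects both versions equally; a sketch using timeit (Python 3 syntax, matching the 3.1 setup above):

        import timeit

        for fn in ('logs1', 'logs2'):
            t = timeit.timeit('%s()' % fn,
                              setup='from __main__ import %s' % fn,
                              number=3)
            print(fn, t / 3)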

  • Python grab class in class definition.

    - by epochwolf
    I don't even know how to explain this, so here is the code I'm trying:

        class Test:
            type = self.__name__  # self doesn't work; how do I get a reference to Test?

        class Test2(Test):
            pass  # Test2.type should return "Test2"

    The reason I'm even trying this is that I'm working on creating a base class for an ORM I'm using: I want to avoid defining the table name for every model I have. Also, knowing what the limits of Python are will help me avoid wasting time trying impossible things.
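
    Not from the original post, but the usual trick is to defer the lookup until a class object exists, e.g. with a classmethod, since cls is bound to the actual subclass at call time. A sketch:

        class Model(object):
            @classmethod
            def table_name(cls):
                # cls is the subclass itself when called as User.table_name()
                return cls.__name__.lower()

        class User(Model):
            pass

        print User.table_name()   # -> 'user'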

  • Python having problems writing/reading and testing in a correct format

    - by Ionut
    I'm trying to make a program that will do the following:

        check if auth_file exists
        if yes - read the file and try to log in using the data from it - if the data is wrong, request new data
        if no - request the data, then create the file and fill it with the requested data

    So far:

        import json
        import getpass
        import os
        import requests

        filename = ".auth_data"
        auth_file = os.path.realpath(filename)
        url = 'http://example.com/api'
        headers = {'content-type': 'application/json'}

        def load_auth_file():
            try:
                f = open(auth_file, "r")
                auth_data = f.read()
                r = requests.get(url, auth=auth_data, headers=headers)
                if r.reason == 'OK':
                    return auth_data
                else:
                    print "Incorrect login..."
                    req_auth()
            except IOError:
                f = file(auth_file, "w")
                f.write(req_auth())
                f.close()

        def req_auth():
            user = str(raw_input('Username: '))
            password = getpass.getpass('Password: ')
            auth_data = (user, password)
            r = requests.get(url, auth=auth_data, headers=headers)
            if r.reason == 'OK':
                return user, password
            elif r.reason == "FORBIDDEN":
                print "Incorrect login information..."
                req_auth()
                return False

    My problem (as far as understanding and applying the correct way goes): I can't find a correct way of storing the data returned from req_auth() in auth_file, in a format that can be read back and used by load_auth_file(). PS: Of course I'm a beginner in Python and I'm sure I have missed some key elements here :(
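
    Not from the original post, but since json is already imported: storing the pair as JSON gives it a format that reads back as the same structure requests expects for basic auth. A sketch reusing auth_file from above:

        import json

        def save_auth(user, password):
            with open(auth_file, 'w') as f:
                json.dump({'user': user, 'password': password}, f)

        def load_auth():
            with open(auth_file) as f:
                d = json.load(f)
            return (d['user'], d['password'])   # the (user, password) tuple for auth=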

  • Having my Python package install shortcuts in Start menu

    - by cool-RR
    I'm making a Python package that gets installed with a setup.py file using setuptools. The package includes a GUI, and when it's installed on a Windows machine, I want the installation to make a folder in "Programs" in the Start menu, with a shortcut there to a .pyw script that starts the GUI. (The .pyw thing works on all platforms, right?) On Mac and Linux, I would like it to put this shortcut in whatever Mac and Linux have that is parallel to the Start menu. How do I do this?
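
    Not from the original post: setuptools itself has no cross-platform shortcut support. On Windows, one traditional route is a bdist_wininst post-install script, where the installer runtime injects helpers such as create_shortcut() and get_special_folder_path(). A sketch (app and script names are hypothetical):

        import os
        import sys

        if sys.argv[1] == '-install':
            programs = get_special_folder_path('CSIDL_PROGRAMS')  # injected by the installer
            folder = os.path.join(programs, 'MyApp')
            os.mkdir(folder)
            directory_created(folder)   # registers the folder for uninstall
            lnk = os.path.join(folder, 'MyApp.lnk')
            create_shortcut(os.path.join(sys.prefix, 'pythonw.exe'),
                            'Launch MyApp', lnk,
                            '"%s"' % os.path.join(sys.prefix, 'Scripts', 'myapp.pyw'))
            file_created(lnk)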

  • Python Profiling In Windows, How do you ignore Builtin Functions

    - by Tim McJilton
    I have not been able to find this anywhere online. I am using a profiler to figure out how to better optimize my code, but when sorting by which functions use the most cumulative time, things like str(), print, and other widely used built-ins eat up much of the profile. What is the best way to profile a Python program so that only user-defined functions show up, to see what areas of my code I can optimize? I hope that makes sense; any light you can shed on this subject would be very appreciated.
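
    Not from the original post, but one stdlib route: save the profile, then filter the report with pstats, whose print_stats() accepts restrictions such as a regex; restricting on your own file name drops built-ins and library internals from the listing. A sketch, assuming your entry point is main() in myscript.py:

        import cProfile
        import pstats

        cProfile.run('main()', 'out.prof')

        p = pstats.Stats('out.prof')
        p.sort_stats('cumulative').print_stats('myscript.py')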

  • Draw and move a point over an image in python

    - by frx08
    Hi all, I have to write a little script in Python. In this script I have a variable (representing a coordinate) that is continuously updated to a new value. I have to draw a red point over an image and update the point's position every time the coordinate variable is updated. I tried to explain what I need by writing something like this, but obviously it doesn't work:

        import Tkinter, Image, ImageDraw, ImageTk

        i = 0
        root = Tkinter.Tk()
        im = Image.open("img.jpg")
        root.geometry("%dx%d" % (im.size[0], im.size[1]))
        while True:
            draw = ImageDraw.Draw(im)
            draw.ellipse((i, 0, 10, 10), fill=(255, 0, 0))
            pi = ImageTk.PhotoImage(im)
            label = Tkinter.Label(root, image=pi)
            label.place(x=0, y=0, width=im.size[0], height=im.size[1])
            i += 1
            del draw

    Can someone help me, please? Thanks very much!
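
    Not from the original post, but the usual Tkinter pattern avoids both the blocking while loop and redrawing the image: put the image and an oval on a Canvas once, then move the oval with after() callbacks. A sketch (the 1-pixel step stands in for however the real coordinate updates):

        import Tkinter
        import Image, ImageTk  # PIL

        root = Tkinter.Tk()
        im = Image.open("img.jpg")
        canvas = Tkinter.Canvas(root, width=im.size[0], height=im.size[1])
        canvas.pack()
        photo = ImageTk.PhotoImage(im)
        canvas.create_image(0, 0, image=photo, anchor='nw')
        point = canvas.create_oval(0, 0, 10, 10, fill='red', outline='red')

        def move_point():
            canvas.move(point, 1, 0)    # replace with the real coordinate update
            root.after(50, move_point)  # reschedule; mainloop stays responsive

        move_point()
        root.mainloop()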

  • Sending mail via sendmail from python

    - by Nate
    If I want to send mail not via SMTP, but rather via sendmail, is there a library for Python that encapsulates this process? Better yet, is there a good library that abstracts the whole 'sendmail-versus-SMTP' choice? I'll be running this script on a bunch of Unix hosts, only some of which are listening on localhost:25; a few of these are part of embedded systems and can't be set up to accept SMTP. As part of good practice, I'd really like to have the library take care of header-injection vulnerabilities itself -- so just dumping a string to popen('/usr/bin/sendmail', 'w') is a little closer to the metal than I'd like. If the answer is 'go write a library,' so be it ;-)
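
    Not a library recommendation from the original post, but a sketch of the usual stdlib pattern: build the message with the email package (which handles header encoding, addressing the injection concern) and pipe it to sendmail -t, which takes the recipients from the message headers:

        import subprocess
        from email.mime.text import MIMEText

        msg = MIMEText('body text')
        msg['From'] = 'me@example.com'
        msg['To'] = 'you@example.com'
        msg['Subject'] = 'hello'

        # list-form argv avoids the shell entirely; the path may vary by host
        p = subprocess.Popen(['/usr/sbin/sendmail', '-t'], stdin=subprocess.PIPE)
        p.communicate(msg.as_string())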

  • Time Complexities of recursive algorithms

    - by Peter
    Whenever I see a recursive solution, or write recursive code for a problem, it is really difficult for me to figure out the time complexity; in most cases I just say it's exponential. But how is it exponential, exactly? How do people tell when it is 2^n, when it is n!, when it is n^n or n^k? I have some questions in mind; let's say: find all permutations of a string (O(n!)); find all sequences that sum to k in an array (exponential, but how exactly do I calculate it?); find all subsets of size k whose sum is 0 (will k appear somewhere in the complexity? It should, right?). Can anyone help me with how to calculate the exact complexity of such questions? I am able to write code for them, but it's hard understanding the exact time complexity.
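
    Not from the original post, but the standard recipe: write a recurrence for the number of nodes in the recursion tree and solve it. For permutations of a string of length n, the tree branches n ways at the top, then n-1, and so on, which is where the factorial comes from:

        T(n) = n \cdot T(n-1) + O(1), \quad T(1) = 1 \;\Longrightarrow\; T(n) = O(n!)

    By the same counting, enumerating all subsets gives 2^n leaves (each element is in or out), while enumerating only subsets of size k gives \binom{n}{k} leaves, so k does appear in the bound.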

  • Capturing stdout within the same process in Python

    - by danben
    I've got a Python script that calls a bunch of functions, each of which writes output to stdout. Sometimes when I run it, I'd like to send the output in an e-mail (along with a generated file). I'd like to know how I can capture the output in memory so I can use the email module to build the e-mail. My ideas so far were: use a memory-mapped file (but it seems like I have to reserve space on disk for this, and I don't know how long the output will be); or bypass all this and pipe the output to sendmail (but this may be difficult if I also want to attach the file).
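
    Not from the original post, but the usual in-process trick is to point sys.stdout at an in-memory StringIO buffer for the duration (no disk space to reserve; it grows as needed). A sketch, where run_everything() stands in for the functions that print:

        import sys
        from StringIO import StringIO

        buf = StringIO()
        old_stdout = sys.stdout
        sys.stdout = buf
        try:
            run_everything()          # hypothetical: whatever currently prints
        finally:
            sys.stdout = old_stdout   # always restore, even on error

        captured = buf.getvalue()     # hand this string to the email module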

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bounded N, e.g. 36), while throwing out newline characters? I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
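
    For reference, a minimal sketch of the whole-file approach described in the post-answer conclusion (slurp the file, strip the header and newlines, then slice) rather than the asker's original generator:

        def parse_whole(fileobj, size=8):
            fileobj.readline()                            # skip the FASTA header line
            data = fileobj.read().replace('\n', '').upper()
            for i in xrange(len(data) - size + 1):
                yield data[i:i + size]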
