Search Results

Search found 19662 results on 787 pages for 'python module'.


  • Most useful Python modules from the standard library?

    - by EOL
    I am teaching a graduate-level Python class at the University of Paris, and the students need to be introduced to the standard library. I want to discuss some of the most important standard modules with them. What modules do you think are absolute musts? Even though responses probably vary depending on your field (web programming, science, etc.), I feel that some modules are commonly needed: math, sys, re, os, os.path, logging,… and maybe: collections, struct,… What modules would you suggest I present, in a 1- or 2-hour slot?
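
    For instance, a short demo along these lines (a sketch; the strings and paths are made up) can show several of the named modules working together:

        import collections
        import logging
        import os.path
        import re

        logging.basicConfig(level=logging.INFO)

        # re + collections: tally the three most common words in a string
        words = re.findall(r'\w+', "the quick brown fox jumps over the lazy dog the end")
        logging.info(collections.Counter(words).most_common(3))

        # os.path: portable path manipulation
        logging.info(os.path.join("tmp", "demo", "data.txt"))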

    Read the article

  • How to Redirect a Python Console output to a QTextBox

    - by krishnanunni
    Hello, I'm working on developing a GUI for recompiling the Linux kernel. For this I need to run 4-5 Linux commands from Python. I use Qt as the GUI designer. I have successfully implemented the commands using os.system() calls, but the output goes to the console. The real problem is that the output of the command is a listing that takes almost 20-25 minutes of continuous printing. How can we transfer this console output to a text box designed in Qt? Can anyone help me implement the setSource() operation in Qt using the live console output as the source?
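
    A minimal sketch of one way to do this, assuming PyQt4: run the command through QProcess instead of os.system(), and append its output to the text box as it arrives. The widget and the command below are made up:

        import sys
        from PyQt4 import QtCore, QtGui

        class LogBox(QtGui.QTextEdit):
            """Text box that streams a command's output as it is produced."""
            def __init__(self):
                QtGui.QTextEdit.__init__(self)
                self.setReadOnly(True)
                self.proc = QtCore.QProcess(self)
                # Merge stderr into stdout so everything lands in one stream.
                self.proc.setProcessChannelMode(QtCore.QProcess.MergedChannels)
                self.proc.readyReadStandardOutput.connect(self.on_output)

            def run(self, program, args):
                self.proc.start(program, args)

            def on_output(self):
                # Append whatever the process has written since the last signal.
                self.append(str(self.proc.readAllStandardOutput()).rstrip())

        app = QtGui.QApplication(sys.argv)
        box = LogBox()
        box.show()
        box.run("make", ["-C", "/usr/src/linux"])   # hypothetical kernel-build command
        sys.exit(app.exec_())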

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like this:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558 {raw_input}
             1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526 {raw_input}
             1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
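
    One direction that may help, as a hedged sketch: since nearby records often share a timezone, memoizing the lookup on rounded coordinates can skip most of the queries. The rounding granularity is an assumption, and SON/tz_lookup_radius are reused from the code above:

        tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            key = (round(lng, 1), round(lat, 1))   # granularity is an assumption
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]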

    Read the article

  • Python list as *args?

    - by Cap
    I have two Python functions, both of which take variable arguments in their function definitions. To give a simple example:

        def func1(*args):
            for arg in args:
                print arg

        def func2(*args):
            return [2 * arg for arg in args]

    I'd like to compose them - as in func1(func2(3, 4, 5)) - but I don't want args in func1 to be ([6, 7, 8],), I want it to be (6, 7, 8), as if it were called as func1(6, 7, 8) rather than func1([6, 7, 8]). Normally, I would just use func1(*func2(3, 4, 5)) or have func1 check whether args[0] is a list. Unfortunately, I can't use the first solution in this particular instance, and applying the second would require doing such a check in many places (there are a lot of functions in the role of func1). Does anybody have an idea how to do this? I imagine some sort of introspection could be used, but I could be wrong.
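
    One possibility, sketched under the assumption that the func1-style functions can be wrapped: a decorator that unpacks a lone list/tuple argument, so the check lives in one place:

        def autounpack(f):
            """If called with a single list/tuple, spread it into *args."""
            def wrapper(*args):
                if len(args) == 1 and isinstance(args[0], (list, tuple)):
                    return f(*args[0])
                return f(*args)
            return wrapper

        @autounpack
        def func1(*args):
            for arg in args:
                print arg

        func1(func2(3, 4, 5))   # prints 6, 7, 8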

    Read the article

  • float change from python 3.0.1 to 3.1.2

    - by Jeremy
    I'm trying to learn Python. I am using 3.1.2 and the O'Reilly book is using 3.0.1. Here is my code:

        import urllib.request

        price = 99.99
        while price > 4.74:
            page = urllib.request.urlopen("http://www.beans-r-us.biz/prices-loyalty.html")
            text = page.read().decode("utf8")
            where = text.find('>$')
            start_of_price = where + 2
            end_of_price = start_of_price + 6
            price = float(text[start_of_price:end_of_price])
        print("Buy!")

    And here is my error:

        Traceback (most recent call last):
          File "/Users/odin/Desktop/Coffe.py", line 14, in <module>
            price = float(text[start_of_price:end_of_price])
        ValueError: invalid literal for float(): 4.59

    What is wrong? Please help!
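
    For what it's worth, a hedged sketch of a more robust way to pull the price out of the page, avoiding the fixed-width slice (the pattern is an assumption about the page's markup):

        import re

        # Match the first ">$9.99"-style price instead of slicing 6 characters.
        m = re.search(r'>\$(\d+\.\d+)', text)
        if m:
            price = float(m.group(1))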

    Read the article

  • Check if Rhythmbox is running via Python

    - by cschol
    I am trying to extract information from Rhythmbox via dbus, but I only want to do so if Rhythmbox is running. Is there a way to check via Python whether Rhythmbox is running, without starting it if it is not? Whenever I invoke the dbus code like this:

        bus = dbus.Bus()
        obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
        iface = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")

    and Rhythmbox is not running, it starts it. Can I check via dbus if Rhythmbox is running without actually starting it? Or is there any other way to do so, other than parsing the list of currently running processes?
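
    A minimal sketch of one way to do this with dbus-python: the bus's name_has_owner() check asks whether anything currently owns the name, without triggering D-Bus activation:

        import dbus

        bus = dbus.SessionBus()
        # NameHasOwner only queries the bus daemon; it does not start anything.
        if bus.name_has_owner("org.gnome.Rhythmbox"):
            obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
            iface = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")
        else:
            print "Rhythmbox is not running"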

    Read the article

  • Running a python script in background from a CGI

    - by Cagey
    I have a Python CGI which runs a script in the background and shows its stdout in the HTML page. I run the script when the user clicks a button on the page. My problem is that while the script is running the page becomes busy and the user can't use the other client-side features of the page. What I want is: the script should run in the background when the user clicks the button and should notify the CGI when the run is complete; the CGI should then show the stdout of the script run. How can this be done?
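
    A minimal sketch of the first half, assuming the job can write its output to a log file: the CGI detaches the child and returns immediately, and a second (hypothetical) CGI serves the log for polling. The script path and log path are made up:

        import os
        import subprocess

        print "Content-Type: text/plain"
        print

        with open("/tmp/job.log", "w") as log:
            subprocess.Popen(["/usr/local/bin/long_job.sh"],   # hypothetical script
                             stdout=log, stderr=subprocess.STDOUT,
                             close_fds=True,         # don't hold the CGI's pipes open
                             preexec_fn=os.setsid)   # detach from the CGI's process group
        print "Job started; poll a log-serving CGI (e.g. tail_log.py) for output."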

    Read the article

  • how to get day name in datetime in python

    - by gadss
    How can I get the day name (such as Monday, Tuesday, Wednesday, Thursday, Friday, Saturday or Sunday) from a datetime in Python? Here is my code in my handlers.py:

        from django.utils.xmlutils import SimplerXMLGenerator
        from piston.handler import BaseHandler
        from booking.models import *
        from django.db.models import *
        from piston.utils import rc, require_mime, require_extended, validate
        import datetime

        class BookingHandler(BaseHandler):
            allowed_method = ('GET', 'POST', 'PUT', 'DELETE')
            fields = ('id', 'date_select', 'product_name', 'quantity', 'price',
                      'totalcost', 'first_name', 'last_name', 'contact', 'product')
            model = Booking

            def read(self, request, id, date_select):
                if not self.has_model():
                    return rc.NOT_IMPLEMENTED
                try:
                    prod = Product.objects.get(id=id)
                    prod_quantity = prod.quantity
                    merge = []
                    checkDateExist = Booking.objects.filter(date_select=date_select)
                    if checkDateExist.exists():
                        entered_date = Booking.objects.values('date_select').distinct('date_select').filter(date_select=date_select)[0]['date_select']
                    else:
                        entered_date = datetime.datetime.strptime(date_select, '%Y-%m-%d')
                        entered_date = entered_date.date()
                    delta = datetime.timedelta(days=3)
                    target_date = entered_date - delta
                    day = 1
                    for x in range(0, 7):
                        delta = datetime.timedelta(days=x + day)
                        new_date = target_date + delta
                        maximumProdQuantity = prod.quantity
                        quantityReserve = Booking.objects.filter(date_select=new_date, product=prod).aggregate(Sum('quantity'))['quantity__sum']
                        if quantityReserve == None:
                            quantityReserve = 0
                        quantityAvailable = prod_quantity - quantityReserve
                        data1 = {'maximum_guest': maximumProdQuantity, 'available': quantityAvailable, 'date': new_date}
                        merge.append(data1)
                    return merge
                except self.model.DoesNotExist:
                    return rc.NOT_HERE

    In my code, this is the part that sets the date:

        for x in range(0, 7):
            delta = datetime.timedelta(days=x + day)
            new_date = target_date + delta
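
    For the day-name part of the question, a minimal sketch; both forms below are standard library calls:

        import calendar
        import datetime

        d = datetime.date(2012, 4, 10)
        print d.strftime('%A')                 # 'Tuesday'
        print calendar.day_name[d.weekday()]   # 'Tuesday'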

    Read the article

  • python regex for repeating string

    - by Lars Nordin
    I want to verify and then parse this string (in quotes):

        string = "start: c12354, c3456, 34526;"  # note that some codes begin with 'c'

    I would like to verify that the string starts with 'start:' and ends with ';'. Afterward, I would like a regex to parse out the codes. I tried the following Python re code:

        regx = r"V1 OIDs: (c?[0-9]+,?)+;"
        reg = re.compile(regx)
        matched = reg.search(string)
        print ' matched.groups()', matched.groups()

    I have tried different variations, but I can only get either the first or the last code, not a list of all three. Or should I abandon using a regex?
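
    A hedged sketch of one way around this: a repeated group keeps only its last match, so verify the envelope first and then findall() the codes:

        import re

        s = "start: c12354, c3456, 34526;"
        m = re.match(r"start:\s*(.*);$", s)   # verify prefix and trailing ';'
        if m:
            codes = re.findall(r"c?[0-9]+", m.group(1))
            print codes   # ['c12354', 'c3456', '34526']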

    Read the article

  • how to send data to server using python

    - by Apache
    Hi experts, how can data be sent to the server? For example, I retrieve a MAC address and I want to send it to the server (i.e. 211.21.24.43:8080/data?mac=00-0C-F1-56-98-AD). I found this snippet on the internet:

        from urllib2 import Request, urlopen
        from binascii import b2a_base64

        def b64open(url, postdata):
            req = Request(url, b2a_base64(postdata),
                          headers={'Content-Transfer-Encoding': 'base64'})
            return urlopen(req)

        conn = b64open("http://211.21.24.43:8080/data", "mac=00-0C-F1-56-98-AD")

    But when I run it:

        File "send2.py", line 8
        SyntaxError: Non-ASCII character '\xc3' in file send2.py on line 8,
        but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Can anyone help me send data to the server? Thanks in advance.
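
    For the GET-style URL shown in the question, a minimal sketch that skips the base64 snippet entirely and just URL-encodes the parameter:

        import urllib
        import urllib2

        params = urllib.urlencode({'mac': '00-0C-F1-56-98-AD'})
        response = urllib2.urlopen("http://211.21.24.43:8080/data?" + params)
        print response.read()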

    Read the article

  • Mocking imported modules in Python

    - by Evgenyt
    I'm trying to implement unit tests for a function that uses imported external objects. For example, helpers.py is:

        import os
        import pylons

        def some_func(arg):
            ...
            var1 = os.path.exist(...)
            var2 = os.path.getmtime(...)
            var3 = pylons.request.environ['HTTP_HOST']
            ...

    So when I'm creating a unit test for it, I do some mocking (minimock in my case) and replace the references to pylons.request and os.path:

        import helpers

        def test_some_func():
            helpers.pylons.request = minimock.Mock("pylons.request")
            helpers.pylons.request.environ = {'HTTP_HOST': "localhost"}
            helpers.os.path = minimock.Mock(....)
            ...
            some_func(...)
            # assert ...

    This does not look good to me. Is there any better way or strategy to substitute imported functions/objects in Python?
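
    A hedged sketch of an alternative, using the third-party mock library: patch the names where they are looked up, and the patch is undone when the block exits (some_func's argument is a placeholder):

        import mock
        import helpers

        def test_some_func():
            with mock.patch('helpers.os.path.getmtime', return_value=1234567890), \
                 mock.patch('helpers.pylons.request') as request:
                request.environ = {'HTTP_HOST': 'localhost'}
                helpers.some_func('arg')   # 'arg' is a placeholder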

    Read the article

  • How to change font size using the Python ImageDraw Library

    - by Eldila
    I am trying to change the font size using Python's ImageDraw library. You can do something like this:

        from PIL import Image, ImageDraw, ImageFont

        fontPath = "/usr/share/fonts/dejavu-lgc/DejaVuLGCSansCondensed-Bold.ttf"
        sans16 = ImageFont.truetype(fontPath, 16)

        im = Image.new("RGB", (200, 50), "#ddd")
        draw = ImageDraw.Draw(im)
        draw.text((10, 10), "Run awayyyy!", font=sans16, fill="red")

    The problem is that I don't want to specify a font. I want to use the default font and just change its size. This seems like it should be simple, but I can't find documentation on how to do it.
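
    For reference, PIL's built-in bitmap font (ImageFont.load_default()) comes in one fixed size, so changing size generally means loading a TrueType font at the desired size. A minimal sketch, with an assumed font path:

        from PIL import Image, ImageDraw, ImageFont

        # load_default() cannot be resized; a TrueType font at size 24 instead.
        font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 24)
        im = Image.new("RGB", (200, 50), "#ddd")
        ImageDraw.Draw(im).text((10, 10), "Bigger text", font=font, fill="red")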

    Read the article

  • Bash or python for changing spacing in files

    - by Werner
    Hi, I have a set of 10000 files. In all of them, the second line looks like:

        AAA 3.429 3.84

    so there is just one space (a requirement) between AAA and the two other columns. The rest of the lines in each file are completely different and correspond to 10 columns of numbers. Randomly, in around 20% of the files, and due to some errors, one gets

        BBB  3.429 3.84

    so now there are two spaces between the first and second column. This is a big error, so I need to fix it, changing from 2 spaces to 1 in the files where the error takes place. The first approach I thought of was to write a bash script that, for each file, reads the 3 values of the second line and then prints them with just one space, doing it for all the files. I wonder what you think about this approach, and whether you could suggest something better - bash, Python or some other approach. Thanks
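
    A hedged sketch of the Python route, run as `python fix_spacing.py file1 file2 ...`:

        import fileinput
        import re

        # inplace=True rewrites each file in place; only line 2 is touched.
        for line in fileinput.input(inplace=True):
            if fileinput.filelineno() == 2:
                line = re.sub(r' +', ' ', line)   # collapse runs of spaces
            print line,                           # the line already ends with '\n'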

    Read the article

  • Random List of millions of elements in Python Efficiently

    - by eWizardII
    Hello, I have read this answer, which is potentially the best way to randomize a list of strings in Python. I'm just wondering whether it is the most efficient way to do it, because I have a list of about 30 million elements, built via the following code:

        import json
        from sets import Set
        from random import shuffle

        a = []
        for i in range(0, 193):
            json_data = open("C:/Twitter/user/user_" + str(i) + ".json")
            data = json.load(json_data)
            for j in range(0, len(data)):
                a.append(data[j]['su'])

        new = list(Set(a))
        print "Cleaned length is: " + str(len(new))

        ## Take Cleaned List and Randomize it for Analysis
        shuffle(new)

    If there is a more efficient way to do it, I'd greatly appreciate any advice on how to do it. Thanks,
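
    Two small points, as a hedged sketch: the built-in set type replaces the deprecated sets module, and random.shuffle() is already an in-place Fisher-Yates shuffle, so the randomization step itself is hard to beat:

        from random import shuffle

        unique = list(set(a))   # built-in set; 'a' is the list from the code above
        shuffle(unique)         # O(n), in place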

    Read the article

  • Python: Hack to call a method on an object that isn't of its class

    - by cool-RR
    Assume you define a class which has a method that does some complicated processing:

        class A(object):
            def my_method(self):
                # Some complicated processing is done here
                return self

    And now you want to use that method on some object from another class entirely. Like, you want to do A.my_method(7). This is what you'd get:

        TypeError: unbound method my_method() must be called with A instance as
        first argument (got int instance instead)

    Now, is there any possibility of hacking things so you could call that method on 7? I want to avoid moving the function or rewriting it. (Note that the method's logic does depend on self.) One note: I know that some people will want to say, "You're doing it wrong! You're abusing Python! You shouldn't do it!" So yes, I know, this is a terrible terrible thing I want to do. I'm asking if someone knows how to do it, not to preach to me that I shouldn't do it.
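
    A hedged sketch of the usual trick: in Python 2 the type check lives on the unbound-method wrapper, so grabbing the underlying function bypasses it (in Python 3, A.my_method is already a plain function and A.my_method(7) just works):

        # __func__ (a.k.a. im_func) is the raw function with no instance check.
        result = A.my_method.__func__(7)   # same as A.my_method.im_func(7)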

    Read the article

  • Python 3.1 - Memory Error during sampling of a large list

    - by jimy
    The input list can be more than 1 million numbers. When I run the following code with a smaller 'repeats', it's fine:

        def sample(x):
            length = 1000000
            new_array = random.sample(list(x), length)
            return new_array

        def repeat_sample(x):
            i = 0
            repeats = 100
            list_of_samples = []
            for i in range(repeats):
                list_of_samples.append(sample(x))
            return list_of_samples

        repeat_sample(large_array)

    However, using a high repeats such as the 100 above results in MemoryError. The traceback is as follows:

        Traceback (most recent call last):
          File "C:\Python31\rnd.py", line 221, in <module>
            STORED_REPEAT_SAMPLE = repeat_sample(STORED_ARRAY)
          File "C:\Python31\rnd.py", line 129, in repeat_sample
            list_of_samples.append(sample(x))
          File "C:\Python31\rnd.py", line 121, in sample
            new_array = random.sample(list(x), length)
          File "C:\Python31\lib\random.py", line 309, in sample
            result = [None] * k
        MemoryError

    I am assuming I'm running out of memory. I do not know how to get around this problem. Thank you for your time!
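
    A hedged sketch of one way to avoid holding all 100 samples at once: make repeat_sample a generator and build list(x) only once (process() is a placeholder for the per-sample work):

        import random

        def repeat_sample(x, repeats=100, length=1000000):
            pool = list(x)                      # materialize once, not per call
            for _ in range(repeats):
                yield random.sample(pool, length)

        for s in repeat_sample(large_array):
            process(s)   # hypothetical per-sample step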

    Read the article

  • Loading a DB table into nested dictionaries in Python

    - by Hossein
    Hi, I have a table in a MySQL DB which I want to load into a dictionary in Python. The table columns are as follows: id, url, tag, tagCount. tagCount is the number of times that a tag has been repeated for a certain url. So in this case I need a nested dictionary - in other words, a dictionary of dictionaries - to load this table, because each url has several tags, for which there are different tagCounts. The code that I used is this (the whole table is about 22,000 records):

        cursor.execute('''SELECT url, tag, tagCount FROM wtp''')
        urlTagCount = cursor.fetchall()
        d = defaultdict(defaultdict)
        for url, tag, tagCount in urlTagCount:
            d[url][tag] = tagCount
        print d

    First of all, I want to know if this is correct, and if it is, why does it take so much time? Are there any faster solutions? I am loading this table into memory for fast access, to get rid of the hassle of slow database operations, but with this slow speed it has become a bottleneck itself - it is even much slower than DB access. Can anyone help? Thanks
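
    For reference, a minimal sketch of the usual idiom: defaultdict(dict) for the inner mapping, and iterating the cursor directly instead of materializing fetchall():

        from collections import defaultdict

        d = defaultdict(dict)   # the inner factory only needs to build a dict
        cursor.execute("SELECT url, tag, tagCount FROM wtp")
        for url, tag, tagCount in cursor:   # iterate instead of fetchall()
            d[url][tag] = tagCount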

    Read the article

  • Creating a unique key based on file content in python

    - by Cawas
    I have many, many files to be uploaded to the server, and I just want a way to avoid duplicates. Thus, generating a unique and small key value from a big string seemed like something a checksum was intended to do, and hashing seemed like the evolution of that. So I was going to use the md5 hash for this. But then I read somewhere that "MD5s are not meant to be unique keys" and I thought that was really weird. What's the right way of doing this? Edit: by the way, I took two sources to get to the following, which is how I'm currently doing it, and it's working just fine with Python 2.5:

        import hashlib

        def md5_from_file(fileName, block_size=2**14):
            md5 = hashlib.md5()
            f = open(fileName, 'rb')  # binary mode, so the digest does not depend on the platform
            while True:
                data = f.read(block_size)
                if not data:
                    break
                md5.update(data)
            f.close()
            return md5.hexdigest()
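
    If a stronger digest is ever wanted, the same pattern carries over; a minimal sketch:

        import hashlib

        def sha256_from_file(fileName, block_size=2**14):
            h = hashlib.sha256()
            with open(fileName, 'rb') as f:
                # iter() with a sentinel reads until f.read() returns ''.
                for chunk in iter(lambda: f.read(block_size), ''):
                    h.update(chunk)
            return h.hexdigest()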

    Read the article

  • Comment out a python code block

    - by gbarry
    Is there any mechanism to comment out large blocks of Python code? Right now, the only ways I can see of commenting out code are to either start every line with a #, or to enclose the code in """ (triple quotes), except that actually makes it show up in various doc tools. Edit: After reading the answers (and referring to the "duplicate"), I have concluded the correct answer is "No". One person said so, and the rest lectured us about editors. Not a bad thing, but I feel it's important to put the answer at the top.

    Read the article

  • python: multiline regular expression

    - by facha
    Hi, everyone. I have a piece of text and I've got to parse usernames and hashes out of it. Right now I'm doing it with two regular expressions. Could I do it with just one multiline regular expression?

        #!/usr/bin/env python
        import re

        test_str = """
        Hello, UserName. Please read this looooooooooooooooong text. hash
        Now, write down this hash: fdaf9399jef9qw0j. Then keep reading this loooooooooong text.
        Hello, UserName2. Please read this looooooooooooooooong text. hash
        Now, write down this hash: gtwnhton340gjr2g. Then keep reading this loooooooooong text.
        """

        logins = re.findall('Hello, (?P<login>.+).', test_str)
        hashes = re.findall('hash: (?P<hash>.+).', test_str)
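
    A hedged sketch of a single-pattern version: two named groups, with DOTALL so the gap between the greeting and the hash can span lines:

        pairs = re.findall(r'Hello, (?P<login>\w+)\..*?hash: (?P<hash>\w+)\.',
                           test_str, re.DOTALL)
        print pairs   # [('UserName', 'fdaf9399jef9qw0j'), ('UserName2', 'gtwnhton340gjr2g')]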

    Read the article

  • "painting" one array onto another using python / numpy

    - by Nate
    I'm writing a library to process gaze tracking in Python, and I'm rather new to the whole numpy/scipy world. Essentially, I'm looking to take an array of (x, y) values in time and "paint" some shape onto a canvas at those coordinates. For example, the shape might be a blurred circle. The operation I have in mind is more or less identical to using the paintbrush tool in Photoshop. I've got an iterative algorithm that trims my "paintbrush" to be within the bounds of my image and adds each point to an accumulator image, but it's slow(!), and it seems like there's probably a fundamentally easier way to do this. Any pointers as to where to start looking?
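
    A hedged sketch of the usual vectorized approach: clip the stamp against the canvas with slice arithmetic and add it in one array operation per gaze point (the shapes and points below are made up):

        import numpy as np

        def paint(canvas, stamp, x, y):
            """Add `stamp` to `canvas` with its top-left corner at (x, y),
            clipped to the canvas bounds."""
            sh, sw = stamp.shape
            x0, y0 = max(x, 0), max(y, 0)
            x1, y1 = min(x + sw, canvas.shape[1]), min(y + sh, canvas.shape[0])
            if x0 < x1 and y0 < y1:
                canvas[y0:y1, x0:x1] += stamp[y0 - y:y1 - y, x0 - x:x1 - x]

        canvas = np.zeros((480, 640))
        stamp = np.ones((15, 15))                       # stand-in for a blurred circle
        for x, y in [(10, 10), (630, 470), (-5, 200)]:  # made-up gaze points
            paint(canvas, stamp, x, y)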

    Read the article

  • Python 2.6, 3 abstract base class misunderstanding

    - by Aaron
    I'm not seeing what I expect when I use ABCMeta and abstractmethod. This works fine in Python 3:

        from abc import ABCMeta, abstractmethod

        class Super(metaclass=ABCMeta):
            @abstractmethod
            def method(self):
                pass

        a = Super()
        # TypeError: Can't instantiate abstract class Super ...

    And in 2.6:

        class Super():
            __metaclass__ = ABCMeta

            @abstractmethod
            def method(self):
                pass

        a = Super()
        # TypeError: Can't instantiate abstract class Super ...

    They both also work fine (I get the expected exception) if I derive Super from object, in addition to ABCMeta. They both "fail" (no exception raised) if I derive Super from list. I want an abstract base class that is a list but abstract, and concrete in subclasses. Am I doing it wrong, or should I not want this in Python?

    Read the article

  • Python grab class in class definition.

    - by epochwolf
    I don't even know how to explain this, so here is the code I'm trying:

        class Test:
            type = self.__name__  # self doesn't work; how do I get a reference to Test?

        class Test2(Test):
            pass

        # Test2.type should return "Test2"

    The reason I'm even trying this is that I'm working on creating a base class for an ORM I'm using. I want to avoid defining the table name for every model I have. Also, knowing what the limits of Python are will help me avoid wasting time trying impossible things.
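
    A hedged sketch of one way to get this for an ORM-style base class: the class object doesn't exist yet while its body runs, but a metaclass runs right after it is built, so it can stamp the name onto every subclass:

        class TableMeta(type):
            def __init__(cls, name, bases, ns):
                super(TableMeta, cls).__init__(name, bases, ns)
                cls.type = name   # each class built by this metaclass gets its own name

        class Test(object):
            __metaclass__ = TableMeta

        class Test2(Test):
            pass

        print Test.type    # 'Test'
        print Test2.type   # 'Test2'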

    Read the article

  • Python - Using "Google AJAX Search" API's Local Search Objects

    - by user330739
    Hi! I've just started using Google's search API to find addresses and the distances between those addresses. I used geopy for this, but I often had the problem of not getting the correct addresses for my queries. I decided, therefore, to experiment with Google's "Local Search" (http://code.google.com/apis/ajaxsearch/local.html). Anyway, I wanted to ask if I could use the "Local Search" objects provided by the API within Python. Something tells me that I can't and that I have to use JSON. Does anyone know if there is a workaround? PS: I'm trying to make something like this: http://www.google.com/uds/samples/random/lead.html ... except a matrix-type deal where the insides will be filled with distances between the addresses. Thanks for reading!
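
    For the JSON route, a hedged sketch against the API's REST endpoint; the response field names below are assumptions based on the legacy AJAX Search JSON format:

        import json
        import urllib
        import urllib2

        params = urllib.urlencode({'v': '1.0', 'q': '1600 Amphitheatre Parkway'})
        url = 'http://ajax.googleapis.com/ajax/services/search/local?' + params
        reply = json.load(urllib2.urlopen(url))
        # 'responseData'/'results' and the per-result keys are assumed, not verified.
        for r in reply['responseData']['results']:
            print r['titleNoFormatting'], r['lat'], r['lng']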

    Read the article

  • Python having problems writing/reading and testing in a correct format

    - by Ionut
    I'm trying to make a program that will do the following:

    - check if auth_file exists
    - if yes: read the file and try to log in using the data from that file; if the data is wrong, request new data
    - if no: request some data, then create the file and fill it with the requested data

    So far:

        import json
        import getpass
        import os
        import requests

        filename = ".auth_data"
        auth_file = os.path.realpath(filename)
        url = 'http://example.com/api'
        headers = {'content-type': 'application/json'}

        def load_auth_file():
            try:
                f = open(auth_file, "r")
                auth_data = f.read()
                r = requests.get(url, auth=auth_data, headers=headers)
                if r.reason == 'OK':
                    return auth_data
                else:
                    print "Incorrect login..."
                    req_auth()
            except IOError:
                f = file(auth_file, "w")
                f.write(req_auth())
                f.close()

        def req_auth():
            user = str(raw_input('Username: '))
            password = getpass.getpass('Password: ')
            auth_data = (user, password)
            r = requests.get(url, auth=auth_data, headers=headers)
            if r.reason == 'OK':
                return user, password
            elif r.reason == "FORBIDDEN":
                print "Incorrect login information..."
                req_auth()
                return False

    My problem (in understanding and applying the correct way): I can't find a correct way of storing the data returned by req_auth() to auth_file in a format that can be read and used in load_auth_file(). PS: Of course I'm a beginner in Python and I'm sure I have missed some key elements here :(
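
    A hedged sketch of the storage piece: write the credentials as JSON so reading them back yields the same (user, password) pair that requests expects for auth= (storing a password in plain text is its own problem, of course):

        import json

        def save_auth(user, password):
            with open(auth_file, "w") as f:
                json.dump({"user": user, "password": password}, f)

        def load_auth():
            with open(auth_file) as f:
                d = json.load(f)
            return (d["user"], d["password"])   # tuple usable as requests' auth=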

    Read the article
