Search Results

Search found 20092 results on 804 pages for 'python import'.

Page 125 of 804

  • JavaScript-like Object in Python standard library?

    - by David Wolever
    Quite often, I find myself wanting a simple, "dumb" object in Python which behaves like a JavaScript object (i.e., its members can be accessed either with .member or with ['member']). Usually I'll just stick this at the top of the .py:

        class DumbObject(dict):
            def __getattr__(self, attr):
                return self[attr]
            def __stattr__(self, attr, value):
                self[attr] = value

    But that's kind of lame, and there is at least one bug with that implementation (although I can't remember what it is). So, is there something similar in the standard library? And, for the record, simply instantiating object doesn't work:

        obj = object()
        obj.airspeed = 42
        Traceback (most recent call last):
          File "", line 1, in
        AttributeError: 'object' object has no attribute 'airspeed'

    Thanks, David

    Read the article
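
    A sketch of a likely fix (an addition here, not from the thread itself): the bug in the snippet above is the misspelled __stattr__ hook (attribute assignment goes through __setattr__), and __getattr__ should raise AttributeError rather than KeyError so tools like hasattr() behave:

        class DumbObject(dict):
            def __getattr__(self, attr):
                try:
                    return self[attr]
                except KeyError:
                    raise AttributeError(attr)
            def __setattr__(self, attr, value):
                self[attr] = value

        obj = DumbObject()
        obj.airspeed = 42       # same as obj['airspeed'] = 42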

  • Problem with a callback in the python optparse module

    - by PierrOz
    Hi guys, I'm playing with Python 2.6 and its optparse module. I would like to convert one of my arguments to a datetime through a callback, but it fails. Here is the code:

        def parsedate(option, opt_str, value, parser):
            option.date = datetime.strptime(value, "%Y/%m/%d")

        def parse_options(args):
            parser = OptionParser(usage="%prog -l LOGFOLDER [-e]", version="%prog 1.0")
            parser.add_option("-d", "--date", action="callback", callback="parsedate", dest="date")
            global options
            (options, args) = parser.parse_args(args)
            print option.date.strftime()

        if __name__ == "__main__":
            parse_options(sys.argv[1:])

    I get an error in optparse.py, in _check_callback: "callback not callable". I guess I'm doing something wrong in the way I define my callback, but what? And why? Can anyone help?

    Read the article
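
    The likely cause, sketched: optparse wants the callback as a function object, not as the string "parsedate", the converted value belongs on parser.values rather than on the option, and the option needs a type so optparse consumes an argument at all. A corrected version (strftime also needs a format string):

        import sys
        from datetime import datetime
        from optparse import OptionParser

        def parsedate(option, opt_str, value, parser):
            # store the converted value under the option's dest
            setattr(parser.values, option.dest, datetime.strptime(value, "%Y/%m/%d"))

        def parse_options(args):
            parser = OptionParser(usage="%prog -l LOGFOLDER [-e]", version="%prog 1.0")
            parser.add_option("-d", "--date", action="callback", callback=parsedate,
                              type="string", dest="date")
            options, args = parser.parse_args(args)
            print options.date.strftime("%Y-%m-%d")

        if __name__ == "__main__":
            parse_options(sys.argv[1:])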

  • Encoding in python with lxml - complex solution

    - by Vojtech R.
    Hi, I need to download and parse a webpage with lxml and build UTF-8 XML output. I think a schema in pseudocode is more illustrative:

        from lxml import etree
        webfile = urllib2.urlopen(url)
        root = etree.parse(webfile.read(), parser=etree.HTMLParser(recover=True))
        txt = my_process_text(etree.tostring(root.xpath('/html/body'), encoding=utf8))
        output = etree.Element("out")
        output.text = txt
        outputfile.write(etree.tostring(output, encoding=utf8))

    So webfile can be in any encoding (lxml should handle this). The output file has to be in UTF-8. I'm not sure where to use encoding/decoding. Is this schema OK? (I can't find a good tutorial about lxml and encoding, but I can find many problems with it...) I need a robust, approved solution, so I ask you seniors. Many thanks

    Read the article
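
    A sketch of one robust arrangement (assuming Python 2, with url, my_process_text and the output path standing in for the question's names): hand lxml the file object so it can sniff the input encoding itself, keep unicode inside the program, and encode exactly once on output:

        import urllib2
        from lxml import etree

        webfile = urllib2.urlopen(url)
        parser = etree.HTMLParser(recover=True)     # lxml detects the page encoding
        tree = etree.parse(webfile, parser)         # pass the file object, not .read()

        body = tree.xpath('/html/body')[0]
        txt = my_process_text(etree.tostring(body, encoding=unicode))

        output = etree.Element("out")
        output.text = txt                           # unicode inside the tree
        out = open('out.xml', 'wb')
        out.write(etree.tostring(output, encoding='utf-8', xml_declaration=True))
        out.close()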

  • Python-McNuggets problem

    - by challarao
    Hi! I am a student of IIIT. I am new to Python, and this question is one of the problems in my problem set. Please help me get started writing a program for it. Show that it is possible to buy exactly 50, 51, 52, 53, 54, and 55 McNuggets by finding solutions to the Diophantine equation 6a + 9b + 20c = n. You can solve this in your head, using paper and pencil, or by writing a program. However you choose to solve this problem, list the combinations of 6, 9 and 20 packs of McNuggets you need to buy in order to get each of the exact amounts. Given that it is possible to buy sets of 50, 51, 52, 53, 54 or 55 McNuggets by combinations of 6, 9 and 20 packs, show that it is possible to buy 56, 57, ..., 65 McNuggets. In other words, show how, given solutions for 50-55, one can derive solutions for 56-65.

    Read the article
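
    A brute-force sketch of the search (Python 2 style, matching the rest of this page): try every pack count small enough to fit in n. Once 50-55 are covered, each of 56-65 follows by adding one 6-pack to the solution six nuggets below it.

        def mcnuggets(n):
            """Return (a, b, c) with 6*a + 9*b + 20*c == n, or None."""
            for c in range(n // 20 + 1):
                for b in range((n - 20 * c) // 9 + 1):
                    rest = n - 20 * c - 9 * b
                    if rest % 6 == 0:
                        return rest // 6, b, c
            return None

        for n in range(50, 56):
            print n, mcnuggets(n)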

  • Web scraping with Python

    - by Jack
    I'm currently trying to scrape a website that has fairly poorly-formatted HTML (often missing closing tags, no use of classes or ids, so it's incredibly difficult to go straight to the element you want, etc.). I've been using BeautifulSoup with some success so far, but every once in a while (though quite rarely) I run into a page where BeautifulSoup creates the HTML tree a bit differently from (for example) Firefox or WebKit. While this is understandable, since the formatting of the HTML leaves this ambiguous, if I were able to get the same parse tree as Firefox or WebKit produce I would be able to parse things much more easily. The problems are usually something like the site opening a <b> tag twice; when BeautifulSoup sees the second <b> tag it immediately closes the first, while Firefox and WebKit nest the <b> tags. Is there a web scraping library for Python (or even any other language, I'm getting desperate) that can reproduce the parse tree generated by Firefox or WebKit, or at least come closer than BeautifulSoup in cases of ambiguity?

    Read the article
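
    One candidate worth checking (an addition here, not from the question): html5lib implements the parsing algorithm the HTML5 specification prescribes for browsers, so it resolves ambiguities such as the doubled <b> tag the way Firefox and WebKit do, and it can build an lxml tree for convenient querying:

        import html5lib

        # html5lib recovers from misnested tags using the browsers' own rules
        doc = html5lib.parse(open("page.html").read(), treebuilder="lxml")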

  • Python re.sub MULTILINE caret match

    - by cdleary
    The Python docs say:

        re.MULTILINE: When specified, the pattern character '^' matches at the beginning of the string and at the beginning of each line (immediately following each newline)... By default, '^' matches only at the beginning of the string...

    So what's going on when I get the following unexpected result?

        >>> import re
        >>> s = """// The quick brown fox.
        ... // Jumped over the lazy dog."""
        >>> re.sub('^//', '', s, re.MULTILINE)
        ' The quick brown fox.\n// Jumped over the lazy dog.'

    Read the article
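
    The explanation, sketched: the fourth positional argument of re.sub is count, not flags, so re.MULTILINE (the integer 8) is silently taken as a maximum number of substitutions while the flag itself never applies. In Python 2.6, re.sub has no flags parameter at all (it appeared in 2.7), so compile the pattern with the flag instead:

        import re

        s = "// The quick brown fox.\n// Jumped over the lazy dog."
        print re.sub(re.compile('^//', re.MULTILINE), '', s)
        # -> ' The quick brown fox.\n Jumped over the lazy dog.'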

  • plotting results of hierarchical clustering on top of a matrix of data in python

    - by user248237
    How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? An example is at the bottom of the following figure: http://www.coriell.org/images/microarray.gif I use scipy.cluster.hierarchy.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data. How can I then plot the data as a matrix where the rows have been reordered to reflect a clustering induced by cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it. Any help on this would be greatly appreciated.

    Read the article
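
    A minimal matplotlib sketch of the layout (random data standing in for the real matrix): draw the dendrogram in one axes, reorder the matrix rows by the dendrogram's leaf order, and draw the heat map plus colorbar in neighbouring axes:

        import numpy as np
        import matplotlib.pyplot as plt
        import scipy.cluster.hierarchy as sch

        data = np.random.rand(10, 8)                    # hypothetical observations
        link = sch.linkage(data, method='average')

        fig = plt.figure()
        ax_dendro = fig.add_axes([0.09, 0.1, 0.2, 0.8])     # dendrogram on the left
        dendro = sch.dendrogram(link, orientation='right')
        ax_dendro.set_xticks([]); ax_dendro.set_yticks([])

        ax_matrix = fig.add_axes([0.3, 0.1, 0.6, 0.8])      # reordered heat map
        im = ax_matrix.matshow(data[dendro['leaves'], :], aspect='auto', origin='lower')
        ax_matrix.set_xticks([]); ax_matrix.set_yticks([])

        fig.colorbar(im, cax=fig.add_axes([0.91, 0.1, 0.02, 0.8]))  # the scale bar
        plt.show()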

  • How to structure Python package that contains Cython code

    - by Craig McQueen
    I'd like to make a Python package containing some Cython code. I've got the Cython code working nicely. However, now I want to know how best to package it. For most people, who just want to install the package, I'd like to include the .c file that Cython creates, and arrange for setup.py to compile that to produce the module. Then the user doesn't need Cython installed in order to install the package. But for people who may want to modify the package, I'd also like to provide the Cython .pyx files, and somehow also allow setup.py to build them using Cython (so those users would need Cython installed). How should I structure the files in the package to cater for both these scenarios? The Cython documentation gives a little guidance, but it doesn't say how to make a single setup.py that handles both the with/without Cython cases.

    Read the article
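
    The usual pattern, sketched (package and module names here are hypothetical): try to import Cython in setup.py; build from the .pyx when it is available and fall back to the shipped .c file when it is not, so plain users never need Cython:

        from distutils.core import setup
        from distutils.extension import Extension

        try:
            from Cython.Distutils import build_ext
            sources = ["mypkg/fast.pyx"]        # rebuilt by Cython for developers
            cmdclass = {"build_ext": build_ext}
        except ImportError:
            sources = ["mypkg/fast.c"]          # pre-generated C for end users
            cmdclass = {}

        setup(
            name="mypkg",
            ext_modules=[Extension("mypkg.fast", sources)],
            cmdclass=cmdclass,
        )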

  • vlc python bindings - how to receive keyboard input?

    - by itsadok
    I'm trying to use VLC's python bindings to create my own little video player. The demo implementation is quite simple and nice, but it requires all the keyboard commands to be typed into the console from which the script was run. Is there any way I can handle keyboard input also when the video player itself has focus? Specifically, I care about controlling the video while in fullscreen mode. Perhaps there's a way to keep the keyboard focus in the console (or maybe another window) while showing the video? I'm using Windows XP, if that has any relevance.

    Read the article
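
    One common workaround, sketched with Tkinter (the exact embedding call varies between versions of the VLC bindings, so that line is only indicative): host the video in a window you own and bind key events on it, so input works while the player has focus:

        import Tkinter as tk

        root = tk.Tk()
        panel = tk.Frame(root, bg='black', width=640, height=480)
        panel.pack(fill='both', expand=1)

        def on_key(event):
            print 'key pressed:', event.keysym   # dispatch to the player here

        root.bind('<Key>', on_key)               # fires while the window has focus
        # player.set_hwnd(panel.winfo_id())      # hand the panel to libvlc (API varies)
        root.mainloop()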

  • Python SQLite: database is locked

    - by user322683
    I'm trying this code:

        import sqlite
        connection = sqlite.connect('cache.db')
        cur = connection.cursor()
        cur.execute('''create table item
                       (id integer primary key, itemno text unique,
                        scancode text, descr text, price real)''')
        connection.commit()
        cur.close()

    I'm catching this exception:

        Traceback (most recent call last):
          File "cache_storage.py", line 7, in <module>
            scancode text, descr text, price real)''')
          File "/usr/lib/python2.6/dist-packages/sqlite/main.py", line 237, in execute
            self.con._begin()
          File "/usr/lib/python2.6/dist-packages/sqlite/main.py", line 503, in _begin
            self.db.execute("BEGIN")
        _sqlite.OperationalError: database is locked

    Permissions for cache.db are OK. Any ideas?

    Read the article
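
    Two things worth ruling out, sketched: another connection (or a crashed process that left a stale journal) may still hold cache.db, and the import above is the old third-party sqlite wrapper rather than the sqlite3 module bundled with Python 2.5+. The same code on the standard module:

        import sqlite3

        connection = sqlite3.connect('cache.db')
        cur = connection.cursor()
        cur.execute('''create table if not exists item
                       (id integer primary key, itemno text unique,
                        scancode text, descr text, price real)''')
        connection.commit()
        cur.close()
        connection.close()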

  • Python 2.5.2: trying to open files recursively

    - by user248959
    Hi, the script below should open all the files inside the folder 'pruebaba' recursively, but I get this error:

        Traceback (most recent call last):
          File "/home/tirengarfio/Desktop/prueba.py", line 8, in
            f = open(file,'r')
        IOError: [Errno 21] Is a directory

    This is the hierarchy:

        pruebaba
            folder1
                folder11
                    test1.php
                folder12
                    test1.php
                    test2.php
            folder2
                test1.php

    The script:

        import re,fileinput,os
        path="/home/tirengarfio/Desktop/pruebaba"
        os.chdir(path)
        for file in os.listdir("."):
            f = open(file,'r')
            data = f.read()
            data = re.sub(r'(\s*function\s+.*\s*{\s*)', r'\1echo "The function starts here."', data)
            f.close()
            f = open(file, 'w')
            f.write(data)
            f.close()

    Any idea? Regards, Javi

    Read the article
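
    The direct cause, sketched: os.listdir('.') yields only the top-level entries, including directories, and open() on a directory raises IOError 21. os.walk visits every subfolder and hands back only file names:

        from __future__ import with_statement   # needed on Python 2.5
        import os
        import re

        path = "/home/tirengarfio/Desktop/pruebaba"

        for dirpath, dirnames, filenames in os.walk(path):
            for name in filenames:
                filename = os.path.join(dirpath, name)
                with open(filename, 'r') as f:
                    data = f.read()
                data = re.sub(r'(\s*function\s+.*\s*{\s*)',
                              r'\1echo "The function starts here."', data)
                with open(filename, 'w') as f:
                    f.write(data)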

  • Run Python CGI Script on Windows XP

    - by daveywc
    I have a Windows XP machine that has Apache installed via a VisualSVNServer installation. I am trying to get a simple Python CGI script to run in my browser, e.g. http://build.procepts.com.au:8080/hg/cgi-bin/test.cgi. However, despite trying all the recommended approaches, the browser only ever displays the plain text of the CGI script. Amongst many other attempted solutions I have followed the instructions contained here. My ultimate aim is to be able to use the Apache web server to serve repositories from a new Mercurial installation. Seeing as Apache is already installed for VisualSVNServer, I thought I might as well make use of it. Is there some other trick to getting this working?

    Read the article
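
    A sketch of the usual checklist (paths hypothetical): when Apache returns the script source as plain text, the CGI handler is not engaged for that location. The script must start with a shebang pointing at the Windows Python binary, and the directory needs CGI switched on:

        #!C:/Python26/python.exe
        # In httpd.conf the directory needs something like:
        #     Options +ExecCGI
        #     AddHandler cgi-script .cgi
        print "Content-Type: text/plain"
        print
        print "CGI is working"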

  • Python equivalent?

    - by user304014
    Is there any way to transform the following Java code into its Python equivalent?

        public class Animal {
            public enum AnimalBreed {
                Dog, Cat, Cow, Chicken, Elephant
            }
            private static final int Animals = AnimalBreed.Dog.ordinal();
            private static final String[] myAnimal = new String[Animals];
            private static Animal[] animal = new Animal[Animals];
            public static final Animal DogAnimal = new Animal(AnimalBreed.Dog, "woff");
            public static final Animal CatAnimal = new Animal(AnimalBreed.Cat, "meow");
            private AnimalBreed breed;
            public static Animal myDog (String name) {
                return new Animal(AnimalBreed.Dog, name);
            }
        }

    Read the article
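
    A rough Python 2 counterpart, sketched (Python grew a real enum type only in 3.4, so class-level constants are the usual stand-in here):

        class Animal(object):
            DOG, CAT, COW, CHICKEN, ELEPHANT = range(5)   # the "enum"

            def __init__(self, breed, sound):
                self.breed = breed
                self.sound = sound

            @staticmethod
            def my_dog(sound):
                return Animal(Animal.DOG, sound)

        DogAnimal = Animal(Animal.DOG, "woff")
        CatAnimal = Animal(Animal.CAT, "meow")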

  • Fastest way to list all primes below N in python

    - by jbochi
    This is the best algorithm I could come up with after struggling with a couple of Project Euler questions.

        def get_primes(n):
            numbers = set(range(n, 1, -1))
            primes = []
            while numbers:
                p = numbers.pop()
                primes.append(p)
                numbers.difference_update(set(range(p*2, n+1, p)))
            return primes

        >>> timeit.Timer(stmt='get_primes.get_primes(1000000)', setup='import get_primes').timeit(1)
        1.1499958793645562

    Can it be made even faster? EDIT: This code has a flaw: since numbers is an unordered set, there is no guarantee that numbers.pop() will remove the lowest number from the set. Nevertheless, it works (at least for me) for some input numbers:

        >>> sum(get_primes(2000000))
        142913828922L   # the correct sum of all primes below 2 million
        >>> 529 in get_primes(1000)
        False
        >>> 529 in get_primes(530)
        True

    EDIT: The ranking so far (pure Python, no external sources, all primes below 1 million):

        Sundaram's Sieve, implemented by myself: 327 ms
        Daniel's Sieve: 435 ms
        Alex's recipe from the Cookbook: 710 ms

    EDIT: ~unutbu is leading the race.

    Read the article
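
    For comparison, a plain Sieve of Eratosthenes, sketched (pure Python, and it keeps the ordering guarantee the set-based version lacks):

        def primes_below(n):
            """All primes strictly below n."""
            if n <= 2:
                return []
            sieve = [True] * n
            sieve[0] = sieve[1] = False
            for i in xrange(2, int(n ** 0.5) + 1):
                if sieve[i]:
                    sieve[i*i::i] = [False] * len(sieve[i*i::i])
            return [i for i, is_prime in enumerate(sieve) if is_prime]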

  • computing z-scores for 2D matrices in scipy/numpy in Python

    - by user248237
    How can I compute the z-score for matrices in Python? Suppose I have the array:

        a = array([[   1,    2,    3],
                   [  30,   35,   36],
                   [2000, 6000, 8000]])

    and I want to compute the z-score for each row. The solution I came up with is:

        array([zs(item) for item in a])

    where zs is in scipy.stats.stats. Is there a better built-in vectorized way to do this? Also, is it always good to z-score numbers before using hierarchical clustering with euclidean or seuclidean distance? Can anyone discuss the relative advantages/disadvantages? Thanks.

    Read the article
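
    A vectorized sketch with plain numpy (newer SciPy releases also offer scipy.stats.zscore with an axis argument): subtract each row's mean and divide by each row's standard deviation via broadcasting:

        import numpy as np

        a = np.array([[   1,    2,    3],
                      [  30,   35,   36],
                      [2000, 6000, 8000]], dtype=float)

        row_z = (a - a.mean(axis=1)[:, np.newaxis]) / a.std(axis=1)[:, np.newaxis]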

  • Calculating spam probability in python

    - by Hobhouse
    I am building a website in Python/Django and want to predict whether a user submission is valid or whether it is spam. Users have an accept rate on their submissions, like this website has. Users can moderate other users' submissions, and these moderations are later meta-moderated by an admin. Given this: user A, with a submission accept rate of 60%, submits something. User B moderates A's post as a valid submission. However, his moderations are often wrong, and his moderations' accept rate is a mere 30%. User C moderates A's post as spam. User C is usually right; his moderations' accept rate is 80%. How can I predict the chance of A's post being spam?

    Read the article
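
    One naive way to combine the signals, sketched (it assumes moderators judge independently, treats each accept rate as the probability that the vote is correct, and treats A's 40% rejection rate as the prior, all of which are strong simplifications):

        def spam_probability(prior_spam, votes):
            """votes: list of (said_spam, accuracy) pairs."""
            p_spam, p_ham = prior_spam, 1.0 - prior_spam
            for said_spam, accuracy in votes:
                if said_spam:
                    p_spam *= accuracy          # correct if it really is spam
                    p_ham *= 1.0 - accuracy
                else:
                    p_spam *= 1.0 - accuracy
                    p_ham *= accuracy
            return p_spam / (p_spam + p_ham)

        # A's accept rate 60% -> prior P(spam) = 0.4; B (30%) said valid,
        # C (80%) said spam:
        print spam_probability(0.4, [(False, 0.3), (True, 0.8)])   # ~0.86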

  • OpenCV in Python can't scan through pixels

    - by Marco L.
    Hi everyone, I'm stuck with a problem in the Python wrapper for OpenCV. I have this function that returns 1 if the number of black pixels is greater than threshold:

        def checkBlackPixels( img, threshold ):
            width     = img.width
            height    = img.height
            nchannels = img.nChannels
            step      = img.widthStep
            dimtot    = width * height
            data      = img.imageData
            black     = 0
            for i in range( 0, height ):
                for j in range( 0, width ):
                    r = data[i*step + j*nchannels + 0]
                    g = data[i*step + j*nchannels + 1]
                    b = data[i*step + j*nchannels + 2]
                    if r == 0 and g == 0 and b == 0:
                        black = black + 1
            if black >= threshold * dimtot:
                return 1
            else:
                return 0

    The loop (scanning each pixel of a given image) works fine when the input is an RGB image... but if the input is a single-channel image I get this error:

        for j in range( width ):
        TypeError: Nested sequences should have 2 or 3 dimensions

    The single-channel input image (called 'rg' in the next example) is taken from an RGB image called 'src', processed with cvSplit and then cvAbsDiff:

        cvSplit( src, r, g, b, 'NULL' )
        rg = cvCreateImage( cvGetSize(src), src.depth, 1 )    # R - G
        cvAbsDiff( r, g, rg )

    I've also noticed that the problem comes from the difference image produced by cvSplit... Can anyone help me? Thank you

    Read the article
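
    A guess at a workaround, sketched (it assumes the same old SWIG-style cv image object the question uses): make the loop independent of the channel count instead of hard-coding three reads per pixel, so one function serves the RGB and single-channel cases:

        def count_black_pixels(img, threshold):
            step, nch = img.widthStep, img.nChannels
            data = img.imageData
            black = 0
            for i in range(img.height):
                for j in range(img.width):
                    base = i * step + j * nch
                    # a pixel is black when every channel it has is 0
                    if all(data[base + c] == 0 for c in range(nch)):
                        black += 1
            return 1 if black >= threshold * img.width * img.height else 0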

  • Convert string to JSON using Python

    - by Luiz Fernando
    Hi, I'm a little bit confused with JSON in Python. To me, it seems like a dictionary, and for that reason I'm trying to do this:

        json = """{
            "glossary": {
                "title": "example glossary",
                "GlossDiv": {
                    "title": "S",
                    "GlossList": {
                        "GlossEntry": {
                            "ID": "SGML",
                            "SortAs": "SGML",
                            "GlossTerm": "Standard Generalized Markup Language",
                            "Acronym": "SGML",
                            "Abbrev": "ISO 8879:1986",
                            "GlossDef": {
                                "para": "A meta-markup language, used to create markup languages such as DocBook.",
                                "GlossSeeAlso": ["GML", "XML"]
                            },
                            "GlossSee": "markup"
                        }
                    }
                }
            }
        }"""

    But when I do print dict(json), it gives an error. How can I transform this string into a structure and then call json["title"] to obtain "example glossary"? Thanks.

    Read the article
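
    The standard-library answer, sketched: json.loads parses the string into nested dicts and lists (the json module ships with Python 2.6; older interpreters can use the simplejson package, which has the same API). Note that naming the variable json shadows the module, and the title lives one level down:

        import json

        data = json.loads(json_text)            # json_text is the string above
        print data["glossary"]["title"]         # -> "example glossary"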

  • How to add another value to a key in python

    - by Nanowatt
    First, I'm sorry if this is a dumb question, but I'm trying to teach myself Python and I can't find the answer to my question. I want to make a phonebook, and I need to add an email to an already existing name. That name already has a phone number attached. I have this first code:

        phonebook = {}
        phonebook['ana'] = '12345'
        phonebook['maria'] = '23456', '[email protected]'

        def add_contact():
            name = raw_input("Please enter a name:")
            number = raw_input("Please enter a number:")
            phonebook[name] = number

    Then I wanted to add an email to the name "ana", for example: ana: 12345, [email protected]. I created this code, but instead of adding a new value (the email), it just replaces the old one, removing the number:

        def add_email():
            name = raw_input("Please enter a name:")
            email = raw_input("Please enter an email:")
            phonebook[name] = email

    I tried .append() too, but it didn't work. Can you help me? I'm sorry if the code is bad, I'm just trying to learn and I'm a bit of a noob yet :)

    Read the article
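
    One tidy way, sketched: store a small dictionary per contact instead of a bare string, so new fields can be attached without clobbering old ones:

        phonebook = {}

        def add_contact():
            name = raw_input("Please enter a name: ")
            number = raw_input("Please enter a number: ")
            phonebook[name] = {'number': number}

        def add_email():
            name = raw_input("Please enter a name: ")
            email = raw_input("Please enter an email: ")
            # setdefault keeps whatever the contact already has
            phonebook.setdefault(name, {})['email'] = email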

  • weakref list in python

    - by Dan
    I'm in need of a list of weak references that deletes items when they die. Currently the only way I have of doing this is to keep flushing the list (removing dead references manually). I'm aware there's a WeakKeyDictionary and a WeakValueDictionary, but I'm really after a WeakList. Is there a way of doing this? Here's an example:

        import weakref

        class A(object):
            def __init__(self):
                pass

        class B(object):
            def __init__(self):
                self._references = []

            def addReference(self, obj):
                self._references.append(weakref.ref(obj))

            def flush(self):
                toRemove = []
                for ref in self._references:
                    if ref() is None:
                        toRemove.append(ref)
                for item in toRemove:
                    self._references.remove(item)

        b = B()
        a1 = A()
        b.addReference(a1)
        a2 = A()
        b.addReference(a2)
        del a1
        b.flush()
        del a2
        b.flush()

    Read the article
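
    A self-flushing variant, sketched: weakref.ref accepts a callback that fires when the referent is collected, so the list can drop dead references the moment they die:

        import weakref

        class WeakList(object):
            def __init__(self):
                self._refs = []

            def append(self, obj):
                # list.remove is called with the dead ref when obj is collected
                self._refs.append(weakref.ref(obj, self._refs.remove))

            def items(self):
                return [ref() for ref in self._refs]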

  • Finding the first N prime numbers in python

    - by Rahul Tripathi
    Hi all, I am new to the programming world. I was writing this code in Python to generate the first N prime numbers. The user should input the value for N, which is the total number of prime numbers to print out. I have written this code, but it doesn't produce the desired output. Instead it prints the primes among the first N integers. For example, the user enters N = 7. Desired output: 2, 3, 5, 7, 11, 13, 17. Actual output: 2, 3, 5, 7. Kindly advise.

        i=1
        x = int(input("Enter the number:"))
        for k in range (1, (x+1), 1):
            c=0
            for j in range (1, (i+1), 1):
                a = i%j
                if (a==0):
                    c = c+1
            if (c==2):
                print (i)
            else:
                k = k-1
            i=i+1

    Read the article
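
    The core problem, sketched: for k in range(...) ignores assignments to k inside the loop body, so k = k - 1 never extends the search and only the first N integers get tested. Loop until N primes have been collected instead:

        def first_n_primes(n):
            primes = []
            candidate = 2
            while len(primes) < n:              # stop on count, not on value
                if all(candidate % p for p in primes):
                    primes.append(candidate)
                candidate += 1
            return primes

        print first_n_primes(7)                 # [2, 3, 5, 7, 11, 13, 17]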

  • Scraping *.aspx content using Python

    - by tomato
    I'm having difficulties scraping a dynamically generated table in ASPX. I'm trying to scrape the gas prices from a site like this: GasPrices. I can extract all the information in the gas price table (address, time submitted, etc.) except for the actual gas price. Is there a way I could scrape the gas prices, i.e. somehow get a text representation of them? I'm not very familiar with ASP/ASPX, but what's being generated now is not showing up in the final HTML. I'm using Python to do the scraping, but that's irrelevant unless there's a specific library... Thanks in advance.

    Read the article
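
    A hedged guess at the mechanics (the field names below are the standard ASP.NET ones, but whether this site works that way is an assumption): content that appears only after a postback requires re-sending the page's hidden state fields:

        import urllib
        import urllib2
        from BeautifulSoup import BeautifulSoup

        page = urllib2.urlopen(url).read()
        soup = BeautifulSoup(page)
        form = {
            '__VIEWSTATE': soup.find('input', {'name': '__VIEWSTATE'})['value'],
            '__EVENTVALIDATION': soup.find('input', {'name': '__EVENTVALIDATION'})['value'],
        }
        result = urllib2.urlopen(url, urllib.urlencode(form)).read()

    If the prices still do not appear, check whether they are rendered client-side or served as images, in which case no HTML parser will see them as text.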

  • How to log a Python exception?

    - by Maxim Veksler
    Hi, coming from Java and being familiar with logback, I used to do:

        try {
            ...
        } catch (Exception e) {
            log("Error at X", e);
        }

    and I would like the same functionality: being able to log the exception and the stack trace into a file. How would you recommend implementing this? I'm currently using the boto logging infrastructure, boto.log.info(...). I've looked at some options and found out I can access the actual exception details using this code:

        import sys
        import traceback

        try:
            1/0
        except:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            traceback.print_exception(exc_type, exc_value, exc_traceback)

    I would like to somehow get the string that print_exception() prints so that I can log it. Thank you, Maxim.

    Read the article
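
    The closest Python analogue, sketched: logging.exception logs a message at ERROR level and appends the current traceback automatically, and traceback.format_exc() returns the printed traceback as a string for any other logger (boto's included):

        import logging
        import traceback

        logging.basicConfig(filename='app.log', level=logging.INFO)

        try:
            1 / 0
        except Exception:
            logging.exception("Error at X")         # message + full traceback
            tb_text = traceback.format_exc()        # the same text as a string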
