Search Results

Search found 14657 results on 587 pages for 'portable python'.

Page 155/587

  • Plotting 3D Polygons in python-matplotlib

    - by Developer
    I was unsuccessful browsing the web for a solution to the following simple question: how do I draw a 3D polygon (say a filled rectangle or triangle) from its vertex values? I have tried many ideas but all failed, see: from mpl_toolkits.mplot3d import Axes3D from matplotlib.collections import PolyCollection import matplotlib.pyplot as plt fig = plt.figure() ax = Axes3D(fig) x = [0,1,1,0] y = [0,0,1,1] z = [0,1,0,1] verts = [zip(x, y,z)] ax.add_collection3d(PolyCollection(verts),zs=z) plt.show() I appreciate in advance any idea/comment. Update based on the accepted answer: import mpl_toolkits.mplot3d as a3 import matplotlib.colors as colors import pylab as pl import scipy as sp ax = a3.Axes3D(pl.figure()) for i in range(10000): vtx = sp.rand(3,3) tri = a3.art3d.Poly3DCollection([vtx]) tri.set_color(colors.rgb2hex(sp.rand(3))) tri.set_edgecolor('k') ax.add_collection3d(tri) pl.show() Here is the result (plot image not included in this excerpt).
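
    A minimal sketch of the Poly3DCollection approach used in the update above, reduced to one filled triangle with explicit vertices (recent matplotlib assumed for add_subplot(projection='3d'); axis limits are set by hand because add_collection3d does not autoscale):

        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d.art3d import Poly3DCollection

        fig = plt.figure()
        ax = fig.add_subplot(projection='3d')

        # one polygon = one list of (x, y, z) vertex tuples
        verts = [[(0, 0, 0), (1, 0, 1), (1, 1, 0)]]
        ax.add_collection3d(Poly3DCollection(verts, facecolors='cyan', edgecolors='k'))

        ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_zlim(0, 1)
        plt.show()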

    Read the article

  • Access static class variable of parent class in Python

    - by fuenfundachtzig
    I have something like this: class A: __a = 0 def __init__(self): A.__a = A.__a + 1 def a(self): return A.__a class B(A): def __init__(self): # how can I access / modify A.__a here? A.__a = A.__a + 1 # does not work def a(self): return A.__a Can I access the __a static variable in B? It works if I write a instead of __a; is that the only way? (I guess the answer might be rather short: yes :)
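
    The leading double underscore triggers name mangling, so from inside B the attribute is only reachable under its mangled name; a short sketch of that (in practice a single leading underscore, _a, avoids the mangling entirely):

        class A(object):
            __a = 0                   # name-mangled: stored as A._A__a

            def __init__(self):
                A.__a += 1            # inside A this mangles to A._A__a

            def a(self):
                return A.__a

        class B(A):
            def __init__(self):
                A._A__a += 1          # from B the mangled name must be spelled out

            def a(self):
                return A._A__a

        print(B().a())                # 1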

    Read the article

  • Take the intersection of an arbitrary number of lists in python

    - by thepandaatemyface
    Suppose I have a list of lists of elements which are all the same (I'll use ints in this example): [range(100)[::4], range(100)[::3], range(100)[::2], range(100)[::1]] What would be a nice and/or efficient way to take the intersection of these lists (so you get every element that is in each of the lists)? For this example that would be: [0, 12, 24, 36, 48, 60, 72, 84, 96]
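
    One common sketch: turn the first list into a set and intersect it with the rest (set.intersection accepts the remaining lists directly):

        lists = [list(range(0, 100, 4)), list(range(0, 100, 3)),
                 list(range(0, 100, 2)), list(range(0, 100, 1))]

        common = set(lists[0]).intersection(*lists[1:])
        print(sorted(common))   # [0, 12, 24, 36, 48, 60, 72, 84, 96]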

    Read the article

  • Python File Search Line And Return Specific Number of Lines after Match

    - by Simos Anderson
    I have a text file that has lines representing some data sets. The file itself is fairly long but it contains certain sections of the following format: Series_Name INFO Number of teams : n1 | Team | # | wins | | TeamName1 | x | y | . . . | TeamNamen1 | numn | numn | Some Irrelevant lines Series_Name2 INFO Number of teams : n1 | Team | # | wins | | TeamName1 | num1 | num2 | . where each section has a header that begins with the Series_Name. Each Series_Name is different. The line with the header also includes the number of teams in that series, n1. Following the header line is a set of lines that represents a table of data. For each series there are n1+1 rows in the table, where each row shows an individual team name and associated stats. I have been trying to implement a function that will allow the user to search for a Team name and then print out the line in the table associated with that team. However, certain team names show up under multiple series. To resolve this, I am currently trying to write my code so that the user can search for the header line with series name first and then print out just the following n1+1 lines that represent the data associated with the series. Here's what I have come up with so far: import re print fname = raw_input("Enter filename: ") seriesname = raw_input("Enter series: ") def findcounter(fname, seriesname): logfile = open(fname, "r") pat = 'INFO Number of teams :' for line in logfile: if seriesname in line: if pat in line: s=line pattern = re.compile(r"""(?P<name>.*?) #starting name \s*INFO #whitespace and success \s*Number\s*of\s*teams #whitespace and strings \s*\:\s*(?P<n1>.*)""",re.VERBOSE) match = pattern.match(s) name = match.group("name") n1 = int(match.group("n1")) print name + " has " + str(n1) + " teams" lcount = 0 for line in logfile: if line.startswith(name): if pat in line: while lcount <= n1: s.append(line) lcount += 1 return result The first part of my code works; it matches the header line that the person searches for, parses the line, and then prints out how many teams are in that series. Since the header line basically tells me how many lines are in the table, I thought that I could use that information to construct a loop that would continue printing each line until a set counter reached n1. But I've tried running it, and I realize that the way I've set it up so far isn't correct. So here's my question: How do you return a number of lines after a matched line when given the number of desired lines that follow the match? I'm new to programming, and I apologize if this question seems silly. I have been working on this quite diligently with no luck and would appreciate any help.
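
    A sketch of one way to do this (the helper name is hypothetical): once the header regex has matched and n1 is known, the file iterator already sits on the line after the header, so itertools.islice can hand back exactly the next n1 + 1 table rows:

        import itertools
        import re

        HEADER = re.compile(r'(?P<name>.*?)\s*INFO\s*Number\s*of\s*teams\s*:\s*(?P<n1>\d+)')

        def print_series(fname, seriesname):
            with open(fname) as logfile:
                for line in logfile:
                    match = HEADER.match(line)
                    if match and seriesname in match.group('name'):
                        n1 = int(match.group('n1'))
                        print(match.group('name') + " has " + str(n1) + " teams")
                        # the iterator has consumed the header, so this yields
                        # exactly the following n1 + 1 lines of the table
                        for row in itertools.islice(logfile, n1 + 1):
                            print(row.rstrip())
                        return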

    Read the article

  • How to read a file with variable multi-row data in Python

    - by dr.bunsen
    I have a file that is about 100 MB that looks like this: #meta data 1 skadjflaskdjfasljdfalskdjfl sdkfjhasdlkgjhsdlkjghlaskdj asdhfk #meta data 2 jflaksdjflaksjdflkjasdlfjas ldaksjflkdsajlkdfj #meta data 3 alsdkjflasdjkfglalaskdjf Each metadata row is followed by a variable number of data rows containing only alphanumeric characters. What is the best way to read this data into a simple list like this: data = [[#meta data 1, skadjflaskdjfasljdfalskdjflsdkfjhasdlkgjhsdlkjghlaskdjasdhfk], [#meta data 2, jflaksdjflaksjdflkjasdlfjasldaksjflkdsajlkdfj], [#meta data 3, alsdkjflasdjkfglalaskdjf]] My initial idea was to use the read() method to read the whole file into memory and then use regular expressions to parse the data into the desired format. Is there a better, more Pythonic way? All metadata lines start with an octothorpe and all data lines are purely alphanumeric. Thanks!
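
    A sketch that streams the file line by line instead of read()-ing it whole; every line starting with '#' opens a new [metadata, data] pair and later lines are concatenated onto the current one:

        def read_blocks(path):
            data = []
            with open(path) as fh:
                for line in fh:
                    line = line.strip()
                    if not line:
                        continue
                    if line.startswith('#'):      # metadata line starts a new record
                        data.append([line, ''])
                    elif data:
                        data[-1][1] += line       # data line: append to the current record
            return data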

    Read the article

  • How to concat a string in Python

    - by alex
    query = "SELECT * FROM mytable WHERE time=%s", (mytime) Then, I want to add a limit %s to it. How can I do that without messing up the %s in mytime? Edit: I want to concat query2, which has "LIMIT %s, %s"
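
    A sketch of keeping the SQL text and its parameters separate, so the extra LIMIT placeholders never collide with the time value; mytime, offset and row_count are placeholder values, and the commented execute assumes an open DB-API cursor (e.g. MySQLdb):

        mytime = '2010-05-01 12:00:00'           # placeholder values for illustration
        offset, row_count = 0, 10

        query = "SELECT * FROM mytable WHERE time = %s"
        params = (mytime,)                       # note the comma: (mytime) alone is not a tuple

        query += " LIMIT %s, %s"                 # extend the SQL text...
        params += (offset, row_count)            # ...and the parameter tuple to match

        # cursor.execute(query, params)
        print(query, params)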

    Read the article

  • Python nested dict comprehension with sets

    - by Jasie
    Can someone explain how to do nested dict comprehensions? >> l = [set([1, 2, 3]), set([4, 5, 6])] >> j = dict((a, i) for a in s for i, s in enumerate(l)) >> NameError: name 's' is not defined I would have liked: >> j >> {1:0, 2:0, 3:0, 4: 1, 5: 1, 6: 1} I just asked a previous question about a simpler dict comprehension where the parentheses in the generator function were reduced. How come the s in the leftmost comprehension is not recognized?
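
    A sketch of the working ordering: the clause that binds s (the outer loop) has to come before the clause that uses it, exactly as in the equivalent nested for statements:

        l = [{1, 2, 3}, {4, 5, 6}]

        j = {a: i for i, s in enumerate(l) for a in s}   # outer loop first, then the inner one
        print(j)   # {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}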

    Read the article

  • Working with multiple input and output files in Python

    - by Morlock
    I need to open multiple files (2 input and 2 output files), do complex manipulations on the lines from the input files and then append the results to the end of the 2 output files. I am currently using the following approach: in_1 = open(input_1) in_2 = open(input_2) out_1 = open(output_1, "w") out_2 = open(output_2, "w") # Read one line from each 'in_' file # Do many operations on the DNA sequences included in the input files # Append one line to each 'out_' file in_1.close() in_2.close() out_1.close() out_2.close() The files are huge (each potentially approaching 1 GB), which is why I am reading through these input files one line at a time. I am guessing that this is not a very Pythonic way to do things. :) Would using the following form be good? with open("file1") as f1: with open("file2") as f2: # etc. If yes, could I do it while avoiding the highly indented code that would result? Thanks for the insights!
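
    A sketch of the flat form (one with statement, several context managers, Python 2.7+/3.x) that avoids the nesting; the file names and process() below are placeholders standing in for the real inputs and the real per-line work:

        input_1, input_2 = "reads_1.txt", "reads_2.txt"     # placeholder names
        output_1, output_2 = "out_1.txt", "out_2.txt"
        process = str.upper                                 # stand-in for the real manipulation

        with open(input_1) as in_1, open(input_2) as in_2, \
             open(output_1, "w") as out_1, open(output_2, "w") as out_2:
            for line_1, line_2 in zip(in_1, in_2):          # both inputs stream lazily
                out_1.write(process(line_1))
                out_2.write(process(line_2))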

    Read the article

  • Parsing a text file in Python

    - by Ockonal
    Hello, I have an HTML file. I have to replace all text between markers like this: [%anytext%]. As I understand it, parsing HTML is very easy to do with BeautifulSoup. But what regular expression should I use, and how do I remove the text data and write it back?
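
    A sketch with a non-greedy regular expression; BeautifulSoup is not needed just to swap the [%...%] markers, since re.sub can rewrite the raw HTML text directly (the html string below is an example input):

        import re

        html = '<p>Hello [%name%], welcome to [%site%].</p>'

        pattern = re.compile(r'\[%.*?%\]')        # non-greedy: stops at the first %]
        print(pattern.findall(html))              # ['[%name%]', '[%site%]']
        print(pattern.sub('REPLACED', html))      # <p>Hello REPLACED, welcome to REPLACED.</p>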

    Read the article

  • Google App Engine python - Self is not defined

    - by sdasdas
    I have a request that maps to this class, ChatMsg. It takes in 3 GET variables: username, roomname, and msg. But it fails on the last line here: class ChatMsg(webapp.RequestHandler): # this is line 239 def get(self): username = urllib.unquote(self.request.get('username')) roomname = urllib.unquote(self.request.get('roomname')) # this is line 242 When it tries to assign roomname, it tells me: <type 'exceptions.NameError'>: name 'self' is not defined Traceback (most recent call last): File "/base/data/home/apps/chatboxes/1.341998073649951735/chatroom.py", line 239, in <module> class ChatMsg(webapp.RequestHandler): File "/base/data/home/apps/chatboxes/1.341998073649951735/chatroom.py", line 242, in ChatMsg roomname = urllib.unquote(self.request.get('roomname')) What is going on here that makes self undefined?
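
    The traceback says line 242 runs "in ChatMsg" rather than inside get(), i.e. it executes while the class body is being built, which happens when that line is indented at class level. A sketch of the intended indentation, assuming the old App Engine webapp framework and Python 2 as in the question:

        import urllib
        from google.appengine.ext import webapp

        class ChatMsg(webapp.RequestHandler):
            def get(self):
                # self only exists inside the method body; at class-body level it is undefined
                username = urllib.unquote(self.request.get('username'))
                roomname = urllib.unquote(self.request.get('roomname'))
                msg = urllib.unquote(self.request.get('msg'))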

    Read the article

  • python iterators and thread-safety

    - by Igor
    I have a class which is being operated on by two functions. One function creates a list of widgets and writes it into the class: def updateWidgets(self): widgets = self.generateWidgetList() self.widgets = widgets The other function deals with the widgets in some way: def workOnWidgets(self): for widget in self.widgets: self.workOnWidget(widget) Each of these functions runs in its own thread. The question is: what happens if the updateWidgets() thread executes while the workOnWidgets() thread is running? I am assuming that the iterator created as part of the for...in loop will keep some kind of reference to the old self.widgets object? So I will finish iterating over the old list... but I'd love to know for sure.
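
    A single-threaded sketch of the mechanism in question: the for loop's iterator keeps its own reference to the list object it started on, so rebinding self.widgets afterwards does not affect an iteration already in progress (it does not, however, protect against other code mutating that same list in place):

        class Holder(object):
            def __init__(self):
                self.widgets = ['a', 'b', 'c']

        h = Holder()
        it = iter(h.widgets)      # the iterator references the original list object
        h.widgets = ['x', 'y']    # rebinding the attribute does not touch that iterator

        print(list(it))           # ['a', 'b', 'c'] -- the old list is still what gets iterated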

    Read the article

  • Algorithm to detect repeating/similar strings in a corpus of data -- say email subjects, in Python

    - by RizwanK
    I'm downloading a long list of my email subject lines, with the intent of finding email lists that I was a member of years ago, so I can purge them from my Gmail account (which is getting pretty slow). I'm specifically thinking of newsletters that often come from the same address and repeat the product/service/group's name in the subject. I'm aware that I could search/sort by the common occurrence of items from a particular email address (and I intend to), but I'd like to correlate that data with repeating subject lines. Now, many subject lines would fail a string match, but "Google Friends : Our latest news" and "Google Friends : What we're doing today" are more similar to each other than to a random subject line, as are "Virgin Airlines has a great sale today" and "Take a flight with Virgin Airlines". So -- how can I start to automagically extract trends/examples of strings that may be more similar? Approaches I've considered and discarded ('because there must be some better way'): extracting all the possible substrings and ordering them by how often they show up, then manually selecting relevant ones; stripping off the first word or two and then counting the occurrence of each substring; comparing the Levenshtein distance between entries; some sort of string similarity index... Most of these were rejected for massive inefficiency or for the likelihood that a vast amount of manual intervention would be required. I guess I need some sort of fuzzy string matching? In the end, I can think of kludgy ways of doing this, but I'm looking for something more generic that I can add to my set of tools rather than special-casing for this data set. After this, I'd be matching the occurrence of particular subject strings with 'From' addresses - I'm not sure whether there's a good way of building a data structure that represents how likely (or not) it is that two messages are part of the 'same email list', or of filtering all my email subjects/from addresses into pools of likely 'related' emails and not -- but that's a problem to solve after this one. Any guidance would be appreciated.
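
    One standard-library sketch of "some sort of string similarity index": difflib.SequenceMatcher gives a ratio from 0.0 to 1.0 per pair, which is enough to see subjects from the same newsletter score noticeably higher than unrelated ones (the subjects below are the examples from the question):

        import difflib

        subjects = [
            "Google Friends : Our latest news",
            "Google Friends : What we're doing today",
            "Virgin Airlines has a great sale today",
            "Take a flight with Virgin Airlines",
        ]

        for i, a in enumerate(subjects):
            for b in subjects[i + 1:]:
                ratio = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
                print("%.2f  %r  vs  %r" % (ratio, a, b))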

    Read the article

  • How do I calculate percentiles with python/numpy?

    - by Uri
    Is there a convenient way to calculate percentiles for a sequence or single-dimensional numpy array? I am looking for something similar to Excel's percentile function. I looked in NumPy's statistics reference, and couldn't find this. All I could find is the median (50th percentile), but not something more specific.
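
    Current NumPy has this built in as numpy.percentile (older installs can fall back to scipy.stats.scoreatpercentile); a short sketch:

        import numpy as np

        a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

        print(np.percentile(a, 50))          # 5.5 -- the median
        print(np.percentile(a, [25, 75]))    # [3.25 7.75]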

    Read the article

  • Python beautiful soup arguments

    - by scott
    Hi, I have this code that fetches some text from a page using BeautifulSoup: soup = BeautifulSoup(html) body = soup.find('div', {'id':'body'}) print body I would like to make this a reusable function that takes in some HTML text and the tags to match, like the following: def parse(html, atrs): soup = BeautifulSoup(html) body = soup.find(atrs) return body But if I make a call like parse(htmlpage, ('div', {'id':'body'})) or like parse(htmlpage, ['div', {'id':'body'}]), I only get the div element; the id='body' attribute seems to get ignored. Is there a way to fix this?
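
    soup.find() takes the tag name and the attribute dict as two separate arguments, so bundling them into one tuple only forwards the name. A sketch that unpacks the caller's arguments (bs4 shown here; BeautifulSoup 3's find() behaves the same way, and htmlpage is a stand-in page):

        from bs4 import BeautifulSoup

        htmlpage = '<html><div id="body">some text</div></html>'

        def parse(html, *find_args):
            soup = BeautifulSoup(html, 'html.parser')
            return soup.find(*find_args)             # forwards ('div', {'id': 'body'}) as two args

        print(parse(htmlpage, 'div', {'id': 'body'}))
        print(parse(htmlpage, *('div', {'id': 'body'})))   # or unpack an existing tuple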

    Read the article

  • How to control a subthread process in python?

    - by SpawnCxy
    Code first: '''this is main structure of my program''' from twisted.web import http from twisted.protocols import basic import threading threadstop = False #thread trigger,to be done class MyThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.start() def run(self): while True: if threadstop: return dosomething() '''def some function''' if __name__ == '__main__': from twisted.internet import reactor t = MyThread() reactor.listenTCP(serverport,myHttpFactory()) reactor.run() As my first multithreaded program, I am happy that it works as expected. But now I find I cannot control it. If I run it in the foreground, Control+C only stops the main process, and I can still find it in the process list; if I run it in the background, I have to use kill -9 pid to stop it. I wonder if there's a way to control the subthread by a trigger variable, or a better way to stop the whole process other than kill -9. Thanks.
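
    A sketch of the usual trigger: give the thread a threading.Event it checks on every pass, and mark it as a daemon so it cannot keep the process alive on its own; dosomething() below is a stand-in for the question's worker function:

        import threading
        import time

        def dosomething():
            time.sleep(0.1)                       # stand-in for the real work

        class MyThread(threading.Thread):
            def __init__(self):
                threading.Thread.__init__(self)
                self.stop_event = threading.Event()   # per-instance trigger, no global flag
                self.daemon = True                    # a daemon thread dies with the main process

            def run(self):
                while not self.stop_event.is_set():
                    dosomething()

            def stop(self):
                self.stop_event.set()

        t = MyThread()
        t.start()
        time.sleep(1)
        t.stop()                                  # e.g. from a KeyboardInterrupt handler
        t.join()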

    Read the article

  • Dynamic dispatch and inheritance in python

    - by Bill Zimmerman
    Hi, I'm trying to modify Guido's multimethod (dynamic dispatch) code: http://www.artima.com/weblogs/viewpost.jsp?thread=101605 to handle inheritance and possibly out-of-order arguments. e.g. (inheritance problem) class A(object): pass class B(A): pass @multimethod(A,A) def foo(arg1,arg2): print 'works' foo(A(),A()) #works foo(A(),B()) #fails Is there a better way than iteratively checking for the super() of each item until one is found? e.g. (argument ordering problem) I was thinking of this from a collision detection standpoint. e.g. foo(Car(),Truck()) and foo(Truck(), Car()) should both trigger foo(Car,Truck) # Note: @multimethod(Truck,Car) will throw an exception if @multimethod(Car,Truck) was registered first? I'm looking specifically for an 'elegant' solution. I know that I could just brute force my way through all the possibilities, but I'm trying to avoid that. I just wanted to get some input/ideas before sitting down and pounding out a solution. Thanks
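
    A rough sketch (not Guido's code) of dispatch that honours inheritance by testing isinstance instead of exact type equality; registered signatures are tried in order, so more specific ones should be registered first:

        _registry = {}

        def multimethod(*types):
            def register(func):
                _registry.setdefault(func.__name__, []).append((types, func))
                def dispatcher(*args):
                    for sig, candidate in _registry[func.__name__]:
                        if len(sig) == len(args) and all(
                                isinstance(a, t) for a, t in zip(args, sig)):
                            return candidate(*args)
                    raise TypeError("no match for %s" % (tuple(type(a).__name__ for a in args),))
                return dispatcher
            return register

        class A(object): pass
        class B(A): pass

        @multimethod(A, A)
        def foo(arg1, arg2):
            print('works')

        foo(A(), A())   # works
        foo(A(), B())   # works too, because isinstance(B(), A) is True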

    Read the article

  • Writing white space to CSV fields in Python?

    - by matt
    When I try to write a field that includes whitespace, it gets split into multiple fields on the space. What's causing this? It's driving me insane. Thanks. data = open("file.csv", "wb") w = csv.writer(data) w.writerow(['word1', 'word2']) w.writerow(['word 1', 'word2']) data.close() I get 2 fields (word1, word2) for the first row and 3 (word, 1, word2) for the second.
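
    The csv module itself does not split on spaces; a quick round-trip shows the field surviving intact, so if a spreadsheet import shows three columns it is most likely the import step treating the space as an extra delimiter. Python 3 spelling below (the question's "wb" mode is the Python 2 form):

        import csv

        with open("file.csv", "w", newline="") as data:
            w = csv.writer(data)
            w.writerow(['word1', 'word2'])
            w.writerow(['word 1', 'word2'])

        with open("file.csv", newline="") as data:
            for row in csv.reader(data):
                print(len(row), row)   # 2 ['word1', 'word2'] then 2 ['word 1', 'word2']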

    Read the article

  • Python threading and performance?

    - by kumar
    I have to do a heavy I/O-bound operation, i.e. parsing large files and converting them from one format to another. Initially I did this serially, i.e. parsing one file after another. Performance was very poor (it took 90+ seconds). So I decided to use threading to improve performance and created one thread for each file (4 threads): for file in file_list: t=threading.Thread(target = self.convertfile,args = file) t.start() ts.append(t) for t in ts: t.join() But to my astonishment, there is no performance improvement whatsoever; it still takes around 90+ seconds to complete the task. As this is an I/O-bound operation, I had expected threading to improve the performance. What am I doing wrong?
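
    Parsing and converting is mostly CPU work, and in CPython the GIL lets only one thread run Python bytecode at a time, so threads rarely speed this up (note also that args = file should be args=(file,)). A sketch of the multiprocessing alternative, with convertfile() and the file list as placeholders for the real work and inputs:

        import multiprocessing

        def convertfile(path):
            # stand-in for the real parse-and-convert work
            with open(path) as fh:
                return sum(len(line) for line in fh)

        if __name__ == '__main__':
            file_list = ['a.txt', 'b.txt', 'c.txt', 'd.txt']   # placeholder inputs
            pool = multiprocessing.Pool(processes=4)           # one worker process per file
            results = pool.map(convertfile, file_list)
            pool.close()
            pool.join()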

    Read the article

  • Crashing out of a while loop in Python

    - by Edward
    How do I solve this error? I want to pass the values from get_robotxya() and get_ballxy() and use them in a loop, but it seems to crash after a while. How do I fix this? I want to get the values without it crashing out of the while loop. import socket import os,sys import time from threading import Thread HOST = '59.191.193.59' PORT = 5555 COORDINATES = [] def connect(): globals()['client_socket'] = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client_socket.connect((HOST,PORT)) def update_coordinates(): connect() screen_width = 0 screen_height = 0 while True: try: client_socket.send("loc\n") data = client_socket.recv(8192) except: connect(); continue; globals()['COORDINATES'] = data.split() if(not(COORDINATES[-1] == "eom" and COORDINATES[0] == "start")): continue if (screen_width != int(COORDINATES[2])): screen_width = int(COORDINATES[2]) screen_height = int(COORDINATES[3]) return def get_ballxy(): update_coordinates() ballx = int(COORDINATES[8]) bally = int(COORDINATES[9]) return ballx,bally def get_robotxya(): update_coordinates() robotx = int(COORDINATES[12]) roboty = int(COORDINATES[13]) angle = int(COORDINATES[14]) return robotx,roboty,angle def print_ballxy(bx,by): print bx print by def print_robotxya(rx,ry,a): print rx print ry print a def activate(): bx,by = get_ballxy() rx,ry,a = get_robotxya() print_ballxy(bx,by) print_robotxya(rx,ry,a) Thread(target=update_coordinates).start() while True: activate() This is the error I get:
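
    The traceback itself is missing from the excerpt above, so this is not a diagnosis, but one defensive sketch: TCP recv() has no message boundaries, so a frame should only be indexed once it is known to be complete, and the int conversions guarded so a short or garbled frame is skipped instead of crashing the loop:

        def parse_frame(data):
            """Return ((ballx, bally), (robotx, roboty, angle)) or None for a bad frame."""
            fields = data.split()
            if len(fields) < 15 or fields[0] != "start" or fields[-1] != "eom":
                return None                    # incomplete frame: wait for the next one
            try:
                ball = (int(fields[8]), int(fields[9]))
                robot = (int(fields[12]), int(fields[13]), int(fields[14]))
            except ValueError:
                return None                    # garbled numbers: skip this frame too
            return ball, robot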

    Read the article

  • How to loop over nodes with xmlfeed using scrapy python

    - by Kour ipm
    Hi, I am working with Scrapy and trying XML feeds for the first time; below is my code: class TestxmlItemSpider(XMLFeedSpider): name = "TestxmlItem" allowed_domains = {"http://www.nasinteractive.com"} start_urls = [ "http://www.nasinteractive.com/jobexport/advance/hcantexasexport.xml" ] iterator = 'iternodes' itertag = 'job' def parse_node(self, response, node): title = node.select('title/text()').extract() job_code = node.select('job-code/text()').extract() detail_url = node.select('detail-url/text()').extract() category = node.select('job-category/text()').extract() print title,";;;;;;;;;;;;;;;;;;;;;" print job_code,";;;;;;;;;;;;;;;;;;;;;" item = TestxmlItem() item['title'] = node.select('title/text()').extract() ....... return item result: File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/item.py", line 56, in __setitem__ (self.__class__.__name__, key)) exceptions.KeyError: 'TestxmlItem does not support field: title' In total there are 200+ items, so I need to loop over them and assign the node text to the item, but all the results are displayed at once when I print. How can I loop over the nodes when scraping XML files with XMLFeedSpider?
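
    The KeyError points at the item definition rather than the looping: XMLFeedSpider already calls parse_node() once per <job> node, but every field assigned on the item has to be declared on the Item class. A sketch of the declaration the spider above appears to expect:

        from scrapy.item import Item, Field

        class TestxmlItem(Item):
            title = Field()
            job_code = Field()
            detail_url = Field()
            category = Field()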

    Read the article

  • Invalid syntax in this simple Python application.

    - by Sergio Boombastic
    Getting an invalid syntax when creating the template_value variable: class MainPage(webapp.RequestHandler): def get(self): blogPosts_query = BlogPost.all().order('-postDate') blogPosts = blogPosts_query.fetch(10) if users.get_current_user(): url = users.create_logout_url(self.request.uri) url_linktext = 'Logout' else: url = url = users.create_login_url(self.request.uri) url_linktext = 'Login' template_value = ( 'blogPosts': blogPosts, 'url': url, 'url_linktext': url_linktext, ) path = os.path.join(os.path.dirname(__file__), 'index.html') self.response.out.write(template.render(path, template_values)) The error fires specifically on the 'blogPosts': blogPosts line. What am I doing wrong? Thanks!
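
    Python dict literals use braces; parentheses around key: value pairs are the syntax error here (note also the name mismatch: the code builds template_value but renders template_values). A sketch with placeholder values:

        blogPosts, url, url_linktext = [], '/logout', 'Logout'   # placeholders for illustration

        template_values = {                  # braces, and the plural name render() expects
            'blogPosts': blogPosts,
            'url': url,
            'url_linktext': url_linktext,
        }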

    Read the article

  • How to override built-in getattr in Python?

    - by Stephen Gross
    I know how to override an object's __getattr__() to handle calls to undefined object methods. However, I would like to achieve the same behavior for the builtin getattr() function. For instance, consider code like this: call_some_undefined_function() Normally, that simply produces an error: Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'call_some_undefined_function' is not defined I want to override getattr() so that I can intercept the call to "call_some_undefined_function()" and figure out what to do. Is this possible? Thanks, --Steve
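
    A bare call_some_undefined_function() fails during name lookup with NameError before any getattr machinery is involved, so overriding the builtin cannot intercept it. One sketch of getting the intended effect explicitly (the helper name is hypothetical):

        def call_if_defined(name, *args, **kwargs):
            func = globals().get(name)
            if callable(func):
                return func(*args, **kwargs)
            print("no function named %r -- decide what to do here" % name)

        call_if_defined('call_some_undefined_function')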

    Read the article

  • Django/Python: Save an HTML table to Excel

    - by kchau
    I have an HTML table that I'd like to be able to export to an Excel file. I already have an option to export the table into an IQY file, but I'd prefer something that didn't allow the user to refresh the data via Excel. I just want a feature that takes a snapshot of the table at the time the user clicks the link/button. I'd prefer it if the feature was a link/button on the HTML page that allows the user to save the query results displayed in the table. Is there a way to do this at all? Or, something I can modify with the IQY? I can try to provide more details if needed. Thanks in advance.
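
    One common snapshot approach, sketched only (the model and field names below are hypothetical, not from the question): a Django view that re-runs the same query and streams the rows as CSV, which Excel opens directly and cannot refresh:

        import csv
        from django.http import HttpResponse

        def export_table(request):
            response = HttpResponse(content_type='text/csv')
            response['Content-Disposition'] = 'attachment; filename="table_snapshot.csv"'
            writer = csv.writer(response)                    # HttpResponse is file-like
            writer.writerow(['Team', 'Games', 'Wins'])       # header row
            for row in Team.objects.values_list('name', 'games', 'wins'):
                writer.writerow(row)
            return response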

    Read the article
