Search Results

Search found 15000 results on 600 pages for 'python csv'.


  • Which class should store the lookup table?

    - by max
    The world contains agents at different locations, with only a single agent at any location. Each agent knows where he's at, but I also need to quickly check if there's an agent at a given location. Hence, I also maintain a map from locations to agents. I have a problem deciding where this map belongs: class World, class Agent (as a class attribute) or elsewhere. In the following I put the lookup table, agent_locations, in class World. But now agents have to call world.update_agent_location every time they move. This is very annoying; what if I decide later to track other things about the agents, apart from their locations - would I need to add calls back to the world object all across the Agent code?

        class World:
            def __init__(self, n_agents):
                # ...
                self.agents = []
                self.agent_locations = {}
                for id in range(n_agents):
                    x, y = self.find_location()
                    agent = Agent(self, x, y)
                    self.agents.append(agent)
                    self.agent_locations[x, y] = agent

            def update_agent_location(self, agent, x, y):
                del self.agent_locations[agent.x, agent.y]
                self.agent_locations[x, y] = agent

            def update(self):
                # next step in the simulation
                for agent in self.agents:
                    agent.update()  # next step for this agent
            # ...

        class Agent:
            def __init__(self, world, x, y):
                self.world = world
                self.x, self.y = x, y

            def move(self, x1, y1):
                self.world.update_agent_location(self, x1, y1)
                self.x, self.y = x1, y1

            def update(self):
                # find a good location that is not occupied and move there
                for x, y in self.valid_locations():
                    if not self.location_is_good(x, y):
                        continue
                    if (x, y) in self.world.agent_locations:  # location occupied
                        continue
                    self.move(x, y)

    I can instead put agent_locations in class Agent as a class attribute. But that only works when I have a single World object. If I later decide to instantiate multiple World objects, the lookup tables would need to be world-specific. I am sure there's a better solution...

    EDIT: I added a few lines to the code to show how agent_locations is used. Note that it's only used from inside Agent objects, but I don't know if that would remain the case forever.
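    One way to reduce the coupling, sketched below (an assumption, not part of the original question): let World stay the single owner of agent_locations and make a World method the only code path that moves an agent, so Agent never has to remember to call back for bookkeeping. The helper name move_agent is illustrative.

        # A minimal sketch: World owns the location index and keeps it consistent;
        # Agent.move just delegates, so extra per-agent bookkeeping stays in World.
        class World:
            def __init__(self):
                self.agent_locations = {}          # (x, y) -> agent

            def is_occupied(self, x, y):
                return (x, y) in self.agent_locations

            def move_agent(self, agent, x, y):     # hypothetical helper name
                self.agent_locations.pop((agent.x, agent.y), None)
                self.agent_locations[x, y] = agent
                agent.x, agent.y = x, y

        class Agent:
            def __init__(self, world, x, y):
                self.world, self.x, self.y = world, x, y

            def move(self, x, y):
                self.world.move_agent(self, x, y)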

    Read the article

  • A RAM error with a big array

    - by flint
    I have a big file, more than 400 MB. In that file there are 13496*13496 numbers, meaning 13496 rows and 13496 columns. I want to read them into an array. This is my code:

        _L1 = [[0 for col in range(13496)] for row in range(13496)]
        _L1file = open('distanceCMD.function.txt')
        i = 0
        while (i < 13496):
            print "i=" + str(i)
            _strlf = _L1file.readline()
            _strlf = _strlf.split('\t')
            _strlf = _strlf[:-1]
            _L1[i] = _strlf
            i += 1
        _L1file.close()

    And this is my error message:

        MemoryError:
          File "D:\research\space-function\ART3.py", line 30, in <module>
            _strlf = _strlf.split('\t')
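    A hedged alternative (a sketch, not from the question): parse each line straight into a preallocated NumPy float32 array instead of keeping 13496 lists of Python string objects, which is what exhausts the RAM. The tab-separated layout with a trailing tab is assumed from the [:-1] in the code above.

        import numpy as np

        n = 13496
        data = np.empty((n, n), dtype=np.float32)   # ~730 MB instead of several GB
        with open('distanceCMD.function.txt') as f:
            for i, line in enumerate(f):
                row = line.rstrip('\t\n').split('\t')
                data[i] = np.asarray(row, dtype=np.float32)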

    Read the article

  • Filter across three tables using Django

    - by Vanessa MacDougal
    I have 3 Django models, where the first has a foreign key to the second, and the second has a foreign key to the third. Like this:

        class Book(models.Model):
            year_published = models.IntegerField()
            author = models.ForeignKey('Author')

        class Author(models.Model):
            author_id = models.AutoField(primary_key=True)
            name = models.CharField(max_length=50)
            agent = models.ForeignKey('LitAgent')

        class LitAgent(models.Model):
            agent_id = models.AutoField(primary_key=True)
            name = models.CharField(max_length=50)

    I want to ask for all the literary agents whose authors had books published in 2006, for example. How can I do this in Django? I have looked at the documentation about filters and QuerySets, and don't see an obvious way. Thanks.
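    A hedged sketch of the query (assuming the default reverse lookup names Django generates here, i.e. author and book, since no related_name is set):

        # Literary agents with at least one author who published a book in 2006.
        # distinct() avoids repeating an agent whose author had several 2006 books.
        agents_2006 = LitAgent.objects.filter(author__book__year_published=2006).distinct()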

    Read the article

  • How do I make BeautifulSoup parse the contents of textarea tags as HTML?

    - by brofield
    Before 3.0.5, BeautifulSoup used to treat the contents of <textarea> as HTML. It now treats it as text. The document I am parsing has HTML inside the textarea tags, and I am trying to process it. I've tried:

        for textarea in soup.findAll('textarea'):
            contents = BeautifulSoup.BeautifulSoup(textarea.contents)
            textarea.replaceWith(contents.html(text=True))

    But I'm getting errors. I can't find this in the documentation, and the alternative parsers aren't helping. Anyone know how I can parse the textareas as HTML?
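    One hedged workaround (a sketch assuming BeautifulSoup 3.x, not taken from the question): re-parse each textarea's rendered contents as its own soup and splice the parsed tree back in; html below is a placeholder for the document string.

        from BeautifulSoup import BeautifulSoup

        soup = BeautifulSoup(html)
        for textarea in soup.findAll('textarea'):
            inner = BeautifulSoup(textarea.renderContents())   # parse the body as HTML
            textarea.replaceWith(inner)                        # swap the text node for the parsed tree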

    Read the article

  • How can I load an MP3 or similar music file for display and analysis in wxWidgets?

    - by Jon Cage
    I'm developing a GUI in wxPython which allows a user to generate sequences of colours for some toys I'm building. Part of the program needs to load an MP3 (and potentially other formats further down the line) and display it to the user. That should be sufficient to get started but later I'd like to add features like identifying beats and some crude frequency analysis. Is there any simple way of loading / understanding an MP3's contents to display a plot of its amplitudes to the screen using wxWidgets? I later intend to port to C++/wxWidgets for speed and to avoid having to distribute wxPython.
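    A hedged starting point for the amplitude plot (a sketch, not from the question): decode the MP3 to raw PCM with an external decoder such as ffmpeg and load the samples into NumPy; the filename, sample rate and tool choice are assumptions.

        import subprocess
        import numpy as np

        # Decode song.mp3 to mono 16-bit PCM on stdout, then view it as amplitudes.
        cmd = ['ffmpeg', '-i', 'song.mp3', '-f', 's16le', '-ac', '1', '-ar', '44100', 'pipe:1']
        raw = subprocess.check_output(cmd)
        samples = np.frombuffer(raw, dtype=np.int16)    # one int16 amplitude per sample
        print(len(samples) / 44100.0, "seconds of audio")

    The samples array can then be decimated and drawn onto a wx panel (or handed to a plotting widget) for the waveform display.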

    Read the article

  • Custom Django tag & jQuery

    - by pocoa
    I'm new to Django. Today I created some Django custom tags, which is not that hard. But now I wonder what the best way is to include some jQuery or JavaScript code packed into my custom tag definition. What is the usual way to pull a custom library into my code? For example:

        {% faceboxify item %}

    Assume that it creates specific HTML output for the Facebox plugin. I just want to learn an elegant way to import this plugin into my code; I want the above definition to be enough for all the functionality. Is there any way to do it? I couldn't find any example. Maybe I'm missing something. Thank you.
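    A hedged sketch of one common pattern (an inclusion tag whose template carries both the markup and the jQuery wiring; the module and template names here are illustrative, not from the question):

        # templatetags/facebox_tags.py  (hypothetical module name)
        from django import template

        register = template.Library()

        @register.inclusion_tag('facebox_item.html')
        def faceboxify(item):
            # facebox_item.html can hold the anchor/div markup plus the <script>
            # that activates the Facebox plugin for this item.
            return {'item': item}

    The Facebox <script>/<link> includes themselves usually live once in the base template, so {% faceboxify item %} stays the only thing page authors have to write.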

    Read the article

  • Numpy array, how to select indices satisfying multiple conditions?

    - by Bob
    Suppose I have a numpy array x = [5, 2, 3, 1, 4, 5], y = ['f', 'o', 'o', 'b', 'a', 'r']. I want to select the elements in y corresponding to elements in x that are greater than 1 and less than 5. I tried

        x = array([5, 2, 3, 1, 4, 5])
        y = array(['f', 'o', 'o', 'b', 'a', 'r'])
        output = y[x > 1 & x < 5]  # desired output is ['o', 'o', 'b', 'a']

    but this doesn't work. How would I do this?
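    A hedged sketch of the usual fix (not part of the question): & binds more tightly than the comparisons, so each comparison needs its own parentheses to build the boolean masks first.

        from numpy import array

        x = array([5, 2, 3, 1, 4, 5])
        y = array(['f', 'o', 'o', 'b', 'a', 'r'])
        output = y[(x > 1) & (x < 5)]   # element-wise AND of two boolean masks
        print(output)                   # ['o' 'o' 'b' 'a']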

    Read the article

  • How to get these values with BeautifulSoup?

    - by Damiano
    Hello everybody, I have this html table:

        <table>
          <tr>
            <td class="datax">a</td>
            <td class="datax">b</td>
            <td class="datax">c</td>
            <td class="datax">d</td>
          </tr>
          <tr>
            <td class="datax">e</td>
            <td class="datax">f</td>
            <td class="datax">g</td>
            <td class="datax">h</td>
          </tr>
        </table>

    How do I get the second and the fourth value of each <tr>? If I do:

        bs.findAll('td', {'class':'datax'})

    I get:

        <td class="datax">a</td>
        <td class="datax">b</td>
        <td class="datax">c</td>
        <td class="datax">d</td>
        <td class="datax">e</td>
        <td class="datax">f</td>
        <td class="datax">g</td>
        <td class="datax">h</td>

    It's correct! But I would like to have this result:

        <td class="datax">b</td>
        <td class="datax">d</td>
        <td class="datax">f</td>
        <td class="datax">h</td>

    So the values I want are b, d, f and h (the second and the fourth <td> of each <tr>). Is it possible with the BeautifulSoup module? Thank you very much!
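    A hedged sketch of one way to do it (iterating row by row so the even-indexed cells of each <tr> are picked; BeautifulSoup 3-style method names are assumed, matching the question):

        wanted = []
        for tr in bs.findAll('tr'):
            cells = tr.findAll('td', {'class': 'datax'})
            wanted.extend(cells[1::2])          # 2nd, 4th, ... cell of this row
        values = [td.string for td in wanted]   # ['b', 'd', 'f', 'h']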

    Read the article

  • Parse raw HTTP Headers

    - by Cev
    I have a string of raw HTTP and I would like to represent the fields in an object. Is there any way to parse the individual headers from an HTTP string?

        'GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\nAccept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nUser-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.45 Safari/534.13\r\nAccept-Encoding: gzip,deflate,sdch\r\nAvail-Dictionary: GeNLY2f-\r\nAccept-Language: en-US,en;q=0.8\r\n [...]'
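    A hedged sketch (not from the question): split off the request line, then let the standard library's email parser read the header block, since HTTP headers use the same "Name: value" syntax; raw_request below stands for the string shown above.

        from email.parser import Parser

        def parse_http(raw):
            # Everything before the first blank line is the request line plus headers.
            head = raw.split('\r\n\r\n', 1)[0]
            request_line, _, header_block = head.partition('\r\n')
            return request_line, dict(Parser().parsestr(header_block).items())

        request_line, headers = parse_http(raw_request)
        print(headers.get('Host'))   # 'www.google.com'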

    Read the article

  • Failure when creating a scikits.timeseries object

    - by user311906
    Hi all, I am trying to create a scikits.timeseries object starting from two datetime objects. If I understood correctly it should be possible to create a scikits.timeseries from datetime objects. I try the following code, but it fails with "Insufficient parameters":

        import datetime
        import scikits.timeseries as ts

        tm1 = datetime.datetime(2010, 1, 1, 10, 10, 2, 123456)
        tm2 = datetime.datetime(2010, 1, 1, 10, 10, 2, 345678)
        d = [tm1, tm2]
        tseries = ts.time_series(dates=d)
        tseries = ts.time_series(d)

    The two datetimes differ by only a few microseconds. In this case, what should the value of the freq parameter be? Is what I am trying even allowed? In theory, since a timeseries can be based on datetime objects, it should be possible to handle resolution down to the microsecond, is this correct? This is not really clear to me. Regards, Eo

    Read the article

  • How to check whether a path represented by a QString with German umlauts exists?

    - by MB
    Hey, I get a QString which represents a directory from a QLineEdit. Now I want to check whether a certain file exists in this directory. But when I try this with os.path.exists and os.path.join, I get in trouble when German umlauts occur in the directory path:

        # the directory coming from the user input in the QLineEdit;
        # I take this QString to the local 8-bit encoding and then make
        # a string from it
        target_dir = str(lineEdit.text().toLocal8Bit())

        # the file name that should be checked for
        file_name = 'some-name.txt'

        # this fails with a UnicodeDecodeError when an umlaut occurs in target_dir
        os.path.exists(os.path.join(target_dir, file_name))

    How would you check if the file exists when you might encounter German umlauts?
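    A hedged sketch of the usual fix (assuming PyQt4 on Python 2, as the QString suggests): keep the path as a unicode object instead of forcing it through a byte encoding, so os.path never has to decode it.

        import os

        # unicode(QString) yields a Python unicode string; os.path handles
        # unicode paths (umlauts included) directly.
        target_dir = unicode(lineEdit.text())
        file_name = u'some-name.txt'
        exists = os.path.exists(os.path.join(target_dir, file_name))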

    Read the article

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
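    A hedged illustration of the cleanup described above (a sketch assuming a pre-1.6 Django where commit_unless_managed exists; Django's automatic connection and transaction cleanup is tied to the request cycle, which these hand-spawned threads never enter, so each task has to do it itself):

        from django.db import connection, transaction

        def run_recurring_task():
            # Each thread gets its own DB connection; nothing closes it for us.
            try:
                do_the_work()                          # hypothetical task body
                transaction.commit_unless_managed()
            except Exception:
                transaction.rollback_unless_managed()
                raise
            finally:
                connection.close()                     # avoids "Idle In Transaction" leftovers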

    Read the article

  • Problem opening Solr *.jsp pages with urllib2.urlopen.

    - by nestling
    I'm trying to open a page at http://localhost:8983/solr/admin/stats.jsp but urllib2.urlopen returns a blank string. It works fine for solr/ and solr/admin, but for all the pages above /solr/admin/ I get nothing but a blank string.

        In [76]: t = urllib2.urlopen('http://localhost:8983/solr/admin/stats.jsp')

        In [77]: s = t.read()

        In [78]: s
        Out[78]: ''

        In [79]: type(s)
        Out[79]: <type 'str'>

        In [80]: urllib2.urlopen('http://localhost:8983/solr/admin/registry.jsp').read()
        Out[80]: ''

        In [84]: urllib2.urlopen('http://localhost:8983/solr/admin/schema.jsp').read()
        Out[84]: ''

    I know this isn't a problem with urllib2, but beyond that I am at a loss. I wish Solr (or Jetty) had an easy-to-get-to log file, so that perhaps it could tell me its side of the story.

    Read the article

  • Parse a large XML file with a script, or use the BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniprotKB in SQL. The UniprotKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1) Use a SAX parser (XML) - I chose Ruby, and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?

    2) BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/Uniprot txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]
        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB): Error Number 1205, "Lock wait timeout exceeded; try restarting transaction." I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?
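    A hedged sketch of one way to avoid the lock-wait timeout without switching databases (commit in batches so no single transaction holds InnoDB locks for the whole 2.1 GB load; the batch size and loop structure are assumptions, not from the question):

        from itertools import islice

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase

        server = BioSeqDatabase.open_database(driver="MySQLdb", user="root",
                                              passwd="", host="localhost", db="bioseqdb")
        db = server["uniprot"]
        records = SeqIO.parse(open("/path/to/uniprot_sprot.dat"), "swiss")

        while True:
            batch = list(islice(records, 1000))   # load and commit 1000 records at a time
            if not batch:
                break
            db.load(batch)
            server.commit()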

    Read the article

  • What is the difference between type.__getattribute__ and object.__getattribute__?

    - by Neil G
    Given:

        In [37]: class A:
           ....:     f = 1
           ....:

        In [38]: class B(A):
           ....:     pass
           ....:

        In [39]: getattr(B, 'f')
        Out[39]: 1

    Okay, that either calls super or crawls the mro?

        In [40]: getattr(A, 'f')
        Out[40]: 1

    This is expected.

        In [41]: object.__getattribute__(A, 'f')
        Out[41]: 1

        In [42]: object.__getattribute__(B, 'f')
        ---------------------------------------------------------------------------
        AttributeError                            Traceback (most recent call last)
        <ipython-input-42-de76df798d1d> in <module>()
        ----> 1 object.__getattribute__(B, 'f')

        AttributeError: 'type' object has no attribute 'f'

    What is getattribute not doing that getattr does?

        In [43]: type.__getattribute__(B, 'f')
        Out[43]: 1

    What?! type.__getattribute__ calls super but object's version doesn't?

        In [44]: type.__getattribute__(A, 'f')
        Out[44]: 1
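    A hedged note on what the session above shows (my reading, not from the question): object.__getattribute__(B, 'f') treats B like any ordinary instance, so it searches B's own __dict__ and type(B).__mro__ (which is just (type, object)) and misses attributes B inherits through its own MRO. type.__getattribute__ additionally walks B.__mro__, which is why it finds A.f. A small sketch:

        class A:
            f = 1

        class B(A):
            pass

        print(B.__mro__)         # (B, A, object)  -- walked by type.__getattribute__
        print(type(B).__mro__)   # (type, object)  -- all object.__getattribute__ looks at
        print(type.__getattribute__(B, 'f'))   # 1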

    Read the article

  • Possible to change function name in definition?

    - by Bird Jaguar IV
    I tried several ways to change the function name in the definition, but they failed.

        >>> def f(): pass
        >>> f.__name__
        'f'

        >>> def f(): f.__name__ = 'new name'
        >>> f.__name__
        'f'

        >>> def f(): self.__name__ = 'new name'
        >>> f.__name__
        'f'

    But I can change the name attribute after defining it.

        >>> def f(): pass
        >>> f.__name__ = 'new name'
        >>> f.__name__
        'new name'

    Any way to change/set it in the definition (other than using a decorator)?
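    A hedged note on why the first attempts show 'f' (my addition, not from the question): the assignment sits in the function body, so it only executes when the function is called; the variant using self would also raise a NameError at call time because self is not defined there. A small sketch:

        def f():
            f.__name__ = 'new name'    # runs only when f() is actually called

        print(f.__name__)   # 'f'        -- the body has not executed yet
        f()
        print(f.__name__)   # 'new name' -- now the assignment has happened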

    Read the article

  • poplib and email module will not reloop through a message if it has already read it

    - by user1440925
    I'm currently trying to write a script that gets messages from my gmail account but I'm noticing a problem. If poplib loops through a message in my inbox it will never loop through it again. Here is my code:

        import poplib, string, email

        user = "[email protected]"
        password = "p0ckystyx"
        message = ""

        mail = poplib.POP3_SSL('pop.gmail.com')
        mail.user(user)
        mail.pass_(password)
        iMessageCount = len(mail.list()[1])
        message = ""
        msg = mail.retr(iMessageCount)
        str = string.join(msg[1], "\n")
        frmMail = email.message_from_string(str)
        for part in frmMail.walk():
            if part.get_content_type() == "text/plain":
                print part.get_payload()
        mail.quit()

    Every time I run this script it goes to the next newest email and just skips over the email that was shown last time it was run.
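    A hedged note (not from the question): this is usually Gmail's POP behaviour rather than poplib's; by default Gmail hands each message to a POP client only once. One commonly documented workaround is Gmail's "recent mode", sketched here on top of the script's own variables:

        import poplib

        user = "[email protected]"          # kept exactly as redacted in the question
        password = "p0ckystyx"

        mail = poplib.POP3_SSL('pop.gmail.com')
        mail.user('recent:' + user)        # "recent mode": serve the last 30 days again
        mail.pass_(password)
        num_messages = len(mail.list()[1])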

    Read the article

  • How can this code be made more Pythonic?

    - by usethedeathstar
    This next part of code does exactly what I want it to do. dem_rows and dem_cols contain float values for a number of things I can identify in an image, but I need to get the nearest pixel for each of them, and then to make sure I only get the unique points, with no duplicates. The problem is that this code is ugly and, as far as I can tell, about as unpythonic as it gets. If there were a pure-numpy solution (without for-loops) that would be even better.

        # next part is to make sure that we get the rounding done correctly, and then
        # to get the integer part out of it without the annoying floating-point error,
        # and without duplicates
        fielddic = {}
        for i in range(len(dem_rows)):
            # here comes the ugly part: abusing the fact that I overwrite dictionary
            # keys if I get duplicates
            fielddic[int(round(dem_rows[i]) + 0.1), int(round(dem_cols[i]) + 0.1)] = None

        # also very ugly: to make two arrays of integers out of the first and second
        # part of the keys
        field_rows = numpy.zeros((len(fielddic.keys())), int)
        field_cols = numpy.zeros((len(fielddic.keys())), int)
        for i, (r, c) in enumerate(fielddic.keys()):
            field_rows[i] = r
            field_cols[i] = c
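    A hedged pure-NumPy sketch (assuming dem_rows/dem_cols are 1-D float arrays and a NumPy new enough for np.unique(..., axis=0); note np.rint rounds halves to even, unlike the round()+0.1 trick above):

        import numpy as np

        rows = np.rint(dem_rows).astype(int)            # nearest pixel row per point
        cols = np.rint(dem_cols).astype(int)            # nearest pixel column per point
        pairs = np.unique(np.column_stack((rows, cols)), axis=0)   # drop duplicate pixels
        field_rows, field_cols = pairs[:, 0], pairs[:, 1]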

    Read the article

  • Add string to another string

    - by daemonfire300
    Hi there, I encountered a problem: I want to handle adding strings to other strings very efficiently, so I looked up many methods and techniques and I found the "fastest" method. But I cannot quite understand how it actually works:

        def method6():
            return ''.join([`num` for num in xrange(loop_count)])

    From source (Method 6). Especially the [`num` for num in xrange(loop_count)] part confused me totally.
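    A hedged explanation (my addition, not from the source): the backticks are Python 2 shorthand for repr(), so the list comprehension builds the string form of every number and ''.join concatenates them all in a single pass instead of repeated += on a string. A modern spelling of the same idea:

        loop_count = 5

        # `num` in Python 2 means repr(num); str() is the usual spelling today.
        result = ''.join(str(num) for num in range(loop_count))
        print(result)   # '01234'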

    Read the article
