Search Results

Search found 70655 results on 2827 pages for 'python time'.

Page 136/2827 | < Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >

  • Writing Strings to files in python

    - by Leif Andersen
    I'm getting the following error when trying to write a string to a file in Python:

        Traceback (most recent call last):
          File "export_off.py", line 264, in execute
            save_off(self.properties.path, context)
          File "export_off.py", line 244, in save_off
            primary.write(file)
          File "export_off.py", line 181, in write
            variable.write(file)
          File "export_off.py", line 118, in write
            file.write(self.value)
        TypeError: must be bytes or buffer, not str

    I basically have a string class, which contains a string:

        class _off_str(object):
            __slots__ = 'value'

            def __init__(self, val=""):
                self.value = val

            def get_size(self):
                return SZ_SHORT

            def write(self, file):
                file.write(self.value)

            def __str__(self):
                return str(self.value)

    Furthermore, I'm calling that class like this:

        def write(self, file):
            for variable in self.variables:
                variable.write(file)

    I have no idea what is going on. I've seen other Python programs writing strings to files, so why can't this one? Thank you very much for your help.
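
    The error usually means the underlying file was opened in binary mode (or the code is running where the stream only accepts bytes), so the string has to be encoded before writing, or the file opened in text mode instead. A minimal sketch of both options, assuming UTF-8 is acceptable (the output file name is illustrative):

        def write(self, file):
            # If the file was opened with 'wb', encode the text first
            file.write(self.value.encode('utf-8'))

        # ...or open the destination in text mode so plain strings are accepted
        out = open('model.off', 'w')
        out.write('some text')
        out.close()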

    Read the article

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, etc.? They are not inherently ordered, but the temporary table just has one column with an ID, so I can order it if necessary. I was thinking of creating an extra column with the temporary table to number all the rows, something like:

        CREATE TEMP TABLE tmptmp AS
        SELECT ##autonumber somehow##, id
        FROM .... --complicated query

    then I can do:

        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

    etc... How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
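
    One approach, sketched below under the assumption that psycopg2 is the driver in use: a named (server-side) cursor streams the rows, and fetchmany() hands them back in batches of 1000 without any extra numbering column. On PostgreSQL 8.4+ a row_number() OVER () column would also serve as the autonumber.

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")     # connection details are illustrative
        cur = conn.cursor(name='batch_cursor')     # named cursor = server-side cursor
        cur.execute("SELECT id FROM tmptmp")       # the temp table from this same session

        while True:
            rows = cur.fetchmany(1000)             # pull 1000 rows at a time
            if not rows:
                break
            process(rows)                          # hypothetical per-batch handler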

    Read the article

  • % confuses python raw sql query

    - by Jonathan
    Following this SO question, I'm trying to "truncate" all tables related to a certain django application using the following raw sql commands in python:

        cursor.execute("set foreign_key_checks = 0")
        cursor.execute("select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
        for sql in [sql[0] for sql in cursor.fetchall()]:
            cursor.execute(sql)
        cursor.execute("set foreign_key_checks = 1")

    Alas I receive the following error:

        C:\dev\my_project>my_script.py
        Traceback (most recent call last):
          File "C:\dev\my_project\my_script.py", line 295, in <module>
            cursor.execute(r"select concat('truncate table ',table_schema,'.',table_name,';') as sql_stmt from information_schema.tables where table_schema = 'my_db' and table_type = 'base table' AND table_name LIKE 'some_prefix%'")
          File "C:\Python26\lib\site-packages\django\db\backends\util.py", line 18, in execute
            sql = self.db.ops.last_executed_query(self.cursor, sql, params)
          File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 216, in last_executed_query
            return smart_unicode(sql) % u_params
        TypeError: not enough arguments for format string

    Is the % in the LIKE making trouble? How can I work around it?
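
    Yes - Django's cursor treats the string as a format template, so a bare % in the LIKE clause breaks parameter substitution. Two common workarounds, sketched here: double the percent sign, or pass the pattern as a query parameter and let the driver do the quoting.

        # Option 1: escape the wildcard by doubling it
        cursor.execute("... AND table_name LIKE 'some_prefix%%'")

        # Option 2: pass the pattern as a parameter instead of embedding it
        cursor.execute("... AND table_name LIKE %s", ['some_prefix%'])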

    Read the article

  • File/module structure in Python

    - by keithjgrant
    So I'm just getting started with Python, and currently working my way through diveintopython.org. The code examples are nice, but the vast majority of them are little four-line snippets, and I want to see a little more of the big picture. As I understand it--and correct me if I'm wrong--each '.py' file becomes a "module", and a group of modules in a directory becomes a "package" (at least, it does if I create a __init__.py file in that directory). What is it if I don't have a __init__.py file? So what does each "module" file look like? Do I generally define only one class in the file? Does anything else go in that file besides the class definition and maybe a handful of import commands?
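
    As a rough sketch of how these pieces usually fit together (all names made up): without the __init__.py the directory is just a directory, not an importable package, and a module commonly holds several related classes and functions rather than exactly one, plus its imports and sometimes an if __name__ == '__main__' block.

        myproject/
            __init__.py        # marks the directory as a package
            config.py          # module: constants and a few helper functions
            models.py          # module: several related classes
            utils.py           # module: standalone helper functions
            scripts/
                __init__.py
                export.py      # importable, or runnable directly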

    Read the article

  • problem reading a csv file in python

    - by Hossein
    Hi, I am trying to read a very simple but somewhat large (800 MB) csv file using the csv library in Python. The delimiter is a single tab and each line consists of some numbers. Each line is a record, and I have 20681 rows in my file. I had some problems during my calculations using this file; it always stops at a certain row. I got suspicious about the number of rows in the file. I used the code below to count the number of rows in this file:

        tfdf_Reader = csv.reader(open('v2-host_tfdf_en.txt'), delimiter=' ')
        c = 0
        for row in tfdf_Reader:
            c = c + 1
        print c

    To my surprise c is printed with the value of 61722!!! Why is this happening? What am I doing wrong?
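
    Two things worth checking, sketched below: the Python 2 csv docs ask for the file to be opened in binary mode, and a tab delimiter should be written as '\t' rather than a literal space; comparing against a plain line count shows whether csv.reader and the raw file agree about the row count.

        import csv

        # Plain count of physical lines, for comparison
        with open('v2-host_tfdf_en.txt', 'rb') as f:
            print sum(1 for line in f)

        # csv.reader with the file opened in binary mode and an explicit tab delimiter
        tfdf_reader = csv.reader(open('v2-host_tfdf_en.txt', 'rb'), delimiter='\t')
        print sum(1 for row in tfdf_reader)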

    Read the article

  • Creating interruptible process in python

    - by Glycerine
    I'm creating a Python script which parses a large (but simple) CSV. It'll take some time to process. I would like the ability to interrupt the parsing of the CSV so I can continue at a later stage. Currently I have this, which lives in a larger class (unfinished). Edit: I have some changed code. But the system will parse over 3 million rows.

        def parseData(self)
            reader = csv.reader(open(self.file))
            for id, title, disc in reader:
                print "%-5s %-50s %s" % (id, title, disc)
                l = LegacyData()
                l.old_id = int(id)
                l.name = title
                l.disc_number = disc
                l.parsed = False
                l.save()

    This is the old code.

        def parseData(self):
            #first line start
            fields = self.data.next()
            for row in self.data:
                items = zip(fields, row)
                item = {}
                for (name, value) in items:
                    item[name] = value.strip()
                self.save(item)

    Thanks guys.
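
    One way to make the loop resumable, sketched here as a method of the same class with a hypothetical checkpoint file: remember how many rows have already been saved, skip that many on the next run, and catch KeyboardInterrupt so the count is written out when the run is cut short.

        import csv
        import itertools
        import os

        CHECKPOINT = 'parse_progress.txt'   # hypothetical progress file

        def parseData(self):
            done = 0
            if os.path.exists(CHECKPOINT):
                done = int(open(CHECKPOINT).read())
            reader = csv.reader(open(self.file))
            try:
                # skip the rows that were handled on a previous run
                for row in itertools.islice(reader, done, None):
                    self.save_row(row)      # whatever per-row work is needed
                    done += 1
            except KeyboardInterrupt:
                pass
            finally:
                open(CHECKPOINT, 'w').write(str(done))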

    Read the article

  • Exception message (Python 2.6)

    - by TurboJupi
    If I want to open a binary file (in Python 2.6) that doesn't exist, the program exits with an error and prints this:

        Traceback (most recent call last):
          File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 4, in <module>
            pkl_file = open('monitor.dat', 'rb')
        IOError: [Errno 2] No such file or directory: 'monitor.dat'

    I can handle this with 'try-except', like:

        try:
            pkl_file = open('monitor.dat', 'rb')
            monitoring_pickle = pickle.load(pkl_file)
            pkl_file.close()
        except Exception:
            print 'No such file or directory'

    Does anybody know how I could, in the caught exception, print the following line?

        File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 11, in <module>
            pkl_file = open('monitor.dat', 'rb')

    That way the program would not exit, and I would have useful information.
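
    The traceback module can print exactly that information from inside the except block, so the program keeps running but still reports where the failure happened. A minimal sketch:

        import pickle
        import traceback

        try:
            pkl_file = open('monitor.dat', 'rb')
            monitoring_pickle = pickle.load(pkl_file)
            pkl_file.close()
        except IOError:
            traceback.print_exc()   # prints the File "...", line N details without exiting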

    Read the article

  • How to integrate Python scripting in my Android App (like SL4A)

    - by Seraphim's host
    I need to add a scripting layer to my Android app, so I can remotely prepare a script that my app downloads from a web service and executes on the user's device. I found an interesting project called Scripting Layer for Android (SL4A) here: http://code.google.com/p/android-scripting/ I'm not sure I can execute a Python script without installing PythonForAndroid_r4.apk first. I can't force my customer to install that application! So my question is: can the SL4A layer be integrated in my app without the need to install another apk? I need to execute actions like updating data in the DB and creating/reading/deleting a file on the SD card... Not so complex, but I see SL4A can do a lot of things like these. Other scripting libraries? EDIT: Also found MVEL: http://mvel.codehaus.org/ but I think it needs to be integrated to execute complex operations like accessing a DB...

    Read the article

  • Where is Python support for PEM + RSA + DES3?

    - by jasonjs
    I need a Python library that supports PEM files and both RSA signing and DES3 encryption. pycrypto doesn't seem to support PEM, and its mechanism for loading existing keys is undocumented and cryptic. m2crypto doesn't seem to support DES/DES3, oddly. I've been running an openssl subprocess, but I'd rather have something built in and preferably fast. Does this exist? (Failing that, I hesitate to ask, but are there high-level enough C apis available for this that I could write a special-purpose extension without killing myself/introducing vulns?)
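
    If upgrading is an option, newer pycrypto releases do grow a PEM import for RSA keys, and DES3 has been there all along. A hedged sketch (the key file, key bytes and plaintext are placeholders, and RSA.importKey needs a sufficiently recent pycrypto):

        from Crypto.PublicKey import RSA
        from Crypto.Cipher import DES3

        # Load an RSA private key from a PEM file
        rsa_key = RSA.importKey(open('private_key.pem').read())

        # DES3: key of 16 or 24 bytes, data padded to 8-byte blocks
        cipher = DES3.new('0123456789abcdef01234567', DES3.MODE_CBC, '\x00' * 8)
        ciphertext = cipher.encrypt('sixteen byte txt')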

    Read the article

  • Python+suds : xsd_base64Binary type ?

    - by n1r3
    Hi, I'm trying to attach some files to a Jira issue using the SOAP API. I have Python 2.6 and SOAPpy isn't working any more, so I'm using suds. Everything is fine except for the attachments... I don't know how to rewrite this piece of code: http://confluence.atlassian.com/display/JIRA/Creating+a+SOAP+Client?focusedCommentId=180943#comment-180943 Any clue? I don't know how to deal with a complex type like this one:

        <complexType name="ArrayOf_xsd_base64Binary">
          <complexContent>
            <restriction base="soapenc:Array">
              <attribute ref="soapenc:arrayType" wsdl:arrayType="xsd:byte[][]"/>
            </restriction>
          </complexContent>
        </complexType>

    thanks a lot
    n.
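
    With suds the usual workaround is to avoid building the wrapper type at all and pass plain Python lists of base64-encoded strings, since the service accepts parallel arrays of file names and contents. A hedged sketch (server URL, credentials, issue key and file name are placeholders; addBase64EncodedAttachmentsToIssue is the method JIRA's SOAP service exposes for this, if your JIRA version provides it):

        import base64
        from suds.client import Client

        client = Client('http://jira.example.com/rpc/soap/jirasoapservice-v2?wsdl')
        auth = client.service.login('user', 'password')

        data = base64.b64encode(open('report.pdf', 'rb').read())
        client.service.addBase64EncodedAttachmentsToIssue(
            auth, 'PROJ-123', ['report.pdf'], [data])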

    Read the article

  • Maintaining Logging and/or stdout/stderr in Python Daemon

    - by dave mankoff
    Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example). This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting - silently, since all open file descriptors were closed. I am having a tricky time debugging the issue currently and am wondering what the proper way to catch and log these errors is. What is the right way to set up logging such that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.
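
    A pattern that tends to work, sketched below with made-up paths: configure the file-based logging only after the double fork (so its descriptor is not among the ones that were closed), and point sys.stdout and sys.stderr at small writer objects so stray prints and uncaught tracebacks land in the same log.

        import logging
        import sys

        class StreamToLogger(object):
            """File-like object that forwards writes to a logger."""
            def __init__(self, logger, level):
                self.logger, self.level = logger, level
            def write(self, message):
                for line in message.rstrip().splitlines():
                    self.logger.log(self.level, line.rstrip())
            def flush(self):
                pass

        def setup_logging_after_daemonize():
            # called *after* daemonizing, so this file descriptor stays open
            logging.basicConfig(filename='/var/log/mydaemon.log',   # illustrative path
                                level=logging.INFO,
                                format='%(asctime)s %(levelname)s %(message)s')
            sys.stdout = StreamToLogger(logging.getLogger('stdout'), logging.INFO)
            sys.stderr = StreamToLogger(logging.getLogger('stderr'), logging.ERROR)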

    Read the article

  • Error handling with Python + Pylons

    - by ensnare
    What is the proper way to handle errors with Python + Pylons? Say a user sets a password via a form that, when passed to a model class via the controller, throws an error because it's too short. How should that error be handled so that an error message gets displayed on the web page rather than the entire script terminating to an error page? Should there be any error handling in the controller itself? I hope I am explaining myself clearly. Thank you.
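
    One arrangement that keeps the user on the page, independent of any particular form library: let the model raise a specific exception, catch it in the controller action, and re-render the form template with the message. A rough sketch with made-up project names (tmpl_context is the conventional c in Pylons):

        from pylons import request, tmpl_context as c
        from myapp.lib.base import BaseController, render     # hypothetical project layout
        from myapp.model import user_store, PasswordTooShort  # hypothetical model pieces

        class AccountController(BaseController):
            def set_password(self):
                try:
                    user_store.set_password(request.POST['password'])
                except PasswordTooShort, e:
                    c.error = str(e)     # the template shows c.error next to the field
                    return render('/account/password_form.mako')
                return render('/account/password_saved.mako')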

    Read the article

  • Python sqlite3 and concurrency

    - by RexE
    I have a Python program that uses the "threading" module. Once every second, my program starts a new thread that fetches some data from the web, and stores this data to my hard drive. I would like to use sqlite3 to store these results, but I can't get it to work. The issue seems to be about the following line: conn = sqlite3.connect("mydatabase.db") If I put this line of code inside each thread, I get an OperationalError telling me that the database file is locked. I guess this means that another thread has mydatabase.db open through a sqlite3 connection and has locked it. If I put this line of code in the main program and pass the connection object (conn) to each thread, I get a ProgrammingError, saying that SQLite objects created in a thread can only be used in that same thread. Previously I was storing all my results in CSV files, and did not have any of these file-locking issues. Hopefully this will be possible with sqlite. Any ideas?
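
    One pattern that avoids both errors, sketched below: a single thread owns the sqlite3 connection, and every other thread hands it work through a Queue, so no connection object ever crosses a thread boundary (the results table is made up).

        import sqlite3
        import threading
        import Queue                 # 'queue' on Python 3

        write_queue = Queue.Queue()

        def db_writer():
            conn = sqlite3.connect("mydatabase.db")   # only this thread touches conn
            while True:
                item = write_queue.get()
                if item is None:                      # sentinel: shut the writer down
                    break
                conn.execute("INSERT INTO results (data) VALUES (?)", (item,))
                conn.commit()
            conn.close()

        threading.Thread(target=db_writer).start()

        # worker threads only do:   write_queue.put(fetched_data)
        # and the main program ends with:   write_queue.put(None)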

    Read the article

  • python writing a list to a file

    - by gfar90
    I need to write a list to a file in Python. I know the list should be converted to a string with the join method, but since I have a tuple I got confused. I tried a lot to change my variables to strings etc.; this is one of my first attempts:

        def perform(text):
            repository = [("","")]
            fdist = nltk.FreqDist(some_variable)
            for c in some_variable:
                repository.append((c, fdist[c]))
            return ' '.join(repository)

    but it gives me the following error:

        Traceback (most recent call last):
          File "", line 1, in
            qe = perform(entfile2)
          File "", line 14, in perform
            return ' '.join(repository)
        TypeError: sequence item 0: expected string, tuple found

    Any ideas how to write the list 'repository' to a file? Thanks!
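
    str.join only accepts strings, and every item in repository is a (word, count) tuple, so each pair has to be formatted first. A small sketch of one way to write it out (the output file name is made up):

        def write_repository(repository, path='frequencies.txt'):
            out = open(path, 'w')
            for word, count in repository:
                out.write('%s %s\n' % (word, count))   # one "word count" pair per line
            out.close()

        # or, mirroring the original join attempt, build one string first:
        #     text = ' '.join('%s:%s' % (word, count) for word, count in repository)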

    Read the article

  • long waiting time in linking

    - by ccanan
    Hi, here is the situation: I am using Visual Studio 2005. The solution contains lots of projects, 34 in all, and the start-up project depends on the others. In the linking phase, there is a long wait before the real linking starts. I am pretty sure it's because of the number of dependent projects, since when I use a solution with only 10 of the 34 projects (keeping the other projects as headers and libs), linking starts instantly. Does anyone have any idea how I can reduce the waiting time? Thanks.

    Read the article

  • python -> combinations of numbers and letters

    - by tekknolagi
    #!/usr/bin/python
        import random

        lower_a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
                   'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
        upper_a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
                   'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
        num = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

        all = []
        all = " ".join("".join(lower_a) + "".join(upper_a) + "".join(num))
        all = all.split()

        x = 1
        c = 1
        while x < 10:
            y = []
            for i in range(c):
                a = random.choice(all)
                y.append(a)
            print "".join(y)
            x += 1
            c += 1

    What I have now outputs something like the following:

        5
        hE
        HAy
        1kgy
        Pt6JM
        2pFuCb
        Jv5osaX
        5q8PwWAO
        SvHWRKfI5

    How can I make it systematically go through every combination of letters (upper and lowercase) for a given length, then add 1 to that length and repeat the process?
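
    itertools.product does exactly this: every combination of a given length, in a predictable order, and bumping the repeat count moves on to the next length. A sketch (be warned that the count grows as 62**length, so the larger lengths take astronomically long):

        import itertools
        import string

        chars = string.ascii_lowercase + string.ascii_uppercase + string.digits

        for length in range(1, 10):
            for combo in itertools.product(chars, repeat=length):
                print ''.join(combo)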

    Read the article

  • python: find and replace numbers < 1 in text file

    - by hjp
    I'm pretty new to Python programming and would appreciate some help to a problem I have... Basically I have multiple text files which contain velocity values as such: 0.259515E+03 0.235095E+03 0.208262E+03 0.230223E+03 0.267333E+03 0.217889E+03 0.156233E+03 0.144876E+03 0.136187E+03 0.137865E+00 etc for many lines... What I need to do is convert all the values in the text file that are less than 1 (e.g. 0.137865E+00 above) to an arbitrary value of 0.100000E+01. While it seems pretty simple to replace specific values with the 'replace()' method and a while loop, how do you do this if you want to replace a range? thanks
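
    Since this is a numeric comparison rather than a text substitution, it is simpler to parse each value with float() and write back either the original token or the replacement. A sketch that processes one file (the file names are illustrative):

        def clamp_small_values(in_path, out_path, floor='0.100000E+01'):
            out = open(out_path, 'w')
            for line in open(in_path):
                tokens = line.split()
                fixed = [tok if float(tok) >= 1.0 else floor for tok in tokens]
                out.write(' '.join(fixed) + '\n')
            out.close()

        clamp_small_values('velocities.txt', 'velocities_fixed.txt')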

    Read the article

  • crc24 from c to python

    - by biiiiiaw
    Can someone please translate this code to Python? I have tried and tried again, but have not managed it:

        #define CRC24_INIT 0xB704CEL
        #define CRC24_POLY 0x1864CFBL

        typedef long crc24;

        crc24 crc_octets(unsigned char *octets, size_t len)
        {
            crc24 crc = CRC24_INIT;
            int i;
            while (len--) {
                crc ^= (*octets++) << 16;
                for (i = 0; i < 8; i++) {
                    crc <<= 1;
                    if (crc & 0x1000000)
                        crc ^= CRC24_POLY;
                }
            }
            return crc & 0xFFFFFFL;
        }

    I have the rotate-left function (ROL24(value, bits_to_rotate_by)), which I know works since I got it from the source code of a reputable programmer, but I don't get the * and ++ on octets. I only sort of understand how ++ works in C++, and I don't know what * is at all.
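
    The *octets++ part just reads the current byte and then advances the pointer, which in Python becomes an ordinary loop over the input; no rotate is needed, since the algorithm only uses a plain left shift. A sketch of the translation (Python 2, where iterating over a str yields one-character strings, hence the ord()):

        CRC24_INIT = 0xB704CE
        CRC24_POLY = 0x1864CFB

        def crc_octets(octets):
            """CRC-24 of a byte string - a direct translation of the C loop."""
            crc = CRC24_INIT
            for octet in octets:
                crc ^= ord(octet) << 16     # the *octets++ step
                for _ in range(8):
                    crc <<= 1
                    if crc & 0x1000000:
                        crc ^= CRC24_POLY
            return crc & 0xFFFFFF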

    Read the article

  • Run and terminate a program (Python under Windows)

    - by Fredrich
    I'd like to create a small script that basically does this: run program1.exe -- kill program1.exe after n seconds -- run program1.exe again. I know some basic Python and would read up on this, but I'm in a bit of a hurry and just need this to get done asap. If someone has a script/idea or could help me out with just the syntax I need to open and kill the .exe file, please... I don't mind solutions in other languages either. I'm sorry if this is a bit "please write my code"-ish; that's not something I typically do.
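
    subprocess covers all three steps; a minimal sketch that restarts the program indefinitely, killing it after n seconds each cycle (the executable path and the 30-second interval are placeholders, and Popen.terminate needs Python 2.6+):

        import subprocess
        import time

        N_SECONDS = 30

        while True:
            proc = subprocess.Popen([r'C:\path\to\program1.exe'])
            time.sleep(N_SECONDS)
            proc.terminate()     # on Windows this calls TerminateProcess
            proc.wait()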

    Read the article

  • getting smallest of coordinates that differ by N or more in Python

    - by user248237
    Suppose I have a list of coordinates:

        data = [[(10, 20), (100, 120), (0, 5), (50, 60)],
                [(13, 20), (300, 400), (100, 120), (51, 62)]]

    and I want to take all tuples that either appear in each list in data, or any tuple that differs from all tuples in lists other than its own by 3 or less. How can I do this efficiently in Python? For the above example, the results should be:

        [[(100, 120),          # since it occurs in both lists
          (10, 20), (13, 20),  # since they differ by only 3
          (50, 60), (51, 60)]]

    (0, 5) and (300, 400) would not be included, since they don't appear in both lists and are not different from elements in lists other than their own by 3 or less. How can this be computed? Thanks.
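
    Under one reading of the rule - keep a tuple if some tuple in a different list is within 3 of it in both coordinates, exact matches being the zero-distance case - a straightforward double loop works; a sketch under that assumption (it compares every pair, which may matter for big lists):

        def close(p, q, tol=3):
            return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

        def keep_close_points(data, tol=3):
            kept = []
            for i, group in enumerate(data):
                others = [p for j, g in enumerate(data) if j != i for p in g]
                kept.extend(p for p in group if any(close(p, q, tol) for q in others))
            return kept

        data = [[(10, 20), (100, 120), (0, 5), (50, 60)],
                [(13, 20), (300, 400), (100, 120), (51, 62)]]
        print keep_close_points(data)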

    Read the article

  • Update existing columns and rows within csv file using Python

    - by wilbev
    So I've been attempting to use the csv module in Python to add data to existing rows and columns, but only specific columns of each row. So for example, let's say my existing csv file has the following:

        id, name, city, age
        1, Ed,, 34
        2, Pat,, 23

    So basically the city of each person is missing, and I would like to update each row with that person's city. However, the writerow method only seems to replace the existing data within the csv file. Changing the open file to append mode just adds the data to a new row. Is there any way to skip the existing data, and only add the city to each row? Thanks
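
    The csv module can't edit a file in place, so the usual pattern is: read everything, fill in the missing column, and write the rows back out (here to a new file). In this sketch the city lookup is a made-up dictionary standing in for wherever the real data comes from:

        import csv

        cities = {'Ed': 'Boston', 'Pat': 'Denver'}   # hypothetical name -> city lookup

        rows = list(csv.reader(open('people.csv', 'rb'), skipinitialspace=True))
        header, body = rows[0], rows[1:]
        name_col = header.index('name')
        city_col = header.index('city')

        for row in body:
            if not row[city_col]:
                row[city_col] = cities.get(row[name_col], '')

        writer = csv.writer(open('people_updated.csv', 'wb'))
        writer.writerows([header] + body)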

    Read the article

  • Filtering Data in a Text File with Python

    - by YAS
    I'm new to Python (like Zygote new), and it's just to supplement another program, but what I need is this: I have a text file that's a group of items for a game, and it is formatted like so:

        [1]
        Name=Blah
        Faction=Blahdiddly
        Cost=1000

        [2]
        Name=Meh
        Faction=MehMeh
        Cost=2000

        [3]
        Name=Lollypop
        Faction=Blahdiddly
        Cost=100

    And I need to be able to find out which groups (the numbers in brackets) have matching values. So if I search for Faction=Blahdiddly, groups 1 & 3 will come up. I unfortunately have NO idea how to do this. Can anyone help?
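
    The file is close enough to INI format that a small hand-rolled parser does the job: collect each bracketed group into a dictionary, then keep the group numbers whose values match. A sketch (the file name is made up):

        def load_groups(path):
            groups, current = {}, None
            for line in open(path):
                line = line.strip()
                if line.startswith('[') and line.endswith(']'):
                    current = line[1:-1]
                    groups[current] = {}
                elif '=' in line and current is not None:
                    key, value = line.split('=', 1)
                    groups[current][key] = value
            return groups

        def find_matches(groups, key, value):
            return [num for num, fields in groups.items() if fields.get(key) == value]

        groups = load_groups('items.txt')
        print find_matches(groups, 'Faction', 'Blahdiddly')   # -> ['1', '3'] for the sample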

    Read the article

  • Scraping *.aspx content using Python

    - by tomato
    I'm having difficulties scraping a dynamically generated table in ASPX. I'm trying to scrape the gas prices from a site like this GasPrices one. I can extract all the information in the gas price table (address, time submitted, etc.), except for the actual gas price. Is there a way I could scrape the gas prices, i.e. somehow get a text representation of them? I'm not very familiar with ASP/ASPX - but what's being generated now is not showing up in the final HTML. I'm using Python to do the scraping, but that's irrelevant unless there's a specific library...

    Read the article

  • Google App Engine db.Model in Python: only display user-defined fields

    - by MattM
    I'm a Python newbie, so I apologize in advance if this question has been asked before. I am building out an application in GAE and need to generate a report that contains the values for a user-defined subset of fields. For example, in my db model, CrashReport, I have the following fields:

        entry_type
        entry_date
        instance_id
        build_id
        crash_text
        machine_info

    I present a user with the above list as a checkbox group from which they select. Whichever fields the user selects, I then create a report showing all the values in the datastore, but only for the fields that they selected. For example, if from the above list the user selects the build_id and crash_text fields, the output might look like this:

        build_id   crash_text
        0.8.2      blown gasket
        0.8.2      boom!
        0.8.1      crack!
        ...

    So the question is, how exactly do I access only the values for the fields which the user has defined?
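
    Since the selected field names arrive as strings, getattr() is the usual bridge: fetch the entities as normal and pull out just the chosen properties. A sketch using the CrashReport model from the question (the selected_fields list stands in for whatever the checkbox form posted):

        selected_fields = ['build_id', 'crash_text']    # e.g. from the submitted form

        rows = []
        for report in CrashReport.all():                # standard db.Model query
            rows.append([getattr(report, name) for name in selected_fields])

        # rows now holds one list per entity, containing only the chosen values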

    Read the article

< Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >