Search Results

Search found 30115 results on 1205 pages for 'read uncommitted'.


  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file, and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with 2 file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

        IN1 = open("myfile.dat","r")
        IN2 = open("myfile.dat","r")
        for line1 in IN1:
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    The result:

        Hello Hello
        Hello World
        Hello This
        Hello is
        Hello an
        Hello Example
        Hello of
        Hello Using
        Hello Two
        Hello File
        Hello Pointers
        Hello to
        Hello Read
        Hello One
        Hello File

    The output should contain 15^2 lines.
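
    A minimal sketch of one fix, keeping the two-handle approach from the question: the second handle is exhausted after the first pass, so rewind it (or re-open the file) before each outer iteration. Python 2 syntax to match the original.

        IN1 = open("myfile.dat", "r")
        IN2 = open("myfile.dat", "r")
        for line1 in IN1:
            IN2.seek(0)  # rewind the inner handle; it was consumed on the previous pass
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    Re-opening "myfile.dat" inside the outer loop would work just as well if mixing iteration with seek() ever proves troublesome.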

    Read the article

  • Is it worthwhile learning about Emacs? [closed]

    - by dole doug
    Hi there. Some time ago I heard what a great tool Emacs can be. I've read some papers about it and watched some videos too. I've read that Emacs is great not only for developers but for everyday users as well, so I decided to start learning how to use it and work with it. The problem is that I'm an MS Windows user, and I'm learning PHP and C in my spare time (I've also built a few small products in those languages, but I still consider myself a beginner). Another problem is that I learn alone (no friends around to ask or learn from about programming). Can you give me some tips on how to use these kinds of tools (especially the ones written for GNU/Unix) that have a "poor" GUI but rich features? Or do you recommend sticking with Windows-specific applications and forgetting about the ones that come from GNU?

    Read the article

  • Cython Speed Boost vs. Usability

    - by zubin71
    I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow, the Python wiki, and the article "General Rules for Optimization". Cython is what grabs my interest the most; instead of writing C code yourself, you can choose to use C data types in your Python code itself. Here is a silly test I tried:

        #!/usr/bin/python
        # test.pyx
        def test(value):
            for i in xrange(value):
                i**2
                if(i==1000000): print i

        test(10000001)

        $ time python test.pyx
        real    0m16.774s
        user    0m16.745s
        sys     0m0.024s

        $ time cython test.pyx
        real    0m0.513s
        user    0m0.196s
        sys     0m0.052s

    Now, honestly, I'm dumbfounded. The code I used here is pure Python, and all I changed was the interpreter. If Cython is this good, why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?
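
    One caveat worth noting: 'cython test.pyx' only translates the .pyx file into C source; it does not compile or run it, so the two timings above are measuring different things (16 s of execution versus 0.5 s of translation). A minimal sketch of the usual build step, assuming Cython and a C compiler are installed (module and file names are simply the ones from the question):

        # setup.py -- classic Cython/distutils build sketch
        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        setup(
            cmdclass={'build_ext': build_ext},
            ext_modules=[Extension("test", ["test.pyx"])],
        )

    Running "python setup.py build_ext --inplace" produces a compiled extension module, after which "import test" executes the compiled code; that is the number to compare against plain CPython.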

    Read the article

  • ASP.NET MVC Authentication Cookie Not Being Retrieved

    - by Jamie Wright
    I am having a hard time implementing "Remember Me" functionality in an MVC application with a custom principal. I have boiled it down to ASP.NET not retrieving the authentication cookie for me. I have included two snapshots below from Google Chrome. The first shows the result of Request.Cookies, which is set within the controller action and placed in ViewData for the view to read; notice that the .ASPXAUTH cookie is missing. The second shows the results from the Chrome developer tools, where you can see that .ASPXAUTH is included. Does anyone know what the issue may be here? Why does ASP.NET not read this value from the cookie collection?

    Read the article

  • How do I debug a .NET executable at MSIL-level?

    - by Eyal
    I have a .NET executable file that I need to debug. I would like to step into it so that it stops on the first instruction, with a visual interface for single-stepping, breakpoints, etc. This seems like it should be easy, but I haven't yet found a solution! I read about DbgCLR.exe on the web, but I can't find that file on my system or online for the life of me. I also read somewhere that DbgCLR.exe is no longer necessary because Visual Studio can do the same thing. A Visual Studio solution would be great, too! (Maybe there's a menu item that I overlooked?) Either will suit, so long as I can inspect the stack, set breakpoints, etc.

    Read the article

  • PyQt4, QThread and opening big files without freezing the GUI

    - by jmrbcu
    Hi, I would like to ask how to read a big file from disk while keeping the PyQt4 UI responsive (not blocked). I moved the loading of the file into a QThread subclass, but my GUI thread still freezes. Any suggestions? I think it must be something to do with the GIL, but I don't know how to sort it out. EDIT: I am using vtkGDCMImageReader from the GDCM project to read a multiframe DICOM image and display it with VTK and PyQt4. I do this load in a different thread (QThread), but my app freezes until the image is loaded. Here is some example code:

        class ReadThread(QThread):
            def __init__(self, file_name):
                super(ReadThread, self).__init__(self)
                self.file_name = file_name
                self.reader.vtkgdcm.vtkGDCMImageReader()

            def run(self):
                self.reader.SetFileName(self.file_name)
                self.reader.Update()
                self.emit(QtCore.SIGNAL('image_loaded'), self.reader.GetOutput())
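
    As pasted, the snippet has two typos that would keep it from running at all: super(...).__init__(self) passes self twice, and the reader is never assigned. A minimal corrected sketch, assuming PyQt4 old-style signals and that vtkgdcm is importable; note that even with a correct QThread, the GUI can still stall if the underlying C++ reader does not release the GIL during Update(), which depends on how VTK/GDCM was built:

        from PyQt4 import QtCore
        import vtkgdcm

        class ReadThread(QtCore.QThread):
            def __init__(self, file_name, parent=None):
                super(ReadThread, self).__init__(parent)    # don't pass self to __init__
                self.file_name = file_name
                self.reader = vtkgdcm.vtkGDCMImageReader()  # assign the reader

            def run(self):
                self.reader.SetFileName(self.file_name)
                self.reader.Update()                        # heavy work off the GUI thread
                self.emit(QtCore.SIGNAL('image_loaded'), self.reader.GetOutput())

    The GUI side would connect the 'image_loaded' signal to a slot and call start() on the thread, never run() directly.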

    Read the article

  • SSIS (SQL Server Integration Services) XML data flow

    - by swapna
    Hi, I have an XML file whose content I have to write to a database table using an SSIS package. I am using an XML source and an OLE DB destination. My issue is that this XML file generates multiple outputs (event, product, offer, form, etc.), but I need to write them all into one data row per event in the database (more than one row if there are 2 products for the event). I do not know how to take these multiple outputs and make a single row for an event. I have read numerous articles about this subject but am not able to decide on the right way of doing this:

    1) An XML source? (If I use this, how do I merge the multiple outputs?)
    2) Or a Script task that reads and writes to the DB using XML objects?
    Or anything new? Please provide me some solutions.

    XML sample file:

        * - ABc. 2009-06-07 2010-04-30 region test 1 contact - offertest product1 product1 187 *

    Thanks, SNA

    Read the article

  • How does jQuery store data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() function. Is this something called expando? Or is it using HTML5 Web Storage (although I think that is very unlikely)? The documentation says: "The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." From what I have read about expando, it seems to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data when .data() is used. http://api.jquery.com/data/

    Read the article

  • SqlCommand.ExecuteNonQuery() truncates command text

    - by H. Abraham Chavez
    I'm building a custom DB deployment utility; I need to read text files containing SQL scripts and execute them against the database. Pretty easy stuff, so far so good. However, I've encountered a snag: the contents of the file are read successfully and entirely, but once passed into the SqlCommand and executed with SqlCommand.ExecuteNonQuery, only part of the script is executed. I fired up Profiler and confirmed that my code is not passing all of the script.

        private void ExecuteScript(string cmd, SqlConnection sqlConn, SqlTransaction trans)
        {
            SqlCommand sqlCmd = new SqlCommand(cmd, sqlConn, trans);
            sqlCmd.CommandType = CommandType.Text;
            sqlCmd.CommandTimeout = 9000000; // for testing
            sqlCmd.ExecuteNonQuery();
        }

        // I call it like this; readDMLScript contains 543 lines of T-SQL
        string readDMLScript = ReadFile(dmlFile);
        ExecuteScript(readDMLScript, sqlConn, trans);

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox','-t','mp3','-', 'test.mp3','trim','0','15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3','rb').read())
        if errors:
            raise RuntimeError(errors)

    This causes problems on large files, however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox','test.mp3', tmp,'trim','0','15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
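
    One alternative sketch that avoids both the full read() and the temporary file: hand the open file objects straight to Popen as stdin and stdout, so the audio is streamed by the OS and never buffered in Python. File names are hypothetical, and it assumes the local SoX build can read and write MP3:

        from subprocess import Popen, PIPE

        src = open('test.mp3', 'rb')
        dst = open('trimmed.mp3', 'wb')
        pipe = Popen(['sox', '-t', 'mp3', '-', '-t', 'mp3', '-', 'trim', '0', '15'],
                     stdin=src, stdout=dst, stderr=PIPE)
        _, errors = pipe.communicate()   # only stderr is piped, so nothing large is buffered
        src.close()
        dst.close()
        if pipe.returncode != 0:
            raise RuntimeError(errors)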

    Read the article

  • A couple of questions about using fwrite/fread with data structures

    - by Nazgulled
    Hi, I'm using fwrite() and fread() for the first time to write some data structures to disk, and I have a couple of questions about best practices and proper ways of doing things. What I'm writing to disk (so I can later read it back) is all the user profiles inserted in a Graph structure. Each graph vertex is of the following type:

        typedef struct sUserProfile {
            char name[NAME_SZ];
            char address[ADDRESS_SZ];
            int socialNumber;
            char password[PASSWORD_SZ];
            HashTable *mailbox;
            short msgCount;
        } UserProfile;

    And this is how I'm currently writing all the profiles to disk:

        void ioWriteNetworkState(SocialNetwork *social) {
            Vertex *currPtr = social->usersNetwork->vertices;
            UserProfile *user;
            FILE *fp = fopen("save/profiles.dat", "w");

            if(!fp) {
                perror("fopen");
                exit(EXIT_FAILURE);
            }

            fwrite(&(social->usersCount), sizeof(int), 1, fp);

            while(currPtr) {
                user = (UserProfile*)currPtr->value;

                fwrite(&(user->socialNumber), sizeof(int), 1, fp);
                fwrite(user->name, sizeof(char)*strlen(user->name), 1, fp);
                fwrite(user->address, sizeof(char)*strlen(user->address), 1, fp);
                fwrite(user->password, sizeof(char)*strlen(user->password), 1, fp);
                fwrite(&(user->msgCount), sizeof(short), 1, fp);

                break;

                currPtr = currPtr->next;
            }

            fclose(fp);
        }

    Notes: The first fwrite() you see writes the total user count in the graph so I know how much data I need to read back. The break is there for testing purposes; there are thousands of users and I'm still experimenting with the code.

    My questions: After reading this I decided to use fwrite() on each element instead of writing the whole structure. I also avoid writing the pointer to the mailbox, as I don't need to save that pointer. So, is this the way to go? Multiple fwrite()s instead of a single one for the whole structure? Isn't that slower? And how do I read back this content? I know I have to use fread(), but I don't know the size of the strings, because I used strlen() to write them. I could write the output of strlen() before writing the string, but is there any better way without extra writes?

    Read the article

  • shutil.rmtree fails on Windows with 'Access is denied'

    - by Sridhar Ratnakumar
    In Python, when running shutil.rmtree over a folder that contains a read-only file, the following exception is printed:

        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 216, in rmtree
            rmtree(fullname, ignore_errors, onerror)
        File "C:\ActivePython32Python26\lib\shutil.py", line 221, in rmtree
            onerror(os.remove, fullname, sys.exc_info())
        File "C:\ActivePython32Python26\lib\shutil.py", line 219, in rmtree
            os.remove(fullname)
        WindowsError: [Error 5] Access is denied: 'build\\pyhg_trunk-win32-x86-hgtip27\\image\\feature-core\\INSTALLDIR\\tcl\\tcl8.5\\msgs\\af.msg'

    Looking in the File Properties dialog I noticed that the af.msg file is set to be read-only. So the question is: what is the simplest workaround/fix to get around this problem, given that my intention is to do an equivalent of rm -rf build/ but on Windows? (Without having to use unxutils or Cygwin.)
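
    The usual workaround on Windows is an onerror callback that clears the read-only attribute and retries the failed operation; a minimal sketch:

        import os
        import stat
        import shutil

        def handle_remove_readonly(func, path, exc_info):
            # Clear the read-only bit and retry the operation that just failed.
            os.chmod(path, stat.S_IWRITE)
            func(path)

        shutil.rmtree('build', onerror=handle_remove_readonly)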

    Read the article

  • wxPython - Save Items in ListCtrl

    - by dpswt
    Hello everyone. My question is whether we can save the items in a ListCtrl so that every time someone opens the application the items are there, and if the user removes one, it is also removed from the configuration. I know that I can use wx.Config and I'm trying to accomplish this with it, but I don't know how to read it back in a way that achieves what I want. So what I would like to know is the proper way to write/read wx.Config so that every time someone opens the application, the items in the ListCtrl are there. Thanks in advance.
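
    A minimal sketch of one way to do it, with hypothetical key names: store the row count plus one keyed entry per row whenever the list changes (or on exit), and read them back on startup. Anything the user removes simply is not written the next time save_items() runs.

        import wx

        def save_items(list_ctrl):
            config = wx.Config("MyApp")
            config.WriteInt("itemCount", list_ctrl.GetItemCount())
            for i in range(list_ctrl.GetItemCount()):
                config.Write("item%d" % i, list_ctrl.GetItemText(i))
            config.Flush()

        def load_items(list_ctrl):
            config = wx.Config("MyApp")
            for i in range(config.ReadInt("itemCount", 0)):
                list_ctrl.InsertStringItem(i, config.Read("item%d" % i, ""))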

    Read the article

  • FoxPro to C#: What is the best method, ODBC, OLE DB, or another?

    - by Martin Labelle
    We need to read data from FoxPro 8 with C#. I'm going to do some operations and push some of that data to a SQL Server database. We are not sure of the best method to read the data; I have looked at OLE DB and ODBC. Which is best?

    REQUIREMENTS:
    1) The export program will run each night, but my company runs 24 hours a day.
    2) The DBF files can sometimes be huge.
    3) We DON'T need to modify data.
    4) Our system, which uses FoxPro, is quite unstable: I need to find a way that ABSOLUTELY does not corrupt data and, ideally, does not lock the DBF files while reading.
    5) Speed is a minor requirement: it must be quick, but requirement #4 is the most important.

    Read the article

  • How Should I Print Documentation from Google Code?

    - by peter.newhook
    Google does a decent job of documenting their APIs (like Closure, http://code.google.com/closure/compiler/docs/overview.html), but I find them hard to read because they're broken into such short pages. I like to leaf through my docs and read them on paper. Has anyone found a good way to print the documentation on Google Code? It could be a PDF, or even just a long page with lots of content. Please note, I'm not talking about the wikis on the open-source side of Google Code; I'm referring to the API docs published by Google.

    Read the article

  • Appscript to write iTunes artwork

    - by Kartik Aiyer
    I'm trying to capture artwork from a PICT file and embed it into a track in iTunes using python appscript. I did something like this:

        imFile = open('/Users/kartikaiyer/temp.pict','r')
        data = imFile.read()
        it = app('iTunes')
        sel = it.current_track.get()
        sel.artworks[0].data_.set(data[513:])

    I get an error:

        OSERROR: -1731
        MESSAGE: Unknown object

    The similar AppleScript code looks like this:

        tell application "iTunes"
            set the_artwork to read (POSIX file "/Users/kartikaiyer/temp.pict") from 513 as picture
            set data of artwork 1 of current track to the_artwork
        end tell

    I tried using ASTranslate, but it never instantiates the_artwork and then throws an error when there is a reference to the_artwork. Can anyone help? I'm new to appscript and Python in general.

    Read the article

  • Experienced developer trying to get an outsourcing contract with a current client

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process, and an absolutely sickening amount of waste due to bureaucratic overhead. Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works", a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data! Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software. This is my plan:

    1) Write a report explaining in depth all the problems with their current system.
    2) Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally.
    3) Write a high-level development plan, including what resources I will require.
    4) Hand 1, 2, and 3 to my manager and hope he passes it up the chain. Worst case, he fires me, but this isn't so bad.
    5) A convinced executive decides to award my company a contract for the new system.

    I have 8 years of experience as a software contractor and have delivered my share of successful software products, but always working in-house for small and medium-sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time I'm doing something this bold, I have my doubts. My question is: is this a good idea? If you think not, please spare no detail.

    Read the article

  • Get last n lines of a file with Python, similar to tail

    - by Armin Ronacher
    I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item at the bottom. So I need a tail() method that can read n lines from the bottom and supports an offset. What I came up with looks like this:

        def tail(f, n, offset=0):
            """Reads n lines from f with an offset of offset lines."""
            avg_line_length = 74
            to_read = n + offset
            while 1:
                try:
                    f.seek(-(avg_line_length * to_read), 2)
                except IOError:
                    # woops.  apparently file is smaller than what we want
                    # to step back, go to the beginning instead
                    f.seek(0)
                pos = f.tell()
                lines = f.read().splitlines()
                if len(lines) >= to_read or pos == 0:
                    return lines[-to_read:offset and -offset or None]
                avg_line_length *= 1.3

    Is this a reasonable approach? What is the recommended way to tail log files with offsets?
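
    For what it is worth, a hypothetical usage of the function above (the file name is made up); opening the log in binary mode avoids the seek-from-end restriction that text-mode files have on newer Python versions:

        with open('app.log', 'rb') as f:
            for line in tail(f, 10, offset=5):   # last 10 lines, skipping the 5 newest
                print(line)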

    Read the article

  • .NET WPF Application: Loading a resourced .XPS document

    - by contactmatt
    I'm trying to load a .xps document into a DocumentViewer object in my WPF application. Everything works fine except when I try to load a resourced .xps document. I am able to load the .xps document fine using an absolute path, but when I try loading a resourced document it throws a DirectoryNotFoundException. Here's an example of the code that loads the document:

        using System.Windows.Xps.Packaging;

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            // Absolute path works (below)
            //var xpsDocument = new XpsDocument(@"C:\Users\..\Visual Studio 2008\Projects\MyProject\MyProject\Docs\MyDocument.xps", FileAccess.Read);

            // Resource path doesn't work (below)
            var xpsDocument = new XpsDocument(@"\MyProject;component/Docs/Mydocument.xps", FileAccess.Read);
            DocumentViewer.Document = xpsDocument.GetFixedDocumentSequence();
        }

    When the DirectoryNotFoundException is thrown, it says "Could not find a part of the path: 'C:\MyProject;component\Docs\MyDocument.xps'". It appears that it is trying to grab the .xps document from that path as if it were an actual path on the computer, instead of from the .xps that is stored as a resource within the application.

    Read the article

  • Processing potentially large STDIN data, more than once

    - by d11wtq
    I'd like to provide an accessor on a class that returns an NSInputStream for STDIN, which may contain several hundred megabytes (or gigabytes, though unlikely, perhaps) of data. When a caller gets this NSInputStream it should be able to read from it without worrying about exhausting the data it contains. In other words, another block of code may request the NSInputStream and will expect to be able to read from it. Without first copying all of the data into an NSData object, which (I assume) would cause memory exhaustion, what are my options for handling this? The returned NSInputStream does not have to be the same instance; it simply needs to provide the same data. The best I can come up with right now is to copy STDIN to a temporary file and then return NSInputStream instances backed by that file. Is this pretty much the only way to handle it? Is there anything I should be cautious of if I go the temporary-file route?

    Read the article

  • Experiences with "language converters"?

    - by Friedrich
    I have read a few articles mentioning converters from one language to another. I'm more than a bit skeptical about the usefulness of such tools. Does anyone know of, or have experience with, say, Visual Basic to Java converters (or the reverse)? Just to pick one example, http://www.tvobjects.com/products/products.html claims to be the "world leader", or so, in that area. However, if you read this: http://dev.mysql.com/tech-resources/articles/active-grid.html, the author states: "The consensus of MySQL users is that automated conversion tools for MS Access do not work. For example, tools that translate existing Access applications to Java often result in 80% complete solutions where finishing the last 20% of the work takes longer than starting from scratch." Well, we know we need 80% of the time to implement the first 80% of the functionality, and another 80% of the time for the other 20%... So, has anyone tried such tools and found them to be worthwhile?

    Read the article
