Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 8/67 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • memcpy segmentation fault on linux but not os x

    - by Andre
    I'm working on implementing a log-based file system for a file as a class project. I have a good amount of it working on my 64-bit OS X laptop, but when I try to run the code on the CS department's 32-bit Linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write plus some metadata (which sector he wants to write to, the type of operation, etc.). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0 and 0-15 on sector 1. The second record writes 16-512 on sector 1 and 0-31 on sector 2. The third record writes 32-512 on sector 2 and 0-47 on sector 3. Etc.

    So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, then copy, starting at record, into buf1 + the calculated offset for 512 - offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the code below segfaults, but only on the Linux machine. Running some random tests, it gets more curious. The Linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing?

        Record *record = malloc(sizeof(Record));
        record->tid = tid;
        record->opType = OP_WRITE;
        record->opArg = sector;

        int i;
        for (i = 0; i < DISK_SECTOR_SIZE; i++) {
            record->data[i] = buf[i]; // *buf is passed into this function
        }

        char *buf1 = malloc(DISK_SECTOR_SIZE);
        char *buf2 = malloc(DISK_SECTOR_SIZE);

        d_read(ad->disk, ad->curLogSector, buf1);
        d_read(ad->disk, ad->curLogSector + 1, buf2);

        memcpy(buf1 + offset, record, DISK_SECTOR_SIZE - offset);
        memcpy(buf2, record + DISK_SECTOR_SIZE - offset, offset + sizeof(Record) - sizeof(record->data));
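    A note on the snippet above: record is a Record *, so the expression record + DISK_SECTOR_SIZE - offset advances in units of sizeof(Record) (528 bytes), not in single bytes, which lands far outside the 528-byte allocation. Element-size scaling is easy to demonstrate; here is a small sketch in Python's ctypes (the field layout is invented, since the question doesn't show the full struct):

        import ctypes

        class Record(ctypes.Structure):
            # Hypothetical layout standing in for the question's 528-byte struct.
            _fields_ = [("tid", ctypes.c_int),
                        ("opType", ctypes.c_int),
                        ("opArg", ctypes.c_int),
                        ("data", ctypes.c_char * 512)]

        pair = (Record * 2)()  # two consecutive Records in memory
        # The distance between element 0 and element 1 is sizeof(Record) bytes --
        # the same scaling C applies to record + 1 when record is a Record *.
        print(ctypes.addressof(pair[1]) - ctypes.addressof(pair[0]))
        print(ctypes.sizeof(Record))  # the two numbers match

    In C, casting to a byte pointer first, as in (char *)record + DISK_SECTOR_SIZE - offset, makes the same arithmetic operate on bytes.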

    Read the article

  • Is it possible for competing file access to cause deadlock in Java?

    - by BlairHippo
    I'm chasing a production bug that's intermittent enough to be a real bastich to diagnose properly but frequent enough to be a legitimate nuisance for our customers. While I'm waiting for it to happen again on a machine set to spam the logfile with trace output, I'm trying to come up with a theory on what it could be. Is there any way for competing file reads/writes to create what amounts to a deadlock condition? For instance, let's say I have Thread A that occasionally writes to config.xml, and Thread B that occasionally reads from it. Is there a set of circumstances that would cause Thread B to prevent Thread A from proceeding? My thanks in advance to anybody who helps with this theoretical fishing expedition.

    Edit: To answer Pyrolistical's questions: the code isn't using FileLock, and is running on a WinXP machine.

    Read the article

  • haystack's RealTimeSearchIndex causes django to hang on data entry

    - by lsc
    I'm using django-haystack and a Xapian backend with real-time indexing (haystack.indexes.RealTimeSearchIndex) of model data, and it works fine on my Ubuntu server. However, it causes Django to hang upon data entry when I deployed the app on a RHEL5 server. Everything is hunky dory if I switch to a standard SearchIndex. Running ./manage.py rebuild_index manually works fine too. The major differences between the two setups would be the versions of Python (2.4.3 vs 2.6.4) and of Xapian (1.0.4-1 vs 1.0.15). Any suggestions on what may be the problem? Nothing interesting appears in the logs, and I've tried different databases (mysql, sqlite3) and deployment methods (mod_python, wsgi) with no luck yet. I have noted the warning in the haystack docs stating that RealTimeSearchIndex is only handled gracefully with a Solr backend; however, I'm running a very low-traffic site with only occasional writes, so I'm fine with some CPU overhead on writes.
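    For reference, the non-realtime setup being switched to looks roughly like this under haystack 1.x (a sketch assuming a hypothetical Note model; with a plain SearchIndex the index is refreshed by ./manage.py update_index or rebuild_index rather than on every save):

        # search_indexes.py -- django-haystack 1.x style; Note is a made-up model
        from haystack import indexes, site
        from myapp.models import Note

        class NoteIndex(indexes.SearchIndex):
            text = indexes.CharField(document=True, use_template=True)

        site.register(Note, NoteIndex)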

    Read the article

  • GUI Application becomes unresponsive while http request is being done

    - by JW
    I have just made my first proper little desktop GUI application that basically wraps a GUI (php-gtk) interface around a SimpleTest web test case to make it act as a remote testing client. Each time the local web test case runs, it sends an HTTP request to another SimpleTest case (one that has an XHTML interface) sitting on my server. The application lets me run one local test that collates information from multiple remote tests. It just has a 'Start Test' button, a 'Stop Test' button, and a setting to increase/decrease the number of remote tests conducted in each HTTP request. Each test run takes about an hour to complete.

    The trouble is, most of the time the application is making HTTP requests, and whenever an HTTP request is being made, the application's GUI is unresponsive. I have taken to making the application wait a few seconds (iterating through Gtk::main_iteration) between requests in order to give the user time to re-size the window, press the Stop button, etc. But this makes the whole test run take a lot longer than is necessary.

        <?php
        require_once('simpletest/web_tester.php');

        class TestRemoteTestingClient extends WebTestCase
        {
            function testRunIterations()
            {
                ...
                $this->assertTrue($this->get($nextUrl), 'getting from pointer:' . $this->_remoteMementoPointer);
                $this->assertResponse(200, "checking response for " . $nextUrl);
                $this->assertText('RemoteNodeGreen');
                $this->doGtkIterationsForMinNSeconds($secs);
                ...
            }

            public function doGtkIterationsForMinNSeconds($secs)
            {
                $this->appendStatusMessage("Waiting " . $secs);
                $start = time();
                $end = $start + $secs;
                while (time() < $end) {
                    $this->appendStatusMessage("Waiting " . ($end - time()));
                    while (Gtk::events_pending()) Gtk::main_iteration();
                }
            }
        }

    Is there a way to keep the application responsive while, at the same time, making an HTTP request? I am considering splitting the application into two, where:

    Test Controller Application - acts as a settings-writer / report-reader: it writes to a settings file and reads a report file.
    Test Runner Application - acts as a settings-reader / report-writer: for each iteration it reads the settings file, runs the test, then writes a report.

    So to tell it to close down, I'd press the Stop button on the Test Controller Application, which writes to the settings file, which is read by the Test Runner Application, which stops and then writes to the report file to say it stopped; the Test Controller Application reads the report and updates the status, and so on...

    However, before I go ahead and split the application in two, I am wondering if there is any other obvious way to deal with this issue. I suspect it is probably quite common and a well-trodden path. Also, is there an easier way to send messages between two applications?
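    php-gtk has no threading, which is why the test pumps Gtk::main_iteration between requests; in a language with threads, the usual alternative to splitting into two processes is to run the blocking HTTP call on a worker thread and have the GUI loop poll a queue for results. A minimal sketch of that pattern (in Python, with a made-up endpoint, purely to show the shape):

        import queue
        import threading
        import urllib.request

        results = queue.Queue()

        def run_remote_test(url):
            # The blocking HTTP request happens off the GUI thread.
            body = urllib.request.urlopen(url).read()
            results.put(body)

        threading.Thread(target=run_remote_test,
                         args=("http://example.com/remote-test",),  # hypothetical URL
                         daemon=True).start()

        # Inside the GUI main loop, poll instead of blocking:
        #     try:
        #         report = results.get_nowait()
        #     except queue.Empty:
        #         pass  # keep processing GUI events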

    Read the article

  • Can I write to data output stream after reading response from data input stream?

    - by Sirius
    Hi, I want to do a client-server exchange like this:

    1. First the client sends/writes to the output stream.
    2. The server responds with some data, which the client reads from the input stream.
    3. After receiving the data, the client sends/writes to the output stream again, to confirm that the data has been received.

    Now, do I have to close the output stream and re-open it again before doing step 3? Also, if someone could provide me with a snippet, it would be really helpful. Thanks
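    The underlying question is language-agnostic: the two directions of a connected socket are independent streams and stay open until you close them, so nothing needs to be closed between steps. A sketch of the write/read/write round trip (in Python rather than Java, with a made-up host and newline-delimited framing):

        import socket

        # Hypothetical server; assumes messages are newline-terminated.
        conn = socket.create_connection(("example.com", 9000))
        f = conn.makefile("rwb")

        f.write(b"REQUEST\n")      # step 1: client writes
        f.flush()
        reply = f.readline()       # step 2: client reads the server's response
        f.write(b"ACK\n")          # step 3: client writes again -- same stream, no reopen
        f.flush()

        conn.close()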

    Read the article

  • Scala and the Java Memory Model

    - by Ben Lings
    The Java Memory Model (since 1.5) treats final fields differently from non-final fields. In particular, provided the this reference doesn't escape during construction, writes to final fields in the constructor are guaranteed to be visible to other threads even if the object is made available to the other thread via a data race. (Writes to non-final fields aren't guaranteed to be visible, so if you improperly publish them, another thread could see them in a partially constructed state.) Is there any documentation on how/if the Scala compiler creates final (rather than non-final) backing fields for classes? I've looked through the language specification and searched the web but can't find any definitive answers. (In comparison, the @scala.volatile annotation is documented to mark a field as volatile.)

    Read the article

  • How to programmatically fill a database

    - by Dwaine Bailey
    Hi there, I currently have an iPhone app that reads data from an external XML file at start-up and then writes this data to the database (it only reads/writes data that the user's app has not seen before, though). My concern is that there is going to be a back catalogue of data going back several years, and that the first time the user runs the app it will have to read all of this in and be atrociously slow. Our proposed solution is to include this data "pre-built" in the application's database, so that it doesn't have to load the archival data on first launch - it is already in the app when the user purchases it. My question is whether there is a way to automatically populate this database with data from, say, an XML file or something. The database is SQLite. I would populate it by hand, but obviously this would take a very long time, so I was just wondering if anybody had a more...programmatic solution...
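    Since the database is SQLite, one common route is to generate the pre-built .sqlite file offline with a small script and ship the result in the app bundle. A sketch of that idea in Python (the XML layout and table schema here are invented for illustration):

        import sqlite3
        import xml.etree.ElementTree as ET

        # Hypothetical feed layout: <items><item id="1"><title>...</title></item></items>
        tree = ET.parse("archive.xml")

        conn = sqlite3.connect("prebuilt.sqlite")
        conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, title TEXT)")

        rows = ((item.get("id"), item.findtext("title"))
                for item in tree.getroot().iter("item"))
        conn.executemany("INSERT OR REPLACE INTO items (id, title) VALUES (?, ?)", rows)

        conn.commit()
        conn.close()  # ship the resulting file inside the app bundle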

    Read the article

  • PHP Linefeeds (\n) Not Working

    - by dosboy
    For some reason I can't use \n to create a linefeed when outputting to a file with PHP. It just writes "\n" to the file. I've tried using "\\n" as well, where it just writes "\n" (as expected). But I can't for the life of me figure out why adding \n to my strings isn't creating new lines. I've also tried \r\n but it just appends "\r\n" to the line in the file. Example:

        error_log('test\n', 3, 'error.log');
        error_log('test2\n', 3, 'error.log');

    Outputs:

        test\ntest2\n

    Using MAMP on OSX in case that matters (some sort of PHP config thing maybe?). Any suggestions?
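    One thing jumps out in the example: in PHP, escape sequences like \n are only interpreted inside double-quoted strings, while single-quoted strings keep the backslash and the n literally - which matches the output shown. Python raw strings behave like PHP's single quotes, so the effect can be sketched there:

        # "\n" in a normal string is a newline; in a raw string (like PHP
        # single quotes) the backslash survives as two literal characters.
        print(len("a\nb"))   # 3
        print(len(r"a\nb"))  # 4
        print(r"a\nb")       # prints a\nb literally, no line break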

    Read the article

  • XmlDocument.WriteTo truncates resultant file

    - by Brad Heller
    Trying to serialize an XmlDocument to a file. The XmlDocument is rather large; however, in the debugger I can see that the InnerXml property has all of the XML blob in it -- it's not truncated there. Here's the code that writes my XmlDocument object to a file:

        // Write that string to a file.
        var fileStream = new FileStream("AdditionalData.xml", FileMode.OpenOrCreate, FileAccess.Write);
        xmlDocument.WriteTo(new XmlTextWriter(fileStream, Encoding.UTF8) { Formatting = Formatting.Indented });
        fileStream.Close();

    The file that's produced only gets written out to around line 5,760 -- it's actually truncated in the middle of a tag! Anyone have any ideas why this would truncate here?
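    A pattern worth checking whenever a stream is wrapped in a writer: if the outer writer buffers its output and is never flushed or closed, the tail of the document stays in the writer's buffer while the underlying stream is closed out from under it, and the file ends mid-content. The effect, sketched with Python's buffered I/O:

        import io

        raw = io.BytesIO()                        # stands in for the file on disk
        writer = io.BufferedWriter(raw, buffer_size=4096)

        writer.write(b"<root><item>hello</item></root>")
        print(raw.getvalue())   # b'' -- the document is still in the writer's buffer

        writer.flush()
        print(raw.getvalue())   # b'<root><item>hello</item></root>'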

    Read the article

  • Is there an editor that shows WYSIWYG comments?

    - by Bráulio Bezerra
    Has anyone seen an editor/IDE that shows WYSIWYG comments inside the code? I have seen some that show the docs of an element in a separate tab/window, but not together with the code. For example, a JavaDoc comment would be much clearer and easier to edit if it had no tags and could be edited like a snippet from a normal text document. This:

        /**
         * Writes <code>Hello world!</code> to the <b>standard output</b>.
         * @seealso showGoodbye
         */
        public static void showHello() {

    could be something like this:

        /*
          Writes Hello world! to the standard output.
          See also: showGoodbye()
        */
        public static void showHello() {

    but editable, of course. And for anyone who happens to have some knowledge/experience with open IDEs like Eclipse, Netbeans, etc.: would it be too hard to implement this?

    Read the article

  • Referencing a theorem-like environment by its [name]

    - by Seamus
    I am using ntheorem to typeset a set of conditions. In my preamble I have:

        \theoremstyle{empty}
        \newtheorem{Condition}{Condition}

    When I want to typeset a condition, I write:

        \begin{Condition}[name]
        \label{cnd:nm}
        foo foo foo
        \end{Condition}

    The name appears boldface on the same line as the start of the text of the condition, with no number or anything. Perfect. What I want to do now is refer to the condition by some variant of the \ref command:

    \ref calls the number [which is not displayed anywhere else]
    \thref writes "Condition n" for the nth condition
    \nameref writes the name of the SECTION containing the label.

    A zref solution was suggested here, but seems unsatisfactory. Any clues?

    Read the article

  • ASP.NET Granting access to local resources

    - by Mina Samy
    Hi all, I have an ASP.NET web application that runs on a Windows Server 2003 machine. There is a form that reads and writes data to an XML file inside the application's directory. I always grant the NETWORK SERVICE user full control on my application folder so that the form can read and write the XML file. I put the application on another Windows Server 2003 server and did the same steps as above, but I was getting an access-denied exception on the form that reads and writes the XML. I did some searching and found that if you grant the ASPNET user full control of the directory it works; I did that and it worked fine. My question is: what is the difference between granting full control permissions to the NETWORK SERVICE user and the ASPNET user? And what difference between the two servers could have caused this issue? Thanks

    Read the article

  • How can I synchronize database access between a write-thread and a read-thread?

    - by Runcible
    My program has two threads:

    1. A main execution thread that handles user input and queues up database writes
    2. A utility thread that wakes up every second and flushes the writes to the database

    Inside the main thread, I occasionally need to make reads on the database. When this happens, performance is not important, but correctness is. (In a perfect world, I would be reading from a cache, not making a round-trip to the database - but let's put that aside for the sake of discussion.) How do I make sure that the main thread sees a correct / quiescent database? A standard mutex won't work, since I run the risk of having the main thread grab the mutex before the data gets flushed to the database. This would be a big race condition. What I really want is some sort of mutex that lets the main thread of execution proceed only AFTER the mutex has been grabbed and released once. Does such a thing exist? What's the best way to solve this problem?
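    What's being described is essentially a condition variable: the reader blocks until the writer signals that the pending queue has drained. A minimal sketch of that handshake (in Python; db_flush and db_read are placeholders for the real database calls):

        import threading
        import time

        cond = threading.Condition()
        pending_writes = []   # queued by the main thread, drained by the utility thread

        def flush_loop(db_flush):
            # Utility thread: once a second, flush the queue and wake any waiting reader.
            while True:
                with cond:
                    if pending_writes:
                        db_flush(list(pending_writes))
                        pending_writes.clear()
                    cond.notify_all()    # the database is quiescent at this instant
                time.sleep(1.0)

        def read_consistent(db_read):
            # Main thread: block until everything queued so far has been flushed.
            with cond:
                while pending_writes:
                    cond.wait()
                return db_read()         # no unflushed writes can race with this read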

    Read the article

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays and the performance is good, but the memory usage is high (as expected). I would like an array implementation that doesn't waste space on values that are never used and that allows an index offset other than zero. As an example, if my numbers start at 1,000,000 I would like to be able to index my array starting at 1,000,000 and not be required to waste memory on a million unused values. Array reads and writes need to be fast. Expanding into new territory can incur a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do this? Thanks!
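    Short of a dedicated library, Python's dict already gives O(1) average-case reads and writes and costs nothing for untouched indices, so a thin wrapper covers both requirements (sparse storage and an arbitrary starting index). A minimal sketch, with the default value chosen arbitrarily:

        class SparseArray:
            """Dict-backed 1-D array: O(1) get/set, no storage for untouched slots."""

            def __init__(self, default=0):
                self._cells = {}
                self._default = default

            def __getitem__(self, index):
                return self._cells.get(index, self._default)

            def __setitem__(self, index, value):
                self._cells[index] = value

        a = SparseArray()
        a[1000000] = 42      # indices can start anywhere; nothing below is allocated
        print(a[1000000])    # 42
        print(a[5])          # 0 -- the default for untouched cells
        print(len(a._cells)) # 1 -- only the written slot uses memory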

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing, and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process the 500 MB from the MemoryStream, dispose it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least in 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
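    The memory-until-threshold hybrid proposed above is common enough that some standard libraries ship it ready-made; Python's tempfile.SpooledTemporaryFile, for instance, buffers in memory and transparently rolls over to a real temp file once a size limit is crossed. A sketch of that behavior, as a reference point for the MemoryStream/FileStream combination:

        import tempfile

        # Stay in memory until ~500 MB has been written, then spill to a temp file.
        buf = tempfile.SpooledTemporaryFile(max_size=500 * 1024 * 1024)

        for packet in (b"x" * 4096 for _ in range(10)):  # stand-in for received packets
            buf.write(packet)

        buf.seek(0)        # rewind and process, wherever the bytes ended up living
        data = buf.read()
        print(len(data))   # 40960
        buf.close()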

    Read the article

  • boost.asio's socket's receive/send functions are bad?

    - by the_drow
    "Data may be read from or written to a connected TCP socket using the receive(), async_receive(), send() or async_send() member functions. However, as these could result in short writes or reads, an application will typically use the following operations instead: read(), async_read(), write() and async_write()." I don't really understand that remark, as read(), async_read(), write() and async_write() can also end up in short writes or reads, right? Why are those functions not the same? Should I use them at all? Can someone clarify that remark for me?
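    The distinction the quoted docs are drawing is between a single system-level transfer, which may move fewer bytes than requested, and a composed operation that loops until the full amount is transferred (or an error occurs). The same split exists with plain sockets; here is the looping variant sketched in Python:

        import socket

        def recv_exactly(sock, n):
            """Composed read: loop over short recv() calls until n bytes arrive."""
            chunks = []
            remaining = n
            while remaining:
                chunk = sock.recv(remaining)  # may return fewer bytes than asked for
                if not chunk:
                    raise ConnectionError("socket closed before %d bytes" % n)
                chunks.append(chunk)
                remaining -= len(chunk)
            return b"".join(chunks)

        # sock.recv() plays the role of boost::asio's socket.receive();
        # recv_exactly() plays the role of the composed boost::asio::read().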

    Read the article

  • Fibonacci Numbers in Haskell

    - by boraer
    Hi everybody, I need to convert my F# code to Haskell, but I am so new to Haskell that I cannot manage this. My code simply reads data from the keyboard; if the data is not an integer it returns an error message; otherwise it calculates the first n Fibonacci numbers, collects them in a list, and then writes the list into a txt file. Here is my code:

        open System

        let rec fib n =
            match n with
            | 0 -> 0
            | 1 -> 1
            | 2 -> 1
            | n -> fib (n-1) + fib (n-2)

        let printFibonacci list =
            for i = 0 to (List.length list) - 1 do
                printf "%d " (list.Item(i))

        let writeToFile list =
            let file = System.IO.File.Create("C:\out2.txt")
            let mutable s = ""
            let writer = new System.IO.StreamWriter(file)
            try
                for i = 0 to (List.length list) - 1 do
                    s <- list.Item(i).ToString()
                    writer.Write(s + " ")
            finally
                writer.Close()
                file.Dispose()
            printfn "Writed To File"

        let mutable control = true
        let mutable num = 0

        while control do
            try
                printfn "Enter a Number:"
                num <- Convert.ToInt32(stdin.ReadLine())
                let listFibonacci = [for i in 0 .. num-1 -> fib(i)]
                printFibonacci(listFibonacci)
                printfn "\n%A" (listFibonacci)
                writeToFile(listFibonacci)
                control <- false
            with
            | :? System.FormatException ->
                printfn "Number Format Exception"
                Console.ReadKey true |> ignore

    Read the article

  • Why are there performance differences when a SQL function is called from .Net app vs when the same c

    - by Dan Snell
    We are having a problem in our test and dev environments with a function that runs quite slowly at times when called from a .NET application. When we call this function directly from Management Studio it works fine. Here are the differences when they are profiled:

    From the application: CPU: 906, Reads: 61853, Writes: 0, Duration: 926
    From SSMS: CPU: 15, Reads: 11243, Writes: 0, Duration: 31

    Now we have determined that when we recompile the function, the performance returns to what we are expecting, and the performance profile when run from the application matches what we get when we run it from SSMS. It will start slowing down again at what appear to be random intervals. We have not seen this in prod, but that may be in part because everything is recompiled there on a weekly basis. So what might cause this sort of behavior?

    Read the article

  • Is there a 'catch' with FastFormat?

    - by Roddy
    I just read about the FastFormat C++ i/o formatting library, and it seems too good to be true: faster even than printf, typesafe, and with what I consider a pleasing interface:

        // prints: "This formats the remaining arguments based on their order - in this
        // case we put 1 before zero, followed by 1 again"
        fastformat::fmt(std::cout, "This formats the remaining arguments based on their order - in this case we put {1} before {0}, followed by {1} again", "zero", 1);

        // prints: "This writes each argument in the order, so first zero followed by 1"
        fastformat::write(std::cout, "This writes each argument in the order, so first ", "zero", " followed by ", 1);

    This looks almost too good to be true. Is there a catch? Have you had good, bad or indifferent experiences with it? CW on this question, as there's probably no right answer...

    Read the article

  • .NET 3.5SP1 64-bit memory model vs. 32-bit memory model

    - by James Dunne
    As I understand it, the .NET memory model on a 32-bit machine guarantees 32-bit word writes and reads to be atomic operations, but does not provide this guarantee for 64-bit words. I have written a quick tool to demonstrate this effect on a Windows XP 32-bit OS and am getting results consistent with that memory model description. However, I have taken this same tool's executable and run it on a Windows 7 Enterprise 64-bit OS and am getting wildly different results. Both machines have identical specs, just with different OSes installed. I would have expected that the .NET memory model would guarantee writes and reads to BOTH 32-bit and 64-bit words to be atomic on a 64-bit OS. I find results completely contrary to BOTH assumptions: 32-bit reads and writes are not demonstrated to be atomic on this OS. Can someone explain to me why this fails on a 64-bit OS?

    Tool code:

        using System;
        using System.Threading;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var th = new Thread(new ThreadStart(RunThread));
                    var th2 = new Thread(new ThreadStart(RunThread));
                    int lastRecordedInt = 0;
                    long lastRecordedLong = 0L;
                    th.Start();
                    th2.Start();
                    while (!done)
                    {
                        int newIntValue = intValue;
                        long newLongValue = longValue;
                        if (lastRecordedInt > newIntValue)
                            Console.WriteLine("BING(int)! {0} > {1}, {2}", lastRecordedInt, newIntValue, (lastRecordedInt - newIntValue));
                        if (lastRecordedLong > newLongValue)
                            Console.WriteLine("BING(long)! {0} > {1}, {2}", lastRecordedLong, newLongValue, (lastRecordedLong - newLongValue));
                        lastRecordedInt = newIntValue;
                        lastRecordedLong = newLongValue;
                    }
                    th.Join();
                    th2.Join();
                    Console.WriteLine("{0} =? {2}, {1} =? {3}", intValue, longValue, Int32.MaxValue / 2, (long)Int32.MaxValue + (Int32.MaxValue / 2));
                }

                private static long longValue = Int32.MaxValue;
                private static int intValue;
                private static bool done = false;

                static void RunThread()
                {
                    for (int i = 0; i < Int32.MaxValue / 4; ++i)
                    {
                        ++longValue;
                        ++intValue;
                    }
                    done = true;
                }
            }
        }

    Results on Windows XP 32-bit (Intel Core2 Duo P8700 @ 2.53GHz):

        BING(long)! 2161093208 > 2161092246, 962
        BING(long)! 2162448397 > 2161273312, 1175085
        BING(long)! 2270110050 > 2270109040, 1010
        BING(long)! 2270115061 > 2270110059, 5002
        BING(long)! 2558052223 > 2557528157, 524066
        BING(long)! 2571660540 > 2571659563, 977
        BING(long)! 2646433569 > 2646432557, 1012
        BING(long)! 2660841714 > 2660840732, 982
        BING(long)! 2661795522 > 2660841715, 953807
        BING(long)! 2712855281 > 2712854239, 1042
        BING(long)! 2737627472 > 2735210929, 2416543
        1025780885 =? 1073741823, 3168207035 =? 3221225470

    Notice how BING(int) is never written, demonstrating that 32-bit reads/writes are atomic on this 32-bit OS.

    Results on Windows 7 Enterprise 64-bit (Intel Core2 Duo P8700 @ 2.53GHz):

        BING(long)! 2208482159 > 2208121217, 360942
        BING(int)! 280292777 > 279704627, 588150
        BING(int)! 308158865 > 308131694, 27171
        BING(long)! 2549116628 > 2548884894, 231734
        BING(int)! 534815527 > 534708027, 107500
        BING(int)! 545113548 > 544270063, 843485
        BING(long)! 2710030799 > 2709941968, 88831
        BING(int)! 668662394 > 667539649, 1122745
        1006355562 =? 1073741823, 3154727581 =? 3221225470

    Notice that BING(long) AND BING(int) are both displayed! Why are the 32-bit operations failing, let alone the 64-bit ones?

    Read the article

  • OCFS2 Now Certified for E-Business Suite Release 12 Application Tiers

    - by sergio.leunissen
    Steven Chan writes that OCFS2 is now certified for use as a clustered filesystem for sharing files between all of your E-Business Suite application tier servers.  OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community.  It was accepted into Linux kernel 2.6.16.OCFS2 is included in Oracle Enterprise Linux (OEL) and supported under Unbreakable Linux support.

    Read the article

  • Good practices - database programming, unit testing

    - by Piotr Rodak
    Jason Brimhal wrote today on his blog that a new book, Defensive Database Programming, written by Alex Kuznetsov (blog), is coming to bookstores. Alex writes about various techniques that make your code safer to run. SQL injection is not the only vulnerability code may be exposed to. Others include inconsistent search patterns, unsupported character sets, locale settings, issues that may occur under high-concurrency conditions, and logic that breaks when certain conditions are not met. The...(read more)

    Read the article

  • Type of Blobs

    - by kaleidoscope
    With the release of the Windows Azure November 2009 CTP, we now have two types of blobs.

    Block Blob - This blob type has been in place since PDC 2008 and is optimized for streaming workloads. [Max size allowed: 200 GB]
    Page Blob - The November 2009 CTP adds a new blob type, optimized for random reads/writes, called the Page Blob. [Max size allowed: 1 TB]

    More details can be found at: http://geekswithblogs.net/IUnknown/archive/2009/11/16/azure-november-ctp-announced.aspx

    Amit, S

    Read the article

  • {smartassembly} Software for Code Obfuscation

    {smartassembly} is a tool for ensuring that the source code of your commercial .NET application isn't visible to anyone with .NET Reflector. Matteo Slaviero, who writes for us about encryption in .NET, asked if he could write a review of {smartassembly} for Simple-Talk. Because we like the product too, and Red Gate Software had recently taken it over, we were happy to agree.

    Read the article
