Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • .NET PerformanceCounter for Hard Faults/sec

    Vista's Resource Monitor includes a reading for "Hard Faults/sec". Is there an equivalent performance counter I can use in C# to get this reading? I've tried the "Page Faults/sec" under the memory category, but that appears to be something different.

    Read the article

  • How long do you keep log files?

    - by Alex
    I have an application which writes its log files to a special folder. Now I'd like to add functionality that deletes these logs automatically after a defined period of time. But how long should I keep the log files? What are "good" default values (7 or 180 days)? Or do you prefer other criteria (e.g. maximum disk space used)?
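
    Whatever retention period you settle on, the deletion itself is simple to automate. A minimal sketch in Python, where the log folder path and the age defaults are placeholders, not values from the question:

        import os
        import time

        def purge_old_logs(log_dir, max_age_days=30):
            # Delete files in log_dir whose modification time is older than max_age_days.
            cutoff = time.time() - max_age_days * 24 * 60 * 60
            for name in os.listdir(log_dir):
                path = os.path.join(log_dir, name)
                if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                    os.remove(path)

        # Example: keep one week of logs
        purge_old_logs("/var/log/myapp", max_age_days=7)

    A size-based criterion would work the same way: sort the files by modification time and delete the oldest ones until the folder drops back under the chosen quota.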

    Read the article

  • How fast is Berkeley DB SQL compared to SQLite?

    - by dan04
    Oracle recently released a Berkeley DB back-end to SQLite. I happen to have a hundreds-of-megabytes SQLite database that could very well benefit from "improved performance, concurrency, scalability, and reliability", but Oracle's site appears to lack any measurements of the improvements. Has anyone here done some benchmarking?
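
    In the absence of published numbers, benchmarking your own workload is probably more telling than any general figure. A rough timing harness, sketched in Python against the stock SQLite file (the file name and query are placeholders); the same representative queries would then be run against a build that uses the Berkeley DB SQL back-end to get the comparison:

        import sqlite3
        import time

        def time_query(db_path, query, runs=5):
            # Run the query several times against the given database file and return the best wall-clock time.
            conn = sqlite3.connect(db_path)
            best = float("inf")
            for _ in range(runs):
                start = time.perf_counter()
                conn.execute(query).fetchall()
                best = min(best, time.perf_counter() - start)
            conn.close()
            return best

        # Placeholder file name and query - substitute a representative query from your workload
        print(time_query("mydata.db", "SELECT COUNT(*) FROM orders WHERE total > 100"))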

    Read the article

  • How do I remove eTag headers from IIS7?

    - by Brent Broome
    Per Yahoo's best practices for high-performance web sites, I'd like to remove ETags from my headers (I'm manually managing all my caching and have no need for ETags... and when/if I need to scale to a farm, I'd really like them gone). I'm running IIS7 on Windows Server 2008. Does anyone know how I can do this?

    Read the article

  • How to programmatically detect SATA drive unplug in SuSE Linux?

    - by Steven Behnke
    Does anyone know of a method I can use to programmatically detect if a SATA hard drive has been unplugged? Our file system is mounted in READ-ONLY mode when we need to detect the removal of the drive. We noticed the other day that we were able to unplug a hard drive and everything continued to run without a hitch until the next time we attempted to read from a file on disk.
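
    One hedged approach, sketched in Python rather than C for brevity: poll the block device node and its entry under /sys/block and treat their disappearance as the unplug signal. Note that this still depends on the kernel/controller noticing the hot-removal; on some controllers the device only vanishes after a failed I/O, which matches the behaviour described above. The device paths and polling interval below are placeholders; a udev/netlink listener would be the event-driven alternative.

        import os
        import time

        DEVICE = "/dev/sdb"           # placeholder: the SATA drive to watch
        SYS_ENTRY = "/sys/block/sdb"  # the kernel drops this entry once it notices the removal

        def drive_present():
            # True while both the device node and its sysfs entry still exist.
            return os.path.exists(DEVICE) and os.path.exists(SYS_ENTRY)

        while drive_present():
            time.sleep(1)             # arbitrary polling interval
        print("drive removed")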

    Read the article

  • Help w/ Sluggish "rake cucumber"

    - by Eric M.
    I've been trying to debug some super slow performance when running my Cucumber features. I've run various calls through ruby-prof and think I see the bottlenecks (I'm not too familiar with using ruby-prof) but do not know the cause or, more importantly, the solution. I've included below the output from running rake cucumber. http://dl.dropbox.com/u/1788885/rake_cucumber.txt Does anyone have any idea why this is happening or how I could go about debugging it further? Thanks, Eric

    Read the article

  • Just so as not to be ignorant.

    - by atch
    Could anyone explain to me why processor manufacturers claim that their processors can perform so many thousands (or millions) of operations per second, and yet a typical program (Word, VS, etc.) on my machine with 4 GB of RAM and a 3.5 GHz CPU takes no less than 10 seconds to start? I have to mention that I've just formatted the disk and ticked every necessary box to optimize my machine. So if, for example, Outlook starts in 10 seconds, I wonder how many millions of operations have to be performed to launch such a program? Thanks
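
    Just to put rough numbers on it (a hedged back-of-the-envelope, not a measurement): a 3.5 GHz core has on the order of tens of billions of cycle slots available during a 10-second start-up, so raw instruction throughput is not what limits start-up time; disk seeks and paging code and data into memory normally dominate.

        # Back-of-the-envelope: cycle slots available during a 10-second start-up
        clock_hz = 3.5e9                   # ~3.5 GHz core, as in the question
        startup_seconds = 10
        print(clock_hz * startup_seconds)  # 3.5e+10 cycle slots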

    Read the article

  • GWT Table that supports dynamic filtering

    - by Holograham
    This question is similar to http://stackoverflow.com/questions/161686/gwt-table-that-supports-sorting-scrolling-and-filtering However I would prefer open source and I am looking for snappy performance. I want a good way to perform dynamic filtering on rows. SmartGWT's adaptive filter looks interesting. http://www.smartclient.com/smartgwt/showcase/#grid_adaptive_filter_featured_category Anyone have any experience with this?

    Read the article

  • Is Ruby on Rails slow with medium traffic?

    - by IHawk
    Hello! I did some searching on Google and read some posts, articles, and benchmarks about Ruby on Rails being slow. I am planning to build a website that will have a good number of users inserting data, and there will be some applications to process this data (maybe in Ruby; you can help me choose the language). What is the real performance of Ruby on Rails with large amounts of traffic? Thank you!

    Read the article

  • Why is SpringSource Tool Suite (STS) so slow? And how can I fix it?

    - by colbeerhey
    I've been running STS 2.3.2 on a MacBook Pro for a few days now. I'm finding the performance to be significantly slower than any other build of Eclipse I've used. For example, switching from one tab to another can take up to 4 seconds. I tried turning off much of the validation, and increasing the memory, but it's not making a difference. Are others having similar experiences?

    Read the article

  • Uninstall exceptions in InstallShield

    - by SiN
    Hello, I have a setup project with InstallShield 2010. I'm deploying a configuration file during installation. However, on uninstall, InstallShield decides to delete it (which is normal). The question is: is there a way to keep the file on the hard disk even after the application is uninstalled? I don't want to reconfigure the application every time the user uninstalls/installs. Edit: I'm using an MSI project.

    Read the article

  • Using gcc compiler flag in Xcode

    - by tech74
    Hi, Shark has identified an area of code to be improved - "Unaligned loop start" - and recommends adding -falign-loops=16 (a gcc compiler flag). I've added this to "Other C Flags" in iPhone Xcode, both in the dependent project and in the top-level project. However, it still does not seem to affect the performance, and Shark is still reporting the same problem, so it appears it didn't work. Am I doing this correctly?

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .NET app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. For example, one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date; another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint used(?): a document table with int_field_1, int_field_2, date_field_1, date_field_2, etc., plus a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value pair model in the DB due to the volume of documents. This way we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:
    - Is a NoSQL DB a good fit?
    - Is MongoDB the best choice for ASP.NET? (I saw Raven and Velocity, but they're still kind of beta.)
    - Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would only be used when a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs. the denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
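
    For comparison, the mapping-table approach described above translates fairly directly into a document store, since each document can simply carry its own tenant-specific index fields. A minimal sketch with MongoDB via pymongo; the field names, tenant database name, and index choices are illustrative placeholders, not the project's actual schema:

        from pymongo import MongoClient, ASCENDING

        client = MongoClient()           # placeholder connection
        db = client["tenant_acme"]       # one database per tenant, as in the SQL design
        docs = db["documents"]

        # An "invoice" document carries its own index fields; another tenant's
        # "application form" would simply carry different ones.
        docs.insert_one({
            "doc_type": "invoice",
            "customer_id": "C-1001",
            "invoice_number": "INV-2044",
            "invoice_date": "2010-06-01",
            "blob_key": "tiff/2010/06/abc123",   # pointer to the stored image
        })

        # Secondary indexes per searchable field, instead of int_field_1 / date_field_1 columns
        docs.create_index([("doc_type", ASCENDING), ("invoice_number", ASCENDING)])

        # A "custom document type search" becomes an ordinary query
        for d in docs.find({"doc_type": "invoice", "customer_id": "C-1001"}):
            print(d["invoice_number"], d["invoice_date"])

    The action history could equally stay in MSSQL keyed by the document's id; nothing in the sketch requires it to live in the document store.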

    Read the article

  • MBR Booting from DOS

    - by eflukx
    For a project I would like to invoke the MBR on the first hard disk directly from DOS. I've written a small assembler program that loads the MBR into memory at 0:7c00h and does a far jump to it. I've put my utility on a bootable floppy. The disk (HD0, 0x80) I'm trying to boot has a TrueCrypt boot loader on it. It shows the TrueCrypt screen, but after typing in the password it crashes the system. When I run my little utility (w00t.com) on a normal WinXP machine it seems to crash immediately. Apparently I'm forgetting some crucial stuff the BIOS normally does; my guess is it's something trivial. Can someone with better bare-metal DOS and BIOS experience help me out? Here's my code:

        .MODEL tiny
        .386

        _TEXT SEGMENT USE16

        INCLUDE BootDefs.i

        ORG 100h

        start:
            ; http://vxheavens.com/lib/vbw05.html
            ; Before DOS has booted the BIOS stores the amount of usable lower memory
            ; in a word located at 0:413h in memory. We're going to erase this value because
            ; we have booted dos before loading the bootsector, and dos is fat (and ugly).

            ; fake free memory
            ;push ds
            ;push 0
            ;pop ds
            ;mov ax, TC_BOOT_LOADER_SEGMENT / 1024 * 16 + TC_BOOT_MEMORY_REQUIRED
            ;mov word ptr ds:[413h], ax   ;ax = memory in K
            ;pop ds

            ;lea si, memory_patched_msg
            ;call print

            ;mov ax, cs
            mov ax, 0
            mov es, ax

            ; read first sector to es:7c00h (== cs:7c00)
            mov dl, 80h
            mov cl, 1
            mov al, 1
            mov bx, 7c00h                ;load sector to es:bx
            call read_sectors

            lea si, mbr_loaded_msg
            call print

            lea si, jmp_to_mbr_msg
            call print

            ;Set BIOS default values in environment
            cli
            mov dl, 80h                  ;(drive C)
            xor ax, ax
            mov ds, ax
            mov es, ax
            mov ss, ax
            mov sp, 0ffffh
            sti

            push es
            push 7c00h
            retf                         ;Jump to MBR code at 0:7c00h

        ; Print string
        print:
            xor bx, bx
            mov ah, 0eh
            cld
        @@: lodsb
            test al, al
            jz print_end
            int 10h
            jmp @B
        print_end:
            ret

        ; Read sectors of the first cylinder
        read_sectors:
            mov ch, 0                    ; Cylinder
            mov dh, 0                    ; Head
            ; DL = drive number passed from BIOS
            mov ah, 2
            int 13h
            jnc read_ok
            lea si, disk_error_msg
            call print
        read_ok:
            ret

        memory_patched_msg  db 'Memory patched', 13, 10, 7, 0
        mbr_loaded_msg      db 'MBR loaded', 13, 10, 7, 0
        jmp_to_mbr_msg      db 'Jumping to MBR code', 13, 10, 7, 0
        disk_error_msg      db 'Disk error', 13, 10, 7, 0

        _TEXT ENDS
        END start

    Read the article

  • .DS_Store valid in Leopard and Snow Leopard

    - by madmw
    Scripts that generate a DMG disk image for the Mac usually copy a .DS_Store file with folder customizations (icon sizes, positions, background image, etc.). You customize a read/write copy of the DMG, then copy the .DS_Store and use it to automate the DMG generation. It seems a .DS_Store made in Leopard doesn't work in Snow Leopard and vice versa: a DMG created in Snow Leopard won't show the background image in Leopard. Is there a way to make a .DS_Store file that works in both versions of OS X?

    Read the article

  • ASP.NET custom templates, still ASP.NET controls possible?

    - by Sha Le
    Hello: we currently do not use ASP.NET controls (no Web Forms). The way we do it is:
    1. Read the HTML file from disk.
    2. Look up the database, parse the tags, and populate the data.
    Finally, Response.Write(page.ToString()); - so there is no possibility of using ASP.NET controls here. What I am wondering is: if we use ASP.NET controls in those HTML files, is there a way to process them during step 2? Thanks, and I appreciate your response.
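
    For orientation, the tag-parsing step described above is essentially string templating. A language-neutral sketch of step 2 in Python, where the {{name}} tag syntax, file name, and lookup values are placeholders and not the actual implementation; real ASP.NET server controls, by contrast, would have to be handed to the ASP.NET page parser (e.g. as a compiled page or user control) rather than handled by plain substitution like this:

        import re

        def render(template_text, lookup):
            # Replace placeholder tags such as {{customer_name}} with values from the database lookup.
            return re.sub(r"\{\{(\w+)\}\}",
                          lambda m: str(lookup.get(m.group(1), "")),
                          template_text)

        html = open("page.html").read()                     # step 1: read the HTML file from disk
        print(render(html, {"customer_name": "Acme Ltd"}))  # step 2: parse tags and populate data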

    Read the article

  • one high-end server with one Application Server or multiple Application Servers?

    - by elgcom
    If I have a high-end server, for example with 1 TB of memory and 8 x 4-core CPUs, will it bring more performance if I run multiple App Servers (on different JVMs) rather than just one App Server? On the App Server I will run some services (EARs with message-driven beans) which exchange messages with each other. By the way, does 64-bit Java now have no memory limitation any more? http://java.sun.com/products/hotspot/whitepaper.html#64

    Read the article

  • A/B testing on App Engine?

    - by Silver Dragon
    What would be the simplest implementation of an A/B testing system running on App Engine? I'm especially interested in the performance implications of using the Datastore as the back-end (with its long query times), and in the database design.
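
    One way to keep the Datastore off the hot path, sketched here as an assumption rather than a recommendation for any particular framework: assign variants with a stable hash of the user or session id, so assignment needs no reads at request time, and only conversion events get persisted. The experiment name and variants below are placeholders:

        import hashlib

        def assign_variant(user_id, experiment="signup_button", variants=("A", "B")):
            # Deterministic bucketing: the same user always sees the same variant,
            # so nothing has to be read from the Datastore at assignment time.
            digest = hashlib.md5((experiment + ":" + user_id).encode("utf-8")).hexdigest()
            return variants[int(digest, 16) % len(variants)]

        print(assign_variant("user-42"))   # e.g. "B"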

    Read the article

  • Data aggregation: MongoDB vs MySQL

    - by Dimitris Stefanidis
    I am currently researching a back-end to use for a project with demanding data-aggregation requirements. The main project requirements are the following.

    - Store millions of records per user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year.
    - Data aggregation on those entries must be performed on the fly. The users need to be able to filter the entries by a ton of available filters and then be presented with summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge.
    - Users are going to have access to their own data only, but it would be nice if anonymous stats could be calculated for all the data.
    - The data is going to arrive in batches most of the time, e.g. the user will upload the data every day and it could be around 3,000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example.

    I made a simple test of creating a table with 1 million rows and performing a simple sum of one column, both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I have also made the test with CouchDB and had much worse results.

    What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible?

    As it seems from my test (maybe I have done something wrong), with the current performance it's impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or any insights that might be of help for the implementation of the project?

    Thanks, Dimitris
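
    For reference, an on-the-fly sum in MongoDB is normally expressed with the server-side aggregation pipeline, which only exists in later MongoDB releases; at the time of the original test, map-reduce was the only server-side option, which goes a long way toward explaining the 20-second figure. A sketch with pymongo, where the database, collection, and field names are placeholders and the SQL equivalent is shown in a comment:

        from pymongo import MongoClient

        coll = MongoClient()["mydb"]["entries"]   # placeholder database and collection names

        # SQL equivalent: SELECT user_id, SUM(amount), AVG(amount) FROM entries
        #                 WHERE category = 'food' GROUP BY user_id;
        pipeline = [
            {"$match": {"category": "food"}},             # whatever filters the user picked
            {"$group": {"_id": "$user_id",
                        "total": {"$sum": "$amount"},
                        "average": {"$avg": "$amount"}}},
        ]
        for row in coll.aggregate(pipeline):
            print(row["_id"], row["total"], row["average"])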

    Read the article
