Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 225/837

  • FileSystemWatcher: how to know when the work is done?

    - by Snowy
    I set up a FileSystemWatcher on a local filesystem directory. I only want to know when files are added to the directory so they can be moved to another filesystem. I seem to be able to detect when the first file is in, but what I actually want to know is when all files from a given copy operation are done. If I used Windows Explorer to copy files from one directory to another, Explorer would tell me that there are n seconds left in the transfer, so while there is some activity for the begin-transfer and end-transfer of each file, it appears that there is also something for the begin-transfer and end-transfer of all files. I wonder if there is something similar I can do just with the .NET Framework. I would like to know when "all" files are in, not just a single file in a "transaction". If there is nothing baked in, maybe I should come up with some kind of waiting/counting scheme so that I only do my work when a job is "done". Not sure if I'm making 100% sense on this one, so please comment if anything is unclear. Thanks.
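    A minimal sketch of the waiting/counting idea mentioned above, assuming a quiet-period heuristic (the directory path, the 2-second window, and the class name are placeholders, not from the original post). FileSystemWatcher has no built-in end-of-copy notification, so the timer simply restarts on every file event and fires once the events stop:

    using System;
    using System.IO;
    using System.Timers;

    class CopyBatchWatcher
    {
        // Assumption: the copy job is "done" once no file events have arrived
        // for QuietPeriodMs milliseconds. Tune this for your environment.
        const double QuietPeriodMs = 2000;
        static readonly Timer quiet = new Timer(QuietPeriodMs) { AutoReset = false };

        static void Main()
        {
            var watcher = new FileSystemWatcher(@"C:\incoming");
            watcher.Created += (s, e) => { quiet.Stop(); quiet.Start(); };  // restart the quiet timer on every new file
            watcher.Changed += (s, e) => { quiet.Stop(); quiet.Start(); };  // and on writes while a file is still copying
            quiet.Elapsed += (s, e) => Console.WriteLine("No events for 2s - treat the batch as done and move the files");
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }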

    Read the article

  • WordPress website getting hung up for 10-15 seconds

    - by synergy989
    Problem: I have a website (which loads fine): http://testupg.videve.com/ Go to the blog section (and it begins to load, gets hung up, then loads): http://testupg.videve.com/blog/ It's all one WordPress install. The only pages that are getting hung up are the blog page itself and any article page. The video pages load fine, etc. What I have done and noticed: if I disable all plugins, the blog page loads fine. I narrowed it down to the DisplayBuddy plugins (Featured posts, Carousel, Slider); as soon as I enable any single one of them, the site loads slowly. The thing that doesn't make sense is that I disabled the sidebar (deleted it from the template) and the article page still loaded slowly, even though it has ZERO instances of this plugin. I disable the plugin and the article page loads fine. How on earth can this be? I am hoping the case above can give just a little bit of insight. One other thing: the video pages use a custom taxonomy, so I am wondering if it's something to do with the default WordPress taxonomy. Any help is GREATLY appreciated; I have been at this thing for hours and it's time to call in some support. Cheers

    Read the article

  • SQL Database Schema Design for a Large 3-Billion-Relationship Database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation: 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records. And if each Part can have up to 2,000 finish options (a finish is a paint color)... NOTE: only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records, or as an equation: 1.5 million (Product-to-Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • What is the cost of object creation?

    - by Tony
    Hi. If I have to choose between a static method and creating an instance to use an instance method, I will always choose static methods. But what exactly is the overhead of creating an instance? For example, I saw a DAL which could have been done with static classes, but they chose to make it instance-based, so now in the BLL, at every single call, they call something like new Customer().GetData(). How bad can this be? Thanks
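    One rough way to put a number on the overhead is a micro-benchmark (a sketch only: Customer here is a stand-in with a trivial method, so real results depend on what the actual constructor does and on garbage-collection pressure):

    using System;
    using System.Diagnostics;

    class Customer
    {
        public int GetData() { return 42; }                // stand-in instance method
        public static int GetDataStatic() { return 42; }   // stand-in static method
    }

    class Program
    {
        static void Main()
        {
            const int N = 10000000;
            long sum = 0;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++) sum += Customer.GetDataStatic();
            Console.WriteLine("static call:    " + sw.ElapsedMilliseconds + " ms");

            sw.Restart();
            for (int i = 0; i < N; i++) sum += new Customer().GetData();   // allocates a short-lived object per call
            Console.WriteLine("new + instance: " + sw.ElapsedMilliseconds + " ms (sum=" + sum + ")");
        }
    }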

    Read the article

  • Session Timeout and page response time

    - by Johnny5
    Hi, I'm load testing an ASP.NET app. The load test simulates 500 users doing searches on the site and browsing the results. I'm observing that the more I reduce the session timeout limit (in web.config), the better the page response time. For example, with a timeout of 10 minutes, I got an average response time of 8.35 seconds. With a timeout of 3 minutes, the average response time for the same page is 3.98 seconds. The session state is stored "InProc". I suppose the memory used by the "no longer used but still active" sessions may be the cause. But even if there is more memory used when the timeout is at 10 minutes, there is still plenty of memory available (about 2.7 GB). Any ideas?

    Read the article

  • Most efficient way of checking if Date object and Calendar object are in the same month

    - by Indigenuity
    I am working on a project that will run many thousands of comparisons between dates to see if they are in the same month, and I am wondering what the most efficient way of doing it would be. This isn't exactly what my code looks like, but here's the gist:

    List<Date> dates = getABunchOfDates();
    Calendar month = Calendar.getInstance();
    for (int i = 0; i < numMonths; i++) {
        for (Date date : dates) {
            if (sameMonth(month, date)) { /* .. doSomething */ }
        }
        month.add(Calendar.MONTH, -1);
    }

    Creating a new Calendar object for every date seems like a pretty hefty overhead when this comparison will happen thousands of times, so I kind of want to cheat a bit and use the deprecated methods Date.getMonth() and Date.getYear():

    public static boolean sameMonth(Calendar month, Date date) {
        // Date.getYear() is offset by 1900, so adjust the Calendar year to match
        return month.get(Calendar.YEAR) - 1900 == date.getYear()
            && month.get(Calendar.MONTH) == date.getMonth();
    }

    I'm pretty close to just using this method, since it seems to be the fastest, but is there a faster way? And is this a foolish way, since the Date methods are deprecated? Note: this project will always run with Java 7.

    Read the article

  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and the images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched. My questions are:

    1. Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading.
    2. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options?

    As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

    Read the article

  • SQL Profiler and Tuning Advisor for Reporting Services - what events should be selected?

    - by chris
    I've used the SQL Profiler to generate a trace file, and the Tuning Advisor to take that trace file and provide some recommendations on DB updates. However, when running against a Reporting Services server, the profiler doesn't seem to be capturing any of the queries. I'm logging the defaults (SQL:BatchCompleted and Starting, RPC:Completed, and Sessions - Existing Connections). What events should I be capturing in SQL Profiler in order to run the Tuning Advisor?

    Read the article

  • Optimizing NSNumber numberWithInt:

    - by Riviera
    I am profiling an iPhone app and I noticed a strange pattern. In a certain block of code that's called quite frequently...

    [item setQuadrant:[NSNumber numberWithInt:a]];
    [item setIndex:[NSNumber numberWithInt:b]];
    [item setTimestamp:[NSNumber numberWithInt:c]];
    [item setState:[NSNumber numberWithInt:d]];
    [item setCompletionPercentage:[NSNumber numberWithInt:e]];
    [item setId_:[NSNumber numberWithInt:f]];

    ...the first call to [NSNumber numberWithInt:] takes an inordinate amount of time, on the order of 10-15x that of the remaining calls. I've verified that the results are consistent if I shuffle the lines (the first line is always the slow one, by the same ratio). Is there something going on that I'm not aware of? Perhaps this happens because this block is inside a try/catch?

    Read the article

  • Does having a longer string in a SQL LIKE expression hinder or help query execution speed?

    - by Allain Lalonde
    I have a DB query that'll cause a full table scan using a LIKE clause, and came upon a question I was curious about... Which of the following should run faster in MySQL, or would they both run at the same speed? Benchmarking might answer it in my case, but I'd like to know the why of the answer. The column being filtered contains a couple thousand characters, if that's important.

    SELECT * FROM users WHERE data LIKE '%=12345%'

    or

    SELECT * FROM users WHERE data LIKE '%proileId=12345%'

    I can come up with reasons why each of these might outperform the other, but I'm curious to know the logic.

    Read the article

  • Do table columns increase SELECT statement execution time?

    - by paokg4
    I have 2 tables with the same rows and the same data, but the first has more columns (fields). For example, I select the same 3 fields from both of them (SELECT a,b,c FROM mytable1 and then SELECT a,b,c FROM mytable2). I've tried running those queries on 100,000 records (for each table), but in the end I got the same execution time (0.0006 sec). Do you know if the number of columns (and, consequently, the fact that one table is bigger than the other) has anything to do with the query execution time?

    Read the article

  • Any reason not to use USE_ETAGS with CommonMiddleware in Django?

    - by allyourcode
    The only reason I can think of is that calculating ETags might be expensive. If pages change very quickly, the browser's cache is likely to be invalidated by the ETag. In that case, calculating the ETag would be a waste of time. On the other hand, giving a 304 response when possible minimizes the amount of time spent in transmission. What are some good guidelines for when ETags are likely to be a net winner when implemented with Django's CommonMiddleware?

    Read the article

  • Which is better: a span with runat=server or the default asp:Label?

    - by Space Cracker
    I have a simple ASP.NET web page that contains a table with about 5 TRs, and each row has 2 TDs. In the page load I get user data (5 properties) and display them in this page. The following are the first 2 rows:

    <table>
      <tr>
        <td>FullName</td>
        <td><span id="fullNameSpan" runat="server"></span></td>
      </tr>
      <tr>
        <td>Username</td>
        <td><span id="userNameSpan" runat="server"></span></td>
      </tr>
    </table>

    I have always used <asp:Label> to set values from code, but I noticed that a label is converted at runtime to a span, so I decided to use a span and make it runat=server so it can be accessed from code. So which is better to use: asp:Label or a span with runat=server?

    Read the article

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no indexed columns, nor any code for optimizing. I am positive I should index certain columns since I search them a lot. My question is: HOW do I test whether any part of my database will be slow? ATM I am using SQLite, and I will be switching to either MS SQL or MySQL depending on my host provider. Will creating 100,000 records in each table be enough? Or will that always be fast in SQLite and I need to do 1 million? Do I need 10 million before a database becomes slow? Also, how do I time it? I am using C#, so should I use Stopwatch, or is there an ADO.NET/SQLite function I should use?
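    A minimal timing sketch, assuming SQL Server through System.Data.SqlClient (the connection string, table and query below are placeholders; the same Stopwatch pattern applies to a SQLite ADO.NET provider). It runs the query once as a warm-up, then times a batch and averages:

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class QueryTimer
    {
        static void Main()
        {
            // Placeholders - point these at your own database and query.
            const string connStr = "Server=.;Database=MyTestDb;Integrated Security=true";
            const string sql = "SELECT COUNT(*) FROM Users WHERE LastName = @name";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@name", "Smith");
                conn.Open();

                cmd.ExecuteScalar();                    // warm-up so plan/data caching doesn't skew the first timing

                const int runs = 100;
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < runs; i++)
                    cmd.ExecuteScalar();
                sw.Stop();
                Console.WriteLine("avg: " + (sw.ElapsedMilliseconds / (double)runs) + " ms per query");
            }
        }
    }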

    Read the article

  • C++, using one byte to store two variables

    - by 2di
    Hi all, I am working on a representation of the chess board, and I am planning to store it in a 32-byte array, where each byte will be used to store two pieces (that way only 4 bits are needed per piece). Doing it that way results in an overhead for accessing a particular index of the board. Do you think this code can be optimised, or that a completely different method of accessing indexes could be used?

    #include <iostream>
    #include <cstdlib>
    using namespace std;

    char getPosition(unsigned char* c, int index) {
        // moving pointer
        c += (index >> 1);
        if (index & 1) {
            // odd number: taking right part
            return *c & 0xF;
        } else {
            // taking left part
            return *c >> 4;
        }
    }

    void setValue(unsigned char* board, char value, int index) {
        // moving pointer
        board += (index >> 1);
        if (index & 1) {
            // odd number: replace right part, save left value (only 4 bits)
            *board = (*board & 0xF0) + value;
        } else {
            // replacing left part
            *board = (*board & 0xF) + (value << 4);
        }
    }

    int main() {
        char* c = (char*)malloc(32);
        for (int i = 0; i < 64; i++) {
            setValue((unsigned char*)c, i % 8, i);
        }
        for (int i = 0; i < 64; i++) {
            cout << (int)getPosition((unsigned char*)c, i) << " ";
            if (((i + 1) % 8 == 0) && (i > 0)) {
                cout << endl;
            }
        }
        free(c);
        return 0;
    }

    I am equally interested in your opinions regarding chess representations, and in optimisation of the method above as a stand-alone problem. Thanks a lot.

    Read the article

  • Fastest way to check whether a certain index in a LINQ statement is null

    - by tehdoommarine
    Basic details: I have a LINQ statement that grabs some records from a database and puts them in a System.Linq.Enumerable:

    var someRecords = someRepoAttachedToDatabase.Where(p => true);

    Suppose this grabs tons (25k+) of records, and I need to perform operations on all of them. To speed things up, I have decided to use paging and perform the needed operations in blocks of 100 instead of on all of the records at the same time.

    The question: the line in question is the one where I count the number of records in the subset to see if we are on the last page; if the number of records in the subset is less than the page size, then there are no more records left. What I would like to know is: what is the fastest way to do this?

    Code in question:

    int pageSize = 100;
    bool moreData = true;
    int currentPage = 1;
    while (moreData)
    {
        var subsetOfRecords = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize); // this is also a System.Linq.Enumerable
        if (subsetOfRecords.Count() < pageSize) { moreData = false; } // line in question
        // do stuff to records in subset
        currentPage++;
    }

    Things I have considered:
    - subsetOfRecords.Count() < pageSize
    - subsetOfRecords.ElementAt(pageSize - 1) == null (causes an out-of-bounds exception - I can catch the exception and set moreData to false there)
    - converting subsetOfRecords to an array (converting someRecords to an array will not work due to the way subsetOfRecords is declared - but I am open to changing it)

    I'm sure there are plenty of other ideas that I have missed.
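    A sketch of one alternative worth measuring (not from the original post): materialize each page once with ToList(), so the count is a cheap property and the page is not enumerated twice. someRecords is stubbed with a plain range here just to make the sketch self-contained:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class PagingSketch
    {
        static void Main()
        {
            // Stand-in for someRepoAttachedToDatabase.Where(p => true)
            IEnumerable<int> someRecords = Enumerable.Range(1, 25000);

            const int pageSize = 100;
            int currentPage = 1;
            while (true)
            {
                // Materialize the page once; Count on a List is O(1) and does not re-enumerate the source.
                var page = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize).ToList();

                foreach (var record in page)
                {
                    // do stuff to each record here
                }

                if (page.Count < pageSize)
                    break;          // a short (or empty) page means this was the last one
                currentPage++;
            }
            Console.WriteLine("stopped on page " + currentPage);
        }
    }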

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands. It's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down, and script execution is running into minutes. This is a pain because the script is run frequently. I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements. I'm toying with the idea of isolating the string manipulation operations and writing them in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems? I have searched and I don't believe that this duplicates any existing questions.

    Read the article

  • Database for Large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, with each item in the dataset being roughly 1kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (so 500 million+ 1kB data chunks). What would be the best method of storing this dataset (it needs to allow adding more items and reading them rapidly, but never modifying already added data)? Would a MySQL DB using the binary BLOB format be appropriate? Or should each of these be stored as files on a file system? Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.

    Read the article

  • Does having to insert a record, then update the same record, warrant a 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items and we're storing the total cost of an order (based on the sum of prices on order lines) in the orders table.

    orders: id, ref, total_cost
    lines:  id, order_id, price

    In a simple application, the order and lines are created during the same step of the checkout process. So this means:

    INSERT INTO orders ...
    -- Get ID of inserted order record
    INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:

    1. create an order
    2. create lines on the order
    3. calculate the cost of the order based on its lines
    4. then update the record created in step 1 in the orders table

    This would mean a nullable total_cost field on orders, for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table. But I think it's redundant. Ideally, since everything required to calculate total costs (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?

    Read the article

  • Rails belongs_to SQL statement using NULL id

    - by Team Pannous
    When paginating through our Phrase table it takes very long to return the results. In the SQL logs we see many SQL requests which don't make sense to us:

    Phrase Load (7.4ms)  SELECT "phrases".* FROM "phrases" WHERE "phrases"."id" IS NULL LIMIT 1
    User Load (0.4ms)    SELECT "users".* FROM "users" WHERE "users"."id" IS NULL LIMIT 1

    These add up significantly. Is there a way to prevent querying against null ids? This is the underlying model:

    class Phrase < ActiveRecord::Base
      belongs_to :user
      belongs_to :response, :class_name => "Phrase", :foreign_key => "next_id"
    end

    Read the article

  • Perl launched from Java takes forever

    - by Wade Williams
    I know this is an absolute shot in the dark, but we're absolutely perplexed. A Perl (5.8.6) script run by Java (1.5) is taking more than an hour to complete. The same script, when run manually from the command line, takes 12 minutes to complete. This is on a Linux host. Logging is the same in both cases and the script is run with the same parameters in both cases. The script does some complex stuff like Oracle DB access, some scp's, etc., but again, it does the exact same actions in both cases. We're stumped. Has anyone ever run into a similar situation? If not, and you were faced with the same situation, how would you go about debugging it?

    Read the article
