Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • How to speed up the reading of innerHTML in IE8?

    - by Dennis Cheung
    I am using jQuery with the DataTables plugin, and I now have a big performance issue on the following line:

        aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220

    I make an AJAX call and get the result as a string of HTML. I convert it into HTML nodes, and that part is fine:

        var $result = $('<div/>').html(result).find("*:first"); // similar to $result = $(result) but much faster in Fx

    I then enable the result, turning the plain table into a sortable DataTable. The speed is acceptable in Firefox (around 4 seconds for 900 rows) but unacceptable in IE8 (more than 100 seconds). Checking with the built-in profiler, I found that the single line above takes 99.9% of the time. How can I speed it up? Anything I missed?

        nTrs = oSettings.nTable.getElementsByTagName('tbody')[0].childNodes;
        for ( i=0, iLen=nTrs.length ; i<iLen ; i++ )
        {
            if ( nTrs[i].nodeName == "TR" )
            {
                iThisIndex = oSettings.aoData.length;
                oSettings.aoData.push( {
                    "nTr": nTrs[i],
                    "_iId": oSettings.iNextId++,
                    "_aData": [],
                    "_anHidden": [],
                    "_sRowStripe": ''
                } );
                oSettings.aiDisplayMaster.push( iThisIndex );

                aLocalData = oSettings.aoData[iThisIndex]._aData;
                nTds = nTrs[i].childNodes;
                jInner = 0;

                for ( j=0, jLen=nTds.length ; j<jLen ; j++ )
                {
                    if ( nTds[j].nodeName == "TD" )
                    {
                        aLocalData[jInner] = nTds[j].innerHTML; // jquery.dataTables.js:2220
                        jInner++;
                    }
                }
            }
        }
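    A workaround often suggested for this kind of IE8 bottleneck is to skip the HTML round trip entirely: if the server can return the rows as JSON instead of a rendered table, DataTables can be fed the data directly and never needs to read innerHTML back out of the DOM. A minimal sketch (the URL and column layout are illustrative, not from the original post):

        // Hypothetical endpoint returning rows as a JSON array of arrays,
        // e.g. [["r1c1", "r1c2"], ["r2c1", "r2c2"], ...]
        $.getJSON('/report/rows', function (rows) {
            $('#report').dataTable({
                "aaData": rows,               // data handed over directly, no innerHTML reads
                "aaSorting": [[0, "asc"]]     // initial sort on the first column
            });
        });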

    Read the article

  • What kind of data processing problems would CUDA help with?

    - by Chris McCauley
    Hi, I've worked on many data matching problems, and very often they boil down to running many instances of CPU-intensive algorithms such as Hamming / edit distance, quickly and in parallel. Is this the kind of thing that CUDA would be useful for? What kinds of data processing problems have you solved with it? Is there really an uplift over a standard quad-core Intel desktop? Chris
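    This is very much the kind of workload CUDA targets: the same short, CPU-intensive comparison applied independently to huge numbers of records. As a rough illustration (names and data layout are hypothetical), a Hamming-distance pass over bit-packed records can be written as one thread per record:

        // One thread computes the Hamming distance between "query" and one record;
        // __popc counts the set bits in the XOR of two 32-bit words.
        __global__ void hammingKernel(const unsigned int *records, const unsigned int *query,
                                      int wordsPerRecord, int numRecords, int *distances)
        {
            int r = blockIdx.x * blockDim.x + threadIdx.x;
            if (r >= numRecords) return;

            int dist = 0;
            for (int w = 0; w < wordsPerRecord; ++w) {
                dist += __popc(records[r * wordsPerRecord + w] ^ query[w]);
            }
            distances[r] = dist;
        }

    Whether this beats a quad-core desktop depends heavily on how much data has to cross the PCIe bus per comparison.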

    Read the article

  • Combine static files or load in parallel

    - by Niall Collins
    I am at present introducing code to my site to combine CSS and JavaScript files. Is there a way, without including an external library, to load JavaScript asynchronously or in parallel? I have read on some blogs that combining files can be counterproductive, because the combined HTTP request is large and it is better to load multiple files in parallel. Opinions on this? I am caching my JavaScript/CSS, and would have thought it was better to combine rather than make multiple HTTP requests.
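    For what it's worth, parallel loading does not require an external library; a plain dynamically created script element is enough. A minimal sketch (paths are illustrative; note that scripts loaded this way are not guaranteed to execute in document order):

        function loadScript(src, onLoad) {
            var s = document.createElement('script');
            s.src = src;
            s.async = true;                 // don't block parsing or other downloads
            if (onLoad) { s.onload = onLoad; }
            document.getElementsByTagName('head')[0].appendChild(s);
        }

        loadScript('/js/plugins.js');
        loadScript('/js/site.js');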

    Read the article

  • SQL Overlapping and Multi-Column Indexes

    - by durilai
    I am attempting to tune some stored procedures and have a question on indexes. I used the tuning advisor and it recommended two indexes, both on the same table. The issue is that one index is on a single column and the other is on multiple columns, one of which is the same column as the first. My question is: why, and what is the difference?

        CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23_K17_K13_K12_K2_K10_K22_K14_K19_K20_K9_K11_5_6_7_15_18]
        ON [dbo].[Table1]
        (
            [EfctvEndDate] ASC,
            [StuLangCodeKey] ASC,
            [StuBirCntryCodeKey] ASC,
            [StuBirStOrProvncCodeKey] ASC,
            [StuKey] ASC,
            [GndrCodeKey] ASC,
            [EfctvStartDate] ASC,
            [StuHspncEnctyIndctr] ASC,
            [StuEnctyMsngIndctr] ASC,
            [StuRaceMsngIndctr] ASC,
            [StuBirDate] ASC,
            [StuBirCityName] ASC
        )
        INCLUDE ( [StuFstNameLgl], [StuLastOrSrnmLgl], [StuMdlNameLgl], [StuIneligSnorImgrntIndctr], [StuExpctdGrdtngClYear] )
        WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
        GO

        CREATE NONCLUSTERED INDEX [_dta_index_Table1_5_2079723603__K23]
        ON [dbo].[Table1]
        (
            [EfctvEndDate] ASC
        )
        WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
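    One way to answer the "why keep both?" question empirically is to let them run for a while and check whether the optimizer ever chooses the narrow one; since the single-column index is the left-most prefix of the wide composite index, it mostly buys a smaller structure for queries that only filter on [EfctvEndDate]. A hedged sketch using the standard SQL Server 2005 usage-stats DMV:

        -- Compare how often each DTA-suggested index is actually used
        -- (zero seeks/scans over a representative workload suggests it can be dropped).
        SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
        FROM sys.indexes AS i
        LEFT JOIN sys.dm_db_index_usage_stats AS s
            ON s.object_id = i.object_id AND s.index_id = i.index_id
        WHERE i.object_id = OBJECT_ID('dbo.Table1');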

    Read the article

  • Web Page execution

    - by Sweta Jha
    I have a web page which brings back 13K+ records in 20 seconds. There is a menu on the page; clicking on it navigates to another page which is very lightweight. Displaying the data (13K+ records) took only 20 seconds, whereas navigating away from that page took much longer, more than 2 minutes. Can you tell me why the latter takes so much time? I've stopped the page_load code from executing on click of the menu, and I've disabled the viewstate for that page as well.

    Read the article

  • How to formulate a SQL Server indexed view that aggregates distinct values?

    - by Jeremy Lew
    I have a schema that includes tables like the following (pseudo-schema):

        TABLE ItemCollection
        {
            ItemCollectionId
            ...etc...
        }

        TABLE Item
        {
            ItemId,
            ItemCollectionId,
            ContributorId
        }

    I need to aggregate the number of distinct contributors per ItemCollectionId. This is possible with a query like:

        SELECT ItemCollectionId, COUNT(DISTINCT ContributorId)
        FROM Item
        GROUP BY ItemCollectionId

    I further want to pre-calculate this aggregation using an indexed (materialized) view. The DISTINCT prevents an index being placed on this view. Is there any way to reformulate this that will not violate SQL Server's indexed view constraints?
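    The usual workaround is to push the DISTINCT into the view's GROUP BY: group by both ItemCollectionId and ContributorId (with the COUNT_BIG(*) that indexed views require), index that, and then count rows per collection on top of the materialized view. A sketch, assuming the real tables match the pseudo-schema above:

        CREATE VIEW dbo.vItemContributors WITH SCHEMABINDING AS
        SELECT ItemCollectionId, ContributorId, COUNT_BIG(*) AS ItemCount
        FROM dbo.Item
        GROUP BY ItemCollectionId, ContributorId;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_vItemContributors
            ON dbo.vItemContributors (ItemCollectionId, ContributorId);
        GO
        -- Distinct contributors per collection is now a count of rows in the view:
        SELECT ItemCollectionId, COUNT(*) AS DistinctContributors
        FROM dbo.vItemContributors WITH (NOEXPAND)
        GROUP BY ItemCollectionId;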

    Read the article

  • XDocument holding onto Memory?

    - by Jon
    I have an application that loads an XDocument from a 20 MB file and then passes it to a form to view its contents:

        openFileDialog1.FileName = "";
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            AuditFile = XDocument.Load(openFileDialog1.FileName);
            fmAuditLogViewer AuditViewer = new fmAuditLogViewer();
            AuditViewer.ReportDocument = AuditFile;
            AuditViewer.Init();
            AuditViewer.ShowDialog();
            AuditViewer.Dispose();
            AuditFile.RemoveNodes();
            AuditFile = null;
        }

    In Task Manager I can see the memory used by my application shoot up when I open this file. When I have finished viewing the file I call:

        myXDocument.RemoveNodes();
        myXDocument = null;

    However, the memory usage shown in Task Manager is still pretty high for my app. Is the XDocument still being held in memory, and can I decrease the memory usage of my app?
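    As a side note, Task Manager mostly shows how much memory the process has reserved, not how much is still reachable; the CLR may simply not have collected or returned the freed pages yet. A quick diagnostic sketch (for testing only, not production code) to see whether the document is really still rooted:

        // After dropping the last reference, force a full collection and inspect the managed heap.
        AuditFile = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine("Managed heap: {0:N0} bytes", GC.GetTotalMemory(true));

    If the managed number drops but Task Manager stays high, the memory is being held by the process for reuse rather than by the XDocument.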

    Read the article

  • Writing a DBMS in Python

    - by Matt Luongo
    Hey guys, I'm working on a basic DBMS as a pet project and planning to prototype in Python. I figure there's a reason there are only a few Python databases, and my gut agrees that my favorite language will be too slow to act as an honest performing database, but I'm looking forward to using it to learn what I need quickly. Would someone please contradict me? Is Python as ill-suited right now for this sort of thing as I think?

    Read the article

  • Will more CPUs/cores help with VS.NET build times?

    - by LoveMeSomeCode
    I was wondering if anyone knew whether Visual Studio .NET has a parallel build process or not. I have a solution with lots of projects; every project has lots of markup/code, lots of types, etc. Just sitting there with IntelliSense on runs it up to about 700 MB. But the build times are really slow and only seem to max out one of my two CPU cores. Does this mean the build process is single-threaded? My solution's build dependency chain isn't linear, so I don't see why it couldn't build some of the projects in parallel. I remember Joel Spolsky blogging about his new SSD and how it didn't help with compile times, but he didn't mention which compiler he was using. We're using VS 2005. Anyone know how its compilation works? And is it any different/better in 2008/2010?
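    As a point of reference, MSBuild gained a project-level parallel switch (/m, a.k.a. /maxcpucount) in the 3.5 toolset that ships with VS 2008; the VS 2005-era managed build is effectively one project at a time. A quick way to test it outside the IDE (solution name is illustrative):

        rem Build with up to 2 concurrent MSBuild nodes (MSBuild 3.5 or later)
        msbuild MySolution.sln /m:2 /p:Configuration=Release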

    Read the article

  • What's the difference between reflow and repaint?

    - by Jon Raasch
    I'm a little unclear on the difference between reflow and repaint (if there's any difference at all). It seems like reflow might be shifting the position of various DOM elements, whereas repaint is just rendering a new object. For example, reflow would occur when removing an element, and repaint would occur when changing its color. Is this true?
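    That is essentially the distinction: repaint redraws pixels without changing geometry, while reflow recomputes layout (and usually triggers a repaint afterwards). A tiny illustration of typical triggers (element id is made up):

        var el = document.getElementById('box');

        el.style.color = 'red';      // repaint: appearance changes, geometry does not
        el.style.width = '300px';    // reflow: geometry changes, layout must be recomputed
        var h = el.offsetHeight;     // reading a layout property can force a synchronous reflow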

    Read the article

  • iPhone - Get a pointer to the data behind CGDataProvider?

    - by jtrim
    I'm trying to take a CGImage and copy its data into a buffer for later processing. The code below is what I have so far, but there's one thing I don't like about it: it copies the image data twice, once for CGDataProviderCopyData() and once for the getBytes:length: call on imgData. I haven't been able to find a way to copy the image data directly into my buffer and cut out the CGDataProviderCopyData() step, but there has to be a way... any pointers? (...pun ftw)

        NSData *imgData = (NSData *)(CGDataProviderCopyData(CGImageGetDataProvider(myCGImageRef)));
        CGImageRelease(myCGImageRef);

        // I've got a previously-defined pointer to an available buffer called "mybuff"
        [imgData getBytes:mybuff length:[imgData length]];
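    One thing worth noting: CFDataGetBytePtr gives a direct read-only pointer into the data that CGDataProviderCopyData already produced, so the second copy via getBytes:length: can usually be dropped even if the first copy cannot. A sketch along those lines (the processing step is left as a placeholder):

        CFDataRef imgData = CGDataProviderCopyData(CGImageGetDataProvider(myCGImageRef));
        const UInt8 *pixels = CFDataGetBytePtr(imgData);   // direct pointer, no extra copy
        size_t length = CFDataGetLength(imgData);

        // ... process pixels / length in place instead of copying into mybuff ...

        CFRelease(imgData);
        CGImageRelease(myCGImageRef);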

    Read the article

  • Flash causing jerky javascript animations

    - by Matt Brailsford
    Hi guys, I'm developing a site which has a Flash background playing a small video loop, scaled to fill the whole background. Over the top I have a number of HTML elements which are animated using JavaScript. The problem I am having is that (predominantly in Firefox, but also in other browsers to a lesser degree) the Flash seems to be causing my JavaScript animations to run rather jerkily, and in some cases to skip the animation altogether and just jump to the end state. Does anybody have any thoughts on how to make the two work together nicely? Many thanks, Matt

    Read the article

  • How can "set timestamp" be a slow query?

    - by Peder
    My slow query log is full of entries like the following:

        # Query_time: 1.016361  Lock_time: 0.000000  Rows_sent: 0  Rows_examined: 0
        SET timestamp=1273826821;
        COMMIT;

    I guess the SET timestamp command is issued by replication, but I don't understand how SET timestamp can take over a second. Any ideas?

    Read the article

  • Explanation for expires header

    - by sushil bharwani
    I have a Joomla application running on Apache. To improve site performance we have written a .htaccess file at the root of the application, setting a far-future Expires header on all of the static content. As desired, the first time the page loads the files come in fresh with a 200 status code, and when I click the same link again many of the files are served directly from cache. I need an explanation for two things:

    1. When I press F5, a number of files load with a 304 status code. I expected them to come directly from cache without hitting the server for a status header.
    2. When I close the browser and come back to the same page, I see the same thing happening: a number of files load with a 304 status code, although I thought they would load directly from the browser cache.

    I understand that a 304 also serves the file from the browser cache, but I want to avoid the header round trip to the server, as my static files won't ever change. I should also add that my requests are over an HTTPS connection; does that create any issue?
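    For reference, a far-future policy along the lines described usually looks something like the snippet below (it requires mod_expires and mod_headers; the file-type list is illustrative). It will not stop F5 reloads from revalidating, since a manual refresh sends conditional requests regardless of the Expires header:

        <FilesMatch "\.(css|js|png|jpg|gif)$">
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
            Header set Cache-Control "public, max-age=31536000"
        </FilesMatch>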

    Read the article

  • Using TCP Acks to measure latency to a server?

    - by Ted Graham
    I am trying to measure latency to a server that I don't control. This is in a colocated environment, so the latency is on the order of 500 us (.5 ms). I understand that Cisco gear frequently deprioritizes ICMP traffic, making ping times unreliable. Is there a way for me to tell if this is the case on the gear I am traversing? Can I use TCP acknowledgements to determine the minimum latency to the remote server? To do this, I would somehow need to force the remote server to send a TCP ack immediately on receiving my data.
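    One library-free way to approximate this, without relying on the server to ACK application data, is to time the TCP three-way handshake itself: connect() returns once the SYN-ACK arrives, so its duration is roughly one round trip and is answered by the remote kernel rather than a deprioritized ICMP path. A rough sketch (host and port are placeholders):

        import socket
        import time

        def tcp_rtt(host, port, samples=10):
            """Return the best (minimum) TCP connect time over a few samples, in seconds."""
            best = float('inf')
            for _ in range(samples):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                start = time.perf_counter()
                s.connect((host, port))
                best = min(best, time.perf_counter() - start)
                s.close()
            return best

        print("min RTT: %.3f ms" % (tcp_rtt("203.0.113.10", 80) * 1000))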

    Read the article

  • How can I make this Java code run faster?

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed):

        /**
         * Simple implementation of a feedforward neural network. The network supports
         * including a bias neuron with a constant output of 1.0 and weighted synapses
         * to hidden and output layers.
         *
         * @author Martin Wiboe
         */
        public class FeedForwardNetwork {
            private final int outputNeurons;    // No of neurons in output layer
            private final int inputNeurons;     // No of neurons in input layer
            private int largestLayerNeurons;    // No of neurons in largest layer
            private final int numberLayers;     // No of layers
            private final int[] neuronCounts;   // Neuron count in each layer, 0 is input layer.
            private final float[][][] fWeights; // Weights between neurons.
                                                // fWeight[fromLayer][fromNeuron][toNeuron]
                                                // is the weight from fromNeuron in fromLayer
                                                // to toNeuron in layer fromLayer+1.
            private float[][] neuronOutput;     // Temporary storage of output from previous layer

            public float[] compute(float[] input) {
                // Copy input values to input layer output
                for (int i = 0; i < inputNeurons; i++) {
                    neuronOutput[0][i] = input[i];
                }

                // Loop through layers
                for (int layer = 1; layer < numberLayers; layer++) {
                    // Loop over neurons in the layer and determine weighted input sum
                    for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                        // Bias neuron is the last neuron in the previous layer
                        int biasNeuron = neuronCounts[layer - 1];

                        // Get weighted input from bias neuron - output is always 1.0
                        float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron];

                        // Get weighted inputs from rest of neurons in previous layer
                        for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                            activation += neuronOutput[layer - 1][inputNeuron]
                                * fWeights[layer - 1][inputNeuron][neuron];
                        }

                        // Store neuron output for next round of computation
                        neuronOutput[layer][neuron] = sigmoid(activation);
                    }
                }

                // Return output from network = output from last layer
                float[] result = new float[outputNeurons];
                for (int i = 0; i < outputNeurons; i++)
                    result[i] = neuronOutput[numberLayers - 1][i];

                return result;
            }

            private final static float sigmoid(final float input) {
                return (float) (1.0F / (1.0F + Math.exp(-1.0F * input)));
            }
        }

    I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
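    One low-risk change that sometimes helps the JIT here is hoisting the repeated multi-dimensional array dereferences out of the inner loops, so each iteration works on plain one- and two-dimensional locals. An illustrative rewrite of the middle section (behaviour unchanged; whether it actually helps should be measured):

        // Inside the layer loop: cache the sub-arrays for this layer once.
        final float[][] weights = fWeights[layer - 1];        // weights leaving the previous layer
        final float[] prevOutput = neuronOutput[layer - 1];   // outputs of the previous layer
        final int biasNeuron = neuronCounts[layer - 1];       // index of the bias neuron

        for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
            float activation = weights[biasNeuron][neuron];   // bias contribution (output is 1.0)
            for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                activation += prevOutput[inputNeuron] * weights[inputNeuron][neuron];
            }
            neuronOutput[layer][neuron] = sigmoid(activation);
        }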

    Read the article

  • How to calculate real-time stats?

    - by Diego Jancic
    I have a site with millions of users (well, actually it doesn't have any yet, but let's imagine), and I want to calculate some stats like "log-ins in the past hour". The problem is similar to the one described here: http://highscalability.com/blog/2008/4/19/how-to-build-a-real-time-analytics-system.html The simplest approach would be to do a select like this:

        select count(distinct user_id)
        from logs
        where date >= '20120601 1200'
          and date <= '20120601 1300'

    (Of course other conditions could apply to the stats, like log-ins per country.) This would of course be really slow, mainly if the table has millions (or even thousands) of rows, and I want to run the query every time a page is displayed. How would you summarize the data? What should go into the (mem)cache? EDIT: I'm looking for a way to de-normalize the data, or to keep the cache up to date. For example, I could increment an in-memory variable every time someone logs in, but that would only tell me the total number of log-ins, not the "log-ins in the last hour". Hope it's clearer now.
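    One common way to de-normalize this (sketched below for MySQL, with illustrative names) is to maintain a small per-minute roll-up at login time, so the "past hour" question only ever touches about 60 minutes of pre-aggregated rows instead of the raw log:

        CREATE TABLE logins_by_minute (
            minute  DATETIME NOT NULL,    -- login timestamp truncated to the minute
            user_id INT      NOT NULL,
            PRIMARY KEY (minute, user_id)
        );

        -- On each login, record the (minute, user) pair at most once (42 = the user who just logged in):
        INSERT IGNORE INTO logins_by_minute (minute, user_id)
        VALUES (DATE_FORMAT(NOW(), '%Y-%m-%d %H:%i:00'), 42);

        -- "Log-ins in the past hour":
        SELECT COUNT(DISTINCT user_id)
        FROM logins_by_minute
        WHERE minute >= NOW() - INTERVAL 1 HOUR;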

    Read the article

  • MySQL: Performance of NOT EXISTS. Is it possible to improve performance?

    - by petRUShka
    I have two tables, posts and comments. The comments table has a post_id attribute. I need to get all posts with type "open" for which there are no comments of type "good" created on May 1. Is it optimal to use an SQL query like this:

        SELECT posts.*
        FROM posts
        WHERE NOT EXISTS (
            SELECT comments.id
            FROM comments
            WHERE comments.post_id = posts.id
              AND comments.comment_type = 'good'
              AND comments.created_at BETWEEN '2010-05-01 00:00:00' AND '2010-05-01 23:59:59')

    I'm not sure that NOT EXISTS is the ideal construction in this situation.
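    The usual alternative to benchmark against is the LEFT JOIN / IS NULL anti-join form; on many MySQL versions the optimizer handles the two similarly, so it is worth measuring both (and making sure comments has a composite index covering post_id, comment_type and created_at). A sketch:

        SELECT posts.*
        FROM posts
        LEFT JOIN comments
               ON comments.post_id = posts.id
              AND comments.comment_type = 'good'
              AND comments.created_at BETWEEN '2010-05-01 00:00:00' AND '2010-05-01 23:59:59'
        WHERE comments.id IS NULL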

    Read the article

  • Data sync from Rails with iPhone SQLite DB

    - by Markus
    Hi all, I'm wondering whether there is a possibility to export some selected data from my Rails MySQL DB to another SQLite DB. The aim is to send that SQLite file directly to my iPhone application... That way I don't have to do a lot of XML integration in the iPhone app, which seems to be very slow. Markus
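    On the Rails side this can be done with the plain sqlite3 gem: open (or create) a standalone SQLite file, copy the selected rows across, and then serve that file to the app. A rough sketch with made-up model and column names:

        require 'sqlite3'

        db = SQLite3::Database.new(Rails.root.join('tmp', 'export.sqlite').to_s)
        db.execute 'CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT, price REAL)'

        # "Product" is a placeholder ActiveRecord model; export only the rows the app needs.
        Product.find_each do |p|
          db.execute 'INSERT OR REPLACE INTO products (id, name, price) VALUES (?, ?, ?)',
                     [p.id, p.name, p.price.to_f]
        end
        db.close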

    Read the article

  • SQL Server Multiple Joins Are Taxing The CPU

    - by durilai
    I have a stored procedure on SQL Server 2005. It pulls from a table-valued function and has two joins. When the query is run under a load test it pushes the CPU to 100% across all 16 cores! I have determined that removing one of the joins makes the query run fine, but with both it taxes the CPU.

        Select SKey
        From dbo.tfnGetLatest(@ID) a
        left join [STAGING].dbo.RefSrvc b on a.LID = b.ESIID
        left join [STAGING].dbo.RefSrvc c on a.EID = c.ESIID

    Any help is appreciated. Note that the join is to the same table in a different database on the same server.
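    One quick experiment when a single query pegs every core is to cap its parallelism with a MAXDOP hint and see whether throughput under the load test actually improves; 100% across 16 cores is often a parallel plan being repeated for every concurrent caller. A sketch against the same query:

        Select SKey
        From dbo.tfnGetLatest(@ID) a
        left join [STAGING].dbo.RefSrvc b on a.LID = b.ESIID
        left join [STAGING].dbo.RefSrvc c on a.EID = c.ESIID
        OPTION (MAXDOP 4)   -- limit this statement to 4 schedulers (valid on SQL Server 2005)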

    Read the article

  • Server Benchmarking: What tools to use with my real-world test data

    - by mdemmitt
    I want to benchmark a new server using historical HTTP-request data. I have a text file that contains one day's worth of real historical requests to a production server. What is the best tool for replaying that list of requests against the server I'm testing? The tool I use should be able to configure the following:

    - Number of threads making the requests
    - Number of requests/second sent
    - A list of request URLs to use when making the requests

    Apache Bench seems like a close fit. However, Bench does not seem to be able to take in a list of request URLs as a parameter. What would you recommend?
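    For the URL-list requirement, siege is one commonly used alternative to Apache Bench: it replays a plain-text file of URLs with a configurable number of concurrent users. A hedged example (flag spellings as of siege 2.x; request-per-second pacing is approximated via concurrency and delay rather than set directly):

        # 50 concurrent users, random URL order, run for 10 minutes
        siege -f urls.txt -c 50 -i -t 10M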

    Read the article

  • Perf4J Graph Output from Log File

    - by manyxcxi
    I currently have a long-running process that I am trying to analyze with Perf4J. I currently have it writing results in CSV format to its own log file, using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is: when I try to use the --graph option from the command line (using the Perf4J jar), it isn't populating the data points - it isn't populating anything. Are my appenders set up incorrectly? The log file contains hundreds (sometimes thousands) of data points across about 10 different tag names.

        <appender name="perfAppender" class="org.apache.log4j.FileAppender">
            <param name="File" value="perfStats.log"/>
            <layout class="org.perf4j.log4j.StatisticsCsvLayout">
            </layout>
        </appender>

        <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
            <!-- The TimeSlice option is used to determine the time window for which
                 all received StopWatch logs are aggregated to create a single
                 GroupedTimingStatistics log. Here we set it to 10 seconds, overriding
                 the default of 30000 ms -->
            <param name="TimeSlice" value="10000"/>
            <appender-ref ref="ConsoleAppender"/>
            <appender-ref ref="CompositeRollingFileAppender"/>
            <appender-ref ref="perfAppender"/>
        </appender>

    Read the article
