Search Results

Search found 28685 results on 1148 pages for 'query performance'.


  • Counting context switches per thread

    - by Sarmun
    Is there a way to see how many context switches each thread generates (both in and out, if possible)? Either as a rate per second, or by letting the program run and reporting aggregated data after some time, on either Linux or Windows. I have only found tools that give an aggregated context-switch count for the whole OS or per process. My program makes many context switches (about 50k/s), probably many of them unnecessary, but I am not sure where to start optimizing or where most of them happen.
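
    On Linux the kernel already exposes per-thread counts: /proc/<pid>/task/<tid>/status carries voluntary_ctxt_switches and nonvoluntary_ctxt_switches lines for each thread, and pidstat -wt (from the sysstat package) reports the same figures as per-second rates. A minimal sketch that polls those counters and prints per-thread rates (Linux only; pass the target PID as an argument):

      import os
      import sys
      import time

      def thread_ctx_switches(pid):
          """Return {tid: (voluntary, nonvoluntary)} for every thread of pid (Linux /proc)."""
          counts = {}
          for tid in os.listdir(f"/proc/{pid}/task"):
              voluntary = nonvoluntary = 0
              with open(f"/proc/{pid}/task/{tid}/status") as status:
                  for line in status:
                      if line.startswith("voluntary_ctxt_switches:"):
                          voluntary = int(line.split()[1])
                      elif line.startswith("nonvoluntary_ctxt_switches:"):
                          nonvoluntary = int(line.split()[1])
              counts[tid] = (voluntary, nonvoluntary)
          return counts

      if __name__ == "__main__":
          pid, interval = int(sys.argv[1]), 5.0
          before = thread_ctx_switches(pid)
          time.sleep(interval)
          after = thread_ctx_switches(pid)
          for tid, (vol, nonvol) in sorted(after.items(), key=lambda kv: int(kv[0])):
              v0, n0 = before.get(tid, (0, 0))
              print(f"tid {tid}: {(vol - v0) / interval:.0f} voluntary/s, "
                    f"{(nonvol - n0) / interval:.0f} nonvoluntary/s")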

    Read the article

  • Long primitive or AtomicLong for a counter?

    - by Rich
    Hi, I need a counter of type long with the following requirements/facts:

      - Incrementing the counter should take as little time as possible.
      - The counter will only be written to by one thread.
      - Reading from the counter will be done in another thread.
      - The counter will be incremented regularly (as much as a few thousand times per second), but will only be read once every five seconds.
      - Precise accuracy isn't essential; a rough idea of the size of the counter is good enough.
      - The counter is never cleared or decremented.

    Based on these requirements, how would you choose to implement the counter: as a simple long, as a volatile long, or using an AtomicLong? Why? At the moment I have a volatile long but was wondering whether another approach would be better. I am also incrementing it with ++counter as opposed to counter++. Is this really any more efficient (as I have been led to believe elsewhere) because there is no assignment being done? Thanks in advance, Rich

    Read the article

  • 3-clique counting in a graph

    - by Legend
    I am operating on a (not so) large graph with about 380K edges. I wrote a program to count the number of 3-cliques in the graph. A quick example:

      List of edges:
        A - B
        B - C
        C - A
        C - D

      List of cliques:
        A - B - C

    A 3-clique is nothing but a triangle in the graph. Currently I am doing this with PHP + MySQL and, as expected, it is not fast enough. Is there a way to do this in pure MySQL (perhaps a way to insert all 3-cliques into a table)?
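
    For reference, triangle counting can be expressed as a three-way self-join on the edge list once each undirected edge is stored in a canonical (smaller, larger) order, so every triangle is produced exactly once. A sketch of that query in Python + SQLite using the example edges from the question (the same join runs in MySQL, though on 380K edges it will want indexes on the edge columns):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE edges (a TEXT, b TEXT)")
      db.executemany("INSERT INTO edges VALUES (?, ?)",
                     [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])

      # Canonicalise: store each undirected edge once as (u, v) with u < v.
      db.execute("""CREATE TABLE e AS
                    SELECT DISTINCT MIN(a, b) AS u, MAX(a, b) AS v FROM edges""")
      db.execute("CREATE INDEX e_u ON e(u)")

      # A triangle u < v < w is matched exactly once by the triple (e1, e2, e3).
      triangles = db.execute("""
          SELECT e1.u, e1.v, e2.v
          FROM e AS e1
          JOIN e AS e2 ON e2.u = e1.v
          JOIN e AS e3 ON e3.u = e1.u AND e3.v = e2.v
      """).fetchall()
      print(triangles)        # [('A', 'B', 'C')]
      print(len(triangles))   # the 3-clique count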

    Read the article

  • Selecting More Than 1 Table in A Single Query

    - by Kamran
    I have 5 tables in MS Access:

      Logs [Title, ID, Date, Author]
      Tape [Title, ID, Date, Author]
      Maps [Title, ID, Date, Author]
      VCDs [Title, ID, Date, Author]
      Book [Title, ID, Date, Author]

    I tried my level best with this code:

      SELECT Logs.[Author], Tape.[Author], Maps.[Author], VCDs.[Author], Book.[Author]
      FROM Logs, Tape, Maps, VCDs, Book
      WHERE ((([Author] & " " & [Author] & " " & [Author] & " " & [Author] & " " & [Author])
              Like "*" & [Type the Title or Any Part of the Title and Press Ok] & "*"));

    I want to search all of these fields in a single query. Suppose Adam is the author of works in all tables; when I put Adam in the search box it should return results from all tables. I know this could be done by having a single table or renaming the fields, but that's not an option here. Please help.
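
    The usual way to search several identically-shaped tables at once is a UNION ALL subquery that also tags each row with its source table; Access SQL supports the same UNION ALL construction (with its own Like "*" & [prompt] & "*" wildcard syntax). A reduced sketch of the pattern in Python + SQLite with two of the five tables and made-up rows (table and column names taken from the question):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE Logs (Title TEXT, ID INTEGER, Date TEXT, Author TEXT);
          CREATE TABLE Tape (Title TEXT, ID INTEGER, Date TEXT, Author TEXT);
          INSERT INTO Logs VALUES ('Server notes', 1, '2010-01-01', 'Adam Smith');
          INSERT INTO Tape VALUES ('Backup vol. 3', 7, '2010-02-01', 'Adam Smith');
      """)

      search = "Adam"
      rows = db.execute("""
          SELECT * FROM (
              SELECT 'Logs' AS Source, Title, ID, Date, Author FROM Logs
              UNION ALL
              SELECT 'Tape' AS Source, Title, ID, Date, Author FROM Tape
              -- ...repeat with UNION ALL for Maps, VCDs and Book...
          )
          WHERE Author LIKE '%' || ? || '%'
      """, (search,)).fetchall()
      print(rows)   # one hit from each table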

    Read the article

  • Slow first page load on asp.net site

    - by Tabloo Quijico
    Hi. Every now and then (always after a long period of idle time, e.g. overnight), when I access a site built with ASP.NET, it takes around 15 seconds to load the page (15 seconds before I see any progress whatsoever, then the page comes up fast). Further pages on that site, or refreshes, are quick as usual; they are also fast on other machines, only the first request seems to take the hit. Page tracing never turned anything up (the whole cycle was a fraction of a second). So my question is: where else should I be looking? Perhaps IIS? Or could it still be my ASP.NET app and I'm just looking in the wrong place (the trace) for clues? As I don't have much control over the IIS server, anything I can check through ASP.NET would be more helpful, before I go ask that particular admin. Cheers :D

    Read the article

  • C: using a lot of structs can make a program slow?

    - by nunos
    I am coding a Breakout clone. I had one version in which I only had one level of structures; this version runs at 70 fps. For more clarity in the code I decided it should have more abstractions, and created more structs; most of the time I now have two or three levels of nested structures. This version runs at 30 fps. Since there are some other differences besides the structures, I ask you: can using a lot of structs in C slow down the code significantly? Thanks.

    Read the article

  • MySQL Query - WHERE and IF?

    - by Prash
    I'm not quite sure how to write this query. Basically, I'm going to have a table with two columns (OS and country_code) - more columns too, but those are the conditional ones. These will either be set to 0 for all, or to specific values separated by commas. What I'm trying to achieve is to pull data from the table if OS and country_code are 0, or if they contain matching data (separated by commas). Then I have a column for time: I want to select rows where the time is GREATER than the time column, unless the column time_t is set to false, in which case this shouldn't matter. I hope I explained it right? This is what I kind of have so far:

      $get = $db->prepare("SELECT * FROM commands
                           WHERE country_code = 0 OR country_code LIKE :country_code
                             AND OS = 0 OR OS LIKE :OS
                             AND IF (time_t = 1, expiry > NOW()) ");
      $get->execute(array(
          ':country_code' => "%{$data['country_code']}%",
          ':OS'           => "%{$data['OS']}%"
      ));
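
    A common pitfall in a WHERE clause of this shape is operator precedence: AND binds more tightly than OR, so each OR pair needs its own parentheses, and the "ignore the expiry check when time_t is false" rule can be spelled (time_t = 0 OR expiry > NOW()). A rough sketch of that shape in Python + SQLite (the column values and CSV-style LIKE matching are assumptions based on the question; SQLite's datetime('now') stands in for MySQL's NOW() and IF()):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE commands (
                        id INTEGER PRIMARY KEY,
                        country_code TEXT,   -- '0' for all, or e.g. 'US,GB'
                        OS TEXT,             -- '0' for all, or e.g. 'win,linux'
                        time_t INTEGER,      -- 0: ignore expiry, 1: enforce it
                        expiry TEXT)""")
      db.executemany("INSERT INTO commands VALUES (?, ?, ?, ?, ?)", [
          (1, "0",     "0",         0, "2000-01-01"),   # matches everyone, expiry ignored
          (2, "US,GB", "win,linux", 1, "2999-01-01"),   # matches US + win, not expired
          (3, "US",    "win",       1, "2000-01-01"),   # expired, should be filtered out
      ])

      data = {"country_code": "US", "OS": "win"}
      rows = db.execute(
          """SELECT * FROM commands
             WHERE (country_code = '0' OR country_code LIKE :country_code)
               AND (OS = '0' OR OS LIKE :OS)
               AND (time_t = 0 OR expiry > datetime('now'))""",
          {"country_code": f"%{data['country_code']}%", "OS": f"%{data['OS']}%"},
      ).fetchall()
      print(rows)   # rows 1 and 2, but not the expired row 3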

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for. I have a really, really long string (hundreds of megabytes) and about 1 GB of RAM. This is why building a suffix trie with counting data is too space-inefficient to work for me. To quote Wikipedia's article on suffix trees:

      storing a string's suffix tree typically requires significantly more space than storing the
      string itself. The large amount of information in each edge and node makes the suffix tree very
      expensive, consuming about ten to twenty times the memory size of the source text in good
      implementations. The suffix array reduces this requirement to a factor of four, and researchers
      have continued to find smaller indexing structures.

    And that was Wikipedia's comment on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
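
    For what it's worth, the suffix-array route the quoted passage hints at boils down to: build the suffix array, build the LCP (longest common prefix) array over adjacent suffixes, and the maximum LCP value marks the longest repeated substring. A compact sketch of that idea in Python (prefix-doubling construction plus Kasai's LCP algorithm); it is fine for strings that fit comfortably in memory, but a hundreds-of-megabytes input would need an O(n) construction such as SA-IS in a compiled language and a much tighter memory layout:

      def suffix_array(s):
          """Prefix-doubling construction, O(n log^2 n) with Python's sort."""
          n = len(s)
          sa = list(range(n))
          rank = [ord(c) for c in s]
          k = 1
          while True:
              key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
              sa.sort(key=key)
              new_rank = [0] * n
              for idx in range(1, n):
                  new_rank[sa[idx]] = new_rank[sa[idx - 1]] + (key(sa[idx]) != key(sa[idx - 1]))
              rank = new_rank
              if rank[sa[-1]] == n - 1:      # all ranks distinct: done
                  return sa
              k *= 2

      def lcp_array(s, sa):
          """Kasai's algorithm: lcp[i] = common prefix length of suffixes sa[i-1] and sa[i]."""
          n = len(s)
          rank = [0] * n
          for pos, suffix in enumerate(sa):
              rank[suffix] = pos
          lcp = [0] * n
          h = 0
          for i in range(n):
              if rank[i] > 0:
                  j = sa[rank[i] - 1]
                  while i + h < n and j + h < n and s[i + h] == s[j + h]:
                      h += 1
                  lcp[rank[i]] = h
                  h = max(h - 1, 0)
              else:
                  h = 0
          return lcp

      text = "banana_bandana_banana"
      sa = suffix_array(text)
      lcp = lcp_array(text, sa)
      best = max(range(len(text)), key=lambda i: lcp[i])
      print(text[sa[best]:sa[best] + lcp[best]])   # 'banana', the longest repeated substring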

    Read the article

  • JAVA bytecode optimization

    - by Idob
    This is a basic question. I have code which shouldn't run on metadata beans. All metadata beans are located under the metadata package. Now, I use the reflection API to find out whether a class is located in the metadata package:

      if (newEntity.getClass().getPackage().getName().contains("metadata"))

    I use this check in several places within the code. The question is: should I do it once, with

      boolean isMetadata = false;
      if (newEntity.getClass().getPackage().getName().contains("metadata")) {
          isMetadata = true;
      }

    C++ makes optimizations and knows that this code was already called, so it won't call it again. Does Java make that optimization? I know the reflection API is a bit heavy and I'd prefer not to waste expensive runtime.

    Read the article

  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData and data from the local DB into a List<DataModel> named localData. I then use LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB, like this:

      var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I then use LINQ to create a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database:

      var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    And I use LINQ to do the opposite of the above to add the new data to the local database:

      var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using:

      internal class DataModel
      {
          public string Key1 { get; set; }
          public string Key2 { get; set; }
          public string Value1 { get; set; }
          public string Value2 { get; set; }
          public byte? Value3 { get; set; }
      }

    The comparer used to check for outdated records:

      class OutdatedDataComparer : IEqualityComparer<DataModel>
      {
          public bool Equals(DataModel x, DataModel y)
          {
              var e = string.Equals(x.Key1, y.Key1)
                      && string.Equals(x.Key2, y.Key2)
                      && ( !string.Equals(x.Value1, y.Value1)
                           || !string.Equals(x.Value2, y.Value2)
                           || x.Value3 != y.Value3 );
              return e;
          }
          public int GetHashCode(DataModel obj) { return 0; }
      }

    The comparer used to find old and new records:

      internal class MatchingDataComparer : IEqualityComparer<DataModel>
      {
          public bool Equals(DataModel x, DataModel y)
          {
              return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
          }
          public int GetHashCode(DataModel obj) { return 0; }
      }
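
    An aside on the underlying technique rather than the C# code itself: with GetHashCode returning a constant, hash-based set operations like Intersect and Except degrade to pairwise Equals calls. The idea that makes this kind of diff fast is to key each record by its identity fields and compare via hash lookups; a rough illustration of that idea in Python (field names mirror the question, the sample rows are made up):

      from dataclasses import dataclass
      from typing import Optional

      @dataclass(frozen=True)
      class DataModel:
          Key1: str
          Key2: str
          Value1: str = ""
          Value2: str = ""
          Value3: Optional[int] = None

      def diff(local, remote):
          """Split records into new / old / outdated by hashing on the key fields."""
          local_by_key = {(r.Key1, r.Key2): r for r in local}
          remote_by_key = {(r.Key1, r.Key2): r for r in remote}

          new_data = [r for k, r in remote_by_key.items() if k not in local_by_key]
          old_data = [r for k, r in local_by_key.items() if k not in remote_by_key]
          # keys present on both sides whose value fields differ
          outdated = [local_by_key[k]
                      for k in local_by_key.keys() & remote_by_key.keys()
                      if (local_by_key[k].Value1, local_by_key[k].Value2, local_by_key[k].Value3)
                         != (remote_by_key[k].Value1, remote_by_key[k].Value2, remote_by_key[k].Value3)]
          return new_data, old_data, outdated

      # Dict lookups are O(1) on average, so 70k-record lists diff in well under a second.
      local = [DataModel("a", "1", "x"), DataModel("b", "2", "y")]
      remote = [DataModel("a", "1", "x2"), DataModel("c", "3", "z")]
      print(diff(local, remote))

    In C# the analogous fix is a GetHashCode that actually combines Key1 and Key2; the constant 0 above forces every comparison into Equals, which is what turns these calls into minutes of work.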

    Read the article

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to elements of an array without losing the original reference. Here is the setup:

      var a = [], b = [], c, i, obj;
      for ( i = 0; i < 100000; i++ ) {
          a[ i ] = i;
          b[ i ] = 10000 - i;
      }
      obj.data_list = a;

    Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method:

      for ( i = 0; i < b.length; i++ ) { a.push( b[ i ] ); }

    This seems to be a somewhat terser and 8x (on V8) faster method:

      a.splice.apply( a, [ a.length, 0 ].concat( b ) );

    I have found this useful when iterating over an "in-place" array and I don't want to touch the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial arguments and then add the elements I wish to retain. Finally I use this apply method to quickly replace the truncated array:

      var keep_list = [ 0, 0 ];
      for ( i = 0; i < a.length; i++ ){
          if ( some_condition ){
              keep_list.push( a[ i ] );
          }
      }
      // truncate array
      a.length = 0;
      // And replace contents
      a.splice.apply( a, keep_list );

    There are a few problems with this solution:

      - there is a max call stack size limit of around 50k on V8, and I have not tested other JS engines yet;
      - the solution is a bit cryptic.

    Has anyone found a better way?

    Read the article

  • Not unique table/alias - can't understand why!?

    - by Andy Barlow
    Hi! I'm trying to join some tables together in MySQL, but I seem to get an error saying:

      #1066 - Not unique table/alias: 'calendar_jobs'

    I really want it to select everything from calendar_events, the two user columns, and just the destination column from the jobs table, which should become NULL if there isn't any job. A right join seemed to fit the bill but doesn't work. Can anyone help?

      SELECT calendar_events.*, calendar_users.doctorOrNurse, calendar_users.passportName,
             calendar_jobs.destination
      FROM `calendar_events`, `calendar_users`, `calendar_jobs`
      RIGHT JOIN calendar_jobs ON calendar_events.jobID = calendar_jobs.jobID
      WHERE `start` >= 0
        AND calendar_users.userID = calendar_events.userID

    Cheers!
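
    The #1066 error itself comes from calendar_jobs being listed twice in FROM (once in the comma-separated list and once in the RIGHT JOIN). For the "NULL when there is no job" part, a LEFT JOIN from the events side is the usual shape; a reduced sketch in Python + SQLite with just two of the tables and made-up rows, rather than the full MySQL schema:

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE calendar_events (eventID INTEGER, userID INTEGER, jobID INTEGER, start INTEGER);
          CREATE TABLE calendar_jobs   (jobID INTEGER, destination TEXT);

          INSERT INTO calendar_events VALUES (1, 10, 100, 5), (2, 11, NULL, 6);
          INSERT INTO calendar_jobs   VALUES (100, 'London');
      """)

      # LEFT JOIN keeps every event and yields NULL for destination when no job matches.
      rows = db.execute("""
          SELECT e.*, j.destination
          FROM calendar_events AS e
          LEFT JOIN calendar_jobs AS j ON e.jobID = j.jobID
          WHERE e.start >= 0
      """).fetchall()
      print(rows)   # [(1, 10, 100, 5, 'London'), (2, 11, None, 6, None)]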

    Read the article

  • mysql with 3 group by and sum

    - by cyberfly
    Hi all, I have data in my table (tb_cash_transaction) and I want to group the TYPE, CURRENCY_ID and AMOUNT columns so the result looks like this:

      Currency   Cash IN   Cash OUT   Balance
      14         40000     30000      10000
      15         50000     40000      10000

    Rules:
      1. Group by currency.
      2. Find the sum of cash in for that currency.
      3. Find the sum of cash out for that currency.
      4. Get the balance (sum of cash in - sum of cash out).

    How can I achieve this in MySQL? I tried using GROUP BY but cannot get the desired output. Thanks in advance.
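
    The usual shape for this is conditional aggregation: one SUM per cash direction inside a single GROUP BY. A small sketch in Python + SQLite (the 'IN'/'OUT' values in the TYPE column are assumptions, since the question doesn't show the raw rows; MySQL accepts the same SUM(CASE ...) construction):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE tb_cash_transaction (type TEXT, currency_id INTEGER, amount INTEGER)")
      db.executemany("INSERT INTO tb_cash_transaction VALUES (?, ?, ?)", [
          ("IN", 14, 40000), ("OUT", 14, 30000),   # assumed encoding of cash in / cash out
          ("IN", 15, 50000), ("OUT", 15, 40000),
      ])

      rows = db.execute("""
          SELECT currency_id,
                 SUM(CASE WHEN type = 'IN'  THEN amount ELSE 0 END)       AS cash_in,
                 SUM(CASE WHEN type = 'OUT' THEN amount ELSE 0 END)       AS cash_out,
                 SUM(CASE WHEN type = 'IN'  THEN amount ELSE -amount END) AS balance
          FROM tb_cash_transaction
          GROUP BY currency_id
      """).fetchall()
      print(rows)   # [(14, 40000, 30000, 10000), (15, 50000, 40000, 10000)]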

    Read the article

  • SQL Server GROUP BY troubles!

    - by Lucas311
    I'm getting a frustrating error in one of my SQL Server 2008 queries. It parses fine, but crashes when I try to execute. The error I get is the following:

      Msg 8120, Level 16, State 1, Line 4
      Column 'customertraffic_return.company' is invalid in the select list because it is not
      contained in either an aggregate function or the GROUP BY clause.

      SELECT *
      FROM (SELECT ctr.sp_id AS spid,
                   Substring(ctr.company, 1, 20) AS company,
                   cci.email_address AS tech_email,
                   CASE WHEN rating IS NULL THEN 'unknown' ELSE rating END AS rating
            FROM customer_contactinfo cci
                 INNER JOIN customertraffic_return ctr ON ctr.sp_id = cci.sp_id
            WHERE cci.email_address <> ''
              AND cci.email_address NOT LIKE '%hotmail%'
              AND cci.email_address IS NOT NULL
              AND ( region LIKE 'Europe%' OR region LIKE 'Asia%' )
              AND SERVICE IN ( '1', '2' )
              AND ( rating IN ( 'Premiere', 'Standard', 'unknown' ) OR rating IS NULL )
              AND msgcount >= 5000
            GROUP BY ctr.sp_id, cci.email_address) AS a
      WHERE spid NOT IN (SELECT spid FROM customer_exclude)
      GROUP BY spid, tech_email

    Read the article

  • Speed up :visible:input selector avoiding filter

    - by macca1
    I have a jQuery selector that is running way too slow on my unfortunately large page:

      $("#section").find(":visible:input").filter(":first").focus();

    Is there a quicker way to select the first visible input without having to find ALL the visible inputs and then filtering THAT selection for the first? I want something like :visible:input:first but that doesn't seem to work.

    Read the article

  • 'Good' programming form in maintaining / updating / accessing files by entry

    - by zhermes
    Basic question: If I'm storing/modifying data, should I access elements of a file

      - by hard-coded index, i.e. targetFile.getElement(5);
      - via a hard-coded identifier (internally translated into an index), i.e. target.getElementWithID("Desired Element"); or
      - with some intermediate, e.g. DESIRED_ELEMENT = 5; ... target.getElement(DESIRED_ELEMENT)?

    Background: My program (C++) stores data in lots of different 'dataFile's. I also keep a list of all of the data files in another file---a 'listFile'---which also stores some of each one's properties (see below, e.g. what its name is, how many lines of information it has, etc.). There is an object which manages the data files and the list file; call it a 'fileKeeper'. The entries of a listFile look something like:

      filename , contents name , number of lines , some more numbers ...

    It's definitely possible that I may add / remove fields from this list --- but in general they'll stay static. Right now, I have a constant string array which holds the identification of each element in each entry, something like:

      const string fileKeeper::idKeys[] = { "FileName" , "Contents" , "NumLines" ... };
      const int fileKeeper::idKeysNum = 6; // 6 - for example

    I'm trying to manage this stuff in 'good' programmatic form. Thus, when I want to retrieve the number of lines in a file (for example), instead of having a method which just retrieves the 3rd element, I do something like:

      string desiredID = "NumLines";
      int desiredIndex = indexForID(desiredID);
      string desiredElement = elementForIndex(desiredIndex);

    where indexForID() goes through the entries of idKeys until it finds desiredID, then returns the index it corresponds to, and elementForIndex(index) actually goes into the listFile to retrieve the index'th element of the comma-delimited string.

    Problem: This still seems pretty ugly / poor form. Is there a way I should be doing this? If not, what are some general ways in which this is usually done? Thanks!
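
    Purely as an illustration of the "named constant" option (the question is C++; this sketch is Python, and the field names are copied from the idKeys array above): keep a single enum that is the only place the column order is written down, so callers never touch raw indices and adding or reordering a field is a one-line change.

      from enum import IntEnum

      class Field(IntEnum):
          """The one place that knows the column order of a listFile entry."""
          FILE_NAME = 0
          CONTENTS  = 1
          NUM_LINES = 2
          # add / reorder fields here and nothing else has to change

      def element_for(entry_line, field):
          """Pull one field out of a comma-delimited listFile entry."""
          return entry_line.split(",")[field].strip()

      entry = "run_042.dat , particle tracks , 1833 , 7 , 0.25"   # hypothetical entry
      print(element_for(entry, Field.NUM_LINES))                  # '1833'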

    Read the article

  • Working by group by for grouping data into string format

    - by Shantanu Gupta
    I have a table that contains the data given below. It uses a tree-like structure, i.e. a Department has SubD1, SubD2, ..., and each SubD has PreSubD1, PreSubD2, ...

      pk_map_id   preferences   ImmediateParent   Department_Id
      ---------   -----------   ---------------   -------------
      20          14            5                 1
      21          15            5                 1
      22          16            6                 1
      23          9             4                 2
      24          4             3                 2
      25          24            20                2
      26          25            20                2
      27          23            13                2

    I want to group my records by department, with the immediate parents and the preferences each collected into a comma-separated string, i.e.

      department   Immediate Parent   preferences
      1            5,6                14,15,16
      2            4,3,20,13          9,4,24,25,23

    and also this table:

      Immediate parent   preferences
      5                  14,15
      6                  16
      4                  9
      3                  4
      20                 24,25
      13                 13

    In the actual scenario all of these are IDs which are to be replaced by their string fields. I am using SQL Server 2005.
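
    MySQL can produce exactly this with GROUP_CONCAT; SQL Server 2005, which the question mentions, has no built-in equivalent and usually falls back to the FOR XML PATH('') workaround. The aggregation itself, sketched in Python + SQLite (whose group_concat behaves like MySQL's; list order within each group is not guaranteed without an explicit ORDER BY):

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE map (pk_map_id INTEGER, preferences INTEGER,
                                      ImmediateParent INTEGER, Department_Id INTEGER)""")
      db.executemany("INSERT INTO map VALUES (?, ?, ?, ?)", [
          (20, 14, 5, 1), (21, 15, 5, 1), (22, 16, 6, 1),
          (23, 9, 4, 2), (24, 4, 3, 2), (25, 24, 20, 2),
          (26, 25, 20, 2), (27, 23, 13, 2),
      ])

      # One row per department: distinct parents and all preferences as CSV strings.
      for row in db.execute("""
              SELECT Department_Id,
                     group_concat(DISTINCT ImmediateParent) AS parents,
                     group_concat(preferences)              AS preferences
              FROM map
              GROUP BY Department_Id"""):
          print(row)   # e.g. (1, '5,6', '14,15,16') and (2, '4,3,20,13', '9,4,24,25,23')

      # And the per-parent breakdown:
      for row in db.execute("""
              SELECT ImmediateParent, group_concat(preferences)
              FROM map
              GROUP BY ImmediateParent"""):
          print(row)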

    Read the article
