Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • Which is better: a span with runat="server" or the default asp:Label?

    - by Space Cracker
    I have a simple ASP.NET web page that contains a table with about 5 rows, each row having 2 cells. In Page_Load I get the user's data (5 properties) and display it in this page. These are the first 2 rows:

    ```html
    <table>
      <tr>
        <td>FullName</td>
        <td><span id="fullNameSpan" runat="server"></span></td>
      </tr>
      <tr>
        <td>Username</td>
        <td><span id="userNameSpan" runat="server"></span></td>
      </tr>
    </table>
    ```

    I have always used <asp:Label> to set values from code, but I noticed that a Label is converted to a span at runtime, so I decided to use a span and make it runat="server" so it can be accessed from code. So which is better to use: asp:Label, or a span with runat="server"?
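    Either works; the practical difference is the control type you get in code-behind. An <asp:Label> is a full WebControl (view state, style properties) that renders as a span, while a runat="server" span is parsed as the lighter HtmlGenericControl. A minimal code-behind sketch of both (fullNameSpan is from the question; fullNameLabel is a hypothetical <asp:Label ID="fullNameLabel" runat="server" /> added for comparison):

    ```csharp
    protected void Page_Load(object sender, EventArgs e)
    {
        // span runat="server" -> HtmlGenericControl.
        // InnerText HTML-encodes the value; InnerHtml would not.
        fullNameSpan.InnerText = "Jane Doe";

        // asp:Label -> Label (a WebControl that renders as <span>).
        // Text is written to the output as-is, without encoding.
        fullNameLabel.Text = "Jane Doe";
    }
    ```

    The rendered HTML is nearly identical either way, so the choice usually comes down to whether you need the Label's extra WebControl features; if not, the plain span carries slightly less overhead.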

    Read the article

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no indexed columns and no optimization code yet. I am positive I should index certain columns, since I search them a lot. My question is: HOW do I test whether any part of my database will be slow? At the moment I am using SQLite, and I will be switching to either MS SQL or MySQL depending on my host provider. Will creating 100,000 records in each table be enough? Or will that always be fast in SQLite, so I need 1 million? Do I need 10 million before a database becomes slow? Also, how do I time it? I am using C#, so should I use Stopwatch, or is there an ADO.NET/SQLite function I should use?
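    Stopwatch is the standard tool for this kind of measurement. A minimal sketch, assuming the System.Data.SQLite provider and a hypothetical people table with a name column:

    ```csharp
    using System;
    using System.Data.SQLite;   // assumes the System.Data.SQLite package
    using System.Diagnostics;

    class QueryTimer
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=test.db"))
            {
                conn.Open();
                using (var cmd = new SQLiteCommand(
                    "SELECT COUNT(*) FROM people WHERE name = @n", conn))
                {
                    cmd.Parameters.AddWithValue("@n", "smith");

                    var sw = Stopwatch.StartNew();   // wall-clock timer
                    cmd.ExecuteScalar();             // the query being measured
                    sw.Stop();
                    Console.WriteLine("Query took {0} ms", sw.ElapsedMilliseconds);
                }
            }
        }
    }
    ```

    Run each query several times and discard the first result (cold caches skew it), and load data volumes close to what production will see; in SQLite, EXPLAIN QUERY PLAN will also tell you whether a query uses an index or scans the table.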

    Read the article

  • C++, using one byte to store two variables

    - by 2di
    Hi all. I am working on a representation of a chess board, and I am planning to store it in a 32-byte array, where each byte holds two pieces (that way only 4 bits are needed per piece). Doing it this way results in overhead when accessing a particular index of the board. Do you think this code can be optimised, or that a completely different method of accessing indexes could be used?

    ```cpp
    #include <cstdlib>
    #include <iostream>
    using std::cout;
    using std::endl;

    char getPosition(unsigned char* c, int index) {
        c += (index >> 1);       // move pointer to the byte for this index
        if (index & 1) {         // odd number: take the right (low) nibble
            return *c & 0xF;
        } else {                 // even number: take the left (high) nibble
            return *c >> 4;
        }
    }

    void setValue(unsigned char* board, char value, int index) {
        board += (index >> 1);   // move pointer
        if (index & 1) {         // odd number: replace the right (low) nibble,
            *board = (*board & 0xF0) + value;        // keeping the left 4 bits
        } else {                 // even number: replace the left (high) nibble
            *board = (*board & 0xF) + (value << 4);
        }
    }

    int main() {
        char* c = (char*)malloc(32);
        for (int i = 0; i < 64; i++) {
            setValue((unsigned char*)c, i % 8, i);
        }
        for (int i = 0; i < 64; i++) {
            cout << (int)getPosition((unsigned char*)c, i) << " ";
            if (((i + 1) % 8 == 0) && (i > 0)) {
                cout << endl;
            }
        }
        return 0;
    }
    ```

    I am equally interested in your opinions regarding chess representations, and in the optimisation of the method above as a stand-alone problem. Thanks a lot.
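    The branch in both accessors can be removed entirely by computing the nibble's shift amount from the index parity. A sketch of the idea, written in C# to match the other examples on this page (the bit manipulation ports directly to C++), keeping the question's layout of even index = high nibble:

    ```csharp
    using System;

    class PackedBoard
    {
        // Two 4-bit squares per byte: even index = high nibble, odd = low.
        static int GetPosition(byte[] board, int index)
        {
            int shift = ((index & 1) ^ 1) << 2;        // 4 for even, 0 for odd
            return (board[index >> 1] >> shift) & 0xF;
        }

        static void SetValue(byte[] board, int value, int index)
        {
            int shift = ((index & 1) ^ 1) << 2;
            int mask = 0xF << shift;                   // the nibble being replaced
            board[index >> 1] = (byte)((board[index >> 1] & ~mask) | (value << shift));
        }

        static void Main()
        {
            var board = new byte[32];
            for (int i = 0; i < 64; i++) SetValue(board, i % 8, i);
            for (int i = 0; i < 64; i++)
                Console.Write("{0}{1}", GetPosition(board, i), (i + 1) % 8 == 0 ? "\n" : " ");
        }
    }
    ```

    Whether this beats the branchy version is hardware- and compiler-dependent, so measure both; many chess programs simply spend 64 bytes (one byte per square) and trade a little memory for simpler access.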

    Read the article

  • Fastest way to check whether a certain index in a LINQ statement is null

    - by tehdoommarine
    Basic details: I have a LINQ statement that grabs some records from a database and puts them in an IEnumerable:

    ```csharp
    var someRecords = someRepoAttachedToDatabase.Where(p => true);
    ```

    Suppose this grabs tons (25k+) of records, and I need to perform operations on all of them. To speed things up, I have decided to use paging and perform the needed operations in blocks of 100 instead of on all of the records at the same time.

    The question: the line in question is where I count the number of records in the subset to see if we are on the last page; if the number of records in the subset is less than the page size, then there are no more records left. What I would like to know is: what is the fastest way to do this?

    Code in question:

    ```csharp
    int pageSize = 100;
    bool moreData = true;
    int currentPage = 1;
    while (moreData)
    {
        // this is also an IEnumerable
        var subsetOfRecords = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize);
        if (subsetOfRecords.Count() < pageSize) { moreData = false; } // line in question
        // do stuff to records in subset
        currentPage++;
    }
    ```

    Things I have considered:
    - subsetOfRecords.Count() < pageSize
    - subsetOfRecords.ElementAt(pageSize - 1) == null (causes an out-of-bounds exception; I can catch the exception and set moreData to false there)
    - converting subsetOfRecords to an array (converting someRecords to an array will not work due to the way subsetOfRecords is declared, but I am open to changing it)

    I'm sure there are plenty of other ideas that I have missed.
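    If someRecords is a deferred query, each Count() call re-executes it, so every page is fetched twice: once to count it and once to process it. One way around that is to materialise each page exactly once; a sketch under the same setup as the question:

    ```csharp
    int pageSize = 100;
    int currentPage = 1;
    while (true)
    {
        // ToList() enumerates the page once; Count is then a stored
        // property of the list, not a second round-trip to the data source.
        var page = someRecords.Skip((currentPage - 1) * pageSize)
                              .Take(pageSize)
                              .ToList();

        foreach (var record in page)
        {
            // do stuff to records in this page
        }

        if (page.Count < pageSize) { break; }  // short page: that was the last one
        currentPage++;
    }
    ```

    This also avoids the exception-based ElementAt check entirely.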

    Read the article

  • Does having to insert a record, then update the same record, warrant a 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items, and we're storing the total cost of an order (the sum of the prices on its order lines) in the orders table:

    ```
    orders               lines
    ------               -----
    id                   id
    ref                  order_id
    total_cost           price
    ```

    In a simple application, the order and its lines are created during the same step of the checkout process. So this means:

    ```sql
    INSERT INTO orders ....
    -- get ID of inserted order record
    INSERT INTO lines VALUES (NULL, order_id, ...), ...
    ```

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:

    1. create an order
    2. create lines on the order
    3. calculate the cost of the order based on its lines
    4. then update the record created in step 1 in the orders table

    This would mean a nullable total_cost field on orders, for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table, but I think it's redundant. Ideally, since everything required to calculate total costs (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
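    If the line items are already in hand when the checkout step runs, the total can be computed in application code before the order row is written, which removes both the nullable column and the follow-up UPDATE. A sketch using plain ADO.NET (the LineItem type and SQL are illustrative, not from the question):

    ```csharp
    using System.Collections.Generic;
    using System.Data;
    using System.Linq;

    class LineItem { public decimal Price; }

    static class Checkout
    {
        // The total is computed from the lines *before* the order row is
        // written, so orders.total_cost never needs to be nullable or updated.
        public static void PlaceOrder(IDbConnection conn, string orderRef,
                                      List<LineItem> lines)
        {
            decimal total = lines.Sum(l => l.Price);
            using (IDbTransaction tx = conn.BeginTransaction())
            {
                var cmd = conn.CreateCommand();
                cmd.Transaction = tx;
                cmd.CommandText =
                    "INSERT INTO orders (ref, total_cost) VALUES (@ref, @total)";
                AddParam(cmd, "@ref", orderRef);
                AddParam(cmd, "@total", total);
                cmd.ExecuteNonQuery();
                // ... fetch the new order id and insert the lines here ...
                tx.Commit();
            }
        }

        static void AddParam(IDbCommand cmd, string name, object value)
        {
            var p = cmd.CreateParameter();
            p.ParameterName = name;
            p.Value = value;
            cmd.Parameters.Add(p);
        }
    }
    ```

    The transaction keeps the order and its lines atomic, so no reader ever sees an order whose stored total disagrees with its lines.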

    Read the article

  • Database for a large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, each item in the dataset being roughly 1kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (and needs to scale to 500 million+ 1kB data chunks). What would be the best method of storing this dataset? It needs to allow adding more items and reading them rapidly, but already-added data is never modified. Would a MySQL DB using the binary BLOB format be appropriate? Or should each of these be stored as files on a file system?

    Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.

    Read the article

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type.

    ```csharp
    public struct Child
    {
        readonly float X;
        readonly float Y;
        readonly int myField;
    }

    public struct Parent
    {
        readonly int id;
        readonly int field1;
        readonly int field2;
        readonly Child[] children;
    }
    ```

    The data is chunked up nicely into small portions of Parent[]s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk and back. (One file would be approx. 200-300KB.)

    What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough? Or should I hack around with StructLayout (see the accepted answer)? I am not sure whether that would work with the array field Parent.children.

    UPDATE: Response to comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300KB does not sound like much, but I have zillions of files like that, so speed does matter.
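    For flat structs like these, a hand-rolled BinaryWriter/BinaryReader pair is usually far faster than BinaryFormatter-style serialization, at the price of spelling out the field order by hand. A sketch; it assumes the structs expose constructors and accessors for their fields (the question's fields are private readonly, so the real types would need those added):

    ```csharp
    using System.IO;

    static class ParentIO
    {
        // Layout: element count, then fixed-size fields; children inline
        // after their parent, prefixed with their own count.
        public static void Write(BinaryWriter w, Parent[] chunk)
        {
            w.Write(chunk.Length);
            foreach (var p in chunk)
            {
                w.Write(p.Id); w.Write(p.Field1); w.Write(p.Field2);
                w.Write(p.Children.Length);
                foreach (var c in p.Children)
                {
                    w.Write(c.X); w.Write(c.Y); w.Write(c.MyField);
                }
            }
        }

        public static Parent[] Read(BinaryReader r)
        {
            var chunk = new Parent[r.ReadInt32()];
            for (int i = 0; i < chunk.Length; i++)
            {
                int id = r.ReadInt32(), f1 = r.ReadInt32(), f2 = r.ReadInt32();
                var children = new Child[r.ReadInt32()];
                for (int j = 0; j < children.Length; j++)
                    children[j] = new Child(r.ReadSingle(), r.ReadSingle(),
                                            r.ReadInt32());
                chunk[i] = new Parent(id, f1, f2, children);
            }
            return chunk;
        }
    }
    ```

    Reads are sequential and allocation is exactly one array per Parent, which tends to make deserialization the cheap direction, matching the stated priority.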

    Read the article

  • Is it faster to filter and get data, or filter then get data?

    - by remi bourgarel
    Hi, I have this kind of request:

    ```sql
    SELECT myTable.ID,
           myTable.Adress,
           -- 20 more columns of all kind of type
    FROM myTable
    WHERE EXISTS (SELECT *
                  FROM myLink
                  WHERE myLink.FID = myTable.ID AND myLink.FID2 = 666)
    ```

    myLink has a lot of rows. Do you think it's faster to do it like this:

    ```sql
    SELECT myLink.FID INTO @result
    FROM myLink
    WHERE myLink.FID2 = 666

    UPDATE @result
    SET Adress = myTable.Adress,
        -- 20 more columns of all kind of type
    FROM myTable
    WHERE myTable.ID = @result.ID
    ```

    Read the article

  • Perl launched from Java takes forever

    - by Wade Williams
    I know this is an absolute shot in the dark, but we're absolutely perplexed. A Perl (5.8.6) script run by Java (1.5) takes more than an hour to complete. The same script, when run manually from the command line, takes 12 minutes to complete. This is on a Linux host. Logging is the same in both cases, and the script is run with the same parameters in both cases. The script does some complex stuff like Oracle DB access, some scp's, etc., but again, it performs the exact same actions in both cases. We're stumped. Has anyone ever run into a similar situation? If not, how would you go about debugging it if you were faced with the same situation?

    Read the article

  • Rails belongs_to SQL statement using NULL id

    - by Team Pannous
    When paginating through our Phrase table it takes very long to return the results. In the SQL logs we see many SQL requests which don't make sense to us:

    ```
    Phrase Load (7.4ms) SELECT "phrases".* FROM "phrases" WHERE "phrases"."id" IS NULL LIMIT 1
    User Load (0.4ms) SELECT "users".* FROM "users" WHERE "users"."id" IS NULL LIMIT 1
    ```

    These add up significantly. Is there a way to prevent querying against NULL ids? This is the underlying model:

    ```ruby
    class Phrase < ActiveRecord::Base
      belongs_to :user
      belongs_to :response, :class_name => "Phrase", :foreign_key => "next_id"
    end
    ```

    Read the article

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands, and it's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down, and script execution runs into minutes. This is a pain because the script is run frequently.

    I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements.

    I'm toying with the idea of isolating the string manipulation operation and writing it in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems. I have searched, and I don't believe this duplicates any existing questions.

    Read the article

  • Faster code with another compiler

    - by Andrei
    I'm using the standard gcc compiler for math software development in C. I don't know much about compilers or compiler options, and I was just wondering: is it possible to make faster executables by using another compiler or choosing better options? The default Makefile sets the options -ffast-math and -O3, and I think both of them have some impact on the overall calculation time. My software uses memory quite extensively, so I imagine some options related to memory management might do the trick? Any ideas?
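    Before switching compilers, it's worth trying a few more of gcc's own switches; all of the flags below are standard gcc options, but the gains are workload-dependent, so measure each change (and note that -march=native ties the binary to the build machine's CPU):

    ```
    gcc -O3 -ffast-math -march=native -flto -funroll-loops prog.c -o prog
    # -march=native : use the full instruction set of the build machine
    # -flto         : optimise across translation units at link time
    # -funroll-loops: often helps tight numeric loops, sometimes hurts
    ```

    gcc also supports profile-guided optimisation (compile with -fprofile-generate, run a representative workload, then recompile with -fprofile-use), which can help memory-bound numeric code. Beyond that, trying a different compiler such as clang or icc is a legitimate experiment; the same source can perform noticeably differently between them.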

    Read the article

  • Which method of adding items to the ASP.NET Dictionary class is more efficient?

    - by ahmd0
    I'm converting a comma-separated list of strings into a dictionary using C# in ASP.NET (omitting any duplicates):

    ```csharp
    // just a random string for the sake of this example
    string str = "1,2, 4, 2, 4, item 3,item2, item 3";
    ```

    and I was wondering which method is more efficient.

    1 - Using a try/catch block:

    ```csharp
    Dictionary<string, string> dic = new Dictionary<string, string>();
    string[] strs = str.Split(',');
    foreach (string s in strs)
    {
        if (!string.IsNullOrWhiteSpace(s))
        {
            try
            {
                string s2 = s.Trim();
                dic.Add(s2, s2);
            }
            catch { }
        }
    }
    ```

    2 - Or using the ContainsKey() method:

    ```csharp
    string[] strs = str.Split(',');
    foreach (string s in strs)
    {
        if (!string.IsNullOrWhiteSpace(s))
        {
            string s2 = s.Trim();
            if (!dic.ContainsKey(s2))
                dic.Add(s2, s2);
        }
    }
    ```
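    Thrown exceptions are expensive, so option 1 only wins when duplicates are rare; option 2 pays an extra hash lookup per item but never throws. A third route avoids both costs: the dictionary indexer silently overwrites, and when the key and value are the same string, a HashSet<string> states the intent directly. A sketch:

    ```csharp
    using System;
    using System.Collections.Generic;

    class Dedupe
    {
        static void Main()
        {
            string str = "1,2, 4, 2, 4, item 3,item2, item 3";

            // Indexer assignment: one lookup, last write wins, no exception.
            var dic = new Dictionary<string, string>();
            foreach (string s in str.Split(','))
            {
                if (string.IsNullOrWhiteSpace(s)) continue;
                string s2 = s.Trim();
                dic[s2] = s2;
            }

            // When key == value, a set is the more direct data structure:
            var set = new HashSet<string>();
            foreach (string s in str.Split(','))
                if (!string.IsNullOrWhiteSpace(s))
                    set.Add(s.Trim());   // Add simply returns false on duplicates

            Console.WriteLine("{0} unique items", set.Count);
        }
    }
    ```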

    Read the article

  • Will creating an index help in this case?

    - by The King
    I'm still a learning user of SQL Server 2005. Here is my table structure:

    ```sql
    CREATE TABLE [dbo].[Trn_PostingGroups](
        [ControlGroup] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
        [PracticeCode] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
        [ScanDate] [smalldatetime] NULL,
        [DepositDate] [smalldatetime] NULL,
        [NameOfFile] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        [DepositValue] [decimal](11, 2) NULL,
        [RecordStatus] [char](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
        CONSTRAINT [PK_Trn_PostingGroups_1] PRIMARY KEY CLUSTERED
        (
            [ControlGroup] ASC,
            [PracticeCode] ASC
        ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
    ) ON [PRIMARY]
    ```

    Scenario 1: suppose I have a query like this:

    ```sql
    SELECT * FROM Trn_PostingGroups WHERE PracticeCode = 'ABC'
    ```

    Will indexing PracticeCode separately help make my query faster?

    Scenario 2:

    ```sql
    SELECT * FROM Trn_PostingGroups
    WHERE ControlGroup = 12701 AND PracticeCode = 'ABC' AND NameOfFile = 'FileName1'
    ```

    Will indexing NameOfFile separately help make my query faster?

    Read the article

  • Replacing certain words with links to definitions using JavaScript

    - by adharris
    I am trying to create a glossary system which gets a list of common words and their definitions via AJAX, then replaces any occurrence of those words in certain elements (those with the useGlossary class) with a link to the full definition, and provides a short definition on mouse hover. The way I am doing it works, but for large pages it takes 30-40 seconds, during which the page hangs. I would like to either decrease the time the replacement takes, or make the replacement run in the background without hanging the page. I am using jQuery for most of the JavaScript, and qTip for the mouse hover. Here is my existing slow code:

    ```javascript
    $(document).ready(function () {
        $.get("fetchGlossary.cfm", null, glossCallback, "json");
    });

    function glossCallback(data) {
        $(".useGlossary").each(function() {
            var $this = $(this);
            for (var i in data) {
                $this.html($this.html().replace(
                    new RegExp("\\b" + data[i].term + "\\b", "gi"),
                    function(m) { return makeLink(m, data[i].def); }));
            }
            $this.find("a.glossary").qtip({ style: { name: 'blue', tip: true } });
        });
    }

    function makeLink(m, def) {
        return "<a class='glossary glossary" + m.replace(/\s/gi, "").toUpperCase() +
            "' href='reference/glossary.cfm' title='" + def + "'>" + m + "</a>";
    }
    ```

    Thanks for any feedback/suggestions!

    Read the article

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server as XML. The content is used by the app; different Activities use different parts of the content. My question is a design one: should I save the data in SQLite, or just keep it as an XML file? Which one would be faster to read? The app can only use one content piece at a time, which means each new XML download replaces the old one.

    Read the article

  • Java program runs smoothly in NetBeans but slowly in Eclipse and as an executed JAR. WTF?

    - by comp sci balla
    A Java program that does frequent Swing/AWT painting animation (but nothing more advanced than g.fillOval(...)) runs at a consistent 60fps in NetBeans, and at about 6fps when run in Eclipse or executed as a JAR file from a Unix terminal. The program was developed in NetBeans and is a run-of-the-mill desktop application (not Web Start or an applet or ...). This is occurring on Ubuntu 10 with Java 1.6. How is this possible? The universe no longer makes sense to me.

    Read the article

  • Re-sorting a std::vector vs. calling erase/insert

    - by Abruzzo Forte e Gentile
    I have a sorted std::vector of relatively small size (from 5 to 20 elements). I used std::vector since the data is contiguous, so I get speed from the cache. At a specific point I need to remove an element from this vector, and I now have a doubt: which of the two options below is the fastest way to remove the value?

    1. Set that element to 0 and call sort to reorder: this costs a sort, but the elements stay on the same cache line.
    2. Call erase, which will copy (or memcpy, who knows?) all elements after it back by one place (I need to investigate what happens behind the scenes of erase).

    Do you know which one is faster? I think the same approach could be considered for inserting a new element without hitting the max capacity of the vector. Regards, AFG

    Read the article

  • What are the suggested alternatives for Class<T>.isAssignableFrom(Class<?> cls)?

    - by Wing C. Chen
    Currently I am profiling a piece of code. During the profiling, I discovered that one particular method call, Class<T>.isAssignableFrom(Class<?> cls), takes up quite a large share of the entire time. Because this is a reflection method, it takes a lot of time compared to normal keywords or method calls. I am wondering whether there are good alternatives to this method call?

    Read the article

  • Counting context switches per thread

    - by Sarmun
    Is there a way to see how many context switches each thread generates (both in and out, if possible)? Either as a rate (X/s), or letting it run and getting aggregated data after some time. This is on Linux or Windows. I have only found tools that give an aggregated context-switch number for the whole OS or per process. My program makes many context switches (50k/s), probably a lot of them unnecessary, but I am not sure where to start optimizing, i.e. where most of them happen.
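    On Linux, per-thread counters are exposed in /proc/<pid>/task/<tid>/status as the voluntary_ctxt_switches and nonvoluntary_ctxt_switches lines, and pidstat -wt reports the same numbers as per-thread rates. A minimal sketch that dumps them for the current process (written in C# to match the other examples on this page; any language that can read /proc works the same way):

    ```csharp
    using System;
    using System.IO;

    class CtxSwitches
    {
        static void Main()
        {
            // Each thread of the current process has a directory under /proc/self/task.
            foreach (string dir in Directory.GetDirectories("/proc/self/task"))
            {
                string tid = Path.GetFileName(dir);
                foreach (string line in File.ReadLines(Path.Combine(dir, "status")))
                {
                    // Relevant lines look like "voluntary_ctxt_switches:  123".
                    if (line.StartsWith("voluntary_ctxt_switches") ||
                        line.StartsWith("nonvoluntary_ctxt_switches"))
                        Console.WriteLine("tid {0}: {1}", tid, line.Trim());
                }
            }
        }
    }
    ```

    Sampling the counters twice and diffing gives a per-second rate; a high nonvoluntary count points at threads being preempted, while a high voluntary count points at blocking calls.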

    Read the article

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please suggest the quickest and most efficient way to compare a large set of values? There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat:

    ```
    code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
    code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
    code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
    ...
    codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);
    ```

    The lists are huge: there are about 100 parent codes, and each has about 250 values. There will be no duplicates within a single code's list. I am doing this in Java, and the solution I could figure out is: store the values of the first code's list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0. Then compare the rest of the values with this; if a value is in the map, increment the count, otherwise append it to the map. The downfall of this is getting the duplicates out: another iteration needs to be performed on a very large map. An alternative is to maintain another hashmap for duplicates, duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. I hope one of you can help me with it.
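    A single counting pass over all the values handles both jobs at once: duplicates are just the entries whose count exceeds 1, and with at most 100 x 250 = 25,000 values, the final filter over the map is cheap next to the counting itself. A sketch (in C# like the other examples on this page; it maps one-to-one onto a Java HashMap):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class DuplicateCounter
    {
        static void Main()
        {
            // Hypothetical stand-in for the ~100 parent codes x ~250 values each.
            var codes = new List<string[]>
            {
                new[] { "a", "b", "c" },
                new[] { "b", "c", "d" },
                new[] { "c", "e", "f" },
            };

            // One pass: count occurrences of every value across all lists.
            var counts = new Dictionary<string, int>();
            foreach (var list in codes)
                foreach (var value in list)
                    counts[value] = counts.TryGetValue(value, out int n) ? n + 1 : 1;

            // Values seen more than once are the duplicates.
            foreach (var dup in counts.Where(kv => kv.Value > 1))
                Console.WriteLine("{0} appears {1} times", dup.Key, dup.Value);
        }
    }
    ```

    Since each list is duplicate-free, a value's count equals the number of parent codes it appears under; hashing gives O(1) expected work per value, so the whole comparison is linear in the total number of values.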

    Read the article

  • C++ Program performs better when piped

    - by ET1 Nerd
    I haven't done any programming in a decade. I wanted to get back into it, so I made this little pointless program as practice. The easiest way to describe what it does is with the output of my --help block:

    ```
    ./prng_bench --help
    ./prng_bench: usage: ./prng_bench $N $B [$T]
    This program will generate an N digit base(B) random number until
    all N digits are the same. Once a repeating N digit base(B) number
    is found, the following statistics are displayed:
      -Decimal value of all N digits.
      -Time & number of tries taken to randomly find.
    Optionally, this process is repeated T times. When running multiple
    repetitions, averages for all N digit base(B) numbers are displayed
    at the end, as well as total time and total tries.
    ```

    My "problem" is that when the problem is "easy", say a 3-digit base-10 number, and I have it do a large number of passes, the "total time" is less when piped to grep. That is, comparing `command` with `command | grep took`:

    ```
    ./prng_bench 3 10 999999 ; ./prng_bench 3 10 999999 | grep took
    ....
    Pass# 999999: All 3 base(10) digits = 3 base(10). Time: 0.00005 secs. Tries: 23
    It took 191.86701 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.
    An average of 0.00019 secs & 99 tries was needed to find each one.
    It took 159.32355 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers.
    ```

    If I run the same command many times without grep, the times are always VERY close. I'm using srand(1234) for now, to test. The code between my calls to clock_gettime() for start and stop does not involve any stream manipulation, which would obviously affect time. I realize this is an exercise in futility, but I'd like to know why it behaves this way. Below is the heart of the program. Here's a link to the full source on Dropbox if anybody wants to compile and test (clock_gettime() requires -lrt): https://www.dropbox.com/s/6olqnnjf3unkm2m/prng_bench.cpp

    ```cpp
    for (int pass_num = 1; pass_num <= passes; pass_num++) {  // executes $passes times
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time);  // get time
        start_time = timetodouble(temp_time);                 // convert to double
        // loop until the comparison loop fully completes; count reps as 'tries'
        for (i = 1, tries = 0; i != 0; tries++) {
            for (i = 0; i < Ndigits; i++)        // move forward through array
                results[i] = (rand() % base);    // assign a random digit of base
            /* --- debug lines, a LOT of output; uncomment to enable ---
            for (i = 0; i < Ndigits; i++)
                std::cout << " " << results[i];
            std::cout << "\n"; */
            // Move through the array; a differing element breaks out with
            // i != 0, so new digits are drawn. If all are equal, i ends at 0
            // and the outer for condition is satisfied.
            for (i = Ndigits - 1; i > 0 && results[i] == results[0]; i--);
        }
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time);  // get time
        draw_time = (timetodouble(temp_time) - start_time);   // this pass's duration
        total_time += draw_time;   // add this pass's time to the total
        total_tries += tries;      // add this pass's tries to the total
        /* Formatted output for each pass:
           Pass# ---: All -- base(--) digits = -- base(10). Time: ----.---- secs. Tries: ----- */
        std::cout << "Pass# " << std::setw(width_pass) << pass_num
                  << ": All " << Ndigits << " base(" << base << ") digits = "
                  << std::setw(width_base) << results[0] << " base(10). Time: "
                  << std::setw(width_time) << draw_time
                  << " secs. Tries: " << tries << "\n";
    }
    if (passes == 1)
        return 0;  // no need for totals and averages of 1 pass
    /* It took ----.---- secs & ------ tries to find --- repeating -- digit base(--) numbers.
       An average of ---.---- secs & ---- tries was needed to find each one. */
    std::cout << "It took " << total_time << " secs & " << total_tries
              << " tries to find " << passes << " repeating " << Ndigits
              << " digit base(" << base << ") numbers.\n"
              << "An average of " << total_time / passes << " secs & "
              << total_tries / passes << " tries was needed to find each one. \n\n";
    return 0;
    ```

    Read the article
