Search Results

Search found 33242 results on 1330 pages for 'database optimization'.


  • Is ToString() optimized by the compiler?

    - by TheVillageIdiot
    Suppose I have the following code: Console.WriteLine("Value1: " + SomeEnum.Value1.ToString() + "\r\nValue2: " + SomeOtherEnum.Value2.ToString()); Will the compiler optimize this to: Console.WriteLine("Value1: " + SomeEnum.Value1 + "\r\nValue2: " + SomeOtherEnum.Value2); I've checked it with IL Disassembler and there are calls to IL_005a: callvirt instance string [mscorlib]System.Object::ToString() so I don't know whether the JIT optimizes this.
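
    One way to settle it empirically is a quick micro-benchmark; the sketch below is not from the question and only assumes an enum named SomeEnum with a member Value1. Note that without the explicit .ToString() the C# compiler emits String.Concat(object, ...), which boxes the enum and calls ToString() on it anyway, so the two forms usually cost about the same.

        // minimal sketch: time both forms of the concatenation
        using System;
        using System.Diagnostics;

        enum SomeEnum { Value1 }

        static class Program
        {
            static void Main()
            {
                const int N = 1000000;
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) { var s = "Value1: " + SomeEnum.Value1.ToString(); }
                Console.WriteLine("explicit ToString: " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int i = 0; i < N; i++) { var s = "Value1: " + SomeEnum.Value1; }  // boxes the enum, ToString happens inside Concat
                Console.WriteLine("implicit ToString: " + sw.ElapsedMilliseconds + " ms");
            }
        }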

    Read the article

  • Database file is inexplicably locked during SQLite commit

    - by sweeney
    Hello, I'm performing a large number of INSERTS to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then when I deem appropriate, I loop over all of that data and perform the INSERTS. The code for this is shown below: public void Commit() { using (SQLiteConnection conn = new SQLiteConnection(this.connString)) { conn.Open(); using (SQLiteTransaction trans = conn.BeginTransaction()) { using (SQLiteCommand command = conn.CreateCommand()) { command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)"; command.Parameters.Add(this.col1Param); command.Parameters.Add(this.col2Param); foreach (Data o in this.dataTemp) { this.col1Param.Value = o.Col1Prop; this. col2Param.Value = o.Col2Prop; command.ExecuteNonQuery(); } } this.TryHandleCommit(trans); } conn.Close(); } } I now employ the following gimmick to get the thing to eventually work: private void TryHandleCommit(SQLiteTransaction trans) { try { trans.Commit(); } catch (Exception e) { Console.WriteLine("Trying again..."); this.TryHandleCommit(trans); } } I create my DB like so: public DataBase(String path) { //build connection string SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder(); connString.DataSource = path; connString.Version = 3; connString.DefaultTimeout = 5; connString.JournalMode = SQLiteJournalModeEnum.Persist; connString.UseUTF16Encoding = true; using (connection = new SQLiteConnection(connString.ToString())) { //check for existence of db FileInfo f = new FileInfo(path); if (!f.Exists) //build new blank db { SQLiteConnection.CreateFile(path); connection.Open(); using (SQLiteTransaction trans = connection.BeginTransaction()) { using (SQLiteCommand command = connection.CreateCommand()) { command.CommandText = DataBase.CREATE_MATCHES; command.ExecuteNonQuery(); command.CommandText = DataBase.CREATE_STRING_DATA; command.ExecuteNonQuery(); //TODO add logging } trans.Commit(); } connection.Close(); } } } I then export the connection string and use it to obtain new connections in different parts of the program. At seemingly random intervals, though at far too great a rate to ignore or otherwise workaround this problem, I get unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to then. This does not always happen. Sometimes the whole thing runs without a hitch. No reads are being performed on these files before the commits finish. I have the very latest SQLite binary. I'm compiling for .NET 2.0. I'm using VS 2008. The db is a local file. All of this activity is encapsulated within one thread / process. Virus protection is off (though I think that was only relevant if you were connecting over a network?). As per Scotsman's post I have implemented the following changes: Journal Mode set to Persist DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call No inner exception Witnessed on two distinct machines (albeit very similar hardware and software) Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code... Does anyone have any idea whats going on here? I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question! 
brian UPDATES: Thanks for the suggestions so far! I've implemented many of the suggested changes and I feel we are getting closer to the answer... however, the code above technically works but is non-deterministic: it is not guaranteed to do anything other than spin forever. In practice it seems to succeed somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval the damage is mitigated, but I really do not want to leave things in this state... More suggestions welcome!
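
    The recursive TryHandleCommit above retries immediately and without limit, so a persistent lock turns into an infinite spin (and a steadily deepening call stack). A common pattern is a bounded retry loop with a short delay; the sketch below is an illustration only and assumes the System.Data.SQLite types used in the question.

        // sketch: bounded retry with back-off instead of unbounded recursion
        private void TryHandleCommit(SQLiteTransaction trans)
        {
            const int maxAttempts = 10;              // arbitrary limit for illustration
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    trans.Commit();
                    return;
                }
                catch (SQLiteException)
                {
                    if (attempt >= maxAttempts) throw;   // give up and surface the real error
                    Console.WriteLine("Commit failed, retrying ({0}/{1})...", attempt, maxAttempts);
                    System.Threading.Thread.Sleep(100 * attempt);   // give whatever holds the lock time to finish
                }
            }
        }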

    Read the article

  • Quickest way to write to a file in Java

    - by user1097772
    I'm writing an application which compares directory structures. First I wrote an application which gets info about files and writes one line about each file or directory. My solution is to call the method toFile: static PrintWriter pw = new PrintWriter(new BufferedWriter( new FileWriter("DirStructure.dlis")), true); String line; // info about file or directory public void toFile(String line) { pw.println(line); } and of course pw.close() at the end. My question is, can I do it quicker? What is the quickest way? Edit: quickest way = quickest writing to the file
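
    The constructor above passes true as the second PrintWriter argument, which turns on autoflush and forces a flush on every println, defeating the BufferedWriter. A first step, shown in this sketch (file name kept from the question), is to drop autoflush and enlarge the buffer:

        import java.io.*;

        public class DirWriter {
            private static PrintWriter pw;

            public static void open() throws IOException {
                // no autoflush; 64 KB buffer instead of the default
                pw = new PrintWriter(new BufferedWriter(new FileWriter("DirStructure.dlis"), 1 << 16));
            }

            public static void toFile(String line) {
                pw.println(line);          // buffered in memory, written in large chunks
            }

            public static void close() {
                pw.close();                // flushes whatever remains in the buffer
            }
        }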

    Read the article

  • Can I use Duff's Device on an array in C?

    - by Ben Fossen
    I have a loop here and I want to make it run faster. I am passing in a large array. I recently heard of Duff's Device; can it be applied to this for loop? Any ideas? for (i = 0; i < dim; i++) { for (j = 0; j < dim; j++) { dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; } }
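
    Duff's Device is essentially manual loop unrolling with the remainder folded into a switch, so for this rotate kernel a plain unroll of the inner loop shows the same idea. The sketch below is only a sketch: it assumes RIDX(i, j, n) expands to (i)*(n)+(j) (as in the CS:APP performance lab), int elements, and that dim need not be a multiple of 4. Because the dst accesses stride by dim, cache blocking usually helps this loop more than unrolling does.

        /* sketch: 4-way unrolled inner loop; the trailing loop handles dim % 4 */
        #define RIDX(i, j, n) ((i) * (n) + (j))

        void rotate_unrolled(int dim, const int *src, int *dst)
        {
            for (int i = 0; i < dim; i++) {
                int j = 0;
                for (; j + 3 < dim; j += 4) {
                    dst[RIDX(dim - 1 - j,       i, dim)] = src[RIDX(i, j,     dim)];
                    dst[RIDX(dim - 1 - (j + 1), i, dim)] = src[RIDX(i, j + 1, dim)];
                    dst[RIDX(dim - 1 - (j + 2), i, dim)] = src[RIDX(i, j + 2, dim)];
                    dst[RIDX(dim - 1 - (j + 3), i, dim)] = src[RIDX(i, j + 3, dim)];
                }
                for (; j < dim; j++)
                    dst[RIDX(dim - 1 - j, i, dim)] = src[RIDX(i, j, dim)];
            }
        }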

    Read the article

  • How can I speed up this code?

    - by kaushik
    In my program I have a method which requires about 4 files to be opened each time it is called, because I need to read some data from them. I have been storing all of this data from the files in lists for manipulation. I need to call this method approximately 10,000 times, which is making my program very slow. Is there a better way to handle these files? Is storing the whole data in a list time consuming, and what are better alternatives to a list? I can give some code, but my previous question was closed because it only confused everyone: it is part of a big program and would need to be explained completely to understand, so I am not giving any code. Please suggest approaches, treating this as a general question... thanks in advance
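
    The question does not name a language, but the mention of lists suggests Python; the sketch below is a guess at the general fix rather than the poster's code: read each file once, cache its contents, and let the hot method work on the cached data instead of reopening the files 10,000 times. All names are hypothetical.

        # sketch: load the four files once, then reuse the parsed data
        _cache = {}

        def load(path):
            if path not in _cache:
                with open(path) as f:
                    _cache[path] = f.read().splitlines()   # parse once, keep in memory
            return _cache[path]

        def process(record):
            # called ~10,000 times; only touches in-memory data
            data = [load(p) for p in ("a.txt", "b.txt", "c.txt", "d.txt")]
            ...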

    Read the article

  • How can I optimize this MySQL query with subqueries and joins?

    - by kevzettler
    I'm pretty green on mysql and I need some tips on cleaning up a query. It is used in several variations through out a site. Its got some subquerys derived tables and fun going on. Heres the query: # Query_time: 2 Lock_time: 0 Rows_sent: 0 Rows_examined: 0 SELECT * FROM ( SELECT products . *, categories.category_name AS category, ( SELECT COUNT( * ) FROM distros WHERE distros.product_id = products.product_id) AS distro_count, (SELECT COUNT(*) FROM downloads WHERE downloads.product_id = products.product_id AND WEEK(downloads.date) = WEEK(curdate())) AS true_downloads, (SELECT COUNT(*) FROM views WHERE views.product_id = products.product_id AND WEEK(views.date) = WEEK(curdate())) AS true_views FROM products INNER JOIN categories ON products.category_id = categories.category_id ORDER BY created_date DESC, true_views DESC ) AS count_table WHERE count_table.distro_count > 0 AND count_table.status = 'published' AND count_table.active = 1 LIMIT 0, 8 Heres the explain: +----+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+----------------------------------------------+ | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 232 | Using where | | 2 | DERIVED | categories | index | PRIMARY | idx_name | 47 | NULL | 13 | Using index; Using temporary; Using filesort | | 2 | DERIVED | products | ref | category_id | category_id | 4 | digizald_db.categories.category_id | 9 | | | 5 | DEPENDENT SUBQUERY | views | ref | product_id | product_id | 4 | digizald_db.products.product_id | 46 | Using where | | 4 | DEPENDENT SUBQUERY | downloads | ref | product_id | product_id | 4 | digizald_db.products.product_id | 14 | Using where | | 3 | DEPENDENT SUBQUERY | distros | ref | product_id | product_id | 4 | digizald_db.products.product_id | 1 | Using index | +----+--------------------+------------+-------+---------------+-------------+---------+------------------------------------+------+----------------------------------------------+ 6 rows in set (0.04 sec) And the Tables: mysql> describe products; +---------------+--------------------------------------------------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+--------------------------------------------------+------+-----+-------------------+----------------+ | product_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | product_key | char(32) | NO | | NULL | | | title | varchar(150) | NO | | NULL | | | company | varchar(150) | NO | | NULL | | | user_id | int(10) unsigned | NO | MUL | NULL | | | description | text | NO | | NULL | | | video_code | text | NO | | NULL | | | category_id | int(10) unsigned | NO | MUL | NULL | | | price | decimal(10,2) | NO | | NULL | | | quantity | int(10) unsigned | NO | | NULL | | | downloads | int(10) unsigned | NO | | NULL | | | views | int(10) unsigned | NO | | NULL | | | status | enum('pending','published','rejected','removed') | NO | | NULL | | | active | tinyint(1) | NO | | NULL | | | deleted | tinyint(1) | NO | | NULL | | | created_date | datetime | NO | | NULL | | | modified_date | timestamp | NO | | CURRENT_TIMESTAMP | | | scrape_source | varchar(215) | YES | | NULL | | 
+---------------+--------------------------------------------------+------+-----+-------------------+----------------+ 18 rows in set (0.00 sec) mysql> describe categories -> ; +------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+------------------+------+-----+---------+----------------+ | category_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | category_name | varchar(45) | NO | MUL | NULL | | | parent_id | int(10) unsigned | YES | MUL | NULL | | | category_type_id | int(10) unsigned | NO | | NULL | | +------------------+------------------+------+-----+---------+----------------+ 4 rows in set (0.00 sec) mysql> describe compatibilities -> ; +------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+------------------+------+-----+---------+----------------+ | compatibility_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | name | varchar(45) | NO | | NULL | | | code_name | varchar(45) | NO | | NULL | | | description | varchar(128) | NO | | NULL | | | position | int(10) unsigned | NO | | NULL | | +------------------+------------------+------+-----+---------+----------------+ 5 rows in set (0.01 sec) mysql> describe distros -> ; +------------------+--------------------------------------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+--------------------------------------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | product_id | int(10) unsigned | NO | MUL | NULL | | | compatibility_id | int(10) unsigned | NO | MUL | NULL | | | user_id | int(10) unsigned | NO | | NULL | | | status | enum('pending','published','rejected','removed') | NO | | NULL | | | distro_type | enum('file','url') | NO | | NULL | | | version | varchar(150) | NO | | NULL | | | filename | varchar(50) | YES | | NULL | | | url | varchar(250) | YES | | NULL | | | virus | enum('READY','PASS','FAIL') | YES | | NULL | | | downloads | int(10) unsigned | NO | | 0 | | +------------------+--------------------------------------------------+------+-----+---------+----------------+ 11 rows in set (0.01 sec) mysql> describe downloads; +------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | product_id | int(10) unsigned | NO | MUL | NULL | | | distro_id | int(10) unsigned | NO | MUL | NULL | | | user_id | int(10) unsigned | NO | MUL | NULL | | | ip_address | varchar(15) | NO | | NULL | | | date | datetime | NO | | NULL | | +------------+------------------+------+-----+---------+----------------+ 6 rows in set (0.01 sec) mysql> describe views -> ; +------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | product_id | int(10) unsigned | NO | MUL | NULL | | | user_id | int(10) unsigned | NO | MUL | NULL | | | ip_address | varchar(15) | NO | | NULL | | | date | datetime | NO | | NULL | | +------------+------------------+------+-----+---------+----------------+ 5 rows in set (0.00 sec)
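
    One common rewrite is to turn the three dependent subqueries (which run once per product row) into derived tables that are each aggregated once and then joined, keeping the status/active filters on products. The sketch below is untested against this schema and is only meant to show the shape of the rewrite; the INNER JOIN to the distros aggregate replaces the distro_count > 0 filter.

        SELECT p.*, c.category_name AS category,
               d.distro_count,
               COALESCE(dl.true_downloads, 0) AS true_downloads,
               COALESCE(v.true_views, 0)      AS true_views
        FROM products p
        INNER JOIN categories c ON c.category_id = p.category_id
        INNER JOIN (SELECT product_id, COUNT(*) AS distro_count
                    FROM distros GROUP BY product_id) d
                ON d.product_id = p.product_id
        LEFT JOIN (SELECT product_id, COUNT(*) AS true_downloads
                   FROM downloads
                   WHERE WEEK(date) = WEEK(CURDATE())
                   GROUP BY product_id) dl ON dl.product_id = p.product_id
        LEFT JOIN (SELECT product_id, COUNT(*) AS true_views
                   FROM views
                   WHERE WEEK(date) = WEEK(CURDATE())
                   GROUP BY product_id) v ON v.product_id = p.product_id
        WHERE p.status = 'published'
          AND p.active = 1
        ORDER BY p.created_date DESC, true_views DESC
        LIMIT 0, 8;

    Separately, because WEEK() wraps the date column, indexes on downloads(date) or views(date) cannot be used for that predicate; comparing date against the start of the current week with a range condition would make it sargable.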

    Read the article

  • Measure website performance

    - by s0mmer
    Hi, I was wondering if it is possible to install something, or use an online service, to measure your website's performance? I've seen many that just check the download speed of images, external files etc. But is it possible to measure how long ASP/PHP code takes to execute? I have a site running a bit slowly, and it would be very nice to have some app/service showing where to optimize.
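
    For the server-side part, a crude but useful first step is to time the request yourself before reaching for a full profiler (such as Xdebug on the PHP side). A minimal PHP sketch, not from the question:

        <?php
        $t0 = microtime(true);          // start of the request

        // ... page logic ...

        $elapsed = microtime(true) - $t0;
        error_log(sprintf("page took %.1f ms", $elapsed * 1000));
        ?>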

    Read the article

  • Difference between Logarithmic and Uniform cost criteria

    - by Marthin
    I'm having some trouble understanding the difference between the logarithmic (LCC) and uniform (UCC) cost criteria, and also how to use them in calculations. Could someone please explain the difference between the two and perhaps show how to calculate the complexity of an expression like A+B*C? (Yes, this is part of an assignment =) ) Thanks for any help! /Marthin
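
    For orientation (this is the standard textbook definition, not taken from the question): under the uniform cost criterion every primitive operation costs 1, while under the logarithmic cost criterion an operation on a value n costs roughly its bit length, L(n) = \lfloor \log_2 n \rfloor + 1. A hedged worked example for A + B*C:

        Uniform cost:      cost(A + B \cdot C) = 1 (multiply) + 1 (add) = 2
        Logarithmic cost:  cost(B \cdot C) \approx L(B) + L(C)
                           cost(A + B \cdot C) \approx L(A) + L(B \cdot C) \approx L(A) + L(B) + L(C)

    so under the logarithmic criterion the total grows with the bit lengths of the operands rather than being a constant.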

    Read the article

  • Code runs 6 times slower with 2 threads than with 1

    - by Edward Bird
    So I have written some code to experiment with threads and do some testing. The code should create some numbers and then find the mean of those numbers. I think it is just easier to show you what I have so far. I was expecting with two threads that the code would run about 2 times as fast. Measuring it with a stopwatch I think it runs about 6 times slower! void findmean(std::vector<double>*, std::size_t, std::size_t, double*); int main(int argn, char** argv) { // Program entry point std::cout << "Generating data..." << std::endl; // Create a vector containing many variables std::vector<double> data; for(uint32_t i = 1; i <= 1024 * 1024 * 128; i ++) data.push_back(i); // Calculate mean using 1 core double mean = 0; std::cout << "Calculating mean, 1 Thread..." << std::endl; findmean(&data, 0, data.size(), &mean); mean /= (double)data.size(); // Print result std::cout << " Mean=" << mean << std::endl; // Repeat, using two threads std::vector<std::thread> thread; std::vector<double> result; result.push_back(0.0); result.push_back(0.0); std::cout << "Calculating mean, 2 Threads..." << std::endl; // Run threads uint32_t halfsize = data.size() / 2; uint32_t A = 0; uint32_t B, C, D; // Split the data into two blocks if(data.size() % 2 == 0) { B = C = D = halfsize; } else if(data.size() % 2 == 1) { B = C = halfsize; D = hsz + 1; } // Run with two threads thread.push_back(std::thread(findmean, &data, A, B, &(result[0]))); thread.push_back(std::thread(findmean, &data, C, D , &(result[1]))); // Join threads thread[0].join(); thread[1].join(); // Calculate result mean = result[0] + result[1]; mean /= (double)data.size(); // Print result std::cout << " Mean=" << mean << std::endl; // Return return EXIT_SUCCESS; } void findmean(std::vector<double>* datavec, std::size_t start, std::size_t length, double* result) { for(uint32_t i = 0; i < length; i ++) { *result += (*datavec).at(start + i); } } I don't think this code is exactly wonderful, if you could suggest ways of improving it then I would be grateful for that also.
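
    Part of what usually causes this kind of slowdown is visible in the posted code: both threads write through result[0] and result[1] on every iteration, and those two doubles sit next to each other in the vector, so the threads fight over the same cache line (false sharing), on top of the per-element bounds checking of .at(). A sketch of findmean that accumulates into a local and writes once at the end (a guess at the fix, not the poster's code; the start/length bookkeeping in main also needs checking, since D is set from an undeclared hsz):

        // sketch: keep the running sum in a local, write to *result once
        void findmean(std::vector<double>* datavec, std::size_t start, std::size_t length, double* result)
        {
            double sum = 0.0;
            const double* p = datavec->data() + start;   // skip .at() bounds checks in the hot loop
            for (std::size_t i = 0; i < length; ++i)
                sum += p[i];
            *result = sum;                               // one shared write instead of millions
        }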

    Read the article

  • Faster way to split a string and count characters using R?

    - by chrisamiller
    I'm looking for a faster way to calculate GC content for DNA strings read in from a FASTA file. This boils down to taking a string and counting the number of times that the letter 'G' or 'C' appears. I also want to specify the range of characters to consider. I have a working function that is fairly slow, and it's causing a bottleneck in my code. It looks like this: ## ## count the number of GCs in the characters between start and stop ## gcCount <- function(line, st, sp){ chars = strsplit(as.character(line),"")[[1]] numGC = 0 for(j in st:sp){ ##nested ifs faster than an OR (|) construction if(chars[[j]] == "g"){ numGC <- numGC + 1 }else if(chars[[j]] == "G"){ numGC <- numGC + 1 }else if(chars[[j]] == "c"){ numGC <- numGC + 1 }else if(chars[[j]] == "C"){ numGC <- numGC + 1 } } return(numGC) } Running Rprof gives me the following output: > a = "GCCCAAAATTTTCCGGatttaagcagacataaattcgagg" > Rprof(filename="Rprof.out") > for(i in 1:500000){gcCount(a,1,40)}; > Rprof(NULL) > summaryRprof(filename="Rprof.out") self.time self.pct total.time total.pct "gcCount" 77.36 76.8 100.74 100.0 "==" 18.30 18.2 18.30 18.2 "strsplit" 3.58 3.6 3.64 3.6 "+" 1.14 1.1 1.14 1.1 ":" 0.30 0.3 0.30 0.3 "as.logical" 0.04 0.0 0.04 0.0 "as.character" 0.02 0.0 0.02 0.0 $by.total total.time total.pct self.time self.pct "gcCount" 100.74 100.0 77.36 76.8 "==" 18.30 18.2 18.30 18.2 "strsplit" 3.64 3.6 3.58 3.6 "+" 1.14 1.1 1.14 1.1 ":" 0.30 0.3 0.30 0.3 "as.logical" 0.04 0.0 0.04 0.0 "as.character" 0.02 0.0 0.02 0.0 $sampling.time [1] 100.74 Any advice for making this code faster?
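
    One direction (a sketch, not a drop-in tested replacement) is to avoid the character-by-character loop entirely and let a single vectorised call do the counting, for example with gregexpr on the substring of interest:

        ## count G/C (either case) between st and sp using one regex call
        gcCount2 <- function(line, st, sp) {
          region <- substr(as.character(line), st, sp)
          hits <- gregexpr("[GCgc]", region)[[1]]
          if (hits[1] == -1) 0 else length(hits)
        }

    For whole-genome work, dedicated packages (e.g. Biostrings from Bioconductor) also provide letter-frequency functions that operate on entire sequence sets at once.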

    Read the article

  • Google Web Optimizer -- How long until winning combination?

    - by Django Reinhardt
    I've had an A/B test running in Google Web Optimizer for six weeks now, and there's still no end in sight. Google is still saying: "We have not gathered enough data yet to show any significant results. When we collect more data we should be able to show you a winning combination." Is there any way of telling how close Google is to making up its mind? (Does anyone know what algorithm it uses to decide if there have been any "high confidence winners"?) According to the Google help documentation: Sometimes we simply need more data to be able to reach a level of high confidence. A tested combination typically needs around 200 conversions for us to judge its performance with certainty. But all of our combinations have over 200 conversions at the moment: 230 / 4061 (Original) 223 / 3937 (Variation 1) 205 / 3984 (Variation 2) 205 / 4007 (Variation 3) How much longer is it going to have to run? Thanks for any help.
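
    Whatever GWO does internally, a rough two-proportion z-test on the best and worst combinations (an outside calculation, not something GWO reports) suggests why it is still undecided: the rates are simply very close.

        p_1 = 230/4061 \approx 0.0566,\qquad p_2 = 205/4007 \approx 0.0512
        \bar{p} = (230 + 205)/(4061 + 4007) \approx 0.0539
        z = (p_1 - p_2) / \sqrt{\bar{p}(1-\bar{p})(1/4061 + 1/4007)} \approx 0.0055 / 0.0050 \approx 1.1

    A z of about 1.1 is well short of the roughly 1.96 needed for 95% confidence, so at these conversion rates the test needs substantially more traffic, or a larger underlying difference, before any combination can be called a winner.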

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these? import numpy as np import time def reshape_vector(v): b = np.empty((3,1)) for i in range(3): b[i][0] = v[i] return b def unit_vectors(r): return r / np.sqrt((r*r).sum(0)) def calculate_dipole(mu, r_i, mom_i): relative = mu - r_i r_unit = unit_vectors(relative) A = 1e-7 num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i) den = np.sqrt(np.sum(relative*relative, 0))**3 B = np.sum(num/den, 1) return B N = 20000 # number of dipoles r_i = np.random.random((3,N)) # positions of dipoles mom_i = np.random.random((3,N)) # moments of dipoles a = np.random.random((3,3)) # three basis vectors for this crystal n = [10,10,10] # points at which to evaluate sum gamma_mu = 135.5 # a constant t_start = time.clock() for i in range(n[0]): r_frac_x = np.float(i)/np.float(n[0]) r_test_x = r_frac_x * a[0] for j in range(n[1]): r_frac_y = np.float(j)/np.float(n[1]) r_test_y = r_frac_y * a[1] for k in range(n[2]): r_frac_z = np.float(k)/np.float(n[2]) r_test = r_test_x +r_test_y + r_frac_z * a[2] r_test_fast = reshape_vector(r_test) B = calculate_dipole(r_test_fast, r_i, mom_i) omega = gamma_mu*np.sqrt(np.dot(B,B)) # write r_test, B and omega to a file frac_done = np.float(i+1)/(n[0]+1) t_elapsed = (time.clock()-t_start) t_remain = (1-frac_done)*t_elapsed/frac_done print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
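
    One incremental step (a sketch, assuming memory rather than progress reporting is the real constraint) is to build all the grid points in one vectorised computation and keep only the dipole sum in the loop; results can still be written out and progress reported row by row:

        # sketch: precompute every r_test in one shot, then loop over rows
        fracs = np.mgrid[0:n[0], 0:n[1], 0:n[2]].reshape(3, -1).astype(float)
        fracs /= np.array(n, dtype=float)[:, None]      # fractional coordinates, shape (3, n_points)
        r_tests = np.dot(fracs.T, a)                    # each row is r_frac_x*a[0] + r_frac_y*a[1] + r_frac_z*a[2]

        for idx, r_test in enumerate(r_tests):
            B = calculate_dipole(reshape_vector(r_test), r_i, mom_i)
            omega = gamma_mu * np.sqrt(np.dot(B, B))
            # write r_test, B and omega to a file; report progress every so often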

    Read the article

  • Can anyone recommend a decent tool for optimising images other than Photoshop?

    - by toomanyairmiles
    Can anyone recommend a decent tool for optimising images other than Adobe Photoshop, the GIMP etc.? I'm looking to optimise images for the web, preferably online and free. Basically I have a client who can't install additional software on their work PC but needs to optimise photographs and other images for their website, and is presently uploading 1 or 2 MB files. On a personal level I'm interested to see what other people are using...

    Read the article

  • How to index a date column with null values?

    - by Heinz Z.
    How should I index a date column when some rows have null values? We have to select rows within a date range as well as rows with null dates. We use Oracle 9.2 and higher. Options I found: (1) using a bitmap index on the date column; (2) using an index on the date column plus an index on a state field whose value is 1 when the date is null; (3) using an index on the date column and another guaranteed not-null column. My thoughts on the options: to 1: the table has too many different values to use a bitmap index; to 2: I would have to add a field only for this purpose and change the query when I want to retrieve the null-date rows; to 3: it looks tricky to add a field to an index which is not really needed. What is the best practice for this case? Thanks in advance. Some articles I have read: Oracle Date Index; When does Oracle index null column values?
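
    Option 3 is the usual recommendation, and it does not even need a real second column: a composite index whose trailing key is a constant gets every row (including NULL dates) into the index, because Oracle only skips an index entry when all indexed columns are NULL. A hedged sketch with hypothetical names, to be verified on 9.2:

        -- composite index: the constant 0 ensures rows with a NULL date are still indexed
        CREATE INDEX ix_orders_date ON orders (order_date, 0);

        -- both the range scan and the IS NULL lookup can then use the same index
        SELECT * FROM orders WHERE order_date BETWEEN :d1 AND :d2;
        SELECT * FROM orders WHERE order_date IS NULL;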

    Read the article

  • Should not a tail-recursive function also be faster?

    - by Balint Erdi
    I have the following Clojure code to calculate a number with a certain "factorable" property. (what exactly the code does is secondary). (defn factor-9 ([] (let [digits (take 9 (iterate #(inc %) 1)) nums (map (fn [x] ,(Integer. (apply str x))) (permutations digits))] (some (fn [x] (and (factor-9 x) x)) nums))) ([n] (or (= 1 (count (str n))) (and (divisible-by-length n) (factor-9 (quot n 10)))))) Now, I'm into TCO and realize that Clojure can only provide tail-recursion if explicitly told so using the recur keyword. So I've rewritten the code to do that (replacing factor-9 with recur being the only difference): (defn factor-9 ([] (let [digits (take 9 (iterate #(inc %) 1)) nums (map (fn [x] ,(Integer. (apply str x))) (permutations digits))] (some (fn [x] (and (factor-9 x) x)) nums))) ([n] (or (= 1 (count (str n))) (and (divisible-by-length n) (recur (quot n 10)))))) To my knowledge, TCO has a double benefit. The first one is that it does not use the stack as heavily as a non tail-recursive call and thus does not blow it on larger recursions. The second, I think is that consequently it's faster since it can be converted to a loop. Now, I've made a very rough benchmark and have not seen any difference between the two implementations although. Am I wrong in my second assumption or does this have something to do with running on the JVM (which does not have automatic TCO) and recur using a trick to achieve it? Thank you.
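
    That result matches expectations: recur mainly buys constant stack space, and in this function the recursion is only as deep as the number of digits, so there is little for it to speed up. The difference shows up when the recursion is deep; a small illustration (hypothetical function, unrelated to factor-9):

        ;; plain self-call: each step adds a stack frame
        (defn sum-down [n acc]
          (if (zero? n) acc (sum-down (dec n) (+ acc n))))

        ;; recur: compiled to a loop, constant stack
        (defn sum-down-recur [n acc]
          (if (zero? n) acc (recur (dec n) (+ acc n))))

        (sum-down 1000 0)            ;; fine at this depth
        (sum-down-recur 1000000 0)   ;; fine; the plain version would throw StackOverflowError here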

    Read the article

  • Does the <script> tag position in HTML affect the performance of the webpage?

    - by Rahul Joshi
    If the script tag is above or below the body in a HTML page, does it matter for the performance of a website? And what if used in between like this: <body> ..blah..blah.. <script language="JavaScript" src="JS_File_100_KiloBytes"> function f1() { .. some logic reqd. for manipulating contents in a webpage } </script> ... some text here too ... </body> Or is this better?: <script language="JavaScript" src="JS_File_100_KiloBytes"> function f1() { .. some logic reqd. for manipulating contents in a webpage } </script> <body> ..blah..blah.. ..call above functions on some events like onclick,onfocus,etc.. </body> Or this one?: <body> ..blah..blah.. ..call above functions on some events like onclick,onfocus,etc.. <script language="JavaScript" src="JS_File_100_KiloBytes"> function f1() { .. some logic reqd. for manipulating contents in a webpage } </script> </body> Need not tell everything is again in the <html> tag!! How does it affect performance of webpage while loading? Does it really? Which one is the best, either out of these 3 or some other which you know? And one more thing, I googled a bit on this, from which I went here: Best Practices for Speeding Up Your Web Site and it suggests put scripts at the bottom, but traditionally many people put it in <head> tag which is above the <body> tag. I know it's NOT a rule but many prefer it that way. If you don't believe it, just view source of this page! And tell me what's the better style for best performance.
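
    For reference, the usual advice is a variant of the third option: load the script at the end of <body>, or keep it in <head> with the defer attribute in browsers that support it, so the 100 KB download does not block rendering of the page content. A small hedged sketch (the src is hypothetical):

        <head>
          <!-- defer: download in parallel, execute only after the document has been parsed -->
          <script src="JS_File_100_KiloBytes.js" defer></script>
        </head>
        <body>
          ..blah..blah..
        </body>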

    Read the article

  • When is Java faster than C++ (or when is JIT faster than precompiled code)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can execute faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations. This is because the compiler can determine the scope of some variables, avoid some conditionals and pull similar tricks at runtime. Could you give an example (or better, several) where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.
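
    The classic example (offered here as an illustration, not an exhaustive answer) is speculative devirtualization: the JIT can see which implementation is actually loaded at run time, inline a virtual call that an ahead-of-time compiler would have to leave as an indirect call, and deoptimize later if that assumption breaks.

        // sketch: while only one Shape implementation has ever been loaded, the JIT can
        // treat s.area() as a monomorphic call and inline it inside the loop
        interface Shape { double area(); }

        final class Circle implements Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        public class Demo {
            static double total(Shape[] shapes) {
                double sum = 0;
                for (Shape s : shapes) sum += s.area();   // devirtualized and inlined under the hood
                return sum;
            }
            public static void main(String[] args) {
                Shape[] shapes = new Shape[1000000];
                for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(1.0);
                System.out.println(total(shapes));
            }
        }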

    Read the article

  • Is SQL DATEDIFF(year, ..., ...) an Expensive Computation?

    - by rlb.usa
    I'm trying to optimize up some horrendously complicated SQL queries because it takes too long to finish. In my queries, I have dynamically created SQL statements with lots of the same functions, so I created a temporary table where each function is only called once instead of many, many times - this cut my execution time by 3/4. So my question is, can I expect to see much of a difference if say, 1,000 datediff computations are narrowed to 100?
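
    DATEDIFF itself is cheap integer date arithmetic; the win from the temp table almost certainly comes from computing each value once per row instead of once per reference, not from avoiding an expensive function, so going from 1,000 calls to 100 is unlikely to change much on its own. One lightweight way to get the "compute once, reference many times" effect without a temp table is CROSS APPLY (hypothetical names, SQL Server 2005+):

        SELECT o.OrderID,
               d.age_years,                      -- reuse the computed value as often as needed
               CASE WHEN d.age_years >= 18 THEN 'adult' ELSE 'minor' END AS bracket
        FROM dbo.Orders AS o
        CROSS APPLY (SELECT DATEDIFF(year, o.BirthDate, GETDATE()) AS age_years) AS d;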

    Read the article

  • Completely remove ViewState for specific pages

    - by Kerido
    Hi everybody, I have a site that features some pages which do not require any post-back functionality. They simply display static HTML and don't even have any associated code. However, since the Master Page has a <form runat="server"> tag which wraps all ContentPlaceHolders, the resulting HTML always contains the ViewState field, i.e.: <input type="hidden" id="__VIEWSTATE" value="/wEPDwUKMjEwNDQyMTMxM2Rk0XhpfvawD3g+fsmZqmeRoPnb9kI=" /> I realize that, when decoded, this string corresponds to the <form> tag which I cannot remove. However, I would still like to remove the ViewState field for pages that only display static HTML. Is it possible?
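
    Setting EnableViewState="false" on the page shrinks the field but typically does not remove it. One approach (a sketch only, not guaranteed across framework versions; control state may still force a small field) is to stop the page from persisting its state at all by overriding the two persistence hooks on System.Web.UI.Page and using that class as the base for the static pages:

        // sketch: a base class for static pages that never writes page state
        public class StaticHtmlPage : System.Web.UI.Page
        {
            protected override object LoadPageStateFromPersistenceMedium()
            {
                return null;                       // nothing was saved, so nothing to load
            }

            protected override void SavePageStateToPersistenceMedium(object state)
            {
                // intentionally empty: do not emit the __VIEWSTATE hidden field
            }
        }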

    Read the article

  • Verifying compiler optimizations in gcc/g++ by analyzing assembly listings

    - by Victor Liu
    I just asked a question related to how the compiler optimizes certain C++ code, and I was looking around SO for any questions about how to verify that the compiler has performed certain optimizations. I was trying to look at the assembly listing generated with g++ (g++ -c -g -O2 -Wa,-ahl=file.s file.c) to possibly see what is going on under the hood, but the output is too cryptic to me. What techniques do people use to tackle this problem, and are there any good references on how to interpret the assembly listings of optimized code or articles specific to the GCC toolchain that talk about this problem?
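
    A couple of things make the listing easier to read (standard GCC/binutils options; worth checking against your versions): generate the assembly directly with -S and -fverbose-asm, which annotates operands with the source variables they came from, or compile with debug info and let objdump interleave source with the disassembly.

        # assembly with variable-name annotations, Intel syntax
        g++ -O2 -S -fverbose-asm -masm=intel file.c -o file.s

        # or: compile with debug info, then interleave source and disassembly
        g++ -O2 -g -c file.c -o file.o
        objdump -d -S file.o > file.lst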

    Read the article

  • Data access from a single table in SQL Server 2005 is too slow

    - by Muhammad Kashif Nadeem
    Following is the script of table. Accessing data from this table is too slow. SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[Emails]( [id] [int] IDENTITY(1,1) NOT NULL, [datecreated] [datetime] NULL CONSTRAINT [DF_Emails_datecreated] DEFAULT (getdate()), [UID] [nvarchar](250) COLLATE Latin1_General_CI_AS NULL, [From] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL, [To] [nvarchar](100) COLLATE Latin1_General_CI_AS NULL, [Subject] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL, [Body] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL, [HTML] [nvarchar](max) COLLATE Latin1_General_CI_AS NULL, [AttachmentCount] [int] NULL, [Dated] [datetime] NULL ) ON [PRIMARY] Following query takes 50 seconds to fetch data. select id, datecreated, UID, [From], [To], Subject, AttachmentCount, Dated from emails If I include Body and Html in select then time is event worse. indexes are on: id unique clustered From Non unique non clustered To Non unique non clustered Tabls has currently 180000+ records. There might be 100,000 records each month so this will become more slow as time will pass. Does splitting data into two table will solve the problem? What other indexes should be there?
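
    Since Subject, Body and HTML are all nvarchar(max), even the listing query that skips Body and HTML can end up reading pages bloated by LOB data, and every wide column selected travels over the wire for all 180,000 rows. Two common directions, sketched with hypothetical names and untested against this table: move the wide columns into a side table that is read only when a single email is opened, and give the listing query a covering index so it never touches the base rows.

        -- 1) vertical split: the hot table stays narrow
        CREATE TABLE dbo.EmailBodies (
            id   int NOT NULL PRIMARY KEY,     -- same id as dbo.Emails
            Body nvarchar(max) NULL,
            HTML nvarchar(max) NULL
        );

        -- 2) covering index for the listing query (Subject left out because it is nvarchar(max))
        CREATE NONCLUSTERED INDEX IX_Emails_Listing
            ON dbo.Emails (Dated DESC)
            INCLUDE (datecreated, [UID], [From], [To], AttachmentCount);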

    Read the article

  • Optimizing vector element swaps using CUDA

    - by Orion Nebula
    Hi all, Since I am new to cuda .. I need your kind help I have this long vector, for each group of 24 elements, I need to do the following: for the first 12 elements, the even numbered elements are multiplied by -1, for the second 12 elements, the odd numbered elements are multiplied by -1 then the following swap takes place: Graph: because I don't yet have enough points, I couldn't post the image so here it is: http://www.freeimagehosting.net/image.php?e4b88fb666.png I have written this piece of code, and wonder if you could help me further optimize it to solve for divergence or bank conflicts .. //subvector is a multiple of 24, Mds and Nds are shared memory _shared_ double Mds[subVector]; _shared_ double Nds[subVector]; int tx = threadIdx.x; int tx_mod = tx ^ 0x0001; int basex = __umul24(blockDim.x, blockIdx.x); Mds[tx] = M.elements[basex + tx]; __syncthreads(); // flip the signs if (tx < (tx/24)*24 + 12) { //if < 12 and even if ((tx & 0x0001)==0) Mds[tx] = -Mds[tx]; } else if (tx < (tx/24)*24 + 24) { //if >12 and < 24 and odd if ((tx & 0x0001)==1) Mds[tx] = -Mds[tx]; } __syncthreads(); if (tx < (tx/24)*24 + 6) { //for the first 6 elements .. swap with last six in the 24elements group (see graph) Nds[tx] = Mds[tx_mod + 18]; Mds [tx_mod + 18] = Mds [tx]; Mds[tx] = Nds[tx]; } else if (tx < (tx/24)*24 + 12) { // for the second 6 elements .. swp with next adjacent group (see graph) Nds[tx] = Mds[tx_mod + 6]; Mds [tx_mod + 6] = Mds [tx]; Mds[tx] = Nds[tx]; } __syncthreads(); Thanks in advance ..
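
    For the sign-flip stage, the nested if/else on the thread index can be replaced by a little index arithmetic so every thread in a warp executes the same instructions. This is only a sketch of that one stage (it assumes the same 24-element grouping and double data as the question), not of the swap logic:

        // sketch: branch-free sign flip for the pattern described above
        __global__ void flip_signs(double *m, int n)
        {
            int tx = blockIdx.x * blockDim.x + threadIdx.x;
            if (tx >= n) return;

            int g   = tx % 24;                 // position within the 24-element group
            // first 12 elements: negate even positions; last 12: negate odd positions
            int neg = (g & 1) ^ (g < 12 ? 1 : 0);
            m[tx]  *= 1.0 - 2.0 * neg;         // multiply by -1 or +1 with no divergent branch
        }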

    Read the article
