Search Results

Search found 97855 results on 3915 pages for 'code performance'.

  • How to find the worst performing queries in MS SQL Server 2008?

    - by Thomas Bratt
    How to find the worst performing queries in MS SQL Server 2008? I found the following example but it does not seem to work: SELECT TOP 5 obj.name, max_logical_reads, max_elapsed_time FROM sys.dm_exec_query_stats a CROSS APPLY sys.dm_exec_sql_text(sql_handle) hnd INNER JOIN sys.sysobjects obj on hnd.objectid = obj.id ORDER BY max_logical_reads DESC Taken from: http://www.sqlservercurry.com/2010/03/top-5-costly-stored-procedures-in-sql.html
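
    For reference, a minimal sketch of a variant that does run on SQL Server 2008: it drops the join to sys.sysobjects (ad-hoc statements have no objectid, so OBJECT_NAME simply returns NULL for them) and orders by total_logical_reads. This is an illustrative rewrite, not the query from the linked article.

        SELECT TOP 5
            qs.total_logical_reads,
            qs.max_elapsed_time,
            OBJECT_NAME(st.objectid, st.dbid) AS object_name,  -- NULL for ad-hoc statements
            st.text AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;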

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records each. Now they have bought a new server: 4 x 6 cores = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache, with 8 SAS 15K HDDs on each. 64-bit OS. I'm wondering what would be the fastest configuration: 1) FB 2.5 SuperServer with a huge buffer of 3,500,000 pages (8192 bytes * 3,500,000 = 29 GB), or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

  • Sql Server 2000 Stored Procedure Prevent Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp with "WITH RECOMPILE" after the EXEC, and I've also tried putting the "WITH RECOMPILE" inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
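
    One low-risk experiment, sketched below with hypothetical table and column names: append a MAXDOP hint to the statements inside the procedure that show Parallelism operators. That forces a serial plan for just those statements and helps confirm whether the parallel plan is really the cause of the slowdown.

        -- Hypothetical statement; the hint applies to this statement only.
        SELECT   CustomerId, SUM(Amount) AS Total
        FROM     dbo.Orders
        GROUP BY CustomerId
        OPTION   (MAXDOP 1);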

  • What's the good of the IDE's auto-generated @Override annotation?

    - by Tony
    I am using Eclipse. When I use the shortcut to generate override implementations, there is an @Override annotation up there. I am using JDK 6, so this is all right, but under JDK 5 this annotation will cause an error. So I want to ask: is this annotation completely useless? Will the compiler do some kind of optimization using this annotation?

  • Why does Go compile quickly?

    - by Evan Kroske
    I've Googled and poked around the Go website, but I can't seem to find an explanation for Go's extraordinary build times. Are they products of the language features (or lack thereof), a highly optimized compiler, or something else? I'm not trying to promote Go; I'm just curious.

  • How to optimize an Oracle query that has TO_CHAR in the WHERE clause for a date

    - by panorama12
    I have a table that contains about 49,403,459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in the format 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do: SELECT bunch,of,stuff,create_date FROM myTable WHERE TO_CHAR(create_date,'MM/DD/YYYY') >= '04/10/2010' AND TO_CHAR(create_date,'MM/DD/YYYY') <= '04/10/2010' I get 529 rows, but in 255.59 seconds, which I guess is because TO_CHAR is applied to EACH record. However, when I do SELECT bunch,of,stuff,create_date FROM myTable WHERE create_date >= to_date('04/10/2010','MM/DD/YYYY') AND create_date <= to_date('04/10/2010','MM/DD/YYYY') then I get 0 results in 0.14 seconds. How can I make this query fast and still get valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.
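
    A sketch of the usual fix, assuming create_date carries a time-of-day component: leave the indexed column untouched and use a half-open range that covers the whole day (the second query above returns 0 rows because both bounds resolve to midnight on 04/10/2010). The extra column names are placeholders for the question's select list.

        SELECT create_date, col1, col2   -- col1/col2 stand in for the real columns
        FROM   myTable
        WHERE  create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
        AND    create_date <  TO_DATE('04/10/2010', 'MM/DD/YYYY') + 1;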

  • How to host 50 domains/sites with common Django code base

    - by Off Rhoden
    I have 50 different websites that use the same layout and code base, but mostly non-overlapping data (regional support sites, not a link farm). Is there a way to have a single installation of the code and run all 50 at the same time? When I have a bug to fix (or a new feature to deploy), I want to deploy ONE time + 1 restart and be done with it. Also: the code needs to know what domain the request is coming to so the appropriate data is displayed.

  • Custom session state provider needed for DB storage?

    - by subt13
    I know this question is related to many others, but please bear with me. I am trying an experiment to store all information in database tables instead of the ASP.NET session. In ASP.NET 4 one can create a custom provider for the session. So, again: should I implement a custom session-state provider, or should I just disable the session (in Web.config)? Thanks! From the comments, it seems my question can be misunderstood. Hopefully this tidbit will help clarify: I don't want to store the session in the database. I want to store information in the database that you would typically store in the session. One reason why: I don't want to carry around a session on every page, especially if that page doesn't care about 90 percent of the information in the session.

  • C++ program runs slow in VS2008

    - by Nima
    I have a program written in C++ that opens a binary file (test.bin), reads it object by object, and puts each object into a new file (it opens the new file, writes into it (append), and closes it). I use fopen/fclose, fread, and fwrite. test.bin contains 20,000 objects. This program runs under Linux with g++ in 1 second, but under VS2008, in both debug and release mode, it takes 1 minute! There are reasons why I don't process them in batches, keep them in memory, or apply any other kind of optimization. I just wonder why it is so much slower under Windows. Thanks,

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling, and found this article which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist without using a pair of queries (i.e., one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if it's already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks.
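
    A minimal sketch, assuming a UNIQUE constraint (or primary key) already exists on the column that must not repeat; the table and column names here are hypothetical. INSERT IGNORE skips rows that would violate the constraint instead of raising an error, and ON DUPLICATE KEY UPDATE is the variant to use when the existing row should be refreshed rather than left alone.

        -- Skip the row silently if the key already exists.
        INSERT IGNORE INTO my_table (unique_col, other_col)
        VALUES ('some-key', 'some-value');

        -- Or refresh the existing row instead of failing.
        INSERT INTO my_table (unique_col, other_col)
        VALUES ('some-key', 'some-value')
        ON DUPLICATE KEY UPDATE other_col = VALUES(other_col);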

  • How can I most accurately calculate the execution time of an ASP.NET page while also displaying it on the page?

    - by henningst
    I want to calculate the execution time of my ASP.NET pages and display it on the page. Currently I'm calculating the execution time using a System.Diagnostics.Stopwatch and then storing the value in a log database. The stopwatch is started in OnInit and stopped in OnPreRenderComplete. This seems to be working quite fine, and it's giving a similar execution time to the one shown in the page trace. The problem now is that I'm not able to display the execution time on the page because the stopwatch is stopped too late in the life cycle. What is the best way to do this?

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. The test below took 11.2 seconds (debug mode), and I am seeing this large startup time in all my tests (basically, creating the first session takes a ton of time). Setup: Windows 2003 SP2 / Oracle 10gR2 latest CPU / ODP.net 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1.

        using System;
        using System.Collections;
        using System.Data;
        using System.Globalization;
        using System.IO;
        using System.Text;
        using NUnit.Framework;
        using System.Collections.Generic;
        using System.Data.Common;
        using NHibernate;
        using log4net.Config;
        using System.Configuration;
        using FluentNHibernate;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time is very small. The resulting entity is really small. Subsequent queries are fine. It seems to be the creation of the first session that is slow. Has anyone else seen something similar?

  • Decent profiler for Windows?

    - by olliej
    Does Windows have any decent sampling (e.g. non-instrumenting) profilers available? Preferably something akin to Shark on Mac OS, although I am willing to accept that I am going to have to pay for such a profiler on Windows. I've tried the profiler in VS Team Suite and was not overly impressed, and was wondering if there were any other good ones. [Edit: Erk, I forgot to say this is for C/C++, rather than .NET -- sorry for any confusion]

  • SQL query: how to translate IN() into a JOIN?

    - by tangens
    I have a lot of SQL queries like this: SELECT o.Id, o.attrib1, o.attrib2 FROM table1 o WHERE o.Id IN ( SELECT DISTINCT Id FROM table1, table2, table3 WHERE ... ) These queries have to run on different database engines (MySql, Oracle, DB2, MS-Sql, Hypersonic), so I can only use common SQL syntax. Here I read that with MySql the IN statement isn't optimized and is really slow, so I want to switch this to a JOIN. I tried: SELECT o.Id, o.attrib1, o.attrib2 FROM table1 o, table2, table3 WHERE ... But this does not take the DISTINCT keyword into account. Question: How do I get rid of the duplicate rows using the JOIN approach?
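
    One portable rewrite, sketched with the question's own placeholders (the elided WHERE conditions stay elided): push the DISTINCT into a derived table and join against it, so each o.Id matches at most one row of the subquery. An EXISTS predicate is the other common portable alternative.

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o
        INNER JOIN (
            SELECT DISTINCT Id
            FROM table1, table2, table3
            WHERE ...
        ) ids ON ids.Id = o.Id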

  • Fastest implementation of the frac function in C#

    - by user349937
    I would like to implement a frac function in C# (just like the one in HLSL here http://msdn.microsoft.com/en-us/library/bb509603%28VS.85%29.aspx), but since it is for a very processor-intensive application I would like the best version possible. I was using something like public float Frac(float value) { return value - (float)Math.Truncate(value); } but I'm having precision problems; for example, for 2.6f the unit test returns Expected: 0.600000024f But was: 0.599999905f I know that I can convert the value to decimal and then at the end convert back to float to obtain the correct result, something like this: public float Frac(float value) { return (float)((decimal)value - Decimal.Truncate((decimal)value)); } But I wonder if there is a better way without resorting to decimals...

  • How does CouchDB perform for a regularly updated dataset?

    - by Ritesh M Nayak
    I am planning on using CouchDB on a project. But as the querying mechanism involves writing views (which are a lot like indexes in a regular RDBMS), I was wondering: if the document database keeps getting updated a lot (a write-heavy database), would CouchDB perform well compared to a regular RDBMS? Or do we have to compact/re-index the system occasionally to make it perform faster?

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large Wordpress blog. Currently it gets around 20k+ pageviews a day, and it's always a struggle to keep the bad boy running quickly - I currently run a vps.net VPS with CentOS 5.3. I am also a Drupal developer by trade, so I love the CMS framework for its versatility and portability (I can take work from one site and implement it on another with great ease). MY QUESTION IS: Which is faster, then: Wordpress 3.x or Drupal 6.x? I'd love to migrate my site to Drupal to be able to roll out new features etc. (which I find awkward to do in Wordpress), but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal - as Dries documents well on his blog - but I'm under no illusions: Drupal can be a real hog. Thanks for any/all help! Please try to avoid server optimization talk unless it pertains to Wordpress or Drupal 6.x specifically; I love to learn more about optimizations, but I do want to sort out which platform is quicker :) p.s. - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc., but assume that isn't an option ;)

  • Slow query with unexpected scan

    - by zerkms
    Hello. I have this query: SELECT * FROM SAMPLE SAMPLE INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER INNER JOIN RESULT RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00' The biggest table here is RESULT, which contains 11.1M records; the other two tables have about 1M each. This query works slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records; it executes in 300 ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by an index), then everything becomes fast in the first case too. This points to HDD I/O issues, but it doesn't explain why the plan changes. So, any ideas?
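
    One direction to try, assuming SQL Server 2005 or later and that SELECT * can be narrowed to the columns the application actually needs (the index name and included columns below are hypothetical): a nonclustered index on RESULT.TEST_NUMBER that INCLUDEs those columns gives the optimizer a covering, seek-able path, so the wider date range no longer tips it into scanning the whole 11M-row clustered index.

        CREATE NONCLUSTERED INDEX IX_RESULT_TEST_NUMBER_COVERING
            ON RESULT (TEST_NUMBER)
            INCLUDE (RESULT_VALUE, RESULT_STATUS);   -- hypothetical column names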

  • Which is more efficient regular expression?

    - by Vagnerr
    I'm parsing some big log files and have some very simple string matches, for example if(m/Some String Pattern/o){ #Do something } It seems simple enough, but in fact most of the matches I have could be anchored to the start of the line, although the match would be "longer", for example if(m/^Initial static string that matches Some String Pattern/o){ #Do something } Obviously this is a longer regular expression and so more work to match. However, I can use the start-of-line anchor, which would allow an expression to be discarded as a failed match sooner. It is my hunch that the latter would be more efficient. Can anyone back me up/shoot me down :-)

  • Are conditional subqueries optimized away?

    - by Tobias Schulte
    I have a table foo and a table bar, where each foo might have a bar (and a bar might belong to multiple foos). Now I need to select all foos with a bar. My SQL looks like this: SELECT * FROM foo f WHERE [...] AND ($param IS NULL OR (SELECT ((COUNT(*))>0) FROM bar b WHERE f.bar = b.id)) with $param being replaced at runtime. The question is: will the subquery be executed even if $param is null, or will the DBMS optimize the subquery out?
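
    For comparison, a sketch of an EXISTS-based variant; the $param placeholder and the elided WHERE clause are kept from the question. EXISTS can stop at the first matching bar row, and putting the $param IS NULL test first gives the optimizer the chance to skip the subquery entirely, although whether it actually does so is engine-specific, which is exactly what the question asks.

        SELECT *
        FROM foo f
        WHERE [...]
          AND ($param IS NULL
               OR EXISTS (SELECT 1 FROM bar b WHERE b.id = f.bar))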

  • Resources for learning how to better read code

    - by rsteckly
    Hi, I recently inherited a large codebase and am having to read it. The thing is, I've usually been the dev starting a project. As a result, I don't have a lot of experience reading code. My reaction to having to read a lot of code is, well, umm to rewrite it. But I need to bring myself up to speed quickly and build on top of an existing system. Do other people have techniques they've learned to absorb a code base? At this point, I'm just reading through the code. I've tried generating UML diagrams using UModel. They're so big they won't print cleanly and when I zoom in, I really do lose the perspective of seeing all the relationships. How have other people dealt with this problem?
