Search Results

Search found 105847 results on 4234 pages for 'sql server performance'.

Page 216/4234

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that over time (a few days) shows significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). The search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process, with PostgreSQL running on Windows and a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and a rebuild to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information - table and index:

    CREATE TABLE icl_contacts
    (
      id bigint NOT NULL,
      campaignfqname character varying(255) NOT NULL,
      currentstate character(16) NOT NULL,
      xmlscheduledtime character(23) NOT NULL,
      ... 25 or so other fields, most of them fixed or varying character fiel ...
      CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
    )
    WITH (OIDS=FALSE);

    ALTER TABLE icl_contacts OWNER TO postgres;

    CREATE INDEX icl_contacts_idx
      ON icl_contacts
      USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Explain analyze:

    Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
      -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
           Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgreSQL is managing the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100 million row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?

    Read the article

  • Windows SBS 2008 to Windows Server 2012 migration

    - by StefanGrech
    I am in the process of upgrading my Windows SBS 2008 server running Exchange, Active Directory and a file server role to Windows Server 2012 Essentials. Since Windows Server 2012 Essentials does not include Exchange, I am looking to migrate Active Directory and the file server to Windows Server 2012 Essentials, and then run a separate virtual machine with Windows Server 2012 Standard and Exchange 2013. My question is: what should I do first? Migrate AD and the file server to Windows Server 2012 Essentials and then, after that migration is finished, do a local move of the mailboxes from SBS 2008 to the Windows Server 2012 Standard machine running Exchange 2013? Or should it be the other way round?

    Read the article

  • WPF: Tips for better performance

    - by viky
    I am working on a WPF application that uses a TreeView. Each node represents a different datatype; these datatypes have properties defined and use a DataTemplate to show those properties. My application reads from XML and builds the tree accordingly. My problem is that loading the tree is too slow, and I want to know which tricks will help me improve the performance of my (or any) WPF application. Edit: please give me some tips for better performance in WPF! I am using the WPF profiler but it is not much help to me.
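
    For illustration only, a hedged sketch (not from the question): one commonly suggested trick for a TreeView that is slow to load many nodes is to turn on UI virtualization, so item containers are only created for nodes that are actually on screen. The helper and control names below are made up.

    using System.Windows.Controls;

    public static class TreeViewTuning
    {
        // Ask WPF to create item containers only for visible nodes and to
        // recycle them while scrolling, rather than materializing every node
        // up front. (Hypothetical helper; "tree" is any TreeView instance.)
        public static void EnableVirtualization(TreeView tree)
        {
            VirtualizingStackPanel.SetIsVirtualizing(tree, true);
            VirtualizingStackPanel.SetVirtualizationMode(tree, VirtualizationMode.Recycling);
        }
    }

    The same two attached properties can also be set directly on the TreeView element in XAML.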

    Read the article

  • Performance Improvement: Alternative for array_flip function.

    - by Rachel
    Is there any way I can avoid using array_flip, to optimize performance? I am doing a select statement against the database, preparing the query, executing it, and storing the data as an associative array in $resultCollection. Then I have an array $op, and for each element in $resultCollection I store its outputId in $op[], as evident from the code. I have explained the code below, so my question is how I can achieve the same result without using array_flip, because I want to improve performance.

    $resultCollection = $statement->fetchAll(PDO::FETCH_ASSOC);
    $op = array();

    // Loop through the result collection and store each unicaOfferId into the op array.
    foreach ($resultCollection as $output) {
        $op[] = $output['outputId'];
    }

    // Here the op array has keys 0, 1, 2, ... and values are the ids (which I am interested in).
    // Flip the op array to get the unica offer ids as keys.
    $op = array_flip($op); // Doing a flip to get the id as key.

    foreach ($ft as $Id => $Off) {
        $ft[$Id]['is_set'] = isset($op[$Id]);
    }

    Read the article

  • Declarative JDOQL vs Single-String JDOQL : performance

    - by DrDro
    When querying with JDOQL, is there a performance difference between using the declarative version and the single-string version? Example from the JDOQL docs:

    // Declarative JDOQL:
    Query q = pm.newQuery(org.jpox.Person.class, "lastName == \"Jones\" && age < age_limit");
    q.declareParameters("double age_limit");
    List results = (List)q.execute(20.0);

    // Single-string JDOQL:
    Query q = pm.newQuery("SELECT FROM org.jpox.Person WHERE lastName == \"Jones\"" +
        " && age < :age_limit PARAMETERS double age_limit");
    List results = (List)q.execute(20.0);

    Other than performance, are there any reasons why one is better to use than the other, or is it just about which one we feel more comfortable with?

    Read the article

  • Important question about LINQ to SQL performance in high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data at the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

    UPDATE table SET value = value + 1 WHERE ID = @id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken at the beginning, moved to the class, changed and then saved:

    Stats.RegisteredUsers++;
    Db.SubmitChanges();

    Let's say there were 100 000 users. LINQ will say "let it be 100 001" instead of "let it be increased by 1". But if the value has already been increased by someone else (which happens on my site all the time), then LINQ will say "oops, this value is already 100 001, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100 002. Like I said, this happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
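
    One common workaround, sketched here for illustration (the table, column and repository names are assumptions, not the poster's code): keep LINQ to SQL for reads, but push the increment back to the server as a single atomic UPDATE through DataContext.ExecuteCommand, so concurrent requests cannot overwrite each other's counts.

    using System.Data.Linq;

    public class StatsRepository
    {
        private readonly DataContext db;   // hypothetical LINQ to SQL context

        public StatsRepository(DataContext db)
        {
            this.db = db;
        }

        // Runs "value = value + 1" on the server in one statement, so the
        // read-modify-write race described above never happens.
        public void IncrementRegisteredUsers(int statsId)
        {
            db.ExecuteCommand(
                "UPDATE Stats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
                statsId);
        }
    }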

    Read the article

  • SQL Server backup and moving the backup file: How to cope with file permissions?

    - by Stefan Steinegger
    With our product we ship a simple backup tool for the SQL Server database. The tool should just make a full backup to, and restore from, any folder. Of course, the user (usually an administrator) needs permission to write to the target folder. To avoid the problem of not being able to back up to a network drive, I write the backup to a temporary file in the SQL Server backup directory and then move it to the target folder. This requires permission to delete the temporary file from the SQL Server backup folder. Restore works the same way in the other direction. This seemed to work fine until someone tested it on Vista, where the user does not have write access to the backup folder by default. There are many ways to solve this, but none of them seems really nice. One solution would be to find another folder for the temporary file; both the SQL Server user and the administrator performing the backup would need read and write permissions there. Is there such a directory? Any other ideas? Thanks a lot.

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities and so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, hyper-threaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Since getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow: I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s (Intel Core 2 Duo P8700 running at 2.54 GHz with 8 GB of memory), and the experience on these laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment. Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html):

    3100M = 111th (E6510)
    FX 2700M = 47th (Precision M6400)
    Radeon HD 5870 = 8th (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.

    So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • What config files need to be transferred when migrating Apache vhosts from an old SUSE server to a new SUSE server?

    - by jarus
    I have an old server running SUSE that hosts numerous websites under the same IP, and I am trying to migrate the websites and all the contents of the old SUSE server to a new server with openSUSE 12.1. I have transferred "/srv/www/vhosts", "/etc/apache2/vhosts.d", "/etc/apache2/httpd.conf", "/etc/apache2/listen.conf" and "/etc/apache2/default-server.conf". I have transferred all the database files as well. I am trying to replace the old server with the new server; I tried changing the IP address to the old server's IP address but it's not working. What files do I need to transfer and what do I need to do to get the new server hosting the websites in place of the old server? Please, any help will be greatly appreciated.

    Read the article

  • Performance concern when using LINQ "everywhere"?

    - by stiank81
    After upgrading to ReSharper 5 it gives me even more useful tips on code improvements. One I see everywhere now is a tip to replace foreach statements with LINQ queries. Take this example:

    private Ninja FindNinjaById(int ninjaId)
    {
        foreach (var ninja in Ninjas)
        {
            if (ninja.Id == ninjaId)
                return ninja;
        }
        return null;
    }

    This is suggested to be replaced with the following LINQ query:

    private Ninja FindNinjaById(int ninjaId)
    {
        return Ninjas.FirstOrDefault(ninja => ninja.Id == ninjaId);
    }

    This looks all fine, and I'm sure it's no problem performance-wise to replace this one foreach. But is it something I should do in general? Or might I run into performance problems with all these LINQ queries everywhere?
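
    A hedged side note with a sketch (the Ninja shape is guessed from the snippet above): both versions are linear scans, so the ReSharper rewrite is about readability rather than speed. If FindNinjaById turns out to be hot, the bigger win is usually indexing the collection once, for example with a Dictionary.

    using System.Collections.Generic;
    using System.Linq;

    public class Ninja
    {
        public int Id { get; set; }
    }

    public class NinjaRegistry
    {
        private readonly Dictionary<int, Ninja> byId;

        public NinjaRegistry(IEnumerable<Ninja> ninjas)
        {
            // Build the index once; lookups are then O(1) instead of O(n).
            byId = ninjas.ToDictionary(ninja => ninja.Id);
        }

        public Ninja FindNinjaById(int ninjaId)
        {
            Ninja ninja;
            return byId.TryGetValue(ninjaId, out ninja) ? ninja : null;
        }
    }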

    Read the article

  • Performance monitoring tools for a multi-tenant web application

    - by Anton
    We need to monitor the performance of our Java web app, and we are looking for tools which can help us with this task. The major difficulty is that we are a SaaS provider with a multi-tenant server architecture, with hundreds of customers running on the same hardware. So far we have tried commercial products like DynaTrace and Coradinat, but unfortunately they haven't got the job done so far. What we need is a simple report which would tell us whether we had performance problems on each customer site in a specified period of time. Mostly it will be response time per customer, but we will also need some more specifics based on the URLs. Please let me know if you have any experience with setting up such monitoring. Thanks!

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assuming I do the following:

    1. Use ILMerge to merge "Foo.Bar.dll" with "Foo.Baz.dll" into "Foo1.dll".
    2. Create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code), and compile this into "Foo2.dll".

    Will there be any differences in the performance of Foo1.dll and Foo2.dll when using their functionality from an external project? If so, how significant is this performance difference, and is it a one-off cost (on load?) or an ongoing difference? Are there any other pros or cons with either approach?

    Read the article

  • Wildcards in T-SQL LIKE vs. ASP.NET parameters

    - by Vinzcent
    In my SQL statement I use wildcards, but when I try to select something, it never returns anything, while the query works fine when I execute it in Microsoft SQL Server Management Studio. What am I doing wrong?

    Click handler:

    protected void btnTitelAuteur_Click(object sender, EventArgs e)
    {
        cvalTitelAuteur.Enabled = true;
        cvalTitelAuteur.Validate();
        if (Page.IsValid)
        {
            objdsSelectedBooks.SelectMethod = "getBooksByTitleAuthor";
            objdsSelectedBooks.SelectParameters.Clear();
            objdsSelectedBooks.SelectParameters.Add(new Parameter("title", DbType.String));
            objdsSelectedBooks.SelectParameters.Add(new Parameter("author", DbType.String));
            objdsSelectedBooks.Select();
            gvSelectedBooks.DataBind();
            pnlZoeken.Visible = false;
            pnlKiezen.Visible = true;
        }
    }

    In my data access layer:

    public static DataTable getBooksByTitleAuthor(string title, string author)
    {
        string sql = "SELECT 'AUTHOR' = tblAuthors.FIRSTNAME + ' ' + tblAuthors.LASTNAME, tblBooks.*, tblGenres.GENRE " +
            "FROM tblAuthors INNER JOIN tblBooks ON tblAuthors.AUTHOR_ID = tblBooks.AUTHOR_ID INNER JOIN tblGenres ON tblBooks.GENRE_ID = tblGenres.GENRE_ID " +
            "WHERE (tblBooks.TITLE LIKE '%@title%');";
        SqlDataAdapter da = new SqlDataAdapter(sql, GetConnectionString());
        da.SelectCommand.Parameters.Add("@title", SqlDbType.Text);
        da.SelectCommand.Parameters["@title"].Value = title;
        DataSet ds = new DataSet();
        da.Fill(ds, "Books");
        return ds.Tables["Books"];
    }
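
    For illustration, a sketch of the usual fix (connection handling simplified and the column list trimmed; other names kept from the question): inside a string literal, '%@title%' is treated as literal text rather than a parameter, so the wildcards have to travel in the parameter value itself (or be concatenated in SQL as '%' + @title + '%').

    using System.Data;
    using System.Data.SqlClient;

    public static class BookDal
    {
        public static DataTable GetBooksByTitle(string title, string connectionString)
        {
            // The parameter stands alone in the SQL; the wildcards go into its value.
            const string sql = "SELECT * FROM tblBooks WHERE tblBooks.TITLE LIKE @title;";

            using (var da = new SqlDataAdapter(sql, connectionString))
            {
                da.SelectCommand.Parameters.Add("@title", SqlDbType.NVarChar, 255);
                da.SelectCommand.Parameters["@title"].Value = "%" + title + "%";

                var ds = new DataSet();
                da.Fill(ds, "Books");
                return ds.Tables["Books"];
            }
        }
    }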

    Read the article

  • Apache modules: C module vs mod_wsgi python module - Performance

    - by Gopal
    Hi. A client of ours is asking us to implement a module in C in the Apache web server, for performance reasons. The module should handle RESTful URIs, access a database and return results in JSON format. Many people here have recommended Python with mod_wsgi instead - but for simplicity-of-programming reasons. Can anyone tell me whether there is a significant difference in performance between the mod_wsgi Python solution and an Apache C module? Any anecdotes? Pointers to a study posted online?

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure that reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report - 5/1/2010 to now - over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010," but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc - "5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

    CREATE PROCEDURE GetFooData
        @StartDate datetime,
        @EndDate datetime
    AS
    SELECT *
    FROM Foo
    WHERE LogDate >= @StartDate
      AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

    EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

    EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
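
    A minimal C# sketch of the DateTime.Today.AddDays(1) idea, for illustration (connection handling and class names are assumptions): the end boundary is snapped to midnight tomorrow, so every call made on the same day sends an identical @EndDate value while still covering all of today's rows.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class FooReport
    {
        public static DataTable GetFooData(DateTime startDate, string connectionString)
        {
            // Stable per-day upper bound instead of the ever-changing DateTime.Now.
            DateTime endDateExclusive = DateTime.Today.AddDays(1);

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("GetFooData", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
                cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = endDateExclusive;

                var table = new DataTable();
                using (var da = new SqlDataAdapter(cmd))
                {
                    da.Fill(table);
                }
                return table;
            }
        }
    }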

    Read the article

  • Visual Studio add-in for performance benchmarking

    - by chiccodoro
    I'd like to measure the performance of some code blocks in my C# WinForms application. In particular, I want to measure performance regression/improvement after some restructuring of the code. So far I've seen System.Diagnostics.Stopwatch. However, I want to avoid writing measuring code into my classes; I would rather separate measuring from the actual code. For debugging, you can set breakpoints on several code lines and "jump" from one to the next with "Continue Execution"; I imagine something similar for measuring: mark two lines of code and have Visual Studio display the time elapsing from one to the next. Is there any feature/add-in in that direction?
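
    As a hedged sketch of the do-it-yourself route (this is not a Visual Studio feature): a small disposable wrapper around Stopwatch keeps the timing plumbing out of the classes under test, and a single using block marks the start and end of the region being measured.

    using System;
    using System.Diagnostics;

    public sealed class TimedScope : IDisposable
    {
        private readonly string label;
        private readonly Stopwatch watch = Stopwatch.StartNew();

        public TimedScope(string label)
        {
            this.label = label;
        }

        public void Dispose()
        {
            watch.Stop();
            // Shows up in the Output window while debugging.
            Debug.WriteLine(label + ": " + watch.ElapsedMilliseconds + " ms");
        }
    }

    // Usage (the measured call is hypothetical):
    // using (new TimedScope("RebuildTree"))
    // {
    //     RebuildTree();
    // }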

    Read the article

  • Hidden features of PL/SQL

    - by Adam Paynter
    In light of the "Hidden features of..." series of questions, what little-known features of PL/SQL have become useful to you? Edit: Features specific to PL/SQL are preferred over features of Oracle's SQL syntax. However, because PL/SQL can use most of Oracle's SQL constructs, they may be included if they make programming in PL/SQL easier.

    Read the article

  • What's the performance penalty of weak_ptr?

    - by Kornel Kisielewicz
    I'm currently designing an object structure for a game, and the most natural organization in my case became a tree. Being a great fan of smart pointers, I use shared_ptrs exclusively. However, in this case the children in the tree will need access to their parent (for example, beings on a map need to be able to access map data -- ergo the data of their parent). The direction of ownership is of course that a map owns its beings, so it holds shared pointers to them. To access the map data from within a being, we need a pointer to the parent -- the smart-pointer way is to use a reference, ergo a weak_ptr. However, I once read that locking a weak_ptr is an expensive operation -- maybe that's not true anymore -- but considering that the weak_ptr will be locked very often, I'm concerned that this design is doomed to poor performance. Hence the question: What is the performance penalty of locking a weak_ptr? How significant is it?

    Read the article

  • How does DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well.

    Given example: Let's say we have a stored procedure that reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report - 5/1/2010 to now - over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010, but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Example proc and execution (if that helps to understand):

    CREATE PROCEDURE GetFooData
        @StartDate datetime,
        @EndDate datetime
    AS
    SELECT *
    FROM Foo
    WHERE LogDate >= @StartDate
      AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

    EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

    EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.

    Read the article
