Search Results

Search found 6231 results on 250 pages for 'slow diver'.


  • web server and database server distance

    - by Erkan
    I want to separate my setup into two parts: a web server and a DB server. My web server is located in Turkey and my DB server is located in Germany. I can't move my web server, because my agreement is tied to my IP addresses. I want to host the DB server in Germany because it's cheaper there than in Turkey. But I have a problem: when a DB action is called, the request first goes to IIS in Turkey, and IIS then goes to the DB server in Germany. That round trip is too far, and responses come back slowly. Any ideas? Is it wrong to put such a large distance between the web server and the DB server, or are there solutions to this problem?

    Read the article

  • monitor internet bandwidth on LAN

    - by Dimal Chandrasiri
    I'm on an office network where around 12 PCs are connected to a switch and then to a Prolink router. The Internet connection is very slow, and I'm the one who manages all the computers. I want to monitor the bandwidth usage of each computer on the LAN from my own machine; I cannot install any software on the client computers, only on mine. I tried Wireshark, but it only shows my own network usage, so the data I get from it isn't useful. Is there any software I can run on just my PC to get bandwidth details for the other computers? I'm on a Windows 7 x64 PC with admin privileges. Thank you.

    Read the article

  • Put Google Chrome, AVG on a flash drive

    - by duncan12
    I often install programs such as LibreOffice on other people's computers, and when they only have a dial-up connection, the download takes forever. So instead I keep the installer on a flash drive and install from there. This works fine for LibreOffice. However, for some programs, what you download is only a downloader: when you download ChromeSetup and open it, an installer window opens which then downloads Chrome itself. Dial-up users then see that it will take 4 hours. Another example is AVG Free antivirus: you download a stub installer, which when opened actually downloads AVG. That doesn't work well for dial-up users either. How do I put the complete installers for programs like Chrome and AVG on a flash drive, for fast installation over others' slow connections?

    Read the article

  • Use puppet to make changes to ip route and sysctl

    - by Quintin Par
    I have two changes to ip route and sysctl that disable TCP slow start. Here's how I do it: run ip route show and make a note of the line starting with default. Pick up the IP from the default line and run sudo ip route change default via $ip_address dev eth0 initcwnd 12, then sudo sysctl -w net.ipv4.tcp_slow_start_after_idle=0. How can I create a Puppet script out of this, one that can be deployed to many machines of the same type (CentOS 6)? Edit: added a bounty to get a working example for sudo ip route change default via $ip_address dev eth0 initcwnd 12.
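
    A minimal sketch of a manifest for this, assuming the default route sits on eth0 and using guarded exec resources (the resource names and the sh -c wrapping are illustrative, not a tested module):

        # Set the running sysctl value; the guard keeps the exec idempotent.
        exec { 'disable-tcp-slow-start-after-idle':
          command => 'sysctl -w net.ipv4.tcp_slow_start_after_idle=0',
          unless  => 'grep -qx 0 /proc/sys/net/ipv4/tcp_slow_start_after_idle',
          path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
        }

        # Re-apply the current default route with initcwnd 12. The command
        # substitution needs a shell, hence the sh -c wrapper.
        exec { 'default-route-initcwnd':
          command => 'sh -c "ip route change $(ip route show | grep ^default) initcwnd 12"',
          unless  => 'sh -c "ip route show | grep ^default | grep -q initcwnd"',
          path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
        }

    Note that this only changes running state: persisting the sysctl across reboots would also mean managing /etc/sysctl.conf, and the route change is lost on a network restart, so it may belong in /etc/rc.local or an ifup hook as well.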

    Read the article

  • Performance problems - Jira running on Ubuntu over VMware ESX 4.0 maxing out all 4 vCPUs.

    - by Jack T
    We are running Jira on a box under VMware ESX 4.0, and performance is variable, to say the least. The physical box has 12 GB RAM and 4x Xeon 2.26 GHz CPUs. vCenter tells us the physical CPUs are never maxed out, and RAM is fine too. Yet when we issue a request to the host, it sometimes maxes out all 4 vCPUs. Sometimes it's quick, sometimes very, very slow, and there doesn't seem to be a pattern. Any ideas?

    Read the article

  • Why does Exim put emails on hold if there are frozen messages in the queue?

    - by user51932
    Hi, I have a CentOS server with cPanel working as an SMTP server; it currently uses 20 different hostnames and IP addresses to deliver email for an email newsletter service. However, it's extremely slow at sending: about 10 emails per minute, which I track by running the exim -bpc command. What could be causing this? One thing I suspect is that there are frozen messages in the queue, which slow down sending until they are dealt with and put new messages on hold. What are the most common reasons a message gets frozen? Also, would it be more efficient to use 20 small VPSs to send out email, rather than one large VPS with the 20 hostnames and IPs on it?
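
    For the frozen-message theory, the queue is easy to inspect and clear with Exim's standard tools; a quick sketch (commands assume a stock Exim install like cPanel's):

        # count all queued messages, then just the frozen ones
        exim -bpc
        exiqgrep -z -c

        # list the IDs of frozen messages and remove them from the queue
        exiqgrep -z -i | xargs --no-run-if-empty exim -Mrm

        # force a delivery attempt for everything that remains
        exim -qff

    Common reasons messages freeze include bounces that themselves bounce, unroutable recipient domains, and repeated temporary failures that exhaust Exim's retry rules.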

    Read the article

  • Hosting solution for startup social app?

    - by happyhardik
    We are in the process of building a social app. Initially we will have only a few thousand users, which will grow with time. Which hosting would be best suited for this purpose: grid, cloud, or VPS? It has to be economical, as we are just starting up. The hosting also needs to be robust, so that if our user base suddenly grows, the app won't break or slow down. Our app is in PHP and MySQL. Sorry if the question is posted in the wrong place, and thanks for your time. :)

    Read the article

  • Is it possible for a router to "go bad" with time?

    - by JQAn
    I've been having problems with my internet connection over the past few weeks (intermittent disconnections, slow transfers, etc.), and my provider keeps telling me that the problem is not on their end. I have a cable modem and a wifi router (the router was not provided by them). The router is quite old (a DIR-300), so I'm starting to wonder whether it could be the issue and whether I should replace it. Is it possible that it's the cause? Can routers become so worn with age that they cause intermittent interruptions of service? If I reset the modem and the router, they work fine for a few hours, but the problems start again after a while.

    Read the article

  • Is it bad to put your computer in sleep mode every time?

    - by Ivo Flipse
    Often I have a lot of stuff open and don't feel like shutting down my laptop, so I just put it in sleep mode when I'm carrying it around. But I have no idea whether this has any disadvantages. So my question is: is it bad to put your computer in sleep mode every time? Things I'm wondering: Should I turn off my computer every once in a while? Will continuous use of sleep mode slow down my system in any way? Are there any bad side effects in the long term? Any thoughts? FYI, I'm using Windows 7 on a laptop.

    Read the article

  • MySQL showing some high spikes

    - by user111196
    We have one MySQL server, and access to it suddenly became slow out of nowhere. I read somewhere that it might be due to the size of /var, is that right? I'm not sure how to find the root cause; the CPU is at nearly 150%. Any pointers? Here is what I have tried so far, du -sh * in /var:

        4.0K  account
        67M   cache
        4.0K  cvs
        16K   db
        8.0K  empty
        4.0K  games
        4.0K  gdm
        148G  lib
        4.0K  local
        16K   lock
        624M  log
        0     mail
        4.0K  nis
        4.0K  opt
        4.0K  preserve
        400K  run
        298M  spool
        4.0K  tmp
        359M  www
        12K   yp
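
    Given that the 148G under lib very likely includes the MySQL data directory, a couple of quick checks might narrow this down (a sketch; paths assume a stock CentOS layout, and the slow-log settings need MySQL 5.1 or later):

        # is the data directory what's filling /var/lib?
        du -sh /var/lib/mysql

        # what is the server actually busy doing right now?
        mysqladmin -u root -p processlist

        # capture anything slower than one second for later analysis
        mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 1;"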

    Read the article

  • How to contact an Email Administrator at a large company?

    - by Brett G
    I've had to deal with larger companies (clients of ours) before, when we have had e-mail issues with them. Most of the time it's because our messages have been flagged as spam, although right now it's because one of their systems has gone haywire and is repeatedly sending us e-mails. Anyway, my question is: how do you get in touch with the email administrator? I've found that at some larger companies, unless you have the name of the person, the receptionist will refuse to connect you (I imagine they're acting as gatekeepers against salespeople asking for "the person responsible for your printers"). I know we could always try our contact at the company, but that can be slow and difficult, and these issues are usually time-sensitive.
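
    One technical avenue worth trying before the phone calls is the domain's published contacts; a sketch, with example.com as a placeholder:

        # who handles mail for the domain
        dig +short MX example.com

        # registration records often list an abuse or technical contact
        whois example.com | grep -iE 'abuse|tech'

    RFC 2142 also reserves the postmaster@ mailbox for exactly this kind of issue, so mail to postmaster@example.com is a reasonable first attempt.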

    Read the article

  • Set up a local file server for two networks

    - by rzlines
    Hi, I would like to set up a local Linux file server (using CentOS and Samba) that can be accessed by two independent local networks in the same house. I have two networks on the same floor, each with Windows 7 computers, but the networks are split because we have two internet connections. How do I go about this? Additional info: currently both networks use Dropbox to send files to each other, but that goes via the Internet and hence is slow. I would like to achieve the same thing locally to increase the speed.
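
    Assuming the server gets a NIC (or at least an address) in each network, Samba can simply listen on both; a minimal sketch of the relevant smb.conf pieces (the interface names and share path are placeholders):

        [global]
           workgroup = WORKGROUP
           # listen on both LANs; eth0/eth1 are assumed interface names
           interfaces = eth0 eth1
           bind interfaces only = yes

        [shared]
           path = /srv/share
           read only = no

    The Windows 7 machines on either network can then reach the share at \\servername\shared without any traffic leaving the house.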

    Read the article

  • Accessing an external MySQL server through an "SSH tunnel" - any drawbacks?

    - by Max
    In an upcoming project I have a two-server setup: one is the application server, and the other, which already exists, runs the MySQL server with the databases I need to access. I contacted the admin of the MySQL server, and the only way I can access the remote databases is via an "SSH tunnel". I have never done this before and had never heard of it until now, so my question: are there any drawbacks, e.g. performance-wise? Isn't it rather slow compared to accessing the MySQL server directly on its default port?
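
    Mechanically, the tunnel is one command on the application server; a sketch, with the hostnames and the local port 3307 as placeholders:

        # forward local port 3307 to MySQL on the remote machine,
        # backgrounded (-f) with no remote command (-N)
        ssh -f -N -L 3307:127.0.0.1:3306 tunneluser@dbserver.example.com

        # the application then connects to the tunnel as if it were local
        mysql --host=127.0.0.1 --port=3307 --user=appuser -p

    The overhead is SSH's encryption plus the tunnel hop; whether that is noticeable usually depends far more on the network latency between the two servers than on the tunnel itself.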

    Read the article

  • MySQL too many connections, but not visible in the full process list

    - by user968898
    I've got a big problem. I guess it's something like a DoS attack, but I am not sure. Since this morning, my database has been very slow, and 7 times out of 10 it gives me a "too many connections" error, or tries to log in with the www-data user (as a follow-on from the too-many-connections error?). I tried to locate the issue from the mysql command line with SHOW FULL PROCESSLIST, but it gives me just one row back, and that's me. What can I do about this? The websites are still running OK, but I guess MySQL is being overused.
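
    A couple of checks that might narrow this down. Note that without the PROCESS privilege, SHOW FULL PROCESSLIST only shows your own threads, which would explain the single row; run it as root or another privileged user:

        SHOW FULL PROCESSLIST;

        -- how close the server is to its connection ceiling
        SHOW VARIABLES LIKE 'max_connections';
        SHOW STATUS LIKE 'Threads_connected';
        SHOW STATUS LIKE 'Max_used_connections';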

    Read the article

  • Best tool for writing a Programming Book?

    - by walkthedog
    Well, this is not directly programming-related, but a friend of mine wants to write a book about programming. He asked me if I knew of good software for this, because Word crashes ten times a day on his machine, and OpenOffice feels clunky and slow. Neither seems to have useful support for including code listings (examples) with syntax highlighting, or at least some support for inserting code: line numbers, arrows indicating wrapped lines, and so on. LaTeX is out of the question for him, since he finds it incredibly hard to use and a mess for tables. Maybe some IT authors here can share what tools they use? That would be great!

    Read the article

  • Website project takes a long time to load in VS.NET 2008

    - by rm
    After getting a new computer (a lot faster than the one I used before), my solutions take A LONG TIME (3-4 minutes) to load in VS.NET 2008. I only have 2 projects in the solution: a DB project and a website project (from IIS). If I remove the website project from the solution, the solution loads instantly; when I add that same website project back to the open solution, it also loads instantly. The only time it's slow is when I open a solution that references my website project. I had the exact same setup (as far as VS is concerned) on my old box and never had this problem. Any ideas?

    Read the article

  • View web server from internal and external networks

    - by Just4Net
    We have a web server hosted out of our office, and it works fine when people access the site from outside. The problem is that when people inside the office go to the website (www.example.com), it's very slow, because the traffic goes out over the internet and comes back in. Our LAN is Windows with Active Directory, and our web server is CentOS with its own internet connection. How can I make users in the office reach the website over the local network, instead of going out through one internet connection and back in through the other?
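
    The usual fix is split-horizon DNS: have the Active Directory DNS server answer www.example.com with the web server's LAN address for internal clients, while external DNS keeps the public address. As a quick test on a single client, a hosts-file entry does the same thing (the LAN address below is a placeholder):

        # C:\Windows\System32\drivers\etc\hosts on a LAN client
        192.168.1.10    www.example.com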

    Read the article

  • Site faster on 1 core/1 GB RAM than on 8 cores/32 GB RAM, wth?

    - by xtra
    Both sites are US-based; I'm located in Europe. http://198.105.209.13/ <== the site that loads tons faster, on ONLY 1 core and 1 GB RAM (CentOS VPS). http://108.61.35.219/ <== this site is on my dedicated server with an i7-3770 CPU and 32 GB RAM (Windows), and it's slow as hell. Even if the faster site is on Linux, that shouldn't make such a big difference in page load time, seriously. Does anyone know what the hiccup on the dedicated server could be? It's just unbelievable to me that a site loads faster on a 1-core CPU than on my dedicated server. Note that the site is pretty much the only thing running on the dedicated server, and it has a 1 Gbps connection.

    Read the article

  • How to make Googlebot crawl a page? [closed]

    - by mamadum
    What is the best way to get Googlebot to crawl a page? I've set up Google Analytics and registered the site with Google Webmaster Tools. I've done a great deal of SEO work on the website with keywords and titles, and I took care of the microdata. I submitted the site anonymously, and a couple of days ago I successfully fetched the site and submitted it for indexing, and still nothing. The last time Googlebot visited the site was almost a month ago, and the indexed content is now obsolete. Am I missing something, or is it just a slow process?
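
    One more thing worth verifying: an XML sitemap referenced from robots.txt gives the crawler an explicit list of URLs to fetch. A minimal sketch, with the domain as a placeholder:

        # robots.txt at the site root
        User-agent: *
        Disallow:
        Sitemap: http://www.example.com/sitemap.xml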

    Read the article

  • Multiple monitors (3) on a notebook using the HDMI and VGA ports, with a single graphics card

    - by user34427
    I have an HP dv9830us laptop with a VGA-out and an HDMI-out port. I want to know whether it's possible to drive 3 displays, each showing different content, using just the one graphics card, an NVIDIA GeForce 8600M GS (the built-in panel, VGA to an external monitor, and HDMI to a full-HD TV). Where can I find this kind of information? If it's not possible, I would like to know about alternatives, such as: USB solutions for connecting the external LCD or the TV (the TV output should be flawless, but the second monitor can have slow response times); or cheap PCMCIA video cards for driving an external monitor, so I could use HDMI for the TV and the second video card for the other monitor. I'm using Windows 7 and Ubuntu 10.04. Is this possible on both systems?

    Read the article

  • Installation stuck on "Installation Type" screen

    - by Andrew Latham
    I am trying to install Ubuntu 11.10 alongside Windows 7 from a CD, on an HP Pavilion dm4. I've never used Ubuntu (or any Linux) before. Everything goes fine until I get to the "Installation Type" screen. Instead of giving me options, it shows a blank menu, and all the buttons are disabled. When I click "Continue", it gives me an error saying that it can't find the root, or something like that. The live (trial) version works fine, but I can't actually install. Everything in the live session is really slow, presumably because everything runs from the CD or the Windows partition. I did some research, but the only post I could find was http://ubuntuforums.org/showthread.php?t=1870478 where the only advice is to format the entire drive, which I'm not willing to do. Any suggestions? I'm downloading 10.04 right now and am going to try that instead.

    EDIT: 10.04 didn't work either; I got to the partitioning screen and hit the same problem. I read some more forums, booted the 11.10 live session from the disc, opened the Terminal, typed sudo apt-get remove dmraid, and confirmed with y. After that I could actually see something on the "Installation type" page: "Erase disk and install Ubuntu" or "Something else". Which is weird, since Windows 7 should be detected. When I click Something Else, I get:

        /dev/sda
        /dev/sdb
        /dev/sdb1 (ntfs) (208 MB) (69 MB used)
        /dev/sdb2 (ntfs) (477542 MB) (unknown used)
        /dev/sdb3 (ntfs) (18085 MB) (16094 MB used)
        /dev/sdb4 (fat32) (4265 MB) (3084 MB used)

    I have no idea what any of this means. Also, my device for boot loader installation changed from /dev/sda to /dev/sda ATA SAMSUNG MZMPA032 (32.0 GB).

    Read the article

  • Webby Nominations for Retail

    - by David Dorf
    The Webby Awards were created back in 1996, when the internet was just a baby. This is their 14th year of highlighting excellence on the Web, and there are lots of great nominations. It's quite amazing to see the rich content and interactivity of today's websites. Some interesting nominees this year: Sephora ran a campaign at Christmas, and what remains of the Sephora Claus website is a bunch of wishes. The Starbucks "All you need is Love" campaign has lots of cool videos. The Sound Check from Walmart highlights rising music artists. Refinery29 has their fashion info hub. The five nominees in the retail category are: Bugaboo.com's website for selling high-end baby strollers. If you're looking for a high-end bag, check out Crumpler's Flash-based e-commerce site; it's highly interactive, but a little on the slow side. I Make My Case sells custom-designed iPod/iPhone and BlackBerry cases. At MOO.com, they love to print: tons of art for printing customized business cards, postcards, etc. And if you want light shoes, check out Puma L.I.F.T. and see just how little shoes can weigh. Check them out, cast a few votes, and see if you're inspired to create something even better.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach (var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this in a call to Parallel.ForEach, we'll introduce a critical race condition and get the wrong answer. Let's look at what happens:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1 and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, and element 2 then sets the min variable, we'll record a min value of 2 instead of 1. This leads to wrong answers.

    Unfortunately, fixing this with the Parallel.ForEach call we're using would require adding locking. We would need to rewrite it like:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock (syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation. Since we can block while waiting on the lock on every single iteration, we will most likely slow this down to the point where it is actually quite a bit slower than our serial implementation. The problem is the lock statement: any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
    This leads to two observations I'll make: when parallelizing a routine, try to avoid locks; that being said, always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces: we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PEs). Processing Element is the standard term for a hardware element which can process and execute instructions. This is typically a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it partitions the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead and improves the overall speed. In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design, and parallelize it more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order. We knew this to be true already; otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, and then, after all the threads are complete, "merge" the results together.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource, TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource, TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate receives three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of the local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation.

    Now that I've explained how this works, let's look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock (syncObject)
                    min = System.Math.Min(min, localState);
            });

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. The same approach can be used with Parallel.For, although there it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

    Also, many times you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique with Parallel.For uses the Interlocked class instead of a lock, since it performs a sum on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often a slight redesign of the algorithm, but the performance gains can be significant if handled in a way that avoids excessive synchronization.
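
    As a concrete illustration of that Interlocked variant, here is a minimal, self-contained sketch (the data array is hypothetical, since the article's collection type isn't specified); it uses the Parallel.For<TLocal> overload and merges each thread's subtotal with Interlocked.Add, so no lock is needed at all:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        static class LocalStateSum
        {
            static void Main()
            {
                // Hypothetical input data
                int[] data = new int[1000000];
                for (int i = 0; i < data.Length; i++)
                    data[i] = i % 100;

                long total = 0;

                Parallel.For(0, data.Length,
                    // localInit: every thread starts with a zero subtotal
                    () => 0L,
                    // body: accumulate into the thread-local subtotal, no locking
                    (i, loopState, subtotal) => subtotal + data[i],
                    // localFinally: one atomic merge per thread
                    subtotal => Interlocked.Add(ref total, subtotal));

                Console.WriteLine(total); // matches the serial sum
            }
        }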

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, after much time and effort, we're finally using Maven to manage our application lifecycle for development. Unfortunately, we still use Ant to build an EAR before deploying to Test/QA/Staging. And while we made that leap forward, developers are still free to do as they please when testing their code. One issue we have is that half our team tests on Tomcat and the other half on Jetty. I slightly prefer Jetty over Tomcat, but regardless, we use WAS for all the other environments. My question is: should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments; Tomcat, Jetty, and WAS are different under the hood. My opinion is that we should all develop on what we deploy to production with, so we don't end up with "well, it worked fine on my machine". While I prefer Jetty, I would rather we all work in the same environment, even if it means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team, and development has been a free-for-all since then. Walter

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in T-SQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change, and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has accepted to share his views on the complex subject of Wait Types and Queues. I am currently reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "Such-and-such application is slow; what's wrong with the SQL Server?" One of the funny things about the letters DBA is that they go so well with "Default Blame Acceptor", and I really wish I knew who the first person was that pointed that out to me, because it really fits at times.

    A lot of times when I get this call, the problem isn't related to SQL Server at all, but every now and then my initial quick checks turn up something that makes me start looking further. The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. Checking some performance counters, Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario, because there are a couple of things happening inside this system, and they all relate to the root cause of the performance problem.
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we might find some queries that were using excessive I/O, and possibly CPU, in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() to see the statement text, and CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, and I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks as optimized as it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB, but you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application runs an entirely adhoc workload against SQL Server, which leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially, but again during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing when this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases and higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free Pages, and Free List Stalls/sec. The fact that Page Life Expectancy constantly fluctuates between 50 and 150 is a sign that the buffer cache is churning through its data completely every one to two and a half minutes. Add to that the consistent bottoming out of Free Pages, along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of a sudden, it may not be that our disk subsystem is the problem; it may instead be an innocent bystander and victim.

    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain above 300 on average, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
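
    For reference, the sort of diagnostic query described at the start of this investigation looks roughly like this (a sketch; the column choices are illustrative):

        SELECT TOP 10
            qs.max_physical_reads,
            qs.max_logical_reads,
            qs.max_worker_time,
            st.text AS statement_text,
            qp.query_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY qs.max_physical_reads DESC;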
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were published. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation. There are two things really wrong with this server: first, the plan cache is bloated and consuming excessive memory, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts = 1). Single-use plans waste space in the plan cache, especially when they are adhoc plans for statements with concatenated filter criteria that are not likely to reoccur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack into a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses properly parameterized calls to the database. You didn't write the app, and you can't change its design? OK, you could try to force parameterization to occur by creating and keeping plan guides in place, or by forcing parameterization at the database level using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of those helps, we could periodically dump the plan cache for that database, which Kalen's blog post referenced above discusses as being a problem; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB, the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we now have SQL Server 2005 Service Packs 2, 3, and 4, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our server admin and SAN admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to troubleshoot it down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I learned about this and went back to look at it. SQL Server is constantly telling a story to anyone who will listen.
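
    The single-use plan problem described above can be quantified with a short query, and the database-level mitigation is one statement (a sketch; the database name is a placeholder):

        -- how many adhoc plans in cache have been used exactly once,
        -- and how much memory they hold
        SELECT COUNT(*) AS single_use_plans,
               SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS size_in_mb
        FROM sys.dm_exec_cached_plans
        WHERE usecounts = 1
          AND objtype = 'Adhoc';

        -- forcing parameterization at the database level, as discussed
        ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;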
    As the DBA, you have to sit back and listen to everything it's telling you, then evaluate the big picture and how all the data you can gather from SQL Server about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the output of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation of why checking memory counters is so important when looking at an I/O-related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM to point this out and offer a couple of other observations, so that he could add the information to his blog post if he found it useful. Instead, he asked that I write a guest blog about it. I am honored to be a guest blogger and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse of how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and figure out how all that information comes together to tell us what's really the problem.

    This blog post was written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article
