Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

Page 17/530 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Disabling CPU management

    - by Tiffany Walker
    If I add processor.max_cstate=0 to the kernel command line at boot, does that disable all CPU power management and throttling? I also found: http://www.experts-exchange.com/OS/Linux/Administration/A_3492-Avoiding-CPU-speed-scaling-in-modern-Linux-distributions-Running-CPU-at-full-speed-Tips.html The link talks about changing the CPU governor from 'ondemand' to 'performance' for all CPUs/cores and disabling kondemand in the kernel. The server is used for web hosting.

    UPDATES:

        2.6.32-379.1.1.lve1.1.7.6.el6.x86_64 #1 SMP Sat Aug 4 09:56:37 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

        # dmidecode 2.11
        SMBIOS 2.6 present.
        74 structures occupying 2878 bytes.
        Table at 0x0009F000.

        Handle 0x0000, DMI type 0, 24 bytes
        BIOS Information
            Vendor: American Megatrends Inc.
            Version: 1.0c
            Release Date: 05/27/2010
            Address: 0xF0000
            Runtime Size: 64 kB
            ROM Size: 4096 kB
            Characteristics:
                ISA is supported
                PCI is supported
                PNP is supported
                BIOS is upgradeable
                BIOS shadowing is allowed
                ESCD support is available
                Boot from CD is supported
                Selectable boot is supported
                BIOS ROM is socketed
                EDD is supported
                5.25"/1.2 MB floppy services are supported (int 13h)
                3.5"/720 kB floppy services are supported (int 13h)
                3.5"/2.88 MB floppy services are supported (int 13h)
                Print screen service is supported (int 5h)
                8042 keyboard services are supported (int 9h)
                Serial services are supported (int 14h)
                Printer services are supported (int 17h)
                CGA/mono video services are supported (int 10h)
                ACPI is supported
                USB legacy is supported
                LS-120 boot is supported
                ATAPI Zip drive boot is supported
                BIOS boot specification is supported
                Targeted content distribution is supported
            BIOS Revision: 8.16

        Handle 0x0001, DMI type 1, 27 bytes
        System Information
            Manufacturer: Supermicro
            Product Name: X8SIE
            Version: 0123456789
            Serial Number: 0123456789
            UUID: 49434D53-0200-9033-2500-33902500D52C
            Wake-up Type: Power Switch
            SKU Number: To Be Filled By O.E.M.
            Family: To Be Filled By O.E.M.

        Handle 0x0002, DMI type 2, 15 bytes
        Base Board Information
            Manufacturer: Supermicro
            Product Name: X8SIE
            Version: 0123456789
            Serial Number: VM11S61561
            Asset Tag: To Be Filled By O.E.M.
            Features:
                Board is a hosting board
                Board is replaceable
            Location In Chassis: To Be Filled By O.E.M.
            Chassis Handle: 0x0003
            Type: Motherboard
            Contained Object Handles: 0

        Handle 0x0003, DMI type 3, 21 bytes
        Chassis Information
            Manufacturer: Supermicro
            Type: Sealed-case PC
            Lock: Not Present
            Version: 0123456789
            Serial Number: 0123456789
            Asset Tag: To Be Filled By O.E.M.
            Boot-up State: Safe
            Power Supply State: Safe
            Thermal State: Safe
            Security Status: None
            OEM Information: 0x00000000
            Height: Unspecified
            Number Of Power Cords: 1
            Contained Elements: 0
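    For what it's worth, the governor change the linked article describes can be tried at runtime through the standard sysfs cpufreq interface before touching the kernel command line. A minimal sketch, assuming the cpufreq subsystem is present and the commands are run as root:

        # show the current governor for every core
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

        # switch every core to the 'performance' governor
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
            echo performance > "$g"
        done

        # after a reboot, confirm the C-state parameter was picked up
        grep -o 'processor.max_cstate=[0-9]*' /proc/cmdline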

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months some users have been reporting slow response times, while others (including our own people, both on the internal network and from our home networks) see no degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); removing it solved the bulk of our issues. But we still see some customers in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content.

    We've checked all Deny/Allow statements for a recurrence of "none", as well as all the other things we know of that can cause reverse DNS lookups (such as "REMOTE_HOST" in rewrite rules - we also use %a rather than %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for the folks having this problem do not time out, so I'm fairly certain DNS is not an issue in this case.

    I've run out of ideas. Are there any Apache configuration scenarios someone can point me to that I might be missing, that would cause request times for static content to be this long only for certain users? Thank you in advance.
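    One cheap cross-check is to time a static asset from an affected client and compare it with the %D value (microseconds) Apache logs for the same request. A hedged sketch - the URL and file are placeholders:

        # time to first byte vs. total time for one static request
        curl -o /dev/null -s -w 'ttfb: %{time_starttransfer}s  total: %{time_total}s\n' \
            http://www.example.com/static/logo.png

    If %D is small while curl's total time is large, the 300s is being spent on the wire rather than inside Apache, which would point away from the server configuration.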

    Read the article

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware guests on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile. What it does can be seen here: tuned.conf

    We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns.

    As far as I have understood, increasing vm.dirty_ratio to 40% means that for a server with 20 GB of RAM, 8 GB can be dirty at any given time unless vm.dirty_writeback_centisecs is hit first. And while flushing these 8 GB, all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio probably means higher write performance at peaks, as we now have a larger cache, but then again when the cache fills, IO will be blocked for a considerably longer time (several seconds).

    The other question is why they are increasing sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?
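    Both values under discussion are plain sysctls, so the profile's overrides can be inspected and experimented with live, without editing tuned.conf. A minimal sketch, assuming a kernel that still exposes the CFS tunables via sysctl:

        # current writeback and scheduler settings
        sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_writeback_centisecs
        sysctl kernel.sched_min_granularity_ns kernel.sched_latency_ns

        # try a single override before rolling it into the profile
        sysctl -w vm.dirty_ratio=20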

    Read the article

  • F# performance in scientific computing

    - by aaa
    Hello. I am curious how F# performance compares to C++ performance. I asked a similar question with regard to Java, and the impression I got was that Java is not suitable for heavy number crunching. I have read that F# is supposed to be more scalable and more performant, but how does its real-world performance compare to C++? Specific questions about the current implementation: How well does it do floating point? Does it allow vector instructions? How friendly is it towards optimizing compilers? How big a memory footprint does it have? Does it allow fine-grained control over memory locality? Does it have capacity for distributed-memory processors, for example Cray? What features does it have that may be of interest to computational science, where heavy number processing is involved? Are there actual scientific computing implementations that use it? Thanks

    Read the article

  • VS2010 + Resharper 5 performance issues

    - by Jeremy Roberts
    I have been using VS2010 with ReSharper 5 for several weeks and am having a performance issue. Sometimes when typing, the cursor will lag and the keystrokes won't show instantaneously. Scrolling will also lag at times. There is a forum thread started and JetBrains has been responding. Several people (including myself) have added their voice and uploaded performance profiles. If anyone here has this issue, I would encourage you to visit the thread and let JetBrains know about it. Has anyone had this problem and found a way to restore performance?

    Read the article

  • Measuring Web Page Performance on Client vs. Server

    - by Yaakov Ellis
    I am working with a web page (ASP.NET 3.5) that is very complicated and in certain circumstances has major performance issues. It uses Ajax (through the Telerik AjaxManager) for most of its functionality. I would like to be able to measure, for each request, the time spent in each of the following phases:

        - Client submitting the request to the server
        - Client-to-server transit
        - Server initializing the request
        - Server processing the request
        - Server-to-client transit
        - Client rendering and JavaScript processing

    I have monitored the database traffic and cannot find any obvious culprit. On the other hand, I have a suspicion that some of the Ajax interactions are causing performance issues. However, until I have a way to track the times involved, make a baseline measurement, and measure performance as I tweak, it will be hard to work on the issue. So what is the best way to measure all of these? Is there one tool that can do it, or a combination of Firebug and logging inserted into different places in the page life-cycle?
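    From the client side, curl can give a rough per-request split of several of these phases without any changes to the page. A hedged sketch against a placeholder URL - the server-side phases would still need logging inserted into the page life-cycle itself:

        curl -o /dev/null -s \
            -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
            http://www.example.com/page.aspx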

    Read the article

  • Looking for SQL Server Performance Monitor Tools

    - by the-locster
    I may be approaching this problem from the wrong angle, but what I'm thinking of is some kind of performance monitoring tool for SQL Server that works in a similar way to code performance tools. E.g. I'd like to see an output of how many times each stored procedure was called, its average execution time, and possibly various resource usage stats such as cache/index utilisation, resulting disk access, table scans, etc. As far as I can tell, the performance monitor that comes with SQL Server just logs the various calls but doesn't report the various stats I'm looking for. Potentially I just need a tool to analyze the log output?
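    If a dedicated tool turns out to be overkill, recent SQL Server versions expose most of these numbers through dynamic management views. A hedged sketch using sqlcmd against sys.dm_exec_procedure_stats (SQL Server 2008 and later; verify the column names on your version, and the server name is a placeholder):

        # top stored procedures by total elapsed time (reported in microseconds)
        sqlcmd -S myserver -E -Q "SELECT TOP 20 OBJECT_NAME(object_id, database_id) AS proc_name, execution_count, total_elapsed_time / execution_count AS avg_elapsed_us, total_logical_reads FROM sys.dm_exec_procedure_stats ORDER BY total_elapsed_time DESC"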

    Read the article

  • Performance Counters Registry validation

    - by anchandra
    I have a C# application that adds some performance counters when it starts up. But if the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib is corrupted (missing or invalid data), checking for the existence of the performance counters (PerformanceCounterCategory.Exists(category)) takes a really long time (around 30 seconds) before finally throwing an exception (InvalidOperation: Category does not exist). My question is: how can I verify the validity of the registry before trying to add the performance counters (and what does validity mean here)? Or is there a way to time out the perf counter operations, so that it doesn't take 30 seconds to get an exception?
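    Rather than validating Perflib by hand, the counter registry can usually be rebuilt from the backup INI files Windows keeps. A hedged sketch from an elevated command prompt (both utilities ship with Windows; run them before the application starts):

        rem rebuild the performance counter registry from its backup store
        lodctr /R
        rem re-sync WMI's view of the rebuilt counters
        winmgmt /resyncperf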

    Read the article

  • Entity Framework associations killing performance

    - by Chris
    Here is the performance test I am looking at. I have 8 different entities that are table-per-type. Some of the entities contain over 100,000 rows. This particular application does several recursive calculations on the client, so I think it may be best to preload the data instead of lazy loading. With no associations I can load the entire database in about 3 seconds. As soon as I add associations in any way, performance starts to decline drastically. I am loading all the data the same way (just calling ToList() on the entity set attached to the context). I ran the test with EDMX-generated classes and with self-tracking entities and had similar results. I am sure that if I were to deal with the associations myself, similar to how I would with a DataSet, the performance problem would go away. On the other hand, I am pretty sure this is not how the Entity Framework was intended to be used. Any thoughts or ideas?

    Read the article

  • Azure performance

    - by Dave K
    I've moved my app from a dedicated server to Azure (and SQL Azure), and have noticed substantial performance degradation. Obviously not having the database and web server on the same piece of hardware accounts for much of it, but I'm curious what other people have found in migrating to Azure, and whether there is anything any of you would suggest I do to improve it. Right now I'm considering moving back to my dedicated server...

    So in summary: are there any rules of thumb for this, existing research (I wasn't able to find much), or other pieces of advice on improving the performance of the app? Has anyone else found the same to be true, and improved their site's performance in some way? It's built in C# on ASP.NET MVC 2. Thanks!

    Read the article

  • Performance statistics hooks

    - by tinny
    Let's be honest, most software that developers produce has quite modest performance requirements - systems serving perhaps hundreds of requests per second, if that. But let's assume for a moment (or even dream) that you were involved in the "next big thing" (whatever that means) and you wanted to put some sort of performance statistics logging in place to help you out when all those users come flying in. How would you approach this requirement? Perhaps you would use some sort of generic framework for it? Or roll your own solution? What would you log? How granular would you make it? Or would you not bother putting anything in place, and rather deal with this issue when it actually became an issue? It would be really interesting to hear your thoughts on this topic.

    Read the article

  • Is code clearness killing application performance?

    - by Jorge Córdoba
    As today's code gets more complex by the minute, it needs to be designed to be maintainable - meaning easy to read and easy to understand. That being said, I can't help but remember the programs that ran a couple of years ago, such as Winamp or some games, where you needed a high-performance program because your 486 at 100 MHz wouldn't play MP3s with that beautiful MP3 player which consumed all of your CPU cycles. Now I run Media Player (or whatever), start playing an MP3, and it eats up 25-30% of one of my four cores. Come on!! If a 486 could do it, how can the playback take up so much processor to do the same? I'm a developer myself, and I always used to advise: keep your code simple, don't prematurely optimize for performance. It seems that we've gone from "try to use the least amount of CPU possible" to "if it doesn't take too much CPU, it's all right". So, do you think we are killing performance by ignoring optimizations?

    Read the article

  • Oracle, slow performance when using sub select

    - by Wyass
    I have a view that is very slow if you fetch all rows, but if I select a subset (providing an ID in the where clause) the performance is very good. I cannot hardcode the ID, so I create a sub select to get the ID from another table. The sub select only returns one ID. Now the performance is very slow again, and it seems like Oracle is evaluating the whole view before applying the where clause. Can I somehow help Oracle so that queries 2 and 3 below have the same performance? I'm using Oracle 10g.

        -- 1: slow
        select * from ci.my_slow_view
        -- 2: fast
        select * from ci.my_slow_view where id = 1;
        -- 3: slow
        select * from ci.my_slow_view where id in (select id from active_ids)

    Read the article

  • Apache2 benchmarks - very poor performance

    - by andrzejp
    I have two servers on which I am testing the Apache 2 configuration.

    The first server: 4 GB of RAM, AMD Athlon(tm) 64 X2 Dual Core Processor 5600+, Apache 2.2.3, mod_php, mpm prefork. Settings:

        Timeout 100
        KeepAlive On
        MaxKeepAliveRequests 150
        KeepAliveTimeout 4
        <IfModule mpm_prefork_module>
            StartServers 7
            MinSpareServers 15
            MaxSpareServers 30
            MaxClients 250
            MaxRequestsPerChild 2000
        </IfModule>

    Compiled-in modules: core.c mod_log_config.c mod_logio.c prefork.c http_core.c mod_so.c

    Second server: 8 GB of RAM, Intel(R) Core(TM) i7 CPU [email protected], Apache 2.2.9, fcgid, mpm worker, suexec. PHP scripts are run via fcgi-wrapper. Settings:

        Timeout 100
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 4
        <IfModule mpm_worker_module>
            StartServers 10
            MaxClients 200
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadsPerChild 25
            MaxRequestsPerChild 1000
        </IfModule>

    Compiled-in modules: core.c mod_log_config.c mod_logio.c worker.c http_core.c mod_so.c

    The following test results are very strange! New server (dynamic content - PHP via fcgid+suexec):

        Server Software:      Apache/2.2.9
        Server Hostname:      XXXXXXXX
        Server Port:          80
        Document Path:        XXXXXXX
        Document Length:      179512 bytes
        Concurrency Level:    10
        Time taken for tests: 0.26276 seconds
        Complete requests:    1000
        Failed requests:      0
        Total transferred:    179935000 bytes
        HTML transferred:     179512000 bytes
        Requests per second:  38.06
        Transfer rate:        6847.88 kb/s received

        Connection Times (ms)
                    min  avg  max
        Connect:      2    4   54
        Processing: 161  257  449
        Total:      163  261  503

    Old server (dynamic content - mod_php):

        Server Software:      Apache/2.2.3
        Server Hostname:      XXXXXX
        Server Port:          80
        Document Path:        XXXXXX
        Document Length:      187537 bytes
        Concurrency Level:    10
        Time taken for tests: 173.073 seconds
        Complete requests:    1000
        Failed requests:      22 (Connect: 0, Length: 22, Exceptions: 0)
        Total transferred:    188003372 bytes
        HTML transferred:     187546372 bytes
        Requests per second:  5777.91
        Transfer rate:        1086267.40 kb/s received

        Connection Times (ms)
                    min   avg    max
        Connect:      3     3     28
        Processing: 298  1724  26615
        Total:      301  1727  26643

    Old server, static content (jpg file):

        Server Software:      Apache/2.2.3
        Server Hostname:      xxxxxxxxx
        Server Port:          80
        Document Path:        /images/top2.gif
        Document Length:      40486 bytes
        Concurrency Level:    100
        Time taken for tests: 3.558 seconds
        Complete requests:    1000
        Failed requests:      0
        Write errors:         0
        Total transferred:    40864400 bytes
        HTML transferred:     40557482 bytes
        Requests per second:  281.09 [#/sec] (mean)
        Time per request:     355.753 [ms] (mean)
        Time per request:     3.558 [ms] (mean, across all concurrent requests)
        Transfer rate:        11217.51 [Kbytes/sec] received

        Connection Times (ms)
                     min  mean[+/-sd] median   max
        Connect:       3   11   4.5      12     23
        Processing:   40  329  61.4     339   1009
        Waiting:       6  282  55.2     293    737
        Total:        43  340  63.0     351   1020

    New server, static content (jpg file):

        Server Software:      Apache/2.2.9
        Server Hostname:      XXXXX
        Server Port:          80
        Document Path:        /images/top2.gif
        Document Length:      40486 bytes
        Concurrency Level:    100
        Time taken for tests: 3.571531 seconds
        Complete requests:    1000
        Failed requests:      0
        Write errors:         0
        Total transferred:    41282792 bytes
        HTML transferred:     41030080 bytes
        Requests per second:  279.99 [#/sec] (mean)
        Time per request:     357.153 [ms] (mean)
        Time per request:     3.572 [ms] (mean, across all concurrent requests)
        Transfer rate:        11287.88 [Kbytes/sec] received

        Connection Times (ms)
                     min  mean[+/-sd] median   max
        Connect:       2   63  24.8      66    119
        Processing:  124  278  31.8     282    391
        Waiting:       3   70  28.5      66    164
        Total:       126  341  35.9     350    443

    I noticed that the Apache error.log contains a lot of entries like:

        [notice] mod_fcgid: call /www/XXXXX/public_html/forum/index.php with wrapper /www/php-fcgi-scripts/XXXXXX/php-fcgi-starter

    What have I omitted, or what do I not understand? Such a difference in requests per second - is it possible? What could be the cause?
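    For anyone trying to reproduce these figures, both runs look like plain ApacheBench invocations; a hedged sketch with placeholder URLs matching the concurrency levels above:

        # dynamic test: 1000 requests, 10 concurrent
        ab -n 1000 -c 10 http://www.example.com/forum/index.php

        # static test: 1000 requests, 100 concurrent
        ab -n 1000 -c 100 http://www.example.com/images/top2.gif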

    Read the article

  • OpenVPN TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have a question about strange behavior in my OpenVPN configuration on Debian Lenny. I have two server configs (one proto tcp-server based and one proto udp based). ISP bandwidth is 7 Mbit/7 Mbit. When I use proto tcp-server, my server-side download rate is fine at around 6.4 Mbit/s, but the upload rate is about 3 Mbit/s. When I use proto udp, my server-side download rate is around 3 Mbit/s and the upload rate around 6.4 Mbit/s. I tried adjusting the MTU, MSSFIX and cipher on/off in the server and client configs to even out the rates, but found no solution.

    Here is the TCP-based SERVER config:

        mode server
        tls-server
        port 1194
        proto tcp-server
        dev tap0
        ifconfig 11.10.15.1 255.255.255.0
        ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0
        push "route 192.168.1.0 255.255.255.0"
        push "dhcp-option DNS 192.168.1.200"
        push "route-gateway 11.10.15.1"
        push "dhcp-option WINS 192.168.1.200"
        route-up /etc/openvpn/routeup.sh
        duplicate-cn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        mssfix
        cipher BF-CBC

    Here is the UDP-based SERVER config:

        port 1194
        proto udp
        dev tun0
        local xx.xx.xx.xx
        server 11.10.15.0 255.255.255.0
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh2048.pem
        log-append /var/log/openvpn.log
        status /var/run/vpn.status 10
        user nobody
        group nogroup
        keepalive 10 120
        comp-lzo
        verb 3
        duplicate-cn
        script-security 3
        plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth
        persist-tun
        persist-key
        tun-mtu 1500
        mssfix 1212
        client-to-client
        ifconfig-pool-persist ipp.txt

    Here is the TCP/UDP-based Windows CLIENT config:

        remote xx.xx.xx.xx --socket-flags TCP_NODELAY
        tls-client
        port 1194
        proto tcp-client
        #proto udp
        dev tap
        #dev tun
        pull
        ca ca.crt
        cert latis.crt
        key latis.key
        mute 0
        comp-lzo adaptive
        verb 3
        resolv-retry infinite
        nobind
        persist-key
        auth-user-pass
        auth-nocache
        script-security 2
        mssfix
        cipher BF-CBC
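    Before tuning OpenVPN any further, it may be worth measuring the raw TCP and UDP path between the same two endpoints, since some ISPs shape TCP and UDP asymmetrically and that would produce exactly this mirror-image pattern. A hedged sketch with iperf (version 2 syntax; the hostname is a placeholder):

        # on the server
        iperf -s        # TCP listener
        iperf -s -u     # UDP listener

        # on the client
        iperf -c vpn.example.com           # raw TCP throughput
        iperf -c vpn.example.com -u -b 7M  # raw UDP throughput at line rate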

    Read the article

  • Video card performance monitoring?

    - by Dru
    Is there a 'top'-like command for monitoring the GPU and memory usage of a video card? I am most interested in Linux commands, but any OS would be interesting. I strongly suspect that for a group of my systems the video cards are being under-utilized (but I have no idea by how much) and would like to re-allocate funds to other bottlenecks. We are using higher-end cards, so the price difference between cards is significant. Thank you.
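    On Linux with NVIDIA cards there is a vendor utility that behaves roughly like a GPU 'top'. A hedged sketch - nvidia-smi ships with the proprietary driver, and older driver versions report fewer fields:

        # one-shot GPU utilization and memory report
        nvidia-smi

        # refresh every second, top-style
        watch -n 1 nvidia-smi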

    Read the article

  • Bad performance issue on dedicated server

    - by Pierre Espenan
    I just subscribed to a dedicated server offer, and am encountering poor PHP execution performance. Execution time can actually be twice what it was on my old shared server! I'm definitely not an expert in server management, so I'm wondering what I missed. Here is some material that can help you understand what's wrong:

        My server (in French but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc
        phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/
        PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/
        PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/

    Is it normal to get such poor performance after a kernel update and a basic "apt-get install" of apache2 and php? Thanks!
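    One way to compare raw PHP execution speed across the two machines, independent of Apache and of the site's code, is a CLI micro-benchmark run on each host. A minimal sketch - the loop body is arbitrary, only the relative timings matter:

        # run on both servers and compare wall-clock times
        time php -r 'for ($i = 0; $i < 10000000; $i++) { $x = $i * 2; }'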

    Read the article

  • Performance monitoring for apache websites

    - by instigator
    I am after something that will monitor CPU/memory usage for different Apache sites. I have a web server that runs multiple websites (on different domains) and am wondering if there is a tool (hopefully web-based, and able to send email alerts) that will show the CPU and memory usage for each website.

    Read the article

  • Performance mitigations serving content from a UNC share via IIS 6

    - by codepoke
    I have a quad-processor VMware instance running Windows 2003 with gigabit Ethernet. I'm comparing serving the exact same heavy .NET 2.0 content from the local hard drive versus serving it from a UNC drive. If I load it down with WCAT, I see about a 40% reduction in transactions/sec while serving from the UNC path. Processor time barely moves from 45% and the NIC sits around 40% either way. I don't see any significant memory load either way. Context switches per transaction, though, more than double when serving from the UNC path. Path lengths more than double as well, but I believe that's just an expression of the effect of context switching.

    All told, it looks like the bottleneck is processor switching while waiting on content from the UNC share. Is my experience about the norm? Is there some mitigation I might try? I twiddled HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaxCmds a little per http://technet.microsoft.com/en-us/library/dd296694(WS.10).aspx, but to no obvious effect. I rather doubt my problem is a lack of connections; it seems to be the act of switching from thread to thread while waiting on data.
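    For reference, the MaxCmds change mentioned above can be scripted; a hedged sketch from an elevated command prompt (2048 is an arbitrary example value, not a recommendation, and the redirector only picks it up after a reboot):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" ^
            /v MaxCmds /t REG_DWORD /d 2048 /f
        rem reboot, then re-run the WCAT comparison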

    Read the article

  • OpenVPN performance: how many concurrent clients are possible?

    - by Steffen Müller
    I am evaluating a system for a client where many OpenVPN clients connect to an OpenVPN server. "Many" means 50,000 - 1,000,000. Why do I do that? The clients are distributed embedded systems, each sitting behind the system owner's DSL router, and the server needs to be able to send commands to the clients. My first naive approach is to make the clients connect to the server via an OpenVPN network; this way, the secure communication tunnel can be used in both directions. This means that all clients are always connected to the server, and the number of clients adds up over the years. The question is: does the OpenVPN server explode when reaching a certain number of clients? I am already aware of the maximum TCP connection limit, so for that (and other) reasons the VPN would have to use UDP transport. OpenVPN gurus, what is your opinion?
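    Whatever the protocol-level ceiling, a single Linux OpenVPN instance usually hits operating-system limits first; with UDP all clients multiplex one socket, but descriptor and buffer limits still matter for the process as a whole. A hedged sketch of the knobs worth checking before a load test:

        ulimit -n                                     # per-process file descriptor limit
        sysctl fs.file-max                            # system-wide descriptor ceiling
        sysctl net.core.rmem_max net.core.wmem_max    # socket buffer ceilings for the UDP socket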

    Read the article

  • Windows XP IIS5 performance across Network

    - by davidsleeps
    Hi, just wondering whether Windows XP with IIS5 needs any extra configuration to be suitable as a web server. I'm not considering using this for anything other than a web server on a small network for testing, development, etc.

    One of the reasons I'm concerned is that we've deployed an ASP.NET application to a workstation with Windows XP, and running the application in a browser on the machine itself (accessing it through localhost/myApp/page.aspx rather than through the network) is really quick. If another machine on the LAN accesses the same page (using http://ComputerName/myApp/page.aspx), the whole application runs noticeably slower - yet the computers are connected on a gigabit switch, so I wouldn't have thought network latency or bandwidth could be an issue. Does Windows XP need anything enabled or changed in its network settings for it to work correctly?

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5 (Smart Array P400, SAS disks combined in RAID 1+0). I set up an NFS share on an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected by a dedicated gigabit link. A simple benchmark using dd:

        $ dd if=/dev/zero of=outfile bs=1000 count=2000000
        2000000+0 records in
        2000000+0 records out
        2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s

    So it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) with a total size of ~300 MB, the cp process takes about 10 minutes to finish. Is NFS unsuitable for small-file transfers like the above case? Or are there some parameters that must be adjusted?
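    Small-file copies over NFS are dominated by per-file round trips rather than bandwidth, so the usual first experiments are larger rsize/wsize, asynchronous writes, and no atime updates. A hedged sketch of a client-side mount to test with (the server name and paths are placeholders; option support varies by NFS version):

        mount -t nfs -o rw,rsize=32768,wsize=32768,noatime,async \
            filer.example.com:/mnt/share /mnt/nfs

        # compare per-file latency against the raw dd throughput
        time cp -r ./small-files /mnt/nfs/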

    Read the article

  • Performance problem with Win Server 2008 at vbox on Ubuntu 9.10 Server

    - by Diskilla
    Hey, I've got an Ubuntu 9.10 server and tried to install Windows Server 2008 in a virtual machine. The problem is that the server has 8 cores, but the virtual machine seems to have problems with that. The VM is very slow and not really usable if I configure it with more than one core. If it is set to only one core, everything works fine, but that's not a solution, because I bought the server with several cores for a reason... Does anybody know this problem or have a solution? I feel like I've read the entire internet via Google... no solution found. Greetz, Diskilla

    P.S.: I'm from Germany and it's kind of hard to ask in English :-) I hope I didn't make too many mistakes.
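    One thing worth checking before anything else: VirtualBox only supports SMP guests when the I/O APIC is enabled for the VM, and multi-CPU guests without it crawl in exactly this way. A hedged sketch with VBoxManage (run while the VM is powered off; the VM name is a placeholder):

        # give the guest 4 CPUs and enable the I/O APIC it needs for SMP
        VBoxManage modifyvm "Win2008" --cpus 4 --ioapic on

        # SMP guests also require hardware virtualization
        VBoxManage modifyvm "Win2008" --hwvirtex on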

    Read the article

  • Using Performance Monitor To Get IIS7 Response Turnaround Time

    - by alphadogg
    I have an MVC2 web application on W2K8R2/IIS7 that I'd like to benchmark and monitor. Some XHR requests from a browser-based client suddenly take 8-10 s when they used to take much less time (as per Chrome Developer Tools timings). The underlying SQL Server queries, using the same parameters, run in 1.4 s according to the total execution time client statistics from SSMS. I'm assuming that there are various counters that can specifically dissect the time taken/waiting/processing between IIS7 itself and the web application? For example, can I check how long the IIS7 app spends waiting on the DB versus how long IIS7 itself takes to serve the response?
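    On the server side, these phases are exposed as ordinary performance counters, so they can be sampled from the command line without building a full Data Collector Set. A hedged sketch with typeperf (counter names differ slightly between IIS/ASP.NET versions - list what is available with typeperf -q):

        typeperf "\ASP.NET\Request Execution Time" "\ASP.NET\Requests Queued" ^
            "\Web Service(_Total)\Current Connections" -si 5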

    Read the article
