Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 316 of 883

  • Mobile Phone Browser Emulators/Simulators

    - by Jessie
    I work in QA in a .NET shop, and recently part of my testing process has started to involve testing our company website on mobile devices. At least one of our techs uses an HTC Desire. After tons of googling I still can't find a good online emulator for testing websites on different types of mobile devices. Is anyone aware of a website where I can test across multiple mobile platforms? Or even an online HTC or BlackBerry browser emulator? I've found an iPhone/Opera Mini simulator, but that's about it. Also, I realize there are a lot of SDKs that include emulators, but I'd rather not have to set up an entire SDK just to use one.

    Read the article

  • Will Software RAID And iSCSI Work For A SAN

    - by Justin
    I am looking for a SAN solution, but can't afford even entry-level offerings. Basically, the SAN is for development and a proof-of-concept product. The performance doesn't have to be amazing, but it needs to be functional. My buddy says we should just set up software RAID and software iSCSI in Linux. Essentially I have a spare server with dual Xeon processors, 4GB of memory, and two 500GB 7200RPM drives. It's a bit old but working. I am sure there is a reason people don't do software RAID and iSCSI, but will performance be usable? I'm thinking of configuring the drives in RAID 0 (for performance).
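    A minimal sketch of what the Linux side could look like, assuming mdadm for the software RAID set and tgtadm (scsi-target-utils) for the iSCSI target; the device names and IQN are placeholders, not a tested configuration:

      # Build the two-disk software RAID 0 array (no redundancy, purely for speed)
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
      # Export the array as an iSCSI LUN with tgtadm
      tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-05.com.example:devsan
      tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md0
      tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL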

    Read the article

  • Asterisk does not recognise DTMF tones from mobile phones

    - by Eugene van der Merwe
    We have an Asterisk 1.8.7.0 (the Elastix derivative) switchboard. Ever since about a month ago, seemingly out of the blue, the switchboard no longer recognises DTMF tones from mobile phones. Testing the switchboard using 7777 works. Testing the switchboard from a normal phone works. Testing the switchboard from a mobile phone fails. Looking at the log file I can't see anything. I used 'asterisk -rvvvv' and 'tail -f /var/log/asterisk/full' to watch the live output and scan the logs. I guess I don't see anything because it's simply not recognising the DTMF tones. I did some brief research and found an old setting for SIP phones, 'rfc2833compensate=yes', and tried adding it to 'sip_general_custom.conf'. After that I did 'core restart when convenient', but that didn't make any difference. Could anyone give me some additional troubleshooting steps?
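    One avenue worth trying (a hedged sketch, not a confirmed fix): force out-of-band DTMF and relax the inband detector, then reload SIP. The option names are standard sip.conf settings, but whether they belong in the general section or on the specific trunk carrying the mobile calls depends on the setup:

      # Append the settings and reload the SIP channel driver
      printf 'dtmfmode=rfc2833\nrelaxdtmf=yes\n' >> /etc/asterisk/sip_general_custom.conf
      asterisk -rx "sip reload"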

    Read the article

  • Exchange Server 2010 ActiveSync SSL Certificate Problem

    - by Cell-o
    Hi all, we have a problem with Exchange Server 2010 ActiveSync. When I connect to ActiveSync from outside, I receive the following error: ExRCA is testing Exchange ActiveSync. The Exchange ActiveSync test failed. Test steps: Attempting to resolve the host name mail.xxxxx.com in DNS. The host name resolved successfully. Additional details: IP addresses returned: xx.0.x3.4. Testing TCP port 443 on host mail.x.com to ensure it's listening and open. The port was opened successfully. Testing the SSL certificate to make sure it's valid. The SSL certificate failed one or more certificate validation checks. Test steps: Validating the certificate name. Certificate name validation failed. Additional details: Host name mail.x.com doesn't match any name found on the server certificate CN=xxxxxx. Thanks in advance for all your help.
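    The error means the name the devices connect to is not on the certificate the server presents. A quick way to see exactly which names are on it (the hostname below is a placeholder; run from any machine with OpenSSL):

      echo | openssl s_client -connect mail.example.com:443 2>/dev/null \
        | openssl x509 -noout -subject -text | grep -A1 "Subject Alternative Name"

    If the published ActiveSync host name is missing from the CN and the subject alternative names, the usual options are to reissue the certificate with that name included or to point ActiveSync at a name the certificate already covers.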

    Read the article

  • needs updated glibc package version 3.4.15 or later for RHEL6

    - by Tejas
    I want to upgrade my currently running applications to the latest versions, but due to a package issue I am unable to install them. The common error I get is: /usr/lib64/libstdc++.so.6: version 'GLIBCXX_3.4.15' not found. When I try to update the glibc package I get the following output:

      [root@agastya ~]# yum install glibc
      Loaded plugins: refresh-packagekit, rhnplugin
      epel/metalink                |  3.8 kB     00:00
      epel                         |  4.3 kB     00:00
      epel/primary_db              |  5.0 MB     01:33
      epel-testing/metalink        |  3.8 kB     00:00
      epel-testing                 |  4.3 kB     00:00
      epel-testing/primary_db      |  295 kB     00:03
      rhel-x86_64-server-6         |  1.8 kB     00:00
      rhel-x86_64-server-6/primary |   11 MB     02:02
      rhel-x86_64-server-6                    8816/8816
      Setting up Install Process
      Package glibc-2.12-1.80.el6_3.6.x86_64 already installed and latest version
      Nothing to do
      [root@agastya ~]#

    Do I need to add some more repositories? If yes, how?
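    For what it's worth, the missing GLIBCXX_3.4.15 symbol lives in libstdc++ (the C++ runtime that ships with GCC), not in glibc, so updating glibc will not help. A quick sanity check of what the installed runtime actually exports (paths as on RHEL 6 x86_64):

      # List the GLIBCXX symbol versions the installed libstdc++ provides
      strings /usr/lib64/libstdc++.so.6 | grep '^GLIBCXX'
      # Which package owns that library
      rpm -qf /usr/lib64/libstdc++.so.6

    If the list stops at GLIBCXX_3.4.13 (the stock RHEL 6 gcc-4.4 runtime, if I recall correctly), the application was built against a newer GCC; adding more yum repositories for glibc will not fix it, and you need either a newer libstdc++ runtime or a build of the application targeted at RHEL 6.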

    Read the article

  • File store: CouchDB vs SQL Server + file system

    - by Andrey
    I'm exploring different ways of storing user-uploaded files (all MS Office documents or similar) on our high-load web site. It's currently designed to store documents as files and have a SQL database hold all the metadata for those files. I'm concerned about outgrowing the storage server, and about SQL Server performance, once the number of documents reaches hundreds of millions. I've read a lot of good information about CouchDB, including its built-in scalability and performance, but I'm not sure how storing files as attachments in CouchDB would compare to storing files on a file system in terms of performance. Has anybody used CouchDB clusters for storing LARGE numbers of documents in a high-load environment?

    Read the article

  • How to manage a home-grown YUM package repo?

    - by TomOnTime
    There are plenty of websites that explain how to manage a mirror of YUM repos. I want to run a repo for my home-grown packages. Is there a good way to manage such repos? What I need to do:
    - Manage 3 repos: unstable, testing, stable.
    - Self-service functions that let users add/remove/promote packages (promote means moving a package from unstable to testing, or from testing to stable).
    - ACLs that control which users/groups may add/remove/promote packages.
    - Automatically re-sign packages as they move from repo to repo (since the GPG key for "stable" should be different from the one for "unstable").
    - Automatically run "createrepo" to update repodata when needed.
    Suggestions?
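    I'm not aware of a single stock tool that covers all of this; most shops end up wrapping a short script. A hypothetical sketch of the promote step (paths, package and key names are made up) that a self-service wrapper could call under sudo to enforce the ACLs:

      #!/bin/bash
      # promote.sh <rpm> <from-repo> <to-repo>: move, re-sign, and refresh metadata
      set -e
      REPO_ROOT=/srv/repos
      PKG="$1"; FROM="$2"; TO="$3"
      mv "$REPO_ROOT/$FROM/x86_64/$PKG" "$REPO_ROOT/$TO/x86_64/"
      # Re-sign with the destination repo's key (key name is illustrative)
      rpm --resign --define "_gpg_name $TO repo signing key" "$REPO_ROOT/$TO/x86_64/$PKG"
      # Rebuild the repodata so yum clients see the change
      createrepo --update "$REPO_ROOT/$TO/x86_64"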

    Read the article

  • RAID-0 with two SSDs only: should I use on-board/software RAID or a RAID controller card?

    - by Wes
    I am looking at going with a two-drive, SSD-only RAID-0 configuration and was wondering if I would get better performance/speed from a RAID controller card versus just using the software RAID on my motherboard. I have heard conflicting reports. Again, I only plan on running 2 SSD drives in a RAID-0 config. I have no problem spending the extra money for a good controller, but only if I am going to benefit performance-wise; otherwise, if there is no notable gain, I will just use the software RAID that my HP-180-T came with (Intel 3.33 GHz, 6-core, 12 GB of DDR3). I have a huge external drive for all storage and am not concerned about data loss; I'm just looking for pure speed. And if a controller will benefit my performance, what type of card would one suggest?

    Read the article

  • VMware Workstation Dev Machine Disks: one fast disk or four eco-friendly disks in RAID?

    - by Avi
    I'm building a new dev computer. It will be running a few VMware Workstation virtual machines: a dev machine running VS 2010, a build machine, a version-control machine, a web server for testing, a "personal" machine running Office, etc. I'll be connecting the computer to my stereo, so I'll also be running iTunes (possibly on a dedicated VM), and I want the computer to be a silent one. I'll probably use an Antec P183 case. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses 4 disks. So, my question is as follows: in terms of heat, noise, reliability, warranty, price, capacity and performance, what would you suggest: a four-disk RAID 10 array using eco-friendly disks such as the $94 1TB Western Digital Caviar Green, or one high-performance disk such as the 2TB Western Digital Caviar Black at $280?

    Read the article

  • Netcat UDP File Transfer Between Two Servers Times Out?

    - by Mark Bowytz
    I'm testing file transfer speeds between two Red Hat servers that are connected to the same switch within the data center, and I decided to use netcat to eliminate protocol overhead as much as possible. Testing in TCP mode went well, and I was wondering how UDP might fare. On my receiving (client) end, I ran this:

      nc -u -l 11225 -v > myfile.out

    And then on the sending (server) end I ran the following:

      cat myfile.out | nc -u myserver.foo.zzz.com 11225 -v

    The file I'm testing with is 38 GB, but the transfer seems to stop at around 15 GB (one time at 14.9, another at 15.6). I've tried adding a "-w 5000" just in case it's timing out, but no joy. Adding the -v doesn't show anything except acknowledging that the connection occurred. No errors. So, any suggestions as to why the transfer would cease?
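    One thing to keep in mind: UDP has no flow control or retransmission, so once the sender outruns the receiver or a switch buffer, datagrams are silently dropped and the received file can simply stop growing. A hedged way to test that theory is to throttle the sender, for example with pv (assuming it is installed; the rate is arbitrary):

      # Receiver, as before
      nc -u -l 11225 > myfile.out
      # Sender, capped at roughly 50 MB/s so the receiver can keep up
      pv -L 50m myfile.out | nc -u myserver.foo.zzz.com 11225

    If the throttled run completes, drops under load rather than a timeout are the likely culprit; comparing byte counts (or md5sums) on both ends confirms it.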

    Read the article

  • What can impact the throughput rate at the TCP or OS level?

    - by Jimm
    I am facing a problem where running the same application on different servers yields unexpected performance results. For example, running the application on a particular faster server (faster CPU, more memory), with no load, yields slower performance than running on a less powerful server on the same network. I suspect that either the OS or TCP is causing the slowness on the faster server. I cannot use iperf, unless I modify it, because "performance" in my application is defined as: Component A sends a message to Component B, Component B sends an ACK to Component A, and ONLY then does Component A send the next message. So it is different from what iperf does, which, to my knowledge, simply tries to push as many messages as possible. Is there a tool that can look at the OS and TCP configuration and suggest the cause of the slowness?
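    Because every message waits for an ACK, throughput here is bounded by round-trip latency rather than bandwidth (roughly, messages per second is about 1 / RTT), so the first things worth comparing between the two servers are latency and anything that adds per-message delay. A rough sketch of checks (the host name is a placeholder, not a specific tool recommendation):

      # Compare round-trip latency from each server to its peer
      ping -c 100 -q peer.example.com
      # Look for retransmissions or errors that would stall a request/ACK exchange
      netstat -s | grep -i -E 'retrans|error'
      # Per-connection details (RTT, congestion window) while the app is running
      ss -ti

    Also worth checking is whether the application sets TCP_NODELAY; Nagle's algorithm combined with delayed ACKs can add tens of milliseconds to every small request/response pair on one host and not another.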

    Read the article

  • What is a proper MySQL replication configuration for frequent DB updates and rare selects?

    - by serg555
    We currently have one master DB on its own server and a slave DB on the app server. The app executes very frequent but light updates (like incrementing counters) and occasional (once every few minutes) heavy selects, which are the most important part of the app. When the app was connected only to the master DB there were no performance issues. With the introduction of the slave DB, the CPU load average on the app server increased to about 6-10 during the heavy select periods (from 3-4 before). When the server isn't running those frequent updates, select performance seems to stay within limits. So I have a feeling the updates are what is causing the performance drop (also, these frequent updates are not critical, so if the slave DB falls out of sync with the master for some time it would be OK). What would be a good DB replication setup for this kind of app? What replication parameters could we tweak? Thanks.
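    Given that the counter updates are non-critical, one option (a sketch only; the database and table names are hypothetical) is to stop replicating them to the slave at all, so the heavy selects there stop competing with a constant stream of writes:

      # In the slave's my.cnf, skip the chatty counter table(s), then restart mysqld:
      #   replicate-ignore-table=myapp.page_counters
      # While the heavy selects run, check how far the slave lags and whether the SQL thread keeps up:
      mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_SQL_Running|Seconds_Behind_Master'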

    Read the article

  • Iozone: sensible settings for a server with lots of RAM

    - by Frank Brenner
    I have just acquired a server with:
    - 2x quad-core Xeons
    - 48 GB ECC RAM
    - 5x 160GB SSDs on an LSI 9260-8i
    Before deploying the target platform, I'd like to collect as much benchmark data as possible, testing I/O with hardware RAID in various configurations and ZFS RAID-Z, as well as I/O performance on vSphere and with KVM virtualization. In order to see real disk I/O performance without cache effects, I tried running Iozone with a maximum file size of more than twice the physical RAM, as recommended in the documentation:

      iozone -a -g100G

    However, as one might expect, this takes far too long to be practicable (I stopped the run after seven hours). I'd like to reduce the range of record and file sizes to values that might reflect realistic performance for an application server, hopefully getting the run times to under an hour or so. Any ideas? Thanks.
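    One way to cut the run time is to skip full auto mode: pick only the tests you care about, fix the file size at something that still dwarfs the controller cache, and use O_DIRECT so the 48 GB of page cache stays out of the measurement. A sketch (flags as I read the Iozone docs; worth double-checking against your version):

      # Sequential write/rewrite (0), read/reread (1) and random read/write (2) only,
      # one 16 GB file, 64 KB records, fsync included in timings, page cache bypassed
      iozone -i 0 -i 1 -i 2 -e -I -s 16g -r 64k -f /data/iozone.tmp
      # Repeat with -r 4k and -r 1m to cover small-block and streaming behaviour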

    Read the article

  • How to reduce 3rd party repository priority in apt

    - by carlosz
    I'm using Debian Testing together with the Deb Multimedia (previously Debian Multimedia) repository for testing. I want to reduce the priority of the deb-multimedia packages so it only installs certain packages. I've tried with:

      Package: *
      Pin: release o="Unofficial Multimedia Packages"
      Pin-Priority: 10

    and

      Package: *
      Pin: origin "mirror.home-dn.net"
      Pin-Priority: 10

    but neither works; the packages still keep the default priority (500). The Release file from the repository looks like this:

      Archive: testing
      Version: None
      Component: main
      Origin: Unofficial Multimedia Packages
      Label: Unofficial Multimedia Packages
      Architecture: amd64

    What am I doing wrong? Edit: It worked when I used the Version information instead:

      Package: *
      Pin: release v=None
      Pin-Priority: 10

    But I still don't know why the other filters didn't work.
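    When a pin silently fails to apply, apt-cache policy is the quickest way to see which priorities apt is really using and exactly which origin/label/release fields each source advertises, so the pin can be matched against what apt sees rather than what the Release file seems to say:

      # Per-source priorities plus the o=, l=, a=, v= fields apt sees for each repository
      apt-cache policy
      # What apt would do for one specific package, and which pin wins
      apt-cache policy some-deb-multimedia-package   # hypothetical package name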

    Read the article

  • Visual Studio Development on VirtualBox, Boot Camp, or VMware Fusion

    - by Eli
    I currently have a Mac, 2 GHz with 2 GB of RAM, running OS X Leopard and VirtualBox with a Windows 7 Pro 32-bit virtual machine. Performance on the virtual machine is fine for minor tasks but very clunky when trying to multi-task or develop in Visual Studio 2008. What would be my best option for being able to use Visual Studio, keeping cost and time in mind? 1) Upgrade the RAM to 4 GB ($100). Will this really improve performance enough to use Visual Studio in a Windows 7 VM, or am I just wasting time/money? 2) Reinstall/restore the Windows 7 disk image as a Boot Camp partition. I assume this should improve performance, yes? 3) Purchase VMware Fusion instead of VirtualBox. Does Fusion require fewer resources to run? I am open to any suggestions. Thanks in advance.

    Read the article

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked on here before but was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given time, ideally presented in a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. I have tried Performance Monitor, and my problem is that I'm not sure which performance counters to use, since there are so many. I've been looking at the Process, Processor, and Memory categories, but I'm not sure which counters within those will be of interest to me. My OS is Windows 7.
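    For a single process, the counters under the Process object are usually the relevant ones. A hedged example using the built-in typeperf tool (the instance name "notepad" is just a placeholder), logging to a CSV that can then be graphed in Performance Monitor or Excel:

      typeperf "\Process(notepad)\% Processor Time" "\Process(notepad)\Working Set - Private" -si 1 -sc 600 -f CSV -o proc_usage.csv

    Note that the Process object's % Processor Time is summed across cores, so it can exceed 100 on a multi-core machine, and Windows 7 has no stock per-process GPU counter, so GPU usage needs a vendor tool.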

    Read the article

  • KVM vs Hyper-V: which hypervisor is best for Windows guests?

    - by user198851
    I am currently testing OpenStack for Windows guests (XP and 7). I have deployed OpenStack "all in one" on a system with the following specs: Core i5 processor (4 physical cores, 8 threads with Hyper-Threading), 8 GB RAM, 500 GB HD. I have created 4 Windows XP guests with 512 MB RAM and 1 vCPU each. On each Windows guest I have installed Visual Studio 2008 only. In nova.conf the CPU over-commit ratio is 2 for better performance (as mentioned in the OpenStack operations guide). I am using KVM as the hypervisor. I have observed poor performance when simultaneously using Visual Studio in all four Windows instances. How can I improve performance? Should I use KVM or Hyper-V, or is there any other suggestion?

    Read the article

  • OS Isolation: Virtualization or Dual-Boot Duplication, a General How To?

    - by Mr_CryptoPrime
    I want to isolate my Windows 7 operating system, and I have looked into virtualization. That should work for Linux; however, I still want a way to run Windows 7 securely but without significant performance loss, which rules out virtualization for it. I know that you can dual boot, because I currently do so with my XP/Linux system. Is there a way I can duplicate my Windows 7 system so I can select one at boot-up? That way I can ensure that each OS is isolated and not worry about performance loss. However, I am having a lot of trouble finding a solid method for OS duplication. Is this even possible, or must I buy two copies of Windows 7 and somehow install them separately? Any information regarding this would be helpful, thanks! Essentially I want:
    - Two instances of Windows 7 (not necessarily running simultaneously).
    - Each isolated from the other, so that a security breach in one doesn't affect the other.
    - No performance loss in either from doing so.

    Read the article

  • jQuery DataTables server side processing and ASP.Net

    - by Chad
    I'm trying to use the server-side functionality of the jQuery DataTables plugin with ASP.NET. The ajax request is returning valid JSON, but nothing is showing up in the table. I originally had problems with the data I was sending in the ajax request: I was getting an "Invalid JSON primitive" error. I discovered that the data needs to be sent as a string instead of a JSON-serialized object, as described in this post: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/. I wasn't quite sure how to fix that, so I tried adding this in the ajax request: "data": "{'sEcho': '" + aoData.sEcho + "'}" If the above eventually works I'll add the other parameters later. Right now I'm just trying to get something to show up in my table. The returned JSON looks OK and validates, but the sEcho in the post is undefined, and I think that's why no data is being loaded into the table. So, what am I doing wrong? Am I even on the right track, or am I being stupid? Has anyone run into this before, or does anyone have any suggestions? Here's my jQuery:

      $(document).ready(function() {
          $("#grid").dataTable({
              "bJQueryUI": true,
              "sPaginationType": "full_numbers",
              "bServerSide": true,
              "sAjaxSource": "GridTest.asmx/ServerSideTest",
              "fnServerData": function(sSource, aoData, fnCallback) {
                  $.ajax({
                      "type": "POST",
                      "dataType": 'json',
                      "contentType": "application/json; charset=utf-8",
                      "url": sSource,
                      "data": "{'sEcho': '" + aoData.sEcho + "'}",
                      "success": fnCallback
                  });
              }
          });
      });

    HTML:

      <table id="grid">
          <thead>
              <tr>
                  <th>Last Name</th>
                  <th>First Name</th>
                  <th>UserID</th>
              </tr>
          </thead>
          <tbody>
              <tr>
                  <td colspan="5" class="dataTables_empty">Loading data from server</td>
              </tr>
          </tbody>
      </table>

    Webmethod:

      <WebMethod()> _
      Public Function ServerSideTest() As Data
          Dim list As New List(Of String)
          list.Add("testing")
          list.Add("chad")
          list.Add("testing")
          Dim container As New List(Of List(Of String))
          container.Add(list)
          list = New List(Of String)
          list.Add("testing2")
          list.Add("chad")
          list.Add("testing")
          container.Add(list)
          HttpContext.Current.Response.ContentType = "application/json"
          Return New Data(HttpContext.Current.Request("sEcho"), 2, 2, container)
      End Function

      Public Class Data
          Private _iTotalRecords As Integer
          Private _iTotalDisplayRecords As Integer
          Private _sEcho As Integer
          Private _sColumns As String
          Private _aaData As List(Of List(Of String))

          Public Property sEcho() As Integer
              Get
                  Return _sEcho
              End Get
              Set(ByVal value As Integer)
                  _sEcho = value
              End Set
          End Property

          Public Property iTotalRecords() As Integer
              Get
                  Return _iTotalRecords
              End Get
              Set(ByVal value As Integer)
                  _iTotalRecords = value
              End Set
          End Property

          Public Property iTotalDisplayRecords() As Integer
              Get
                  Return _iTotalDisplayRecords
              End Get
              Set(ByVal value As Integer)
                  _iTotalDisplayRecords = value
              End Set
          End Property

          Public Property aaData() As List(Of List(Of String))
              Get
                  Return _aaData
              End Get
              Set(ByVal value As List(Of List(Of String)))
                  _aaData = value
              End Set
          End Property

          Public Sub New(ByVal sEcho As Integer, ByVal iTotalRecords As Integer, ByVal iTotalDisplayRecords As Integer, ByVal aaData As List(Of List(Of String)))
              If sEcho <> 0 Then Me.sEcho = sEcho
              Me.iTotalRecords = iTotalRecords
              Me.iTotalDisplayRecords = iTotalDisplayRecords
              Me.aaData = aaData
          End Sub
      End Class

    Returned JSON:

      {"__type":"Data","sEcho":0,"iTotalRecords":2,"iTotalDisplayRecords":2,"aaData":[["testing","chad","testing"],["testing2","chad","testing"]]}

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes a different decision about the database when they start, but as they move forward they mature and make the decision which is based on their experience and best interest of the organization. Similarly, quite a many organizations make different decisions on database, like Sybase, MySQL, Oracle or Access and as time passes by they learn that now they want to move to a different platform. Microsoft makes it easy for SQL Server professional by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are different tools released earlier last week to migrate various product to SQL Server. Microsoft SQL Server Migration Assistant v6.0 for Sybase SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for MySQL SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Oracle SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration including migration assessment analysis, schema and SQL statement conversion, data migration as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Access SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies database migration process from Access to SQL Server. SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Migration

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++ you have undoubtedly used the Standard Template Library. Designed for in-memory management of data and collections of data, it is a core aspect of all C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL-standard-compatible API for Berkeley DB. You can use Berkeley DB via this API as if you were using C++ STL classes, and still make full use of Berkeley DB features. Being an STL library backed by a database, dbstl can provide some important and useful features that the C++ STL library can't. The following are a few typical use cases for the dbstl extensions to the C++ STL for data storage.
    - When data exceeds available physical memory. Berkeley DB dbstl can vastly improve performance when managing a dataset which is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk. Switching to BDB's dbstl improves performance while allowing you to keep using STL containers.
    - When you need concurrent access to C++ STL containers. Few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome.
    - When your objects are your database. You want object persistence in your application, to store objects in a database, and to use the objects across different runs of your application without having to translate them to/from SQL. dbstl is capable of storing complicated objects, even those not located in a continuous chunk of memory, directly to disk without any unnecessary overhead.
    These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.

    Read the article

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost Engineered System Simplifies Big Data for the Enterprise Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel® Xeon E5-2600 series of processors, and compared with previous generation, the 18 compute and storage servers with 648 TB raw storage now offer: 33 percent more processing power with 288 CPU cores; 33 percent more memory per node with 1.1 TB of main memory; and up to a 30 percent reduction in power and cooling Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes: Support for CDH4.1 including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster; Oracle NoSQL Database Community Edition 2.0, the latest version that brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support; The Oracle Enterprise Manager plug-in for Big Data Appliance that complements Cloudera Manager to enable users to more easily manage a Hadoop cluster; Updated distributions of Oracle Linux and Oracle Java Development Kit; An updated distribution of open source R, optimized to work with high performance multi-threaded math libraries Read More   Data sheet: Oracle Big Data Appliance X3-2 Oracle Big Data Appliance: Datacenter Network Integration Big Data and Natural Language: Extracting Insight From Text Thomson Reuters Discusses Oracle's Big Data Platform Connectors Integrate Hadoop with Oracle Big Data Ecosystem Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes: Oracle SQL Connector for Hadoop Distributed File System, for high performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables and now supported within the Oracle Data Integrator Application Adapter for Hadoop; Transparent access to the Hive Query language from R and introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment. Read More Data sheet: Oracle Big Data Connectors High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

    Read the article

  • .NET Weak Event Handlers – Part II

    - by João Angelo
    In the first part of this article I showed two possible ways to create weak event handlers: one using reflection and the other using a delegate. For this performance analysis we will further differentiate between creating a delegate by providing the type of the listener at compile time (Explicit Delegate) and creating the delegate with the type of the listener obtained only at runtime (Implicit Delegate). As expected, the performance of reflection versus delegates differs significantly. With the reflection-based approach, creating a weak event handler is just storing a MethodInfo reference, while with the delegate-based approach there is the need to create the delegate which will be invoked later. So, at creation time reflection clearly wins, but what about when the handler is invoked? No surprises there: performing a call through reflection every time a handler is invoked is costly. In conclusion, if you want good performance when creating handlers that only sporadically get triggered, use reflection; otherwise use the delegate-based approach. The explicit delegate approach always wins against the implicit delegate, but I find the syntax for the latter much more intuitive.

      // Implicit delegate - the listener type is inferred at runtime from the handler parameter
      public static EventHandler WrapInDelegateCall(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs>(EventHandler<TArgs> handler) where TArgs : EventArgs;

      // Explicit delegate - TListener is the type that defines the handler
      public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler);
      public static EventHandler<TArgs> WrapInDelegateCall<TArgs, TListener>(EventHandler<TArgs> handler) where TArgs : EventArgs;

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something needs to be approved, he has contacted me without hesitation and guided me to improve, change and learn. During all the time, he has not lost his focus to help larger community. I am honored that he has accepted to provide his views on complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to indentify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow, we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters and Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Page counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system spec’s and it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system. 
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check, the plans are pretty big, I see some large index seeks, that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the queries execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the Memory counters for the instance; Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter, the consistent bottoming out of Free List Pages along with Free List Stalls/sec consistently spiking over 10, and you have the perfect memory pressure scenario. All of sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300 which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64 bit computing and easy availability to larger amounts of memory in SQL Servers. 
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the data you can gather from SQL about performance relate to each other. One of the greatest tools out there is actually a free in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful.  Instead he asked that I write a guest blog for this. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems, it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article
