Search Results

Search found 60072 results on 2403 pages for 'application performance'.


  • Free Online Performance Tuning Event

    - by Andrew Kelly
      On June 9th 2010 I will be presenting several sessions on performance tuning for SQL Server, and they are the best kind because they are free :). So mark your calendars. Here is the event info and URL: June 29, 2010 - 10:00 am - 3:00 pm Eastern. SQL Server is the platform for business. In this day-long free virtual event, well-known SQL Server performance expert Andrew Kelly will provide you with the tools and knowledge you need to stay on top of three key areas related to peak performance...(read more)

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Every time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity has been growing continuously. Organizations are demanding faster query response, performance, and scalability, and developers and DBAs now need to write efficient code to achieve this. When we developed the course, we made sure it remains practical and demo heavy instead of just theory on the subject. Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexing, but there is some minimum expertise one should gain if performance is a concern. Here is a 200-second interview of Vinod Kumar I took right after completing the course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • Understanding Performance Profiling Targets

    In this sample chapter from his upcoming book, Paul Glavich explains performance metrics and walks us through the steps needed to establish meaningful performance targets. He covers many metrics such as "time to first byte" and explains why you should add some contingency into your estimated performance requirements.

    Read the article

  • System Wide Performance Sanity Check Procedures

    - by user702295
    Do you need to boost your overall implementation performance? Do you need direction to pinpoint possible performance opportunities? Are you looking for a general performance guide? Try MOS note 69565.1. This paper describes a holistic methodology that defines a systematic approach to resolving complex application performance problems. It has been successfully used on many critical accounts. The 'end-to-end' tuning approach encompasses the client, network and database, and has proven far more effective than isolated tuning exercises. It has been used to define and measure targets to ensure success. Even though it was last checked for relevance on 13-Oct-2008, the procedure is still very valuable. Regards!

    Read the article

  • Building Performance Metrics into ASP.NET MVC Applications

    When you're instrumenting an ASP.NET MVC or Web API application to monitor its performance while it is running, it makes sense to use custom performance counters. There are plenty of tools available that read performance counter data, report on it, and create alerts based on it. You can then plot application metrics against all sorts of server and workstation metrics. This way, there will always be the right data to guide your tuning efforts.

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized Wordpress+bbPress combination. At about 30k pageviews per day the server is starting to groan. It stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way that I can find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focusing on optimizing the current server?

    Read the article

  • VM Tuning to enhance performance

    - by Tiffany Walker
    vm.bdflush = 100 1200 128 512 15 5000 500 1884 2
    vm.dirty_ratio = 20
    vm.min_free_kbytes = 300000

    Does that mean the MOST dirty data that can sit in RAM is 20%, and that there will always be 300MB of RAM that Linux CANNOT use to cache files? What I am trying to do is ensure that there is always room left for services to spawn and use RAM. I have 8GB of RAM and am hosting websites with PHP, so I want to keep more free RAM on standby instead of finding myself with only 50MB of RAM free.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows: I currently have a 250 gig database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
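    As a rough illustration of the kind of layout discussed here (not from the original question): extra data files can be added to a user-defined filegroup and a busy table moved onto it. The database, filegroup, file, path and index names below are hypothetical placeholders, and sizes are assumptions to adapt.

        -- Add a filegroup with two data files (hypothetical names and paths).
        ALTER DATABASE SalesDB ADD FILEGROUP FG_HighWrite;

        ALTER DATABASE SalesDB
        ADD FILE
            (NAME = SalesDB_HW1, FILENAME = 'E:\Data\SalesDB_HW1.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = SalesDB_HW2, FILENAME = 'F:\Data\SalesDB_HW2.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
        TO FILEGROUP FG_HighWrite;

        -- Move a write-heavy table by rebuilding its clustered index on the new filegroup.
        CREATE CLUSTERED INDEX CIX_OrderDetail
        ON dbo.OrderDetail (OrderID, LineNumber)
        WITH (DROP_EXISTING = ON)
        ON FG_HighWrite;

    Whether more files actually help depends on the storage underneath; on a SAN the usual advice is to benchmark rather than assume one file per core.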

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browsing a local folder, and the CPU frequently sits at 100%. The guest is XP 32-bit. The host is Scientific Linux 6.2 with libvirt 0.10; the guest XP OS shows an ACPI Multiprocessor HAL, and VirtIO drivers for the NIC and SCSI are installed. CPU info on the host:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        stepping        : 7
        cpu MHz         : 3200.000
        cache size      : 8192 KB
        physical id     : 0
        siblings        : 8
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 6784.93
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Libvirt domain XML (excerpt):

        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <vcpu placement='static' cpuset='0'>1</vcpu>
        <os>
          <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>SandyBridge</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='vme'/>
          <feature policy='require' name='tm2'/>
          <feature policy='require' name='est'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='osxsave'/>
          <feature policy='require' name='smx'/>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='ds'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='dtes64'/>
          <feature policy='require' name='ht'/>
          <feature policy='require' name='pbe'/>
          <feature policy='require' name='tm'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='ds_cpl'/>
          <feature policy='require' name='xtpr'/>
          <feature policy='require' name='acpi'/>
          <feature policy='require' name='monitor'/>
          <feature policy='force' name='sse'/>
          <feature policy='force' name='sse2'/>
          <feature policy='force' name='sse4.1'/>
          <feature policy='force' name='sse4.2'/>
          <feature policy='force' name='ssse3'/>
          <feature policy='force' name='x2apic'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </disk>

    Read the article

  • Performance Monitor (perfmon) showing some unusual statistics

    - by Param
    Recently I thought of using perfmon.msc to monitor processor utilization on remote computers, but I am faced with a peculiar situation. Please see the print-screen below. I have selected three computers -- QDIT049, QDIT199V6 & QNIVN014. Please observe the Processor Time % which I have marked with a red circle. How can it be more than 100%? The total Processor Time can never go above 100%, am I right? And if I am right, then why is the Processor Time % showing 200%? Please let me know how this is possible, or where I have made a mistake. Thanks & Regards, Param

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior?

    Original code:

        for (int i = 0; i < ct; ++i)
        {
            // do some stuff...
            int iFreq = getFreq(i);
            double dFreq = iFreq;
            if (iFreq != 0)
            {
                // do some stuff with iFreq...
                // do some calculations with dFreq...
            }
        }

    While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or at most a negligible improvement. However, performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int to double conversion would only occur when iFreq != 0.

    Changed code:

        for (int i = 0; i < ct; ++i)
        {
            // do some stuff...
            int iFreq = getFreq(i);
            if (iFreq != 0)
            {
                // do some stuff with iFreq...
                double dFreq = iFreq;
                // do some stuff with dFreq...
            }
        }

    Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no need for ego in the technology field. There are always better solutions and better ideas out there and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress toward the master level of performance tuning, I find it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions when I face a similar situation next time. A few days ago I received a query where the user wanted to tune it further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database.

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID;

    The user had a query similar to the one above; it was used in a very critical report and they wanted to get the best out of it. When I looked at the query, here were my initial thoughts: use only the columns in the SELECT statement that the application actually needs, and look at the query pattern and data workload to find the optimal index for it. Before I could give further suggestions, I was told by the user that they need all the columns from all the tables and that creating an index was not allowed in their system. They could only re-write queries or use hints to further tune this query. Now I was in a constraint box – I believe * is not a great idea, but if they want all the columns, we can't do much besides using *. Additionally, if I cannot create a further index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here. First open DB Optimizer and open a Tuning Job from File >> New >> Tuning Job. Once you open the DB Optimizer Tuning Job, follow the various steps indicated in the following diagram. Essentially we will take our original script and paste it into Step 1: New SQL Text, and right after that we will enable Step 2 for generating various cases, Step 3 for detailed analysis and Step 4 for executing each generated case. Finally we will click on Analysis in Step 5, which will generate the detailed analysis report in the result pane. The detailed pane looks like this. It generates various cases of T-SQL based on the original query. It applies the various available hints to the query, generates the corresponding execution plans, and displays them in the result. You can clearly notice that the original query had a cost of 0.0841 and about 607 logical reads. The various options that follow it have different execution costs as well as logical reads. There are a few cases with higher logical reads and a few cases with very low logical reads.

    If we pay attention, the very next row after the original query has Merge_Join_Query in its description, with the lowest execution cost value of 0.044 and the lowest logical reads of 29. This row contains the query which is the most optimal re-write of the original query. Let us double click on it. Here is the query:

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID
    OPTION (MERGE JOIN)

    If you notice, the above query has an additional Merge Join hint. With the help of this query hint the query now performs much better than before. The entire process takes less than 60 seconds. Please note that the Merge Join hint was optimal for this query, but it is not necessarily true that the same hint will be helpful for all queries. Additionally, if the workload or data pattern changes, the merge join hint may no longer be the optimal join. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries and I discourage all of my users from doing the same. However, if you look at this example, this is a great case where a hint optimizes the performance of the query. It is humanly not possible to test all the various query hints and index options against a query to figure out the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide us and select the best option from the suggestions provided. Let me know what you think of this article, as well as your experience with DB Optimizer. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Inaccurate performance counter timer values in Windows Performance Monitor

    - by krisg
    I am implementing instrumentation within an application and have encountered an issue where the value displayed in Windows Performance Monitor from a PerformanceCounter is incongruent with the value that is recorded. I am using a Stopwatch to record the duration of a method execution; first I record the total milliseconds as a double, and second I pass the Stopwatch's TimeSpan.Ticks to the PerformanceCounter to be recorded in Performance Monitor. Creating the performance counters for perfmon:

        var datas = new CounterCreationDataCollection();
        datas.Add(new CounterCreationData { CounterName = name, CounterType = PerformanceCounterType.AverageTimer32 });
        datas.Add(new CounterCreationData { CounterName = namebase, CounterType = PerformanceCounterType.AverageBase });
        PerformanceCounterCategory.Create("Category", "performance data", PerformanceCounterCategoryType.SingleInstance, datas);

    Then to record, I retrieve a pre-initialized counter from a collection and increment it:

        _counters[counter].IncrementBy(timing);
        _counters[counterbase].Increment();

    ...where "timing" is the Stopwatch's TimeSpan.Ticks value. When this runs, the collection of doubles (the millisecond values from the Stopwatch's TimeSpan) shows one set of values, but what appears in PerfMon is a different set of values. For example, two values recorded in the list of milliseconds are 23322.675 and 14230.614, while what appears in the PerfMon graph is 15.546 and 9.930. Can someone explain this please?

    Read the article

  • mysql settings - using the available resources

    - by Christian Payne
    I've got a lot of processing work I need to run on a MySQL server. I've installed MySQL 5.1.45-community on a Win 2007 64bit box. It's running on a Xeon, 3GHz, 6 processors, with 8 gig of RAM. It doesn't seem to matter what queries I run (or how many I run at the same time); when I look in Task Manager, I'll see one processor maxed out at 100%. The other 5 are idle. Memory is static at 1.54 gig. When I installed MySQL, I used the wizard and selected the default "server" (not workstation) option. I feel like I should be getting more bang for my buck. Is there something else I should be monitoring, or something I should change to use the other system resources?

    Read the article

  • SQL SERVER – DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    - by Pinal Dave
    This is the third part of the series on Incremental Statistics. Here is the index of the complete series:
    What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3
    In the earlier two parts we saw what incremental statistics are, along with a simple example. In this blog post we will discuss a DMV query which lists all the statistics that are enabled for incremental updates.

        SELECT  OBJECT_NAME(sys.stats.OBJECT_ID) AS TableName,
                sys.columns.name AS ColumnName,
                sys.stats.name AS StatisticsName
        FROM    sys.stats
        INNER JOIN sys.stats_columns ON sys.stats.OBJECT_ID = sys.stats_columns.OBJECT_ID
                AND sys.stats.stats_id = sys.stats_columns.stats_id
        INNER JOIN sys.columns ON sys.stats.OBJECT_ID = sys.columns.OBJECT_ID
                AND sys.stats_columns.column_id = sys.columns.column_id
        WHERE   sys.stats.is_incremental = 1

    If you run the above script against the example shown in part 1 and part 2, you will get a result set like the following. When you execute the script, it lists all the statistics in your database which are enabled for incremental update. The script is very simple and effective. If you have a further improved script, I request you to post it in the comment section and I will post it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics
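    For context (this sketch is not from the original post): a statistic only shows up in the DMV above once it has been created or updated as incremental. The table and statistic names below are hypothetical, and INCREMENTAL = ON assumes SQL Server 2014 or later and a partitioned table.

        -- Hypothetical partitioned table; incremental statistics require partitioning (SQL Server 2014+).
        CREATE STATISTICS Stat_OrderDate
        ON dbo.OrdersPartitioned (OrderDate)
        WITH FULLSCAN, INCREMENTAL = ON;

        -- Refresh only the partition that changed rather than rescanning the whole table.
        UPDATE STATISTICS dbo.OrdersPartitioned (Stat_OrderDate)
        WITH RESAMPLE ON PARTITIONS (3);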

    Read the article

  • SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    - by Pinal Dave
    Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves query performance. The cardinality estimation logic is responsible for the quality of query plans and is largely responsible for the performance of any query. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads. I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words. You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014. Action Item: Here are the blog posts I have previously written. You can read them over here: Simple Demo of New Cardinality Estimation Features of SQL Server 2014 Pluralsight Course. You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Video
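    As an aside that is not from the original post: the new estimator in SQL Server 2014 is tied to the database compatibility level, and a single query can be compared against the legacy estimator with a trace flag. A minimal sketch, assuming the AdventureWorks2014 sample database (QUERYTRACEON generally requires elevated permissions):

        -- Compatibility level 120 enables the new cardinality estimator for the database.
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;

        -- Run one query under the legacy (pre-2014) estimator for comparison.
        SELECT soh.CustomerID, COUNT(*) AS OrderCount
        FROM Sales.SalesOrderHeader soh
        GROUP BY soh.CustomerID
        OPTION (QUERYTRACEON 9481);   -- 9481 = legacy cardinality estimation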

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed." In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details:
    - Both computers are running Ubuntu 10.10 (I am using Samba because I have a Mac as well).
    - The Samba share is on a local home network, mounted as: $ mount ... //server.local/share/ on /mnt/share type cifs (rw,mand)
    - Samba performance was tested by copying (cp) a single file of ~4GB to and from the share, using time for timing and calculating transfer speed by hand.
    - The ftp numbers come from the ftp client for get/put of the same file.
    - iperf gives a network speed of ~900 Mbits/s.
    - bonnie++ gives disk speeds of 200 MB/s on both sides for block reads as well as block writes.
    - I tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)

    Read the article

  • local web application vs desktop application speed?

    - by Josh
    Which one would be faster - a local web app GUI made with something like qooxdoo, or a desktop app? How much of a speed difference should be expected? I would prefer creating a web app, which could in the future be shared, rather than a desktop GUI that is tied to particular GUI toolkits.

    Read the article

  • Excel-based Performance Reviews transformed into Web Application for Performance Management

    - by Webgui
    HR TMS provides enterprise talent management solutions for healthcare, retail and corporate customers, focusing on performance management, compensation management and succession planning. As the competency of nurses and other healthcare workers is critical, the government, via the Joint Commission (JCAHO), tightly monitors their performance. On a regular basis, accredited healthcare organizations are required to review employee performance using a complex set of position-dependent job descriptions and competencies. Middlesex Hospital managed its performance reviews for 2500 employees manually with Excel spreadsheets. This was a labor-intensive process that proved to be error prone and difficult to manage. Reviews were not always where they belonged, and the job descriptions and competencies for healthcare workers were difficult to keep accurate and up to date. As a result, when the Joint Commission visited and requested to see specific review documentation, there was intense stress. Middlesex Hospital needed to automate its review process, pull in the position information from those spreadsheets and be able to deliver reviews online. Users needed to have online access to those reviews from a standard browser. Although the manual system had its issues, it did have the advantage of being very comprehensive and familiar to users. The decision was made to provide a web-based solution that leveraged the look and feel of those spreadsheets in order to ensure user acceptance of the system and minimize the training needed. Read the full article here >

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension for an application is required to deploy from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, and so testing with the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, such manual changes may not be preserved in all cases; please see below for a full explanation. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a “replace” deploy method and an “update” deploy method. “Replace” will delete the Essbase application and replace it; if this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved.

    Notes: If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32-bit Essbase and 200k for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrators Guide.
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • SQL SERVER – Convert IN to EXISTS – Performance Talk

    - by pinaldave
    In a recent training session one of the attendees asked if I could show a simple method to convert an IN clause to an EXISTS clause. Here is a simple example.

        USE AdventureWorks
        GO
        -- use of =
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of exists
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO

    It is NOT necessarily true that replacing IN with EXISTS always gives better performance. However, in the case listed above it certainly does. Click on the image below to see the execution plan. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
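    One way to verify such a rewrite on your own system (this check is not part of the original post) is to compare logical reads and CPU time for the two forms with SET STATISTICS IO and SET STATISTICS TIME; the query below simply reuses the EXISTS version from the example above.

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT 1
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID);

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;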

    Read the article

  • ASP.Net performance counters

    - by nikolaosk
    I was involved in designing and implementing an ASP.NET application some time ago. After we deployed the application, we wanted to monitor various aspects of it. We can use Performance Monitor for that. On my Windows Server 2008 machine, I go to Start > Run, type "perfmon", and the Performance Monitor window pops up. There are thousands of counters in there and it is impossible for anyone to know them all. Most people I know use Performance Monitor to add counters to monitor SQL Server...(read more)

    Read the article

  • How do you do ASP.Net performance testing?

    - by John
    Our team is in need of a performance testing process. We use ASP.NET (both Web Forms and MVC), and performance testing is not currently built into our projects. We occasionally do some ad-hoc analysis, such as checking the load on the server or running SQL Server Profiler, but we don't have a true beginning-to-end performance testing methodology built into the project. Where is a good place to start? I'm interested in both: Process - general knowledge, including best practices. An essential list of tools. I'm aware of a few tools, such as what's built into the pricier versions of VS 2010 and the JetBrains products, though I haven't used them.

    Read the article

  • Applications affected by memory performance

    - by robotron
    I'm writing a paper on the topic of applications that are affected more by memory performance than by processor performance. I've got a lot written regarding the gap between the two; however, I can't seem to find anything about the applications that might be affected more by memory performance than by processor speed. I suppose these are applications that make a large number of memory references, but I have no idea what kind of applications would make a large enough number of references for it to stand out. Perhaps databases? Can you please give me some pointers on how to proceed, or some links to papers? I'm really stuck.

    Read the article

  • How do I objectively measure an application's load on a server

    - by Joe
    All, I'm not even sure where to begin looking for resources to answer my question, and I realize that speculation about this kind of thing is highly subjective. I need help determining what class of server I should purchase to host an MS Silverlight application with an MSSQL Server back end on a Windows Server 2008 platform. It's an interactive program, so I can't simply generate a list of URLs to test against and run it with 1000 simultaneous users. What tools are out there to help me determine what kind of load the application will put on a server at varying levels of concurrent users? Would you all suggest separating the SQL server from the web server, to better differentiate the generated load on the different parts of the stack?

    Read the article
