Search Results

Search found 18148 results on 726 pages for 'performance monitor'.


  • How to monitor an existing queue from WebSphere MQ?

    - by Jorge
    I have a .NET application that needs to monitor a queue in WebSphere MQ. I need to react to each message without impacting the current process. The client application can't explicitly send me the same message. Can I read a message without removing it from the queue? Can I be notified of each message? Can I configure MQ to duplicate the current queue? Is there another solution?
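
    A minimal sketch of the browse approach using the IBM MQ .NET classes (amqmdnet): opening the queue with browse options lets you read each message without removing it, so the consuming process is unaffected. The queue manager and queue names are placeholders, and the polling loop is an illustrative assumption, not the only way to be notified.

        using System;
        using IBM.WMQ;   // amqmdnet.dll

        class QueueBrowser
        {
            static void Main()
            {
                // Placeholder names; substitute your own queue manager and queue.
                var qmgr = new MQQueueManager("MY.QMGR");
                var queue = qmgr.AccessQueue("MY.QUEUE",
                    MQC.MQOO_BROWSE | MQC.MQOO_FAIL_IF_QUIESCING);

                var gmo = new MQGetMessageOptions();
                // Walk the queue non-destructively; wait up to 5s for new messages.
                gmo.Options = MQC.MQGMO_BROWSE_NEXT | MQC.MQGMO_WAIT;
                gmo.WaitInterval = 5000;

                while (true)
                {
                    var msg = new MQMessage();
                    try
                    {
                        queue.Get(msg, gmo);   // browse: the message stays on the queue
                        Console.WriteLine(msg.ReadString(msg.MessageLength));
                    }
                    catch (MQException ex) when (ex.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE)
                    {
                        // No new message within the wait interval; keep polling.
                    }
                }
            }
        }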


  • Tool to monitor HTTP, TCP, etc. Web Service traffic

    - by huseyint
    What's the best tool that you use to monitor Web Service, SOAP, WCF, etc. traffic that's coming and going on the wire? I have seen some tools made with Java, but they seem to be a little crappy. What I want is a tool that sits in the middle as a proxy and does port redirection (with configurable listen/redirect ports). Are there any tools that work on Windows and do this?
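
    If no off-the-shelf tool fits, the port-redirecting proxy described above is small enough to sketch yourself. Here is a minimal version in C#; the host, ports, and console logging are illustrative assumptions:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading.Tasks;

        class LoggingProxy
        {
            // Listen locally and forward to the real service (placeholder endpoint).
            const int ListenPort = 8888;
            const string TargetHost = "localhost";
            const int TargetPort = 8080;

            static async Task Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, ListenPort);
                listener.Start();
                while (true)
                {
                    var client = await listener.AcceptTcpClientAsync();
                    _ = HandleAsync(client);   // one task per connection
                }
            }

            static async Task HandleAsync(TcpClient client)
            {
                using (client)
                using (var server = new TcpClient())
                {
                    await server.ConnectAsync(TargetHost, TargetPort);
                    var c = client.GetStream();
                    var s = server.GetStream();
                    // Pump both directions, dumping the raw bytes to the console.
                    await Task.WhenAll(Pump(c, s, ">>"), Pump(s, c, "<<"));
                }
            }

            static async Task Pump(NetworkStream from, NetworkStream to, string tag)
            {
                var buf = new byte[4096];
                int n;
                while ((n = await from.ReadAsync(buf, 0, buf.Length)) > 0)
                {
                    Console.Write($"{tag} {Encoding.ASCII.GetString(buf, 0, n)}");
                    await to.WriteAsync(buf, 0, n);
                }
            }
        }

    Point your client at the listen port and the proxy prints everything that crosses the wire in either direction.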


  • Hotkey to preview in browser in second monitor while still keeping focus in editor?

    - by Tony_Henrich
    I use two monitors. The second one has a browser open, and I use Visual Studio on the first one to make edits to the web page. I would love to press a key and instantly see the change in the browser: basically refreshing the browser behind the scenes while keeping window focus in the editor, instead of repeatedly switching to the browser, refreshing, and switching back to VS. Any better ideas than using a keyboard recorder like AutoPilot?
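
    One hedged possibility: a tiny helper, bound to a hotkey, that posts an F5 keystroke straight to the browser window so focus never leaves the editor. The window title is a placeholder, and the assumption that the browser accepts a posted WM_KEYDOWN is untested here (some browsers route input through child windows):

        using System;
        using System.Runtime.InteropServices;

        class RefreshBrowser
        {
            [DllImport("user32.dll")]
            static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

            [DllImport("user32.dll")]
            static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

            const uint WM_KEYDOWN = 0x0100;
            const uint WM_KEYUP = 0x0101;
            const int VK_F5 = 0x74;

            static void Main()
            {
                // Placeholder caption; match it to your browser window's title.
                IntPtr hwnd = FindWindow(null, "My Page - Mozilla Firefox");
                if (hwnd == IntPtr.Zero) return;

                // Post (don't send) so we never block and never steal focus.
                PostMessage(hwnd, WM_KEYDOWN, (IntPtr)VK_F5, IntPtr.Zero);
                PostMessage(hwnd, WM_KEYUP, (IntPtr)VK_F5, IntPtr.Zero);
            }
        }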


  • How to monitor a directory for files in C++?

    - by sand
    I need to monitor a directory which contains many files; a process reads and deletes the .txt files from the directory, and once all the .txt files have been consumed, the consuming process needs to be killed. How do I check from C++ whether all the .txt files have been consumed? I am developing my application in Visual Studio on the Windows platform.
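
    For comparison, here is a minimal sketch of the idea in C# (the sketches in this digest use C#); on the native C++ side the corresponding Win32 calls are FindFirstChangeNotification or ReadDirectoryChangesW plus FindFirstFile/FindNextFile to count the remaining .txt files. The path is a placeholder, and killing the consumer is left as a stub:

        using System;
        using System.IO;

        class TxtDrainWatcher
        {
            static void Main()
            {
                const string dir = @"C:\work\inbox";   // placeholder path

                using (var watcher = new FileSystemWatcher(dir, "*.txt"))
                {
                    watcher.Deleted += (s, e) =>
                    {
                        // After every delete, check whether any .txt files remain.
                        if (Directory.GetFiles(dir, "*.txt").Length == 0)
                            Console.WriteLine("All .txt files consumed; stop the consumer here.");
                    };
                    watcher.EnableRaisingEvents = true;

                    Console.ReadLine();   // keep the watcher alive
                }
            }
        }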


  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
    When you work in an integration environment, it is common to find yourself in situations where you integrate with unusual applications or have unusual dependencies. That is the nature of integration. When you work with BizTalk, one common problem is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and these external applications may have poor monitoring solutions.

    Fortunately, if you are working with a customer who uses BizTalk 360, it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with; it pings them at appropriate times to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, and you find yourself considering a significant custom solution to monitor the external dependency, the Web Endpoint Manager could be your friend. The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own ASPX web page and then make BizTalk 360 monitor that page. Behind the web page you can write any code you wish. An example of this architecture is shown in the diagram below.

    In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the code snippet below you can see how the default Page_Load method performs some kind of check and then, depending on the result of the check, returns a certain HTTP code.

        protected void Page_Load(object sender, EventArgs e)
        {
            var result = CheckSomething();

            if (result == "Success")
                Response.StatusCode = 202;
            else if (result == "DatabaseError")
                Response.StatusCode = 510;
            else if (result == "SystemError")
                Response.StatusCode = 512;
            else
                Response.StatusCode = 513;
        }

    In BizTalk 360 you would go to the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm, as this is already covered in the BizTalk 360 documentation. One point to note is that in this example I set up a threshold alarm, which means the URL is checked about every minute, and if an error persists for a period of time the alarm raises the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The picture below shows accessing the endpoint manager.

    In the Web Endpoint Manager you would then configure the endpoint to monitor and the HTTP response code which indicates all is working fine, as the next picture shows. I now have my endpoint monitoring set up, and BizTalk 360 should be checking my custom endpoint to see that it is available. If I want to manually sanity-check that the endpoints I have registered are working fine, clicking the Refresh button will show whether they are all good.

    If the custom ASP.NET page which checks my dependency hits a problem, you will see in the endpoint manager that the status code does not match the expected return code: the endpoint displays in red and you can see the problem, as shown in the picture below. If I use specific HTTP response codes for the different errors the custom ASP.NET page might encounter, I can easily interpret them to know what the problem is.

    Using the alarms and notifications in BizTalk 360, when your endpoint goes into an error state you can easily configure email or SMS notifications from BizTalk 360 to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is so you can investigate further. Below you can see the email which tells me my endpoint is not working.

    When everything returns to normal you will see the status is fixed, and a situation like the one below, where the web endpoints are green again and the return code matches what is expected.

    Conclusion

    As you can see, it is really easy to plug your own custom ASP.NET page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .NET code to check that the dependency is available and working. It would be interesting to hear any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications
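
    To make the pattern above concrete, here is a hypothetical CheckSomething implementation. The connection string, and the idea of probing a database at all, are illustrative assumptions rather than part of the original post:

        // Hypothetical probe behind the monitoring page: the connection string,
        // and the choice of checking a database, are assumptions for illustration.
        private string CheckSomething()
        {
            try
            {
                using (var conn = new System.Data.SqlClient.SqlConnection(
                    "Server=.;Database=MyDependency;Integrated Security=true"))
                {
                    conn.Open();   // throws if the database is unreachable
                }
                return "Success";
            }
            catch (System.Data.SqlClient.SqlException)
            {
                return "DatabaseError";
            }
            catch (Exception)
            {
                return "SystemError";
            }
        }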


  • Thread synchronization and aborting

    - by kubal5003
    Hello, I've got a little problem ending the work of one of my threads. First things first, here's the app "layout":

    Thread 1, the worker thread (C++/CLI), runs and terminates as expected:

        for(...)
        {
            try
            {
                if(TabuStop) return;
                System::Threading::Monitor::Enter("Lock1");
                //some work, unmanaged code
            }
            finally
            {
                if(stop)
                {
                    System::Threading::Monitor::Pulse("Lock1");
                }
                else
                {
                    System::Threading::Monitor::Pulse("Lock1");
                    System::Threading::Monitor::Wait("Lock1");
                }
            }
        }

    Thread 2, the display-results thread (C#):

        while (WorkerThread.IsAlive)
        {
            lock ("Lock1")
            {
                if (TabuEngine.TabuStop)
                {
                    Monitor.Pulse("Lock1");
                }
                else
                {
                    Dispatcher.BeginInvoke(RefreshAction);
                    Monitor.Pulse("Lock1");
                    Monitor.Wait("Lock1", 5000);
                }
            }
            // Thread.Sleep(5000);
        }

    I'm trying to shut the whole thing down from the app's main thread like this:

        TabuEngine.TabuStop = true; //terminates the worker thread nicely

        if (DisplayThread.IsAlive)
        {
            DisplayThread.Abort();
        }

    I also tried DisplayThread.Interrupt, but it always blocks on Monitor.Wait("Lock1", 5000) and I can't get rid of it. What is wrong here? How am I supposed to perform the synchronization and still let it do the work it is supposed to do?

    //edit I'm not even sure now that the trick of using the "Lock1" string literal really works and that the locks are placed on the same object...
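
    For reference, a hedged sketch of the conventional shape of this pattern in C#: a single shared lock object instead of the "Lock1" string literal (locking on strings is fragile because it relies on interning), and a volatile stop flag checked inside the wait loop so the display thread can exit cooperatively without Abort. The names are illustrative:

        using System;
        using System.Threading;

        class DisplayLoop
        {
            // One shared object both threads lock on; never lock on strings.
            static readonly object Gate = new object();
            static volatile bool Stop;

            public void Run()
            {
                lock (Gate)
                {
                    while (!Stop)
                    {
                        RefreshDisplay();          // placeholder for the Dispatcher call
                        Monitor.Pulse(Gate);       // wake the worker
                        Monitor.Wait(Gate, 5000);  // sleep until pulsed (or 5s)
                    }
                }
            }

            public static void Shutdown()
            {
                lock (Gate)
                {
                    Stop = true;
                    Monitor.PulseAll(Gate);   // wake any waiter so it observes Stop
                }
            }

            static void RefreshDisplay() { /* update the UI here */ }
        }

    With this shape, Shutdown() wakes the waiting thread, which re-checks the flag and leaves the loop on its own, so neither Abort nor Interrupt is needed.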


  • Cannot set video resolution above 640x480 after installing Windows XP SP2

    - by waanders
    I've installed Windows XP SP2 on a computer (it had no service pack at all before). Now the display settings have been reset to 640x480 and 4-bit colors, and I can't change them; it's the only option on the Settings tab of the Windows Display dialog. The screen looks awful now. How can I solve this problem?

    UPDATE: Seems to be a problem with the video driver (thanks @Karan and @Hennes). I ran Speccy (PC-Wizard freezes the computer) and this is part of the log file:

        Summary
            Operating System: Microsoft Windows XP Professional 32-bit SP3
            CPU: Intel Celeron Willamette 0.18um Technology
            RAM: 512 MB DDR @ 133MHz (2.5-3-3-6)
            Motherboard: COMPAQ 0838h (FC-478)
            Graphics: Standard Monitor (640x480@1Hz)
            Hard Drives: 19.0GB Maxtor 2B020H1 (PATA)
            Optical Drives: No optical disk drives detected
            Audio: No audio card detected
        ...
        Graphics
            Monitor Name: Standard Monitor on
            Current Resolution: 640x480 pixels
            Work Resolution: 640x450 pixels
            State: enabled, primary
            Monitor Width: 640
            Monitor Height: 480
            Monitor BPP: 4 bits per pixel
            Monitor Frequency: 1 Hz
            Device: \\.\DISPLAY1
        OpenGL
            Version: 1.1.0
            Vendor: Microsoft Corporation
            Renderer: GDI Generic
            GLU Version: 1.2.2.0 Microsoft Corporation
            Values: GL_MAX_LIGHTS 8, GL_MAX_TEXTURE_SIZE 1024, GL_MAX_TEXTURE_STACK_DEPTH 10
            GL Extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture GL_EXT_bgra


  • So who decided VGA cables should be symmetrical? [closed]

    - by mgb
    < rant So who decided that VGA cables should have the same gender connector at both ends? I just spent an hour trying to fix a server that would apparently boot but wouldn't display any POST messages; I needed to change some BIOS settings. The monitor gave me a helpful error message: "unable to display this resolution". Wondering how it couldn't show a simple VGA resolution, I reset the BIOS, and when that didn't work I removed the BIOS battery. The system is in a rack with a keyboard and monitor mounted 6 ft away, with the cable running through the usual waist-thick rat's nest of wiring. The monitor worked with another system, so it and the VGA cable were good. I brought in another monitor and tried that; still nothing. Eventually I ripped the rack apart to discover that the other end of the VGA connector I was trying to plug into the server was actually plugged into another server box! And the second monitor I was testing was plugged into the first monitor. An error message like "I'm actually plugged into another monitor, stupid" would be useful. So would having a connector with a male end and a female end. < end rant - thank you


  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; and 3) enhanced binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mappings

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus, being able to support access to different tables at run time, and having different mappings for different connections, is a very desirable feature. In this GA release, users can do both. We will discuss the key concepts and key steps in using this feature.

    1) "Mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes this "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, by default ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user does:

        get @@setup_3.key_a

    (this also outputs the value corresponding to key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested. So it can continue to issue regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter

    For the delimiter "." that separates the "mapping name" from the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping name of "default", it is chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command

    In addition, we extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now directly access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks will be held for an extended period of time, which can reduce concurrency.

    To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (by default, 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than a certain limit and is not being used, it is committed. This limit is configurable via "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. This also reduces the number of locks held due to long-running transactions, and thus further increases concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to set the batch size to 1 whenever the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying such constructs that bogged performance down.

    With this release, we made the necessary changes to keep the MySQL constructs alive as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better than the previous release, and we continue optimizing in this area to improve performance as much as possible.

    Performance study

    Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables.

    Table 1: Performance comparison on Set operations

        Connections   Memcached plugin Set (QPS), memcached-threads=8***   5.6.7-RC* Set (QPS)**   X faster
        8             30,000                                               5,600                   5.36
        32            59,000                                               13,000                  4.54
        128           68,000                                               8,000                   8.50
        512           63,000                                               6,800                   9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted; if it does, the "value" field of the matching row (by key) is updated. So when used in the above comparison, it is a precompiled stored procedure, and the query just executes such procedures.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

        Connections   Memcached plugin Get (QPS), memcached-threads=8   5.6.7-RC Get (QPS)   X faster
        8             42,000                                            27,000               1.56
        32            101,000                                           55,000               1.83
        128           117,000                                           52,000               2.25
        512           109,000                                           52,000               2.10

    With the "get" query (i.e. the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread that commits long-running idle transactions, allowing users to configure large batch writes/reads without worrying about a large number of row locks being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012


  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to rolling my own solution using perfmon, performance counters, and information collected from the dynamic management views and functions. What I'm finding is that while each of these approaches has its own strengths, they all have associated weaknesses too. I feel that to actually get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of dashboard, and the act of monitoring must have minimal impact on the production databases (and, perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task. Any recommendations?


  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have a "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table I do a join on the "users" table (items.user_id = user.id), so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if the user changes their username, items will still reflect their old username, but this is okay with me if I can expect a decent performance increase. I'm asking stackoverflow because benchmarks aren't telling me too much. Both queries finish very quickly. Regardless, I'm wondering if removing the join would lighten load on the database to any significant degree.


  • How to improve performance of opening Microsoft Word when automated from C#?

    - by Abdullah BaMusa
    I have a Microsoft Word template whose fields I fill automatically from my application; when the user requests a print, I open this template. But creating the Word application every time the user requests a print (after filling the fields) is very expensive and leads to a delay while opening the template, so I chose to cache the reference to Word and just open the newly filled template. That solved the performance issue, since opening a file is less expensive than recreating Word each time, but it only works while the user merely closes the document and not the entire Word application. When Word itself is quit, my cached reference becomes invalid, and the next attempt to open the template throws an exception saying: "The RPC server is unavailable". I tried subscribing to the DocumentBeforeClose event, but it is triggered both when Word is quitting and when a document is closing. My question is: how do I know whether Word is closing a document or quitting the entire application, so I can take the proper action? Any hints on another way to improve the performance of opening the Word template are also welcome.
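
    A hedged sketch of one way to tell the two cases apart with the Word interop assemblies: handle DocumentBeforeClose for document closes, and the application-level Quit event (reached via the ApplicationEvents4_Event interface, since Application also exposes a Quit method with the same name) to invalidate the cached reference. Treat the exact interface casts as assumptions to verify against your interop version:

        using System;
        using Word = Microsoft.Office.Interop.Word;

        class WordCache
        {
            static Word.Application _word;

            static void Attach()
            {
                _word = new Word.Application();

                _word.DocumentBeforeClose += (Word.Document doc, ref bool cancel) =>
                {
                    // A single document is closing; the cached Application stays valid.
                    Console.WriteLine("Document closing: " + doc.Name);
                };

                // Quit collides with the Quit() method, so go through the events interface.
                ((Word.ApplicationEvents4_Event)_word).Quit += () =>
                {
                    // Word itself is going away; drop the cache so it is recreated next time.
                    _word = null;
                };
            }
        }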


  • What are the differences in performance between synchronous and asynchronous JavaScript script loading?

    - by jasdeepkhalsa
    My question is simply: what are the differences in performance between synchronous and asynchronous JavaScript script loading? From what I've gathered, synchronous code blocks the loading of the page and/or the rest of the code from executing. This happens at two levels: first, at the level of the script actually loading, and secondly, within the JavaScript code itself. For example, on the page:

        Synchronous:  <script src="demo_async.js" type="text/javascript"></script>
        Asynchronous: <script async src="demo_async.js" type="text/javascript"></script>

    And within a script:

    Synchronous:

        function a() { alert("a"); }
        function b() { alert("b"); }
        a();
        b();   // runs only after a() has returned

    Asynchronous (callback-style, and self-executing):

        (function (a, callback) {
            alert(a);
            callback("b");   // hand off to the callback
        })("a", function (b) { alert(b); });

    So what really is the difference in performance between these loading methods and JavaScript patterns?


  • C# performance of static string[] contains() (slooooow) vs. == operator

    - by Andrew White
    Hiya, just a quick query: I had a piece of code which compared a string against a long list of values, e.g.

        if (str == "string1" || str == "string2" || str == "string3" || str == "string4")
            DoSomething();

    In the interest of code clarity and maintainability, I changed it to:

        public static string[] strValues = { "String1", "String2", "String3", "String4" };
        ...
        if (strValues.Contains(str))
            DoSomething();

    only to find the code execution time went from 2.5 secs to 6.8 secs (executed ca. 200,000 times). I certainly understand a slight performance trade-off, but 300%? Is there any way I could define the static strings differently to enhance performance? Cheers.
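
    One common way to claw the time back, sketched here as a suggestion rather than a measured fix: a HashSet<string> gives hashed O(1) membership tests instead of the linear scan (plus LINQ enumerator overhead) of Contains on an array. The comparer choice is an illustrative assumption:

        using System;
        using System.Collections.Generic;

        static class Matcher
        {
            // Built once; lookups are hashed rather than scanned linearly.
            static readonly HashSet<string> Values =
                new HashSet<string>(StringComparer.Ordinal)
            {
                "String1", "String2", "String3", "String4"
            };

            public static bool IsMatch(string str)
            {
                return Values.Contains(str);
            }
        }

    For only four values, the chained == comparisons may still win, since hashing a string has its own cost; the set mainly pays off as the list grows.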


  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products, and most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner, and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations in getting BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence", which outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes.

    Recently, Gartner published its 2009 market share report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle refers not to Corporate Performance Management but to Enterprise Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management. According to Gartner, Oracle's share is increasing, with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI software space based on total BI software revenue.

    Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    Gartner has ranked Oracle #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share, with revenue growing by more than 6% in 2009.

    Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    The Analytic Applications & Performance Management sub-segment is more fragmented. It has, for instance, a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, among the large vendors Oracle is again the one gaining the most share, with total BI software revenue growth close to 15% in 2009.

    Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been siloed technology platforms and application suites for way too long. Managing and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and supply chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They no longer evaluate single tools for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM are becoming increasingly middleware-intensive. In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware".

    Many are active in the BI and EPM space. Big players can offer a lot, but there are always many areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open and integrated still accurately describes the Oracle product strategy.

    frank


  • How can the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking about non-clustered indexes only, of course) on a table may produce performance problems, due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree, I think, that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table due to the number of indexes it also has to update, this also means that locks will be held for longer, slowing down the perceived performance of all queries involved.

    I was quite curious to measure this, also because when teaching it's far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!

    To run the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting all 1000 rows with a single insert, instead of issuing 1000 inserts of one row each, in order to minimize the transaction-handling overhead that would otherwise skew the measurement), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in round-robin fashion. Tests are run against different row sizes, so that it's possible to check whether performance changes depending on row size.

    The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test was run 20 times in order to get an average value, and outliers caused by unpredictable performance fluctuations due to machine activity were removed.

    The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and that value grows to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That's *huge*!

    Now, what happens with a bigger row size? Say we have a table with a row size of 1520 bytes. Here's the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we're talking about a 2410% impact on performance!

    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how row size affects performance. That's why the golden rule of OLTP databases, "few indexes, but good ones", is so true! (And in fact last week I saw a database with tables with a 1700-byte row size and 23 (!!!) indexes on them!) This also means that too-heavy denormalization is really not a good idea (we're always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases.

    So be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes.

    PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0 GHz and 16 GB of RAM. The database is stored on an Intel X25-E SSD drive, Simple Recovery Model, running on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: TestIndexPerformance.sql


  • Why don't SCOM R2 web console performance views load?

    - by Nexus
    When I select any performance view via my SCOM R2 web console, I get the following error:

        Unexpected error
        There was an error displaying the page you requested.

    ...and some suggestions about restarting my browser, which doesn't resolve the issue. The request produces the following event in the logs:

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 7/05/2010 11:41:38 AM
        Event time (UTC): 7/05/2010 1:41:38 AM
        Event ID: f4c47d1302694e1c8039e6c0088c2520
        Event sequence: 18
        Event occurrence: 1
        Event detail code: 0
        [snip]
        Exception information:
        Exception type: HttpException
        Exception message: Error executing child request for /ResultViews/ViewTypePerformance.aspx.

    I'm using forms authentication, and all other web console functionality works perfectly. My server is Windows 2008 R2 Standard running SCOM R2 and hosts the DB, Web Console, and RMS roles. Has anyone else experienced this issue? Is it fixed in the cumulative update release for SCOM R2?

