Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 32/594 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • What is a good web interface for remote linux load monitoring?

    - by Jakobud
    I'm looking for some type of remote Linux monitoring software that you can view through a web interface, and I'm not just looking for basic load information. I'm also looking for process information, similar to what you get from top. I'd just like to be able to pop open this web page to see what's going on with the server at a moment's notice. For example, it could be as simple as a PHP page on the server that uses basic AJAX to display and refresh the output of the top command. I was thinking about writing something like this myself, but I don't want to reinvent the wheel.

    Read the article

  • Do you know a script for email round trip monitoring?

    - by crazyfr
    Do you know a simple script that can do email round-trip monitoring? I want to monitor the email service on A, so I install the script on B. B sends an email to A; A sends an auto-reply back to B; B then knows the email service is working. B repeats this test every 30 minutes, and if B doesn't receive a reply it sends an alert message. B could also measure the round-trip delay of the emails. I don't have access to the email service itself; I can only ask for a new end-user mailbox and configure an auto-reply. B is an Ubuntu machine. Thanks!

    Read the article

  • Understanding MySQL Query Caches and when to implement it?

    - by Jeff
    On our current MySQL server the query cache is enabled. Qcache_hits: 31913, Qcache_inserts: 50959, Qcache_lowmem_prunes: 9320, Qcache_not_cached: 209320, Qcache_queries_in_cache: 986, Com_update: 0, Com_delete: 0. I do not fully understand the query cache; I am reading about it currently and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth, and a given query is very rarely run more than once. The most likely case of a query running twice is someone viewing the same sales information twice, but basically everything in our system changes constantly: it is always being updated, deleted or inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need the query cache enabled? I am guessing that the inserts mean 51k entries have been added, but only 986 of those are still stored? Would it be an idea to reset the cache, watch it for a week, and check how many of the cached queries are actually hit, to see whether it is returning any benefit? Any help/guidance on this is appreciated, thanks.

    Read the article

  • Performance implications of Synchronous Sockets vs Asynchronous Sockets

    - by Akash Kava
    We are trying to build an SMTP server to receive mail notifications from various clients over the internet. Since each conversation is long-lived and everything needs to be logged, doing this the asynchronous way is a little challenging, and with the Socket class's asynchronous methods we are not sure how control flow and error handling work. We have written a lot of server/client apps before, but we always used synchronous sockets, the reason being that the sessions are long and each session has a lot of local state to manage, message parsing and so on. Does anyone have experience with the real performance difference between the two approaches? Async calls use the ThreadPool, which in our experience sometimes just dies for no reason, and we then fail to restart it. For a request-response protocol like HTTP, async sockets make sense, but protocols like SMTP/IMAP have longer sessions with interleaved messages plus a server-side state machine, so the async methods are genuinely complicated to program. If anyone can share real performance numbers for sockets, it would be helpful. (A sketch of what the async version can look like follows this entry.)

    Read the article
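
    Not from the original question, but as a reference point for the "flow of control" concern: with Task-based async I/O the per-connection handler can read almost exactly like the synchronous version. A minimal sketch; the port, greeting, response codes and class names are placeholders.

      using System;
      using System.IO;
      using System.Net;
      using System.Net.Sockets;
      using System.Text;
      using System.Threading.Tasks;

      class SmtpFrontEnd
      {
          static async Task Main()
          {
              var listener = new TcpListener(IPAddress.Any, 2525);   // placeholder port
              listener.Start();
              while (true)
              {
                  TcpClient client = await listener.AcceptTcpClientAsync();
                  _ = HandleSessionAsync(client);                    // one task per session
              }
          }

          static async Task HandleSessionAsync(TcpClient client)
          {
              try
              {
                  using (client)
                  using (NetworkStream stream = client.GetStream())
                  using (var reader = new StreamReader(stream, Encoding.ASCII))
                  using (var writer = new StreamWriter(stream, Encoding.ASCII) { AutoFlush = true })
                  {
                      await writer.WriteLineAsync("220 example.local ready");
                      string line;
                      while ((line = await reader.ReadLineAsync()) != null)
                      {
                          // Log and parse the command here, exactly as in the synchronous version.
                          if (line.StartsWith("QUIT", StringComparison.OrdinalIgnoreCase)) break;
                          await writer.WriteLineAsync("250 OK");
                      }
                  }
              }
              catch (IOException)
              {
                  // Connection dropped; log it and let the task end.
              }
          }
      }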

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have an ASP.NET/SQL Server 2005 website running on a production server (Win2003, quad core, 4GB RAM). The site normally runs smoothly, but after 2-3 weeks I notice slow performance (specifically on one particular page), and the SQL Server process is using around 2GB of RAM. So I restart the service, the site runs fast again, and the process drops back to 300-400MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes up so much space and degrades performance, and what can I do to avoid it? I'm trying to avoid restarting SQL Server every time this happens. Thank you!

    Read the article

  • WCF performance improvements

    - by Burt
    I am developing a WPF application that talks to a server via WCF services over the internet. After profiling the application I noticed that a lot of time is being taken up by creating the appropriate WCF client proxy and making the call to the server. The code on the server is optimised and takes almost no time to run, yet I am still seeing a 1.5 second delay from when a service is invoked to when it returns to the client. A few points to give a bit of background: I am using ASP.NET membership for security; I will eventually hook into the same server-side code through a website; and I would eventually like to have offline support in the application. I really need to nail the performance early, though; if the app takes a couple of seconds to come back, that is too long for what I am trying to do. Can anyone suggest performance tips that will help me, please? (One common mitigation is sketched after this entry.)

    Read the article
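
    A common mitigation for the proxy-creation cost described above (not from the original question) is to build the ChannelFactory once and reuse it, since constructing the factory is the expensive step; creating a channel from an existing factory is cheap. A rough sketch; the IOrderService contract and the endpoint name are placeholders.

      using System;
      using System.ServiceModel;

      // Hypothetical contract; the real one comes from the shared contract assembly.
      [ServiceContract]
      public interface IOrderService
      {
          [OperationContract]
          string GetStatus(int orderId);
      }

      public static class OrderServiceClient
      {
          // ChannelFactory construction is the expensive part; build it once and keep it.
          private static readonly ChannelFactory<IOrderService> Factory =
              new ChannelFactory<IOrderService>("OrderServiceEndpoint");

          public static string GetStatus(int orderId)
          {
              IOrderService channel = Factory.CreateChannel();   // cheap per call
              try
              {
                  return channel.GetStatus(orderId);
              }
              finally
              {
                  var clientChannel = (IClientChannel)channel;
                  if (clientChannel.State == CommunicationState.Faulted)
                      clientChannel.Abort();
                  else
                      clientChannel.Close();
              }
          }
      }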

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that over time (a few days) shows significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client. I'm willing to give up insert and update performance to keep the query performance up. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that lets us drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this working and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuums and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time implications (windows of 3-5 seconds) that would require other changes in our code. Additional information follows.
    Table and index:
      CREATE TABLE icl_contacts (
        id bigint NOT NULL,
        campaignfqname character varying(255) NOT NULL,
        currentstate character(16) NOT NULL,
        xmlscheduledtime character(23) NOT NULL,
        ... 25 or so other fields, most of them fixed or varying character fields ...
        CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
      ) WITH (OIDS=FALSE);
      ALTER TABLE icl_contacts OWNER TO postgres;
      CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);
    Analyze output:
      Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
        -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
           Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))
    And yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table, and some of those options may be available to us. My focus in this question is understanding how PostgreSQL manages the index and the query over time (understand why, not just fix it). If it were to be done over or significantly refactored, there would be a lot of changes.

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool reads blocks from the source disk and sends them over the network; the server side reads these blocks off the network and writes them to a file on the server disk. Right now I'm trying to decide which transport to use: the options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF if I want to go that route) makes it easy to plug into the HTTP Server API (http.sys), and I get things like authentication and SSL for free. The problem right now is performance. I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this either with HTTP PUT operations via HttpWebRequest/HttpListener, or with plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost. On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s with minimal CPU usage. On the same box, the HTTP version runs at between 130MB/s and 200MB/s depending on how I tweak it. In both cases CPU usage is low, and the vast majority of that CPU usage is kernel time, so I'm pretty sure my use of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing. I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering (the sketch after this entry shows how they fit together). Does anyone have ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?

    Read the article
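
    For reference only (not from the original post), a rough sketch of the client-side settings named above applied to a streamed HTTP PUT; the URL, block count and connection-limit values are placeholders.

      using System;
      using System.IO;
      using System.Net;

      class UploadClient
      {
          static void Main()
          {
              // Client-side tweaks named in the question; the values here are illustrative.
              ServicePointManager.DefaultConnectionLimit = 16;
              ServicePointManager.MaxServicePointIdleTime = 10000;
              ServicePointManager.Expect100Continue = false;

              var request = (HttpWebRequest)WebRequest.Create("http://localhost:8080/upload/stream1");
              request.Method = "PUT";
              request.SendChunked = true;                 // total length not known up front
              request.AllowWriteStreamBuffering = false;  // stream the body instead of buffering it in memory

              byte[] block = new byte[128 * 1024];
              using (Stream body = request.GetRequestStream())
              {
                  for (int i = 0; i < 1000; i++)          // placeholder: read blocks from the source disk
                      body.Write(block, 0, block.Length);
              }
              using (var response = (HttpWebResponse)request.GetResponse())
                  Console.WriteLine(response.StatusCode);
          }
      }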

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes, whereas a higher cardinality means less efficient storage but faster read performance, because the engine has to navigate through fewer branches to narrow down the rows for the query. (Note: by "low" vs "high" I don't mean e.g. 1 million vs 99 million for a 100-million-row table; I mean more like 90 million vs 95 million.) Is my understanding correct? Related question: how does cardinality affect write performance?

    Read the article

  • Wpf: Tips for better performance

    - by viky
    I am working on a WPF application with a TreeView in which each node represents a different datatype; these datatypes have properties defined, and data templates are used to show those properties. My application reads from XML and builds the tree accordingly. My problem is that loading it is too slow, and I want to know the tricks that will help me improve the performance of my (or any) WPF application. (One commonly cited tip is sketched after this entry.) Edit: please share some tips for better performance in WPF. I am using the WPF profiler but it has not been much help to me.

    Read the article
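
    One commonly cited tip for slow TreeView loading, not taken from the original question, is to enable UI virtualization so item containers are only generated as they scroll into view. A sketch, assuming the control is reachable from code-behind; the same attached properties are usually set in XAML instead.

      using System.Windows.Controls;

      static class TreeViewPerf
      {
          // Hypothetical helper: turn on container virtualization for a TreeView.
          public static void EnableVirtualization(TreeView treeView)
          {
              // Generate item containers lazily instead of all at load time.
              VirtualizingStackPanel.SetIsVirtualizing(treeView, true);
              VirtualizingStackPanel.SetVirtualizationMode(treeView, VirtualizationMode.Recycling);
          }
      }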

  • Performance improvement: alternative to the array_flip function

    - by Rachel
    Is there any way I can avoid using array_flip, to optimize performance? I am doing a select statement against the database, preparing the query, executing it, and storing the data as an associative array in $resultCollection. I then have an array $op, and for each element in $resultCollection I store its outputId in $op[], as shown in the code. My question is how I can achieve the same result without using array_flip, since I want to improve performance.
      $resultCollection = $statement->fetchAll(PDO::FETCH_ASSOC);
      $op = array();
      // Loop through the result collection and store each outputId into the op array.
      foreach ($resultCollection as $output) {
          $op[] = $output['outputId'];
      }
      // Here the op array has keys 0, 1, 2... and the ids (which I am interested in) as values.
      // Flip the op array to get the ids as keys.
      $op = array_flip($op);
      foreach ($ft as $Id => $Off) {
          $ft[$Id]['is_set'] = isset($op[$Id]);
      }

    Read the article

  • Performance logging tips

    - by Germstorm
    I am developing a large data-collecting ASP.NET/Windows-service application pair that uses Microsoft SQL Server 2005 through LINQ2Sql. Performance is always the issue. Currently the application is divided into several larger processing parts, each logging the duration of its work. This is not detailed enough and does not really help us with anything. It would be nice to have some database tables containing statistics that the application collects about its own behavior. What logging tips and data structures do you recommend for spotting the parts that cause performance problems? (A sketch of one approach follows this entry.)

    Read the article
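
    A minimal sketch of one possible approach, not from the original post: a disposable Stopwatch scope that writes step name, duration and timestamp to a log table when it is disposed. The PerfLog table and the connection string are placeholders.

      using System;
      using System.Data.SqlClient;
      using System.Diagnostics;

      // Hypothetical helper: wrap a named step in a using block and its duration
      // is written to a PerfLog table when the block ends.
      sealed class PerfScope : IDisposable
      {
          private readonly string _step;
          private readonly Stopwatch _watch = Stopwatch.StartNew();

          public PerfScope(string step) { _step = step; }

          public void Dispose()
          {
              _watch.Stop();
              using (var conn = new SqlConnection("placeholder connection string"))
              using (var cmd = new SqlCommand(
                  "INSERT INTO PerfLog (Step, DurationMs, LoggedAt) VALUES (@s, @d, @t)", conn))
              {
                  cmd.Parameters.AddWithValue("@s", _step);
                  cmd.Parameters.AddWithValue("@d", _watch.ElapsedMilliseconds);
                  cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }

      // Usage:
      //   using (new PerfScope("ImportBatch"))
      //   {
      //       ... the work being measured ...
      //   }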

  • Declarative JDOQL vs Single-String JDOQL : performance

    - by DrDro
    When querying with JDOQL, is there a performance difference between using the declarative version and the single-string version? An example from the JDOQL documentation:
      // Declarative JDOQL:
      Query q = pm.newQuery(org.jpox.Person.class, "lastName == \"Jones\" && age < age_limit");
      q.declareParameters("double age_limit");
      List results = (List)q.execute(20.0);
      // Single-string JDOQL:
      Query q = pm.newQuery("SELECT FROM org.jpox.Person WHERE lastName == \"Jones\"" + " && age < :age_limit PARAMETERS double age_limit");
      List results = (List)q.execute(20.0);
    Other than performance, are there any reasons why one is better to use than the other, or is it just about which one we feel more comfortable with?

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach. I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (a dual-core, hyper-threaded CPU running at 2.66GHz) and 8GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4GB of memory, it was slow; I could type complete sentences and watch the VM catch up. My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54GHz and 8GB of memory, and the experience on those laptops is night and day compared to the E6510: they are crisp and you are barely aware that you are running in a virtual environment. Since the E6510 should be faster than the M6400 in every category, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 has an nVidia FX 2700M GPU and the E6510 has an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html): 3100M = 111th (E6510), FX 2700M = 47th (Precision M6400), Radeon HD 5870 = 8th (Alienware). The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I could use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops on which we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assuming I do the following: (1) use ILMerge to merge "Foo.Bar.dll" and "Foo.Baz.dll" into "Foo1.dll"; (2) create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code) and compile this into "Foo2.dll". Will there be any difference in the performance of Foo1.dll and Foo2.dll when their functionality is used from an external project? If so, how significant is the difference, and is it a one-off cost (on load?) or an ongoing one? Are there any other pros or cons to either approach?

    Read the article

  • Performance concern when using LINQ "everywhere"?

    - by stiank81
    After upgrading to ReSharper 5 I get even more useful tips on code improvements. One I see everywhere now is a suggestion to replace foreach statements with LINQ queries. Take this example:
      private Ninja FindNinjaById(int ninjaId)
      {
          foreach (var ninja in Ninjas)
          {
              if (ninja.Id == ninjaId)
                  return ninja;
          }
          return null;
      }
    The suggested replacement uses LINQ:
      private Ninja FindNinjaById(int ninjaId)
      {
          return Ninjas.FirstOrDefault(ninja => ninja.Id == ninjaId);
      }
    This looks fine, and I'm sure replacing this one foreach is no problem performance-wise. But is it something I should do in general, or might I run into performance problems with all these LINQ queries everywhere? (A micro-benchmark sketch follows this entry.)

    Read the article
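
    Not part of the original question, but one way to settle the concern empirically is a quick Stopwatch micro-benchmark over both versions. A rough sketch with a placeholder Ninja type; the absolute numbers will vary, only the relative difference is interesting.

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;
      using System.Linq;

      class Ninja { public int Id; }   // placeholder type for the sketch

      class Program
      {
          static Ninja FindWithForeach(List<Ninja> ninjas, int id)
          {
              foreach (var n in ninjas)
                  if (n.Id == id) return n;
              return null;
          }

          static void Main()
          {
              List<Ninja> ninjas = Enumerable.Range(0, 100000)
                                             .Select(i => new Ninja { Id = i })
                                             .ToList();
              const int target = 99999;   // worst case: last element
              int hits = 0;

              var sw = Stopwatch.StartNew();
              for (int i = 0; i < 1000; i++)
                  if (FindWithForeach(ninjas, target) != null) hits++;
              Console.WriteLine("foreach:        {0} ms", sw.ElapsedMilliseconds);

              sw.Restart();
              for (int i = 0; i < 1000; i++)
                  if (ninjas.FirstOrDefault(n => n.Id == target) != null) hits++;
              Console.WriteLine("FirstOrDefault: {0} ms ({1} hits)", sw.ElapsedMilliseconds, hits);
          }
      }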

  • MySQL convert rows to columns performance problem

    - by Tarski
    I am doing a query that converts rows to columns, similar to this post, but I have run into a performance problem. Here is the query:
      SELECT Info.Customer, Answers.Answer, Answers.AnswerDescription,
             Details.Code1, Details.Code2, Details.Code3
      FROM Info
      LEFT OUTER JOIN Answers ON Info.AnswerID = Answers.AnswerID
      LEFT OUTER JOIN
          (SELECT ReferenceNo,
                  MAX(CASE DetailsIndicator WHEN 'cde1' THEN DetailsCode ELSE NULL END) Code1,
                  MAX(CASE DetailsIndicator WHEN 'cde2' THEN DetailsCode ELSE NULL END) Code2,
                  MAX(CASE DetailsIndicator WHEN 'cde3' THEN DetailsCode ELSE NULL END) Code3
           FROM DetailsData
           GROUP BY ReferenceNo) Details ON Info.ReferenceNo = Details.ReferenceNo
    Fewer than 300 rows are returned, but the Details table is about 180 thousand rows. The query takes 45 seconds to run and needs to take only a few seconds. When I run SHOW PROCESSLIST; in MySQL it hangs on "Sending data". Any thoughts as to what the performance problem might be?

    Read the article

  • Apache modules: C module vs mod_wsgi python module - Performance

    - by Gopal
    Hi. A client of ours is asking us to implement a module in C in the Apache web server for performance reasons. The module should handle RESTful URIs, access a database, and return results in JSON format. Many people here have recommended Python with mod_wsgi instead, but for simplicity-of-programming reasons. Can anyone tell me if there is a significant performance difference between the mod_wsgi Python solution and an Apache C module? Any anecdotes? Pointers to some study posted online?

    Read the article

  • Visual Studio add-in for performance benchmarking

    - by chiccodoro
    I'd like to measure the performance of some code blocks in my C# WinForms application. In particular, I want to measure performance regression/improvement after some restructuring of the code. So far I've looked at System.Diagnostics.Stopwatch; however, I want to avoid writing measurement code into my classes and would rather keep measuring separate from the actual code. For debugging, you can set breakpoints on several code lines and "jump" from one to the next with Continue Execution; I imagine something similar for measuring: mark two lines of code and have Visual Studio display the time elapsing from one to the next. Is there any feature or add-in along those lines? (A fallback idea is sketched after this entry.)

    Read the article
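
    As an aside, not from the original question: if no tool fits, one way to keep measurement calls out of release builds is a helper marked with [Conditional], so the calls are removed by the compiler unless a PROFILE symbol is defined. A rough sketch; the Probe name and the PROFILE symbol are placeholders.

      using System.Collections.Generic;
      using System.Diagnostics;

      static class Probe
      {
          private static readonly Dictionary<string, Stopwatch> Timers =
              new Dictionary<string, Stopwatch>();

          // Calls to these methods are stripped by the compiler unless PROFILE is defined,
          // so the classes being measured carry no permanent instrumentation cost.
          [Conditional("PROFILE")]
          public static void Start(string name)
          {
              Timers[name] = Stopwatch.StartNew();
          }

          [Conditional("PROFILE")]
          public static void Stop(string name)
          {
              Stopwatch sw;
              if (Timers.TryGetValue(name, out sw))
                  Debug.WriteLine(string.Format("{0}: {1} ms", name, sw.ElapsedMilliseconds));
          }
      }

      // Usage around the block being compared before/after restructuring:
      //   Probe.Start("LoadGrid");
      //   ... code under measurement ...
      //   Probe.Stop("LoadGrid");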

  • What's the performance penalty of weak_ptr?

    - by Kornel Kisielewicz
    I'm currently designing an object structure for a game, and the most natural organization in my case turned out to be a tree. Being a great fan of smart pointers, I use shared_ptrs exclusively. However, in this case the children in the tree need access to their parent (for example, beings on a map need to be able to access the map data, i.e. the data of their parent). The direction of ownership is of course that a map owns its beings, so it holds shared pointers to them. To access the map data from within a being we need a pointer back to the parent; the smart-pointer way is to use a non-owning reference, i.e. a weak_ptr. However, I once read that locking a weak_ptr is an expensive operation (maybe that's no longer true), and considering that the weak_ptr will be locked very often, I'm concerned that this design is doomed to poor performance. Hence the question: what is the performance penalty of locking a weak_ptr, and how significant is it?

    Read the article

  • C# debug vs release performance

    - by sagie
    Hi. I've come across the following paragraph: “Debug vs Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn’t really do any optimisation. The C# compiler just spits out IL… and at the runtime it’s the JITer that does all the optimisation. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn’t key off whether you run the Debug or Release configuration of your project, that keys off whether a debugger is attached.” The source is here and the podcast is here. Can someone direct me to a Microsoft article that actually backs this up? (A small experiment that probes the claim is sketched after this entry.)

    Read the article
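
    Not a Microsoft reference, but the claim is easy to probe directly: run the same hot loop under the Debug and Release configurations, with and without a debugger attached, and compare the timings. A rough sketch; the loop body is an arbitrary placeholder workload.

      using System;
      using System.Diagnostics;

      class JitProbe
      {
          static void Main()
          {
              Console.WriteLine("Debugger attached: {0}", Debugger.IsAttached);
      #if DEBUG
              Console.WriteLine("Build configuration: Debug");
      #else
              Console.WriteLine("Build configuration: Release");
      #endif
              var sw = Stopwatch.StartNew();
              double sum = 0;
              for (int i = 1; i < 100000000; i++)
                  sum += 1.0 / i;            // hot loop the JIT can optimise
              sw.Stop();
              Console.WriteLine("sum={0:F4} in {1} ms", sum, sw.ElapsedMilliseconds);
          }
      }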

  • Performance of Managed C++ Vs UnManaged/native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++, and why? Managed C++ targets the CLR rather than the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++; or is there some other reason? When using managed code, how can one avoid dynamic memory allocation (which causes a performance hit) if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why? (A sketch of one allocation-avoidance pattern follows this entry.)

    Read the article
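
    On the narrower question of avoiding per-event allocations under the CLR, one common pattern, not taken from the original question, is to preallocate and reuse event objects so the hot path allocates nothing. A C# sketch with placeholder names; the round-robin reuse assumes each event is fully processed before the ring wraps around.

      // Hypothetical hot-path sketch: a fixed ring of reusable event objects,
      // allocated once up front, so processing an event never touches the GC.
      sealed class MarketEvent
      {
          public long Timestamp;
          public double Price;
          public void Reset() { Timestamp = 0; Price = 0; }
      }

      sealed class EventRing
      {
          private readonly MarketEvent[] _ring;
          private int _next;

          public EventRing(int size)
          {
              _ring = new MarketEvent[size];
              for (int i = 0; i < size; i++)
                  _ring[i] = new MarketEvent();      // the only allocations, done at startup
          }

          // Round-robin reuse: valid as long as an event is consumed before the ring wraps.
          public MarketEvent Next()
          {
              MarketEvent e = _ring[_next];
              _next = (_next + 1) % _ring.Length;
              e.Reset();
              return e;
          }
      }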

  • What are some good code optimization methods?

    - by esac
    I would like to understand good code optimization methods and methodology. How do I keep from doing premature optimization if I am already thinking about performance? How do I find the bottlenecks in my code? How do I make sure that my program does not become slower over time? What are some common performance mistakes to avoid (for example, I know it is bad in some languages to return from inside the catch portion of a try/catch block)?

    Read the article
