Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

  • Real pagination vs Next and Previous buttons

    - by Pablo
    By real pagination I mean something like this when on page 3: <<Previous 1 | 2 | {3} | 4 | 5 |...| 15 | Next>>. By Next and Previous buttons I mean something like this when on page 3: <<Previous Next>>. Performance-wise, I'm sure the Previous and Next buttons are better, since unlike real pagination they don't require over-querying the database. By over-querying the database I mean retrieving more information from the database than you need to display on the page. My theory is that Previous and Next buttons can drastically improve a site's performance, as they only require the exact information you will display on a page; please correct me if I'm wrong on this. So, do users really have a preference between these two options, or is it just developer preference and convenience? Which one do you prefer, and why? *Note: Previous and Next buttons are usually labeled Newer and Older.
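
    A sketch of the difference (table and column names are made up): full pagination typically needs a total row count to draw the 1...15 page links, plus an OFFSET that still scans the skipped rows, while Next/Previous can use keyset pagination and fetch just one extra row to decide whether an older page exists.

        -- Full pagination: an extra COUNT(*) for the page links, plus OFFSET.
        SELECT COUNT(*) FROM posts;
        SELECT * FROM posts ORDER BY id DESC LIMIT 10 OFFSET 20;

        -- Next/Previous only: keyset pagination; fetch 11 rows to show 10,
        -- using the 11th merely to decide whether to render "Older".
        SELECT * FROM posts WHERE id < 12345 ORDER BY id DESC LIMIT 11;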

  • Fedora internet access and wireless management

    - by Fractal
    This question concerns a Fedora install, a DWA-552 wireless adapter, and Internet access. What packages are required, on a KDE GUI installation and on a basic TUI installation, to access the Internet and manage wireless networks? On a larger scale, does anyone know of an all-encompassing list of basic functions (such as monitoring or hardware control) with their respective package dependencies?

  • How to get the speed of a network card on the command line?

    - by nelaar
    I am trying to see what the speed of some network cards is on a remote server. Our reporting software says they are 10Mbps, but I am sure that is wrong; they should be 1Gbps. Our monitoring software uses SNMP to query the servers, so perhaps the servers are reporting the information incorrectly. ifconfig does not report the speed of the devices. How can I see the currently configured speed of the cards?
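
    A sketch of some common approaches on Linux (assuming an interface named eth0; tool availability varies by distribution):

        # ethtool reports the negotiated link speed and duplex
        ethtool eth0 | grep -i speed
        # older NICs may answer to mii-tool instead
        mii-tool eth0
        # sysfs also exposes the speed in Mbps on modern kernels
        cat /sys/class/net/eth0/speed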

  • Can I set up a different method of authentication on Nagios?

    - by cwd
    Nagios is a wonderful tool for monitoring servers. Its web interface is not bad, either. However, I am not crazy about using the HTTP authentication that comes standard. Is there a way to use another method of authentication? (I don't mean restricting access by IP address in the .htaccess file.) Something with a form-based login would be wonderful, but perhaps there is no such thing. I'm hoping you guys have found something I haven't.

  • Icons in Silverlight: Images vs. Vectors

    - by Shnitzel
    I like using the vector drawing feature of Expression Blend to create icons; that way I can change colors easily on my icons without having to resort to an image editor. But my question is: say I have a treeview control with an icon next to each tree element, and say I have hundreds of elements. Do you think using images is faster, performance-wise, than using vector icons? Because I'd rather use vectors, but I'm wondering about performance concerns.

  • MS Access vs SQL Server and others? Is it worth taking a DB server when the data is less than 2 GB and there are only 20 users?

    - by asksuperuser
    After my experiment with MS Access vs MySQL, which showed MS Access hugely outperforming MySQL ODBC inserts by a factor of 1000%, and before I do the same experiment with SQL Server, I searched for other people's results and found this one: http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/ which says, "As a side note, in this particular test, Access offers much better raw performance than SQL Server. In more complex scenarios it’s very likely that Access’ performance would degrade more than SQL Server, but it’s nice to see that Access isn’t a sloth." So is it worth bothering with a DB server when the data is less than 2 GB and there are about 20 users (knowing that MS Access theoretically supports up to 255 concurrent users, though in practice it is around a dozen)? Are there any real-world studies that really compare MS Access with other DBs in this specific use case? Professionally speaking, I keep hearing a DB server systematically recommended by people who have never used Access, just because they think a DB server must perform better in every case, which I confess I used to think myself.

  • Using Lambda Statements for Event Handlers

    - by lush
    I currently have a page which is declared as follows:

        public partial class MyPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                //snip
                MyButton.Click += (o, i) =>
                {
                    //snip
                };
            }
        }

    I've only recently moved to .NET 3.5 from 1.1, so I'm used to writing event handlers outside of the Page_Load. My question is: are there any performance drawbacks or pitfalls I should watch out for when using the lambda method for this? I prefer it, as it's certainly more concise, but I do not want to sacrifice performance to use it. Thanks.
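
    Performance-wise, the compiler turns the lambda into an ordinary method, so invocation cost is the same as a named handler. The practical pitfall is that an anonymous handler cannot be unsubscribed later unless you keep a reference to it; a minimal sketch (illustrative, not from the question):

        EventHandler handler = (o, i) => { /* handle click */ };
        MyButton.Click += handler;
        // Unsubscribing works only because the same delegate reference was kept:
        MyButton.Click -= handler;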

  • When accessing ResultSets in JDBC, is there an elegant way to distinguish between nulls and actual zeroes?

    - by Uri
    When using JDBC and accessing primitive types via a result set, is there a more elegant way to deal with the null/0 distinction than the following?

        int myInt = rs.getInt(columnNumber);
        if (rs.wasNull()) {
            // Treat as null
        } else {
            // Treat as 0
        }

    I personally cringe whenever I see this sort of code. I fail to see why ResultSet was not defined to return the boxed integer types (except, perhaps, for performance), or at least to provide both. Bonus points if anyone can convince me that the current API design is great :) My solution was to write a wrapper that returns an Integer (I care more about elegance of client code than performance), but I'm wondering if I'm missing a better way to do this.
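
    A sketch of the wrapper approach described above (a hypothetical helper, not part of the JDBC API; assumes java.sql.ResultSet and SQLException are imported):

        // Returns null when the column held SQL NULL, otherwise the boxed value.
        public static Integer getInteger(ResultSet rs, int columnNumber) throws SQLException {
            int value = rs.getInt(columnNumber);
            return rs.wasNull() ? null : Integer.valueOf(value);
        }

    rs.getObject(columnNumber) is the built-in alternative: for an INTEGER column it returns an Integer, and plain null for SQL NULL, at the cost of a cast at the call site.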

  • How have your coding values changed since graduating?

    - by Matt
    We all walked out of school with stars in our eyes and little experience in "real-world" programming. How have your opinions on programming as a craft changed since you've gained more experience away from academia? I've become more and more focused on design a la McConnell: wide use of encapsulation, quality code that gives you warm fuzzy feelings when you read it, maintainability over execution performance, and so on. Many of my co-workers have followed a different path: fewer middleman layers getting in the way, code that is right out in the open and easier to locate, even if harder to read, and performance-centric designs. What have you learned about the craft of software design that has changed the way you approach coding since leaving the academic world?

  • Do MySQL Locked Tables affect related Views?

    - by CogitoErgoSum
    After reading http://stackoverflow.com/questions/1415602/performance-in-pdo-php-mysql-transaction-versus-direct-execution, in regard to some performance issues I was thinking about, I did some research on table locking in MySQL. From http://dev.mysql.com/doc/refman/5.0/en/table-locking.html: "Table locking enables many sessions to read from a table at the same time, but if a session wants to write to a table, it must first get exclusive access. During the update, all other sessions that want to access this particular table must wait until the update is done." This part struck me particularly because most of our queries will be updates rather than inserts. I was wondering: if one created a table called foo, on which all updates/inserts were carried out, and then a view called foo_view (a copy of foo, or perhaps foo joined with several other tables), on which all selects occurred, would this locking issue still arise? That is, would SELECT queries on foo_view still have to wait for an update to finish on foo?
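
    A sketch of the setup in question (illustrative names). A MySQL view stores no rows of its own; a SELECT against it is rewritten against its base tables, so it acquires the same table locks as querying foo directly and would still wait out a write lock:

        CREATE TABLE foo (id INT PRIMARY KEY, payload VARCHAR(255));

        CREATE VIEW foo_view AS
            SELECT id, payload FROM foo;

        -- This reads foo underneath, so it blocks while another
        -- session holds a write lock on foo.
        SELECT * FROM foo_view;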

  • Stopping Outlook from forwarding certain emails

    - by kitokid
    I have set up a rule to forward incoming emails from Outlook to my Gmail account. The problem is that certain emails on which I'm only CC'd (about 1,000 a day of monitoring-system running status) are also forwarded to Gmail and fill up my account very quickly. I have set up rules in Outlook to move those emails to a certain folder (called Monitored_Emails), but I don't know how to filter them out of the forwarding. How can I set the rule to forward all emails except those moved to that folder?

  • Best way to merge two identical ASP.NET web sites?

    - by ase69s
    We have two websites whose only difference is in the design (different images, styles, layouts, etc.); the web structure of files and C# code-behind is the same, so we want to simplify maintenance. The current structure is:

        DefaultA.aspx
        DefaultA.aspx.cs
        DefaultB.aspx
        DefaultB.aspx.cs
        LoginA.aspx
        LoginA.aspx.cs
        LoginB.aspx
        LoginB.aspx.cs

    One idea would be changing the design differences at runtime depending on the originating website, but we don't much like this because of performance, abstraction in designing them, and URL confusion. Another is sharing the .cs file (both .aspx pages inheriting from and using the same code-behind), but we have never done this or seen it done on any website before, so we wonder if it is a good approach. What do you think? Is there any other way that is better in terms of performance vs. ease of development?
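
    A sketch of the shared code-behind idea (class and file names are made up): both markup files point their Inherits attribute at the same page class, so only the layout differs. Any control referenced from the shared class must exist, with the same ID, in both markup files.

        <%-- DefaultA.aspx --%>
        <%@ Page Language="C#" Inherits="MyApp.DefaultPage" %>

        <%-- DefaultB.aspx --%>
        <%@ Page Language="C#" Inherits="MyApp.DefaultPage" %>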

  • Network speeds being reported as 4x higher than actual in Windows 7 SP1

    - by Synetech
    Ever since installing Windows 7 SP1, I have noticed that all programs that display my network transfer rate report it exactly 4x higher than it actually is. For example, when I download something from a high-bandwidth web site or through torrents with lots of sources, the download rate indicated is ~5MBps (~40Mbps) even though my Internet connection has a maximum of only 1.5MBps (12Mbps). It is the same with the upstream bandwidth: the connection maximum is 64KBps, but I’m seeing up to 256KBps. I have tried several different programs for monitoring bandwidth throughput and they all give the same results. I also tried different times and different days, and they always show the rate as being four times too high.

    My initial thought was that my ISP had increased the speeds without my noticing, which they have done before. However, I checked my ISP’s site and they have not. Moreover, when I look at the speeds in the program actually doing the transfer (e.g. Chrome, µTorrent, etc.), the numbers are in line with the expected values at the same time that the bandwidth-monitoring programs are showing the high numbers.

    The only significant change to my system (pretty much the only change at all) was the installation of SP1 for Windows 7. My belief is that some change in SP1 means software that reads the bandwidth via a specific API receives erroneously high numbers, while software with access to the raw data continues to receive correct values. I booted into Windows XP and downloaded some things via HTTP and torrent, and in both cases the numbers were as expected (as they were in Windows 7 before SP1). I then booted back into 7 SP1, and once again the numbers were four times higher than possible. Therefore it is definitely something in SP1 that has changed how local bandwidth is calculated or reported. I tried Googling this but, for multiple reasons, have had a difficult time finding anything relevant. Has anybody else noticed this behavior? Does anybody know of any bugs or changes in SP1 that could account for it?

  • How to change the size of an STL container in C++

    - by Jaime Pardos
    I have a piece of performance-critical code written with pointers and dynamic memory. I would like to rewrite it with STL containers, but I'm a bit concerned about performance. Is there a way to increase the size of a container without initializing the data? For example, instead of doing

        ptr = new BYTE[x];

    I want to do something like

        vec.insert(vec.begin(), x, 0);

    However, this initializes every byte to 0. Isn't there a way to just make the vector grow? I know about reserve(), but it just allocates memory; it doesn't change the size of the vector, and it doesn't allow me to access the elements until I have inserted valid data. Thank you everyone.
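
    One workaround, sketched below under the assumption that a C++11 compiler is available (all names are made up): give the vector an allocator whose construct() default-initializes instead of value-initializing, so resize() grows the vector without zeroing the new elements.

        #include <memory>
        #include <new>
        #include <utility>
        #include <vector>

        template <typename T, typename A = std::allocator<T>>
        struct default_init_allocator : A {
            using A::A;
            template <typename U>
            struct rebind {
                using other = default_init_allocator<
                    U, typename std::allocator_traits<A>::template rebind_alloc<U>>;
            };
            // Called by resize(n): default-init leaves PODs indeterminate.
            template <typename U>
            void construct(U* p) {
                ::new (static_cast<void*>(p)) U;
            }
            // Forward all other constructions to the underlying allocator.
            template <typename U, typename... Args>
            void construct(U* p, Args&&... args) {
                std::allocator_traits<A>::construct(
                    static_cast<A&>(*this), p, std::forward<Args>(args)...);
            }
        };

        int main() {
            std::vector<unsigned char, default_init_allocator<unsigned char>> vec;
            vec.resize(1024);  // size() == 1024; contents left uninitialized
        }

    Under C++03, the usual fallback is reserve() plus push_back, or keeping the raw new BYTE[x] buffer behind an RAII wrapper.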

  • JavaScript large number array compression

    - by gatapia
    Hi All, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:

    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out)
    - Be relatively fast; I'm not expecting the same performance as I have now, but I also don't want gzip-level compression cost either

    Any ideas would be greatly appreciated. Thanks, Guido Tapia
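
    Note that the native conversions only go up to radix 36, so toString(62)/parseInt(s, 62) would have to be hand-rolled. A minimal sketch of such a codec for non-negative integers (the alphabet order is an arbitrary choice):

        var ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';

        function toBase62(n) {
            if (n === 0) return '0';
            var s = '';
            while (n > 0) {
                s = ALPHABET.charAt(n % 62) + s;
                n = Math.floor(n / 62);
            }
            return s;
        }

        function fromBase62(s) {
            var n = 0;
            for (var i = 0; i < s.length; i++) {
                n = n * 62 + ALPHABET.indexOf(s.charAt(i));
            }
            return n;
        }

        // toBase62(123456789) === '8M0kX'; fromBase62('8M0kX') === 123456789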

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern relates to the performance of a clustered index on a reference table that sees many rapid inserts and deletes.

        Table 1 "Collection": collection_pk int (among other fields)
        Table 2 "Item": item_pk int (among other fields)
        Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)

    Because the primary key is composed of both PKs, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections and adding and removing items from those collections very frequently, affecting the "Collection_Items" table and its clustered index. QUESTION: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?

  • Adding more OR searches with CONTAINS brings query to a crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each) and don't contain very many records (36k in the bigger of the two). I was able to fix the performance by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan omitted (I need more reputation before I can post the image).
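
    A sketch of the UNION workaround described above, with each CONTAINS isolated in its own SELECT so each branch can be driven by its full-text index rather than forcing a scan:

        SELECT a.CollectionID
        FROM collections a
        WHERE a.CollrTeam_Text LIKE '%fa%'
        UNION
        SELECT a.CollectionID
        FROM collections a
        WHERE CONTAINS(a.*, '"*fa*"')
        UNION
        SELECT b.CollectionID
        FROM determinations b
        WHERE CONTAINS(b.*, '"*fa*"')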

  • Performing tasks on a remote Windows Server 2003 machine

    - by Vaibhav Jain
    I have to perform various tasks such as restarting processes, monitoring process state, checking disk space, and log monitoring on a Windows Server 2003 machine. For this I am using Remote Desktop access, which is very slow. Is there an alternative (tool or framework) for Windows Server with which I can execute a script on my machine and have the required task performed on the remote machine in a somewhat interactive manner (like PuTTY on Linux)?
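
    One sketch of that workflow from the same era (assuming the Sysinternals PsExec tool and reachable admin shares; the server name and credentials are placeholders):

        rem Run a single command remotely and see its output locally
        psexec \\myserver2003 -u DOMAIN\admin -p secret tasklist
        rem Or open an interactive remote shell, roughly like an ssh session
        psexec \\myserver2003 -u DOMAIN\admin cmd.exe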

  • SQL Server for logging: best connection practice

    - by ozz
    I'm using SQL Server for logging. Yes, this is the wrong choice; there are better DBs for this requirement, but I have no other option for now. The logging rate is 3 logs per second. So I have a static logger class with a static Log method. Holding an open connection as a static member is better for performance, but what is the best implementation of it? This is all I know:

        public static class OzzLogger
        {
            static SqlConnection Con;

            static OzzLogger()
            {
                Con = new SqlConnection(....);
                Con.Open();
            }

            public static void Log(....)
            {
                Con.ExecuteSql(......);  // pseudocode: run the INSERT here
            }
        }

    UPDATE: I asked because my information was old. People say connection-pooling performance is enough. If there is no objection, I'm closing the issue :)
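
    A sketch of the pooling-based alternative the update points at (the connection string, table, and column names are made up): open and close a connection per call and let ADO.NET connection pooling reuse the physical connection, which also sidesteps the thread-safety problems of one shared static SqlConnection.

        public static void Log(string message)
        {
            // Opening per call is cheap: the physical connection comes from the pool.
            using (var con = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO OzzLog (LoggedAt, Message) VALUES (GETDATE(), @msg)", con))
            {
                cmd.Parameters.AddWithValue("@msg", message);
                con.Open();
                cmd.ExecuteNonQuery();
            }
        }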

  • Video chat application: which technology to choose?

    - by WarDoGG
    I have to undertake a project to build a video chat application. The video has to be streamed from one location and be viewable by multiple people spread across the globe. Performance is really an issue, and a delay of more than 2-3 seconds is unacceptable. From what I gather, this can be done in Flex and also in Java. Are there any performance issues or caveats with either approach? I would really like the pros to comment on this and guide me through; it would be very helpful. Also, are there any open-source libraries available for video recording in Flash/Java that I can integrate into my app and customize to my needs?

  • What is the best way to swap in a new memory allocator in existing code?

    - by O. Askari
    During the last few days I've gathered some information about memory allocators other than the standard malloc(). Some implementations seem to be much better than malloc() for applications with many threads; for example, tcmalloc and ptmalloc appear to offer better performance. I have a C++ application that uses both malloc and the new operator in many places, and I thought replacing the allocator with something like ptmalloc might improve its performance. But I wonder: how does the new operator act in a C++ application running on Linux? Does it use the standard behavior of malloc, or something else? What is the best way to swap the new memory allocator in for the old one in the code? Is there a way to override the behavior of new and malloc, or do I need to replace all the calls to them one by one?
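
    Two sketches, assuming Linux and a gcc toolchain (the library path is a placeholder). tcmalloc intercepts both malloc and operator new, so it can often be swapped in with no code changes at all, either by linking with -ltcmalloc or at launch time:

        # no rebuild needed: preload the allocator for one run
        LD_PRELOAD=/usr/lib/libtcmalloc.so ./my_app

    If you want the source itself to control it, C++ permits replacing the global operators once, program-wide, instead of editing every call site; routing them through malloc/free means whichever malloc implementation is linked serves new as well:

        #include <cstdlib>
        #include <new>

        void* operator new(std::size_t size) {
            if (void* p = std::malloc(size)) return p;
            throw std::bad_alloc();
        }

        void operator delete(void* p) throw() {
            std::free(p);
        }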

  • List filtering: list comprehension vs. lambda + filter

    - by Agos
    I happened to find myself with a basic filtering need: I have a list and I have to filter it by an attribute of the items. My code looked like this:

        list = [i for i in list if i.attribute == value]

    But then I thought, wouldn't it be better to write it like this?

        filter(lambda x: x.attribute == value, list)

    It's more readable, and if needed for performance the lambda could be taken out to gain something. The question is: are there any caveats to using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should I do it in yet another way (such as using itemgetter instead of the lambda)? Thanks in advance.
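
    A rough sketch (illustrative data, not from the question) of settling the performance side with timeit; on CPython the comprehension usually wins because filter pays a Python-level function call per element:

        import timeit

        setup = "items = range(1000); value = 500"
        print(timeit.timeit("[i for i in items if i == value]", setup=setup, number=10000))
        print(timeit.timeit("filter(lambda i: i == value, items)", setup=setup, number=10000))
        # Under Python 3, wrap the filter() call in list() for a fair
        # comparison, since filter became lazy there.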

  • Which is better, creating a view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same datasets from several MySQL tables. I am thinking of creating a table or a view that gathers all the demanding columns from the other tables, so as to increase performance. If I create a table, I may need extra insert/update/delete operations each time the other tables are updated. If I create a view, I worry whether performance can really be improved much, because the data in the other tables changes very frequently, so the view would in effect be re-evaluated every time before selecting from it. Any ideas, e.g. how to cache, or other extra measures I can take?
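
    A sketch of the two options (table and column names are made up). A MySQL view stores no data: every SELECT against it re-runs the underlying query, so it saves typing rather than work. A summary table materializes the result once and reads fast, but must be refreshed or incrementally maintained after writes to the source tables:

        -- Option 1: a view; re-evaluated on every SELECT.
        CREATE VIEW demanding_v AS
            SELECT a.id, a.col1, b.col2
            FROM table_a a
            JOIN table_b b ON b.a_id = a.id;

        -- Option 2: a materialized summary table; fast to read,
        -- but the application must keep it in sync.
        CREATE TABLE demanding_cache AS
            SELECT a.id, a.col1, b.col2
            FROM table_a a
            JOIN table_b b ON b.a_id = a.id;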
