Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

Page 122/530 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error, repeating until I quit munin-cron:

        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    I've followed the directions on the 'Debugging Munin plugins' wiki page, with these results: after telnetting to localhost 4949 I can see the list of plugins and a node at archstl.archstl.org, but I can't fetch anything. The output is:

        > fetch cpu
        .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line:

        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix or change this. The wiki's tip for this case: "If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file."
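
    A minimal sketch (bash) of that wiki tip, assuming a stock layout where plugin environment files live in /etc/munin/plugin-conf.d/ and the node is restarted via its init script; the file name, section name and PATH value are assumptions to adapt to your install:

        # Give the cpu plugin an explicit PATH in munin-node's plugin
        # environment, then restart the node.
        printf '[cpu]\nenv.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\n' \
            >> /etc/munin/plugin-conf.d/munin-node
        /etc/init.d/munin-node restart

        # Re-test over the node protocol (the same check as the telnet session):
        printf 'fetch cpu\nquit\n' | nc localhost 4949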

    Read the article

  • How can I simulate a slow machine in a VM?

    - by Nathan Long
    I'm testing an AJAX-heavy web application. I develop on a new Mac, but I use VMware Fusion (currently 3.1.2) to test in Windows XP, using IETester to simulate older versions of IE. This lets me see how older IE versions would render the site, but I'd also like to see how the site would perform on an older machine. I see in the VM's settings that I can decrease the RAM; is there a way to also dial down the processor speed? How else might I simulate a slow machine? (I am also going to check out how to simulate a slow internet connection.)
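
    For the slow-connection half of the question: if Fusion's own settings don't offer link shaping, one option is to route the test VM's traffic through a Linux gateway (a second, minimal Linux VM works) and shape it there with tc/netem. A rough sketch, assuming eth0 is the interface facing the test VM; the numbers are arbitrary examples:

        # Add ~200 ms latency with 20 ms jitter, then cap throughput at 1 Mbit/s
        tc qdisc add dev eth0 root handle 1: netem delay 200ms 20ms
        tc qdisc add dev eth0 parent 1: handle 2: tbf rate 1mbit burst 32kbit latency 400ms

        # Remove the shaping when done
        tc qdisc del dev eth0 root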

    Read the article

  • How do you debug why Windows is slow?

    - by aaron
    I've got Vista Biz, and when my machine chugs I think it is because of paging, but I never know how to verify this. Procexp doesn't seem to provide useful information, because it appears that nothing is going on when the chugs happen. Perfmon seems to have the counters I need, but I'm never sure which counters to add to cover the information I want. For perfmon, I prefer numbers that are percentages, so I can gauge load. Here are the counters I have up, but they don't always seem to correlate with the chugs:

        - % Disk Time (logical)
        - Page Faults/sec (an indicator of lots of paging activity)
        - Processor / % Privileged Time

    Read the article

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to take off-site incremental archives from a snapshot of a "master" file system. In essence: copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled about the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: in principle the OS disk cache, the hard disk's physical cache and the hard disk itself also don't need to do any actual "disk writes", as writing the unchanged "disk block" is likewise unnecessary. Any pointers to discussion of this style of optimisation would also be interesting.
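
    If it helps to settle it empirically, here is a rough sketch (bash) of the experiment; the volume group, LV, size and mount point names are placeholders:

        # Snapshot the archive LV, then rewrite a file with byte-identical
        # contents and watch the snapshot's copy-on-write usage.
        lvcreate --snapshot --size 1G --name archive_snap /dev/vg0/archive
        lvs /dev/vg0/archive_snap                  # note the Data% column
        cp /mnt/archive/somefile /tmp/same
        cp /tmp/same /mnt/archive/somefile         # same bytes, rewritten
        sync
        lvs /dev/vg0/archive_snap                  # Data% grows

    The expected result is that Data% grows either way: the device-mapper snapshot layer sits below the file system and copies any block that is written to, without comparing old and new contents.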

    Read the article

  • How to tell if linux disk IO is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3/VxFS SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to VxFS (Veritas File System) functionality, and/or how it interacts with the OS, is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Revised title to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behaviour like I am observing: "Solving the ext3 latency problem".

    Additional information:

        OS: Red Hat Enterprise Linux Server release 5.3 (Tikanga)
        Kernel: 2.6.18-194.32.1.el5
        Primary application disk is fibre-channel SAN:
            lspci | grep -i fibre
            14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
        Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0
        I/O scheduler:
            cat /sys/block/VxVM123456/queue/scheduler
            noop anticipatory [deadline] cfq
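
    Two cheap starting points, sketched in bash; nothing here is specific to VxFS, and the only name taken from the question is the VxVM device, whose row you would watch in the output:

        # Timestamped extended per-device stats once a second; multi-second
        # spikes in await or %util on the VxVM device that line up with the
        # application's pauses point at the I/O path rather than the JVM.
        iostat -dxkt 1

        # Dirty-page writeback often balloons just before such stalls:
        watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'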

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2 GHz CPU / 2 GB RAM machines and you give me one machine with three 2 GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages or disadvantages of both solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet / intranet) application is already layered. We have some servers in the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services: one is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

    Read the article

  • Chipset GPU causes a massive slowdown

    - by zyboxenterprises
    My AMD Radeon HD 7700 recently broke (the fan stopped working and the GPU overheated), and now I'm running on the internal chipset graphics, which causes a massive slowdown of the whole PC. I've changed the graphics memory from 32 MB (the minimum) to 256 MB (the highest), and it hasn't made any difference whatsoever. I'm using Windows Aero, and disabling it should have made a small difference, but it didn't; the whole PC is still slow. I know it's not the computer build, because I built it myself and it was a lot faster when it had the AMD Radeon HD 7700 in it, which is why I believe it's the internal chipset graphics causing the problem. Is this behavior normal? I don't have the cash right now to go out and buy a new dedicated GPU. I'm using an ASRock N68C-GS FX motherboard with an AMD FX-4100 (overclocked to 4.3 GHz) and 4 GB of RAM. The overclock was an attempt to work around this issue; it is not the cause of the slowdown the integrated graphics is producing.

    Read the article

  • SQL Management Studio is painfully slow on 32-bit Windows 7

    - by Sergei
    I've been having issues running anything in SQL Management Studio on Win 7. Basically, doing anything through the Management Studio interfaces completely freezes it up for a few minutes. Running a query is nearly impossible because it takes nearly 2 minutes just for the IDE to parse it and another minute to run it when the query itself completes instantaneously outside of the IDE. I'm not even going to go into the query designer. Anything with heavy user interaction such as editing a row in the result set where i have to click a cell freezes up the front-end. I tried reinstalling to no avail. Also tried running in compatibility mode without any difference whatsoever. Anybody had a similar experience? I'm running SQL Management Studio 2008 version 10.0.2531.0 on 32-bit Windows 7. Connecting to a remote SQL Server instance (2008 R2). Thanks.

    Read the article

  • Linux: accessing thousands of files in a hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of a similar size in a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so four index files, about 5-10x smaller than the data file, are produced next to the file being indexed. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99, and I place one file at the end of each directory, like ./00/00/00/file000000.ext to ./99/99/99/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
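
    One commonly used variant, sketched in bash, derives the leaf directory from a hash of the file name rather than from a running counter, which keeps the fan-out even without coordinating a global sequence; the file name below is just an example:

        f="file000123.ext"
        h=$(printf '%s' "$f" | md5sum | cut -c1-6)      # first 6 hex chars of the digest
        dir="${h:0:2}/${h:2:2}/${h:4:2}"                # e.g. a3/f9/07
        mkdir -p "$dir" && mv "$f" "$dir/"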

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites - just a few - take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problem, as fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the Mozilla site, with a 10M ADSL connection. I tried many tweaks I found by googling, like disabling IPv6 and changing the HTTP pipelining settings in Firefox's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck. Findings are in the screenshot. Well guys, any idea what the issue is? And how to solve it? I repeat, this only happens with Firefox (3.6.3 and prior versions), and only for a few sites (even sites with many more requests, images, javascripts and stylesheets work fine), and the HTTP pipelining and IPv6 tweaks in about:config didn't work. Thanks
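
    One way to narrow the phase down outside of Firefox, sketched with curl; the URL is the example from the question:

        # Print how long each phase of a single request takes: a huge
        # time_namelookup points at DNS (often the IPv6/AAAA timeout case),
        # a huge time_connect at routing/firewall, a huge time_starttransfer
        # at the server itself.
        curl -o /dev/null -s -w 'dns:%{time_namelookup}  connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n' \
            http://www.alistapart.com/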

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing? Just so you know where I'm getting this from (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:

        - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
        - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined: one-half of the time taken for a full rotation, or L = (0.5/250) s expressed in ms.
        - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB / (40 MB/s).

    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) s + 32 KB / (40 MB/s) = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.
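
    The arithmetic in that example can be replayed directly; a small bash/bc sketch (the numbers are the book's, not this particular Seagate drive's):

        seek_ms=5
        rot_ms=$(echo "scale=6; 0.5/250*1000" | bc)      # half a rotation at 250 rps = 2 ms
        xfer_ms=$(echo "scale=6; 32/40000*1000" | bc)    # 32 KB at 40 MB/s (decimal MB, as the book rounds) = 0.8 ms
        ts=$(echo "scale=6; $seek_ms+$rot_ms+$xfer_ms" | bc)
        echo "service time: $ts ms   IOPS: $(echo "1000/$ts" | bc)"   # prints ~7.8 ms and 128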

    Read the article

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to be from the file:

        C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal

    I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale. My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions.

    Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.
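
    If the form-history journal write really is the trigger, one hedged workaround is to stop Firefox recording form and search history at all via the browser.formfill.enable preference; a bash sketch, with the profile directory as a placeholder and Firefox closed while you do it:

        PROFILE_DIR="/path/to/Profiles/xxxxxxxx.default"   # the folder containing formhistory.sqlite
        echo 'user_pref("browser.formfill.enable", false);' >> "$PROFILE_DIR/user.js"

    The trade-off is losing form/search autocomplete, so this is more of a diagnostic than a fix.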

    Read the article

  • How can I setup a Proxy I can sniff traffic from using an ESX vswitch in promiscuous mode?

    - by sandroid
    I have a pretty specific requirement, detailed below. Here's what I'm not looking for help with, to keep things tidy and on topic:

        - How to configure a standard proxy
        - Any ESX setup required to facilitate traffic sniffing
        - How to sniff traffic
        - Any changes in design (my scope limits me)

    I need to set up a test environment for a network-sniffing based HTTP app monitoring tool, and I need to troubleshoot a client issue, but he only has a prod network, so making changes to the config on the client's system "just to try" is costly. The goal here is to create a similar system in my lab, hit the client's webapp, and redirect my traffic - using a proxy - into the lab environment. The reason I want to use a proxy is so that only this specific traffic is redirected for all to see, and not all my web traffic (like my visits to serverfault :P). Everything will run inside an ESX 4.1 machine. In there, there is a traffic-collection vSwitch in promiscuous mode that is not on the local network, for security reasons. The VM containing our listening agent is connected to this vSwitch. On the same ESX host, I will set up a basic Linux server and install a proxy (either Apache + mod_proxy or Squid, doesn't matter). I'm looking for ideas on how to deploy this for my needs so I can then figure out how to set it up accordingly. Some ideas I've had were to set up two proxies and have them talk to each other through this vSwitch in promiscuous mode, but it seems like a lot of work. Another idea is a dual-homed proxy, but I've never seen/done that before, so I'm not sure how doable it is for what I'd like. I am OK with setting up a second vSwitch in promiscuous mode to facilitate this if need be, but I cannot put the vSwitch on the LAN (which is used so my browser can communicate with the proxy) in promiscuous mode. Any ideas are welcome.

    Read the article

  • Is there a way to have a working search bar in Explorer with Windows Search Service disabled?

    - by Desmond Hume
    I had to disable Windows Search Service (turn it off in Windows Features) for the reason that it was constantly using the hard drive in an excessive way (maybe because I've got very large quantities of files on my PC), noticeably slowing down my computer, and the Windows.edb database file grew way too large, about 2.5 GB in size. But the side-effect of it is that now the search bar is gone from any Explorer window and I miss this useful feature. So my question is, is there a way to stop Windows Search Service torturing my hard drive and still being able to search for files and folders directly from Explorer, perhaps using some third-party software?

    Read the article

  • RAID 6 that can read at least 1000 Mbit/s?

    - by Diblo Dk
    I purchased a Dell PERC 6/i which I expected to be able to read at 1000 Mbps. There is not much to be done now, but there are some things I wanted to understand for another time. I have configured it with four 2 TByte drives in RAID 6. It has 256 MByte of RAM and a transfer rate of 300 Mbps. The benchmark test showed:

        Min read rate: 136.3 Mbps
        Max read rate: 329.6 Mbps
        Avg read rate: 242.2 Mbps

    What could I have done to get at least 1000 Mbps? Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.) How would RAID 10 have performed compared to RAID 6 or 5? Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller?

    UPDATE: The drives are SATA III 6 Gbps: http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2 TB)

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is shutting down a memory-intensive application: it can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard-drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and sorting by CPU activity can usually show which process is causing the problem, but sometimes there is no activity showing. And yes, I "show processes from all users". I have been wondering this since the days of Win2k :)

    Read the article

  • VMWare Pre-Allocated vs. Growable, which is faster?

    - by tekiegreg
    In an effort to increase speed in my VMware setup, I was thinking about converting a 32-bit Windows XP guest I have from growable to pre-allocated. I'm currently running VMware Workstation 7 with Windows 7 64-bit as the host. Specs:

        - Dual-core CPU, one core allocated to the guest
        - 4 GB of RAM, 2 GB to the guest
        - HD max capacity is 500 GB, 150 GB allocated to the guest (I have 300 GB left and don't mind parting with the space; currently the virtual HD occupies 80 GB, and converting would obviously add another 70 GB)
        - The HD the guest is running on is separate from the host OS

    Either that or any other suggestions you have would be appreciated, thanks!
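
    If you do go the pre-allocated route, Workstation ships a command-line tool for the conversion; a rough sketch, with the file names as placeholders, the guest powered off, and a backup of the .vmdk in hand:

        # -r = source disk to convert, -t 2 = preallocated single-file disk
        # (-t 3 would split it into 2 GB chunks)
        vmware-vdiskmanager -r "WindowsXP.vmdk" -t 2 "WindowsXP-prealloc.vmdk"
        # Then point the VM's settings at the new disk and test it before
        # deleting the old one.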

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (RPM from mysql.com). Any ideas?
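
    A first diagnostic pass worth capturing during one of the stalls, sketched in bash; the credentials are placeholders and nothing here changes the server:

        mysql -u root -p -e "SHOW FULL PROCESSLIST"                          # what is 'Locked', and in which state
        mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"                    # pending I/O, semaphore waits, long transactions
        mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
        mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache_size'"         # a large query cache can stall on invalidation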

    Read the article

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of Gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop, connected via Cat 5e to a Linksys WRT320N (sporting DD-WRT). After making sure that the line speed on both systems showed 1 Gbps, I proceeded to copy a 2.4 GB MP4 from the Mini to the Win 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (that was a proper step up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing that there were other factors that resulted in less speed, SMB being one. So just for the sake of doing it, I iPerf'd both systems, and guess what that showed: around 875 Mbps on both systems! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier. 109 MB/s. Molten brains :) What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has, will surely miss them now :D

    Read the article

  • Jumbo Frames, ISCSI and ESXi

    - by vlannoob
    I have enabled jumbo frames (9000) in ESXi for all my vmNICs, VMkernels, vSwitches, iSCSI bindings, etc. - basically anywhere in ESXi that has an MTU setting, I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for jumbo frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI; each of these ports is set to jumbo frames (9000) as well. So now the questions:

        - Do I need to log into each Windows Server VM I am running, delve into the NIC properties in Device Manager, and manually set it to jumbo frames there as well?
        - What's the best way of testing that jumbo frames are indeed working as intended?
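
    For the second question, the usual end-to-end check is a don't-fragment ping sized so that it only fits inside a jumbo frame (9000 minus 28 bytes of IP/ICMP headers); a sketch from the ESXi shell, with the iSCSI target portal IP as a placeholder:

        # From ESXi, out the iSCSI VMkernel path:
        vmkping -d -s 8972 192.168.10.10
        # From a Windows VM or host, the equivalent is:  ping -f -l 8972 192.168.10.10
        # If these fail while a default-size ping succeeds, something in the
        # path is still running at MTU 1500.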

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible:

        - Set up a wildcard cert for *.domain.com.
        - Whenever I need an image, generate a number based on a hash mod 5 of the filename, and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any filename always use the same subdomain, and therefore the browser should be able to cache the images.
        - Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.

    I am looking for feedback about this plan. My concerns are:

        - Will I get warnings when my page has https:// links to multiple subdomains?
        - Is the dynamic virtualhost record I'm talking about even possible?
        - Considering the amount of processing this would require, is it likely to even produce any kind of overall benefit? I'm probably averaging half a dozen images per page, with only half changing on each page refresh.

    Thanks in advance for your feedback.
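
    The hash-mod-5 pick itself is cheap; a bash sketch of the idea, using the question's example domain and a made-up filename, showing that the same name always maps to the same img host:

        f="photo-1234.jpg"
        n=$(( 0x$(printf '%s' "$f" | md5sum | cut -c1-8) % 5 ))   # stable value 0-4 (offset by 1 if you prefer img1-img5)
        echo "https://img${n}.domain.com/${f}"

    For the dynamic-virtualhost concern, one simple route is a single vhost whose ServerAlias covers img0.domain.com through img4.domain.com (or a wildcard ServerAlias) with DocumentRoot /var/www/img; mod_vhost_alias is the heavier alternative.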

    Read the article

  • Intel Core 2 Duo E6600 versus AMD Athlon II X2 3GHZ

    - by Billy ONeal
    Hello :) I have an Intel Core 2 Duo E6600 (2.4 GHz) in my current desktop, and I have a newer machine with an AMD Athlon II X2 at 3.0 GHz. I'm wondering how the systems will perform in comparison to one another. I'd like to use the AMD because it's 45 nm and uses less power, but I don't want to do so at a loss in performance. Which should perform better? Billy3

    Read the article

  • MySQL maintenance - how to clear the buffer?

    - by Dougal
    We have a server running our web app (PHP / MySQL) which is SLOW. My predecessor says that: "We use to do database maintenance, which use to clear the buffer, cached and unwanted variables." And I wonder what on earth he means with that statement? Does he mean a simple optimize of the tables? Or the query cache? I understand MySQL but don't really know what he is describing. I would appreciate any pointers. Thanks.
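
    If by "clearing the buffer, cached and unwanted variables" he meant the query cache plus routine table maintenance, these are the usual commands; a hedged sketch in bash, with the credentials and schema name as placeholders:

        mysql -u root -p -e "SHOW STATUS LIKE 'Qcache%'"        # is the query cache in use / fragmented?
        mysql -u root -p -e "FLUSH QUERY CACHE"                 # defragment it
        mysql -u root -p -e "RESET QUERY CACHE"                 # empty it completely
        mysqlcheck -u root -p --optimize --databases myapp_db   # OPTIMIZE TABLE across the schema

    Note that OPTIMIZE TABLE locks the tables while it runs, so it is usually scheduled off-peak.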

    Read the article

  • How to configure an ASUS motherboard.

    - by Absolute0
    I have an ASUS P7P55-M motherboard running with an Intel Core i5-750 processor and 4 GB RAM with 1600 MT/s speed. For some reason the default settings of the motherboard make all the components run at half their optimums. I have switched to the "D.O.C.P." profile and supposedly everything is as it's supposed to be (verified with CPU-Z). There is also an "X.M.P." profile and a manual one. Are either of the DOCP or XMP safe to go with? I wouldn't use the manual mode as I would likely mess something up real bad. XMP seems to be more memory oriented.

    Read the article
