Search Results

Search found 35507 results on 1421 pages for 'performance test'.

Page 172 of 1421

  • Chipset GPU causes a massive slowdown

    - by zyboxenterprises
    My AMD Radeon HD 7700 recently broke (the fan stopped working and the GPU overheated), and now I'm running on the internal chipset graphics, which causes a massive slowdown of the whole PC. I've changed the graphics memory from 32MB (the minimum) to 256MB (the highest), and it hasn't made any difference whatsoever. I'm using Windows Aero, and disabling it should have made a small difference, but it didn't; the whole PC is still slow. I know it's not the computer build, because I built it myself and it was a lot faster with the AMD Radeon HD 7700 installed, which is why I believe the internal chipset graphics are causing the problem. Is this behavior normal? I don't have the cash right now to go out and buy a new dedicated GPU. I'm using an ASRock N68C-GS FX motherboard with an AMD FX 4100 (overclocked to 4.3GHz) and 4GB RAM. The overclock was an attempt to work around this slowdown, not the cause of it.

    Read the article

  • How to tell if Linux disk I/O is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3 SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to ext3/VxFS (Veritas File System) functionality (and/or how it interacts with the OS) is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Title revised to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behavior like what I am observing: "Solving the ext3 latency problem". Additional information: Red Hat Enterprise Linux Server release 5.3 (Tikanga), kernel 2.6.18-194.32.1.el5. The primary application disk is a fibre-channel SAN; lspci | grep -i fibre shows: 14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03). Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0. cat /sys/block/VxVM123456/queue/scheduler: noop anticipatory [deadline] cfq
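
    A minimal sketch of the /proc/diskstats starting point mentioned above: sample the device once per second and flag intervals in which it was busy for most of that second, so the timestamps can be lined up against the application's pauses. The device name is taken from the question (VxVM123456) and the field positions follow the kernel's iostats documentation; both are assumptions to adjust for your setup.

        import time

        DEVICE = "VxVM123456"   # volume name from the question; adjust as needed

        def disk_times(dev):
            # /proc/diskstats: field 12 is ms spent doing I/O, field 13 is the
            # weighted ms doing I/O (includes time requests spend queued).
            for line in open("/proc/diskstats"):
                fields = line.split()
                if fields[2] == dev:
                    return int(fields[12]), int(fields[13])
            raise RuntimeError("device %s not found in /proc/diskstats" % dev)

        prev = disk_times(DEVICE)
        while True:
            time.sleep(1)
            cur = disk_times(DEVICE)
            busy_ms = cur[0] - prev[0]
            weighted_ms = cur[1] - prev[1]
            if busy_ms > 900:   # device was busy for >90% of the last second
                print("%s  busy %4d ms  weighted %6d ms"
                      % (time.strftime("%H:%M:%S"), busy_ms, weighted_ms))
            prev = cur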

    Read the article

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to take off-site incremental archives from a snapshotted "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled by the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache and the hard disk itself likewise don't need to do any actual "disk writes", as the "disk block" is unchanged. Any pointers to discussion of this style of optimisation would also be interesting.
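
    One way to answer this empirically is the small experiment sketched below: rewrite a single block of a file on the snapshotted volume with its own contents and compare the snapshot usage reported by lvs before and after. The file path and block size are placeholders; since LVM tracks changes at the device level, an identical logical write is still expected to trigger copy-on-write unless the write is elided higher up the stack.

        import os
        import subprocess

        PATH = "/archive/somefile.dat"   # hypothetical file on the snapshotted LV
        OFFSET = 0                       # byte offset of the block to rewrite
        BLOCK = 4096

        subprocess.call(["lvs"])         # note the snapshot's allocation column

        f = open(PATH, "r+b")
        f.seek(OFFSET)
        data = f.read(BLOCK)             # read one block...
        f.seek(OFFSET)
        f.write(data)                    # ...and write the identical bytes back
        f.flush()
        os.fsync(f.fileno())             # make sure the write reaches the device
        f.close()

        subprocess.call(["lvs"])         # compare snapshot usage with the first run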

    Read the article

  • Multiple columns in a single index versus multiple indexes

    - by Tim Coker
    The short version of my question is: what's the difference between three indexes each indexing a single column and one index indexing three columns? Background follows. I'm primarily a programmer but have to do DBA work because we don't have a DBA. I'm evaluating our indexes versus the queries run against a particular table. The table has 3 columns that I'm often filtering against or getting the max value of. Most of the time the queries look like select max(col_a) from table where col_b = 'avalue' or select col_c from table where col_b = 'avalue' and col_a = 'anothervalue'. All columns are independently indexed. My question is: would I see any difference if I had an index that indexed col_b and col_a together, since they can appear in a where clause together?

    Read the article

  • Linux server became extremely slow

    - by Ariel Aharonson
    I have a file sharing website, and my files are hosted on a server with these specifications: 32GB RAM, 12x3TB disks, 2x Intel Quad Core E5620. Files on this server are up to 4GB each, and 446GB of the 36TB is in use.

        [root@hosted-by ~]# df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/sda2                         50G  2.7G   44G   6% /
        tmpfs                             16G     0   16G   0% /dev/shm
        /dev/sda1                         97M   57M   36M  62% /boot
        /dev/mapper/VolGroup01-LogVol00   33T  494G   33T   2% /home

    And take a look at this: why is the wa% so high? (I think that's what makes the server so slow.)
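
    A minimal sketch for confirming that I/O wait is what is dragging the box down: sample the cumulative CPU counters in /proc/stat once per second and print the iowait percentage for each interval, which should track the wa% column in top.

        import time

        def cpu_times():
            # First line of /proc/stat: cpu  user nice system idle iowait irq softirq ...
            return [int(v) for v in open("/proc/stat").readline().split()[1:]]

        prev = cpu_times()
        while True:
            time.sleep(1)
            cur = cpu_times()
            delta = [c - p for c, p in zip(cur, prev)]
            total = sum(delta)
            iowait_pct = 100.0 * delta[4] / total if total else 0.0
            print("iowait %5.1f%%" % iowait_pct)
            prev = cur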

    Read the article

  • Running several daemons using Python

    - by ylc
    I noticed that several daemons invoke Python separately. For example, I have both the wicd and ibus daemons running on my machine. Instead of launching a single instance of Python, the daemons run as two Python instances at the same time in htop: /usr/bin/python2 -O /usr/share/wicd/daemon/monitor.py and python2 /usr/share/ibus/ui/gtk/main.py. Is that wasteful? If yes, how can I improve it? If not, why do they avoid running all the daemons on a single Python instance?
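
    To put a number on what each extra interpreter actually costs, one rough approach (run as root) is to sum the proportional set size (Pss) reported in /proc/<pid>/smaps for every Python process; Pss splits shared interpreter and library pages between the processes, so two daemons usually cost noticeably less than twice one. A sketch:

        import os

        def pss_kb(pid):
            # Sum the Pss lines of every mapping; values are reported in kB.
            total = 0
            for line in open("/proc/%s/smaps" % pid):
                if line.startswith("Pss:"):
                    total += int(line.split()[1])
            return total

        for pid in os.listdir("/proc"):
            if not pid.isdigit():
                continue
            try:
                cmdline = open("/proc/%s/cmdline" % pid).read().replace("\0", " ")
                if "python" in cmdline:
                    print("%6s  %8d kB  %s" % (pid, pss_kb(pid), cmdline.strip()))
            except IOError:
                pass    # process exited, or smaps not readable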

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/carbon/whisper server, and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
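
    The "roll my own retention backend" idea at the end is essentially downsampling while keeping several statistics per bucket rather than a bare average. A minimal in-memory sketch of that aggregation step (the mmap() persistence layer is left out, and the class names are made up for illustration):

        import math

        class Bucket(object):
            def __init__(self):
                self.count = 0
                self.total = 0.0
                self.total_sq = 0.0
                self.minimum = None
                self.maximum = None

            def add(self, value):
                self.count += 1
                self.total += value
                self.total_sq += value * value
                self.minimum = value if self.minimum is None else min(self.minimum, value)
                self.maximum = value if self.maximum is None else max(self.maximum, value)

            def stats(self):
                # min/max/avg/stddev/count all survive promotion to coarser buckets.
                mean = self.total / self.count
                var = max(self.total_sq / self.count - mean * mean, 0.0)
                return {"count": self.count, "min": self.minimum, "max": self.maximum,
                        "avg": mean, "stddev": math.sqrt(var)}

        class Series(object):
            """One counter, downsampled into fixed-width time buckets."""
            def __init__(self, bucket_seconds):
                self.width = bucket_seconds
                self.buckets = {}          # bucket start timestamp -> Bucket

            def record(self, timestamp, value):
                start = int(timestamp) - int(timestamp) % self.width
                self.buckets.setdefault(start, Bucket()).add(value)

        # Example: 10-second buckets for a single counter.
        s = Series(10)
        s.record(1000000000, 42.0)
        s.record(1000000003, 40.0)
        print(s.buckets[1000000000].stats())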

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well, but the part about TABLE CACHE makes no sense:

        TABLE CACHE
        Current table_cache value = 900 tables.
        You have a total of 0 tables
        You have 900 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use.
        You should probably increase your table_cache

    When I do SHOW STATUS; I get the following table-related numbers: Open_tables = 900, Opened_tables = 0. It seems like something is going wrong. I have some extra memory I could use to increase the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning:

        TEMP TABLES
        Current max_heap_table_size = 90 M
        Current tmp_table_size = 90 M
        Of 11032358 temp tables, 40% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables.
        Note! BLOB and TEXT columns are not allowed in memory tables.
        If you are using these columns, raising these values might not impact your
        ratio of on-disk temp tables.
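
    A small sketch that pulls the same counters tuning-primer.sh works from, straight out of SHOW GLOBAL STATUS / SHOW VARIABLES via the mysql command-line client (credentials assumed to come from ~/.my.cnf), so the figures can be checked by hand; with Opened_tables at 0, any "hit rate" the script prints is not meaningful, because nothing has had to be (re)opened since startup.

        import subprocess

        def mysql_kv(sql):
            # -N: skip column names, -B: tab-separated batch output
            out = subprocess.check_output(["mysql", "-N", "-B", "-e", sql])
            return dict(line.split("\t") for line in out.decode().splitlines())

        status = mysql_kv("SHOW GLOBAL STATUS LIKE 'Open%tables'")
        settings = mysql_kv("SHOW VARIABLES LIKE 'table%cache%'")

        open_tables = int(status.get("Open_tables", 0))
        opened_tables = int(status.get("Opened_tables", 0))

        print("cache settings : %s" % settings)
        print("Open_tables    : %d" % open_tables)
        print("Opened_tables  : %d" % opened_tables)
        if opened_tables:
            print("open/opened    : %.2f" % (float(open_tables) / opened_tables))
        else:
            print("Opened_tables is 0, so a hit-rate calculation is not meaningful here.")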

    Read the article

  • How To Troubleshoot Excess Time From Connect to First Byte?

    - by Gaia
    I measured load times for a WordPress 2.9.2 install on Apache 2.2.3 and I was intrigued by the long periods between connect and first byte for the CSS and image files. Load average is 0.0, 0.0, 0.0 and there is 150MB of free RAM on the VPS. Pingdom results are at http://imagebin.ca/img/6UaiOU.png. How do I gain insight into the possible causes of this problem, and how would I troubleshoot it? Thanks
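
    A minimal sketch that reproduces the Pingdom "connect to first byte" measurement for a single asset, so it can be run both from a remote host and on the VPS itself against localhost to separate network latency from server-side time (host and path below are placeholders):

        import socket
        import time

        HOST = "example.com"                                  # placeholder host
        PATH = "/wp-content/themes/default/style.css"         # placeholder static asset

        t0 = time.time()
        sock = socket.create_connection((HOST, 80), timeout=10)
        t_connect = time.time()

        request = "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % (PATH, HOST)
        sock.sendall(request.encode())
        sock.recv(1)                  # block until the first response byte arrives
        t_first_byte = time.time()
        sock.close()

        print("connect    : %6.1f ms" % ((t_connect - t0) * 1000))
        print("first byte : %6.1f ms" % ((t_first_byte - t_connect) * 1000))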

    Read the article

  • Zabbix - some of the monitored items don't get refreshed. How do I find the reason?

    - by Niro
    I'm experiencing a strange issue with Zabbix monitoring a MySQL server. Most of the data from the server, such as MySQL queries per second, MySQL uptime, Buffers memory, etc., update nicely, while some data, like CPU iowait time (avg1), Host local time, MySQL number of threads and other items which were monitored in the past, has a last-check time of about a week ago. I can't find any logic in this; for example, MySQL number of threads and MySQL queries per second are obtained in a similar way, so it does not make sense that one of them is monitored and one is not. Please help: how can I fix this?

    Read the article

  • GNOME/KDE Linux entirely in RAM?

    - by František Žiacik
    Hi. I'd like to have a very responsive Linux, but I also like modern, elegant and functional desktops like GNOME or KDE, not the lightweight ones like Xfce or LXDE. Once I tried PuppyLinux and was impressed by the responsiveness when I clicked an application. In my Ubuntu, it bothers me a lot when I click Chromium and must wait through 5 seconds of disk activity until the main window appears - or Evolution, or anything else. Is it possible to make GNOME or KDE run entirely in RAM like PuppyLinux (of course, I mean frequently used applications and services, not everything) if you have enough of it? I don't care if boot time is longer. I tried using "preload" but it didn't help much.
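
    Part of what makes PuppyLinux feel instant is simply that everything is already in the page cache. A rough sketch of getting some of that effect without changing desktops: at login, read the binaries of a few frequently used applications, plus the shared libraries ldd reports for them, so the first launch does not wait on the disk. The application paths are assumptions, and the kernel may still evict these pages under memory pressure.

        import subprocess

        APPS = ["/usr/bin/chromium-browser", "/usr/bin/evolution"]   # example paths

        def warm(path):
            # Reading a file pulls its pages into the kernel page cache.
            try:
                f = open(path, "rb")
                while f.read(1 << 20):          # 1 MB chunks
                    pass
                f.close()
            except IOError:
                pass

        for app in APPS:
            warm(app)
            output = subprocess.Popen(["ldd", app],
                                      stdout=subprocess.PIPE).communicate()[0]
            for line in output.decode().splitlines():
                parts = line.split()
                # lines look like: libfoo.so.1 => /usr/lib/libfoo.so.1 (0x...)
                if "=>" in parts and len(parts) >= 3 and parts[2].startswith("/"):
                    warm(parts[2])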

    Read the article

  • Why Can't I Pre-Zip Server Files?

    - by ThinkBohemian
    It's just good common sense to have your server gzip your files before it sends them to users (I use Nginx). Is there any way to save the server some overhead and pre-zip those files for it, and if not, why? For instance, rather than giving the server myscript.js and having the server compress the file and send it to the user, is there a way to create myscript.js.zip ahead of time so the server doesn't have to?
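
    Pre-compression is exactly what nginx's ngx_http_gzip_static_module is for: with gzip_static on; it will serve an existing myscript.js.gz sitting next to myscript.js instead of compressing the same bytes on every request, provided the module is compiled in. A sketch for generating the .gz siblings at deploy time (the document root and extension list are assumptions):

        import gzip
        import os
        import shutil

        ROOT = "/var/www/static"                       # hypothetical document root
        EXTENSIONS = (".js", ".css", ".html", ".svg")  # compressible text assets

        for dirpath, _dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                if not name.endswith(EXTENSIONS):
                    continue
                src = os.path.join(dirpath, name)
                # Write foo.js.gz next to foo.js; compress hard, it only runs at deploy.
                with open(src, "rb") as fin, gzip.open(src + ".gz", "wb", 9) as fout:
                    shutil.copyfileobj(fin, fout)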

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible: set up a wildcard cert for *.domain.com; whenever I need an image, generate a number based on a hash mod 5 of the filename, and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash will make any filename always use the same subdomain, and therefore the browser should be able to cache the images. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img. I am looking for feedback about this plan. My concerns are: will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtualhost record I'm talking about even possible? Considering the amount of processing this would require, is it likely to even produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half being changed on each page refresh. Thanks in advance for your feedback.
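
    The "hash mod 5" step described above is only a couple of lines; a sketch (domain and bucket count as described in the question):

        import hashlib

        def image_host(filename, buckets=5, domain="domain.com"):
            # The same filename always hashes to the same imgN host, so the
            # browser can keep reusing its cached copy.
            digest = hashlib.md5(filename.encode()).hexdigest()
            return "img%d.%s" % (int(digest, 16) % buckets, domain)

        def image_url(filename):
            return "https://%s/%s" % (image_host(filename), filename)

        print(image_url("logo.png"))   # e.g. https://img3.domain.com/logo.png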

    Read the article

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop connected via Cat 5e to a Linksys WRT320N (sporting DD-WRT). After making sure that the line speed on both systems showed 1Gbps, I proceeded to copy a 2.4GB MP4 from the Mini to the Win 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (that was a proper step up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing that there were other factors that resulted in less speed, SMB being one. So just for the sake of doing it, I iPerf'd both systems and guess what that showed - around 875Mbps on both systems! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier: 109 MB/s. Molten brains :) What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has, will surely miss them now :D

    Read the article

  • Is there a way to have a working search bar in Explorer with Windows Search Service disabled?

    - by Desmond Hume
    I had to disable the Windows Search service (turn it off in Windows Features) because it was constantly using the hard drive in an excessive way (maybe because I've got very large quantities of files on my PC), noticeably slowing down my computer, and the Windows.edb database file grew way too large, about 2.5 GB in size. But the side effect is that the search bar is now gone from every Explorer window, and I miss this useful feature. So my question is: is there a way to stop Windows Search torturing my hard drive and still be able to search for files and folders directly from Explorer, perhaps using some third-party software?

    Read the article

  • SQL Management Studio is painfully slow on 32-bit Windows 7

    - by Sergei
    I've been having issues running anything in SQL Management Studio on Win 7. Basically, doing anything through the Management Studio interface completely freezes it up for a few minutes. Running a query is nearly impossible, because it takes nearly 2 minutes just for the IDE to parse it and another minute to run it, even though the query itself completes instantaneously outside of the IDE. I'm not even going to go into the query designer. Anything with heavy user interaction, such as editing a row in the result set where I have to click a cell, freezes up the front end. I tried reinstalling to no avail. I also tried running in compatibility mode without any difference whatsoever. Has anybody had a similar experience? I'm running SQL Management Studio 2008, version 10.0.2531.0, on 32-bit Windows 7, connecting to a remote SQL Server instance (2008 R2). Thanks.

    Read the article

  • Linux: accessing thousands of files in a hash of directories

    - by 130490868091234
    I would like to know what is the most efficient way of concurrently accessing thousands of files of a similar size in a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file to index. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99, and I place 1 file at the end of each directory, like ./00/00/00/file000000.ext through ./99/99/99/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
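
    For reference, a sketch of the kind of layout described above: derive a stable ./xx/yy/zz/ path from the file name (here by hashing it rather than by sequential numbering, which also keeps the tree balanced when names are not sequential), so each data file and its index files land in the same small directory:

        import hashlib
        import os

        def hashed_path(root, filename, levels=3):
            # Two decimal digits per level, 00..99, matching the ./00/00/00 scheme.
            n = int(hashlib.md5(filename.encode()).hexdigest(), 16)
            parts = []
            for _ in range(levels):
                parts.append("%02d" % (n % 100))
                n //= 100
            return os.path.join(root, *(parts + [filename]))

        path = hashed_path("/data", "file000123.ext")
        print(path)     # e.g. /data/37/91/04/file000123.ext
        # create os.path.dirname(path) before writing the file and its index files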

    Read the article

  • MacBook Pro 2010 13.3'' 2.4 vs. 2.66GHz

    - by Milde
    Hi, is the 13.3'' MBP at 2.66GHz worth the extra €300 compared to the 2.4GHz version? Which CPUs are installed - P8600/P8800? €300 for 70GB more space and 0.26GHz, or would it be better to spend the €300 on a solid state disk? What's your opinion? Thanks in advance, Milde

    Read the article

  • How do you debug why Windows is slow?

    - by aaron
    I've got Vista Business, and when my machine chugs I think it is because of paging, but I never know how to verify this. Process Explorer (procexp) doesn't seem to provide useful information, because it appears that nothing is going on when the chugs happen. Perfmon seems like it has the counters I need, but I'm never sure which counters I should add to cover the information I want. For perfmon, I prefer numbers that are percentages, so I can gauge load. Here are the counters I have up, but they don't always seem to correlate with the chugs:

        % disk time (logical)
        page faults/sec (an indicator of lots of paging activity)
        processor / % privileged time
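
    A sketch of one way to verify the paging theory: sample the paging-related counters with typeperf (the counter paths below are the English names and may differ on a localized install; the thresholds are arbitrary) and flag intervals where hard faults and disk time spike together, then check whether those timestamps match the chugs.

        import csv
        import subprocess

        COUNTERS = [
            r"\Memory\Pages/sec",
            r"\Memory\Page Reads/sec",           # page faults that actually hit the disk
            r"\PhysicalDisk(_Total)\% Disk Time",
        ]

        # One sample per second for 30 seconds, in CSV form.
        out = subprocess.check_output(
            ["typeperf"] + COUNTERS + ["-si", "1", "-sc", "30"],
            universal_newlines=True)

        rows = [r for r in csv.reader(out.splitlines()) if len(r) == len(COUNTERS) + 1]
        for row in rows[1:]:                     # rows[0] is the CSV header
            try:
                pages, reads, disk = (float(v) for v in row[1:])
            except ValueError:
                continue                         # first sample of a rate counter is blank
            if reads > 50 and disk > 80:
                print("%s  hard faults %.0f/s  disk %.0f%% busy" % (row[0], reads, disk))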

    Read the article

  • What is the fastest way to back up a disk image over the LAN?

    - by David Balažic
    Sometimes I boot sysrescd or a similar live Linux on a PC to back up the hard drive over the local network to my server. I have noticed many times that the transfer speed is not optimal (slower than both the HDD and the network). Any rules of thumb on what to do and what to avoid? What I typically do is something like:

        dd bs=16M if=/dev/sda | nc ...                      # on the client
        nc ... | dd bs=16M of=/destination/disk/backup1     # on the server

    I also "throw" lzop into the pipeline (the others are way too slow) and sometimes an on-the-fly md5sum calculation (of both the uncompressed and compressed source). I try to add (m)buffer (or other alternatives) to improve throughput (and get a progress indicator). I noticed that even with enough free CPU, adding commands to the pipeline slows things down. Typically the destination is on an NTFS volume (accessed via ntfs-3g, with the big_writes option).

    Read the article

  • Justifying a memory upgrade, take 2

    - by AngryHacker
    Previously I asked a question on what metrics I should measure (e.g. before and after) to justify a memory upgrade. Perfmon was suggested. I'd like to know which specific perfmon counters I should be measuring. So far I got:

        PhysicalDisk/Avg. Disk Queue Length (for each drive)
        PhysicalDisk/Avg. Disk Write Queue Length (for each drive)
        PhysicalDisk/Avg. Disk Read Queue Length (for each drive)
        Processor/Processor Time%
        SQLServer:BufferManager/Buffer cache hit ratio

    What other ones should I use?

    Read the article
