Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 84/184

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. Directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages: ext3_dx_add_entry: Directory index full! The disk in question has plenty of inodes remaining: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 60719104 3465660 57253444 6% / So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at--I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here: rm -rf (dir)I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact. unlink(2) on the directory: Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh. while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2/dev/null; done ) This is actually the shortened version; the real one I'm running, which just adds some progress-reporting and a clean stop when we run out of files to delete, is: export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2/dev/null; export i=$(($i+10000)); echo "$i..."; done ) This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Now, for the questions: As mentioned above, is the per-directory entry limit tunable? Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed? Is there a better way to do this sort of thing? Not store millions of files in a directory; I know that's silly, and it wouldn't have happened on my watch. 
Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that. Final script output!: 2970000... 2980000... 2990000... 3000000... 3010000... real 253m59.331s user 0m6.061s sys 5m4.019s So, three million files deleted in a bit over four hours.
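    For reference, here is the batch-delete loop from option #3 as a standalone sketch, cleaned up but functionally the same, assuming (as the grep guard implies) that the files being deleted are .png files:

        #!/bin/bash
        # Batch-delete millions of files from the current directory.
        # -U/-f disable sorting, so ls never has to sort millions of names;
        # -f also implies -a, hence head -n 3 to look past "." and "..".
        i=0
        while ls -Uf | head -n 3 | grep -qF '.png'; do
            ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null
            i=$((i + 10000))
            echo "$i..."    # progress: roughly how many deletions attempted so far
        done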

  • Opening offline version of Microsoft Books Online in browser

    - by ercan
    I often use the MSDN website for language reference. In order to make navigation faster, I downloaded the offline version of SQL Server 2005 Books Online from here: http://www.microsoft.com/downloads/details.aspx?familyid=be6a2c5d-00df-4220-b133-29c1e0b6585f&displaylang=en The reason it is 137 MB is that it comes with its own GUI, which, not surprisingly, is rather poor! Apparently, though, the pages are written in HTML. The URIs look like ms-help://MS.SQLCC.v9/sqlcc9/html/674933a8-e423-4d44-a39b-2a997e2c2333.htm. I can open such a URI in IE, but with errors. Do you know if I can open them with Firefox, and how? Or is there a simple HTML version of "MS Books Online", for example in a ZIP file?

  • Can Resource Governor for SQL Server 2008 be scripted?

    - by blueberryfields
    I'm looking for a method to adjust Resource Governor settings automatically, in real time. Here's an example: imagine that I have 10 applications, each hitting a different database on the same database machine. For normal operations they do not hit the database very hard, so I might want each one to have 10% CPU power reserved. Occasionally, though, one or two of them might spike and run an operation which could really use the extra power to run faster. I'd like to be able to adjust to compensate (say, reducing the non-spiking apps to 3% and splitting the difference between the spiking apps). This is a kind of poor man's method of trying to dynamically adjust resource allocation and priorities. Scripts (or something script-like) are preferred, since the requirement is for meta-level adjustments to be possible in real time.
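    For what it's worth, Resource Governor settings are ordinary T-SQL (ALTER RESOURCE POOL followed by ALTER RESOURCE GOVERNOR RECONFIGURE), so they can be driven from any scheduled script. A minimal sketch; the server name, pool name, and percentage are placeholders:

        # throttle one application's pool on the fly, then apply the change
        sqlcmd -S myserver -Q "ALTER RESOURCE POOL quiet_app_pool WITH (MAX_CPU_PERCENT = 3); ALTER RESOURCE GOVERNOR RECONFIGURE;"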

  • Reducing latency for different geographic regions on Amazon Cloud

    - by Shoaibi
    I have an application with three components: application code on an Amazon EC2 US-EAST-1 instance; application images and other static data on Amazon S3 with CloudFront; and the application database on Amazon RDS. In short, I need something like CloudFront for EC2. At greater length: people using this application from a different region, say the Middle East, will get faster static content downloads thanks to CloudFront, but there would be a lot of latency in communicating with the EC2 instance. I want a budget-friendly way of improving this. Launching Amazon instances in every region on offer is certainly a choice, but it isn't really cheap, so I'd try to avoid it unless it's a last resort. Also, if my clients need to communicate with the RDS database directly, is there some kind of solution that gives the same kind of functionality mentioned above, but for RDS?

  • How to optimize VirtualBox shared folders

    - by Nrew
    This is really pissing me off. No matter how much memory I give the guest OS (Windows XP), it still hangs for what feels like 365 days before you can access the file you want from the shared folder. What do I do to make things faster? Because after it hangs and doesn't respond for 365 days, it will do it again for another 250 days. I've even set the shared folder to permanent. This is a fairly decent machine: 2.50 GHz processor (x64 architecture, but I have only 2 GB of memory, so my host OS is just 32-bit Windows 7), and the HDD has plenty of space left: 156 GB free of 250 GB.
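    If the stall is specific to browsing the vboxsf share, one workaround sometimes suggested is to map the shared folder as a network drive inside the XP guest instead of relying on the Explorer integration. A sketch, assuming a shared folder named "share" is configured on the host:

        rem map the VirtualBox shared folder to a drive letter (the name is a placeholder)
        net use X: \\vboxsvr\share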

  • What is the best file system and allocation size for a USB flash drive?

    - by e-t172
    I'm considering using my 4 GB Kingston DataTraveler USB stick to store my Firefox and Thunderbird profiles for my laptop and desktop PCs. I want to maximize performance when using Firefox. The question is: what are the best file system and allocation size for the fastest Firefox profile operation on a USB flash drive? I'm using Windows 7 on both machines and I don't care about compatibility or the drive's lifetime; I just want to maximize performance. I could even use ext2 with the Ext2 IFS driver if that means it'll be faster. I'm assuming (perhaps wrongly) that putting a Firefox profile on a USB stick is a "lots of small files" usage. In that case, it seems that NTFS would perform best, but I'm not sure. Besides, I've found nothing regarding the best allocation size to use. Considering that the default allocation size is designed for hard drives (which have different characteristics), I'm assuming the default is not the best.
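    If you do try the ext2 route, note that block size is fixed at format time. A sketch, assuming the stick shows up as /dev/sdb1 on a Linux machine; the device name is a placeholder, so double-check it before formatting:

        # 1 KiB blocks are a common choice for lots-of-small-files workloads
        mkfs.ext2 -b 1024 -L ff-profile /dev/sdb1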

  • HD movies stutter

    - by Absolute0
    I just put together a new system build in the hope that all of my daily tasks would run smoothly and without any hiccups. Unfortunately I am still seeing some sound and sometimes video stuttering when playing HD movies in VLC (no problems with XviD/DivX files). My setup is as follows: Intel Core i5 750 quad-core, 2.66 GHz; 4 GB RAM; ASUS P55 motherboard; Radeon HD 5570 video card; 650 GB 7200-RPM Western Digital SATA HDD; 23" NEC EA23WMI monitor. Operating system: Windows 7. What might be the main bottleneck that needs upgrading to fix my delays? It seems like the hard drive might be the problem, but anything faster than 7200 RPM is beyond my budget for a decent hard drive. Could it be anything else?

  • EBS+RAID10+XFS slower than EBS+RAID10+EXT3 using MySQL?

    - by Johann Tagle
    We're currently using EC2 with 16 EBS volumes in a RAID10 configuration for our MySQL data. I know some people don't recommend putting EBS volumes in RAID, but that's not what I'm concerned about at the moment. The current format is ext3, but we're experimenting with moving to xfs, given many reports that it is faster. However, we're actually experiencing a performance degradation after the partition was converted to xfs: a benchmark run with inserts, updates, selects and deletes was more than 10 seconds slower using xfs. Any idea what the problem could be? Below is the fstab entry (really, only ext3 was changed to xfs). The database tables are InnoDB and we are using innodb_file_per_table.

        /dev/mapper/vg_data-lv_data /data xfs noatime 0 0

    Thanks.
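    One avenue worth exploring: xfs often needs mount-option tuning before it matches ext3 on write-heavy InnoDB workloads. A sketch of commonly suggested options to benchmark against the plain noatime line above; note that nobarrier trades crash safety for speed, so treat this as an experiment, not a recommendation:

        /dev/mapper/vg_data-lv_data /data xfs noatime,nobarrier,logbufs=8,logbsize=256k 0 0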

  • Mouse for a lefty?

    - by Kyralessa
    As a programmer, I'm fairly particular about my keyboard and mouse. One thing I've only recently noticed is that the mice I tend to prefer don't work well for my five-year-old. He's left-handed, but ends up mousing with his right hand on the computer because (a) that's where the mouse is, and (b) most mice are designed for right-handed people anyway. So, a couple of questions: If you're left-handed and you mouse with your left hand, do you have any recommendations on good mice? If you have a lefty in your household (whether or not it's you), is there an easy way to swap settings between left-handed and right-handed mouse buttons? One way is Control Panel > Mouse > check the box on the first tab > OK. Do you know of anything faster? Or, better yet, what I'd really like is a way to reverse those settings on only one particular mouse, which would be designated as my son's mouse.
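    As for scripting the swap, the Control Panel checkbox just flips one per-user registry value, so it can be toggled from a shortcut or batch file. A sketch; the change generally takes effect at the next logon unless something rereads the setting:

        rem set /d to 1 to swap the buttons, 0 to restore them
        reg add "HKCU\Control Panel\Mouse" /v SwapMouseButtons /t REG_SZ /d 1 /f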

  • PS/2 vs USB keyboards: performance and energy consumption

    - by Mister Smith
    As far as I know, PS/2 keyboards are interrupt-driven, while USB ones are polled. Typically a PS/2 keyboard was assigned IRQ 1 on Windows. I'm no hardware expert, but at first glance it seems like PS/2 keyboards are more efficient. So here are my questions: On modern-day computers, are PS/2 keyboards better (or faster), and if so, would it be noticeable at all (e.g. in gaming)? Since they don't need polling, do PS/2 keyboards save energy compared to USB? (Notice I'm not talking only about the peripheral here, but about the computer's overall energy consumption.) In case PS/2 has any advantage over USB, would adding a PS/2 adapter to my USB keyboard make the device as good as an actual PS/2 keyboard? Conversely, would adding a USB adapter to a PS/2 keyboard make it as bad as a USB one? Thanks in advance.

  • Gateway MX6440 CPU Upgrade

    - by BPugh
    I received a Gateway MX6440 laptop as a freebie, but I'm interested in upgrading its AMD Turion 64 ML-32 (Socket 754) to something faster (and with more cache). I know the range of processors that could work, based on the family list in Wikipedia. However, this computer has the stock BIOS, and the updates I haven't yet applied from Gateway don't specify processor support. I'm looking to go to at least a 2.2 GHz part (ML-40). Has anybody upgraded the processor in this model, or others in the series, with success or failure? And do you happen to have any guides handy for working with the heat sink? Any Googling I do keeps hitting RAM marketers. Update: the computer died before I had a chance to try this out.

  • Nginx won't start: "fastcgi_split_path_info" error

    - by Ke
    Hi, I heard that nginx is faster, and since I'm on a VPS with low RAM I thought I'd try it out. I went through this tutorial: http://www.howtoforge.com/installing-php-5.3-nginx-and-php-fpm-on-ubuntu-debian But I now get the following error:

        unknown directive "fastcgi_split_path_info" in /etc/nginx/sites-enabled/default:28

    Does anyone know what might be causing the problem? I can't find any reference to it on Google. Also, I have heard conflicting things about nginx vs. Apache; some say use one, some say the other. I'm using all sorts of things, such as rewrite rules, proxies, etc. Am I setting myself up for a fall by using nginx? If I go for Apache, does anyone know of any way to tweak it so that it performs better on a low-RAM VPS? Cheers, Ke
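    For what it's worth, fastcgi_split_path_info was added in nginx 0.7.31, so an "unknown directive" error usually means the packaged nginx is older than the tutorial assumes. A quick check:

        # print the installed version; the directive needs nginx >= 0.7.31
        nginx -v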

  • Building a new cluster for mathematical calculations (Win/Lin)

    - by Muhammad Farhan
    I would like to build a new cluster to perform heavy mathematical calculations in Matlab and Abaqus. A friend of mine told me that distributed computing is way faster than parallel computing, which a bit of reading on the internet seemed to confirm. However, I have never built a cluster before. The current workstation I own: Dell Precision T5400; 2 x Intel Xeon 2.5 GHz; 16 GB RAM (2 GB x 8); 1 x Western Digital 1 TB HDD, 7200 RPM; 1 x NVIDIA Quadro FX4600 768 MB GPU; 1 x 870 W PSU; OS: Windows 7 Ultimate 64-bit. Second WS: I can buy another WS with a configuration similar to the one I own. I am not bothered about the OS; I am willing to cluster with either Windows or Linux. However, my software is compatible with Windows 64-bit only. Please help me set up a cluster. Thank you.

  • Slow old notebook Hardy => Karmic

    - by Mailo
    Hi, I have one very slow notebook from around the year 2000. The computer runs IceWM with Firefox (these days Chromium, for testing). My question is whether it's a good step to upgrade the system to Karmic Koala. I can't install another OS on it: it doesn't have a CD-ROM and can't boot from flash or the network. The desired new state is a slightly faster system for browsing the web and copying photos to a local NAS. I won't mention the hardware configuration, because its real speed is really far below the paper parameters.

  • Alternatives for heapdumps creation with higher performance than jmap?

    - by Christian
    Hi, I have to create heap dumps, which works nicely with jmap. My problem is that jmap takes very long to create the heap dump file, especially when the heap is getting bigger (> 1 GB). One situation as an example: when the server gets into trouble with the heap space, I want to restart it automatically and create a heap dump before the restart. This works, but takes too long to write the heap dump, so the server is down for too long; the heap dump creation takes longer than one hour. I know about -XX:+HeapDumpOnOutOfMemoryError, but most of the time I can find the memory problem before the exception is thrown by the JVM. Is there an alternative to jmap which writes heap dumps faster? A special solution for the example above would also be appreciated. This question is a mix between programming and system administration, but I think I'm at the right place here.
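    One technique sometimes used for exactly this restart scenario: snapshot the process with gcore (from gdb), which completes in seconds, restart the server immediately, and let jmap extract the heap from the core file offline. A sketch; the paths and $JVM_PID are placeholders:

        # dump a core image of the running JVM (seconds, not hours), then restart it
        gcore -o /tmp/jvm.core $JVM_PID
        # later, extract an hprof heap dump from the core at leisure;
        # the java binary must be the same one the server was running
        jmap -dump:format=b,file=/tmp/heap.hprof $(which java) /tmp/jvm.core.$JVM_PID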

  • Fedora 12: slow USB 2.0 write speed, ehci_hcd module is missing

    - by MA1
    I am using Fedora 12; the problem I am facing is USB 2.0 write speed. I have a dual-boot system with Windows XP and Fedora 12, and the USB 2.0 write speed in Windows XP is much faster than what I am getting in Fedora 12. After searching Google I came to know that the ehci_hcd module is missing/not present on my system: it is neither loaded nor present in the list of available modules. Can someone guide me on how to fix this issue? Does ehci_hcd have something to do with USB 2.0 write speed? Do I have to recompile the kernel to add/enable the ehci_hcd module?
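    ehci_hcd is indeed the USB 2.0 host-controller driver; without it, ports fall back to the uhci/ohci drivers at USB 1.1 speeds. Before rebuilding anything, it may be worth checking whether the driver is merely unloaded, or built into the kernel rather than shipped as a module. A sketch:

        lsmod | grep ehci          # is the EHCI driver loaded as a module?
        dmesg | grep -i ehci       # did the kernel claim an EHCI controller at boot?
        sudo modprobe ehci_hcd     # try loading it by hand before recompiling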

  • Very low throughput on 10GbE network

    - by aix
    I have two Linux machines, each equipped with a Solarflare SFN5122F 10GbE NIC. The two NICs are connected together with an SFP+ direct-attach cable. I am using netperf to measure TCP throughput between the two machines. On one box, I run:

        netserver

    and on the other:

        netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768

    I get:

        MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.x.x (192.168.x.x) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

         87380  16384  32768    10.02    1321.34

    The measured throughput is 1.3 Gb/s. This is 7.5x below the theoretical maximum, and only 30% faster than 1GbE. What steps can I take to troubleshoot this?
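    A few first checks that commonly explain underperforming 10GbE links; a sketch, assuming the interface is eth2 (adjust the name):

        ethtool eth2 | grep -i speed                  # did the link negotiate 10000Mb/s?
        ip link show eth2                             # check the MTU; jumbo frames often help
        sysctl net.core.rmem_max net.core.wmem_max    # socket-buffer ceilings
        # retry with explicitly large socket buffers (sizes in bytes)
        netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768 -s 1048576 -S 1048576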

  • Changing IP every sec with Firefox

    - by Carol
    I'm looking for an IP changer that is faster than a proxy. (I tried Elite Proxy Switcher plus a Firefox add-on, but it's too slow: I set automatic switching to 4 seconds, and yes, it changes the IP every 4 seconds, but that's not enough, because it loads pages very slowly.) Secondly, I tried the Tor Project, but that's not good either: Tor would be nice and works well, but it needs more than 10 seconds for a new identity, and that won't do, because I want to change my IP in less than 10 seconds. Then I found a candidate: IPfucker, alias ipFlood (https://addons.mozilla.org/en-us/firefox.../ipflood/). But it unfortunately does not work on all sites, because it is just a simulation ("Simulate the use of a series of proxy changing at each new connection."). Does anyone know a solution to the problem? Is there an alternative (VPS, proxy, Tor)? Thanks in advance.

  • How to speed up Apache

    - by Zen_silence
    We have a server with 8 cores, 16 GB of RAM and RAID 0 SAS 10K drives. Our goal is to use it to serve a fairly simple PHP application quickly. We have tested all the other components, and we think we have narrowed it down to Apache as our bottleneck. I am no Apache guru; I have done some research and tested a couple of things, but when I test with JMeter, launching 100 concurrent connections against the server, the first 10-20 come back quickly (30-100 ms) while the rest take between 1,000 ms and 3,000 ms. Anyone have any ideas on what to change in our Apache config to make this faster? Right now it's a vanilla install of Apache.
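    That shape (the first wave fast, the rest queuing) is typical of a stock worker pool being exhausted. If you're on the prefork MPM, these are the knobs to experiment with; a sketch only, and the numbers are illustrative starting points, not a prescription:

        # in httpd.conf (Apache 2.2-era prefork MPM)
        <IfModule prefork.c>
            StartServers         20
            MinSpareServers      20
            MaxSpareServers      50
            ServerLimit         256
            MaxClients          256
            MaxRequestsPerChild 4000
        </IfModule>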

  • Godaddy vs. Route53 for DNS

    - by tim peterson
    I have my website set up as an EC2 instance, and my DNS is currently GoDaddy. I'm considering switching to Amazon AWS Route 53 for DNS. The one thing I noticed, however, is that Route 53 charges monthly fees, but I never get any bills from GoDaddy. Obviously, nobody likes getting charged for something they can get for free. If GoDaddy is cheaper, can anyone confirm that the page load speed of an EC2 instance is actually better via Route 53 vs. GoDaddy? If it is not faster or cheaper, can someone point out other reasons it might make sense to make this switch? thanks, tim
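    Resolver latency is measurable rather than a matter of opinion; a sketch comparing query time against each provider's nameservers, where the nameserver and domain names are placeholders for whatever your zone actually lists:

        # query time against the zone's current (GoDaddy) nameserver
        dig @ns1.domaincontrol.com example.com A | grep "Query time"
        # query time against the Route 53 nameserver after migration
        dig @ns-123.awsdns-45.com example.com A | grep "Query time"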

  • VirtualBox in production?

    - by MrG
    I'm planning to move a service which is currently powered by Debian into a VirtualBox. That would allow us to easily port it, e.g. to a faster machine if required. The setup would be:

        debian host
          > VirtualBox #1 > debian instance #1 running Apache & the application
          > VirtualBox #2 > debian instance #2 containing the database

    Do you have any experience with a production setup based on VirtualBox? Is it stable and fast enough? Many thanks!
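    For a server host you would normally run the guests headless and drive them from the host shell; a sketch, with placeholder VM names:

        VBoxManage startvm "debian-app" --type headless    # boot with no GUI window
        VBoxManage list runningvms                         # confirm both guests are up
        VBoxManage controlvm "debian-app" acpipowerbutton  # ask the guest to shut down cleanly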

  • Ubuntu Server 10 - slow and can't remove desktop environment

    - by Alex
    I'm running Ubuntu Server 10.10 with the desktop environment installed. Simple page requests are taking over 5 seconds, even when connecting to the server through our local network. I believe this is partially related to having the desktop environment installed, as the server worked faster without it (though not as fast as it should, considering that it's on the local network), but tasksel fails every time (aptitude exits with status 100). My knowledge of networking and Linux in general is limited; I would really appreciate ideas on how I can troubleshoot this problem. Oh, also: in the system monitor, one of the processors is almost always around 100%. I doubt this is normal, too...
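    Since tasksel keeps failing, a commonly suggested alternative is to purge the desktop metapackage directly and let apt strip its now-orphaned dependencies; a sketch, and worth simulating first. Note that ubuntu-desktop is only a metapackage, so the autoremove pass does most of the actual work:

        sudo apt-get -s purge ubuntu-desktop    # simulate first: see what would go
        sudo apt-get purge ubuntu-desktop
        sudo apt-get autoremove --purge         # remove the orphaned GUI packages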

  • ASUS EAH4670 vs. ASUS EAH4670 V2 -- Whats the difference?

    - by roosteronacid
    I've been offered the chance to buy a used ASUS EAH4670/DI/1GD3 graphics card. I went price-scouting to see if the offer I was given was fair, and I found out that there's a similar card labelled ASUS EAH4670/DI/1GD3 V2. The question is: what's the difference? What's with the V2? What does it mean? Is it just a BIOS upgrade that I can do myself? Updated driver software which I can download myself? Or is the card actually a better version (faster, more reliable due to physical changes to the print), etc.?

  • Computer becomes very slow (permanently) after running a bunch of apps

    - by djzmo
    Hello there, my computer with Windows XP installed becomes very slow after I run some heavy tasks at the same time (playing a full-3D online game while extracting a 4 GB RAR archive). It freezes for about 200-500 ms every few seconds, and this always happens if I do heavy tasks at once on my computer (for example, installing a program while playing a game); the lag then remains permanently (even a reboot won't make it better) unless I repair-install Windows. I have a low-end computer: Intel Pentium 4 CPU 2.00 GHz, 512 MB of RAM, ATI Radeon 9550 AGP 256 MB. So far the only way I've found to fix this problem is by repair-installing my Windows XP, so that I won't lose any data or installed programs. But I believe there's a better and faster way to fix this without repair-installing Windows. Any ideas?
