Search Results

Search found 10910 results on 437 pages for 'john speed'.


  • Monitor and copy file changes on Windows Server 2003 over NFS or CIFS to *nix

    - by davenolan
    Machine A: Windows Server 2003. Machine B: Ubuntu 9.04. The aim is to copy new and updated files as fast as possible from A to B. B can mount A either as CIFS or NFS (Services for Unix NFS server running on A). This is an absolutely time-critical operation. What is the best way of achieving this? Note: I benchmarked NFS vs. CIFS, and CIFS was faster with less variance in speed (I haven't tuned the NFS setup at all).
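
    Given that CIFS benchmarked faster, one possible approach is to poll the CIFS mount with rsync so only new and changed files are transferred; the mount point, destination and poll interval below are assumptions, not part of the question:

        # a minimal sketch, assuming A is mounted at /mnt/a via CIFS and /srv/b is the target on B
        while true; do
            rsync -a --whole-file /mnt/a/ /srv/b/   # --whole-file skips delta transfer; usually faster on a LAN
            sleep 5                                  # arbitrary poll interval
        done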

    Read the article

  • Adding more RAM at a different speed? Will it hurt performance enough to make it worse than not adding it?

    - by user1676874
    I got a new laptop with 4 GB of RAM, expandable to 8 GB. It has one 4 GB stick of DDR3 PC3-12800 at 1600 MHz. I can't seem to find another one exactly the same locally; the closest I've found is one 4 GB stick of DDR3 PC3-10600 at 1333 MHz. So my question is: I know they will both run at the slower speed, so even though I'd have more RAM available, it would be slower. Is the performance loss big enough to make the upgrade not worth the hassle?

    Read the article

  • How to clone or copy running Windows 7 to a child partition

    - by saad
    Is there any way to clone partition-to-partition in Windows 7 for free, using some kind of command-line tool so that I can set the block size to increase speed? I googled and found tools like dd for Windows and dcfldd, but when I use them I get errors like "access denied" and "permission denied". I tried logging in as administrator (enabled with net user administrator on), but it's the same problem: dcfldd bs=4096 if=\\.\k: of=\\.\m: fails, while creating an image file works: dcfldd bs=4096 if=\\.\k: of=\\.\M:\filename.ext. Some help on this would be appreciated, thanks.
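
    If the elevation level is the culprit, a minimal sketch (the drive letters are taken from the question; everything else is an assumption):

        :: run this from a Command Prompt started with "Run as administrator" --
        :: being logged in as an administrator is not enough under UAC, and an
        :: unelevated handle on \\.\k: or \\.\m: typically fails with "access denied"
        :: (the target volume must also not be in use while it is overwritten)
        dcfldd bs=4096 if=\\.\k: of=\\.\m: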

    Read the article

  • A few questions on BitTorrent

    - by user23950
    Details: the torrent client is BitLord. Is it possible to continue torrent downloads on another computer? I'm at about 38% of the 8 GB file that I'm downloading, and my ISP suddenly dropped my speed from 90-100 kbps to 40-48 kbps (maybe because I'm producing a lot of traffic with the 8 hrs/day downloading). If it's possible, what do I need to do in order for the download to be continued on the other computer?

    Read the article

  • How stable is zfs-fuse 0.6.9 on Linux?

    - by Mavrik
    I'm thinking of using ZFS for my home-made NAS array. I would have 4 HDDs in raidz on an Ubuntu Server 10.04 machine. I'd like to use the snapshot capability and dedup when storing data. I'm not so concerned about speed, since the machine is accessed over an 802.11n wireless network, which is probably going to be the bottleneck. So does anyone have any practical experience with zfs-fuse 0.6.9 in such (or a similar) configuration?
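
    For reference, a minimal sketch of the pool described above. The device names and pool name are assumptions, and whether zfs-fuse 0.6.9's pool version is new enough for dedup is exactly the kind of thing to verify first:

        # assumes the zfs-fuse daemon is running and the four disks are sdb..sde
        sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
        sudo zfs set dedup=on tank           # needs pool version >= 21; may not exist in 0.6.9
        sudo zfs snapshot tank@$(date +%F)   # snapshots are near-instant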

    Read the article

  • Deploy RoR on Apache 1.3 without admin permissions

    - by blinry
    At work I have a SuSE 7.3 machine running Apache 1.3.20, which I don't have admin access to. I'd like to deploy Ruby on Rails with no or very little work for the admins. I need the service to keep running all the time, even if the server is rebooted; I need it to run faster than CGI speed; and I'd like a plain domain without port numbers. What are my options?
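
    One route that keeps the admins mostly out of it, sketched below: run a standalone app server (mongrel is used here purely as an example) on an unprivileged port and restart it from your own crontab. Hiding the port behind a plain domain would still take a one-line proxy or rewrite rule in Apache 1.3, which is the "very little work" part for the admins.

        # a minimal sketch, assuming the mongrel gem is installed under your home directory
        mongrel_rails start -d -p 8000 -e production   # daemonized Rails server on port 8000
        # to survive reboots without root, add a user crontab entry (crontab -e):
        #   @reboot cd $HOME/myapp && mongrel_rails start -d -p 8000 -e production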

    Read the article

  • Satellite data in Russia?

    - by Eddy
    Anyone familiar with options for transmitting data in Russia? I'd be interested in hearing about low-speed packet data and faster. Not really looking at VSAT initially as I'd like to keep the power requirements low unless we find no other options.

    Read the article

  • Index a set of files to search their text quickly?

    - by Ricket
    I have a unique need: I am frequently searching a large set of text files for a keyword. Right now, I open up Notepad++ and use the "Find in files" feature. It works just fine, but with the amount of files, each search takes several minutes to complete. Is there a good program more suited for this purpose, perhaps that indexes a set of files and then lets you search the set repeatedly and very quickly? It would greatly speed up my workflow.

    Read the article

  • Ruby on Rails Gitorious setup on Ubuntu

    - by dogmatic69
    I've been trying to install Gitorious for a while now, which requires Ruby, Rails, etc. I've finally got Rails pages serving, but I can't finish the installation of Gitorious because the gem version is too new. The error logs say to run 'rake ultrasphinx:configure', and that gives:

        rake ultrasphinx:configure
        (in /var/www/apps/gitorious)
        rake aborted!
        uninitialized constant ActiveSupport::Dependencies::Mutex
        /var/www/apps/gitorious/Rakefile:10:in `require'
        (See full trace by running task with --trace)

    From searching Google, this is because of the wrong gem version, and I can't find a way to downgrade it. Apparently sudo gem update --system 1.4.2 should do the trick, but Ubuntu 10.10 does not like this:

        ERROR: While executing gem ... (RuntimeError)
        gem update --system is disabled on Debian, because it will overwrite the content of the rubygems Debian package, and might break your Debian system in subtle ways. The Debian-supported way to update rubygems is through apt-get, using Debian official repositories. If you really know what you are doing, you can still update rubygems by setting the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that this is completely unsupported by Debian.

    So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc, and still the same. I've tried various forms of setting this environment variable, with no luck. I've also been told on the #gitorious IRC channel to add the file config/initializers/rubygems.rb with the line require "thread" in it. This has done nothing.

    EDIT: Just found another way, rvm install rubygems 1.4.2, and it gave:

        Removing old Rubygems files...
        Installing rubygems dedicated to ruby-1.8.7-p334...
        Retrieving rubygems-1.4.2
        (curl download progress elided)
        Extracting rubygems-1.4.2 ...
        Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
        ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log
        WARN: Installation of rubygems did not complete successfully.

    TL;DR: please tell me how to downgrade RubyGems on Ubuntu 10.10, or how to upgrade Gitorious to work with 1.6.2 gems.
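
    One detail worth checking before anything else, since the .bashrc export had no effect: sudo strips the caller's environment by default, so the variable never reaches gem. A sketch of passing it on the sudo command line itself (that this is the only blocker is an assumption):

        # a minimal sketch: the variable must be visible to the gem process, not just your shell
        sudo REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2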

    Read the article

  • Assignment to LAN queues (qWAN1 & qWAN2) from dual-WAN in pfSense

    - by Kurian
    I need help creating floating rules in pfSense for the following: I have two upper-limited download queues, qWAN1 and qWAN2, on my LAN interface. Each has its own qACK1, qDefault1, etc. How do I assign all unclassified traffic from WAN1 to LAN->qWAN1->qDefault1, and from WAN2 to LAN->qWAN2->qDefault2? qLink is the one and only default queue on LAN, so that I get wire speed to the pfSense host from the LAN.

    Read the article

  • Port is not open in torrent client even though it was forwarded

    - by aukxn
    I have a problem with port forwarding for my torrent client. A port-check tool says the port is open, and the firewall exception is in place, but the client's own test says it isn't. I don't know why. Can anybody help me fix this problem? The download and upload speeds in the torrent client are very slow. I don't have enough reputation to post images, so here are the links: http://i.stack.imgur.com/tgOdr.png http://i.stack.imgur.com/OgjX4.png

    Read the article

  • Linux filesystem with inodes close together on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2 TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it?

    As far as I understand, the reason ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with:

    1. Create the filesystem on an SSD, because seek operations on SSDs are fast. This wouldn't work, because a 2 TB SSD doesn't exist, or is prohibitively expensive.

    2. Create a filesystem which spans two block devices, an SSD and a disk: the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

    3. Use find /media/myfs on ext2, ext3 or ext4 instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

    4. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem, since I have only a few dozen such files in my use case.

    5. Adjust some settings in /proc or sysctl so that inodes are locked in system memory forever (see the sketch after this list). This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

    6. Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
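
    Regarding option 5 above, a minimal sketch, under the assumption that the dentry/inode cache can hold the whole tree; vm.vfs_cache_pressure=0 tells the kernel never to reclaim that cache, which can starve the rest of the system of memory. Whether POSIX extended attributes stay cached this way is filesystem-dependent:

        # never reclaim cached dentries/inodes (risky: can crowd out the page cache)
        sudo sysctl vm.vfs_cache_pressure=0
        # one slow warm-up pass populates the cache; later runs hit memory only
        ls -laR /media/myfs > /dev/null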

    Read the article

  • How to install Dvorak Type 2 (ii) German on Ubuntu using Gnome

    - by Peter Lustig
    Currently I am using standard Dvorak on Ubuntu 13.10 and GNOME 3.10. Unfortunately, writing umlauts (ä, ö, ü) in German requires me to switch to QWERTY/QWERTZ frequently, or forces me not to write those umlauts (which looks strange to German native speakers). Is there a way to use Dvorak Type 2, including the umlauts, but otherwise the standard English layout, on Ubuntu with GNOME? I'm a fast typist on standard English Dvorak and would like to avoid fully switching to German Dvorak, as this would (at least temporarily) reduce my typing speed.
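
    A possible starting point, sketched below. Whether xkb's German "dvorak" variant corresponds exactly to Type 2 is an assumption worth checking before committing:

        # a minimal sketch; setxkbmap only affects the current session
        setxkbmap -layout de -variant dvorak
        # to keep it available next to US Dvorak in GNOME 3.10's input sources:
        gsettings set org.gnome.desktop.input-sources sources "[('xkb', 'de+dvorak'), ('xkb', 'us+dvorak')]"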

    Read the article

  • Configure Apache with an .htaccess file to strip out unneeded response headers

    - by Koning Baard XIV
    For ultimate speed, I want my Apache server to strip unneeded headers from the response. Currently, the headers look like this (excluding the status header):

        Connection: Keep-Alive
        Content-Length: 200
        Content-Type: text/html
        Date: Sat, 15 May 2010 16:28:37 GMT
        Keep-Alive: timeout=5, max=100
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Phusion_Passenger/2.2.7
        X-Powered-By: PHP/5.3.1

    which I want to look like:

        Connection: Keep-Alive
        Content-Type: text/html
        Keep-Alive: timeout=5, max=100

    How can I configure this in an .htaccess file? Thanks
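
    A partial answer, sketched as an .htaccess fragment: of the headers listed, realistically only X-Powered-By can be dropped this way (it also disappears with expose_php = Off in php.ini). Connection, Keep-Alive, Date and Content-Length are generated by the Apache core, and Server can only be shortened via ServerTokens in the main server config, not in .htaccess. This assumes mod_headers is loaded:

        <IfModule mod_headers.c>
            # drop the PHP advertisement header; the remaining headers are core-generated
            Header unset X-Powered-By
        </IfModule>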

    Read the article

  • Why should I use Amazon Route 53 over my registrar's DNS servers?

    - by Abtin Forouzandeh
    I am building a site that I anticipate will have high usage. Currently, my registrar (GoDaddy) is handling DNS. However, Amazon's Route 53 looks interesting. They promise high speed and offer globally distributed DNS servers and a programmable interface. While GoDaddy doesn't offer a programmable interface, I assume their servers are geographically distributed as well. What are the main reasons I should opt to use Amazon Route 53 over free registrar-based DNS?

    Read the article

  • How much does it cost to run your own hosting server?

    - by Mirage
    I currently have a VPS at my company, and there I host about 20 websites. My company wants to set up a server locally where they can host all the websites, rather than using a third-party VPS. What would it cost, e.g. upload/download bandwidth compared to a data centre, the cPanel licence, IP registration, hardware, backups, backup power, and any other costs? I would prefer CentOS.

    Read the article

  • How to permanently save power options in Windows 7

    - by Ieyasu Sawada
    How can I permanently set the hard disk to never turn off, and sleep and hibernate to never? The options reset to their default values when I shut down, log off or restart my computer. I am using Granola on my laptop, set to the lowest speed, and when I restart it returns to full power again. Does it have something to do with the power options resetting to their default values?
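
    If the goal is to pin those timeouts regardless of what the GUI shows, a sketch using powercfg from an elevated command prompt; a value of 0 means "never" (-ac applies when plugged in, -dc on battery). Whether Granola rewrites the active plan at startup is a separate question worth ruling out:

        :: a minimal sketch for the plugged-in (AC) settings
        powercfg -change -disk-timeout-ac 0
        powercfg -change -standby-timeout-ac 0
        powercfg -change -hibernate-timeout-ac 0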

    Read the article

  • How should I choose my DNS?

    - by Jader Dias
    When I have to choose my DNS, I think that I should consider:

    - Speed
    - Reliability
    - Privacy
    - Control (reports and stats)

    The main options that come to my mind, and how I weigh them according to the above factors, are:

    - My ISP = faster (closer to me) but less privacy (they can associate my DNS requests with me)
    - OpenDNS and such = more control and more privacy (all they have is one of my e-mail addresses)
    - Google = less privacy (they can associate my DNS requests with my Google Account and my searches)

    What weighting factors, or other options, have I missed?
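
    For the speed factor, resolver latency is directly measurable; a quick sketch, assuming dig is installed (the resolver IPs are just examples, the last being a typical ISP router address):

        # compare query latency across candidate resolvers
        for ns in 8.8.8.8 208.67.222.222 192.168.1.1; do
            echo -n "$ns: "
            dig @$ns example.com +noall +stats | awk '/Query time/ {print $4 " ms"}'
        done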

    Read the article

  • Methods for preventing large-scale data scraping from a REST API

    - by Simon Kenyon Shepard
    I know the immediate answer to this is going to be that there is no 100% reliable method of doing this. But I'd like to create a question that details the different possibilities, the difficulty of implementing them, and their success rates. I would like to go from simple software IP/request-rate analysis up to sophisticated software/hardware tools, e.g. neural networks, with the goal of predicting and preventing bogus requests and attempts to scrape the service. Many thanks.
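
    At the simple end of that spectrum, per-IP request counts from the web server's access log already expose naive scrapers; a sketch (the log path and the threshold for "far above human browsing rates" are assumptions):

        # top 20 requesters by request count; sustained outliers are throttling candidates
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20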

    Read the article
