Search Results

Search found 8185 results on 328 pages for 'transfer encoding'.

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37   19.5     34    319
        Processing:    80 6273 1673.7   6907   8987
        Waiting:       47 3436 2085.2   3345   8856
        Total:        115 6310 1675.8   6940   9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 requests per second too slow?
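
    As a rough guide to how those headline numbers relate (my note, not part of the
    original question): requests per second is completed requests divided by wall time,
    and the per-request mean scales with the concurrency level. A minimal sketch,
    assuming ab is installed and the URL is still reachable:

        # 1000 requests / 37.030 s ≈ 27.01 req/s; 200 concurrent / 27.01 ≈ 7406 ms/request
        # Re-run at lower concurrency to see how much of the latency is pure saturation:
        ab -n 1000 -c 10 http://texteli.com/4f84b59c557eb79321000dfa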

  • Accidentally ejected my Verbatim drive and can't get the icon back

    - by Erin
    Hi, I have Time Machine running on my iMac (OS X v10.5.8) and also have a 1 TB Verbatim drive attached that I use as a workspace/scratch disk, so I can manipulate large music files before I transfer them. When cleaning behind my computer the other day, I think I dislodged the connection (or maybe one of the kids hit the eject button, I don't know). I've rebooted many times and it hasn't reconnected. It doesn't appear in my Disk Utility window and I don't know how to get the icon back! I've looked in Time Machine but it doesn't appear there at all (because it's not supposed to, I think, since it's not connected; my mate hooked it up for me and he won't return my calls!). Help, I don't know how to get it back! Sorry for being a plank.
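
    A quick way to check whether OS X still sees the drive at all (my addition, not from
    the original post) is Disk Utility's command-line counterpart:

        # Lists every disk the system has detected, mounted or not
        diskutil list
        # If the Verbatim drive shows up as, say, /dev/disk2 (identifier is hypothetical):
        diskutil mountDisk /dev/disk2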

  • ctrl-v key on AIX

    - by antenore
    Hi all, I'm new to AIX and I miss some tricks that work well on other *nix flavors. I need a control sequence in a ksh script, like ^[ (Ctrl-[, i.e. Escape), and I'm used to entering it by typing Ctrl-V followed by the key, but here it doesn't work. At the moment I'm stuck on a Windows box with PuTTY, so I cannot even edit the scripts on my Linux box and transfer them to the AIX server. Do you know why, and how I can fix the issue? Thanks in advance. Kind regards, Antenore.
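
    One portable workaround (my suggestion, not from the original post) is to avoid
    embedding a literal control character at all and let the shell generate it at
    runtime:

        # Build the escape character (octal 033) instead of typing Ctrl-V Esc in the editor
        ESC=$(printf '\033')
        printf '%s[2J' "$ESC"    # example: emits the ANSI clear-screen sequence ESC[2J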

  • iTunes wants to remove my existing apps on iPad

    - by Pablo
    Here is the situation: I connected my iPad to a new PC and synced. Ticking Sync Apps under Devices > Apps then warns me that all existing apps on my iPad will be replaced with those from Library > Apps. But in the Library I have just a couple of old apps that I stopped using long ago. So how can I sync the iTunes library with the existing apps on my iPad? EDIT: I have tried clicking Transfer Purchases, but not all of the items went to the library, just a few of them.

  • Central log server with audispd

    - by johan
    I want to set up a central log server. The log server runs Debian 6.0.6 with the audit daemon installed in version 1.7.13-1. The clients run Red Hat 5.5 and connect to the log server via audispd. The connection works fine and I get all messages from each node. My question: is it possible for the auditd daemon on the log server to write the messages from each node to a separate file? I tried transferring the messages via the syslog daemon; that works, but then I cannot use tools like ausearch to analyze the log files.
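
    For context, the client side of a setup like this is usually the audisp remote
    plugin. A minimal sketch of the two files involved (the paths, keys and hostname are
    my assumptions from audit 1.7-era documentation, not taken from the post):

        # /etc/audisp/audisp-remote.conf -- point the plugin at the log server
        remote_server = logserver.example.com
        port = 60

        # /etc/audisp/plugins.d/au-remote.conf -- enable the plugin
        active = yes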

  • How to efficiently dump a huge MySQL InnoDB database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total size of the database is 260 GB, while the size of the root partition (where the DB is stored) is itself 300 GB. That essentially means around 96% of / is full and there's no space left for storing a dump/backup etc. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently with minimum downtime. I'm thinking along these lines:

        1. Request an extra drive to be attached to the server and take a dump on that drive.
        2. Transfer the dump to the new server, restore it, and make the new server a
           slave of the existing one to keep the data in sync.
        3. When the migration is due, break replication, update the slave config to
           accept read/write requests, make the old server read-only so it won't
           entertain any write requests, and tell the app developers to update their
           config with the new IP address for the DB.

    What are your suggestions to improve this, or any alternative better approach for this task?
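
    One space-saving variant worth noting (my sketch, not part of the original question):
    since there is no room for a local dump, the dump can be streamed straight to the new
    server over SSH and compressed in flight. The hostname is a placeholder, and it
    assumes network access between the datacenters and MySQL credentials configured on
    the receiving side:

        # Stream the dump without ever landing it on the full local disk.
        # --single-transaction: consistent InnoDB snapshot without long locks
        # --master-data=2: records the binlog position needed to start replication later
        mysqldump --single-transaction --master-data=2 --all-databases \
          | gzip \
          | ssh user@newdb.example.com 'gunzip | mysql'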

  • Problem connecting to Ubuntu Server on the same local network

    - by frbry
    I have my LAN set up as below:

        192.168.2.1:   ADSL router (DHCP range: 192.168.2.2-192.168.2.250)
        192.168.2.254: wireless access point
        192.168.2.253: Ubuntu server (static IP)
        192.168.2.2:   my laptop (connects to the Internet via the wireless AP)

    NAT on the router is active and set up to forward requests made over port 80 to 192.168.2.253. The router's firewall is inactive and no IPs are in the DMZ. My friends get Apache's "It works" page when they try to enter http://my_external_ip, but I get the router's configuration page instead. What should I check or do? Thanks.
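
    A quick way to separate the server from the router here (my suggestion, not part of
    the original question) is to hit the server's LAN address directly, which bypasses
    the port forward entirely:

        # From the laptop: if this returns the Apache page, the server is fine and the
        # likely culprit is the router not supporting NAT loopback (hairpin NAT)
        curl -I http://192.168.2.253/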

  • Rsync over ssh with root access on both sides

    - by Tim Abell
    Hi, I have one older Ubuntu server and one newer Debian server, and I am migrating data from the old one to the new one. I want to use rsync to transfer data across to make the final migration easier and quicker than the equivalent tar/scp/untar process. As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends, as not all files on the source side are world-readable and the destination has to be written with correct permissions into /home. I can't figure out how to give rsync root access on both sides. I've seen a few related questions, but none quite match what I'm trying to do. I have sudo set up and working on both servers.
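
    Since sudo already works on both ends, one common pattern (a sketch, not a confirmed
    answer from this thread; the paths and hostname are placeholders) is to run the local
    rsync under sudo and tell the remote end to elevate its rsync too:

        # --rsync-path makes the remote side invoke rsync under sudo as well.
        # Needs passwordless sudo for rsync on the remote, e.g. a sudoers line:
        #   user ALL=NOPASSWD:/usr/bin/rsync
        sudo rsync -aH --numeric-ids -e ssh --rsync-path='sudo rsync' \
          /home/someuser/ user@newserver:/home/someuser/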

  • What is the fastest way to back up a disk image over LAN?

    - by David Balažic
    Sometimes I boot SystemRescueCd or a similar live Linux on a PC to back up the hard drive over the local network to my server. I have noticed many times that the transfer speed is not optimal (slower than both HDD and network speed). Any rules of thumb on what to do and what to avoid? What I typically do is something like:

        dd bs=16M if=/dev/sda | nc ...                      # on the client
        nc ... | dd bs=16M of=/destination/disk/backup1     # on the server

    I also "throw in" lzop (others are way too slow) and sometimes on-the-fly md5sum calculation (of both the uncompressed and the compressed stream). I try to add (m)buffer (or other alternatives) to improve throughput (and get a progress indicator). I have noticed that even with enough free CPU, adding commands to the pipeline slows things down. Typically the destination is on an NTFS volume (accessed via ntfs-3g, with the big_writes option).
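
    For reference, here is one way the pieces mentioned above fit together (my sketch;
    the port number is a placeholder, and option syntax differs between the traditional
    and BSD netcat flavors):

        # Server: listen, smooth out bursts with mbuffer, land the (lzop-compressed)
        # image on the NTFS volume; decompress later with lzop -d if needed
        nc -l -p 9000 | mbuffer -m 512M | dd bs=16M of=/destination/disk/backup1.lzo

        # Client: read the disk, compress cheaply with lzop, stream it out
        dd bs=16M if=/dev/sda | lzop | nc server-ip 9000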

  • Why is my ftp connection timing out?

    - by NEPatriot
    This is the log info:

        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is your current location
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PASV
        Response: 227 Entering Passive Mode (173,201,145,1,199,43)
        Command:  MLSD
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    The strange thing is that I've set the transfer mode to active. I've called my hosting company's support and they are able to connect to this server using my FTP credentials. I've also tried to connect from another machine on my network and have the same issue. Could it be the firewall? My ISP?
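
    A side note on reading that 227 response (my explanation, not from the post): the six
    numbers are the server's IP followed by the data port split into two bytes, so the
    data port here is 199 x 256 + 43 = 50987. The MLSD timeout means the client never
    managed to open that data connection, which can be tested directly:

        # Check whether the passive-mode data port is reachable at all
        nc -vz 173.201.145.1 50987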

  • Rejecting new HTTP requests when server reaches a certain throughput

    - by Sam
    I have a requirement to run an HTTP server that rejects new HTTP requests (with a 503, or similar) when the global transfer rate of current HTTP responses exceeds a certain level. For example, if the web server is transferring at 98Mbps, and a new HTTP request arrives, we would want to reject this (as we couldn't guarantee a good speed). I've had a look at mod_cband for Apache, limit_req for nginx, and lighttpd's rate limiting features, but none of them seem to handle my (rather contrived, granted) use case. I should add that I'm open to using pretty much any web server, and am open to implementing this in iptables rules if someone can craft such a rule! (Refusing the TCP connection is fine, it doesn't have to respond with an HTTP 503). Any suggestions?
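
    iptables can't measure aggregate HTTP throughput directly, but as a rough
    approximation (my sketch, and note it caps the rate of new connections rather than
    bytes per second) you can limit incoming SYNs and reset everything above the cap:

        # Accept at most ~25 new HTTP connections/sec (burst 50); reset the rest.
        # Tune the numbers so the accepted connection rate roughly corresponds to the
        # bandwidth ceiling being protected.
        iptables -A INPUT -p tcp --dport 80 --syn -m limit --limit 25/s --limit-burst 50 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 --syn -j REJECT --reject-with tcp-reset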

  • SSH RSA key works with external IP but not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have 2 servers behind a load balancer, and each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from server A into server B using the external IP of server B without being prompted for a password. If I try to do the same using the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
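
    Nothing in key generation binds a key to an IP, so comparing verbose output for the
    two routes (my suggestion) usually shows where they diverge, e.g. a Host block in
    ~/.ssh/config matching one address but not the other, or a different identity being
    offered:

        # Run both and compare which identity file each attempt offers
        ssh -v user@<external-ip> true
        ssh -v user@<internal-ip> true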

  • ASA Slow IPSec Performance

    - by Brent
    I have an IPsec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the VPN. CPU on the device is 13%, memory is at 408 MB, and there are 2 active VPN sessions, so the load on the device is particularly low. [Screenshot of a Wireshark capture of a file transfer between the two hosts over the VPN - not included here.] The large number of header checksum failures in the capture is alarming, but I am not sure what to check now. iperf is showing around 4-5 Mbit/sec with differing TCP window sizes. Show run on the ASA: http://pastebin.com/uKM4Jh76 - Show crypto accelerator stats: http://pastebin.com/xQahnqK3
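
    For what it's worth, a reproducible version of the window-size experiment looks like
    this (a sketch; the hostname is a placeholder):

        # On the far-side host
        iperf -s

        # On the near-side host: 30-second run with an enlarged TCP window
        iperf -c remote-host -t 30 -w 512k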

  • Transferring NS records to a new server

    - by lanemiller
    I feel like that was NOT worded well, but here is my current predicament. I recently had a GoDaddy dedicated server and decided, after their customer support failed to do anything but disappoint, to switch to Rackspace. We have 2 NS records that point to our GoDaddy server, we have a few sites left on the server that rely on it for their DNS zones, and the owners of those domains fail to respond to us. So, the question is: if I need to transfer the sites off of the old GoDaddy NS, can I point the A records for my ns1.domain.com and ns2.domain.com at the IP addresses of the Rackspace nameservers? Or do I CNAME my NS records to match the Rackspace ones? I DO know that neither method is advised, but I need to get these sites moved before GoDaddy tries charging another $2k for the server.
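
    For illustration, the A-record variant just means re-pointing the host records (a
    sketch in BIND zone syntax; the addresses are documentation placeholders, not
    Rackspace's actual nameserver IPs). The CNAME route is generally off the table, since
    the target of an NS record is not supposed to be a CNAME:

        ns1.domain.com.    3600    IN    A    203.0.113.10
        ns2.domain.com.    3600    IN    A    203.0.113.11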

  • Inaccurate bandwidth limiting in altq queues

    - by overkordbaever
    I'm setting up an environment with one Linux server, one OpenBSD router and one Linux client, and I want to be able to limit how much bandwidth the client can use. I've been performing these tests with netcat and time (using time to measure the duration of the transfer with netcat). What happens in these tests (using the TCP protocol; the queues will for some reason not work with UDP) is that the queues aren't exact at all. For example: when setting a bandwidth limit of 10 Mbit, the client cannot use more than 5 Mbit; when setting a limit of 100 Mbit, the client cannot use more than around 50 Mbit. The config looks like this (using a 100 Mbit limit in the example):

        # queue rules
        altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
        queue def bandwidth 0Mb cbq(default)
        queue low bandwidth 100Mb cbq(default)

        # pass rules for the test
        pass out quick from $int_if to $ext_if queue low
        pass in quick from $ext_if to $int_if queue low
        pass out quick from $ext_if to $int_if queue low
        pass in quick from $int_if to $ext_if queue low

  • How do I extract files from one tarball to another tarball in one step?

    - by Martin
    I have some fairly large tarball archives from which I need to extract some files. I will later repack those files to transfer them to another server. Currently that is a two- (multi-) step process for me:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> <folder-to-be-extracted>

    or alternatively with wildcards:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> \
            --wildcards --no-anchored '*pattern*'

    Then I go ahead and recompress the created folder:

        tar -vczf small.tgz ttmp/*
        rm -rf ttmp

    How can I combine these two commands into one? Something like:

        tar -x large.tgz > tar -c small.tgz

    Just to show what I have already tried: whenever I search for the term "extract" I end up here or here or even here. When I use the term "split" I end up here, and that is definitely not what I intend to do. When I use "repack" I end up in strange places.
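
    As far as I know, GNU tar has no built-in archive-to-archive filter, so the temporary
    directory can't be avoided entirely; but the whole round trip can at least be
    collapsed into a single shell command (a sketch, reusing the placeholders from the
    question):

        mkdir ttmp \
          && tar -xzf large.tgz -C ttmp --strip-components=<INT> \
                 --wildcards --no-anchored '*pattern*' \
          && tar -czf small.tgz -C ttmp . \
          && rm -rf ttmp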

  • Web/Cloud Based OS with Torrent Features and Free Storage?

    - by Kristina E
    Hi, I want a web-based OS with a torrent client, and I want to link it to one of the many free cloud storage solutions. I think it would be really cool to be able to check and download torrents anywhere, and not use my own hardware or connection until I want to transfer the files down to my actual desktop (like burning a Linux ISO, or converting a file to the IFO format). Anyway, I created accounts at 4Shared, EyeOS, GlideOS, ADrive and iCloud and am having no luck. There is an eyeTorrent app, but I can't seem to get it configured, and I can't log into my cloud storage from the cloud OS. Has anyone been able to pull this off, and if so, would you please explain how? Thanks, Kristina

  • Lost support for Web Access on Verizon BlackBerry World Edition

    - by Jimsmithkka
    Hello all, I believe that some silliness has occurred with my BlackBerry after an OS upgrade. I have a 2010 BlackBerry World Edition phone, purchased off a friend who went iPhone, that at first worked with web access on the Verizon network. When I connected it to my PC to transfer contacts, it prompted for an OS upgrade, which I performed. Post-upgrade, I have found that I can no longer access any of the web services, e.g. App World, email, Twitter, the browser. They all state that I need to upgrade my account to gain access. I had a Storm previous to this that worked fine, and at the Verizon store they told me this device is no longer supported (though it was new in 2010), and they got me a free "upgrade" to the BlackBerry flip. What I could use help with is finding a source stating it is discontinued, or a guide that will help me re-enable the web features. I can provide further info later if needed (I'm currently at work with the flip; the World Edition is at home).

  • Micro BTX Motherboard Replacement

    - by Judy
    I got the Vundo virus, which took out the mobo and hard drive on my Gateway GT5220. Is there a replacement that will work? I'm willing to keep XP, and I want to fix it the cheapest way possible. The mobo is an NVIDIA GeForce 6150 AM2 microBTX board with a primary IDE and 4 SATA connections. I want to use a new SATA drive as my primary, but I would like to be able to connect my old IDE drives to transfer my pictures from them to the new drive. I would appreciate any help. Thanks!

  • Which is the best way to sync and share contacts and a calendar between Thunderbird, iPhone and Android?

    - by bensch
    I would like to keep my contacts and a calendar synchronized between several desktops and cellphones. Is there a way to achieve this without using Google or similar organisations? I want to keep my data protected and safe, so an encrypted transfer would be useful. Do I need to install a service on my own root server, or are there any services available that are safe? I read this post, but the no-Google requirement is not addressed there: Thunderbird contacts sync. So no solutions with SOGo or LDAP. Maybe Zimbra is a solution? Or Funambol? I tried Kolab, but had some unsolvable problems.

  • Rsync: execute permission required

    - by user651488
    I'm using rsync between two servers to transfer files. The problem is that some files are not transferred. I get this error:

        rsync: readlink "/var/www/index.html" failed: Permission denied (13)

    So I checked the permissions on the server, and after running tests I noticed a file is transferred only if it is owner-readable and owner-writable (rw-); if the file is read-only (r--), rsync can't download it. The command:

        /usr/bin/rsync -avzr -e "/usr/bin/ssh -i /home/replication/thishost-rsync-key" [email protected]:/var/www/index.html ./

    Is this a bug in rsync? I can't find any information about this problem. Thanks for your help. Debian Etch, kernel 2.6.30, rsync 2.6.9, protocol version 29.

  • Home Server Restore

    - by Bryan Avery
    I have had to reinstall Home Server on my server, and I would now like to restore it to the state it was in at the moment it last stopped. I have the hard drive in the state it was last in, which is a small 250 GB hard disk. I have now installed 1.5 TB hard disks and a fully licensed copy, as the original copy was a trial version. So I'm in a state where I have a new install and one of the old drives plugged in, but I can't transfer the old backups across. How do I do this?

  • RS-232 vs. RS-485

    - by user60524
    I'm doing a little research on the two to figure out which one may better suit my purposes (communications among different pieces of hardware). How do they fare against one another? I'm far from being a specialist and have no idea where I would even start looking for data to compare and contrast. If possible, can someone please answer the following questions with regard to each of them?

        • Can they be networked amongst each other?
        • Can they be easily networked over Ethernet?
        • What speeds do they transfer at (min, max, etc.)?
        • Reliability?
        • Best framework to build on top of to support the above?
        • Any standard communications programs?
        • Debugging capability?

    Any help would be very much appreciated, thanks.

  • Java vs .NET Technology - Which Way to Go Further?

    - by Sarang
    I studied .NET academically. I also learned core Java, did a project with it, and took training at a Java firm. So skill-wise I have knowledge of both languages, but it is creating a big problem for me: which field should I choose? Even with solid OOP fundamentals, will it be easy for me to move from one to the other in the future? Please suggest a way forward. Also, there are many technologies available on both sides, like JSP, JSF, J2ME, SharePoint, Silverlight etc. Which is better from a reliability point of view? Which are the fast-growing and useful technologies used most in the current IT corporate world? Are they easy to learn from a fresher's point of view? Please answer. Perhaps the answers will help me plan my way to learn them and go further.
