Search Results

Search found 9286 results on 372 pages for 'transfer speed'.

  • Permission Denied for FTP User

    - by Alasdair
    I have an FTP user whose default directory is /root/ftpuser. The user can log in fine, owns the directory, and the directory is even set to 777 permissions, but the user can't upload anything. The session log shows:

        Status: Connecting to xx.xxx.xxx.xx:21...
        Status: Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 2 of 50 allowed.
        Response: 220-Local time is now 05:12. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command: USER ftpuser
        Response: 331 User ftpuser OK. Password required
        Command: PASS *********
        Response: 230 OK. Current restricted directory is /
        Command: OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status: Connected
        Status: Starting upload of test.html
        Command: CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command: MKD /
        Response: 550 Can't create directory: Permission denied
        Command: CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command: SIZE /btn.png
        Response: 550 Can't check for file existence
        Command: TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command: PASV
        Response: 227 Entering Passive Mode (66,232,106,33,52,218)
        Command: STOR /test.html
        Response: 553 Can't open that file: Permission denied
        Error: Critical file transfer error

    It's a Linux CentOS 6 server. Any ideas?
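
    One likely culprit, given that the home directory sits under /root: the FTP user is chrooted into a directory it cannot even traverse into, because /root itself is normally mode 550 or 700 and owned by root. A minimal sketch of the usual fix, assuming Pure-FTPd chroots users to their home directory (the new path is illustrative):

        # move the FTP home out from under /root, which only root can enter
        mkdir -p /home/ftpuser
        chown ftpuser:ftpuser /home/ftpuser
        chmod 755 /home/ftpuser
        usermod -d /home/ftpuser ftpuser
        # then retry the upload; 777 on the old directory never helped because
        # the restrictive permissions on /root blocked path traversal first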

  • How to avoid sshfs freezing?

    - by Andreas Hagen
    So the issue is this: I've installed sshfs on Ubuntu 12.04 and I'm trying to connect to a couple of remote servers. Initially the mount seems successful; sometimes Gnome even picks it up and displays the "new device found" box at the bottom of the screen. But from here on, not much works, or at least not any more. The first couple of times I connected it seemed to work fine, and I was able to transfer some files. Then I disconnected using fusermount -u <folder>, and after reconnecting a little later the trouble started. Now, after executing

        sshfs -o ServerAliveInterval=15 -o reconnect -C -o workaround=all -o idmap=user root@<host>:/ <folder>

    the shell just freezes when I change directory into the mount point. Strangely, ls -al <folder> works when listing just the root of the remote system, but nothing more. Also, every file explorer I've tried freezes just like cd <folder>. It seemed to me like some kind of zombie thread or process was hanging around my system, given that it did work the first time, so I have tried rebooting, but no luck. sshfs -V gives this:

        SSHFS version 2.3
        FUSE library version: 2.8.6
        fusermount version: 2.8.6
        using FUSE kernel interface version 7.12

    So yeah, any ideas?
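
    For anyone debugging the same hang: a stale FUSE mount left behind by a dropped connection makes anything that stats the mount point block. A hedged cleanup-and-remount sketch (the mount point and host are placeholders; ServerAliveCountMax is an extra ssh option, not something from the question):

        # lazy-unmount the dead mount so cd/ls stop blocking on it
        fusermount -uz /mnt/remote
        # kill any sshfs process still wedged on the old connection
        pkill -9 sshfs
        # remount with keepalives so a dead TCP session is detected and retried
        sshfs -o reconnect -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
              root@host:/ /mnt/remote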

  • Trouble with TeamViewer VPN Connection

    - by Sumit Pal
    I have completed the initial VPN connection setup and it has connected; I have tested with ping and it is OK. My problem is that when I want to start a file transfer over the VPN, it asks for a username & password. So what is the username? I have tried giving Computer-Name\User-Name. I found my computer name by going to Control Panel > System and clicking the 'Computer Name' tab, and the username from User Accounts (it is also shown when I log in to my Windows account; please correct me if that procedure is wrong). But what is the password? I have tried giving the account password, but it always gives 'The username or password is incorrect.' My question: how do I find the username & password? For information: a) I have TeamViewer 7 installed on one Windows XP PC & one Windows 8 PC, and I would like to create a secure connection between these two PCs. b) The two PCs are connected to the same local network via a router. Please ask if you need additional information.
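
    A hedged guess at what is being asked for: TeamViewer's VPN file transfer rides on ordinary Windows file sharing across the VPN adapter, so the prompt wants a Windows account that exists on the remote PC, not a TeamViewer credential. A sketch for testing that theory from a command prompt (the IP, share, account and password are placeholders; the address is whatever the TeamViewer VPN assigned the remote end):

        net use Z: \\7.16.x.x\SharedDocs TheirWindowsPassword /user:REMOTE-PC\RemoteUser

    Note that accounts with a blank password are refused for network logons by default on XP, which would produce exactly this "username or password is incorrect" loop.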

  • Apache Bench reports different results with the same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms, but Document Length: 0 bytes and HTML transferred: 0 bytes, with a Transfer rate of 7.9 Kb/s. 2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and HTML transferred totals. The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DirectoryIndex is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
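
    A hedged first check, since ab does not follow redirects and counts a 301/302 (or any rewrite-driven response) as 0 bytes of HTML: compare what the two URLs actually return before trusting either number. The hostname is the question's own placeholder:

        # headers only; compare the status line and Content-Length of each
        curl -s -D - -o /dev/null http://mysite.com/index.php
        curl -s -D - -o /dev/null http://mysite.com/
        # if /index.php answers with a redirect (e.g. a canonical-URL rewrite),
        # ab is timing tiny 3xx responses, which would explain both the 41ms
        # figure and the empty document length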

  • Why isn't my phone charging with some micro USB cables?

    - by Jacxel
    I ordered 3 micro USB cables on eBay. My phone was at about 50% battery and I wanted to use it as a hotspot for some browsing on my laptop, so I plugged it in; a charging icon appeared on the phone, my laptop showed it as a connected USB device, and so I went about my business. About 30 minutes later I checked the phone and to my dismay saw 45% battery. But ah, I thought, I have been putting the poor little thing under too much pressure: acting as a WiFi hotspot must drain the battery quicker than it can charge via USB, and perhaps even my laptop's USB port doesn't output enough power. Undeterred, I continued on, and when I was going to bed I plugged the USB cable into a mains adapter, switched everything battery-consuming off and, content, went to sleep. The next morning I was awoken by my phone's alarm, which got cut off unexpectedly. I attempted to unlock my phone, which showed no more signs of life. Why isn't my phone charging with these new USB cables? For clarity: they transfer data with no problems; the phone appears to be charging, showing all the signs and lights it normally would; and the cable that came with the phone works as you would expect, so it's not a fault with the phone. I think they merely slow the discharging of the phone, but I could be wrong. Are these just bad-quality cables? Is there a way to fix this issue?

  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a new machine)

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet, etc., so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and a new Windows 7 (64-bit) machine. So that I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have:

        put both computers into the same workgroup
        put the Windows 7 machine into "work network" mode so it can see the XP machine in the workgroup
        shared the XP disk as read only

    But when I try to access a lot of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine; the XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on, so it seems that even if I create an admin account (with a password) and connect with that account, it still comes in as "guest" on the XP machine. Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but is a pain as I need to share each user's folder with a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.
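
    A hedged explanation plus a sketch: with simple file sharing, XP Home maps every network logon to Guest, and the "not allowed" folders are other users' profile directories whose NTFS ACLs deliberately exclude Guest. Loosening those ACLs on the XP machine (run from an admin command prompt there; the profile path is an example) lets the single read-only share of the whole disk work:

        rem grant read access recursively so the guest network logon can traverse
        cacls "C:\Documents and Settings\SomeUser" /T /E /G Everyone:R

    This widens access for anyone on the LAN, so it is best undone (or the share removed) once the files have been copied over.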

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice at servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
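
    A hedged pointer: the permissions on uploaded files come from the FTP daemon's umask, not from PHP, so the dev box's daemon is probably applying a 177 umask where the host applies 022. The fix depends on which daemon Lenny is running; a sketch for vsftpd (the directive differs for proftpd's Umask or pure-ftpd's -U flag):

        # make uploads land as 644 instead of 600
        echo 'local_umask=022' >> /etc/vsftpd.conf
        /etc/init.d/vsftpd restart
        # repair the files already uploaded with the old umask
        find /var/www -name '*.jpg' -perm 600 -exec chmod 644 {} +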

  • When a server IP changes, do existing TCP (e.g. HTTP/MySQL) connections remain open?

    - by Luke Cousins
    We have some PHP-FPM servers, and when they need a database connection, they connect to an HAProxy server which selects a database server for them and the connection opens. When we then want to perform some maintenance on the HAProxy servers (such as config changes requiring an HAProxy restart), the process is as follows:

        1. Shut down Keepalived on the HAProxy server
        2. Wait for the floating IP to transfer to the backup HAProxy server (also running Keepalived)
        3. Wait until HAProxy stats reports just one connection (us checking how many connections there are)
        4. Restart HAProxy
        5. Restart Keepalived

    When step 2 occurs, what will happen to the MySQL connections that are open at that point? According to this "TCP Sessions and IP Changes" question, the connections will be dropped. Is this really the case? If so, what, if anything, can be done to prevent this happening? Can the connections somehow be forced to use the main (non-floating) IP of the server? We also have a similar setup with two Nginx servers with Keepalived running on them, and we were planning on doing the equivalent process. If we do, the same question applies: what happens to the existing HTTP connections when the IP moves to the other server? I appreciate your help.
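
    One way to see which side of the question applies before failing over, since a TCP connection is pinned to the exact source/destination address pair it was opened with: list the established sessions on the HAProxy box and check whether each one's local address is the floating IP or the machine's own. A hedged sketch (the port is an example):

        # established MySQL-bound sessions and the local IP each one uses
        ss -tn state established '( dport = :3306 )'
        # sessions whose local side is the floating IP will die when it moves;
        # ones bound to the box's own address survive, but those clients were
        # never going through the failover IP in the first place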

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-high traffic. It is hosted on an Ubuntu server with 2GB memory, a 2-core processor, a 40GB SSD disk, and 3TB transfer. The problem is that MySQL shuts down by itself after an hour or two, and I have to restart it each and every time this happens. I checked the logs and this is what I found:

        140612 6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140612 6:48:14 [Note] Plugin 'FEDERATED' is disabled.
        140612 6:48:14 InnoDB: The InnoDB memory heap is disabled
        140612 6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140612 6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
        140612 6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
        InnoDB: mmap(1502412800 bytes) failed; errno 12
        140612 6:48:14 InnoDB: Completed initialization of buffer pool
        140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        140612 6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
        140612 6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        140612 6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
        140612 6:48:14 [ERROR] Aborting
        140612 6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete

    Judging by this line:

        140612 6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool

    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.
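
    The log does point at memory: errno 12 is ENOMEM, meaning the 1.4G buffer pool no longer fits next to the web stack on a 2GB box (a shutdown at an odd hour is consistent with something else claiming the RAM first). A hedged my.cnf adjustment, with the exact figure being a starting guess to tune from:

        [mysqld]
        # leave headroom for the web server and PHP on a 2GB machine
        innodb_buffer_pool_size = 512M

    Lowering max_connections rather than raising it would also help, since each connection carries its own per-thread buffers on top of the pool.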

  • VirtualBox: how to merge an arbitrary snapshot into the base VDI

    - by jmathew
    I botched a transfer of a VM from one hard disk to the other, and now I'm left with the base VDI and a whole bunch of snapshots. My steps:

        1. Copied the old VM directory over to the new HDD
        2. Deleted the old VM and added the new VM using Machine > Add, providing the old XML file
        3. Couldn't add the base VDI file due to a conflict, so changed the UUID of the base VDI with VBOXMANAGE.EXE internalcommands sethduuid
        4. Attempted to roll back to a snapshot, but it seems the VM is looking for the snapshots on the old HDD (which is formatted and gone)

    This is the error ("networked" is the snapshot's name):

        Failed to restore the snapshot networked of the virtual machine lfs.
        Could not open the medium 'H:\vm\ft.vdi'.
        VD: error VERR_PATH_NOT_FOUND opening image file 'H:\vm\ft.vdi' (VERR_PATH_NOT_FOUND).
        Result Code: E_FAIL (0x80004005)
        Component: Medium
        Interface: IMedium {53f9cc0c-e0fd-40a5-a404-a7a5272082cd}

    The old HDD was drive H:; the new one is drive N:. How can I modify the snapshots/VM to look in N:\vm\ft.vdi for the base VDI? I've already set the default VM and snapshot locations in VirtualBox's general settings. Or, failing that, how can I merge the old snapshot with the base VDI, given that the only thing that has changed is the base VDI's UUID? Thanks
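
    A hedged repair sketch rather than a tested recipe: the absolute H:\ paths, and the parent UUID the snapshots expect (which sethduuid rewrote), live as plain XML in the VM's .vbox file and in the global VirtualBox.xml media registry, and can be edited by hand:

        rem 1) close VirtualBox fully so it does not overwrite the XML on exit
        rem 2) in N:\vm\lfs.vbox (filename assumed from the VM name), change each
        rem    <HardDisk ... location="H:\vm\..."/> entry to the N:\vm\... path,
        rem    for the base disk and every snapshot .vdi
        rem 3) if the snapshots then complain about a missing parent, the base
        rem    disk's original UUID (recorded in the snapshot entries of the
        rem    same XML) can be put back with:
        VBoxManage internalcommands sethduuid "N:\vm\ft.vdi" <old-uuid-from-xml>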

  • Autounmounting USB keys with FAT filesystem on Linux (RHEL5)

    - by niXar
    For security reasons, I have two workstations in front of me, and I can only transfer data between them with a USB key. As you can imagine, this quickly gets tiresome, but the most annoying part is having to unmount the things before removing them. Not unmounting them results in missing files most of the time, even if I remove them a while after having last written to them. Now, since they're only used for transferring smallish files, and each is basically written once and read once, I don't need the fancy caching infrastructure that makes clean unmounting a necessary step. And since the data is always a copy of something I have at hand, I don't care if the filesystem croaks from time to time. So the system doesn't need to force that on me: it could simply make sure everything is committed within a second, and work synchronously. Then, when I remove the key, nothing is lost. Is there a way to do this? I would appreciate any other tips on handling this situation. Edit: it appears the situation has changed between RHEL5 and Fedora up to F11 on one hand, and F12 on the other. The latter uses DeviceKit-disks, and I haven't quite figured out how to do this there. The method provided below in gconf does not work anymore.
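
    A hedged manual approach for mounts you control yourself (the device and mount point are examples; vfat's flush option needs a reasonably recent kernel, so it may not exist on a stock RHEL5 one):

        # sync: commit every write immediately; flush: vfat-specific, pushes out
        # remaining dirty data as soon as the last file is closed
        mount -o sync,flush /dev/sdb1 /mnt/usb
        # or, before yanking an already-mounted key, just force everything out:
        sync

    The trade-off is that sync mounts hammer cheap flash with many small writes, so flush (or a plain sync before removal) tends to be kinder to the key.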

  • Exchange 2003 automatically converts text/plain emails to text/html for IMAP retrieval

    - by wfaulk
    When accessing an Exchange 2003 server via IMAP, emails that were sent as text/plain (and ones that had no MIME encoding specified at all) get automatically converted to multipart/alternative with the original text/plain body and a text/html body. This is … stupid. It doesn't even bother to specify a monospaced font. The new MIME part starts like this:

        Content-Type: text/html; charset="iso-8859-1"
        Content-Transfer-Encoding: quoted-printable

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
        <HTML>
        <HEAD>
        <META HTTP-EQUIV=3D"Content-Type" CONTENT=3D"text/html; =
        charset=3Diso-8859-1">
        <META NAME=3D"Generator" CONTENT=3D"MS Exchange Server version =
        6.5.7654.12">
        <TITLE>{{subject}}</TITLE>
        </HEAD>
        <BODY>
        <!-- Converted from text/plain format -->
        <BR>

        <P><FONT SIZE=3D2>{{body}}

    (All the "3D" stuff is quoted-printable encoding for an equals sign; there's nothing wrong on that front, surprisingly.) How can I make this stop?
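
    A hedged pointer from memory of Exchange 2003's defaults, not a verified fix: the conversion is controlled per protocol, in Exchange System Manager under Servers > [server] > Protocols > IMAP4 > [virtual server] > Properties, on the Message Format tab. Changing the MIME encoding there from the HTML option to "Provide message body as plain text" (and restarting the IMAP4 service) should make Exchange hand the original text/plain part through untouched.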

  • Load balancing with nginx and Tomcat

    - by London
    Hello! This should be fairly easy to answer for any sysadmin; the problem is that I'm not a server admin, but I have to complete this task. I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances, one running on machine1 and one on machine2. People usually access them by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name (i.e. domain.com), nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, they are sent to either machine1:8080/appName or machine2:9090/appName? Thank you!
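
    A hedged tweak that usually covers this, assuming the application really lives under /appName on both Tomcats: give proxy_pass a URI part, so nginx replaces the matched location prefix with it before forwarding.

        location / {
            # the matched prefix "/" is rewritten to "/appName/" upstream
            proxy_pass http://backend/appName/;
            # ...same proxy_set_header lines as before...
        }

    One caveat worth checking afterwards: redirects and absolute links generated by Tomcat will still say /appName unless handled (proxy_redirect, or deploying the app as ROOT on the Tomcats, sidesteps that).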

  • Need advice on choosing software/hardware for virtualization

    - by Anatoly
    Currently we have these servers:

        1. Windows SBS 2003 Premium on an IBM X266 (double Xeon F43, 2GB RAM): DC, Exchange (70 users), MSSQL.
        2. Windows 2003 R2 32-bit on an IBM x3400 (double Xeon E5310, 4GB RAM): terminal server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
        3. Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system, and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the terminal server passed 40 in rush hours, and it gets stuck frequently, so we need an upgrade. I am thinking about moving all physical servers to virtual servers based on a cluster of 2 physical servers, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Windows XP/7 workstations (office, ERP, etc.), and there is a small chance we will add Asterisk/HylaFAX for 100 users (if that is possible on the same VMs). We also need NAS storage of 2-3TB. What hardware upgrades/purchases do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose: Acronis or something else? Thank you in advance.

  • Mac OS X will only upload zero-byte files through FTP

    - by tabacitu
    I'm using Mac OS X Lion and I've been having this problem with FTP (with any FTP client, mind you: I tried Transmit, FileZilla, Cyberduck and the Terminal, all with the same result). I can browse files in my FTP client, but when I upload files, the client hangs for a few seconds, then thinks it uploaded the files successfully, but it only creates a new file with one blank line in it. Sometimes it manages to upload 4-5 lines. It then returns:

        226 - Error during read from data connection
        226 Transfer aborted

    But 2xx is a success message. It is not a server issue, since any Windows machine will upload just fine using the same network. Can anybody figure out what the problem is? It renders my Mac useless for web development. The problem persists with SFTP and FTP with SSL/TLS.

    Later edit: Solved! It turns out the problem goes away when I take out my router and connect directly through PPPoE. So the problem is with the router, I thought. But no: the problem is with a Mac that connects through a router that connects through PPPoE and tries to upload using FTP. Pretty specific, I know. The problem is with the MTU (maximum transmission unit). Apparently Mac OS X breaks the file into chunks that are too large for the router to send, because the router's MTU was set lower than Mac OS X's. My router's was 1492, which is OK, but my Mac's MTU was 1500, which is unacceptable. I don't even understand why it works directly over PPPoE. Anyway, if you encounter the same problem, this is how you diagnose and fix it. In Terminal, run:

        ifconfig | grep mtu

    to see what the MTU is for en0 (or en1; mine was en0). If it's 1500, run:

        sudo ifconfig en0 mtu 1300

    This should solve it. If so, it may only last until the next restart. You can also change the MTU in System Preferences > Network > Ethernet > Advanced > Hardware.
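
    A hedged way to confirm this diagnosis (and to find the largest MTU that actually fits) before or after changing anything: ping with the don't-fragment flag and a payload sized so that payload plus 28 bytes of IP/ICMP headers equals the MTU under test. The target host is a placeholder:

        # 1472 + 28 = 1500: should fail through a 1492-MTU PPPoE link
        ping -D -s 1472 example.com
        # 1464 + 28 = 1492: should pass, so ~1492 is the real path MTU
        ping -D -s 1464 example.com

    Setting the Mac's MTU to 1492 (the PPPoE value) rather than 1300 would then keep most of the throughput while still fitting through the router.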

  • Windows thinks outgoing connections are incoming connections?

    - by Slayer537
    I have a rather weird issue. I'm trying to configure Windows Firewall to block all outgoing connections for a certain app, but allow all incoming. This app is used to transfer files across a network. The reason for this type of setup is to only allow certain users (IP addresses) access to the files I have, but to still allow others to see what's available. Since Windows Firewall defaults to allowing all outgoing connections, I made a rule to deny all outgoing connections that were not in the IP ranges I specified. For the incoming connections, I'd like to leave it at "allow all", but at the moment it is set to only allow the connections that also have outgoing permissions set. If I blanket-allow all incoming connections, I observe that unauthorized IP addresses are able to actually download files, even though their IPs were blocked in the outgoing rule. To shed a little more visibility on this, I used NetLimiter to see what was going on. NetLimiter showed me that the connection was an incoming connection. Shouldn't this be an outgoing connection, as I am uploading files to them, not the other way around? Is there a way to make the connection type be correct and show up as outgoing instead of incoming?
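
    A hedged reading of what NetLimiter is showing: a connection's direction is set by who initiates the TCP handshake, not by which way the bytes flow afterwards, so peers that connect in and then download are correctly "incoming" even while you upload to them. The filtering therefore belongs on the inbound rule's remote-address scope. A sketch with the built-in firewall CLI (the program path and addresses are placeholders):

        netsh advfirewall firewall add rule name="FileApp allowed peers" ^
            dir=in action=allow program="C:\FileApp\app.exe" ^
            remoteip=203.0.113.5,198.51.100.0/24
        rem plus a matching inbound block rule for that program covering
        rem everyone else, so unauthorized peers never get a data connection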

  • Shoretel Upgrade Path

    - by Brian
    I currently have a ShoreTel server running Server 2003 x32 as a virtual machine, paired with a ShoreGear 90 switch and another unused switch of the same model reserved for manual failover. I am getting the software mailed to me from my partner, but with limited support, since the server is in a relatively remote area. I am tempted to upgrade the OS at the same time as performing the upgrade, but want to know if there are any horror stories or advice I should know about before diving in. I'm upgrading from a ShoreTel 9.2 build; I will be upgrading first to version 10.1, then finally to 11.1. The system has been bulletproof since it was installed, and we are upgrading mainly to get a client that is a little more modern. My question boils down to: should I even bother with an OS upgrade, or even possibly do a fresh install of Windows with an install of ShoreTel 11.1 and just transfer the configuration? Or should I just stay with Server 2003, since it is supported in my target version of ShoreTel and the upgrade will be more than enough to keep me busy as a novice?

  • Windows VPN always disconnects after < 3 minutes, only from my network

    - by hemp
    First, this problem has existed for almost two years. Until serverfault was born, I had pretty much given up on solving it, but now hope is reborn! I've set up a Windows 2003 server as a domain controller and VPN server at a remote office. I am able to connect to and work over the VPN from every Windows client I've tried, including XP, Vista, and Windows 7, without issue, from at least five different networks (corporate and home, domain and non-domain). It works fine from all of them. However, whenever I connect from clients on my home network, the connection drops (silently) after 3 minutes or less. After a short while, it will eventually tell me the connection has dropped and attempt to redial/reconnect (if I've configured the client that way). If I reconnect, the connection will re-establish and appear to work correctly, but again will silently drop, this time after a seemingly shorter time period. These are not intermittent drops: it happens every single time, in exactly the same way. The only variable is how long the connection survives. It doesn't matter what type of traffic I send. I can sit idle, send continuous pings, RDP, transfer files, all of that at once; it makes no difference. The result is always the same: connected for a few minutes, then silent death. Since I doubt anyone has experienced this exact situation, what steps can I take to troubleshoot my evanescing VPN?

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet), and we're having problems, or are just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:

        corp domain (end users)
          <-> firewall (open port 80)
            <-> DMZ (web server running IIS)
              <-> firewall (open port 80 or 1433????)
                <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario:

        1. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain?
        2. Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
        3. From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems?
        4. In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, since the Web API is an HTTP endpoint?

    Or is there some better way to deploy this (i.e., how are ASP.NET Web API single-page applications, written all in HTML5 and JavaScript, supposed to be deployed to production environments)? NB: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.
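
    One common shape for this, offered as a hedged sketch rather than the answer: keep the static HTML5/JavaScript client on the DMZ IIS box, host the Web API next to SQL Server in the process domain, and have the DMZ server reverse-proxy only /api/* across the inner firewall on port 80, so 1433 never needs to be opened. With IIS 7.5 that requires the ARR and URL Rewrite modules on the DMZ server; "procsrv" and "appname" are placeholders:

        <!-- web.config on the DMZ IIS site -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="api-proxy" stopProcessing="true">
                <match url="^api/(.*)" />
                <action type="Rewrite" url="http://procsrv/appname/api/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    This also keeps the browser talking to a single origin, which sidesteps cross-origin restrictions on the JSON calls.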

  • Windows 7 - SBS - Why does copying a directory not include all subdirectories and files?

    - by indeed005
    Using Windows 7, fully updated... I have had some strange behaviour copying a whole user directory (e.g. "c:\users\bob" to "c:\backups\bob"). I understand now that I should have used Easy Transfer, or at least robocopy, but at the time all I wanted to do was back up the user's data before using the "Delete account" button. Unfortunately, I didn't check that my copy-paste had actually worked; all it had actually done was copy the appdata subdirectory of the user account. At the time of doing this backup I was logged in as the same user, bob (a local admin) in this example. When I discovered the missing files, I tried again using the domain admin account: same story, only appdata copied. No documents folders, nothing else. Then my boss tried, and it worked fine: it copied all the files. Ctrl+C, Ctrl+V. I tried again... same profile... and it copied all the files. Same profile, same destination, same rights and permissions, same ownership, but different behaviour. Has anyone encountered this before and come up with a solution? BTW, this was not using roaming profiles, and the accounts are stored locally.
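
    For the next backup, a hedged robocopy sketch that avoids the usual profile-copy traps (the paths match the example above):

        rem /MIR mirrors the tree, /COPYALL keeps ACLs and ownership, /XJ skips
        rem the junction points inside profiles ("Application Data" and friends)
        rem that send naive copies in circles, and /R:1 /W:1 stops locked files
        rem from stalling the run for minutes each
        robocopy C:\Users\bob C:\Backups\bob /MIR /COPYALL /XJ /R:1 /W:1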

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With plain http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256

        Document Path:          /hello.html
        Document Length:        29 bytes

        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347   74.3    333    716
        Processing:     0   14   24.0      1    166
        Waiting:        0   11   21.6      0    165
        Total:        191  361   80.8    345    716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
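
    A hedged reading of that log: nearly all the time is in Connect (mean 347ms), i.e. the TLS handshake, and the negotiated suite is DHE-RSA-AES256-SHA with a 4096-bit ephemeral DH exchange, which is expensive on every new connection. Two things worth trying, assuming stock Apache 2.2 mod_ssl:

        # 1) enable the SSL session cache so repeat connections can resume a
        #    session instead of doing a full handshake every time
        SSLSessionCache        shmcb:/var/run/ssl_scache(512000)
        SSLSessionCacheTimeout 300

        # 2) prefer a non-DHE suite so new handshakes skip the 4096-bit DH step
        SSLCipherSuite         AES256-SHA:HIGH:!aNULL:!ADH
        SSLHonorCipherOrder    on

    Rerunning ab with -k (keep-alive) also helps separate per-request cost from per-handshake cost.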

  • Force delivery retry without restarting the SMTP Service on Windows Server 2008 R2

    - by Mathias R. Jessen
    I have a Windows Server 2008 R2 box hosting 3 virtual SMTP servers: vSMTP01, vSMTP02 and vSMTP03. The first two are configured to deliver all messages to dedicated smarthosts, while the last is set to deliver messages on its own. All other delivery settings are as default:

                                          /---(vSMTP01)---> {SMARTHST01}
        ---Inbound mail--->---SMTPSRV01---|---(vSMTP02)---> {SMARTHST02}
                                          \---(vSMTP03)---> { Internet }

    Now I want to take SMARTHST01 out for maintenance, but I don't want to reject submissions to vSMTP01 while doing so, so I just let it continue running. When SMARTHST01 is no longer responding, vSMTP01 queues the messages and waits for the first retry interval to pass (15 minutes). So far so good. Let's say SMARTHST01 comes back online after 20 minutes: the first interval has passed, and I'll have to wait another 25 minutes for the second retry interval to pass. If I stop and start the SMTP service (Services.msc > Simple Mail Transfer Protocol service > Stop), the server will retry all deliveries, but that would cause a service interruption for ALL virtual SMTP servers on the machine, which is highly undesirable. How can I manually force vSMTP01 to retry delivery of all queued messages without interrupting the service of vSMTP02 and vSMTP03?
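
    A hedged pointer rather than a verified fix: each virtual SMTP server is its own node under IIS 6.0 Manager on 2008 R2 and can be stopped and started individually (right-click vSMTP01 > Stop, then Start) without touching the Simple Mail Transfer Protocol Windows service, so vSMTP02/03 keep running; in practice a virtual-server restart kicks its queue the same way a full service restart does. Failing that, moving that instance's queued .eml files from its Queue folder into its Pickup folder re-submits them immediately.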

  • Spam or Exchange issue?

    - by John
    I am getting an error message about an unknown user on my domain, and I would like to know: is this just a phishing/spam email, or was it really sent from our domain? (I have changed our domain name to OURDOMAIN.COM below.) I have Exchange 2010 installed. The body of the email is:

        Delivery has failed to these recipients or distribution lists:

        sales
        The recipient's e-mail address was not found in the recipient's e-mail
        system. Microsoft Exchange will not try to redeliver this message for
        you. Please check the e-mail address and try resending this message, or
        provide the following diagnostic text to your system administrator.

        Sent by Microsoft Exchange Server 2007

        Diagnostic information for administrators:

        Generating server: murraygroup.local
        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:

        Received: from ironport.mih.co.uk (10.10.29.9) by
         mih-exca-01.murraygroup.local (10.10.29.133) with Microsoft SMTP Server
         id 8.3.106.1; Fri, 29 Jun 2012 12:36:12 +0100
        Received: from glamf04.netintelligence.com (HELO mailfilter.iomart.com)
         ([62.128.193.114]) by ironport.mih.co.uk with SMTP; 29 Jun 2012 12:42:48 +0100
        Received: from glamta4.netintelligence.com(localhost.localdomain[127.0.0.1])
         by mailfilter.iomart.com ; Fri, 29 Jun 2012 12:37:18 BST
        Received: from [195.43.137.66] ([195.43.137.66]) by
         glamta4.netintelligence.com (8.13.1/8.12.8) with ESMTP id q5TBbH4j022142
         for <[email protected]>; Fri, 29 Jun 2012 12:37:18 +0100
        Date: Fri, 29 Jun 2012 12:37:17 +0100
        Message-ID: <20120629145229.4C2A817231D8A7958044@SONW>
        From: Ines Hampton <[email protected]>
        To: sales <[email protected]>
        Reply-To: Marguerite Soto <[email protected]>
        Subject: User sales
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: 7bit
        Return-Path: [email protected]

        Reporting-MTA: dns;murraygroup.local
        Received-From-MTA: dns;ironport.mih.co.uk
        Arrival-Date: Fri, 29 Jun 2012 11:36:12 +0000

        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.1.1
        Diagnostic-Code: smtp;550 5.1.1 RESOLVER.ADR.RecipNotFound; not found
        X-Display-Name: sales
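
    A hedged way to settle "spoofed or really ours", assuming access to the Exchange 2010 Management Shell: the Received chain above already shows the message entering from outside (netintelligence/iomart, originating at 195.43.137.66), but message tracking can confirm that nothing with that Message-ID ever originated inside:

        Get-MessageTrackingLog -MessageId '<20120629145229.4C2A817231D8A7958044@SONW>' -EventId SEND

    No results would mean your server only generated the NDR for an inbound message with a forged sender (ordinary spam to a nonexistent address), not that anything was sent from your domain.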

  • Backup an existing Linux server to a VirtualBox virtual machine

    - by user146526
    I have some servers and VPSes with many companies across the world, and I want to back them up locally. I have some backup solutions going to remote hosts, but I want to have a local backup on a computer at home. What I am thinking is:

        1. Create a VirtualBox virtual machine and install the same Linux version as the server.
        2. Use rsync to back up the server to the local VirtualBox machine, something like:
           rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// /
        3. Repeat the command every few days to update the files.
        4. In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back, and continue my business.

    I tried it with 2 OpenVZ VPSes, one being a backup of the other. I also tried to transfer a normal Linux server host to an OpenVZ machine, and it worked great. This approach looks pretty clean and easy to me; it is the kind of solution I am looking for. However, I need to be sure that it will work before I commit to it. The question is: will it work OK? Does anyone see any problem with it? Do you have any other suggestions? Thanks
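
    A hedged refinement of the rsync in step 2, run inside the VM: a few more pseudo-filesystems are worth excluding, and -AX plus --numeric-ids keep ACLs, xattrs and uid/gid numbers intact so the restore direction works too (the brace expansion assumes bash):

        rsync -aAXv --delete --numeric-ids \
            --exclude={'/dev/*','/proc/*','/sys/*','/run/*','/tmp/*','/mnt/*'} \
            root@server_ip:/ /

    The usual caveat: a file-level copy of a live system restores fine for data and config, but the bootloader and fstab (device names, UUIDs) would need adjusting to the VM's virtual disk before the clone itself could boot.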

  • Apache, ErrorLog and logrotate: what is the best method?

    - by OlivierDofus
    Hi! Here's a vhost example from my sites:

        <VirtualHost *:80>
            DocumentRoot /datas/web/woog
            ServerName woog.com
            ServerAlias www.woog.com
            ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
            CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
            DirectoryIndex index.php index.htm
            <Location />
                Allow from All
            </Location>
            <Directory /*>
                Options FollowSymLinks
                AllowOverride Limit AuthConfig
            </Directory>
        </VirtualHost>

    I've got 12 sites running now, which gives something like this:

        [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        [snip: as many error_log processes as virtual hosts]
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        [snip: as many access_log processes as virtual hosts]

    I've been looking everywhere, but I've only found mod_log_rotate. The "little" problem is that the author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them." So my question is: what would be the best way to handle multiple logs? If I just use very classical log files and the system's logrotate program, wouldn't that be a good deal? How would/do you deal with this? Thank you!
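
    A hedged sketch of the classical-logrotate route, which replaces all those per-vhost rotatelogs processes with plain file logging plus one nightly rotation (the paths follow the layout above; the file would live in /etc/logrotate.d/):

        /logs/*/error_log /logs/*/access_log {
            daily
            rotate 14
            compress
            delaycompress
            missingok
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1
            endscript
        }

    The graceful restart in postrotate is what makes Apache reopen the renamed files, and sharedscripts ensures it runs once per rotation rather than once per log.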
