Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • 550 Requested action not taken: mailbox unavailable

    - by Porch
    I set up a small box with Server 2003 64-bit to be used as a web server and email server for a small school. Real simple stuff for a few users: a simple website and a handful of emails. rDNS and SPF records are set up and pass every test I found, including the tests at dnsstuff.com. Sending email to almost every address (Google, Hotmail, AOL, whatever) works. However, with one domain I get a bounce back with the error: 550 Requested action not taken: mailbox unavailable. It's another school running Exchange, judging from some packet sniffing with Wireshark. Every email address on this domain I have tried sending to gives this error. The address is valid, as I can send to it from my personal and Gmail accounts without a problem. Does anyone know of anti-spam software that gives a 550 error like the above? What else could this be? Thanks for any suggestions. A packet capture of the two servers communicating looks like this:
        220 <server snip> Microsoft ESMTP MAIL Service, Version: 6.0.3790.3959 ready at Sat, 2 Oct 2010 12:48:17 -0700
        EHLO <email snip>
        250-<server snip> Hello [<ip snip>]
        250-TURN
        250-SIZE
        250-ETRN
        250-XXXXXXXXXX
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-XXXXXXXX
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XXXXXXX
        250 OK
        MAIL FROM: <email snip>
        250 2.1.0 <email snip>....Sender OK
        RCPT TO: <email snip>
        250 2.1.5 <email snip>
        DATA
        354 Start mail input; end with <CRLF>.<CRLF>
        <email body here>
        .
        550 Requested action not taken: mailbox unavailable
        QUIT
        221 Goodbye
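    Not part of the original post, but visible in the capture: the recipient is accepted at RCPT TO (250 2.1.5) and the 550 only arrives after the message body, which usually points at content filtering on the receiving side rather than a genuinely missing mailbox. One way to test that theory by hand from the sending box, with placeholder host names:
        # From the sending server (host names are placeholders); type the SMTP
        # commands manually and note whether the 550 appears at RCPT TO or after DATA
        telnet mx.otherschool.example 25
          EHLO mail.myschool.example
          MAIL FROM:<test@myschool.example>
          RCPT TO:<student@otherschool.example>
          DATA
          Subject: plain-text delivery test

          one short line of text
          .
          QUIT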

    Read the article

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives so that SSH will only try one key file, which works. Unfortunately, this stops working as soon as the original keys aren't available any more. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently SSH selects the identity by public key signature and not by file name, which means the original files have to be available so that SSH can extract the public key. There are two problems with this: it stops working as soon as I unplug the flash drive holding the keys, and it renders agent forwarding useless because the key files aren't available on the remote host. Of course, I could extract the public keys from my identity files and store them on my computer and on every remote computer I usually log into, but that doesn't look like a desirable solution. What I need is a way to select an identity from ssh-agent by file name, so that I can easily pick the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I've SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
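    One mechanism worth sketching (my suggestion, not from the post): OpenSSH will accept an IdentityFile that points at just the public half of a key, and with IdentitiesOnly it offers only that key from the agent, so only the .pub files need to live on each machine. A minimal ~/.ssh/config sketch with made-up host and file names:
        # ~/.ssh/config - host alias and key paths are placeholders
        Host build-box
            HostName build.example.com
            User leo
            IdentitiesOnly yes
            # Only the .pub needs to exist on disk; the private half stays in ssh-agent
            IdentityFile ~/.ssh/pubkeys/build-box.pub
    The same stanza copied to a remote host keeps working with agent forwarding, since the private key never has to be present there.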

    Read the article

  • VirtualBox problems writing to shared folders (Guest Additions installed)

    - by vincent
    I am trying to set up a shared folder from the host (Ubuntu 10.10) to mount on a virtualized CentOS 5.5 guest with Guest Additions (4.0.0) installed (Guest Additions features are working, i.e. seamless mode etc.). I am able to successfully mount the share with:
        mount -t vboxsf -o rw,exec,uid=48,gid=48 sf_html /var/www/html/
    (the uid and gid belong to the apache user/group). The only problem is that once it is mounted and I try to create directories and files, I get the following:
        mkdir: cannot create directory `/var/www/html/test': Protocol error
    I am using the proprietary version of VirtualBox, version 4.0.0 r69151. Has anyone had the same problem and been able to fix it, or has any idea how to potentially fix this? Another question: the reason for setting this up is this. Our production servers are on CentOS 5.5, however I am a great fan of Ubuntu and would like to develop on Ubuntu rather than CentOS. To stay as close to the production environment as possible, I would like to virtualize CentOS to run a web server and use the shared folder as the web root. Does anyone know whether this is a bad idea? Has anyone successfully been able to set this up? Thanks guys, your help is always much appreciated, and if you need any more information please let me know.

    Read the article

  • Ensure Macs get the correct machine name from DHCP?

    - by Greg Whitfield
    I have a problem in our network where our Macs occasionally get given the wrong machine name while, I guess, getting a new DHCP lease. The DHCP servers are Windows based - the bulk of our network is Windows, but we have some Linux machines and an increasing number of Macs. Specifically, a Mac will occasionally take on the name of another machine in the network. For example, I have a new MacBook Pro. In the OS X setup it gets called "gomez", and initially it starts up on the network with that name without any problems. But after a few days, when the machine was restarted (it had several restarts in the meantime), it ended up being called "florrie", which is actually the name of another machine in another part of the network. All network operations work fine, and indeed you don't notice most of the time - it's only when you run apps like Perforce that require the hostname that you get problems. I'm sorry I don't have more info than that, but if I know what to look for I can dig out some more facts. Any hints on checking the network setup would also be useful.

    Read the article

  • Linux DHCPD Mac-Address based Groups

    - by GruffTech
    Our current dhcpd.conf looks like the following:
        subnet 10.0.32.0 netmask 255.255.255.0 {
            range 10.0.32.100 10.0.32.254;
            option subnet-mask 255.255.255.0;
            option broadcast-address 10.0.32.255;
            option domain-name-servers 208.67.222.222,208.67.220.220;
            option routers 10.0.32.5;
            host Dev-ABaird-W { hardware ethernet 00:1D:09:3E:49:13; fixed-address 10.0.32.94; }
            ... more static hosts ...
        }
    About as basic as it gets. The old router is 10.0.32.1. Our company wanted to implement a Squid proxy to better monitor web traffic while at work and, if necessary, block large time-wasters, e.g. Facebook.com. However, we quickly realized that this change has played a mean prank on our Polycom SIP phones: occasionally our phones will not ring; the end recipient hears ringing (artificially created by our PBX), however the handset never rings. The ONLY thing that has changed in our network is the option routers line. So, since all Polycom MAC addresses begin with 00:04:F2, would it be possible in DHCP to say that any 00:04:F2:* MAC address gets option routers 10.0.32.1, and anything else must talk to our gateway?
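    A sketch of the usual ISC dhcpd approach for this - untested against this particular config, so treat it as an assumption: declare a class matching the Polycom OUI and set option routers inside it (in dhcpd expressions, hardware is the type byte followed by the MAC, so bytes 1 to 3 are the OUI):
        # Sketch for dhcpd.conf - class name is made up
        class "polycom-phones" {
            # match the 00:04:f2 OUI, skipping the leading hardware-type byte
            match if substring (hardware, 1, 3) = 00:04:f2;
            # phones keep using the old router, bypassing the Squid box
            option routers 10.0.32.1;
        }
    Everything else in the subnet keeps inheriting option routers 10.0.32.5 from the subnet declaration.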

    Read the article

  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to be happening every few days, and occasionally up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know below; if you need any additional information, I'll be glad to include it. I can connect through the Hyper-V console. I can't connect to network shares, IIS web apps, RDP, or ping. Memory usage seems to be normal (3 of 4 GB). Processor usage seems low. We don't know the exact time the server goes down, but the following error appears consistently around the time it happens:
        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain controller in domain *** due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If this problem persists, please contact your domain administrator.

    Read the article

  • Changing the default OpenVPN IP on a Linux server

    - by Lamboo
    The problem is that we have a public OpenVPN service. Pay €9.95 and you get an OpenVPN account at currently half a dozen servers for a month. This means there are always, and will always be, some people who create a certain amount of abuse or trouble. In the long run, the external IP every OpenVPN user gets assigned ends up prohibited from editing Wikipedia, and it might be banned by e-gold, some popular web forums, one-click hosters, etc. Not a pleasant experience for the 97% of our customers who use our service responsibly and legitimately to regain their privacy. So if I could change the assigned external IP even just every few months, e.g. from 216.xx.xx.164 to 216.xx.xx.170, it would help us a lot to combat this abuse and to provide our paying clients with "fresh" IP addresses that aren't yet banned or restricted on popular Internet sites and services. Does anybody know how to change the first IP address assigned to the public interface in CentOS, so that in future OpenVPN doesn't give our clients the external IP 123.xx.xx.164 but rather 123.xx.xx.170?
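    If the extra addresses are (or can be) bound to the box as aliases - an assumption, since the post doesn't say - one common approach is to leave OpenVPN's internal addressing alone and change only the source address used for NAT. A rough sketch with documentation-range placeholder addresses and the default 10.8.0.0/24 OpenVPN subnet:
        # Bind the new public address as an alias on the external NIC (placeholder values)
        ip addr add 203.0.113.170/24 dev eth0
        # NAT the VPN client subnet out through the new address instead of the primary one
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 203.0.113.170
    If an existing MASQUERADE rule already covers the VPN subnet, this SNAT rule would take its place.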

    Read the article

  • How to copy mailboxes from Exchange 2003 to Exchange 2007 across forests?

    - by Tor Ivar Larsen
    We're going through a quite difficult conversion from an old ASP solution to an entirely new one. This includes moving mailboxes from Exchange 2003 to Exchange 2007. We want to do this without deleting the old mailboxes on the Exchange 2003 server, to have a fallback in case something goes wrong. I have investigated the Move-Mailbox cmdlet in the Exchange 2007 shell, and it seems to fit our needs quite well. The only problem is that we want to keep the old mailboxes. This could easily be accomplished with -SourceMailboxCleanupOptions, but we can't use that when we have used the -AllowMerge switch. The reason we need -AllowMerge is that all the user accounts with connected mailboxes have already been created on Exchange 2007 (by some automatic user-creation tool; no real relevance to the case in question). The twist is that the Exchange servers are in two different forests: Windows 2003 SP1 on DC1 and Windows 2003 SP2 on DC2 in forest 1, and Windows 2003 R2 SP2 on DC1 in forest 2. Can we use Move-Mailbox safely for this purpose? And if yes, how?

    Read the article

  • Cannot login to ISCSI Target - hangs after sending login details

    - by Frank
    I have an iSCSI target volume to which I am trying to connect using a CentOS Linux server. Everything works fine, but I cannot log in - it's stuck at the login step. Here are the steps I am performing:
        [root@neon ~]# iscsiadm -m node -l
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session20
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session21
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session22
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session23
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session30
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session31
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session78
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session79
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session80
        iscsiadm: could not read session targetname: 5
        iscsiadm: could not find session info for session81
        Logging in to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)
    After this step it gets stuck, waits for some time and then gives this output:
        Logging in to [iface: iface1, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)
        iscsiadm: Could not login to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260].
    My iscsi.conf is this:
        node.startup = automatic
        node.session.timeo.replacement_timeout = 15   # default 120; RedHat recommended
        node.conn[0].timeo.login_timeout = 15
        node.conn[0].timeo.logout_timeout = 15
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 5
        node.session.err_timeo.abort_timeout = 15
        node.session.err_timeo.lu_reset_timeout = 20
        node.session.initial_login_retry_max = 8      # default 8; Dell recommended
        node.session.cmds_max = 1024                  # default 128; Equallogic recommended
        node.session.queue_depth = 32                 # default 32; Equallogic recommended
        node.session.iscsi.InitialR2T = No
        node.session.iscsi.ImmediateData = Yes
        node.session.iscsi.FirstBurstLength = 262144
        node.session.iscsi.MaxBurstLength = 16776192
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
        discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
        node.conn[0].iscsi.HeaderDigest = None
        node.session.iscsi.FastAbort = Yes
    Also, in access control I have given full access to any IP, any CHAP user, and a fixed iSCSI initiator name. With the same access level, all other volumes on the rest of the servers work, except this one.

    Read the article

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our shared hosting providers (both shared and dedicated hosting) and to host the websites in our office's datacentre. We're running 7 websites, with about 900 unique hits per day on average at the moment. We have 2 servers set aside for this - one is a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD) and the other is an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD). a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS. b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go. c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (something I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office. About the websites: of the 7 websites, 4 are basic static sites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, where end users can view products and order them online. I am trying to take this as a learning experience where I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess it gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.

    Read the article

  • Filezilla client unable to get directory listing from Filezilla Server (Windows)

    - by sestocker
    I've set up a self-signed certificate in FileZilla Server and enabled FTP over SSL/TLS. When I connect from the FileZilla client, I am able to authenticate but cannot get a directory listing:
        Status:   Connecting to MY_SERVER_IP:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220-FileZilla Server version 0.9.39 beta
        Response: 220-written by Tim Kosse ([email protected])
        Response: 220 Please visit http://sourceforge.net/projects/filezilla/
        Command:  AUTH TLS
        Response: 234 Using authentication type TLS
        Status:   Initializing TLS...
        Status:   Verifying certificate...
        Command:  USER MYUSER
        Status:   TLS/SSL connection established.
        Response: 331 Password required for MYUSER
        Command:  PASS ********
        Response: 230 Logged on
        Command:  PBSZ 0
        Response: 200 PBSZ=0
        Command:  PROT P
        Response: 200 Protection level set to P
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PORT 10,10,25,85,219,172
        Response: 200 Port command successful
        Command:  MLSD
        Response: 150 Opening data channel for directory list.
        Response: 425 Can't open data connection.
        Error:    Failed to retrieve directory listing
    I have ports 21 and 50001 through 50005 open on the firewall. We are migrating servers - the 50001-50005 range is one of the things that helped get FTPS working on the old server. I'm not sure this installation uses the same ports? What else could be the problem?
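    Two things I'd look at, neither of which the post rules out: the log shows the client using active mode (PORT) rather than passive (PASV), so forcing passive transfers in the FileZilla client is worth a try; and if the new box relies on Windows Firewall, the passive range configured in FileZilla Server still has to be opened there. A sketch, assuming Windows Firewall and the 50001-50005 range:
        rem Open the control port and the assumed passive data range on the new server
        netsh advfirewall firewall add rule name="FTPS control" dir=in action=allow protocol=TCP localport=21
        netsh advfirewall firewall add rule name="FTPS passive data" dir=in action=allow protocol=TCP localport=50001-50005
    With TLS in place the firewall cannot inspect PASV replies, so the passive range really does have to be opened explicitly and match what FileZilla Server is configured to use.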

    Read the article

  • DNS configuration error in Plesk

    - by Karthik Malla
    I purchased the domain www.softmail.me at GoDaddy.com, tried its DNS and got lots of errors, and finally changed my nameservers to my server's DNS, i.e. NS101.VPSLAND.COM and NS202.VPSLAND.COM, and created a domain in my Plesk panel (marked DNS & Mail required). After adding my domain to the Plesk panel of my server, I opened the DNS records of that domain and found that the following records had been generated automatically:
        65.75.241.26 / 24           PTR     softmail.me.
        ftp.softmail.me.            CNAME   softmail.me.
        lists.softmail.me.          CNAME   softmail.me.
        mail.softmail.me.           A       65.75.241.26
        mssql.softmail.me.          A       65.75.241.26
        ns.softmail.me.             A       65.75.241.26
        sitebuilder.softmail.me.    A       65.75.241.26
        softmail.me.                NS      ns.softmail.me.
        softmail.me.                A       65.75.241.26
        softmail.me.                MX (10) mail.softmail.me.
        webmail.softmail.me.        A       65.75.241.26
        www.softmail.me.            CNAME   softmail.me.
    I waited for a week and I am still unable to use my domain. Also, in a DNS lookup I cannot find any records for my server except the nameservers of VPSland. Do I need to add the VPSland nameservers anywhere in the Plesk panel? If so, where? Can anyone tell me where the mistake is?

    Read the article

  • Problem: IIS 7.0 locking files during upload

    - by viscious
    I am running Server 2008 with IIS 7.0 and the FTP add-on for IIS 7.0. I have the FTP site configured and mostly working, except that about 70% of the time when transferring a file the upload will hang forever. If I disconnect the FTP client, reconnect and try to upload the same file, I get an error on the client saying the file is locked, and I have to restart the FTP service to clear the lock. I fired up Process Explorer and did a search on the file in question, and sure enough the FTP service has a lock on the file; it takes around 20 minutes to release the lock on its own (and sometimes longer). This lock stays around even after I disconnect the client. Like I said, this only happens about 70% of the time; the other 30% of the time it goes through just fine. Things I have verified: it is not a firewall issue (the server is using passive port range 8000-9000, which is allowed on the firewall); it is not a NAT issue (the server has a globally routable IP address); all recommended/required updates are installed. I have 5 other servers in a very similar configuration and this is the only one I have problems with.

    Read the article

  • Windows 7 x64 wired connection problem. IP, gateway, dns assigned, can't ping. Network detected as "Network"

    - by Emil Lerch
    I am having a problem connecting to a specific wired network with my Latitude E6410 laptop. Other wired networks seem to work fine, but this one does not. A coworker with me has the same Intel 82577LM Gigabit network card, and he can connect just fine. I've updated to the latest Intel drivers (11.8.75.0) and am not using PROSet. I obtain all DHCP information just fine (IP, netmask, DNS server, default gateway). I cannot ping anything (internal or on the Internet - I tried pinging Google's public DNS server by IP, 8.8.8.8), nor can I get answers to any DNS queries through nslookup. Windows troubleshooting says everything is fine, but I can't get DNS responses. I've seen issues like this in the past that were related to link speed/duplex autonegotiation failures, so I've tried manually setting link speed/duplex to all values, one by one, with no success. My coworker is using all default settings, so he is just using autonegotiation. Any ideas of other things to try?

    Read the article

  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application is currently formed by two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet, and DNS round robin distributes traffic over both. In the future the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy. My question is: how do I keep the state of both reverse balancers/proxies in sync? For example, for maintenance purposes I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the Balancer-Manager web form on each proxy and changing the distribution rules. But I have to do that on each proxy manually and make sure I enter the same thing. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances and updates all of them with the same information? Update: the tool should retrieve the runtime status and make runtime changes, just like the existing Balancer-Manager, only for a number of proxies - not just one. Modification of configuration files is not what I'm interested in (there are plenty of tools for that).
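    I'm not aware of a built-in way to link Balancer-Manager instances; a workaround I'd sketch - an assumption on my part, not an established tool - is to replay the same Balancer-Manager form submission against every proxy. The query parameters and nonce differ between Apache releases, so the string below is a placeholder to be copied from a real form submission on one of the proxies:
        #!/bin/sh
        # Hypothetical sketch: push the same worker change to every frontend proxy.
        # PARAMS must be captured from an actual balancer-manager submission for
        # your Apache version (field names and the nonce vary between releases).
        PROXIES="proxy1.example.com proxy2.example.com"
        PARAMS="b=mycluster&w=http://backend1.example.com:8080&w_status_D=1"
        for p in $PROXIES; do
            curl -s -o /dev/null "http://$p/balancer-manager?$PARAMS"
        done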

    Read the article

  • Either nginx+php-fpm is badly configured, or nginx+php-fpm cannot handle high load?

    - by The Wolf
    I have WordPress installed on my server (configured, hopefully correctly, with nginx + php-fpm + MariaDB). I am trying to import a 1.5 MB XML file using the WordPress importer. Every time I try to upload it, the import gets cut off and I just get a blank screen. Here is my error log (I've only posted 2 of the errors):
        [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
        [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com"
    I don't know why it can't process the WordPress export XML. I have already increased max_file_upload etc., but nothing changes. I hope somebody can help me. Here are my configs:
    nginx.conf
        user nginx;
        worker_processes 8;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            server_tokens off;
            keepalive_timeout 65;
            fastcgi_read_timeout 500;
            #gzip on;
            client_max_body_size 2M;
    php-fpm.conf
        ;;;;;;;;;;;;;;;;;;;;;
        ; FPM Configuration ;
        ;;;;;;;;;;;;;;;;;;;;;
        ; All relative paths in this configuration file are relative to PHP's install
        ; prefix.
        ; Include one or more files. If glob(3) exists, it is used to include a bunch of
        ; files from a glob(3) pattern. This directive can be used everywhere in the
        ; file.
        include=/etc/php-fpm.d/*.conf
        ;;;;;;;;;;;;;;;;;;
        ; Global Options ;
        ;;;;;;;;;;;;;;;;;;
        [global]
        ; Pid file
        ; Default Value: none
        pid = /var/run/php-fpm/php-fpm.pid
        ; Error log file
        ; Default Value: /var/log/php-fpm.log
        error_log = /var/log/php-fpm/error.log
        ; Log level
        ; Possible Values: alert, error, warning, notice, debug
        ; Default Value: notice
        ;log_level = notice
        ; If this number of child processes exit with SIGSEGV or SIGBUS within the time
        ; interval set by emergency_restart_interval then FPM will restart. A value
        ; of '0' means 'Off'.
        ; Default Value: 0
        ;emergency_restart_threshold = 0
        ; Interval of time used by emergency_restart_interval to determine when
        ; a graceful restart will be initiated. This can be useful to work around
        ; accidental corruptions in an accelerator's shared memory.
        ; Available Units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;emergency_restart_interval = 0
        ; Time limit for child processes to wait for a reaction on signals from master.
        ; Available units: s(econds), m(inutes), h(ours), or d(ays)
        ; Default Unit: seconds
        ; Default Value: 0
        ;process_control_timeout = 0
        ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
        ; Default Value: yes
        daemonize = no
        ;;;;;;;;;;;;;;;;;;;;
        ; Pool Definitions ;
        ;;;;;;;;;;;;;;;;;;;;
        ; See /etc/php-fpm.d/*.conf
    ps aux
        [root@host etc]# ps aux
        USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
        root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init
        root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308]
        root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308]
        root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d
        root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
        root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd
        root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com.
        mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use
        root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0
        root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2
        root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections
        smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
        root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond
        root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash
        root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
        nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process
        nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
        nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process
        nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
        nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
        nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
        nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process
        nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process
        root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf)
        apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www
        apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www
        apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www
        apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www
        apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www
        apache 1040 0.0 1.8 39292 19120 ? S 00:30 0:00 php-fpm: pool www
        root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux
    netstat -l
        [root@host etc]# netstat -l
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address               Foreign Address  State
        tcp        0      0 *:ssh                       *:*              LISTEN
        tcp        0      0 localhost.localdomain:smtp  *:*              LISTEN
        tcp        0      0 localhost.locald:cslistener *:*              LISTEN
        tcp        0      0 *:mysql                     *:*              LISTEN
        tcp        0      0 *:http                      *:*              LISTEN
        tcp        0      0 *:ssh                       *:*              LISTEN
        Active UNIX domain sockets (only servers)
        Proto RefCnt Flags       Type       State         I-Node   Path
        unix  2      [ ACC ]     STREAM     LISTENING     60575947 /var/run/saslauthd/mux
        unix  2      [ ACC ]     STREAM     LISTENING     60574168 @/com/ubuntu/upstart
        unix  2      [ ACC ]     STREAM     LISTENING     60575873 /var/lib/mysql/mysql.sock
    I hope somebody can help me figure out what the problem is.
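    The "Connection refused" on fastcgi://127.0.0.1:9000 means nginx never reached php-fpm at all, so the first thing I'd compare (a sketch, assuming the pool file is /etc/php-fpm.d/www.conf) is the pool's listen address against what nginx passes to, plus the PHP limits that govern a 1.5 MB upload:
        ; /etc/php-fpm.d/www.conf (assumed path) - the pool must listen where nginx expects it
        [www]
        listen = 127.0.0.1:9000
        ; raise request limits so the import is not cut off mid-request
        ; (these are standard php.ini directives applied per pool)
        php_admin_value[upload_max_filesize] = 8M
        php_admin_value[post_max_size] = 8M
        php_admin_value[max_execution_time] = 300
    On the nginx side, the matching location block would then use fastcgi_pass 127.0.0.1:9000; for the same address, and client_max_body_size should stay at or above post_max_size.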

    Read the article

  • network latency, TCP and UDP packets

    - by user115848
    Hello. Recently my network has started to cause me lots of problems. I have a cable modem connected to a TP-Link router (with some port forwarding). Everything was working fine, then I started to get lots of UDP (port 53) "UNREPLIED" logs in the router. Now there are TCP UNREPLIED logs too. This is causing lots of latency and failed connections when trying to connect to different internet sites. Also, we run an Openfire server for Spark connections, and I believe it's causing connectivity issues for some users who are trying to connect using Spark (some people connect fine, others don't). Please see the screenshot below for packet logs. It has to be something internal, as when I connected straight to the Comcast modem I was able to connect to the internet and various sites as normal. I tried to swap out the router with a different one and got the same issue. I scanned both my internal DNS servers for viruses or malware and came up empty. Another anomaly is that when I try to connect to www.cnn.com, I get redirected to a different site. I scanned my own machine for hijacks. Not sure if this is related to the networking issue. Please let me know if you have any ideas for troubleshooting.

    Read the article

  • server and user directly connected no pinging...

    - by jtzero
    I have a server (Fedora 12) with two NICs on it, directly connected to, say, 192.168.1.0 and 192.168.2.0. The route table looks like this:
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     192.168.1.1     255.255.255.0   U     0      0        0 eth0
        192.168.2.0     *               255.255.255.0   U     0      0        0 eth1
    with eth0 = 192.168.1.15 and eth1 = 192.168.2.1. There is also a directly connected user (Mythdora) on the 192.168.2.0 network with IP 192.168.2.2 and a route table like so:
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     0      0        0 eth0
    The cable is a crossover and all three NICs work - if I connect my laptop to either end and assign it a valid 192.168.2.0 address, pings work. In fact, if I disconnect the server side, plug the Ethernet cable into the laptop, have the box ping the laptop continually, then remove the cable and plug it back into the server, both sides ping... unfortunately the box, realizing it's connected to a different PC, wipes its route table after say ten minutes or so. If I do a traceroute from a box on the 192.168.1.0 network to the server's 192.168.2.1 interface, I never get a reply from it. As a note, at one point I could ping the server from the 192.168.2.2 box, but the server couldn't ping the 192.168.2.2 box.
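    A hedged guess rather than anything the post establishes: one-way ping between directly connected interfaces on a multi-homed Fedora box is often reverse-path filtering. A quick check and temporary experiment on the server:
        # Inspect current reverse-path filtering (1 = strict) for the second NIC
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter
        # Temporarily relax it and retest pings to/from 192.168.2.2
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.eth1.rp_filter=0
    If pings start working, the fix can be made permanent in /etc/sysctl.conf; if not, the odd 192.168.1.1 gateway entry on the directly connected 192.168.1.0 route would be my next suspect.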

    Read the article

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (RedHat) and running into a limit I haven't been able to track down. I need to be able to handle a sustained 300 requests per second using Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox with JBoss AS7 and a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least. I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in JMeter. To the best of my knowledge I have plenty of available file handles, processes, sockets, and ports, yet the issue persists. Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents - Ipsysctl tutorial 1.0.4 and Linux Tuning - but those documents are a bit over my head, and just entering the configuration described in Linux Tuning doesn't fix my issue. Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss):
        /etc/security/limits.conf
            webproxy soft nofile 65536
            webproxy hard nofile 65536
            webproxy soft nproc 65536
            webproxy hard nproc 65536
            root soft nofile 65536
            root hard nofile 65536
            root soft nproc 65536
            root hard nofile 65536
        First attempt, /etc/sysctl.conf
            sysctl net.core.somaxconn = 8192
            sysctl net.ipv4.ip_local_port_range = 32768 65535
            sysctl net.ipv4.tcp_fin_timeout = 15
            sysctl net.ipv4.tcp_keepalive_time = 1800
            sysctl net.ipv4.tcp_keepalive_intvl = 35
            sysctl net.ipv4.tcp_tw_recycle = 1
            sysctl net.ipv4.tcp_tw_reuse = 1
        Second attempt, /etc/sysctl.conf
            net.core.rmem_max = 16777216
            net.core.wmem_max = 16777216
            net.ipv4.tcp_rmem = 4096 87380 16777216
            net.ipv4.tcp_wmem = 4096 65536 16777216
            net.core.netdev_max_backlog = 30000
            net.ipv4.tcp_congestion_control=htcp
            net.ipv4.tcp_mtu_probing=1
    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners?
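    Since the errors look like thread/handle exhaustion, one check worth making (my suggestion, not from the post) is what limits the already-running JVM actually inherited, because limits.conf only applies to sessions started after the change. A sketch, assuming "torquebox" appears in the Java command line:
        # Find the TorqueBox/JBoss JVM and inspect the limits it is really running with
        pid=$(pgrep -f torquebox | head -n 1)
        egrep 'processes|open files' /proc/$pid/limits
        # Compare with what a fresh shell for the webproxy user gets
        su - webproxy -c 'ulimit -n -u'
    If the running process still shows the old 1024-style defaults, restarting JBoss from a fresh webproxy login (or via a service script that sets ulimits) would be the next step.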

    Read the article

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around Server Fault as well as Google (for example, this link). My requirements are very similar to the link above; however, I'd also like to have dynamic or at least resizeable file volumes, so that if necessary I can add 4-5 servers to the pool and then expand the volume. Any distributed file systems that support that, to save me some time? Thanks! LustreFS will be my next test cluster to build. GlusterFS: I've built a 3-machine test GlusterFS cluster, but I quickly became aware of several limitations that it doesn't seem to make clearly public. One, I can't seem to resize a volume - once a volume is created, it's done - which seems backwards: why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong; I'm not sure. Amazon S3, while giving the cheapest startup, adds too much cost when broken down per client per month, so it's out; building my own system, prorated over several years with no bandwidth costs, is significantly cheaper. MogileFS isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX compliant so it can be remotely mounted via NFS or CIFS.
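    Not something the post establishes, but for reference: in more recent GlusterFS releases a plain distributed volume can usually be grown online by adding bricks and rebalancing. A rough sketch with made-up volume and brick names:
        # Grow an existing distributed volume by one brick, then spread data onto it
        gluster volume add-brick myvolume server4:/export/brick1
        gluster volume rebalance myvolume start
        gluster volume rebalance myvolume status
    Replicated or striped volumes need bricks added in multiples of the replica/stripe count, so whether this applies depends on the volume type in use.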

    Read the article

  • How do I set up a fully featured small business network?

    - by JoshReedSchramm
    This has the possibility to be a very large question, but I recently acquired a few rack-mount servers and the hardware necessary to run them. Unfortunately I'm a programmer with very little understanding of how to set up a good working network, so I'm hoping someone on here might be able to help. What I want to do is run a domain with a series of subdomains which would all be externally accessible. The setup would live inside my home, and my internet connection is your run-of-the-mill cable modem (which means a dynamic IP). I want to be able to set up a couple of sites, specifically: www.mycompany.com (mycompany.com with no subdomain would redirect to this), build.mycompany.com (for my continuous integration server), ruby.mycompany.com (for Ruby projects), win.mycompany.com (for Windows projects), etc. Additionally, this is still my home network, so our personal machines need to be able to get on via wifi with at least the same security we have now through an out-of-the-box router from Best Buy. I'm thinking I need a DNS server and a DHCP server, and one of those would run either No-IP or DynDNS to accommodate the dynamic IP. I don't necessarily need mail, but it might be helpful to have some sort of mail server I could use for testing; it doesn't need to reach the greater internet though. So how do I set up this kind of network? tl;dr: need to know how to set up a standard office-style network in my home off a normal consumer-level cable modem connection.

    Read the article

  • Swap static public IPs without creating DNS conflicts?

    - by Jakobud
    Our ISP is Comcast and we have 5 static public IPs from them that we use for various services, including customers connecting to our network, VPN, web, DNS, etc... We need more IP addresses from Comcast. Unfortunately, Comcast is telling us that they can't just simply give us 5 more addresses. They only give static IP addresses in blocks of 1, 5 or 13. In order for us to get more static IPs, they have to take away our current 5 static IPs and give us 13 new ones. How do we make this transition without causing all sorts of DNS chaos? We run public DNS servers, so we can make the DNS changes ourselves, but it will take some time obviously for those DNS changes to propagate throughout the internet. Are there any easy ways to make this transition? Like create some type of fallback DNS entry or something? Surely there must be some sort of procedure for this kind of thing. The Comcast support guy was useless.
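    A general practice worth sketching here (not Comcast-specific, and the names and addresses below are placeholders): drop the TTLs on the affected zone well before the cutover so cached answers expire quickly, swap in the new addresses at cutover, then restore the longer TTL afterwards:
        ; A few days before the renumbering: shrink TTLs so caches expire fast
        $TTL 300
        www   IN  A  198.51.100.10   ; old address, replaced with the new one at cutover
        ns1   IN  A  198.51.100.11
        ; after the new block is live and verified, raise the TTL back, e.g. $TTL 86400
    Glue/nameserver records registered at the registrar for the public DNS servers need the same treatment, since those are the entries that would otherwise strand resolvers on the old addresses.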

    Read the article

  • Help with memory usage issues on VPS

    - by Niall Collins
    Hi there, I am running a VPS with 6 .NET web sites/applications on it. I am having performance issues with the server, mainly that it is running out of memory. I contacted the company that leases the server to me and they told me it was because I also had SQL Server 2008 Express running on the server, so I went ahead and uninstalled it. However, I still seem to be having issues. For example, at present, looking at resource consumption, the virtual memory is:
        ID:          vprvmem
        Current Use: 894,328,832 bytes
        Limit:       1,073,741,824 bytes
    This means usage of ~80%. Is there any way I can check exactly which applications, web sites and software are taking up most of the server's memory, so I can look at rectifying it? I feel that 80% is much too high to leave any contingency for a spike in traffic. I had extra memory added to the box recently, but I would prefer finding the source of the problem rather than throwing extra memory at it. Maybe these levels are correct and all is running OK, but I would like to investigate to make sure. My knowledge of hardware is limited, as I mostly deal in software, so any tools that can help me, or any pertinent advice, would be appreciated.
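    A generic way to see which processes are actually holding the memory - nothing here is specific to this VPS:
        # Top memory consumers by resident set size
        ps aux --sort=-rss | head -n 15
        # Overall picture including buffers/cache
        free -m
    On a Windows VPS running .NET sites, the equivalent would be sorting Task Manager or Process Explorer by Working Set and watching the w3p/w3wp worker processes per application pool.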

    Read the article

  • cannot resolve DNS server's own domain name

    - by sims
    I have a DNS server (mega.dude - 123.123.123.123) running BIND 9.4. When I run dig mega.dude I get no answer section. I have nameserver 123.123.123.123 in /etc/resolv.conf. Here is my zone file:
        $TTL 1W
        @       IN SOA  mega.dude. names.mega.dude. (
                        2009081502      ; serial
                        3H              ; refresh
                        15M             ; retry
                        1W              ; expiry
                        1D )            ; minimum
                NS      ns1
                NS      ns2
                MX 10   mail.mega.dude.
                A       123.123.123.123
        @       A       123.123.123.123
        ns1     A       123.123.123.123
        ns2     A       123.123.123.123
        www     CNAME   @
        mail    A       123.123.123.123
    It didn't used to look like this. I read that it's evil to have an MX record pointing to a CNAME, so I changed that. Then I thought maybe that was also the case for NS, so I changed those too. Still no good. The ports are open. I can't figure it out. Oh, by the way, all the other zones return fine, but not the server's own domain. So I know I'm doing something stupid. Thanks for your help all!
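    Two routine BIND checks I'd run first; the zone file path is an assumption, so adjust it to wherever named actually loads this zone from:
        # Syntax-check the zone exactly as named would load it
        named-checkzone mega.dude /var/named/mega.dude.zone
        # Ask the server itself, bypassing resolv.conf, and look for an ANSWER section
        dig @127.0.0.1 mega.dude A +norecurse
    If named-checkzone rejects the file (duplicate apex records and the MX/NS edits are likely candidates), named keeps serving the last good copy or drops the zone, which would explain the empty answer while other zones still resolve.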

    Read the article

  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com. I am also running local DNS with these lines:
        www  IN  CNAME  server.<host_provider>.com.
        dev  IN  CNAME  server.<host_provider>.com.
    So www.example.com and dev.example.com go to the production and development sites, respectively, which are hosted by a hosting company. In my Apache configuration for the main site, I'm running a rewrite rule like this:
        RewriteEngine ON
        RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE]
    This rule seems to work: when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is that when I'm on the network and go to example.com, I get an error page saying the page can't be found. No server errors; just a page-not-found, as if the local DNS causes the lookup to stop at that point. I'm also using Nettica for DNS service and have this A record in place:
        example.com    Host (A)    Default    xxx.xx.xxx.xx
    This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network I can go to servers with addresses like server.example.com, server1.example.com, and server2.example.com; these are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding this since it might not be clear: if I'm outside the example.com network, on another network like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, i.e. the Apache rewrite rule is working.
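    A sketch of what the internal zone could carry so the bare name resolves inside the network as well; the address is a placeholder for whatever the external Nettica A record points at, and an A record is used because a CNAME is not allowed at the zone apex:
        ; internal zone for example.com - placeholder IP for the external web host
        @    IN  A      203.0.113.10
        www  IN  CNAME  server.<host_provider>.com.
        dev  IN  CNAME  server.<host_provider>.com.
    With that record in place, internal clients hitting example.com reach the web server and the existing rewrite rule handles the hop to www.example.com.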

    Read the article
