Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 321/389 | < Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >

  • Network Load Balancing, intermittent port problem on Windows Server 2008

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem on a Windows Server 2008 NLB cluster. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; NLB is set up to load balance both WFE servers on any incoming traffic to the IP 192.168.1.200. Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it displays "page not found". Pinging 192.168.1.200 gives responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on the individual WFEs (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we haven't had a chance to try it yet, but I think it would work) is that connecting remotely to an individual server would also respond just fine, while any attempt to connect using the virtual IP (192.168.1.200) fails miserably. The funny thing is, after a while it returns to normal. Has anyone had similar experience with this type of problem while implementing NLB? We are doing this in a virtual environment.
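
    A hedged first check, since the question doesn't say whether it was tried: while the VIP is unreachable, query the NLB convergence state on each front end from an elevated command prompt.

        REM Run on each WFE during an outage (nlb.exe ships with the NLB feature;
        REM wlbs is the legacy alias for the same tool)
        nlb query
        wlbs query

    A host that is stuck "converging", or that flips in and out of the converged state, points at the cluster heartbeat rather than SharePoint. In virtual environments this is commonly caused by the virtual switch filtering the shared cluster MAC address (unicast mode) or dropping multicast frames.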


  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (both Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

        top - 10:50:05 up 305 days, 21:17,  1 user,  load average: 1.94, 2.52, 2.97
        Tasks: 141 total,   2 running, 139 sleeping,   0 stopped,   0 zombie
        Cpu(s): 41.5%us,  6.5%sy,  0.0%ni, 51.8%id,  0.0%wa,  0.2%hi,  0.1%si,  0.0%st
        Mem:   8178432k total,  5753740k used,  2424692k free,   159480k buffers
        Swap: 15625208k total,        0k used, 15625208k free,  4905292k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      20   0 23928 2072 1216 S    0  0.0   0:56.42 init
            2 root      20   0     0    0    0 S    0  0.0   0:00.01 kthreadd
            3 root      RT   0     0    0    0 S    0  0.0   0:01.23 migration/0
            4 root      20   0     0    0    0 S    0  0.0   2:39.82 ksoftirqd/0
            5 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            6 root      RT   0     0    0    0 S    0  0.0   0:02.99 migration/1
            7 root      20   0     0    0    0 S    0  0.0   2:32.15 ksoftirqd/1
            8 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
            9 root      RT   0     0    0    0 S    0  0.0   0:11.67 migration/2
           10 root      20   0     0    0    0 S    0  0.0  29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have run top several times, so I am sure I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by each of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug?

    Update: If I use top -p <PID> for a known, busy process, top still displays 0% CPU for that process.
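
    A quick cross-check (a hedged suggestion, not from the original question): compare top against a tool that computes CPU time deltas itself, such as pidstat from the sysstat package, or the raw /proc counters. If these show non-zero usage while top shows 0%, the kernel's accounting is fine and the fault is in top's display. The PID below is a placeholder.

        # Sample per-process CPU for a busy PID, once per second, five times
        pidstat -u -p 1234 1 5

        # Or watch the raw utime/stime counters grow (fields 14 and 15 of
        # /proc/PID/stat; note a command name containing spaces can shift fields)
        awk '{print "utime:", $14, "stime:", $15}' /proc/1234/stat
        sleep 5
        awk '{print "utime:", $14, "stime:", $15}' /proc/1234/stat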


  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (on the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPSes, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products seem to be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage unit, and I'm wondering if there is any clear advantage to one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, along with a shared filesystem like GFS2 or OCFS2. Both setups should also allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of any local storage). On the other hand, I would have thought the SAS option to be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?). (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes for other vendors.)

    I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.
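
    For what it's worth, direct-attached iSCSI from Linux hosts is straightforward with open-iscsi; a minimal sketch (the target IP is an example, and multipathing across both controllers is left out):

        # Discover targets on one of the MD3200i's GigE ports
        iscsiadm -m discovery -t sendtargets -p 192.168.130.101

        # Log in to the discovered target(s)
        iscsiadm -m node --login

        # The LUNs then appear as ordinary block devices (check dmesg / lsscsi),
        # ready for shared use under a cluster filesystem such as GFS2 or OCFS2.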


  • Exchange 2010 setup "cannot verify Host (A) record" warning

    - by Joost Verdaasdonk
    When I try to install Exchange 2010 on my Server 2008 R2 server, I get a warning during the prerequisites check:

        Warning: setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server: 90.195.200.12.

    The goal of this Exchange setup is that I'm able to send email in my local domain as well as receive/send email through the public domain name. Some information about my setup: this server is going to be a dedicated Exchange host and has the following IP configuration (the IPs are examples, not the real ones, of course):

        Local VLAN NIC:
          IP: 10.10.50.22
          Subnet: 255.255.255.0
          No gateway
          DNS: 10.10.50.1 (the domain controller with authoritative DNS)

        Public WAN NIC:
          IP: 90.195.200.148
          Subnet: 255.255.255.235
          Gateway: 90.195.200.145
          DNS: 90.195.200.12 | 190.160.230.14

        My public domain - exampledomain.com:
          A record: mail - IP: 90.195.200.148
          MX record: IP: 90.195.200.148

    As I see it now, Exchange setup is looking for the A record on one of the DNS servers of my public WAN NIC, and of course that is not where my A records are defined. I have those A records in two places:

        1. In the domain controller DNS (on the private NIC)
        2. In the online DNS registration of my public domain (exampledomain.com)

    My question is: is this warning going to be a problem? Can I do something better in my setup so that this warning will go away? Please advise.
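
    A quick way to see what setup sees (a hedged suggestion, using the question's example names): from the Exchange box, ask each configured resolver for the host's A record.

        REM The internal DC first, then the public resolver the warning names
        nslookup mail.exampledomain.com 10.10.50.1
        nslookup mail.exampledomain.com 90.195.200.12

    Setup queries the DNS servers configured on the network adapters; since the public NIC lists 90.195.200.12 and that resolver holds no record for the machine's internal name, the warning is expected and is generally cosmetic, as long as internal and external lookups both resolve correctly.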


  • Using a Layer 2 switch as a core switch

    - by imtech
    I have a small user base of about 20 people on at a time, spiking to about 80 during peak times. Most people (80+%) are connected over our Aruba managed wireless system. We have a Windows domain. We have three 24-port switches all connecting back to a central 48-port switch, where the additional access ports, firewall, servers, and wireless controller all centrally connect. It's a flat network with dumb switches.

    I'm in the process of upgrading our infrastructure. Cisco pricing for switches is pretty high for us, so I've been looking at HP ProCurves, which seem to be within our budget range. I want to eventually make use of 802.1X, SNMP, QoS for possible VoIP upgrades, and VLANs to separate a guest VLAN from authenticated users, among other more advanced features; a configuration sketch follows below. PoE would be nice, but that's probably too expensive for us.

    I was thinking of having our core switch be a ProCurve 2610 and the rest of the switches that centrally connect to it be ProCurve 2510s. A true, full-blown layer 3 switch is way out of our price range, but a 2610 seems good enough for us. The 2610 does static routing, which ought to suffice, but I'm in unfamiliar territory, so I'm looking for any gotchas. Also, should all the switches be 2610s, or just the core switch? Do I even need the 2610 - can I just go with all 2510s? I'm new to VLANs as well, so I'm not sure exactly what I need, but I would like an affordable infrastructure that won't need replacing 2-3 years down the line because I chose a product that was lacking.
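
    To make the VLAN side concrete, a minimal ProCurve-style sketch (a hedged example - the VLAN IDs, names and addresses are made up, and exact syntax can vary between 2510/2610 firmware revisions):

        configure
        vlan 10
           name "Users"
           untagged 1-20
           exit
        vlan 20
           name "Guest"
           untagged 21-24
           exit
        ip routing
        vlan 10 ip address 192.168.10.1 255.255.255.0
        vlan 20 ip address 192.168.20.1 255.255.255.0
        ip route 0.0.0.0 0.0.0.0 192.168.10.254

    With static routing enabled on the 2610 core, the edge switches only need to carry the VLANs as tagged uplinks, so plain layer 2 units like the 2510 are fine at the edge; only the core needs to be a 2610.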


  • Postfix: Modify sender address based on recipient

    - by PJ P
    We have a Postfix server that receives mail from our application servers. Senders are in the form [email protected] (where host.fqdn can vary, depending on the source server) and recipients can be internal or external users. Messages going to external users should have the sender changed to [email protected]. I have tried using canonical maps, but since that is handled by the cleanup daemon, before any transport decisions are made, it would affect all sender addresses. I have also tried creating a custom smtp transport with generic mappings and configuring transport_maps to use that custom smtp transport for external domains. However, generic mappings affect both sender and recipient addresses. Lastly, I've tried the following:

        1. Create a custom smtpd daemon that specifies sender canonical maps and a unique transport table.
        2. Send all externally addressed mail to that custom daemon.

    Ideally, sender canonical maps would transform the sender address and the unique transport table would relay messages to the internet. However, evidently, only one transport table can be used per Postfix instance. I want to avoid creating an entirely new Postfix instance to accommodate this rewriting. Any suggestions? (and thanks in advance)
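
    One approach that may sidestep the generic-maps objection (a hedged sketch, untested against this setup - domain names are placeholders): give the external transport its own smtp_generic_maps whose patterns only match the internal sender forms, so external recipient addresses never hit a rule and pass through untouched.

        # /etc/postfix/master.cf -- a second smtp client with its own generic map
        external-smtp unix -    -    n    -    -    smtp
            -o smtp_generic_maps=hash:/etc/postfix/generic_external

        # /etc/postfix/transport -- internal domains keep their current transport
        # (shown as local: here), everything else uses the rewriting client
        example.internal    local:
        *                   external-smtp:

        # /etc/postfix/generic_external -- patterns match only the internal
        # host domains, so recipients at outside domains are left alone
        @host1.fqdn.example    no-reply@company.example
        @host2.fqdn.example    no-reply@company.example

    Remember transport_maps = hash:/etc/postfix/transport in main.cf and postmap on both files. This still uses the single transport table, but the rewriting hangs off the transport rather than off cleanup, which is the wall the canonical-map attempts ran into.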


  • SQL Server 2008: bringing a database online tries to open a file from a drive that doesn't exist

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf".
        Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have domain keys and Active Directory set up, I had to hack my process by using the same username/password on both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server, and I have no idea why it's trying to open a file from an E drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas?
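
    If the stray path is just stale metadata for that full-text catalog file, one commonly suggested route (hedged - the logical file name below is inferred from the physical name, so check sys.master_files first, and the target path is a placeholder) is to repoint the file before bringing the database online:

        -- See which files the database thinks it has, and where
        SELECT name, physical_name FROM sys.master_files
        WHERE database_id = DB_ID('Go3D_Retailer');

        -- Repoint the full-text catalog file to a path that exists on this server
        ALTER DATABASE [Go3D_Retailer]
        MODIFY FILE (NAME = N'ftrow_Go3D_catalog',
                     FILENAME = N'D:\SQLData\ftrow_Go3D_catalog.ndf');

        -- Then try bringing it online again
        ALTER DATABASE [Go3D_Retailer] SET ONLINE;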


  • Wake OS X 10.8 over WiFi (WoWL - Wake on WiFi LAN)

    - by WrinkledCheese
    I have a stack of Apple Mac minis running SSH servers for remote login. The problem is that I can't seem to get them to wake up. From what I gathered, as of Mac OS X 10.7 you are required to have a boot-time option set: darkwake=0 on 10.7 and darkwake=no on 10.8. So I tried this, and then I came to the realization that this will probably work for a wired connection, but I'm using WiFi. My wired connections are used for another local subnet without Internet access, so I have to get the machines to wake on WiFi. I realize that I could just set the stack of Mac minis to never sleep, but I'm looking for a sleep-enabled option. These services don't require fast initial response: once the connection is made they will be active, and once they are no longer active they will hopefully go back to sleep. I have a FreeBSD box running avahi-daemon in order to try to wake the Macs with the Bonjour service, but it doesn't seem to work. I tried registering the service as Gordon suggested in the post below, but that just means there isn't a timeout when discovering and resolving services; it still doesn't allow SSH connections to port 22 when the machine is asleep. For reference, I want something like what Gordon Davisson described in this question: Wake on Demand for Apache server in OS X 10.8
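
    For completeness, the standard power-management toggles involved (a hedged aside - these are stock OS X commands, not a confirmed fix for the WiFi case):

        # "Wake for network access" (womp = wake on magic packet)
        sudo pmset -a womp 1

        # The darkwake boot argument mentioned above is set via nvram, e.g. on 10.8:
        sudo nvram boot-args="darkwake=no"

        # Inspect the current settings, including womp and standby:
        pmset -g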


  • Exchange Connector Won't Send to External Domains

    - by sisdog
    I'm a developer trying to get my .NET application to send emails out through our Exchange server. I'm not an Exchange expert, so I'll qualify that up front! We've set up a receive connector in Exchange with the following properties:

        Network: allows all IP addresses via port 25.
        Authentication: Transport Layer Security and Externally Secured checkboxes are checked.
        Permission Groups: Anonymous Users and Exchange Servers checkboxes are checked.

    But when I run this PowerShell statement right on our Exchange server, it works when I send to a local domain address but fails when I send to a remote domain.

    WORKS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

    (BTW: my value for OURSERVER is boxname.domainname.local, the same fully-qualified name that shows up in our Exchange Management Shell when I launch it.)

    FAILS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

        Send-MailMessage : Mailbox unavailable. The server response was: 5.7.1 Unable to relay
        At line:1 char:17
        + Send-Mailmessage <<<< -To [email protected] -From [email protected] -Subject testing -Body himom -SmtpServer FTI-EX
            + CategoryInfo          : InvalidOperation: (System.Net.Mail.SmtpClient:SmtpClient) [Send-MailMessage], SmtpFailedRecipientException
            + FullyQualifiedErrorId : SmtpException,Microsoft.PowerShell.Commands.SendMailMessage

    EDIT: On @TheCleaner's advice, I ran Add-ADPermission against the relay connector, and it didn't help:

        [PS] C:\Windows\system32> Get-ReceiveConnector "Allowed Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"

        Identity               User                  Deny  Inherited
        --------               ----                  ----  ---------
        FTI-EX\Allowed Relay   NT AUTHORITY\ANON...  False False

    Thanks for the help. Mark
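
    A sanity check worth running (a hedged suggestion): Exchange hands each session to the receive connector whose RemoteIPRanges most specifically matches the client, so the traffic may not be hitting "Allowed Relay" at all. Listing the connectors shows which one can claim the sending machine's IP:

        Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,PermissionGroups,AuthMechanism

    If the sending box only falls within the default connector's range, the Ms-Exch-SMTP-Accept-Any-Recipient grant on "Allowed Relay" never comes into play for those sessions.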


  • Home network with two separate, isolated subnets, running on a cable modem/router and a WRT router

    - by Johan Allgoth
    I have a new connection with a nice new router/cable modem. I'd like to set it up optimally and need some pointers. I am a complete n00b when it comes to routing. I want to end up with two separate subnets, 10.1.2.0/24 and 192.168.1.0/24, each available on its own wireless channel/SSID, and both firewalled. I want my wired computers on the gigabit switch, optimally with public IPs. I want to be able to reach 192.168.1.0/24 from 10.1.2.0/24, but not vice versa. Everyone should have internet access.

    Hardware and capabilities:

        Netgear CG3100
          - Handles the cable connection. Gigabit switch. 802.11n.
          - Can do DHCP, firewall, NAT, etc. Can choose the subnet.
          - Can turn off NAT and, if so, hand out up to 4 public IPs.
          - Somewhat challenged when it comes to configuration.

        WRT router
          - Runs DD-WRT/OpenWRT very stably. 100 Mbit switch. 802.11g.
          - Can do DHCP, firewall, NAT, etc. Can choose the subnet.
          - Highly configurable.

    I hope to be able to keep 10.1.2.0/24 on the CG3100, for speed reasons, and 192.168.1.0/24 on the WRT router, for quota and user control reasons. On my 10.1.2.0/24 network I plan on running servers for various services. Should I turn off NAT on the WRT router? Or on the cable modem? What should I activate in that case? Is double NAT always f-ed up?
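
    On the DD-WRT/OpenWRT side, the one-way reachability is the easy part; a hedged sketch of the firewall rules (which box these belong on, and the interface details, depend on which device ends up routing between the two subnets):

        # Allow the server subnet to open connections into the WRT subnet
        iptables -A FORWARD -s 10.1.2.0/24 -d 192.168.1.0/24 -j ACCEPT
        # Allow replies to connections the server subnet initiated
        iptables -A FORWARD -s 192.168.1.0/24 -d 10.1.2.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Block everything else from the WRT subnet toward the servers
        iptables -A FORWARD -s 192.168.1.0/24 -d 10.1.2.0/24 -j DROP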


  • keepalived issues on Xen domU

    - by David Cournapeau
    Hi,

    I cannot manage to run keepalived correctly on Xen domUs. I am following this link for configuration, and it works great on some local VMs (running under KVM). If I set up the exact same configuration on Xen domUs, it does not work: the two servers do not see each other, and each decides to be master (10.120.100.99 being the virtual IP).

        $ sudo ip addr sh eth0   # host1
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:16:3e:78:f5:31 brd ff:ff:ff:ff:ff:ff
            inet 10.120.100.104/24 brd 10.120.100.255 scope global eth0
            inet 10.120.100.99/32 scope global eth0
            inet6 fe80::216:3eff:fe78:f531/64 scope link
               valid_lft forever preferred_lft forever

        $ sudo ip addr sh eth0   # host2
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:16:3e:51:36:20 brd ff:ff:ff:ff:ff:ff
            inet 10.120.100.105/24 brd 10.120.100.255 scope global eth0
            inet 10.120.100.99/32 scope global eth0
            inet6 fe80::216:3eff:fe51:3620/64 scope link
               valid_lft forever preferred_lft forever

    Is there a way I could debug this? Some people seem to be able to run keepalived on Xen, according to mailing list posts, but without much info on their configs.
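
    A hedged debugging step for the split-brain symptom: both nodes claiming master usually means the VRRP advertisements (IP protocol 112, multicast 224.0.0.18) never reach the peer, which is easy to verify directly.

        # On each domU: do the peer's advertisements arrive?
        sudo tcpdump -ni eth0 ip proto 112

        # If nothing arrives from the other host, the Xen bridge is likely
        # dropping multicast. Newer keepalived versions can use unicast peers
        # instead, inside vrrp_instance:
        #   unicast_src_ip 10.120.100.104
        #   unicast_peer {
        #       10.120.100.105
        #   }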


  • Backup Exec 10 - Network connection to the remote agent has been lost

    - by jherlitz
    Okay, so I have 4 remote offices, all running off of 3 Mb Ethernet connections. Two sites are part of a WAN, and two sites use 3 Mb connections over a site-to-site tunnel. I am using Backup Exec 2010 and have the remote agent installed on all the remote servers. For the past few weeks, the backups of the two sites running over the site-to-site tunnel have been failing with the following error message:

        "The network connection to the Backup Exec Remote Agent has been lost. Check for network errors."

    We used to be on a DSL site-to-site tunnel; now we have changed to the 3 Mb Ethernet connection using a site-to-site tunnel. I still have to find out whether it has been failing ever since we changed, or only recently. Backup Exec support is telling me it is a network issue, but my connection to those servers is solid - we don't have any issues or outages. So I am baffled as to why this continues to fail, and why just those two sites. Any advice?


  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so domain users get Gmail, Calendar, etc. But I also want to be able to send application notifications to users outside the domain via email (e.g. "someone commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp, to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.), but I can't send application messages that way. I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain, but my understanding of this is rather superficial, so someone please correct me if I'm wrong. What sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it that way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a subdomain, so as not to conflict with the bare domain being managed by Google?

    Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either Google's servers or any server with a DNS name ending in mydomain.com?

        "v=spf1 ptr mx:google.com mx:googlemail.com ~all"

    How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's IP address, does that mean reverse lookup will only allow emails sent from [email protected]?
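
    For comparison, a hedged alternative to the record quoted above: Google's documented SPF mechanism is include:_spf.google.com rather than mx:google.com, and the ptr mechanism is widely discouraged (it is slow and unreliable for receivers to evaluate). A record along these lines keeps Google authorized and adds the notification box itself:

        v=spf1 include:_spf.google.com a:mailsender.mydomain.com ~all

    Reverse DNS does not restrict sender addresses; it only needs the MTA's IP to resolve back to a hostname that matches what the server announces in HELO.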


  • Set up Entourage for Exchange via HTTP communication

    - by Johandk
    Our ISP set up a hosted Exchange server for all our mail. I've set up all our Outlook users with no problems. We have two people using Mac OS X Leopard and Entourage. Entourage has the option of adding an Exchange account, but I have no idea how to tell it to connect to Exchange via HTTP. Here's an excerpt from the client setup docs the hosting company sent me for Outlook:

        1. Go to Control Panel.
        2. Select 'Mail'.
        3. Select 'Email accounts'.
        4. Under the E-mail tab select 'New'.
        5. Select 'Manually configure server settings...' - click Next.
        6. Select 'Microsoft Exchange' - click Next.
        7. Complete the details as below, with Microsoft Exchange Server as: [server address]
        8. Do not select 'Check Name'. Instead select 'More Settings'.
        9. Go to the Connection tab, select the bottom option, 'Connect to Microsoft Exchange using HTTP', and then select the 'Exchange Proxy Settings' button.
        10. Enter the proxy server for Exchange.
        11. Check 'Only connect to proxy servers that have this principal name in their certificate', and enter msstd:[servername]
        12. Proxy Authentication: select Basic Authentication.
        13. Select OK, and again, so that you return to the main screen. Now select 'Check Name' and enter the username and password. The username should now be the full name and underlined. If so, select Next, and then Finish. Next time you open Outlook, enter the username and password.

    Any help GREATLY appreciated.


  • Dell R320 RAID 10 with CacheCade

    - by Geekman
    I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS. Right now I'm fairly settled on 4 x 600 GB 3.5" 15K RPM SAS drives in a RAID 1+0 array. This should give good performance, but if possible I also want to add an SSD cache into the mix, and I'm not sure there's enough room. According to the tech specs, there are only up to 4 total 3.5" drive bays available. Is there any way to fit at least a single SSD alongside the 4 x 3.5" drives? I was hoping there's a special spot to put the cache SSD (though from memory, I doubt there'd be room). Or am I right in thinking that the cache drives are simply drives plugged in "normally" just like any other drive, but nominated as CacheCade drives in the PERC controller? Are there any options for having both the 4 x 600 GB RAID 10 array and the SSD cache drive? Based on the tech specs (with up to 8 x 2.5" drives), maybe I need to use 2.5" SAS drives, leaving four bays spare - plenty of room for the SSD cache drive. Has anyone achieved this using 3.5" drives somehow?


  • Troubleshooting high latency on CentOS

    - by Sarah Weinberger
    I have two CentOS servers connected via a 10 Gb fiber optic cable, with a network emulator connected between them. All three units sit on a desk in the lab. There is also a regular 1 Gbit Ethernet cable connected to each of the machines to provide internet connectivity. When I set the latency to roughly below 30 ms, all is fine. When the latency gets to 70 ms and above, and definitely at 130 ms, the network layer stalls. For instance, if I set the latency (delay) to 70 ms, then launching TeamViewer (or any other application that uses network connectivity) never completes or does not work. There is no timeout message, simply no response. I have to lower the latency back down to zero to see any response and have the box start working again. What is the problem, and how would I go about fixing it? It seems to me some setting in Linux is causing one of the CentOS networking drivers to sit in an infinite loop or something.

    eth0 is the connection to the internet; all its settings are default. eth2 is the 10 Gbit fiber optic connection to the other computer, with the MTU set to 9600 and all other parameters at default values.
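
    A hedged place to look: behaviour that breaks only past a latency threshold often points at TCP window scaling or buffer limits rather than the driver. Comparing the relevant sysctls and a packet capture on both ends would confirm or rule that out.

        # Is window scaling enabled (it should print 1)?
        sysctl net.ipv4.tcp_window_scaling

        # Current TCP read/write buffer limits (min / default / max, in bytes)
        sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

        # Watch handshakes across the emulated link; a SYN that advertises
        # window scaling but gets no scaled reply is a red flag
        tcpdump -ni eth2 'tcp[tcpflags] & (tcp-syn) != 0'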


  • Is .htaccess slowing down my dedicated server?

    - by David Robles
    First of all, I consider myself more of a programmer than a server guy. I have a website that receives about 3,000 visits per day, which I think is a lot less than the max capacity of a dedicated server. However, I've noticed that the connection to the website is pretty slow, e.g., loading images, connecting via SSH, etc. I recently configured .htaccess to prevent hotlinking of images on my server (i.e. .jpg, .gif and .png), and I was wondering if that could be slowing down my website. This is the configuration that I have:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp|swf)$ http://www.google.com/ [R,NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I found some code to do this on Google and just copied it into .htaccess, since I'm not an expert in Apache. It works, but I don't know if it is the best way to do it. How can I tell whether this is the reason the server is slow? Are there any tools to monitor it? What would you guys do? Thanks in advance!
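
    One cheap experiment (a hedged suggestion; the image path below is hypothetical): benchmark a static image with the hotlink rules in place and again with them commented out, using ApacheBench from another machine. Per-request .htaccess parsing and mod_rewrite are rarely the bottleneck at 3,000 visits/day, and the before/after numbers would show it either way.

        # 100 requests, 10 concurrent, against an image covered by the rewrite rules
        ab -n 100 -c 10 http://www.mysite.com/wp-content/uploads/example.jpg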


  • Is a timeout in tracert output an indication of an error?

    - by nitramk
    TCP/IP packets sent from my computer to a remote server do not always reach their destination and end up being retransmitted, sometimes several times, before they succeed. To troubleshoot this, I'm running a tracert to the server:

        Tracing route to <site> [<address>]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  mymachine
          2    <1 ms    <1 ms    <1 ms  gw.levonline.com [217.70.32.30]
          3    <1 ms    <1 ms    <1 ms  81.201.213.218
          4    <1 ms    <1 ms    <1 ms  bmf1-hmf1.driften.net [81.201.213.12]
          5    <1 ms    <1 ms    <1 ms  10ge-2-4-cr2.a1.sth.ownit.se [84.246.88.157]
          6    <1 ms     *       <1 ms  netnod-ix-ge-b-sth-4470.microsoft.com [195.69.11.181]
          7    26 ms     *        *     ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.1]
          8    48 ms    57 ms    56 ms  ten9-1.lts-76e-1.ntwk.msn.net [207.46.42.133]
          9     *        *        *     Request timed out.

    At hops 6 and 7, I'm seeing timeouts while waiting for the reply (as seen above). Running the same tracert many times gives varying output; sometimes the response is fine, but sometimes I get this timeout for 1, 2, or sometimes all 3 packets. The timeout always starts at the same server, netnod-ix-ge-b-sth-4470.microsoft.com. I've tried setting the tracert timeout to 10 seconds, but I am still getting the timeouts. Running tracert towards other servers does not produce the same timeouts. Microsoft network technicians tell me that the problem is not on "their" side. Are these timeouts an indicator of a lost packet on the specific node that did not respond? Are the timeouts an indication of a real problem, or is this normal?
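
    Worth noting (a hedged aside on general tracert behaviour rather than a diagnosis of this case): routers commonly rate-limit or deprioritize the ICMP TTL-exceeded replies that tracert relies on, so a '*' at an intermediate hop does not by itself mean that forwarded traffic is dropped there. A longer-running view of per-hop loss is available with:

        pathping <site>

    pathping probes each hop over several minutes and reports loss percentages; loss that appears at one hop but vanishes at later hops is usually just ICMP deprioritization, while loss that persists through to the destination indicates a real path problem.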


  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and am trying to use it myself. At the moment, each HAProxy box has two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed one of their own for management between the boxes). On both machines, the backend card (eth1) has a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0, called eth0:0, which has a third public IP address. I just about understand how to use it for load balancing between the multiple web servers behind it, but I am failing to load balance between the two HAProxy boxes themselves - they appear to fight for the virtual IP, and this does not seem like a smart solution. Now, by using the virtual shared IP address, this setup appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Has anyone done this before, and can you advise anything for maximum uptime?
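
    Since keepalived is already on the radar: the usual pattern is active/passive failover of the shared IP via VRRP, so the boxes negotiate ownership instead of fighting over it. A minimal sketch (the interface, password and example VIP are placeholders; the second node gets state BACKUP and a lower priority):

        # /etc/keepalived/keepalived.conf on the primary HAProxy box
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass examplepw
            }
            virtual_ipaddress {
                198.51.100.10
            }
        }

    Both HAProxy instances stay configured identically; whichever node holds the VIP receives the traffic, and VRRP moves the address within a second or two if the master fails.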


  • Cannot establish XMPP server-to-server connection with gmail

    - by v_2e
    My Jabber server fails to connect to gmail.com, giving the error:

        outgoing s2s stream myserver.com.ua->bot.talk.google.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)

    I am using the Prosody XMPP server. It works just fine with the other Jabber servers I have tested so far (e.g. jabber.ru). However, when one of my clients tries to add a Gmail contact to his contact list, the subscription request lasts forever, and Prosody gives the following sequence of messages in its log:

        Oct 21 22:57:16 s2sout95897f8 info Beginning new connection attempt to gmail.com ([173.194.70.125]:5269)
        Oct 21 22:57:16 s2sout95897f8 info sent dialback key on outgoing s2s stream
        Oct 21 22:57:16 s2sout95897f8 info Session closed by remote with error: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)
        Oct 21 22:57:16 s2sout95897f8 info outgoing s2s stream myserver.com.ua->gmail.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)
        Oct 21 22:57:16 s2sout95897f8 info sending error replies for 2 queued stanzas because of failed outgoing connection to gmail.com

    (Here I use myserver.com.ua for the domain name of my server.) I found a similar problem described in this thread, but there is no detailed description of the solution there. As for the Google services, I did have a Google account where I added the domain name in question on the Webmaster Tools page. However, I deleted that account long ago, so it is now unclear how any Google service can relate to my domain name. So my question is: what is the real cause of this problem (my Jabber server configuration, the leftover Google account, or something else), and how can I make my Prosody server connect to the gmail.com Jabber service?
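
    One concrete thing to check (a hedged suggestion): Google's s2s decides how to treat a domain based on its public DNS, so it is worth confirming what the outside world sees for the domain's XMPP records.

        # Server-to-server SRV records for the domain (should point at the Prosody box)
        dig +short SRV _xmpp-server._tcp.myserver.com.ua

        # Stale google.com / talk.google.com targets here would explain Google
        # refusing dialback for the domain
        dig +short SRV _xmpp-client._tcp.myserver.com.ua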


  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try to achieve local development parity with our live servers. I've symlinked /var/www/html to the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing using Sublime Text 2 on OS X Mountain Lion. Once I figured out that iptables was tripping me up, all was well and good - until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X, but when I make an edit, for example removing a paragraph, and refresh, the content is truncated, with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made, but they aren't reflected even when I curl localhost. I strongly suspect it's a problem with my Apache settings (with which I didn't really tinker), as the issue doesn't arise when I stop Apache and instead run sudo python -m SimpleHTTPServer 80 in the directory to serve the pages. What gives?
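
    For what it's worth, stale or truncated content served from VirtualBox shared folders is a well-known symptom of Apache using sendfile on vboxsf; the commonly suggested workaround is to turn it off (and mmap too, if enabled) and restart Apache:

        # e.g. in /etc/httpd/conf/httpd.conf on CentOS
        EnableSendfile Off
        EnableMMAP Off

    Python's SimpleHTTPServer doesn't use sendfile, which would fit the observation that pages look correct when served that way.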


  • Apache multiple vhost logs, stored locally and sent to remote logstash

    - by benbradley
    I'm investigating centralised logging, and it seems there are so many different ways this can be done. I don't want to run logstash as a log "sender", preferring to keep the web servers as lean and simple as possible. So that means using syslog, syslog-ng or, the one I'm testing now, rsyslog. But I would like to have separate vhost log files on the web server, in addition to these logs being sent to a remote log collector. I've tested rsyslog using the imfile module to watch the Apache log files, but this means I have to hard-code each vhost log file into my rsyslog.conf - not ideal, as people will invariably forget when they add or remove sites on the server. The reason I'm using rsyslog's imfile is that Apache doesn't appear to let you log both to file and to syslog, and I want to keep vhost-specific log files on the web server. So how can I do this? Is there a way of having rsyslog produce local log files and forward the logs to a remote collector? I am prepared to change my Apache config to log to a single access/error log for all vhosts, so long as vhost-specific log files are produced somewhere on the web server machine. I just don't want to lose any logging info if the remote log collector can't be contacted for any reason.

    Any comments/suggestions?

    Cheers, B
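
    One pattern that avoids imfile entirely (a hedged sketch - the paths, tag and local6 facility are examples): Apache can log to a pipe, so each vhost can tee its access log to a local file and into syslog at the same time, and rsyslog then forwards with a disk-assisted queue so nothing is lost while the collector is unreachable.

        # In each vhost's Apache config (piped-log commands are normally run
        # through a shell on Unix; if not, wrap the pipeline in a small script):
        CustomLog "|/usr/bin/tee -a /var/log/apache2/site1-access.log | /usr/bin/logger -t apache-site1 -p local6.info" combined

        # rsyslog.conf: forward local6 to the collector, spooling to disk on failure
        $WorkDirectory /var/spool/rsyslog
        $ActionQueueType LinkedList
        $ActionQueueFileName fwdq
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        local6.* @@logcollector.example.com:514

    The cost is one tee/logger pair per vhost; the gain is that adding a vhost only needs its own CustomLog line, with no rsyslog changes.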


  • Sporadic email delivery to one user

    - by minamhere
    I have a user who occasionally does not receive emails from outside our organization. It does not seem to matter whether the other person is replying to an earlier email or sending a new message. I have checked Exchange System Manager, and there is no record of the sender at all during the time period. There is no record of the message being captured by the spam software (GFI MailEssentials), and the sender does not receive an NDR or any other indication that the message didn't arrive. It seems to me that these messages are not getting to our servers at all. But this is only impacting one user (that I am aware of), and not all the time: some messages get through without any problem, others just disappear. The senders are not related at all - one is in another country, one uses AOL, one uses a corporate Exchange server locally. I can't seem to find a pattern. Where else can I look to try to figure out where these messages are going or getting captured? Are there additional logs I can enable, either within GFI or Exchange, that might shed some light on this? Thanks. We are using Exchange 2003 on Server 2003; the desktop client is Outlook 2003 on Windows XP Pro.


  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Windows 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP - they are in a WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator, but IIS gets "access denied". Here's what I have done:

        - Set File Sharing = ON
        - Set Password Protected Sharing = OFF
        - Set Public Folder Sharing = ON
        - Shared the folder
        - Added permission to the share: Everyone, Full Control
        - Added permission to the share: NETWORK SERVICE, Full Control
        - Verified that File & Printer Sharing is checked in Windows Firewall
        - Opened port 445 to inbound traffic from local sources

    I tried adding <remote-machine-name>\NETWORK SERVICE to the share, but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given Full Control and password-protected sharing turned off, it would not matter what the client user account is. In any case, how do I solve this?

    UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather, I am writing files to the share from my code (System.IO).
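
    A hedged sketch of the usual workaround for workgroup (non-domain) setups: NETWORK SERVICE authenticates to a remote machine as the computer account, which means nothing in a workgroup, so access falls back to anonymous. Creating the same local user with the same password on both machines and running the app pool as that user gives the file server a credential it can match. The account name, password and pool name below are examples.

        REM On BOTH machines: create a matching local account
        net user svc_share S0mePassw0rd! /add

        REM On the file server: grant that account access on the share and NTFS ACL

        REM On the IIS box: run the application pool as that user
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" ^
            /processModel.identityType:SpecificUser ^
            /processModel.userName:svc_share /processModel.password:S0mePassw0rd!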


  • 530 5.7.1 "Client was not authenticated" on Exchange 2010 for some computers within a mask

    - by user1636309
    We have a classic problem with "Client was not authenticated", but with a specific twist. Our setup:

        - We have an Exchange 2010 cluster, let's say EX01 and EX02; connections always go to smtp.acme.com and are then switched through the load balancer.
        - We have an application server, call it APP01.
        - There are clients connected to APP01.
        - There is a need for anonymous mail relay from both the clients and APP01.

    The Anonymous Users setting on Exchange is DISABLED, but the specific computers - APP01 and the clients by mask, let's say 192.168.2.* - are enabled. For internal relay, a "Send Connector" is created, and the above IP addresses are added to the connector to allow computers, servers, or any other device, such as a copy machine, to use the Exchange server to relay email to recipients. The problem is that the relay works for APP01 and some clients, but not others (we get "Client not Authenticated") - all inside the same network and the same mask. This is basically what we do to test it outside of our application: http://smtp25.blogspot.sk/2009/04/530-571-client-was-not-authenticated.html

    So, I am looking for ideas:

        1. What can be the reason for such strange behaviour?
        2. Where can I see a trace of what's going on at the Exchange side?
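
    For the tracing question, a hedged pointer (the connector names below are placeholders): Exchange 2010 can log every SMTP session on a receive connector, which shows which node and which connector handled a failing client, and what source IP it presented.

        # Enable verbose protocol logging on the relay connector on both nodes
        Set-ReceiveConnector "EX01\Relay" -ProtocolLoggingLevel Verbose
        Set-ReceiveConnector "EX02\Relay" -ProtocolLoggingLevel Verbose

        # Logs appear under TransportRoles\Logs\ProtocolLog\SmtpReceive in the
        # Exchange install directory; compare a working session with a failing one.
        # Note: if the load balancer SNATs connections, Exchange may see the LB's
        # IP instead of 192.168.2.*, which would break the allowed-range match
        # for exactly some clients.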

