Search Results

Search found 13411 results on 537 pages for 'proxy servers'.

Page 189 of 537

  • Unable to telnet out on port 25 on Windows Server 2008

    - by NickGPS
    Hi All, I just set up a Windows 2008 R2 server and am trying to get a basic mail server up and running so that I can send emails from my applications. I set up a virtual SMTP server in IIS 6 and tried a local telnet to port 25, which seemed to work fine. There were no errors during this stage and I can see the mail message appear in the Queue folder. The problem is that mail never leaves the Queue folder. I then tried to telnet to a remote mail server on port 25 but couldn't connect:

        telnet 209.85.227.27 25
        Could not open connection to the host, on port 25: Connection failed

    I checked my firewall and there is a default setting to allow all outgoing TCP traffic with no restriction. I even set up a specific rule for outgoing port 25 traffic, but to no avail. I then ran a SmtpDiag.exe command:

        .\SmtpDiag.exe [email protected] [email protected]

    and received the following output:

        Searching for Exchange external DNS settings.
        Computer name is WIN-SERVERNAME.
        Failed to connect to the domain controller. Error: 8007054b
        Checking SOA for gmail.com.
        Checking external DNS servers.
        Checking internal DNS servers.
        SOA serial number match: Passed.
        Checking local domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Local DNS test passed.
        Checking remote domain records.
        Checking MX records using TCP: gmail.com.
        Checking MX records using UDP: gmail.com.
        Both TCP and UDP queries succeeded. Remote DNS test passed.
        Checking MX servers listed for [email protected].
        Connecting to gmail-smtp-in.l.google.com [209.85.227.27] on port 25.
        Connecting to the server failed. Error: 10060
        Failed to submit mail to gmail-smtp-in.l.google.com.

    Are there any other diagnostics I can run to figure out whether it's my firewall or something else? I have removed the antivirus to make sure that it wasn't causing the problem. Any ideas would be much appreciated.

  • How to determine if a CentOS system is RAID-1?

    - by Tedd Johnson
    I've tried searching for this answer but haven't found anything elegant. I have numerous servers in a colo in another state. I need to find a way to check that the servers have RAID-1 on them, so that I can determine whether they were set up correctly by my colo. df -h shows:

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  442G  1.5G  418G   1% /
        /dev/sda1                         99M   19M   75M  20% /boot
        tmpfs                            4.0G     0  4.0G   0% /dev/shm

    However, as CentOS uses LVM by default, this doesn't indicate whether RAID-1 is present. It is supposed to be software RAID, so I'm pretty sure there should be a way to check. Thanks
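
    Linux software RAID (md) exposes array state through the kernel, so a check needs nothing elegant; a minimal sketch, assuming SSH access to each box (hostnames are examples):

        #!/bin/sh
        # report software-RAID state on each remote server
        for host in colo1 colo2 colo3; do
            echo "== $host =="
            ssh root@"$host" 'cat /proc/mdstat; mdadm --detail --scan'
        done

    An empty /proc/mdstat (no mdX devices) means any mirroring is hardware RAID or absent; pvs would then show which device actually backs the LVM volume group.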

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Or should I just start with one server and worry about scaling horizontally later, by dedicating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSs?
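
    The main risk in co-locating them is memory contention: Redis and MongoDB both want RAM, and MongoDB leans heavily on the OS page cache. One hedged precaution is to cap Redis explicitly so a cache blow-up cannot starve the database (the 2gb figure is only an example):

        # cap Redis memory and evict least-recently-used keys at the limit
        redis-cli config set maxmemory 2gb
        redis-cli config set maxmemory-policy allkeys-lru

        # watch resident memory per service (binary names may differ per distro)
        ps -o rss=,comm= -C redis-server,mongod,httpd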

  • Any worker agent monitors for appliance-based load balancers?

    - by Zethris
    Looking to find out to what extent an appliance load balancer can monitor servers, both for failover (if, for example, a service like Apache Tomcat fails) and for load balancing. Right now it looks like it's just port monitoring/connection tracking and health-check URLs that it polls, marking a server as down if a request doesn't complete. We are looking at the Kemp 3500 or Loadbalancer.org solutions. Is there any sort of web-application-level monitoring/load balancing that these load balancers can offer, something that interacts more directly with the servers they're balancing?

  • Do I need Dell OpenManage to generate SNMP traps on RAID degradation?

    - by jishi
    I need to set up monitoring on all our servers to spot any RAID degradation in time. However, not all of our servers have OpenManage installed, and since they are in production I don't like the idea of installing it on them. Therefore: is it necessary to have it installed in order to get an event-log entry for any degradation of the RAID? Because if I get an event-log entry, I can send an SNMP trap, if I understand it correctly. I thought it was the driver that was responsible for the event logging, but on a machine that recently had a degradation, I can't seem to find any log event for it.
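
    Even without OpenManage, the RAID controller can usually be polled directly with the controller vendor's CLI and the result turned into a trap. A hedged cron sketch, assuming an LSI-based PERC with MegaCli installed (the binary path, community string, and OIDs are all placeholders):

        #!/bin/sh
        # poll logical-drive state; trap the NMS if anything is not Optimal
        STATE=$(/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LAll -aALL | awk -F: '/^State/ {print $2}')
        echo "$STATE" | grep -qv Optimal && \
            snmptrap -v 2c -c public nms.example.com '' \
                1.3.6.1.6.3.1.1.5.1 1.3.6.1.2.1.1.6.0 s "RAID degraded: $STATE"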

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    Need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7. It does need to support servers that host Redis, a Cassandra database cluster, and a Python web server. The operating system will most likely be CentOS 6.4. The servers in the cluster should be able to communicate very quickly with each other, especially the Redis server; this will probably require the use of internal IP addresses. We will need to use multi-data-center replication to synchronize the Cassandra cluster with the one we currently have hosted in the cloud. I was looking into network switches and am unsure of the appropriate specifications to look for. Does the switch need to be "managed" or can it be "unmanaged"? Does the switch need to support IPv6 or just IPv4? Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch? Thanks so much!

  • Recommended service account setup for MS SQL Server 2005/2008

    - by boxerbucks
    We have a number of MS SQL servers in our environment running either SQL Server 2005 Standard/Enterprise or SQL Server 2008 Enterprise. Currently the SQL services are running as Local Service or Network Service, and the Microsoft-recommended best practice is to run them as a domain account, which is what we are trying to move towards. Is the best practice with regard to domain accounts to have a separate domain account per service per server? So if we have 4 SQL services we want to run per server and we have 50 servers, we would create 50 * 4 = 200 accounts in AD? This seems excessive to me and I was wondering if anyone has any real experience with this type of setup and its management.

  • Adding a Windows Server 2012 Essentials server to an existing domain, without migrating the AD

    - by TiernanO
    I have an existing Active Directory in house, a mix between a Win2K8R2 and Win2K3 domain, and I would like to test Windows Server 2012 Essentials BETA on the network. When walking through the install, it gives me the option of a new domain, or migrating from an existing domain. When I click existing, it tells me I can only have one SBS server running on a domain at a time... I don't have any existing SBS servers in house (both are full Standard or Enterprise editions), but I do plan on keeping at least one of these extra servers running... So, how do I get a 2012 Essentials server to join a domain without migrating the existing domain? Or if I do migrate, can I still get one of the other boxes to act as a secondary controller?

  • Bacula vs. BackupPC

    - by chronoz
    I have been googling about the differences between them:

        - Bacula has lots of roles
        - BackupPC is easier to configure
        - Bacula works with an agent, not rsync (great for Windows backups)

    It seems that Bacula is most often compared to Amanda, though, while BackupPC seems a perfectly lovely and popular backup distribution too. I currently back up my servers with rsnapshot, but I am looking for a professional, scalable solution that could also back up 50 hosts without problems; preferably a solution that can offer bare-metal restores for my Linux servers. I am not looking to reinstall the exact same version of Plesk, the software, etc...

  • Shared storage solution for our SQL Server backups

    - by Gokhan
    We have 3 clustered SQL servers. We have 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one or two weeks of weekly full backups, plus 6 days of differential backups and a week or two of log backups, locally. We are currently limited to 2 TB volumes by our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on, with another department copying them to tape from the network storage and deleting files according to the retention period. It would also be good if this box were a self-contained appliance with a built-in operating system (even a Unix-based complete file-storage system). What do you guys think? Does this make sense to you, and is there any manufacturer that sells a storage product like that which works in a clustered environment? Thank you

  • Open a screen session as a certain user on boot in Ubuntu Server Linux

    - by Pez Cuckow
    I currently have a private server, running Ubuntu Server 10.04, on which I test my web apps. I also host a few game servers for some of my friends (rather than having wasted CPU time :-D). These game servers run under the game user account and each one has its own screen session (so friends can ssh in and reboot the game server etc...). For example, screen -R l4d2 runs ./start in the L4D2 folder. However, if I reboot the server (which I have to do occasionally), all these sessions close and I have to manually recreate all the screen sessions and run the required games within them. Is there a way to set these screen sessions up as daemons or services, or just to start them on server boot, so they restart themselves after a server reboot? I hope I have made my question easy to understand, but feel free to ask questions! Many thanks,
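
    One low-friction way to do this is cron's @reboot hook, which Ubuntu's cron supports, with one detached screen session per game; a sketch assuming the game user's crontab (paths and session names are examples):

        # edit with: crontab -u game -e
        # start each game server in its own detached screen session at boot
        @reboot cd /home/game/L4D2 && screen -dmS l4d2 ./start
        @reboot cd /home/game/css && screen -dmS css ./start

    Friends can then ssh in and attach with screen -r l4d2 exactly as before.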

  • Getting an error while running apt-get update

    - by pradeepchhetri
    I am getting the following error while running apt-get update on all of the servers:

        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AA6247928553
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    The available solution is to log in to each of the servers and run the following commands to import the GPG key for that repo:

        sudo gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        sudo gpg --export --armor 8B48AA6247928553 | sudo apt-key add -

    But this requires logging in to all of the individual servers and running the above two commands. I am looking for a way to correct this by working on the apt repo server. Is there any way to do that?
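
    Strictly on the repository side, the fix is to ensure the Release file is signed with a key the clients already trust; short of re-keying the repo, the key import can at least be pushed out in one pass from a single admin host rather than by logging in everywhere. A sketch (the host list is an example, and root SSH access is assumed):

        #!/bin/sh
        # fetch the archive key once, then feed it to apt-key on every server
        gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        for host in web1 web2 db1; do
            gpg --export --armor 8B48AA6247928553 | ssh root@"$host" 'apt-key add -'
        done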

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4 PM Monday to Friday we see the same issue arise: 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over.

    98%+ of our requests are HTTPS. These come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header, and forwards the data on to an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup.

    Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4 PM our problem arises. I've managed to run strace on a few of the offending threads when they cause the problem, and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This repeats indefinitely and never stops until you SIGKILL the process or the OOM killer gets it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is spinning.

    We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), and our servers run CentOS release 5.7 (Final) with Intel i3 540s at 3.07 GHz and 8 GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here.

    Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
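
    Since the loop shows up in native code (the repeated stat of a zoneinfo file is typical of PHP date/time calls running inside a tight loop), a native backtrace taken while the process is spinning will often name the call at fault without recompiling PHP; a minimal sketch (the PID is an example):

        # snapshot the C-level stack of the spinning PHP process, then exit gdb
        gdb -p 12345 -batch -ex 'bt'

    With PHP's debug symbols installed, the frames above the Zend executor usually point at the looping code path.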

  • Remote desktop sessions - Unwanted automatic log off after period of time

    - by alex
    I'm having an issue whenever I connect to any of our servers via RDP: after a certain period of time, it seems to close these sessions, closing all the applications I had open etc... This is particularly annoying if I am running a long process, for example copying a file; it cuts it off. I then re-connect via RDP, and it effectively loads a new session. Is this set somewhere in Group Policy? Or somewhere else? This is happening on Windows 2008 (it may also be happening on our 2003 servers, although I haven't noticed).

  • How can I stop Windows DNS server properties settings from changing by themselves?

    - by paradroid
    When I open the DNS console in Administrative Tools, I keep finding a couple of problems which keep reappearing by themselves, and I want to stop them from happening. One of the DNS servers has two network interfaces, and it should only be listening for requests on one of them; I get errors in the Event Log otherwise. But when right-clicking that DNS server and selecting Properties, I can see on the Interfaces tab that 'All IP addresses' is selected. If I change it to 'Only the following IP addresses:' and deselect the WAN address, I will find it reselected when I next check a couple of days later. In the other DNS server's Properties, on the Forwarders tab, there should only be two forwarder addresses. However, the address of the router keeps reappearing. This router has the DNS server as its forwarder. Nothing other than the router itself should be using the router's DNS forwarders, and this surely is causing a loop. How do I get these properties on both DNS servers to stick?

  • Approach for monitoring internet backbone traffic volume

    - by Greg Harman
    I'm interested in getting a picture of relative volume across different internet backbones. In particular, I'd like to see how traffic volume over a given route differs over the course of a day or from one day to the next. InternetTrafficReport.com is the closest approximation to this that I've found online, and their approach is to test ping times to a number of key routers from several geographically-dispersed servers. This sounds like one straightforward way to measure, but I don't have several geographically-dispersed servers. Is there a different approach for sampling this type of information from a single server?
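
    From a single vantage point the same idea still works, just with one origin instead of several: sample round-trip times (and loss) to a fixed set of backbone routers on a schedule and compare the time series across hours and days. A minimal sketch (hostnames and the log path are examples):

        #!/bin/sh
        # log the average RTT to each reference router, one line per sample
        for host in core1.example.net core2.example.net; do
            rtt=$(ping -c 5 -q "$host" | awk -F/ '/^rtt/ {print $5}')
            echo "$(date -u +%FT%TZ) $host ${rtt:-timeout}" >> /var/log/backbone-rtt.log
        done

    Run from cron every few minutes; traceroute against the same targets would additionally show which hop any variance comes from.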

  • SQL Server log shipping

    - by voam
    I am setting up log shipping between two SQL Server 2008 servers connected over a VPN. As far as I know, as long as the SQL Agent is able to access the share on the primary/secondary servers, everything will work. When I set up log shipping on the primary server using SQL Server Management Studio, it asks me to connect to the secondary server before I can set any of the "Secondary Database Settings". But I really don't want to open up a connection to the secondary server. I will be initializing the secondary database with a backup, so as long as the transaction logs get copied, everything should work. How do I work around the GUI not enabling any of the settings for the secondary server until I actually connect to it? Thanks in advance!

  • SSL certs or intermediate for DMZ

    - by rex
    I've been tasked with deploying and managing load balancers covering internal servers and DMZ servers. I have no experience with this, and this is a first for my organization as well. The balancers are up, running, legit. Currently we are using a self-signed cert for Exchange/OWA. I know that we should have a cert signed by a CA, but the balancer has options for an SSL cert or an intermediate cert, and I'm unclear on the difference, or on which we need. We will be hosting Lync, Exchange and some custom apps in the DMZ. Disclaimer: apologies up front, I'm desktop support. I recently passed my Net+; it seems that has made me the network engineer in this organization.
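
    For what it's worth, the "intermediate" slot is normally for the CA's chain certificate, the one that links a server certificate to the CA's root; a typical load balancer wants the server cert (with its private key) in one slot and the CA's intermediate bundle in the other, so that clients receive the whole chain. To see what a given service currently presents, a hedged check (the hostname is an example):

        # print every certificate the server offers during the TLS handshake
        openssl s_client -connect mail.example.com:443 -showcerts </dev/null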

  • What is the specific advantage of a blade server for virtualisation?

    - by ChrisZZ
    We are planning to implement a VDI solution and have had some discussions about blade vs. rack servers. As we are only planning to support 75-100 clients, we calculated that we would need 2 servers with dual 8-core processors, plus a shared storage server. This calculation is based on a paper by Oracle which assumes 12 active virtual machines per core. Now, for buying two servers, a blade does not make financial sense. But the blade has some other advantages: a) the interconnect between the blades is very fast; b) I/O virtualisation. Are there other advantages that we should consider which would make up for the price, and are these advantages so important that we should think about investing in the blade?

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have different development servers and a production server, and testing is done on the development servers. As a QA engineer, I'm switching between these servers quite often throughout the day. In Chrome, sometimes I need to reload a page a few times to get it to pull from the newly switched server. In Firefox, sometimes I need to quit the browser entirely to get it to pull from the newly switched server. (We have small tags on each page that indicate which server it is being served from, which is how I know in-browser.) Why does that happen? I'd love to know how that happens (maybe what it's called?) and what the best way to deal with it is. (I know that Firefox has an extension for domain switching; is that the best solution?)
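
    Assuming the servers sit behind different IPs for the same hostnames, what this usually is is a mix of cached DNS answers and kept-alive connections: browsers hold resolved addresses and open sockets for a while, so requests keep landing on the previous server. To check which backend answers independently of any browser cache, a hedged test (the IP, hostname, and path are examples):

        # aim the site's Host header straight at one backend, bypassing DNS
        curl -s -H 'Host: app.example.com' http://10.0.0.21/servertag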

  • Locking down a server for shared internet hosting.

    - by Wil
    Basically I control several servers and I only host either static websites or scripts which I have designed, so I trust them up to a point. However, I have a few customers who want to start using scripts such as WordPress or many others, and they want full control over their accounts. I have started with the basics: in php.ini I have locked things down and restricted commands such as proc. However, there is obviously a lot more I can do. Right now, using NTFS permissions, I am trying to lock down the server by running application pools and individual sites under their own users, but I feel like I am hitting brick walls... (see my old question on Server Fault). At the moment, the only routes I can think of are either to implement an off-the-shelf control panel, which will be expensive and, quite frankly, over the top, or to follow the Microsoft guide, which is really for an entire infrastructure, not for someone who just wants to lock down a few servers. Does anyone have any guides that can put me on the correct path?

  • How to maintain an EXT3 filesystem

    - by Reinoud van Santen
    Lately I have had several servers which encountered a write error on an EXT3 filesystem and, as a result, remounted the filesystem read-only. Understandably, on a production server this causes severe problems. On a reboot the filesystems were fixed, but on large partitions this takes a lot of time. After the filesystem was fixed, correcting several errors, each server ran well again. What can I do to minimize the rate at which this happens? I can't seem to find much information on periodically checking the filesystem(s) on a running server. Is it possible to change the way in which EXT3 / the system handles write errors? What would be a sane solution? All the servers this concerns are running CentOS Linux 5.4 or 5.5.
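
    Both the on-error behaviour and the periodic-check schedule are tunable per filesystem with tune2fs; a sketch, with device names borrowed from a typical CentOS LVM layout (adjust to your own):

        # show the current error behaviour, mount count and check interval
        tune2fs -l /dev/mapper/VolGroup00-LogVol00 | grep -iE 'error|mount count|check'

        # force a fsck at boot every 20 mounts or every month, whichever comes first
        tune2fs -c 20 -i 1m /dev/mapper/VolGroup00-LogVol00

    A mounted filesystem can't be checked safely in place, but on LVM the usual trick is to take a snapshot and run a read-only e2fsck -n against the snapshot, which gives early warning without downtime.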

  • How to store etckeeper repositories on a central server via git

    - by andreash
    Hello, I would like to have one central git repository for all my servers' etckeeper .git repos. Here the suggestion was to use a file in /etc/etckeeper/commit.d, which basically looks like this, assuming that a git repo had been set up in somedir on somehost:

        #!/bin/sh
        cd /etc
        git push faruser@farhost:somedir

    The problem with this is that it would be really nice to have all servers in the same repo on the central server. I tried

        git push faruser@farhost:somedir/server1

    but that failed. As you can see, I've never worked with git before... Any ideas on how this can be accomplished are greatly appreciated :) Cheers, Andreas
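
    git can only push into a repository that already exists on the far end, which is probably why the somedir/server1 push failed; pushing unrelated hosts into one repository would also make their histories collide. A common layout instead is one bare repo per host on the central server, named after the machine. One-time setup, run from each server (the double quotes make $(hostname) expand locally; paths are examples):

        ssh faruser@farhost "git init --bare /srv/etckeeper/$(hostname).git"

    Then the commit.d hook on each server pushes to its own repo:

        #!/bin/sh
        cd /etc
        git push faruser@farhost:/srv/etckeeper/$(hostname).git master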

  • FTP script download from Linux to Windows

    - by user53864
    I'm using the following FTP script on Windows XP to download zip files from Ubuntu cloud servers. A zip file is created every day on the Ubuntu servers and I download it to Windows via this FTP script. I run this script manually every day, as I have to edit its last line (mget /usr/backup_02-11-2010.zip) to match today's date. I want to edit this script so that, when scheduled, it will download only today's zip file at the scheduled time, without needing to be edited every day. To be clear, the date appended to the zip files is in the format dd-mm-yyyy. Need help...

        open server-ip-here
        username-here
        user-password-here
        lcd C:\Backup\files
        bin
        hash
        prompt
        mget /usr/backup_02-11-2010.zip
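
    Since the zip is produced on the Ubuntu side, one way to avoid touching the Windows script at all is to also publish the newest file there under a fixed name; a sketch, to run right after the nightly zip is created (path and date format taken from the question; this assumes the FTP daemon follows symlinks, otherwise use cp):

        #!/bin/sh
        # point a stable name at today's backup so clients never need the date
        ln -sf "/usr/backup_$(date +%d-%m-%Y).zip" /usr/backup_latest.zip

    The last line of the Windows script then becomes mget /usr/backup_latest.zip and never changes.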
