Search Results

Search found 9816 results on 393 pages for 'blade servers'.

Page 102/393 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • NAT rules between 2 network interfaces (with iptables)

    - by Simone Falcini
    This is the current network that I have:
      UBUNTU:
        eth0: ip 212.83.10.10, bcast 212.83.10.10, netmask 255.255.255.255, gateway 62.x.x.x
        eth1: ip 192.168.1.1, bcast 192.168.1.255, netmask 255.255.255.0, gateway ?
      CENTOS:
        eth0: ip 192.168.1.2, bcast 192.168.1.255, netmask 255.255.255.0, gateway 192.168.1.1
    I basically want specific NAT rules from the internet to specific internal servers, depending on the port:
      - connections incoming to port 80 must be redirected to 192.168.1.2:80
      - connections incoming to port 3306 must be redirected to 192.168.1.3:3306
      - and so on...
    I also need one NAT rule to allow the servers in the 192.168.1.x subnet to browse the internet; I need to route requests between eth0 and eth1 so they can get out. Can I do this on the UBUNTU machine with iptables? Thanks!
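    A minimal iptables sketch of what is being asked for (an editor's illustration, not from the original question), assuming eth0 faces the internet, eth1 faces the LAN, and the addresses are the ones above:

        # enable forwarding between the two interfaces
        sysctl -w net.ipv4.ip_forward=1

        # port forwards from the public interface to internal servers
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.2:80
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3306 -j DNAT --to-destination 192.168.1.3:3306

        # let the 192.168.1.0/24 subnet browse the internet via the public address
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

        # allow the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -i eth0 -o eth1 -d 192.168.1.0/24 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.1.0/24 -j ACCEPT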

    Read the article

  • Web service not accessible from behind corporate firewalls - how come?

    - by Niro
    We run a SaaS serving a widget which is embedded in customer websites. The service includes static JavaScript code hosted on Amazon S3 and a dynamic part hosted on EC2 with Scalr (using Scalr name servers). We received feedback from users behind corporate firewalls that they can't access our service (while they can access the sites that include the widget). This does not make sense to me, since the service uses normal HTTP calls on port 80 and our URL is quite new, with no obvious reason to be banned by firewalls. My questions are:
      1. Why is the service not accessible, and what can I do about it?
      2. Is it possible that one of the following is blocked by corporate firewalls: Amazon S3, the dynamic IP address provided by Amazon, or the Scalr name servers?
    Any other possible reasons, ways to check them, and remedies for this? Thanks!
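    One way to narrow down which piece is blocked is to probe each endpoint separately from inside an affected network; a quick sketch (the bucket and hostnames are placeholders, not the real service's):

        # can the static S3 assets be fetched?
        curl -sI http://s3.amazonaws.com/example-bucket/widget.js | head -1

        # does the Scalr-managed name even resolve there?
        nslookup app.example.com

        # can the dynamic EC2 endpoint be reached?
        curl -sI http://app.example.com/ | head -1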

    Read the article

  • What's more cost-effective: hosting your web startup on FOSS or Windows?

    - by user37899
    Hi, not coming from the Windows world, I'm confused about licensing, and I think my knowledge may be out of date. Before we gave up on Windows web servers (IIS 2), we had to pay Client Access Licences, which worked out quite expensive. Is it cheaper to host thousands of users on Windows than with free open-source software tools? http://serverfault.com/questions/124329/network-load-balancing-efficience-and-limits suggests I can pay $15 a month for unlimited users. I certainly hope to get an unbiased view here; I am a professional and use the right technology for the right job, and I hope I am not feeling the wrath of a Windows (or Linux, for that matter) fanboy. Perhaps a Microsoft-certified licensing person can clear this up. Should I be recommending Windows servers and products to startups over LAMP? Cheers!

    Read the article

  • Reverse proxy for only one internal server

    - by hrost
    I have configured a reverse proxy, and it is working fine for one internal server (for example, our mail server). Now I'd like to know whether it is possible to configure the reverse proxy for only one server/application, in this case our web intranet. The problem is that the intranet calls other applications on the same intranet server and on other internal servers, and the only way I know to publish these resources is to reverse-proxy all the application servers from the Apache in our DMZ. What I want is for the DMZ Apache to proxy only the intranet, with the other applications called by the intranet server itself rather than through the reverse proxy. I want this setup for security reasons: only one server should be reachable from outside. The machine runs Debian Squeeze and Apache 2.2. Is this possible? How?
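    A minimal Apache 2.2 vhost sketch of that idea (an illustration under assumed names; intranet.example.com and 192.168.0.10 are hypothetical):

        <VirtualHost *:80>
            ServerName intranet.example.com
            # forward only the intranet; nothing else is published
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://192.168.0.10/
            ProxyPassReverse / http://192.168.0.10/
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </VirtualHost>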

    Read the article

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have different development servers and a production server. Testing is done on the development servers, and as a QA engineer I switch between these servers quite often throughout the day. In Chrome, I sometimes need to reload a page a few times to get it to pull from the newly switched server. In Firefox, I sometimes need to quit the browser entirely to get it to pull from the newly switched server. (We have small tags that indicate which server you are pulling from, which is how I can tell in-browser.) Why does this happen? I'd love to know how it happens (maybe what it's called?) and the best way to deal with it. (I know Firefox has an extension for domain switching; is that the best solution?)

    Read the article

  • Finding Missing UDP Frames Using Wireshark + Custom Dissector (for CQS)

    - by John Dibling
    How do you use Wireshark to identify missing UDP frames? I have written a custom dissector for the CQS feed (reference page). One of our servers sees gaps when receiving this feed; according to Wireshark, some UDP frames are never received. I know that the frames were sent, because all of our other servers are gap-free. A CQS frame consists of multiple messages, each having its own sequence number. My custom dissector provides the following data to Wireshark:
      cqs.frame_gaps - the number of gaps within a UDP frame (always zero)
      cqs.frame_first_seq - the first sequence number in a UDP frame
      cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
      cqs.frame_msg_count - the number of messages in this UDP frame
    I display each of these values in custom columns. A typical CQS log consists of millions of rows, so I can't just eyeball it. Is there any way I can get Wireshark to tell me which frames are missing?
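    One possible approach (a sketch, not from the original post) is to dump the two sequence fields with tshark and let a script compare each frame's first sequence number against the previous frame's expected one; this assumes the capture has been saved as capture.pcap:

        # one line per frame: first seq in the frame, then the seq expected next
        tshark -r capture.pcap -T fields \
            -e cqs.frame_first_seq -e cqs.frame_expected_seq |
        awk 'NR > 1 && $1 != prev { print "gap before frame starting at seq " $1 " (expected " prev ")" }
             { prev = $2 }'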

    Read the article

  • Changing DNS - Forgot to change MX records for one month - Is there a way to retrieve emails that were sent?

    - by Chris Altman
    We have our own domain, and our email is hosted by Google Apps. We switched web servers and name servers to a new provider, and in the switch I forgot to move the MX record. When people sent emails to our domain, there was no bounce back. We have fixed the MX record and now receive email. Is there any way to retrieve the emails that were sent during the month when there was no MX record? I doubt it, because there was no MX record on our name server. But where would the emails have gone, since they did not bounce back?

    Read the article

  • Adding a Windows Server 2012 Essentials server to an existing domain, without migrating the AD

    - by TiernanO
    I have an existing Active Directory in house, a mixed Win2K8R2 and Win2K3 domain, and I would like to test Windows Server 2012 Essentials BETA on the network. When walking through the install, it gives me the option of creating a new domain or migrating from an existing domain. When I click existing, it tells me I can only have one SBS server running on a domain at a time. I don't have any existing SBS servers in house (both are full Standard or Enterprise editions), and I do plan on keeping at least one of these extra servers running. So, how do I get a 2012 Essentials server to join a domain without migrating the existing domain? Or, if I do migrate, can I still have one of the other boxes act as a secondary controller?

    Read the article

  • Appropriate Network switch for small server cluster

    - by Chris Dutrow
    I need to build a small business server cluster for the purpose of crunching data. It will not host a web site that needs to be available 24/7, but it does need to support servers that host Redis, a Cassandra database cluster, and a Python web server; the operating system will most likely be CentOS 6.4. The servers in the cluster should be able to communicate very fast with each other, especially the Redis server, which will probably require the use of internal IP addresses. We will also need multi-data-center replication to synchronize the Cassandra cluster with the one we currently host in the cloud. I have been looking into network switches and am unsure of the appropriate specifications to look for:
      - Does the switch need to be "managed", or can it be "unmanaged"?
      - Does the switch need to support IPv6, or just IPv4?
      - Do we need an enterprise-level Cisco switch, or can we go with something like a $200 D-Link managed (or unmanaged) small business switch?
    Thanks so much!

    Read the article

  • How to store etckeeper repositories on a central server via git

    - by andreash
    Hello, I would like to have one central git repository for all my servers' etckeeper .git repos. Here the suggestion was to use a file in /etc/etckeeper/commit.d, which basically looks like this, assuming that a git repo had been set up in somedir on somehost:
      #!/bin/sh
      cd /etc
      git push faruser@farhost:somedir
    The problem with this is that it would be really nice to have all servers in the same repo on the central server. I tried
      git push faruser@farhost:somedir/server1
    but that failed. As you can see, I've never worked with git before... Any ideas on how this can be accomplished are greatly appreciated :) Cheers, Andreas
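    One way to get every server into a single central repo (a sketch, not from the original question) is to push each machine's /etc history to its own branch in one bare repository; etckeeper.git is a hypothetical name for a repo created beforehand with git init --bare:

        #!/bin/sh
        # push this server's /etc history to a branch named after the host
        cd /etc
        git push faruser@farhost:etckeeper.git "master:$(hostname -s)"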

    Read the article

  • My server appears to have been hacked: is scanssh run by Zabbix normal?

    - by Niro
    I'm running a few EC2/Scalr instances with Zabbix monitoring. I received complaints about one of my servers port-scanning other servers; the logs show it accessing port 22 on consecutive IP addresses. I looked at the process list and saw scanssh running under the zabbix user. My questions are: Is scanssh part of Zabbix? Is it supposed to run? I have active auto-discovery enabled in Zabbix, but it is pointed at other IP addresses and definitely not at port 22. Is it possible that something in the Zabbix agent config is controlling this rather than the settings on the Zabbix server? What can I do to find out whether Zabbix is misbehaving or it is a hacker? Any advice is highly appreciated.
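    A couple of checks that may help pin this down (a sketch; the config path assumes a standard Zabbix agent install):

        # who started scanssh, when, and with what arguments?
        ps -o pid,ppid,user,lstart,args -C scanssh

        # can the zabbix agent be told to run arbitrary commands?
        grep -iE 'EnableRemoteCommands|UserParameter' /etc/zabbix/zabbix_agentd.conf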

    Read the article

  • Locking down a server for shared internet hosting.

    - by Wil
    Basically, I control several servers, and I only host either static websites or scripts which I have designed, so I trust them up to a point. However, I have a few customers who want to start using scripts such as WordPress and many others, and they want full control over their accounts. I have started with the basics: in php.ini I have locked things down and restricted commands such as proc. However, there is obviously a lot more I can do. Right now, using NTFS permissions, I am trying to lock down the server by running application pools and individual sites under their own users, but I feel like I am hitting brick walls... (see my old question on Server Fault). At the moment, the only routes I can think of are either to implement an off-the-shelf control panel, which will be expensive and, quite frankly, over the top, or to follow the Microsoft guide, which is really for an entire infrastructure, not for someone who just wants to lock down a few servers. Does anyone have any guides that can put me on the correct path?
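    For reference, a sketch of the per-site isolation pattern on IIS 7 or later (an illustration; the site name and path are hypothetical):

        rem run the site under its own application pool identity
        %windir%\system32\inetsrv\appcmd set apppool "CustomerSite" ^
            /processModel.identityType:ApplicationPoolIdentity

        rem grant only that pool's virtual account access to its own folder
        icacls "D:\sites\CustomerSite" /grant "IIS AppPool\CustomerSite:(OI)(CI)(M)"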

    Read the article

  • Getting error while running apt-get update

    - by pradeepchhetri
    I am getting the following error while running apt-get update on all of the servers:
      W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used.
      Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AA6247928553
      W: Some index files failed to download, they have been ignored, or old ones used instead.
    The available solution is to log in to each server and run the following commands to import the GPG key for that repo:
      sudo gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
      sudo gpg --export --armor 8B48AA6247928553 | sudo apt-key add -
    But this requires logging in to all of the individual servers and running those two commands. I am looking for a way to fix this by working on the apt repo server itself. Is there any way to do that?
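    Short of re-signing the repository with a key the clients already trust, the import at least does not have to be done by hand on each box; a small push-out sketch (assumes SSH key auth as root and a hypothetical servers.txt host list):

        # fetch and export the key once, locally
        gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
        gpg --export --armor 8B48AA6247928553 > repo-key.asc

        # feed it to apt-key on every server
        for host in $(cat servers.txt); do
            ssh "root@$host" 'apt-key add -' < repo-key.asc
        done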

    Read the article

  • Shared storage solution for our SQL Server backups

    - by Gokhan
    We have 3 clustered SQL Servers with 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one or two weeks of weekly full backups, then 6 days of differential backups, plus one or two weeks' worth of log backups locally. We are currently limited to 2 TB volumes by our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and dealing with many backup volumes instead of a single big one is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on; another department would then copy them to tape from the network storage and delete files according to the retention period. Ideally this box would have a built-in operating system (even Unix, as a complete file-storage system). What do you guys think: does this make sense, and is there any manufacturer that sells a storage product like that which works in a clustered environment? Thank you.

    Read the article

  • Ganglia: divide colors by roles

    - by com
    Sorry for a silly question; I am still a newbie with Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition, I have a few bunches of MySQL servers (every bunch has its own tasks, but all of the bunches should be checked for seconds behind master). I would like to know whether it is possible to show all metrics on one page, with different colors for different bunches. Right now, in the metric "seconds behind master", I see all MySQL servers with that metric colored by state (red is critical, gray is OK). Can I set the color of a graph according to its bunch? Thanks!

    Read the article

  • How do I force a specific MTU for only certain TCP ports?

    - by Dave S.
    Background: I have a set of embedded hardware deployed in the field. These remote machines connect back to my servers at AWS running Ubuntu, and I use the iptables mangle chain to clamp the MSS to 500 so these devices are happy. For reference, this is the iptables rule I am using:
      -A POSTROUTING -p tcp --sport 12345 --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 500
    Current problem: I'm trying to spin up some servers on the Joyent Cloud using SmartOS, but I can't find any information on selectively changing the MTU like I can on Linux (everything I've found is about changing it globally, which is not what I want). How would I do it so that all connections on TCP port 12345 get the MTU I want?

    Read the article

  • Unresponsive virtual OS

    - by confusedGeek
    Hopefully someone has a suggestion on how to resolve this.
    Configuration:
      Host: Win 2003R2 with Virtual Server 2005R2
      Virtual1: Win 2003R2 with SQL Server 2005
      Virtual2: Win 2003R2 with WSS 3.0
    Situation: This past weekend the power went out and took down the servers (no UPS; it's a desktop standing in as a dev testing server). Since then, the Virtual2 server becomes unresponsive via HTTP after running WSS fairly heavily for an hour or two. If I log in via the virtual server's remote control, I don't get anything beyond a background screen. The CPU counter in the virtual server's master status shows that it isn't doing anything, and shutdown commands issued from the master status are ignored. The only thing I have been able to do is turn off Virtual2, which loses any state changes. After restarting Virtual2, the event and application logs don't indicate what caused the problem. Does anyone have an idea how to repair the OS, or what the problem could be? Thanks ahead of time.

    Read the article

  • Once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either PPTP or OpenVPN; PPTP preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and can browse the public internet without issue. But when I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials, and if I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP it works fine, but this is obviously not a good solution. To recap:
      not connected to VPN - www.myurl.com works
      connected to VPN - www.myurl.com never makes it to the correct server, but is sent only to the pfSense box
    I'm sure it's something small that I've missed in the pfSense config.
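    This has the classic shape of a NAT reflection/split-DNS problem; one common approach (a sketch, not from the original question) is to hand VPN clients an internal answer for the public name, e.g. a host override in pfSense's DNS forwarder or the equivalent dnsmasq entry, assuming 10.10.10.5 is the web server's private address (hypothetical):

        # dnsmasq: answer www.myurl.com with the internal address
        address=/www.myurl.com/10.10.10.5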

    Read the article

  • Ping Unknown Host on CentOS at EC2

    - by organicveggie
    Weird problem. We have a collection of servers running CentOS 5 on EC2. The setup includes two DNS servers and two LDAP servers, and DNS has a CNAME pointing at the primary LDAP server. One machine (and only one machine) is giving me problems: I can ssh into the server using LDAP authentication, but once I'm on the machine, ping won't resolve the LDAP host, even though DNS seems to work fine. Here's ping:
      $ ping ldap.mycompany.ec2
      ping: unknown host ldap.mycompany.ec2
    Here's the output of dig:
      $ dig ldap.mycompany.ec2
      ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> ldap.mycompany.ec2
      ;; global options: printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2893
      ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;ldap.mycompany.ec2.     IN  A
      ;; ANSWER SECTION:
      ldap.mycompany.ec2.      3600  IN  CNAME  ec2-hostname.compute-1.amazonaws.com.
      ec2-hostname.compute-1.amazonaws.com.  55  IN  A  aaa.bbb.ccc.ddd
      ;; Query time: 12 msec
      ;; SERVER: 10.32.159.xxx#53(10.32.159.xxx)
      ;; WHEN: Tue May 31 11:16:30 2011
      ;; MSG SIZE rcvd: 107
    And here is resolv.conf:
      $ cat /etc/resolv.conf
      search mycompany.ec2
      nameserver 10.32.159.xxx
      nameserver 10.244.19.yyy
    And here is my hosts file:
      $ cat /etc/hosts
      10.122.15.zzz bamboo4 bamboo4.mycompany.ec2
      127.0.0.1 localhost localhost.localdomain
    And here's nsswitch.conf:
      $ cat /etc/nsswitch.conf
      passwd:     files ldap
      shadow:     files ldap
      group:      files ldap
      sudoers:    ldap files
      hosts:      files dns
      bootparams: nisplus [NOTFOUND=return] files
      ethers:     files
      netmasks:   files
      networks:   files
      protocols:  files
      rpc:        files
      services:   files
      netgroup:   files ldap
      publickey:  nisplus
      automount:  files ldap
      aliases:    files nisplus
    So DNS works the way I would expect, I can ping the LDAP server by IP address, and I can even access the box over SSH using LDAP authentication. Any suggestions?

    Read the article

  • Lightweight, low cost enterprise backup solution

    - by Scott
    I'm looking for a backup solution, primarily for Windows clients (XP/7), that will either back up to 2 different servers (1 on site, 1 off site over the internet; it can be our own server), or back up to 1 server which we would then somehow back up offsite/over the internet. By lightweight, I mean the backup client software should not eat up much memory or processor, since some of the client machines are older. I am used to CrashPlan for home use: the pricing is nice for the amount of backup I get, and it works great and is easy to install and get going; I can back up to my own machines locally and over the net. However, the price is going to be a little steep for enterprise-level backup of 1500+ machines. Are Zmanda and Bacula good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

    Read the article

  • FTP script download from linux to windows

    - by user53864
    I'm using the following FTP script on Windows XP to download zip files from Ubuntu cloud servers. A zip file is created every day on the Ubuntu servers, and I download it to Windows via this FTP script. I currently run the script manually every day, because I have to edit its last line (mget /usr/backup_02-11-2010.zip) to match today's date. I want to change the script so that, when scheduled, it downloads only today's zip file without needing to be edited every day. The date appended to the zip files is in the format dd-mm-yyyy. Need help...
      open server-ip-here
      username-here
      user-password-here
      lcd C:\Backup\files
      bin
      hash
      prompt
      mget /usr/backup_02-11-2010.zip
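    A batch-file sketch of one way to do this (an illustration; it assumes the system short date comes out as something like Tue 02/11/2010, so the for /f tokens may need adjusting for the machine's locale):

        @echo off
        rem build today's date as dd-mm-yyyy from %date%
        for /f "tokens=2-4 delims=/ " %%a in ("%date%") do set today=%%a-%%b-%%c

        rem generate the ftp script with today's file name, then run it
        (
          echo open server-ip-here
          echo username-here
          echo user-password-here
          echo lcd C:\Backup\files
          echo bin
          echo hash
          echo prompt
          echo mget /usr/backup_%today%.zip
          echo bye
        ) > "%temp%\ftp_today.txt"
        ftp -s:"%temp%\ftp_today.txt"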

    Read the article

  • Alerting when a RAID Array disk fails locally on VMWare ESX or ESXi System

    - by Tim K
    With ESX and ESXi, we recently had two systems where the boot partition became degraded due to a failed disk. The only alert we managed to capture was the visual alert on the Dell servers; we never received any electronic alerts about the failed or degraded array. Does anyone have experience with monitoring for these types of failures? In both cases the servers were running a RAID 5 SCSI configuration (5 disks on one system, 3 disks on the other); if we had been running a Windows Server OS, an alert would have been created in the Event Viewer. Where would I begin to look for a solution? Can it be configured in vCenter or vFoglight?
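    One possibility (a sketch only; it assumes Dell OpenManage Server Administrator can be installed in the classic ESX service console, and omreport output formatting varies by OMSA version) is a cron job that greps the array status and mails on anything not OK:

        #!/bin/sh
        # report any virtual disk whose state is not "Ok"
        omreport storage vdisk | grep -i 'state' | grep -vi 'ok' \
            && echo "RAID degraded on $(hostname)" | mail -s "RAID alert" admin@example.com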

    Read the article

  • CentOS 5 - Unable to resolve addresses for NFS mounts during boot

    - by sagi
    I have a few servers running CentOS 5.3 and am trying to get 2 NFS mount points to mount automatically on boot. I added 2 lines similar to the following to fstab:
      server1:/path1 /path1 nfs soft 0 0
      server2:/path2 /path2 nfs soft 0 0
    When I run 'mount -a' manually, the mount points are properly mounted as expected. However, when I reboot the machine, only /path2 is mounted; for /path1 I get the following error:
      mount: can't get address for server1
    It obviously looks like a DNS issue, but the record is properly configured in all the DNS servers, and the share mounts properly if I retry after the reboot has completed. I could work around this by using the IP address instead of the hostname in /etc/fstab, or by adding server1 to /etc/hosts, but I would rather not. What might be the reason for failing to resolve this specific address during boot? And why does the problem affect only the first mount point, while the second is properly mounted despite having identical configuration?
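    For reference, the usual way to keep NFS mounts from racing the network at boot on CentOS 5 (a sketch, not from the original question) is the _netdev mount option together with the netfs init script, which mounts network filesystems late in the boot sequence:

        # /etc/fstab -- mark the mounts as network-dependent
        server1:/path1  /path1  nfs  soft,_netdev  0 0
        server2:/path2  /path2  nfs  soft,_netdev  0 0

        # make sure the netfs service is enabled
        chkconfig netfs on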

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instances of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server could in theory handle 25-50 instances before users would notice a slowdown. However, only 1 instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we will eventually need to reprogram the app to make it truly cloud-worthy, but in the meantime, what would you recommend for a server farm or similar for this? We don't have the resources to purchase our own servers, so we must use a third party. We have budgeted $500-$1000 per year per customer for this service. Thanks in advance for your suggestions, experiences, and guidance.

    Read the article
