Search Results

Search found 16455 results on 659 pages for 'hosts allow'.

Page 159/659 | < Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >

  • IE Kerberos failure on some machines with CNAME web server (with SPN for host's A record)

    - by Eric Thames
    It's fairly well known that IE doesn't like to do Kerberos against hosts that are registered in DNS as CNAMEs. What happens is that IE turns around and uses the underlying A record for the host when looking up the Service Principal Name (SPN). On a test network we are able to get Kerberos working by having the SPN registered for the A record of the host, so that Kerberos authentication happens successfully when accessing the web server via its CNAME in the browser. Kerberos authentication works properly when directly accessing the web server with the A record host in the URL, but for various reasons that are beyond my control, it is desired to use the CNAME. On the production network, this same configuration fails and I can't figure out why. Any thoughts? This is a Java web application using the SPNEGO library - not IIS. Kerberos authentication is working properly in both the test and production networks (and has been confirmed not to fall back to NTLM), but the CNAME access only works in test.
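
    For reference, a hedged sketch of the SPN registration that corresponds to the working test setup - all host and account names below are hypothetical. Because IE builds the SPN from the underlying A record, that is the name that has to be registered against whichever AD account the Java/SPNEGO service authenticates as:

        REM Assume the CNAME webapp.corp.example points at the A record apphost01.corp.example,
        REM and the SPNEGO web app runs as the service account CORP\svc-webapp.
        REM Register the A-record SPNs (-S refuses to create duplicates):
        setspn -S HTTP/apphost01.corp.example CORP\svc-webapp
        setspn -S HTTP/apphost01 CORP\svc-webapp

        REM List what is registered on the account, to compare test vs. production:
        setspn -L CORP\svc-webapp

    A duplicate HTTP/ SPN registered on a different account in the production domain would produce exactly this "works in test, fails in production" pattern, so comparing the -L output in both domains is a cheap first check.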

    Read the article

  • Linked vSphere servers preventing cloning?

    - by brian
    I've currently got a pair of vSphere 5 Standard servers (physical, not VAs) managing about a hundred ESX 4.1 and 5 hosts in two different physical and logical datacenters. With our last purchase, we bought another vSphere license for the new vSphere server. I unmanaged all the ESX servers in one datacenter and added them to the new vSphere server. Our previous single-vSphere-server layout used to be:

        -vSphere1
        --Datacenter1 (where the physical ESX host was located)
        ---Folder
        ----ESX server1
        --Datacenter2
        ---Folder
        ----ESX server2

    Now it looks like:

        -vSphere1
        --Datacenter1
        ---Folder
        ----ESX server1
        -vSphere2 (new vSphere server)
        --Datacenter2
        ---Folder
        ----ESX server2

    ESX server2 was removed from vSphere1's inventory and added to vSphere2's, so it is now managed by vSphere2. This is nice and all, as no vSphere <-- ESX management traffic leaves the physical datacenter, except for one huge oversight: when I go to clone a VM, the opposite vSphere server (and thus the other datacenter) does not show up in the list on the first page of the wizard. Is this a bug, a license limitation, or is it simply not possible to clone a VM from an ESX box managed by one vSphere server to another ESX box managed by a /different/ vSphere server?

    Read the article

  • Add a custom certificate authority to Ubuntu

    - by rmrobins
    Hello, I have created a custom root certificate authority for an internal network, example.com. Ideally, I would like to be able to deploy the CA certificate associated with this certificate authority to my Linux clients (running Ubuntu 9.04 and CentOS 5.3), such that all of the applications automatically recognize the certificate authority (i.e. I do not want to have to configure Firefox, Thunderbird, etc. manually to trust this certificate authority). I have attempted this on Ubuntu by copying the PEM-encoded CA certificate to /etc/ssl/certs/ and /usr/share/ca-certificates/, as well as by modifying /etc/ca-certificates.conf and rerunning update-ca-certificates; however, applications do not seem to recognize that I have added another trusted CA to the system. Therefore, is it possible to add a CA certificate once to a system, or is it necessary to manually add the CA to all of the possible applications that will attempt to make SSL connections to hosts signed by this CA in my network? If it is possible to add a CA certificate once to the system, where does it need to go? Thanks.
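
    For what it's worth, a sketch of the sequence that usually works. The two gotchas are that update-ca-certificates only picks up files with a .crt extension that are listed in /etc/ca-certificates.conf (newer releases also scan /usr/local/share/ca-certificates/ automatically), and that Firefox/Thunderbird ignore the system store entirely because they keep their own NSS databases. File names and profile paths below are placeholders:

        # System-wide (OpenSSL/GnuTLS) trust store:
        sudo mkdir -p /usr/share/ca-certificates/local
        sudo cp example-root-ca.pem /usr/share/ca-certificates/local/example-root-ca.crt
        echo "local/example-root-ca.crt" | sudo tee -a /etc/ca-certificates.conf
        sudo update-ca-certificates        # should report "1 added"

        # Firefox/Thunderbird keep per-profile NSS stores, so add the CA there too
        # (certutil comes from the libnss3-tools package):
        certutil -A -n "Example Root CA" -t "CT,C,C" -i example-root-ca.pem \
                 -d ~/.mozilla/firefox/xxxxxxxx.default

    So a single system-wide addition covers OpenSSL/GnuTLS-based tools, but the NSS-based applications still need the per-profile import.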

    Read the article

  • Endian Destination NAT

    - by Ben Swinburne
    I have installed Endian Community Firewall 2.3 and am clearly misunderstanding/doing something wrong with it. I'm trying to create some destination NAT rules to allow incoming connections to various services within the network.

        Router - RED I/F   - x.x.x.x
        Router - GREEN I/F - 192.168.11.253
        ECF    - RED I/F   - 192.168.11.254/24
        ECF    - GREEN I/F - 192.168.12.254/24
        Target server      - 192.168.12.1

    Please ignore the haphazard choice of subnets and addresses - I'm trying to quickly plop Endian into an existing network before a complete rework in 6-12 months, so it will do for now. Everything works except destination NAT, so outgoing connections are fine, the routes between the two subnets are OK, etc. I want to create various incoming NATs, but let's take, for the sake of argument, SMTP port 25 from the Internet to the target server 192.168.12.1. I've tried almost every combination of options in the Destination NAT section to achieve this and am clearly doing something wrong. I suspect my confusion must be somewhere in the Access From and/or Target section. The rest seems OK:

        Filter Policy     = Allow
        Service           = SMTP
        Protocol          = TCP
        Port              = 25
        Translate to type = IP
        DNAT Policy       = NAT
        Insert IP         = 192.168.12.1
        Port Range        = 25
        Enabled           = Checked
        Position          = First

    I can't work out what I'm doing wrong - or am I doing it right and it's just not working!? Any help would be greatly appreciated.
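
    Endian's GUI rules ultimately become iptables DNAT entries, so it can help to check from the shell whether the rule the GUI generated matches what an incoming-SMTP forward has to look like. A hedged sketch for comparison only:

        # What a working forward of TCP/25 to the target server amounts to underneath:
        iptables -t nat -A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 192.168.12.1:25
        iptables -A FORWARD -p tcp -d 192.168.12.1 --dport 25 -j ACCEPT

        # On the ECF box, see whether the GUI actually produced a matching DNAT rule:
        iptables -t nat -L PREROUTING -n -v | grep -w 25

    One thing worth ruling out first: with this topology the router in front of the ECF also has to forward port 25 from its public x.x.x.x address to the ECF RED interface (192.168.11.254), otherwise the DNAT rule never sees the traffic at all.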

    Read the article

  • Linux networking "jail" for a single process

    - by halp
    I need to tune a networking app for network-specific things like:

    - make it use a DNS server different from the default one in /etc/resolv.conf
    - make sure it does not try to connect to certain hosts/ports over tcp/udp

    I know I can get away with just modifying /etc/resolv.conf and writing some iptables rules, but going for a default DENY firewall policy for outgoing IP packets can trigger malfunctions in other services running on the server. I know I can set up a virtual machine with a whole OS and run my app there, but that seems a bit overkill. Is it possible to have a networking "jail" for a single app (think a single Linux process) that could accept iptables-like rules for network traffic (think in terms of IP packets and above) allowed to and from this particular app? Maybe this is achievable through some dynamically loaded library that can deal with the networking layer, in the same manner tsocks does, but more fine-grained?
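
    One mechanism that matches the "jail a single process" idea fairly closely is a network namespace (ip netns): the process gets its own interfaces, routing table, iptables rules and even its own resolv.conf, while every other service on the box is untouched. A rough sketch, with all names and addresses made up:

        # Create the namespace and a veth pair connecting it to the host
        ip netns add appjail
        ip link add veth-host type veth peer name veth-jail
        ip link set veth-jail netns appjail
        ip addr add 10.200.0.1/24 dev veth-host
        ip link set veth-host up
        ip netns exec appjail ip addr add 10.200.0.2/24 dev veth-jail
        ip netns exec appjail ip link set veth-jail up
        ip netns exec appjail ip route add default via 10.200.0.1

        # Processes in the namespace read /etc/netns/appjail/resolv.conf, not /etc/resolv.conf
        mkdir -p /etc/netns/appjail
        echo "nameserver 10.0.0.53" > /etc/netns/appjail/resolv.conf

        # NAT the jail's traffic out through the host, then add per-app firewall rules
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 10.200.0.0/24 -o eth0 -j MASQUERADE
        ip netns exec appjail iptables -A OUTPUT -p tcp --dport 25 -j REJECT

        # Finally, run the app inside the jail
        ip netns exec appjail ./my-networking-app

    It needs a reasonably recent kernel and iproute2, but it avoids both the global DENY policy and a full virtual machine.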

    Read the article

  • Two domains, two servers, one dynamic IP address

    - by giantman
    I have two domains, hi.org and bye.net, one dynamic IP address, and two servers. I want to attach one domain, bye.net, to server 1 and hi.org to server 2. I'm using Apache under WAMP 2.0i. Both servers sit behind one router with a dynamic IP address.

        #httpd.conf file additions
        <IfModule mod_proxy.c>
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
        </IfModule>

        #vhost file additions
        NameVirtualHost *:80

        #default
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www/fallback"
        </VirtualHost>

        # Server 1
        <VirtualHost *:80>
            DocumentRoot "c:/wamp/www"
            ServerName h**p://bye.net
            ServerAlias bye.net
        </VirtualHost>

        # Server 2
        <VirtualHost *:80>
            ProxyPreserveHost On
            ProxyPass / h**p://192.168.1.119/
            DocumentRoot "g:/wamp/www"
            ServerName h**p://hi.org
            ServerAlias hi.org
        </VirtualHost>

    After doing all this I fall back to server 1 only: I don't get the page for hi.org, I only get the page for bye.net, and I don't even get the default fallback page which should be served when a person enters the IP address but not a domain name. I use Windows 7 (server 2) and Windows XP (server 1).

    UPDATE: I needed to remove the DocumentRoot "g:/wamp/www" line :D it was there by mistake! Things are working fine now. But one thing: the URL gets replaced by the local IP address - any way to stop that from happening?
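
    On the last point - the browser showing the internal address - that is usually a missing ProxyPassReverse on the proxying vhost: without it, redirects issued by server 2 pass their Location header through unmodified. A hedged tweak to the "Server 2" block on server 1:

        # Server 2 (proxied) - sketch
        <VirtualHost *:80>
            ServerName hi.org
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.119/
            # Rewrite Location/Content-Location headers in responses so redirects
            # go back to hi.org instead of exposing 192.168.1.119:
            ProxyPassReverse / http://192.168.1.119/
        </VirtualHost>

    If the address swap comes from links generated by the application on server 2 rather than from HTTP redirects, that has to be fixed in the app's base-URL setting instead; ProxyPassReverse only touches response headers.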

    Read the article

  • Where is the Amazon Linux AMI Test Page on EC2?

    - by fuzzybee
    I have set up my websites as directories directly under /var/www/html/ and they are working just fine (the websites are mapped to virtual hosts). So this is mainly out of curiosity for the moment. Furthermore, being able to customise this might bring some benefits in the future, e.g. branding the elastic IPs my computers use temporarily. Notes: I can always create an index.html page under /var/www/html/ and modify it, but that's not my goal here. I could also map the elastic IP address to a directory /var/www/html/default/ and do my stuff there, but that is not my goal either. My goal is to find the Amazon Linux AMI test page. I've tried running the Linux find command to locate it, but that obviously takes too long.
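
    If the aim is just to locate the file behind the stock test page, grepping the usual Apache locations is much quicker than a filesystem-wide find. A hedged sketch - the paths are simply the usual suspects on RHEL-style layouts:

        # Look for the test page content and for the config snippet that serves it
        sudo grep -ril "Amazon Linux AMI" /etc/httpd /var/www /usr/share 2>/dev/null
        sudo grep -ri  "noindex" /etc/httpd/conf.d/ 2>/dev/null

    On stock Amazon Linux the second command typically points at a welcome.conf-style snippet, which in turn names the HTML file served when no index page exists.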

    Read the article

  • Unable to connect to Github for the first time

    - by MaxMackie
    This is my first time with Git and I'm trying to set it up on my box. I added my key to my profile in the GitHub web interface. When I try to connect:

        max@linux-vwzy:~> ssh git@github.com
        The authenticity of host 'github.com (207.97.227.239)' can't be established.
        RSA key fingerprint is xx
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        PTY allocation request failed on channel 0
        max@linux-vwzy:~> ssh-add ~/.ssh/id_rsa
        Identity added: /home/max/.ssh/id_rsa (/home/max/.ssh/id_rsa)
        max@linux-vwzy:~> ssh git@github.com
        PTY allocation request failed on channel 0

    I'm supposed to be getting some kind of welcome message; however, I'm not.
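
    The "PTY allocation request failed" line is GitHub declining to hand out an interactive shell, not an authentication error. A quick way to check whether the key is actually accepted:

        # -T tells ssh not to request a PTY; GitHub replies with a one-line greeting
        ssh -T git@github.com
        # Expected on success:
        #   Hi <username>! You've successfully authenticated, but GitHub does not
        #   provide shell access.

    If that greeting appears, the setup is fine and git clone/push over SSH will work; the welcome message simply never comes from a plain ssh session.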

    Read the article

  • Apache2 Service started twice

    - by Relentless
    My Apache2 web server starts twice and won't bind, so I have to do this:

        sudo netstat -nap | grep 0.0.0.0:443
        sudo kill -9 1243
        sudo /etc/init.d/apache2 restart

    Is there any way I can make a script out of the commands above so that it runs automatically on start up? I have Ubuntu 10.04; this happened after an update.

    UPDATE: ports.conf - could this be causing it?

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Could it be listening on 443 twice? Or do I need to add NameVirtualHost *:443?
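
    If both mod_ssl and mod_gnutls really are enabled, ports.conf will issue Listen 443 twice and the second bind fails, so disabling whichever module you don't use (for example a2dismod gnutls) is the clean fix - no NameVirtualHost line is needed just for that. If you still want the manual steps scripted as a stopgap, a hedged sketch that could be called from /etc/rc.local:

        #!/bin/sh
        # Workaround only: free port 443 of any stale listener, then restart Apache.
        # fuser -k kills whatever currently owns 443/tcp (the same effect as the
        # netstat + kill -9 steps above).
        fuser -k 443/tcp 2>/dev/null
        sleep 1
        /etc/init.d/apache2 restart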

    Read the article

  • Unable to connect to Adobe Connect

    - by ub3rst4r
    I am having trouble connecting to my college's Adobe Connect. I have run the test meeting connection and it says "Unable to connect". I have tried connecting on 3 other computers and it works with flying colors. I am running Norton 360 on my computer, but I also tried it on my other laptop that's also running Norton 360 and it works there. I also checked my hosts file and that is not the problem, because I am able to connect to the server on port 80, just not on the Adobe Connect port (1935); the only thing in the file is "127.0.0.1 localhost". Here are the details from the log that the test created:

        Player Version: WIN 11,3,300,271
        App-Server returned: code:ok, servers=rtmp://connect.bowvalleycollege.ca:1935/_rtmp://localhost:8506/,rtmpt://connect.bowvalleycollege.ca:443/_rtmp://localhost:8506/
        ERROR: FMS Server did not return correctly!

    My specifications: Windows 7 SP1 x64, Norton 360 v6.3 (latest). It won't connect in Firefox v15, Chrome v19, or IE9. All of my computers are connected through the same router (D-Link DIR-625). Any ideas?

    Read the article

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250 GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, since video streaming should be possible).

    Requirements summarized: storage: 2 TB; users: ~15; file sizes: < 100 GB; it should be easily reachable (mountable as a network drive, or at least with solid client software).

    My first question would be: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is a bit overkill, as I know no hoster which allows 2 TB on a small 2 GHz dual-core 2 GB RAM machine (that would be enough by far, I just need a lot of space), and connecting it via NFS or FTP over the Internet seems a bit strange and cripples performance. Do you have any advice where I could get that storage service from? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide the space, this question becomes irrelevant; the second one is the more important one anyway. You don't have to do much more than recommend something based on experience - no need to crawl through hosting services for hours.) livedrive, for example, offers 5 TB for 17 €/month; I'd be happy with 2 TB for 20 €. The caveat: it doesn't allow multiple users, which leads me to my second question: where are the security problems? Which protocol is sufficient (I want private and "public" folders, etc. - the usual "every user has their own space plus a public space" thing), secure and fast? I'd tend to (S)FTP; the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and a single user leads me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?

    Read the article

  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin. As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also restrict the person logging in to only the projects that they have GitHub permission on. For instance: three projects on GitHub (A, B, C), three builds on Jenkins. User 1 has Git access to all 3 projects (A, B, C). User 2 has Git access to only 1 project (A). When logging into Jenkins, User 1 can see all 3 projects (this works) and User 2 should only see project A. The problem is that User 2 can also see all 3 projects when they should only see 1! Have I got this correct, and if so is this a bug? I have the settings set in Jenkins' configuration under Github Authorization Settings. Here we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin. I was trying to get Jenkins to print me the logs from the plugin but I also failed at viewing those (to see if there was an issue); I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging. It's the same concept as outlined below but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29. Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

    Read the article

  • Nagios Apache Config with PHP-FPM downloading cgi files

    - by tubaguy50035
    I'm trying to set up Nagios 3 under Apache 2.4 with PHP-FPM. I've run into a couple of problems I could use help with. The PHP side of things seems to be working - I can see the home page and the sidebar - but all of the CGI files are downloading instead of executing, and when I try to click on "Read What's New In Nagios Core 3", I get an error that /nagios3/docs/whatsnew.html was not found on this server. Below is my vhost config for Nagios.

        <VirtualHost *:300>
            # apache configuration for nagios 3.x
            ScriptAlias /cgi-bin/nagios3 /usr/lib/cgi-bin/nagios3
            ScriptAlias /nagios3/cgi-bin /usr/lib/cgi-bin/nagios3

            # Where the stylesheets (config files) reside
            Alias /nagios3/stylesheets /etc/nagios3/stylesheets

            # Where the HTML pages live
            Alias /nagios3 /usr/share/nagios3/htdocs

            ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/usr/share/nagios3/htdocs/$1

            <DirectoryMatch (/usr/share/nagios3/htdocs|/usr/lib/cgi-bin/nagios3|/etc/nagios3/stylesheets)>
                Options FollowSymLinks ExecCGI
                AllowOverride AuthConfig
                Order Allow,Deny
                Allow From All
                AuthName "Nagios Access"
                AuthType Basic
                AuthUserFile /etc/nagios3/htpasswd.users
                require valid-user
            </DirectoryMatch>

            <Directory /usr/share/nagios3/htdocs>
                Options +ExecCGI
            </Directory>
        </VirtualHost>

    I also added this in my global Apache config:

        AddHandler cgi-script .cgi

    Any help or instructions you can give me would be much appreciated. If more information is needed, let me know.
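
    Two things worth checking, offered as hedged guesses rather than a definitive fix: CGIs only execute if a CGI module is actually loaded (with the threaded MPMs usually paired with PHP-FPM on 2.4 that means mod_cgid), and the Order/Allow directives are Apache 2.2 syntax that 2.4 only understands via mod_access_compat - the native form is Require. A sketch:

        # Enable a CGI module and confirm it is loaded
        a2enmod cgid
        apachectl -M | grep -E 'cgid?_module'

        # Inside the DirectoryMatch block, the 2.4-native access control would be:
        #     AuthName "Nagios Access"
        #     AuthType Basic
        #     AuthUserFile /etc/nagios3/htpasswd.users
        #     Require valid-user
        # (replacing Order Allow,Deny / Allow From All)
        service apache2 restart

    The missing whatsnew.html is a separate issue: /nagios3/docs/ resolves through the Alias to /usr/share/nagios3/htdocs/docs/, so the 404 most likely just means that file isn't installed under that path on this machine.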

    Read the article

  • iTunes Home Sharing only works one way between 2 WinXP PC's on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message:

        The shared library "{Username}'s Library" is not responding (-3259)
        Check that any firewall software running on either the shared computer or
        this computer has been set to allow communication on port 3689.

    However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all content.

    Notes: Both PCs have Home Sharing enabled (turned off/on several times to verify). Both PCs have Windows Firewall turned on, but in the exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case). Both iTunes accounts have been "authorized" on both PCs. Both PCs connect via LAN through a D-Link DIR-625 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered. Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I could care less about sharing apps etc.; I just want the music sharing to work.

    Update: Solved! It turns out that on PC (B) there were multiple accounts set up, and one of those accounts had the "No exceptions" checkbox ticked under the Windows Firewall "On" option. Even though iTunes was on the exception list for the main user account, that other account was blocking access.

    Read the article

  • iproute2 rules and iptables NAT... what is the difference?

    - by Jakobud
    We have 2 different ISP connections. Our previous "IT guy" set up our firewall like so: when /etc/rc.local was executed on startup, it ran a bunch of ip rule add and ip route add commands in order to route certain internal hosts out over certain ISP connections. Then, at the end of /etc/rc.local, he loaded our iptables firewall rules, which were generated by Firewall Builder. These iptables rules have both policy and NAT rules set up in them. What I don't understand is why he used iproute2 to specify rules and routes but also specified NAT rules in iptables. Why didn't he just do it all in one or the other instead of using them both? Could he have gotten rid of the iproute2 rules and routes and just put all those same rules into the iptables NAT settings?
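
    They solve different problems, which is why both are there: ip rule / ip route (policy routing) choose which routing table - and therefore which ISP's gateway - a packet leaves through, while iptables NAT rewrites addresses on packets that are already on their way out. Neither layer can do the other's job. A hedged sketch of how the two halves typically pair up in a dual-ISP setup (addresses and table name invented; the table also has to be declared in /etc/iproute2/rt_tables):

        # Policy routing: traffic from these internal hosts uses the "isp2" table,
        # whose default route points at the second ISP's gateway
        ip route add default via 203.0.113.1 dev eth1 table isp2
        ip rule add from 10.0.1.0/24 lookup isp2

        # NAT: whatever leaves eth1 still has to be rewritten to that ISP's public
        # address, and that is iptables' job, not iproute2's
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    So the iproute2 rules could not simply be folded into the iptables NAT configuration; dropping them would send everything out through the single gateway in the main routing table.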

    Read the article

  • Gateway time out connecting to tethered server from Android

    - by BentFX
    I've got an Android device running android-wifi-tether, and it works as advertised. I connect to it from my Ubuntu 12.04 laptop running Apache 2.2.22. The laptop is manually configured to IP 192.168.2.100 in the hosts file; it can ping itself and access its own web server through that address. The WiFi tether hotspot gives the laptop the same 192.168.2.100 address (the laptop was configured to match the hotspot address as a troubleshooting step, and that could be wrong). Using ping, I can ping the laptop from the phone at 192.168.2.100. Using a port scanner, the phone shows port 80 open at 192.168.2.100. So everything looks like it's in place, but any attempt to browse to http://192.168.2.100 fails after a few moments with a 504 (Gateway time-out). Any help would certainly be appreciated.

    Read the article

  • Transition domain to new web host without waiting for DNS propagation

    - by jcmoney
    I am considering switching to Amazon EC2 to host my website so it can handle more traffic. It seems like I would have to update DNS records to point to the new server, but I was wondering if there is a way to avoid having to wait for the new DNS record to propagate. Putting the code on both hosts would not work for me, since the app writes to a database pretty frequently. I thought about just using a meta redirect or PHP redirect on the old host to redirect to the new host's IP, but was wondering if there's a better, more accepted way of doing this.
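
    A common pattern for this, sketched with a made-up address: lower the record's TTL a day or two before the move so caches expire quickly, switch the DNS, and for the remaining propagation window turn the old host into a reverse proxy so visitors who still resolve the old IP hit the new server (and its database) rather than a stale copy:

        # On the OLD host, after the application and database have moved.
        # 203.0.113.10 stands in for the new server's elastic IP.
        <VirtualHost *:80>
            ServerName example.com
            ProxyPreserveHost On
            ProxyPass        / http://203.0.113.10/
            ProxyPassReverse / http://203.0.113.10/
        </VirtualHost>

    This avoids the split-brain problem of running the app in two places, at the cost of the old host relaying traffic for a day or so.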

    Read the article

  • Custom/personal dyndns solution?

    - by Eddie Parker
    Hey: I can't think of how to make this work, but it seems like something that should be doable.. I currently own my own domain, and have been using dyndns.com's "custom DNS" to allow me to redirect 'example.com' to my website at home, which is on a dynamic IP. I've now switched over to a VPS solution which hosts my website and allows me root access to a box (me likey), which will now host "example.com" on a static IP. My question is, is it possible for me to somehow make "home.example.com" route to my box at home? Is there any software available that could automate updates to the DNS for this? Ideally I'd like not to pay a service if possible, but if that's the only way then I suppose I'll have to go that way. Thanks!
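
    One zero-cost route, assuming you also run the authoritative DNS for example.com on the VPS (e.g. BIND with a TSIG key that is allowed to update the zone), is a small cron job at home that pushes the current address with nsupdate. Everything below is a hypothetical sketch, not a ready-made service:

        #!/bin/sh
        # Cron job on the home box: point home.example.com at the current WAN address.
        CURRENT_IP=$(curl -s http://icanhazip.com)

        printf '%s\n' \
          "server ns1.example.com" \
          "zone example.com" \
          "update delete home.example.com A" \
          "update add home.example.com 300 A ${CURRENT_IP}" \
          "send" | nsupdate -k /etc/home-ddns.key

    If the VPS does not host the zone's DNS, the same script shape works against most DNS providers' HTTP update APIs instead of nsupdate.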

    Read the article

  • .htaccess - permissions forbidden

    - by user1732521
    I have an error with a new virtual host that I can't figure out: my .htaccess doesn't have web access (403).

        [Thu Oct 31 17:51:01 2013] [crit] [client ] (13)Permission denied: /srv/data_disk/www/site.dev/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    I have set the permissions for the complete htdocs folder to 755, and it is owned by my regular user and group (www-data). I have other vhosts set up with the same user and lesser permissions (rw-rw---) on the .htaccess. The virtual hosts are also set up in the same way, as far as I can tell. Thanks!
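
    A 403 with "pcfg_openfile: unable to check htaccess file" is almost always a traversal problem on one of the parent directories rather than on the .htaccess itself: every directory in the path needs at least execute permission for the Apache user. namei walks the whole path and shows where it breaks - a hedged sketch:

        # Show owner and mode of every component along the path
        namei -l /srv/data_disk/www/site.dev/.htaccess

        # A typical culprit is a mount point or intermediate directory without o+x, e.g.
        #   drwxr-x--- root root data_disk      <-- www-data cannot descend past here
        # which can be fixed without opening up the contents:
        chmod o+x /srv/data_disk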

    Read the article

  • Can not connect to telnet server

    - by BloodPhilia
    So, I can't use telnet to connect to any server, but it works fine from a different computer - it just says it can't connect. I tried the following things:

    - Disabled firewall and AV protection. (Basically, there was no security feature left online.)
    - Telnet is set to "Trusted" in my AV protection. (Kaspersky Internet Security 2011)
    - Used PuTTY to telnet, but apparently PuTTY's connection is also inhibited. (Says it can't connect to host.)
    - Disabled the telnet client in Control Panel and then re-enabled it. (Windows 7 Ultimate)
    - The hosts file is clean.
    - Checked for nasties using MBAM and KIS 2011, as well as going through my HijackThis logs; nothing found.

    I can connect to the same machines/servers through the web browser, ping, tracert, etc. Only telnet seems to be blocked. Any other thoughts?

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
    I've an x4540 Sun storage server running NexentaStor Enterprise. It's serving NFS over 10GbE CX4 for several VMware vSphere hosts. There are 30 virtual machines running. For the past few weeks, I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:

        panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain

        ffffff003fefb570 genunix:turnstile_block+795 ()
        ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
        ffffff003fefb630 zfs:dbuf_find+5d ()
        ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
        ffffff003fefb700 zfs:dbuf_hold+2e ()
        ffffff003fefb780 zfs:dmu_buf_hold+8e ()
        ffffff003fefb820 zfs:zap_lockdir+6d ()
        ffffff003fefb8b0 zfs:zap_update+5b ()
        ffffff003fefb930 zfs:zap_increment+9b ()
        ffffff003fefb9b0 zfs:zap_increment_int+68 ()
        ffffff003fefba10 zfs:do_userquota_update+8a ()
        ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
        ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
        ffffff003fefbba0 zfs:spa_sync+37b ()
        ffffff003fefbc40 zfs:txg_sync_thread+247 ()
        ffffff003fefbc50 unix:thread_start+8 ()

    Any ideas what this means?

    Read the article

  • How to configure Apache2 to host Django and PHP on multiple domains simultaneously?

    - by Bert B.
    I have a VPS (Ubuntu 10.04) that hosts multiple domains, one of them being a CodeIgniter (PHP) web app. The others are just static websites; no fancy backend languages required. Well, I am starting a new project and want to use Django. I have Django installed and mod_wsgi enabled in Apache2, but when I did the first steps from the documentation (https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/) it seemingly overrode my existing Apache2 configuration and served up the Django welcome page for all my domains. What should my httpd.conf file look like so that Django doesn't take over all my domains?
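
    The usual cause is following the mod_wsgi example literally and putting WSGIScriptAlias at the top level of the server config, which maps / for every site. Scoping it inside its own VirtualHost leaves the CodeIgniter and static vhosts alone - a hedged sketch for Apache 2.2 on 10.04, with all paths and names invented (daemon mode and the Python path can be added per the mod_wsgi docs; they are left out here to keep the point visible):

        # /etc/apache2/sites-available/djangosite   (placeholder names throughout)
        <VirtualHost *:80>
            ServerName django.example.com

            WSGIScriptAlias / /srv/djangosite/djangosite/wsgi.py

            <Directory /srv/djangosite/djangosite>
                <Files wsgi.py>
                    Order deny,allow
                    Allow from all
                </Files>
            </Directory>
        </VirtualHost>

    Nothing Django-related goes outside this block; the existing PHP and static <VirtualHost> entries stay exactly as they are, and Apache picks the right one by ServerName.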

    Read the article

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out.

    iptables Bash script to set up ALL rules:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT

        # Enable these rules
        service iptables restart

    iptables rule list after running the above script:

        [root@repoman ~]# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:22222 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP)
        target     prot opt source               destination

        Chain OUTPUT (policy DROP)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere            tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the IP address range 10.1.1.12 - 10.1.1.19. Can you guys offer some pointers, or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
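
    Two hedged observations rather than a guaranteed fix: the closing "service iptables restart" reloads whatever was last saved in /etc/sysconfig/iptables, which throws away every rule the script just added (the surviving SSH entries are presumably in that saved file, which would explain why only they show up in iptables -L), and the 10.1.1.12 - 10.1.1.19 restriction is easier to express with the iprange match than with a /24. Something like:

        # End the script by saving the live rules instead of restarting the service
        service iptables save        # writes the running rules to /etc/sysconfig/iptables

        # Example of restricting one of the Samba rules to 10.1.1.12 - 10.1.1.19,
        # a range that does not fall on a subnet boundary:
        iptables -A INPUT -i eth0 -p tcp --dport 139 \
            -m iprange --src-range 10.1.1.12-10.1.1.19 \
            -m state --state NEW,ESTABLISHED -j ACCEPT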

    Read the article

  • Problem with tomcat and getLocalHost exception

    - by xain
    I'm running a Linux server named S1 on a "cloud" host, and when Tomcat 6.0.24 starts, I get the exception:

        org.apache.catalina.connector.Connector pause
        SEVERE: Protocol handler pause failed
        java.net.UnknownHostException: S1: S1
            at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
            at org.apache.jk.common.ChannelSocket.unLockSocket(ChannelSocket.java:485)

    Which then leads to:

        ERROR ehcache.Cache - Unable to set localhost. This prevents creation of a GUID. Cause was: Sjira1: S1
        java.net.UnknownHostException: S1: S1
            at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
            at net.sf.ehcache.Cache.<clinit>(Cache.java:143)

    My hosts file is:

        127.0.0.1 localhost localhost.localdomain
        (valid-ip-address) S1 S1.(valid domain name)

    ping S1 and ping S1.(valid domain name) return a valid IP address; nslookup S1.(valid domain name) returns a valid IP address; nslookup S1 throws "** server can't find S1: NXDOMAIN". Any ideas about how to fix this? Thanks
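
    A hedged check, since everything except the JVM's own lookup seems to resolve: InetAddress.getLocalHost() asks the resolver (via nsswitch) for the exact string returned by gethostname(), so the two have to match character for character. Worth comparing:

        hostname                 # the name the JVM will try to resolve
        getent hosts S1          # resolves through nsswitch, the same path the JVM uses
        getent hosts $(hostname) # should return the address from /etc/hosts

    If getent comes back empty while ping works, the usual suspects are a hosts line whose spelling or case doesn't match the hostname exactly, or an /etc/nsswitch.conf whose hosts entry doesn't list "files" before "dns".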

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >