Search Results

Search found 16455 results on 659 pages for 'hosts allow'.

Page 176/659

  • Storing Cards and PCI Compliance

    - by Nimbuz
    I'm developing a SaaS service and will be managing payments as a merchant for customers, and since we'll be using multiple payment processors depending on the user's location, amount and other factors, it's important for us to store card details. I did some research, and from what I understood all you need is a PCI compliant host (VPS, dedicated or private cloud) that you get validated and certified through some provider like TrustWave etc. Is that correct, or am I missing something? Also, it would be great if you could suggest a few (not necessarily cheap, but affordable) PCI compliant hosts. Many thanks

    Read the article

  • Apache Virtual Host Configuration

    - by Carl
    I have been searching the internet for an hour now, and I was hoping for a quick hint here so that I could solve my problem a wee bit faster. My virtual server is so far only accessible through an IP address; there's no DNS entry yet, and so far none is needed either. The problem I have is with Apache2: the virtual hosts are puzzling me. What I need is: access to my project (based on Symfony2) from the outside via the IP address, and access to my project from localhost. What I have got: access from the outside renders the websites in /var/www/vhosts/htdocs/default, while access from localhost renders the websites in /var/www. Why the difference? What is a recommended configuration for my use case?
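
    A minimal sketch of one way to serve the project for both the external IP and localhost, assuming Apache 2.2 on Debian/Ubuntu and a hypothetical project path of /var/www/myproject:

        # /etc/apache2/sites-available/myproject (assumed location)
        <VirtualHost *:80>
            # With only this site enabled, the vhost answers requests made
            # by IP address and by "localhost" alike, since neither matches
            # any other ServerName.
            DocumentRoot /var/www/myproject/web
            <Directory /var/www/myproject/web>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Enable it with a2ensite myproject, disable the packaged default site (a2dissite default) and reload Apache; the split between /var/www and /var/www/vhosts/htdocs/default usually means the distribution's default vhost is still the first match for one of the two routes.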

    Read the article

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP .NET website up and running on IIS6. The site will run in its own application pool, and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However, we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local users group, and the IIS_WPG group. It has also been granted the "Log on as a service" right. When I browse to the site I am prompted for credentials, not once, but several times. When the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file (e.g. all the images, styles and script files) the browser requests, and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine - we don't want to allow it, but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permissions to validate domain credentials? If that is so, how do I enable this?
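
    Hedged, since the question doesn't say whether Kerberos or NTLM is being negotiated: repeated prompts with a custom app pool identity on IIS6 are classically an SPN problem (the Kerberos ticket is issued for the machine account, which the custom account can't decrypt). Two sketches, with WEBSRV and MYDOMAIN\AppPoolUser as placeholder names:

        REM Register SPNs so Kerberos tickets for the site map to the app pool account
        setspn -A HTTP/WEBSRV MYDOMAIN\AppPoolUser
        setspn -A HTTP/websrv.mydomain.local MYDOMAIN\AppPoolUser

        REM Or fall back to NTLM for the site (IIS6 metabase; site id 1 assumed)
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/root/NTAuthenticationProviders "NTLM"

    Either way, the style sheets and images failing along with the page is consistent with every request (not just the first) hitting the same failed authentication, so fixing the negotiation should fix both symptoms.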

    Read the article

  • Can't get a DNS alias to work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use the DNS alias to configure one of my domains to point to a specific directory on the server. Here is what I've done: Changed the IP address in the domain settings, and it works: $ ping www.example.com PING example.com (124.205.62.xxx): 56 data bytes 64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms 64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms ^C --- example.com ping statistics --- 2 packets transmitted, 2 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms Added sites-available and sites-enabled entries: $ ls -l /etc/apache2/sites-available/ total 16 -rw-r--r-- 1 root root 948 2010-04-14 03:27 default -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl -rw-r--r-- 1 root root 365 2010-06-09 18:27 example.com $ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com But it doesn't work, and when I open www.example.com in the browser, it shows a 111 error: The following error was encountered: Connection to 124.205.62.48 Failed The system returned: (111) Connection refused Here is example.com's config: $ cat /etc/apache2/sites-enabled/001-example.com <virtualhost *:80> DocumentRoot "/vhosts/example.com/htdocs/" ServerName www.example.com ServerAlias example.com <Location /> Order Deny,Allow Deny from None Allow from all </Location> #Include /etc/phpmyadmin/apache.conf ErrorLog /vhosts/example.com/logs/error.log CustomLog /vhosts/example.com/logs/access.log combined Could you please tell me how to solve this?
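
    Since "(111) Connection refused" is returned before Apache ever evaluates a virtual host, a hedged first step is to confirm something is listening on port 80 at that address rather than debugging the vhost file:

        # Is Apache running and bound to port 80?
        sudo netstat -tlnp | grep ':80'
        # Check the Listen directive (usually /etc/apache2/ports.conf on Ubuntu)
        grep -R '^Listen' /etc/apache2/
        # Validate and restart
        sudo apache2ctl configtest
        sudo /etc/init.d/apache2 restart

    If nothing is listening on 124.205.62.x:80, the culprit is a firewall, a missing port forward, or Apache bound to another interface, and the vhost itself (the missing </VirtualHost> above looks like a paste truncation) is not the problem. The quoted error text also reads like a proxy error page, so it's worth checking the browser isn't going through a proxy that can't reach the host.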

    Read the article

  • How do I update my server's domain name for reverse DNS?

    - by Jeff
    I'm updating my mail server's rDNS and I think I have it all figured out except for one thing. When I installed my OS (Debian Etch), the installer asked me to enter the "domain name". Is the "domain name" updated by using the hostname command? If so, which config file(s) are updated when using the hostname command? If not, how do I change my server's domain name? My current /etc/hosts: 127.0.0.1 localhost 67.228.178.164 mrspock.example-old.com mrspock My current /etc/hostname: mrspock $ hostname -f mrspock.example-old.com I need to update hostname -f to be mrspock.example-new.com.
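
    On Debian the installer's "domain name" ends up in /etc/hosts (the FQDN column), while /etc/hostname carries only the short name; hostname -f simply derives the domain from that hosts entry. A hedged sketch of the change:

        # /etc/hosts -- new FQDN first, then the short name
        127.0.0.1       localhost
        67.228.178.164  mrspock.example-new.com mrspock

        # /etc/hostname stays the short name
        mrspock

        # apply without a reboot, then verify
        hostname mrspock
        hostname -f        # should now print mrspock.example-new.com

    Note the actual reverse DNS (PTR) record for 67.228.178.164 lives in the provider's reverse zone, so it has to be changed with whoever delegates that range, not on the server itself.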

    Read the article

  • Empty rewrite.log on Windows, RewriteLogLevel is in httpd.conf

    - by ripper234
    I am using mod_rewrite on Apache 2.2, Windows 7, and it is working ... except I don't see any logging information. I added these lines to the end of my httpd.conf: RewriteLog "c:\wamp\logs\rewrite.log" RewriteLogLevel 9 The log file is created when Apache starts (so it's not a permission problem), but it remains empty. I thought there might be a conflicting RewriteLogLevel statement somewhere, but I checked and there isn't. What else could cause this? Could this be caused by Apache not flushing the log file? (I closed it by hitting CTRL-C on the httpd.exe command ... this caused the access logs to be flushed to disk, but still nothing in rewrite.log) My (partial) httpd-vhosts.conf: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName my.domain.com DocumentRoot c:\wamp\www\folder <Directory c:\wamp\www\folder> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule . everything-redirects-to-this.php [L] </IfModule> </Directory> </VirtualHost>
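
    A hedged guess, since mod_rewrite logging in Apache 2.2 is a per-server (per-virtual-host) setting: the rewrites here run inside a <VirtualHost>, so RewriteLog/RewriteLogLevel added to the global httpd.conf may never be consulted for that vhost. A sketch with the directives moved into the vhost, using forward slashes, which are safer in Apache paths on Windows:

        <VirtualHost *:80>
            ServerName my.domain.com
            DocumentRoot "c:/wamp/www/folder"

            RewriteLog "c:/wamp/logs/rewrite.log"
            RewriteLogLevel 9

            # ... existing <Directory> block with the rewrite rules ...
        </VirtualHost>

    After a clean restart (httpd -k restart rather than CTRL-C), a couple of requests that actually trigger the rules should produce log lines if this was the cause.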

    Read the article

  • Virtual hosting all resolving to the same files

    - by nona urbiz
    I'm trying to set up virtual hosts on my VPS (centos). I set both domain nameservers to fns1.dnspark.net and fns2.dnspark.net and set an A record there for each domain pointing to my IP address 50.16.219.8. Both domains are currently resolving to the first virtual host. What am I doing wrong? Thanks! NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/root/dylanstestserver.com ServerName dylanstestserver.com ServerAlias www.dylanstestserver.com ErrorLog logs/dylanstestserver.com-error-log CustomLog logs/dylanstestserver.com-access_log common </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/root/repthis.info ServerName repthis.info ServerAlias www.repthis.info ErrorLog logs/repthis.info-error-log CustomLog logs/repthis.info-access_log common </VirtualHost>
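
    A hedged way to see how Apache itself is matching the two names is the parsed vhost summary, together with a check of what the domains really resolve to:

        # Show the virtual host table Apache built from the config
        apachectl -S          # or: httpd -S
        # Confirm the public DNS answers
        dig +short dylanstestserver.com
        dig +short repthis.info

    If both ServerNames show up under *:80 in the -S output, the config is fine and the Host header reaching Apache is the suspect (typically DNS for the second domain not yet propagated, or still pointing at the old address); if only one shows up, the second <VirtualHost> block isn't being read at all.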

    Read the article

  • Is it possible to upload only files that have been updated into a server?

    - by kamikaze_pilot
    Hi guys, Suppose I have a server accessible via FTP and it hosts websites. Suppose I want to edit the website locally so it won't affect the live site, and suppose I edit a whole bunch of files, and I don't want to deal with the hassle of keeping track of which files I've edited all the time... Once I've finished editing I want to upload it to the server via FTP. Is there some FTP software that automatically detects which files have been edited and uploads and overwrites only those files, rather than having me manually choose the files I've edited (and hence having to keep track of edited files) or upload the entire site, which is a waste of time? Thanks in advance
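
    One hedged sketch of doing this without hand-tracking, assuming a command-line client is acceptable: lftp's mirror mode works against a plain FTP server and only transfers files that are newer locally (host, user and paths below are placeholders):

        lftp -u username ftp.example.com -e "
          mirror --reverse --only-newer --verbose /path/to/local/site /path/to/remote/docroot ;
          bye"

    GUI clients such as WinSCP or FileZilla have a similar "synchronize" feature, and if the server also speaks SSH, rsync -avz over SSH is more robust because it compares file contents rather than just timestamps.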

    Read the article

  • How do I set up a one way trust when some DCs are firewalled off from each other?

    - by makerofthings7
    I have two Windows 2008 forests in Win2003 mode and I need to set up a one way trust between them. The validation button in Domains And Trusts works in one forest but not in the other. I think this is because not all DCs can see all the other DCs. I'm not sure if I need to set up the hosts file, but I did so with company.com in the respective domain along with the relevant DC. (Do I need _msdcs and _tcp zones, etc.?) How do I set up a one way trust when some DCs are firewalled off from each other?
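
    Before the trust itself, each forest's DCs have to resolve the other forest's DNS, including the SRV records under _msdcs (which a hosts file can't provide), so a hedged first step is a conditional forwarder or stub zone on each side's DNS servers rather than hosts entries. A sketch with placeholder names and addresses:

        :: On a DNS server in forest A, forward the other forest's zone to its DCs
        dnscmd /ZoneAdd forestb.com /Forwarder 10.1.2.10 10.1.2.11

    The firewall between the validating DCs still needs the usual AD ports open (DNS 53, Kerberos 88, RPC endpoint mapper 135 plus the dynamic RPC range, LDAP 389, SMB 445); with name resolution and those ports in place, the trust validation usually starts working from both sides.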

    Read the article

  • Terminal Server 2003 Performance Troubleshooting

    - by MikeM
    Let me get your thoughts on terminal server performance problems. The server hosts on average 25 users who, after running some numbers, use on average 600MB of memory each with their main applications running (web browser, Adobe Reader, IP phone client). All users are on the same LAN as the server. We constantly experience slow response and short session lockups. Combined CPU usage is on average 10%. What appears strange to me is that the system shows 29GB physical memory with 25GB of it free. The page file usage is about 50%, averaging 9GB used. Some server specs: OS: Server 2003 32bit Enterprise with /PAE flag RAM: 32GB CPU: 2xQuad Core @ 2.27GHz HD: RAID5 1.2GB Basic troubleshooting with Performance Monitor leads me to believe that the performance problems are caused by the 32bit OS limitation in addressing the full 32GB of physical memory, even though the /PAE flag is used. Can anyone suggest troubleshooting steps that could lead to a more conclusive answer? Thanks

    Read the article

  • Save a single web page (with background images) with Wget

    - by mikael
    I want to use Wget to save single web pages (not recursively, not whole sites) for reference. Much like Firefox's "Web Page, complete". My first problem is: I can't get Wget to save background images specified in the CSS. Even if it did save the background image files I don't think --convert-links would convert the background-image URLs in the CSS file to point to the locally saved background images. Firefox has the same problem. My second problem is: If there are images on the page I want to save that are hosted on another server (like ads) these won't be included. --span-hosts doesn't seem to solve that problem with the line below. I'm using: wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html
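
    Two hedged notes on this: wget only learned to parse CSS (and therefore fetch url(...) background images) in version 1.12, so an older build will never pick them up no matter which options are used; and --span-hosts has to be given on the same run as --page-requisites, ideally fenced in with --domains, for off-site requisites such as ads to be fetched. A sketch, with the second domain as a placeholder:

        wget --no-parent --timestamping --convert-links --page-requisites \
             --no-directories --no-host-directories -erobots=off \
             --span-hosts --domains=domain.tld,ads.example.net \
             http://domain.tld/webpage.html

    If the installed wget predates 1.12, upgrading (or fixing up the CSS URLs by hand afterwards) is the only way to get the background images saved and rewritten.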

    Read the article

  • Apache2 default vhost in alphabetical order or override with _default_ vhost?

    - by benbradley
    I've got multiple named vhosts on an Apache web server (CentOS 5, Apache 2.2.3). Each vhost has its own config file in /etc/httpd/vhosts.d and these vhost config files are included from the main httpd conf with... Include vhosts.d/*.conf Here's an example of one of the vhost confs... NameVirtualHost *:80 <VirtualHost *:80> ServerName www.domain.biz ServerAlias domain.biz www.domain.biz DocumentRoot /var/www/www.domain.biz <Directory /var/www/www.domain.biz> Options +FollowSymLinks Order Allow,Deny Allow from all </Directory> CustomLog /var/log/httpd/www.domain.biz_access.log combined ErrorLog /var/log/httpd/www.domain.biz_error.log </VirtualHost> Now when anyone tries to access the server directly by using the public IP address, they get the first vhost specified in the aggregated config (so in my case it's alphabetical order from the vhosts.d directory). Anyone accessing the server directly by IP address should just get a 403 or a 404. I've discovered several ways to set a default/catch-all vhost and some conflicting opinions. I could create a new vhost conf in vhosts.d called 000aaadefault.conf or something but that feels a bit nasty. I could have a <VirtualHost> block in my main httpd.conf before the vhosts.d directory is included. I could just specify a DocumentRoot in my main httpd.conf. What about specifying a default vhost in httpd.conf with _default_ (http://httpd.apache.org/docs/2.2/vhosts/examples.html#default)? Would having a <VirtualHost _default_:*> block in my httpd.conf before I Include vhosts.d/*.conf be the best way for a catch-all?
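
    A minimal sketch of the first-loaded catch-all, kept in vhosts.d purely so it sorts ahead of the real sites:

        # vhosts.d/000-default.conf  (the name only matters for load order)
        <VirtualHost *:80>
            ServerName default.invalid
            DocumentRoot /var/www/empty
            <Location />
                Order Allow,Deny
                Deny from all
            </Location>
        </VirtualHost>

    With name-based hosting on *:80, the first vhost loaded for that address:port is the fallback for requests whose Host header (or bare IP) matches nothing, so any block that loads first and denies everything yields the 403 you want. A <VirtualHost _default_:*> block, by contrast, only catches address:port combinations not claimed by any other vhost, so with everything on *:80 it would never be consulted; the 000... filename is ugly, but it is the mechanism Apache actually uses.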

    Read the article

  • Apache Subversion and Sudo - Why can't I resolve this hostname?

    - by Hollowsteps
    Okay, I made a mistake and I'll be the first to admit I'm new at this setup. I built a bare bones kit, installed Ubuntu on it, and attempted to set up a source control server for a project some friends and I were going to work on. Unfortunately, I screwed up. I followed a dodgy tutorial from 2005 and when it didn't work, started mixing and matching trying to get to the source of my problem. So now I sit before you, a broken and miserable man. Desperate to escape this annoying echo of 'Unable to resolve host computer.repositoryname.com', I uninstalled Apache and Subversion. That did not fix it. Next I tried to edit my /etc/hosts, going so far as to remove the reference to '127.0.1.1 computername'. Still I'm plagued. I know I messed up; is there any way to track down this wayward bug?
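
    The "Unable to resolve host ..." echo comes from sudo, not from Apache or Subversion, so uninstalling packages won't silence it: sudo wants the machine's own hostname to resolve, which is exactly what the removed 127.0.1.1 line provided. A hedged sketch of the repair, assuming the box is currently named computer.repositoryname.com:

        # /etc/hostname
        computer

        # /etc/hosts -- restore the 127.0.1.1 line, matching the current hostname
        127.0.0.1   localhost
        127.0.1.1   computer.repositoryname.com computer

    After hostname computer (or a reboot) the warning should stop, and the Apache/Subversion install can be retried from a clean slate.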

    Read the article

  • Can't copy an Ilias folder via the FQDN, but can via the IP address?

    - by Lars
    I have an Ilias installation which is available through two virtual hosts: the FQDN and the IP address. The first server is SSL-only, the second plain HTTP. Both configuration files look the same except for the SSL part: SSLEngine on SSLCertificateFile /etc/apache2/ssl/ilias.pem In the Ilias web interface, I can copy a folder on the plain-HTTP host. But if I try to copy the same folder on the SSL virtual host, I get a notice that the copy was started (rough German translation here), but the folder does not show up. There are no errors in the error logs of PHP or the web server and, as said, no differences besides the SSL part. The guys at an Ilias forum didn't have any ideas either. Any ideas in here?

    Read the article

  • Cannot access site via IP / hostname

    - by DaveB
    I am renting a VPS with Debian installed running JBoss AS6 for my web app. I recently had some problems with my DNS hosts as they messed up the A-records for my domain, which caused some new A-records to be added by mistake. The DNS problem is now sorted and the domain is working OK; however, I noticed that the web server no longer responds via direct IP or hostname in a web browser (although it pings OK and I can SSH in using the hostname). Is there any explanation for this? I am using rinetd to forward traffic from port 80 to port 8080, but that's been OK for a while. Any suggestions would be appreciated. Regards
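
    Since ping and SSH still work, the box itself is reachable and only the port-80 path is broken, which points at the forwarder. A hedged check of rinetd, whose rules bind to specific addresses:

        # Is anything listening on port 80, and on which address?
        netstat -tlnp | grep ':80'
        # rinetd rule format: bindaddress bindport connectaddress connectport
        cat /etc/rinetd.conf
        /etc/init.d/rinetd restart

    If rinetd.conf binds the old public IP (rather than 0.0.0.0) and the address changed during the A-record mix-up, HTTP by IP or hostname fails even though everything else works; JBoss listening only on 127.0.0.1:8080 would produce the same symptom, so it's worth checking both ends of the forward.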

    Read the article

  • How to change mount to grant user write permissions?

    - by nals
    I am on TomatoUSB, and using the feature to have a NAS. The only way I can write to the Samba share is if I force root: [global] interfaces = 127.0.0.1, 192.168.1.1/24 bind interfaces only = no workgroup = WORKGROUP netbios name = TOMATO security = share wins support = yes name resolve order = wins lmhosts hosts bcast guest account = nobody [Public] path = /mnt/sda2 read only = no public = yes only guest = yes guest ok = yes browseable = yes comment = Network share force user = root writeable = yes I don't really like the idea of having to use root to allow write access to my share. I already have a Samba account named nobody created to allow access to the share. However, every time I try to write I get an access denied error. fstab: /dev/sda2 /mnt/sda2 vfat defaults 0 0 Furthermore, every time I try to chmod 777 /tmp/mnt/sda2 the permissions are not changed, and no error is produced. They stay 755. drwxr-xr-x 2 root root 4096 Jun 4 01:49 sda2 Basically, how can I give the user nobody write permissions to my mount? dev name: /dev/sda2 dev mount: /tmp/mnt/sda2
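
    vfat has no Unix permission bits on disk, which is why chmod silently changes nothing: ownership and mode are fixed once, at mount time. A hedged sketch of the fstab line instead (the numeric uid/gid for nobody is an assumption, check it with grep nobody /etc/passwd on the router):

        # give everyone, including the Samba guest, write access
        /dev/sda2  /mnt/sda2  vfat  defaults,umask=0000  0  0

        # or hand the whole tree to nobody specifically
        /dev/sda2  /mnt/sda2  vfat  defaults,uid=65534,gid=65534,umask=0002  0  0

    After umount /mnt/sda2 && mount /mnt/sda2 (or a reboot), the force user = root line in smb.conf can be dropped, since the guest account will own, or at least be able to write, everything under the mount point.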

    Read the article

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around Apache configuration to fix a problem I'm having, but after a few hours I've decided to ask here. This is what I've got at the moment: DocumentRoot "/var/www/html" <Directory /> Options None AllowOverride None Deny from all </Directory> <Directory /var/svn> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Directory /opt/hg> Options FollowSymLinks AllowOverride None Allow from all </Directory> <Location /hg> AuthType Digest AuthName "Engage HG" AuthDigestProvider file AuthUserFile /opt/hg/hgweb.users Require valid-user </Location> WSGISocketPrefix /var/run/wsgi WSGIDaemonProcess hg processes=3 threads=15 WSGIProcessGroup hg WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi" <Location /svn> DAV svn SVNPath /var/svn/repos AuthType Basic AuthName "Subversion" AuthUserFile /etc/httpd/conf/users require valid-user </Location> I'm trying to get my head around how it's all laid out and how directories relate to locations, etc. For /hg I get asked for a password, but for /svn I get a 403 Forbidden... the error I get is: [client 10.80.10.169] client denied by server configuration: /var/www/html/svn When I remove the entry it works fine... I can't figure out how to get it linking to the /var/svn directory.
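
    The telling part of the error is the path /var/www/html/svn: Apache is mapping /svn onto DocumentRoot, which is what happens when the DAV svn handler in the <Location /svn> block isn't actually active. A hedged check, assuming a stock CentOS/RHEL layout:

        # Is mod_dav_svn loaded?
        httpd -M 2>/dev/null | grep dav_svn
        # If not, it comes from the mod_dav_svn package, loaded via
        # /etc/httpd/conf.d/subversion.conf:
        #   LoadModule dav_svn_module modules/mod_dav_svn.so
        yum install mod_dav_svn
        service httpd restart

    Two smaller notes: SVNPath points at one repository, so if /var/svn/repos is meant to hold several repositories the directive would be SVNParentPath; and the reason /hg behaves differently is that WSGI serves it, so it never falls back to the filesystem under DocumentRoot.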

    Read the article

  • Why is .htaccess not allowed in one directory but allowed in another?

    - by John Isaacks
    I have apache2 installed on Ubuntu 10.04. Inside my /var/www/ directory [among others] I have cakephp and dvdcatalog directories, each of which has CakePHP 1.3 installed. I can access them both via localhost/cakephp and localhost/dvdcatalog, but the dvdcatalog one shows up with no CSS styling. They both have these files: /var/www/cakephp/app/webroot/css/cake.generic.css /var/www/dvdcatalog/app/webroot/css/cake.generic.css But when I go to http://localhost/cakephp/css/cake.generic.css it sees the file, but it does not see the file when I go to http://localhost/dvdcatalog/css/cake.generic.css. I think this means the cakephp folder is able to use .htaccess and the dvdcatalog one is not. I set up the cakephp directory last month when I was following the blog tutorial. I am setting up the dvdcatalog directory now for a different tutorial. So I am not sure if I am missing a step. In my /etc/apache2/apache2.conf file I have this: <Directory "/var/www/*"> Order allow,deny Allow from all AllowOverride All </Directory> which I thought enabled .htaccess for all directories. Does anyone have any ideas what the problem is?
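
    CakePHP 1.3 ships three .htaccess files (in the project root, in app/, and in app/webroot/) that funnel every request through the webroot; they are dotfiles, so copying or unpacking a project with a tool that skips hidden files silently drops them and the CSS URLs stop resolving. A hedged comparison of the two installs:

        # The working install should show an .htaccess at each of these levels
        ls -la /var/www/cakephp /var/www/cakephp/app /var/www/cakephp/app/webroot | grep htaccess
        ls -la /var/www/dvdcatalog /var/www/dvdcatalog/app /var/www/dvdcatalog/app/webroot | grep htaccess

    If they are missing from dvdcatalog, copying them over from the cakephp install (they are identical stock files in 1.3) should restore the styling; the AllowOverride All block in apache2.conf is already doing its part.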

    Read the article

  • XAMPP localhost and 127.0.0.1 don't work but computer name does work

    - by Steven
    I am running Vista. Localhost and 127.0.0.1 do not work anymore, but my computer name still works. Localhost and 127.0.0.1 only show a blank page; however, using my computer name I can access the XAMPP homepage as well as my server files. I am assuming there is something wrong with my hosts file, but when I checked it looks fine: 127.0.0.1 localhost is the only thing in there that is not commented out, which seems right according to what I've seen on the web so far. I had a trojan recently that I got rid of, so I believe that definitely has something to do with it. It was msa.exe among potentially others, but I think I got rid of it with Malwarebytes. This is the only issue that is left. Thanks!
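
    Malware that intercepts web traffic often leaves a Winsock LSP or a local proxy behind, which would fit loopback being blanked while the machine name still works. A hedged cleanup to run from an elevated command prompt, followed by a reboot:

        netsh winsock reset
        netsh int ip reset
        ipconfig /flushdns

    It's also worth confirming that Apache isn't bound to a single interface (a Listen 192.168.x.x:80 line in httpd.conf instead of plain Listen 80) and that the browser has no proxy configured for local addresses, since either would reproduce exactly this split between 127.0.0.1 and the computer name.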

    Read the article

  • WAMP virtual host with support for remote access

    - by Farid
    To cut a long story short, I've set up a WAMP server with a local virtual host for a domain like sample.dev. Now I've bound my static IP and port 80 to my Apache and asked the client to make some changes in his hosts file and add x.x.x.x sample.dev. I've also configured my httpd virtual host like this: <VirtualHost *:80> ServerAlias sample.dev DocumentRoot 'webroot_directory' </VirtualHost> The client can reach my web server directly by IP address, but when he tries using the sample domain it looks like he gets into some infinite loop. The firewall is off too. What could be the problem? Thanks.
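
    A hedged way to narrow the "infinite loop" down is to declare name-based matching explicitly and then watch the logs while the client retries, since WampServer's defaults only expect localhost:

        # httpd.conf / httpd-vhosts.conf (Apache 2.2 syntax)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName sample.dev
            DocumentRoot "webroot_directory"
            <Directory "webroot_directory">
                Order Allow,Deny
                Allow from all
            </Directory>
        </VirtualHost>

        # then watch c:\wamp\logs\access.log and the Apache error log while the client browses

    If the client's requests show up as a stream of 30x redirects back to sample.dev, the loop is in the application (absolute URLs or a canonical-host rewrite); if they never appear at all, the request is looping outside Apache, typically the client's browser being bounced by the router/NAT before it ever reaches the vhost.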

    Read the article

  • How to figure out which directory is web server root?

    - by matt
    I want to view websites hosted on my Mac when running Windows in VMware Fusion. I have an entry in the Windows hosts file to enable the routing: #ip of my mac domain i use on the VM to access it 192.168.1.70 mymac However, it resolves to an empty directory as a 404 is generated. I can see in the access log on my Mac that everything is OK access-wise. Firefox on VMware states the following response headers: Server Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Any ideas how I can figure out what directory is being served? I am lost in a maze of twisty httpd.conf passages. localhost on my Mac resolves to my ~/Sites directory. 192.168.1.70 resolves to the same empty directory/404. Thanks.
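
    A hedged way to ask Apache itself instead of reading httpd.conf cover to cover, run on the Mac:

        # Which config files and vhosts are active, and which vhost is the default?
        apachectl -S
        # Every DocumentRoot defined anywhere in the config tree
        grep -R "DocumentRoot" /private/etc/apache2/ 2>/dev/null

    The request from the VM arrives with Host: mymac, which matches no ServerName, so it falls into the default (first) vhost; the 404s suggest that vhost's DocumentRoot is effectively empty, while whatever serves localhost on the Mac has been pointed at ~/Sites separately. Adding a vhost (or a ServerAlias mymac) with the desired DocumentRoot should make the VM see the same sites.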

    Read the article

  • VPN into multiple LAN Subnets

    - by Rain
    I need to figure out a way to allow access to two LAN subnets on a SonicWall NSA 220 through the built-in SonicWall GlobalVPN server. I've Googled and tried everything I can think of, but nothing has worked. The SonicWall NSA management web interface is also very unorganized; I'm probably missing something simple/obvious. There are two networks, called Network A and Network B for simplicity, with two different subnets. A SonicWall NSA 220 is the router/firewall/DHCP server for Network A, which is plugged into the X2 port. Some other router is the router/firewall/DHCP server for Network B. Both of these networks need to be managed through a VPN connection. I set up the X3 interface on the SonicWall to have a static IP in the Network B subnet and plugged it in. Network A and Network B should not be able to access each other, which appears to be the default configuration. I then configured and enabled VPN. The SonicWall currently has the X1 interface set up with a subnet of 192.168.1.0/24 with a DHCP server enabled, although it is not plugged in. When I VPN into the SonicWall, I get an IP address supplied by the DHCP server on the X1 interface and I can access Network A remotely, although I do not have access to Network B. How can I allow VPN clients access to both Network A and Network B while keeping devices on Network B from accessing Network A and vice versa? Is there some way to create a VPN-only subnet (something like 10.100.0.0/24) on the SonicWall that can access Network A and Network B without changing the current network configuration or allowing devices on both networks to "see" each other? How would I go about setting this up? Diagram of the network: (Hopefully this kind of helps) WAN1 WAN2 | | [ SonicWall NSA 220 ]-(X3)-----------------[ Router 2 ] | | (X2) 192.168.2.0/24 10.1.1.0/24 Any help would be greatly appreciated!

    Read the article

  • What are the steps to upgrade a Cisco UCS B-series system running VMware vSphere from 4.1 to 5.0?

    - by Gk.
    I have a Cisco UCS B-series system with 1.4 firmware running vSphere 4.1 (ESX) + Nexus 1000V. I want to upgrade all that stuff to vSphere 5.0 without downtime. I tried to find documentation describing all the steps needed to do it, but couldn't find anything clear. Here is my plan: Upgrade the UCS firmware from 1.4 to 2.0. Doc: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/upgrading/from1.4/to2.0/b_UpgradingCiscoUCSFrom1.4To2.0.html Upgrade vCenter, the hosts + VEM, virtual machines and datastores following VMware best practice. Is that OK? Am I missing something? Thank you, giobuon.

    Read the article

  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home, nor an SSH login. I'm having problems understanding the correct settings of users/groups I have to create to allow this to happen. I will put an example; Having: MyUser@MyServer MyUser belongs to the group MyGroup MyUser's home will be, let's say, /home/MyUser SFTPGuy1@OtherBox1 SFTPGuy2@OtherBox2 They give me their id_dsa.pub files and I add them to my authorized_keys. I reckon then, I'd do in my server something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 (and the same for the other..) And for the last, useradd -G MyGroup SFTPGuy1 (then again, for the other guy) I'd expect then, the SFTPGuys to be able to sftp -o IdentityFile=id_dsa MyServer and to be taken to MyUser's home... Well, this is not the case... SFTP just keeps asking me for a password. Could someone point out what I am missing? Thanks a mil, f. [EDIT: Messa in StackOverflow asked me if the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point, this was my answer: Well, it wasn't (it was 700), but then I changed the permissions of the .ssh dir and the auth file to 750, though still no effect. Guess it's worth mentioning that my home dir ( /home/MyUser) is also readable for the group; most dirs being 750 and the specific folder where they'd drop files is 770. Nevertheless, about the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well.. just wondering. ]
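
    A hedged note on why the password prompt appears, plus one simplification: sshd reads authorized_keys from the home directory of the user logging in -- here /home/MyUser, because of the -d flag -- and with the default StrictModes it refuses keys when that home, .ssh or authorized_keys isn't owned by that login user, so key auth silently fails and falls back to passwords (a /bin/false shell also breaks scp/sftp unless sftp is forced internally). Since the files should end up as MyUser's anyway, one sketch is to drop the extra accounts and restrict the trusted keys inside MyUser's own authorized_keys (per-key options are standard OpenSSH; internal-sftp needs a reasonably recent version):

        # /home/MyUser/.ssh/authorized_keys -- one line per trusted key
        command="internal-sftp",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAA... SFTPGuy1
        command="internal-sftp",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAA... SFTPGuy2

        # and the trusted users connect with
        sftp -o IdentityFile=id_dsa MyUser@MyServer

    The drop-off then lands directly in /home/MyUser, owned by MyUser, with no shell access and no extra accounts to manage.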

    Read the article

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by Pawz
    I have the following VPN configuration: +------------+ +------------+ +------------+ | outpost |----------------| kino |----------------| guchuko | +------------+ +------------+ +------------+ OS: FreeBSD 6.2 OS: Gentoo 2.6.32 OS: Gentoo 2.6.33.3 Keyname: client3 Keyname: server Keyname: client1 eth0: 10.0.1.254 eth0: 203.x.x.x eth0: 192.168.0.6 tun0: 192.168.150.18 tun0: 192.168.150.1 tun0: 192.168.150.10 P-t-P: 192.166.150.17 P-t-P: 192.168.150.2 P-t-P: 192.168.150.9 Kino is the server and has client-to-client enabled. I am using "fragment 1400" and "mssfix" on all three machines. An mtu-test on both connections is successful. All three machines have ip forwarding enabled, by this on the gentoo boxes: net.ipv4.conf.all.forwarding = 1 And this on the FreeBSD box: net.inet.ip.forwarding: 1 In the server's "ccd" directory is the following files: client1: iroute 192.168.0.0 255.255.255.0 client3: iroute 10.0.1.0 255.255.255.0 The server config has these routes configured: push "route 192.168.0.0 255.255.255.0" push "route 10.0.1.0 255.255.255.0" route 192.168.0.0 255.255.255.0 route 10.0.1.0 255.255.255.0 Kino's routing table looks like this: 192.168.150.0 192.168.150.2 255.255.255.0 UG 0 0 0 tun0 10.0.1.0 192.168.150.2 255.255.255.0 UG 0 0 0 tun0 192.168.0.0 192.168.150.2 255.255.255.0 UG 0 0 0 tun0 192.168.150.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 Outpost's like this: 192.168.150 192.168.150.17 UGS 0 17 tun0 192.168.0 192.168.150.17 UGS 0 2 tun0 192.168.150.17 192.168.150.18 UH 3 0 tun0 And Guchuko's like this: 192.168.150.0 192.168.150.9 255.255.255.0 UG 0 0 0 tun0 10.0.1.0 192.168.150.9 255.255.255.0 UG 0 0 0 tun0 192.168.150.9 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse - pings from Outpost to Guchuko's LAN IP. However... Pings from Outpost, to a machine on Guchuko's LAN work fine: .(( root@outpost )). (( 06:39 PM )) :: ~ :: # ping 192.168.0.3 PING 192.168.0.3 (192.168.0.3): 56 data bytes 64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms 64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms But a ping from Guchuko, to a machine on Outpost's LAN does not: .(( root@guchuko )). (( 06:43 PM )) :: ~ :: # ping 10.0.1.253 PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data. --- 10.0.1.253 ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2000ms Guchuko's tcpdump of tun0 shows: 18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64 18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64 18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64 Outpost's tcpdump on tun0 shows: 18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64 18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64 18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64 So Outpost is receiving the ICMP request destined for the machine on it's subnet, but appears not be forwarding it. Outpost has gateway_enable="YES" in its rc.conf which correctly sets net.inet.ip.forwarding to 1 as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting ? FWIW, pinging 10.0.1.253 from Kino has the same result - the traffic does not get forwarded. 
UPDATE: I've found that I can only ping certain IPs on Guchuko's LAN from Outpost. From Outpost I can ping 192.168.0.3 and 192.168.0.2, but 192.168.99 and 192.168.0.4 are unreachable. The same tcpdump behavior can be seen. I think this means the problem can't be due to IP forwarding or routing, because Outpost can reach SOME hosts on Guchuko's LAN but not others and likewise, Guchuko can reach two hosts on Outpost's LAN, but not others. This baffles me.
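
    A hedged reading of the remaining symptom (tcpdump shows the echo requests arriving, some LAN hosts answer and others don't): the silent hosts most likely send their replies to a default gateway that isn't the OpenVPN machine, so the answers never re-enter the tunnel; the hosts that do answer presumably already route via guchuko/outpost. Two sketches of a fix on the Gentoo side (addresses taken from the diagram above):

        # Option 1: give each silent LAN host (or its default gateway) a return route
        ip route add 192.168.150.0/24 via 192.168.0.6     # guchuko's LAN address
        ip route add 10.0.1.0/24      via 192.168.0.6

        # Option 2: hide the VPN subnets behind guchuko's own LAN address instead
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.150.0/24 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth0 -s 10.0.1.0/24      -j MASQUERADE

    On the FreeBSD side the equivalent of option 2 would be a pf or natd rule rather than iptables. Option 1 keeps real source addresses visible; option 2 avoids touching every LAN host.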

    Read the article
