Search Results

Search found 120660 results on 4827 pages for 'http server'.


  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine, and its Apache gets started on Windows startup (for some reason I no longer recall, and it can't be changed). I want to prepare my portable server for situations like this, so killing httpd.exe and then starting my portable server is not an option. Because of the already active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81. This is a problem because WordPress is very dependent on the URL, and I don't want to store a URL with a port in the WP database. Here is what I want to do through .htaccess: on any path except the error.php file, check whether the port is 80; if not, redirect to /error.php?code=port. Is it possible for this to take priority over WordPress's redirection and URL handling? In error.php I provide instructions on how to manually close httpd.exe and such, so my family and friends can still reach the portable site. It's essentially a gallery and calendar application for events and other such stuff. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but the following doesn't work:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </If>
        <Else>
            RewriteEngine On
            RewriteRule ^(error.php)($|/) - [L]
            RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress

    By the way, the portable server (Server2Go) automatically generates vhosts based on the hostname set in its config file, and changes ports if the configured port (e.g. 80) is already in use.
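
    Since the <If>/<Else> blocks only exist in Apache 2.4 (which is likely why the snippet above does nothing), here is a minimal, untested sketch of the same check using plain mod_rewrite conditions, placed above the WordPress block so it runs first. The error.php name and code=port query are the question's own; the rest is an assumption:

        RewriteEngine On
        # any request not on port 80, except for the error page itself, goes to error.php
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php
        RewriteRule ^ /error.php?code=port [L]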


  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to flash custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server on Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never gets past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /home/user/tftp/
            disable     = no
            cps         = 300 2
            per_source  = 60
        }

    I've done some searching but can't find any parameters for this file to control the timeout or the number of retries. The last two options (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help, either with a timeout configuration or a recommendation for a different TFTP server? Thanks!
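
    For reference, those two options belong to xinetd, not the TFTP daemon: cps limits incoming connections per second (here 300/sec, with a 2-second pause when exceeded), and per_source caps simultaneous connections from one source IP. As for the timeout, if the in.tftpd binary here is tftpd-hpa (common on Ubuntu), it accepts retransmission tuning on the command line; a hedged sketch, since the flag is tftpd-hpa-specific and may not apply to other tftpd implementations:

        # --retransmit / -T takes the retransmission timeout in microseconds
        server_args = -s /home/user/tftp/ --retransmit 1000000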


  • unattended-upgrades does not reboot

    - by Cheiron
    I am running Debian 7 stable with unattended-upgrades (every morning at 6 AM) to make sure I am always fully updated. I have the following config:

        $ cat /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these origin patterns
        Unattended-Upgrade::Origins-Pattern {
            // Archive or Suite based matching:
            // Note that this will silently match a different release after
            // migration to the specified archive (e.g. testing becomes the
            // new stable).
            "o=Debian,a=stable";
            "o=Debian,a=stable-updates";
            // "o=Debian,a=proposed-updates";
            "origin=Debian,archive=stable,label=Debian-Security";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
            // "vim";
            // "libc6";
            // "libc6-dev";
            // "libc6-i686";
        };

        // This option allows you to control if on an unclean dpkg exit
        // unattended-upgrades will automatically run
        //   dpkg --force-confold --configure -a
        // The default is true, to ensure updates keep getting installed
        //Unattended-Upgrade::AutoFixInterruptedDpkg "false";

        // Split the upgrade into the smallest possible chunks so that
        // they can be interrupted with SIGUSR1. This makes the upgrade
        // a bit slower but it has the benefit that shutdown while an upgrade
        // is running is possible (with a small delay)
        //Unattended-Upgrade::MinimalSteps "true";

        // Install all unattended-upgrades when the machine is shutting down
        // instead of doing it in the background while the machine is running
        // This will (obviously) make shutdown slower
        //Unattended-Upgrade::InstallOnShutdown "true";

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. A package that provides
        // 'mailx' must be installed. E.g. "user@example.com"
        Unattended-Upgrade::Mail "root";

        // Set this value to "true" to get emails only on errors. Default
        // is to always send a mail if Unattended-Upgrade::Mail is set
        Unattended-Upgrade::MailOnlyOnError "true";

        // Do automatic removal of new unused dependencies after the upgrade
        // (equivalent to apt-get autoremove)
        //Unattended-Upgrade::Remove-Unused-Dependencies "false";

        // Automatically reboot *WITHOUT CONFIRMATION* if
        // the file /var/run/reboot-required is found after the upgrade
        Unattended-Upgrade::Automatic-Reboot "true";

        // Use apt bandwidth limit feature, this example limits the download
        // speed to 70kb/sec
        //Acquire::http::Dl-Limit "70";

    As you can see, Automatic-Reboot is true, so the server should reboot automatically. Last time I checked, the server had been up for over 100 days, which means the update from Debian 7.1 to Debian 7.2 happened while the server was up (and indeed, all updates were installed). But that update involved kernel updates, which means the server should have rebooted. It did not. The server was running very slowly, so I rebooted it manually, which fixed that. I did some research and found out that unattended-upgrades responds to the reboot-required file in /var/run/. I touched this file and waited one week; the file still exists and the server did not reboot. So I think unattended-upgrades ignores the auto-reboot part. Am I doing something wrong here? Why did the server not restart? The upgrade part works perfectly, by the way; it's just the reboot part that does not seem to work as it should.
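
    One way to see whether the reboot code path even runs, assuming the wheezy package layout: invoke the script by hand in debug mode and watch what it says at the end of the run:

        # dry run with debug output; safe, installs nothing
        sudo unattended-upgrade --debug --dry-run

    Note the binary is called unattended-upgrade (no trailing "s"). If the debug output never mentions /var/run/reboot-required, the installed version may simply predate the Automatic-Reboot option.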


  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: we have a domain, mydomain.com. Everything is on our own server except general email accounts, which are through Gmail. Currently Gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g.:

        bugs@mydomain.com    |/path/to/issuetracker.script

    I'm struggling with a setup that allows the following, both locally and from users' email clients:

        guser1 - has a Gmail account and a local account
        guser2 - only has a Gmail account
        bugs   - has a pipe alias in /etc/aliases for the issue tracker

    Scenarios:

        * mail to guser1@mydomain.com from the local host (crons and such) needs to go to the Gmail account
        * mail to guser2@mydomain.com from the local host needs to go to the Gmail account
        * mail to bugs@mydomain.com needs to be piped to the local issue tracker script

    So, the first stab was creating a transport map. In this scenario, our server would be set as the MX, and guser*-destined emails are sent on to Gmail. Put the Gmail users in a map like so:

        guser1@mydomain.com    smtp:gmailsmtp:25
        guser2@mydomain.com    smtp:gmailsmtp:25

    Problems:

        * Ignores extensions such as guser1+ext@mydomain.com.
        * Only works if append_at_myorigin = no (if set to yes, Gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused)).
        * Since append_at_myorigin is set to no, all received emails have (unknown sender).

    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain.com. This too requires setting the local server as the MX:

        # /etc/aliases
        root: root@localhost

        # transport
        mydomain.com    smtp:gmailsmtp:25

    Problems:

        * If I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like:

            mail -s "testing" root < text.txt

          Postfix ignores the /etc/aliases entry, maps it to root@mydomain.com, and attempts to send it via the Gmail transport mapping.

    Third stab: create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this subdomain to the local server and leave the MX for mydomain.com pointing at the Gmail server.

    Problems:

        * Does not solve the issue with local accounts. So when the bug tracker responds to an email from guser1@mydomain.com, it uses a local transport and the user never receives the email.

        % postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_at_myorigin = no
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $myhostname, localhost.$myhostname, localhost
        myhostname = mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
        smtp_tls_enforce_peername = no
        smtp_tls_key_file = /etc/ssl/certs/kspace.pem
        smtp_tls_note_starttls_offer = yes
        smtp_tls_scert_verifydepth = 5
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname,
            reject_non_fqdn_sender, reject_non_fqdn_recipient,
            reject_unknown_sender_domain, reject_unknown_recipient_domain,
            reject_unauth_destination
        smtpd_tls_ask_ccert = yes
        smtpd_tls_req_ccert = no
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
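
    A small debugging aid for the transport-map attempts above (postmap -q is standard Postfix; the addresses are just the question's examples). It queries the table for a literal key, so it shows which entries exist as exact matches:

        postmap -q "guser1@mydomain.com" hash:/etc/postfix/transport
        postmap -q "guser1+ext@mydomain.com" hash:/etc/postfix/transport

    If the second lookup returns nothing, only the exact-match key exists, which is consistent with the extension problem described above.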


  • Windows FTP client to mirror ftp sites?

    - by user15318
    I need a Windows application that works like *nix's lftp. The Windows host will download the latest changes made on FTP Server #1 and upload those changes to FTP Server #2. Server #2 can't pull files directly from Server #1 because it's a www shared host with no shell account. The requirements are:

        1. A Windows app available for XP/Vista/W7.
        2. Must run either as an icon or a service; I don't want to have an extra icon open in the task bar.
        3. Reliable, so I don't have to worry about it.

    Is there an application you would recommend? Thank you.
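
    For what it's worth, a scriptable client such as WinSCP can express this mirror-through-the-middle pattern; a hedged sketch, where the hostnames and paths are placeholders and WinSCP itself is a suggestion rather than something named in the question:

        # mirror.txt - run with: winscp.com /script=mirror.txt
        open ftp://user:pass@server1.example.com
        synchronize local C:\mirror /remote/path
        close
        open ftp://user:pass@server2.example.com
        synchronize remote C:\mirror /remote/path
        exit

    Scheduled via Task Scheduler, this runs without a visible window.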


  • subversion problem on mac os x

    - by Mohsin Jimmy
    This exists in my httpd.conf file:

        <Location /svn>
          DAV svn
          SVNParentPath /Users/iirp/Sites/svn
          Allow from all
          #AuthType Basic
          #AuthName "Subversion repository"
          #AuthUserFile /Users/iirp/Sites/svn-auth-file
          #Require valid-user
        </Location>

    This works fine. When I change it to:

        <Location /svn>
          DAV svn
          SVNParentPath /Users/iirp/Sites/svn
          #Allow from all
          AuthType Basic
          AuthName "Subversion repository"
          AuthUserFile /Users/iirp/Sites/svn-auth-file
          Require valid-user
        </Location>

    and access my repository through the URL, it gives me the authentication screen, but after that my svn repository does not show up correctly. The message it gives me is:

        Internal Server Error

        The server encountered an internal error or misconfiguration and was unable
        to complete your request. Please contact the server administrator,
        [email protected], and inform them of the time the error occurred, and
        anything you might have done that may have caused the error. More
        information about this error may be available in the server error log.
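
    Two hedged checks with standard Apache tooling (the file paths are taken from the question; the log location is an assumption for Mac OS X's bundled Apache): a 500 that appears only after successful authentication is often a missing or unreadable AuthUserFile, so verify the password file exists in the expected format and read what Apache itself logs:

        # (re)create the user file; -c creates it, -m uses MD5 hashes
        htpasswd -cm /Users/iirp/Sites/svn-auth-file yourusername
        # then watch the error log while reproducing the failure
        tail -f /var/log/apache2/error_log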


  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames:

        Old: Domain\11111
        New: Domain\22222

    When this user now logs in using their new username and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates, but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS 7.5 environment:

        Request.ServerVariables["AUTH_TYPE"]:                    Negotiate
        Request.ServerVariables["AUTH_USER"]:                    Domain\11111
        Request.ServerVariables["LOGON_USER"]:                   Domain\22222
        Request.ServerVariables["REMOTE_USER"]:                  Domain\11111
        HttpContext.Current.User.Identity.Name:                  Domain\11111
        System.Threading.Thread.CurrentPrincipal.Identity.Name:  Domain\11111

    From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the AUTH_USER variable for looking up the database permissions. In a separate testing environment (a completely different server: Windows 2003, IIS 6), all of the above variables show Domain\22222. So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible). So the question is: how do I confirm whether it's the user's machine or the server that is botching the request, and how should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly:

        http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
        http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080

    Thanks.
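
    In the spirit of the first resource, a hedged first step on the client side (cmdkey ships with Windows Vista/7; the exact target names vary, so list before deleting): check whether the user's machine has stale saved credentials for the server and remove them:

        cmdkey /list
        cmdkey /delete:servername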


  • Massive Crawling requests from Google Apps Engine useragent

    - by SilentPlayer
    Hi friends, I'm being hit badly by the 'AppEngine-Google' user agent: I'm receiving 5-6 requests per second on my HTTP server. This bot is crawling my site just like GoogleBot does. The following is a sample from my access logs:

        72.14.192.3 - - [19/May/2010:01:27:06 +0000] "GET /some-url/etc-123.htm HTTP/1.1" 200 4707 "-"
            "AppEngine-Google; (+http://code.google.com/appengine; appid: harpy000)"

    I have checked the IP address; it is registered to Google Inc. Can anyone tell me where I can report this abuse to Google Inc., or share any information about this issue? Thank you!
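
    While waiting on a report, one hedged stopgap using plain Apache 2.2 directives (assuming an Apache front end, which the question doesn't actually state): match the user agent and deny it:

        SetEnvIfNoCase User-Agent "AppEngine-Google" block_appengine
        Order Allow,Deny
        Allow from all
        Deny from env=block_appengine

    The appid in the user agent (harpy000 here) identifies the offending app, which is worth including in any abuse report.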


  • Wordpress blog website apache and IIS subdirectory

    - by Philippe
    The issue is this: our company website is hosted on an IIS server. I was recently given the task of configuring a WordPress server so that our social media employee could test it and see how it works. This was completed successfully and easily on a new server with a WAMP configuration. The site was published as wordpress.domain.com and works fine. However, I have now been asked to make the soon-to-go-live blog accessible at domain.com/blog. Is there a way to modify the original company website to simply redirect /blog to the Apache WordPress site? If not, is there a way to move wordpress.domain.com onto the IIS server hosting the main website and keep the configuration? Is there a better solution that I haven't thought of (probably)? If so, what would you all suggest?
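
    A hedged sketch of the reverse-proxy route on the IIS side; this assumes the URL Rewrite module plus Application Request Routing (ARR) are installed and ARR's proxy mode is enabled, and the hostnames are the question's placeholders:

        <!-- in the main site's web.config -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Proxy /blog to WordPress" stopProcessing="true">
                <match url="^blog(/.*)?$" />
                <action type="Rewrite" url="http://wordpress.domain.com{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    WordPress itself would also need its site URL set to domain.com/blog, since it generates absolute links.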


  • Routing different domains on a VPS

    - by Hans Wassink
    We just went from shared hosting to a VPS server. We have several domain names pointing to our DNS, but they all point to the root of the server. What I would like now is a setup where every domain name gets its own directory, so we can run different sites on the VPS. Like:

        www.example.com  points to  /var/www/example.com
        www.imapwnu.com  points to  /var/www/imapwnu.com

    First of all, is this possible? Second, I have root SSH access and Webmin, on a LAMP server running Ubuntu. Webmin doesn't have BIND9 (I don't know if I need that; some forums pointed me towards something called BIND). Thanks in advance
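
    This is exactly what Apache name-based virtual hosts do, and no BIND is needed as long as the DNS records for each domain already point at the VPS. A minimal Apache 2.2-style sketch (paths and domains are the question's own):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias example.com
            DocumentRoot /var/www/example.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.imapwnu.com
            ServerAlias imapwnu.com
            DocumentRoot /var/www/imapwnu.com
        </VirtualHost>

    On Ubuntu these usually live in /etc/apache2/sites-available/, enabled with a2ensite and loaded with service apache2 reload.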


  • DNS Aliases of multiple domains in win2k8

    - by dbekiaris
    Hello, I have set up an AD-integrated server that also carries the DNS server role. What I want is to put an alias (CNAME) on a specific host in my domain, where the alias is under a different domain name (for example, if the domain is www.mydomain1.com, the alias should be www.domain2.com). Is this possible in Windows Server 2008, and how? Thank you very much in advance. Kind regards
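
    It should be, provided the DNS server also hosts a zone for domain2.com, since the CNAME record has to live in that zone. A hedged sketch with dnscmd, a standard Windows Server 2008 DNS tool (zone and host names are the question's examples):

        rem create the zone for the second domain, then add the alias
        dnscmd /zoneadd domain2.com /dsprimary
        dnscmd /recordadd domain2.com www CNAME www.mydomain1.com.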


  • haproxy access list using path_dir having issues with firefox

    - by user11243
    I'm trying to route all requests containing a path directory of /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096 # Total max connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths with /socket.io/ get correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for dev; not sure if that has anything to do with it. I'm using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
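
    A hedged alternative to try (path_beg is a standard HAProxy 1.4 criterion; whether it changes the Firefox behaviour is an open question): anchor the match on the path prefix instead of a path component:

        acl is_stream path_beg /socket.io/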


  • In TCP/IP terms, how does a download speed limiter in an office work?

    - by TessellatingHeckler
    Assume an office of people who want to limit HTTP downloads to a max of 40% of their internet connection's bandwidth, so that downloads don't block other traffic. We say "it's not supported in your firewall", and they say the inevitable line "we used to be able to do it with our Netgear/DLink/DrayTek". Thinking about it, a download works like this:

        1. HTTP GET request
        2. Server sends file data as TCP packets
        3. Client acknowledges receipt of the TCP packets
        4. Repeat until the download is finished

    The speed is determined by how fast the server sends data to you and how fast you acknowledge it. So, to limit download speed, you have two choices:

        1. Instruct the server to send data to you more slowly - and I don't think there's any protocol feature to request that in TCP or HTTP.
        2. Acknowledge packets more slowly by limiting your upload speed, which also ruins your upload speed.

    How do devices do this limiting? Is there a standard way?
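
    There is a third lever the list above leaves implicit: a middlebox can simply drop or delay packets, and TCP's congestion control slows the sender down in response. A hedged Linux illustration of that mechanism using ingress policing (tc is standard; the interface name and rate are placeholders, and this shows the idea rather than what any particular SOHO router actually does):

        # attach an ingress qdisc and police inbound traffic to ~4 Mbit/s
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip u32 \
            match u32 0 0 police rate 4mbit burst 100k drop flowid :1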


  • Howto configure openSuSE firewall to route local traffic to local ports

    - by Eduard Wirch
    I have openSUSE 11.3 installed and am using the openSUSE firewall configuration mechanism (/etc/sysconfig/SuSEfirewall2). I have an HTTP server application running on port 8080, and I want the service to be accessible on port 80. I created a redirect rule using:

        FW_REDIRECT="0/0,0/0,tcp,80,8080"

    This works fine for every request coming from outside, but it doesn't work for local requests (example: wget http://myserver/). Is there a way to tell the firewall to redirect local requests addressed to port 80 to port 8080, using the SuSE firewall configuration file?
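
    The likely underlying reason is that FW_REDIRECT generates PREROUTING rules, and locally generated traffic never traverses PREROUTING; it goes through the nat OUTPUT chain instead. A hedged raw-iptables equivalent (the rule itself is standard netfilter; whether SuSEfirewall2's custom-script hooks are the right place to carry it is left open here):

        # replace 1.2.3.4 with the server's own address (or use -d 127.0.0.1 for localhost)
        iptables -t nat -A OUTPUT -p tcp -d 1.2.3.4 --dport 80 -j REDIRECT --to-ports 8080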


  • Can I uninstall Qmail when I switch to Gmail Apps

    - by Saif Bechan
    I am not going to run a mail server on my system anymore; I am going to switch to Gmail. Can I then just uninstall Qmail, my previous mail server program? I still want to be able to send email from PHP, and I also want to get the email that Logwatch sends me from within the system. Basically: to have your server send email, do you need a mail server, or will Gmail cover all of this?
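
    You still need something that speaks SMTP locally, but it can be a lightweight relay instead of a full mail server. A hedged sketch with ssmtp, one such relay (the config keys are ssmtp's; the account details are placeholders, and Gmail normally requires an application-specific password for this):

        # /etc/ssmtp/ssmtp.conf
        root=you@gmail.com
        mailhub=smtp.gmail.com:587
        AuthUser=you@gmail.com
        AuthPass=your-app-password
        UseSTARTTLS=YES

    PHP's mail() and Logwatch both hand mail to the local sendmail binary, which ssmtp provides, so both keep working.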


  • force https with apache before .htpasswd

    - by johnlai2004
    I have this in my .htaccess file:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://www.myweb.com/phpmyadmin$1 [R,L]

        AuthUserFile /var/www/myweb/.htpasswd
        AuthGroupFile /dev/null
        AuthName "Sovereign Databases"
        AuthType Basic
        <Limit GET>
            require valid-user
        </Limit>

    But every time I go to http://www.myweb.com/phpmyadmin, the .htpasswd prompt asks for my credentials BEFORE I'm redirected to https://www.myweb.com/phpmyadmin. After I type in my username and password, I get redirected to https://www.myweb.com/phpmyadmin. The problem is that I don't want anyone to submit their username and password unencrypted via http. How do I force people to log in via the https version even if they typed in the http version?
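
    The usual explanation is that per-directory rewrite rules run after Apache's authentication phase, so the prompt wins. A commonly cited workaround, hedged here because it needs mod_ssl (the URL is the question's own): require SSL outright and turn the resulting 403 into a redirect to the https URL:

        SSLRequireSSL
        ErrorDocument 403 https://www.myweb.com/phpmyadmin/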


  • p2v v2v v2p tool from open source?

    - by neolix
    We have CentOS, Fedora, and Ubuntu servers and desktops, and we are looking for a good open source tool for P2V, V2V, and V2P conversions. We are not using VMware here; we only use Xen or KVM. Some of the servers are being moved to new hardware, and some onto Xen or KVM. Can someone help me?


  • Hosting a site on amazon ec2

    - by Khalid Mushtaq
    I have recently bought an Amazon EC2 instance and now I want to host a website. I have googled and found some useful info, but there is some confusion in my mind. Suppose the domain name is http://www.example.com. That's what I have done so far:

        1. I configured my domain locally on the EC2 instance, and it works fine when I open that URL in the instance's own browser. I used an entry for www.example.com in /etc/hosts, pointing to 127.0.0.1, to open it locally on the instance.
        2. I got one Elastic IP address and associated it with the instance.
        3. I changed www.example.com's A record to the Elastic IP I got in the step above.

    Now what should I do? When a user opens my website anywhere in the world, will it get pointed to my instance's IP address? Have I done the proper configuration for the website on the instance?
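
    A pair of checks from any outside machine (dig and curl are standard tools; the domain is the question's placeholder):

        # does public DNS already return the Elastic IP?
        dig +short www.example.com
        # does the web server answer? (also requires port 80 open in the instance's security group)
        curl -I http://www.example.com/

    One caveat worth stating: the /etc/hosts entry pointing www.example.com at 127.0.0.1 only affects the instance itself, and can hide DNS problems when testing from the instance.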


  • Nginx rewrite with Simple Machines Forum

    - by Kevin Worthington
    I am running Nginx 1.5.6 and I use the Simple Machines Forum software. Most rewrite rules seem to work properly, with the exception of the RSS feeds. In my Nginx configuration, I have the following line, which is supposed to handle URLs containing ".xml":

        rewrite ^/forum/(\.xml|xmlhttp)/?$ "/forum/index.php?pretty;action=$1" last;

    The above rule produces the following URL for the main forum, which returns a 403 error:

        http://www.mydomain.com/forum/.xml/?type=rss

    I would like the rewrite rule to produce this type of URL, which returns code 200 (a real page):

        http://www.mydomain.com/forum/?type=rss;action=.xml

    Here is the entire block pertaining to the forum rewrites: http://pastebin.com/raw.php?i=tZkAibW3. I would really appreciate some help to create a rewrite rule to do that. Thanks.
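
    A hedged observation plus an untested sketch: the pattern above requires the path to end right after .xml, but the failing request has a trailing slash plus a query string; nginx matches rewrite patterns against the decoded path only, and appends the original ?type=rss arguments automatically. Something like this exact-path rule may be closer (the pretty/action parameter layout is taken from the question's own rule, and the mapping to SMF internals is an assumption):

        location = /forum/.xml/ {
            rewrite ^ /forum/index.php?pretty;action=.xml last;
        }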


  • ehcp - access pop3 account

    - by iko
    Hi, I finally managed to get mail working with ehcp on an Ubuntu server. I can send and receive email from the webmail, but I can't seem to be able to access the mailbox over POP3. I can telnet to the POP3 server, but my credentials are rejected. Apparently the POP3 server is Courier, which seems to be configured the way it should be. Does anyone have any idea about this? Thank you, Olivier
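
    A manual test worth trying (plain POP3 protocol commands over telnet; the address is a placeholder): Courier setups behind hosting panels commonly expect the full email address as the login name rather than the bare user, so try both forms:

        telnet mail.example.com 110
        USER user@example.com
        PASS yourpassword
        LIST
        QUIT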


  • reverse ssh tunnel listens on wrong interface

    - by Jens Fahnenbruck
    I'm working with a server that is behind a firewall. I have established an SSH tunnel to an intermediate server on the internet like this:

        remoteuser@behind_fw$ ssh -N -f -R 10002:localhost:22 middleuser@middle

    But I can't connect directly through this server; this doesn't work:

        user@local$ ssh remoteuser@middle -p 10002

    I have to connect in two steps:

        user@local$ ssh middleuser@middle
        middleuser@middle$ ssh remoteuser@localhost -p 10002

    Output of netstat -l on middle:

        tcp    0    0 localhost:10002    *:*    LISTEN

    but it should be something like this:

        tcp    0    0 *:10002            *:*    LISTEN

    How can I achieve this?
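
    By default sshd binds remote forwards to the loopback interface only. A sketch of the standard fix (GatewayPorts is a real sshd_config option; restarting sshd afterwards is assumed):

        # on middle, in /etc/ssh/sshd_config:
        GatewayPorts clientspecified

        # then re-create the tunnel, explicitly asking for a wildcard bind:
        remoteuser@behind_fw$ ssh -N -f -R 0.0.0.0:10002:localhost:22 middleuser@middle

    GatewayPorts yes would also work; it forces wildcard binds for all remote forwards instead of letting the client choose.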


  • IIS 7 - Application pools

    - by vikp
    I made a CompanySite website in IIS; it's the only website on that server. I created a .NET 4 Integrated pool for that website. I've installed an ASP.NET site into CompanySite and it works fine. Now I'd like to install an application within this site, so I create another .NET 4 application pool for that application and then install the application into CompanySite using that pool. As soon as setup starts, the website goes offline, either with a 503 or with "The page cannot be displayed because an internal server error has occurred." The only way to fix this (that I have found) is to uninstall that application and restart Windows Server, not just the IIS service. The question is: how can I install multiple applications within a single site? Thank you.
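
    A hedged way to see what state things are in when the 503 hits (appcmd ships with IIS 7; the site name is the question's own):

        %windir%\system32\inetsrv\appcmd list apppool
        %windir%\system32\inetsrv\appcmd list app /site.name:"CompanySite"

    A 503 usually means an application pool stopped. If appcmd shows the new pool as Stopped, the Windows event log entries from WAS (Windows Process Activation Service) at that timestamp normally name the actual failure.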


  • Cisco IOS BVI ACL: Only allow established UDP

    - by George Bailey
    Related: Cisco IOS ACL: Don't permit incoming connections just because they are from port 80. I know we can use the established keyword for TCP, but what can we do for UDP (short of replacing a bridge or BVI with a NAT)?

    Answer: I found out what "UDP has no connection" means. DNS uses UDP, for example:

        1. named (the DNS server) is listening on port 53.
        2. nslookup (the DNS client) starts listening on some random port and sends a packet to port 53 of the server, noting the source port in that packet.
        3. nslookup will retry up to 3 times if necessary. The packets are so small that it does not have to worry about them arriving in the wrong order.
        4. If nslookup receives a response on that port coming from the server's IP and port, it stops listening.

    If the server tried to send two responses (for example, a response and a response to the retry), it would not care whether either of them made it, because retrying is the client's job. In fact, unless an ICMP type 3 code 3 (port unreachable) packet gets through, the server would not know about a failure at all. This is different from TCP, where you get "connection closed" or "timed out" errors. DNS allows for easy retries from the client as well as small packets, so UDP is an excellent choice because it is more efficient. In UDP you would see:

        nslookup sends request
        named sends answer

    In TCP you would see:

        nslookup's machine sends SYN
        named's machine sends SYN-ACK
        nslookup's machine sends ACK and the request
        named's machine sends the response

    That is much more than is necessary for a tiny DNS packet.
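
    For the original question, a hedged sketch of the classic source-port workaround (standard IOS ACL syntax; it only approximates state, since anything able to spoof source port 53 still passes):

        ! allow replies from DNS servers back in to high client ports
        access-list 120 permit udp any eq domain any gt 1023

    Stateful alternatives on IOS, such as CBAC's ip inspect or zone-based firewalling, track UDP "sessions" by timeout and are the closer equivalent of established for UDP.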


  • Vyatta internet connection + hosted site on same IP

    - by boburob
    Having a small issue setting up a Vyatta. The company internet connection and two different websites all share the same public IP:

        Server 1 - has websites hosted on ports 1000 and 3000, and also runs a proxy server that provides internet access to the domain
        Server 2 - has a website hosted on ports 80 and 443

    The Vyatta is correctly NATting the appropriate traffic to each server and allowing the proxy to reach the internet. However, I have a problem getting to the websites hosted on these two servers from inside the domain. I believe the problem is that the HTTP request is being sent with the public IP, e.g. 12.34.56.78. The request reaches the website, and the server attempts to send the reply back to that IP, but this is the IP of the Vyatta, so it has nowhere else to go. I thought the solution would be something like this:

        rule 50 {
            destination {
                address 12.34.56.78
                port 1000
            }
            inbound-interface eth1
            inside-address {
                address 10.19.2.3
            }
            protocol tcp
            type destination
        }

    But this doesn't seem to do it!

    UPDATE: I changed the rules to the following:

        rule 50 {
            destination {
                address 12.34.56.78
                port 443
            }
            outbound-interface eth1
            protocol tcp
            source {
                address 10.19.2.3
            }
            type masquerade
        }
        rule 51 {
            destination {
                address 12.34.56.78
                port 443
            }
            inbound-interface eth1
            inside-address {
                address 10.19.2.2
            }
            protocol tcp
            type destination
        }

    I am now seeing traffic going between the two with Wireshark, but the website still fails to load.
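
    This is the classic hairpin-NAT problem, and split DNS is the usual way to sidestep it entirely. A hedged test from any internal client (hosts-file syntax is universal; the hostname is a placeholder and the address is taken from the question's rules): point the site's name straight at the internal server and see if the site loads:

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) on an internal client
        10.19.2.2    www.company-site.example

    If that works, serving internal clients the internal addresses from the local DNS avoids the double-NAT round trip through the Vyatta altogether.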

