Search Results

Search found 58094 results on 2324 pages for 'http status codes'.

  • Someone tried to hack my site, and I want to understand the log

    - by garconcn
    I have a WordPress site hosted on CentOS 6. After seeing the following access log entry, I checked the server and it seems OK. Can anyone explain what this attacker was trying to do? Did they get what they wanted? I have disabled allow_url_include and restricted open_basedir to the web dir and tmp (/etc is not in the path).

        190.26.208.130 - - [05/Sep/2012:21:24:42 -0700] "POST http://my_ip/?-d%20allow_url_include%3DOn+-d%20auto_prepend_file%3D../../../../../../../../../../../../etc/passwd%00%20-n/?-d%20allow_url_include%3DOn+-d%20auto_prepend_file%3D../../../../../../../../../../../../etc/passwd%00%20-n HTTP/1.1" 200 32656 "-" "Mozilla/5.0"
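
    The query string is the signature of the widely reported PHP-CGI argument-injection attack: it tries to pass -d switches to the php-cgi binary to force allow_url_include on and prepend /etc/passwd to the response. A minimal php.ini hardening sketch matching the settings the question describes (the exact web dir path is an assumption):

        ; disable remote file inclusion and fence PHP into the web dir and tmp
        allow_url_include = Off
        allow_url_fopen   = Off
        open_basedir      = "/var/www/html:/tmp"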

  • How to limit server to specific IP addresses with mod_authz_host?

    - by BeeDog
    Hi! I am very new to this area, so please bear with me. :) Right now I am running an Apache HTTP server with a very basic configuration. The website hosted on it is accessible from anywhere, and I want to limit access to a specific IP address range. I've looked into this and found that the Apache module mod_authz_host handles it: http://httpd.apache.org/docs/2.2/mod/mod_authz_host.html The problem is that I haven't found documentation that clearly explains how to actually apply it. How do I make sure only a certain range of IP addresses can access my site/server? The machine is running Ubuntu Server 10.10, the web files are stored in /var/www/, and the apache2 daemon has its files in /etc/apache2/ and /usr/lib/apache2/modules/. Thanks in advance, and sorry if this is a stupid question!
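
    For Apache 2.2 (the version the linked documentation covers), restricting a directory to one range takes only a few mod_authz_host directives. A minimal sketch, assuming the default /var/www docroot and a placeholder range of 192.168.1.0/24:

        <Directory /var/www/>
            Order deny,allow
            Deny from all
            Allow from 192.168.1.0/24
        </Directory>

    Reload Apache afterwards (sudo /etc/init.d/apache2 reload on Ubuntu 10.10) for the change to take effect.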

  • How does Hadoop decide what its nodes' hostnames are?

    - by Dan R
    Currently the URLs generated by the jobtracker and namenode return hostnames like bubbles.local or just bubbles. These don't resolve unless the client machine has them in its /etc/hosts file. When I run the hostname command on these machines it returns a hostname complete with the domain (e.g. bubbles.example.com). Running a small Java test on these machines:

        InetAddress addr = InetAddress.getLocalHost();
        byte[] ipAddr = addr.getAddress();
        String hostname = addr.getHostName();
        System.out.println(hostname);

    produces output just like the hostname command. Where else could Hadoop be grabbing a hostname to use in its jobtracker/namenode UI? This is occurring in clusters with Hadoop 1.0.3 and 1.0.4-SNAPSHOT from early August. The machines are running CentOS release 5.8 (Final). The generated URLs I'm referring to look like http://example:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/ or http://example.local:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
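
    One difference worth checking: getHostName() can be satisfied from /etc/hosts or the resolver cache, while a reverse DNS (PTR) lookup may return a different name, and if the two disagree, reverse DNS is a plausible source of the short names showing up in the UI. A small self-contained comparison sketch (class name is arbitrary):

        import java.net.InetAddress;

        public class HostnameCheck {
            public static void main(String[] args) throws Exception {
                InetAddress addr = InetAddress.getLocalHost();
                // may be answered from /etc/hosts
                System.out.println("getHostName:          " + addr.getHostName());
                // forces a reverse (PTR) lookup on the address
                System.out.println("getCanonicalHostName: " + addr.getCanonicalHostName());
            }
        }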

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function or modification that can refile/organize a group of TODOs so that it sorts them by three (3) criteria: first by due date, second by priority, and third by title of the task?

    EDIT: If anyone can please help me modify this so that undated TODOs are sorted last, that would be greatly appreciated -- at the present time, undated TODOs are not being sorted:

        ;; multiple sort
        (defun org-sort-multi (&rest sort-types)
          "Multiple sorts on a certain level of an outline tree, or plain list items.
        SORT-TYPES is a list where each entry is either a character or a
        cons pair (BOOL . CHAR), where BOOL is whether or not to sort
        case-sensitively, and CHAR is one of the characters defined in
        `org-sort-entries-or-items'.  Entries are applied in back to front
        order.  Example: to sort first by TODO status, then by priority,
        then by date, then alphabetically (case-sensitive), use the
        following call: (org-sort-multi '(?d ?p ?t (t . ?a)))"
          (interactive)
          (dolist (x (nreverse sort-types))
            (when (char-valid-p x)
              (setq x (cons nil x)))
            (condition-case nil
                (org-sort-entries (car x) (cdr x))
              (error nil))))

        ;; sort current level
        (defun lawlist-sort (&rest sort-types)
          "Sort the current org level.
        SORT-TYPES is a list where each entry is either a character or a
        cons pair (BOOL . CHAR), where BOOL is whether or not to sort
        case-sensitively, and CHAR is one of the characters defined in
        `org-sort-entries-or-items'.  Entries are applied in back to front
        order.  Defaults to \"?o ?p\", which sorts by TODO status, then by
        priority."
          (interactive)
          (when (equal mode-name "Org")
            (let ((sort-types (or sort-types
                                  (if (or (org-entry-get nil "TODO")
                                          (org-entry-get nil "PRIORITY"))
                                      '(?d ?t ?p) ;; date, time, priority
                                    '((nil . ?a))))))
              (save-excursion
                (outline-up-heading 1)
                (let ((start (point)) end)
                  (while (and (not (bobp)) (not (eobp))
                              (<= (point) start))
                    (condition-case nil
                        (outline-forward-same-level 1)
                      (error (outline-up-heading 1))))
                  (unless (> (point) start) (goto-char (point-max)))
                  (setq end (point))
                  (goto-char start)
                  (apply 'org-sort-multi sort-types)
                  (goto-char end)
                  (when (eobp) (forward-line -1))
                  (when (looking-at "^\\s-*$")
                    ;; (delete-line)
                    )
                  (goto-char start)
                  ;; (dotimes (x ) (org-cycle))
                  )))))

  • Why is apache/passenger unable to open the sqlite3 rails database file?

    - by sendos
    I'm running apache2/passenger 2.0.3 (Ubuntu 9.10 packages). I can start up WEBrick in the Rails folder and run the app perfectly, just as I do on my development box with script/server. Why then does apache/passenger fail to open the database, throwing a 500 and putting the following in the log?

        Status: 500 Internal Server Error
        could not open database: unable to open database file
        /usr/lib/ruby/1.8/sqlite3/errors.rb:62:in `check'...
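
    WEBrick runs as the logged-in user while Passenger typically runs the app as the Apache user, so filesystem permissions are a common culprit: SQLite needs write access to both the database file and its containing directory. A sketch of one possible fix, assuming an app root of /var/www/myapp and Apache running as www-data (both paths are assumptions):

        sudo chown -R www-data:www-data /var/www/myapp/db    # hand the db dir to the web user
        sudo chmod 664 /var/www/myapp/db/production.sqlite3  # file writable by owner and group
        sudo chmod 775 /var/www/myapp/db                     # dir writable for journal files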

  • How to not forward certain emails in Outlook

    - by kitokid
    I have set up a rule to forward incoming emails from Outlook to my Gmail account. The problem is that certain emails on which I'm CC'd (about 1,000 a day of monitoring-system running status) are also forwarded to Gmail and fill up my account very quickly. I have set up rules in Outlook to move those emails to a certain folder (called Monitored_Emails), but I don't know how to exclude them from being forwarded to Gmail. How can I set up a rule to forward all emails except those moved to a certain folder?

  • Load balanced proxies to avoid an API request limit

    - by ClickClickClick
    There is a certain API out there which limits the number of requests per day per IP. My plan is to create a bunch of EC2 instances with elastic IPs to sidestep the limitation. I'm familiar with EC2 and am just interested in the configuration of the proxies and a software load balancer. I think I want to run a simple TCP proxy on each instance and a software load balancer on the machine I will be requesting from -- something that allows the following to return a response from a different IP each time (round robin, availability, doesn't really matter):

        curl http://www.bbc.co.uk -x http://myproxyloadbalancer:port

    Could anyone recommend a combination of software, or even a link to an article, that details a pleasing way to pull it off? (My client won't be curl but is proxy aware; I'll be making the requests from a Ruby script.)
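
    One combination that fits this description is a lightweight HTTP proxy (Squid or tinyproxy, say) on each EC2 instance, fronted by HAProxy on the requesting machine. A minimal haproxy.cfg sketch, assuming three instances each running a proxy on port 3128 (hostnames and ports are placeholders):

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        # point the Ruby client (or curl -x) at localhost:8080
        frontend proxy_in
            bind *:8080
            default_backend proxies

        # each backend is a proxy on a distinct elastic IP
        backend proxies
            balance roundrobin
            server proxy1 ec2-host-1.example.com:3128 check
            server proxy2 ec2-host-2.example.com:3128 check
            server proxy3 ec2-host-3.example.com:3128 check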

  • Dynamic authentication realms in Apache

    - by Cogsy
    I have a front-end server acting as a gateway proxy for many (a dynamic 'many') building monitors with embedded web servers. They are accessed with URLs like:

        http://www.example.com/monitor1/
        http://www.example.com/monitor2/
        ...

    I'm trying to restrict access to these monitors to only the users that own them. So what I need is a way of specifying rights for users or groups to specific directories. The standard auth mechanisms I see in Apache won't work because I would need to spell out every location. I'd prefer some dynamic map or script. Any suggestions?
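
    For context, the per-location baseline the question wants to avoid spelling out looks like the sketch below (auth file paths and group names are assumptions); the open problem is generating one such mapping per monitor dynamically rather than by hand:

        <Location /monitor1/>
            AuthType Basic
            AuthName "Monitor 1"
            AuthUserFile  /etc/apache2/htpasswd
            AuthGroupFile /etc/apache2/groups
            Require group monitor1-owners
        </Location>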

  • nginx: location, try_files, rewrite: Find pattern match in subfolder, else move on?

    - by Nick
    I'd like for Nginx to do the following: if the URI matches the pattern http://mysite.com/$string/ and $string is not 'KB' and not 'images', look for $string.html in a specific subfolder. If $string.html exists in the subfolder, return it. If it does not exist, move on to the next matching location. ($string = any letters, numbers, or dashes.) For example, if the user requests http://mysite.com/test/ it should look for a file called /webroot/www/myfolder/test.html. I've tried variations of:

        location ~ /[a-zA-Z0-9\-]+/ {
            try_files /myfolder/$uri.html @Nowhere;
        }

    But: (1) it doesn't seem to find the file even when it does exist, and (2) if it fails (which is always, right now), it jumps to the @Nowhere location rather than moving on and trying to find another location that matches. I'd like it to consider the current location "not a match" if the file doesn't exist.
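
    One likely reason the file is never found: $uri keeps its surrounding slashes, so /myfolder/$uri.html expands to /myfolder/test/.html rather than /myfolder/test.html. A sketch using a named capture instead (webroot and folder taken from the question; the fallback location name is an assumption):

        location ~ "^/(?<page>[a-zA-Z0-9-]+)/$" {
            root /webroot/www;
            # serve myfolder/<page>.html if it exists, else hand off
            try_files /myfolder/$page.html @fallback;
        }

    Note that nginx will not resume location matching once a regex location has matched, so the usual workaround is to make the fallback (@fallback here) reproduce whatever the next location would have done.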

  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/secure/;
        }

    I'm passing in paths in the form "/myfile.doc", and the file's path would be /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link). I've tried taking the trailing / off the location directive and that makes no difference. Two questions:

    1. How do I fix it?
    2. How can I debug problems like this myself? How can I get Nginx to report which path it's looking for? error.log shows nothing, and access.log just tells me which URL is being requested -- that's the bit I already know! It's no fun trying things randomly without any feedback.

    Here's my entire nginx.conf:

        daemon off;
        worker_processes 2;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server {
                listen 21534;
                server_name my.server.com;
                client_max_body_size 5m;

                location /media/ {
                    alias /home/ldr/webapps/nginx/app/media/;
                }

                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                    fastcgi_pass_header Authorization;
                    fastcgi_hide_header X-Accel-Redirect;
                    fastcgi_hide_header X-Sendfile;
                    fastcgi_intercept_errors off;
                    include fastcgi_params;
                }

                location /secure {
                    internal;
                    alias /home/ldr/webapps/nginx/app/secure/;
                }
            }
        }

    EDIT: I'm trying some of the suggestions here. So I've tried:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/;
        }

    both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location', giving:

        location ^~ /secure/ { ...etc...

    Not sure what that signifies, but it didn't work either!
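
    On the debugging question: with the error log set to debug level, nginx logs the filesystem path of every open() attempt, which shows exactly where it looked. A sketch (debug level requires an nginx binary built with --with-debug, which is worth verifying for your build):

        # top level of nginx.conf -- watch for "open() ... failed" lines
        error_log /var/log/nginx/error.log debug;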

  • Chrome Residual Redirect to Login Page

    - by Shadow503
    My college redirects people in the dorms to a login page when using an ethernet (or wifi) connection. I am now at home, and certain domains keep redirecting to this login page. I've tried running ipconfig /flushdns, and I flushed Chrome's local DNS cache as described here: How to clear/flush the DNS cache in Google Chrome?. Interestingly enough, while http://www.reddit.com redirects to the login page, http://www.reddit.com/r/funny works. Firefox works fine for both URLs. Is there a way to fix this without deleting all of my cookies? Thanks!

  • Ubuntu 10.04 - install-info error during update

    - by user33684
    I'm using Ubuntu 10.04 beta 1. When I try to update and upgrade, I get the following error:

        Setting up install-info (4.13a.dfsg.1-5ubuntu1) ...
        /etc/environment: line 4: LC-ALL=en_US.UTF-8: command not found
        dpkg: error processing install-info (--configure):
         subprocess installed post-installation script returned error exit status 127
        Errors were encountered while processing:
         install-info

    Does anyone know how to fix this? Thanks
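
    The error message itself points at the likely culprit: LC-ALL is not a valid shell variable name (hyphens aren't allowed), so the shell tries to execute it as a command and the post-installation script exits with status 127. The locale variable is spelled LC_ALL; a sketch of what line 4 of /etc/environment presumably should read:

        LC_ALL=en_US.UTF-8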

  • One of the web front ends not responding in SharePoint

    - by TTL
    I have set up a SharePoint farm with two web front ends. Everything was working fine, but now when I try to deploy any solution it doesn't get deployed to one WFE. The solution gets stuck on the "deploying" status message, and when I kill the timer job associated with it, it shows as deployed on one WFE but not on the other. What things can I check for this?
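
    A possible starting point for inspecting deployment state from the command line, assuming SharePoint 2007/2010 with stsadm on the PATH (the timer-service name SPTimerV3 is the 2007 one and is an assumption for your version):

        REM list solution deployments and their per-server status
        stsadm -o enumdeployments

        REM restart the timer service on the stuck WFE, then retry the deployment
        net stop SPTimerV3 && net start SPTimerV3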

  • Htaccess Redirect with domain attributes

    - by PHP Bugs
    I have to write a redirect rule for the following condition:

        www.domain.com/custom.aspx?ATTR=VALUE

    to

        www.domain.com/custom?ATTR=VALUE

    How can this be achieved using .htaccess? Below is my current .htaccess file. Please also suggest where to include your code.

        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on
            RewriteRule ^api/rest api.php?type=rest [QSA,L]
            RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
            RewriteCond %{REQUEST_METHOD} ^TRAC[EK]
            RewriteRule .* - [L,R=405]
            RewriteCond %{REQUEST_URI} !^/(media|skin|js)/
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-l
            RewriteRule .* index.php [L]
        </IfModule>
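
    A sketch of one way to do it (untested): mod_rewrite carries the query string over unchanged when the substitution doesn't set a new one, so a single rule placed directly after the RewriteEngine on line should cover every ATTR=VALUE pair:

        # /custom.aspx?ATTR=VALUE  ->  /custom?ATTR=VALUE (permanent redirect)
        RewriteRule ^custom\.aspx$ /custom [R=301,L]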

  • Enable (or work around) Administrative shares in Windows 8

    - by Brado
    So in using Windows 8, I've discovered that the administrative shares are disabled. There seems to be no easy way to get them re-enabled. Is anyone aware of a workaround or solution? I did not have this issue with Windows 7 after disabling UAC; in Windows 8, however, this still doesn't work. This is all I could find, but I am not satisfied with the information provided: http://www.computerperformance.co.uk/win8/windows8-administrative-shares.htm http://www.tomsitpro.com/articles/windows_8-file_sharing-windows_administrative_shares,2-195.html
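
    The workaround most commonly cited for this is the LocalAccountTokenFilterPolicy registry value, which relaxes UAC remote restrictions for local accounts; treat the sketch below as something to verify rather than a confirmed fix, and reboot after applying it:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f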

  • Serving a default image with nginx

    - by ustun
    I have the following configuration in nginx:

        location /static/ {
            root /srv/kose/;
            expires 2w;
            access_log off;
        }

        location / {
            proxy_pass http://127.0.0.1:8089;
        }

    If a file is not found in /static/, I want to serve a default image and not proxy_pass to 8089. Currently it looks for the file in the root for static, and if it cannot find it, it tries the proxy. I have tried the following, but it doesn't work. How can I tell nginx to serve the default image? I have also tried try_files to no avail.

        location /static/ {
            root /srv/kose/;
            expires 2w;
            access_log off;
            error_page 404 /srv/static/defaultimage.jpg;
        }

        location / {
            proxy_pass http://127.0.0.1:8089;
        }
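
    One detail that trips this up: error_page (and the last argument of try_files) takes a URI, not a filesystem path like /srv/static/defaultimage.jpg. A sketch, assuming the image is placed at /srv/kose/static/defaultimage.jpg so the fallback URI resolves inside this same location:

        location /static/ {
            root /srv/kose/;
            expires 2w;
            access_log off;
            # on a miss, internally redirect to the default image URI
            try_files $uri /static/defaultimage.jpg;
        }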

  • How can I give OpenVPN clients access to a DNS server (bind9) located on the same machine as the OpenVPN server?

    - by lacrosse1991
    I currently have a Debian server that is running an OpenVPN server. I also have a DNS server (bind9) on it that I would like to give the connected OpenVPN clients access to, but I am unsure how to do this. I already know how to send DNS options to the clients using push "dhcp-option DNS x.x.x.x", but I am unsure how to give the clients access to the DNS server that is located on the same machine as the VPN server, so if anyone could point me in the right direction I would really appreciate it. In case this has anything to do with adding rules to iptables, this is my current configuration:

        # Generated by iptables-save v1.4.14 on Thu Oct 18 22:05:33 2012
        *nat
        :PREROUTING ACCEPT [3831842:462225238]
        :INPUT ACCEPT [3820049:461550908]
        :OUTPUT ACCEPT [1885011:139487044]
        :POSTROUTING ACCEPT [1883834:139415168]
        -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Thu Oct 18 22:05:33 2012
        # Generated by iptables-save v1.4.14 on Thu Oct 18 22:05:33 2012
        *filter
        :INPUT ACCEPT [45799:10669929]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [45747:10335026]
        :fail2ban-apache - [0:0]
        :fail2ban-apache-myadmin - [0:0]
        :fail2ban-apache-noscript - [0:0]
        :fail2ban-ssh - [0:0]
        :fail2ban-ssh-ddos - [0:0]
        :fail2ban-webserver-w00tw00t - [0:0]
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache-myadmin
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-webserver-w00tw00t
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache-noscript
        -A INPUT -p tcp -m multiport --dports 80,443 -j fail2ban-apache
        -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh-ddos
        -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
        -A INPUT -i tun+ -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
        -A FORWARD -i tun+ -j ACCEPT
        -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A fail2ban-apache -j RETURN
        -A fail2ban-apache-myadmin -s 211.154.213.122/32 -j DROP
        -A fail2ban-apache-myadmin -s 201.170.229.96/32 -j DROP
        -A fail2ban-apache-myadmin -j RETURN
        -A fail2ban-apache-noscript -j RETURN
        -A fail2ban-ssh -s 76.9.59.66/32 -j DROP
        -A fail2ban-ssh -s 64.13.220.73/32 -j DROP
        -A fail2ban-ssh -s 203.69.139.179/32 -j DROP
        -A fail2ban-ssh -s 173.10.11.146/32 -j DROP
        -A fail2ban-ssh -j RETURN
        -A fail2ban-ssh-ddos -j RETURN
        -A fail2ban-webserver-w00tw00t -s 217.70.51.154/32 -j DROP
        -A fail2ban-webserver-w00tw00t -s 86.35.242.58/32 -j DROP
        -A fail2ban-webserver-w00tw00t -j RETURN
        COMMIT
        # Completed on Thu Oct 18 22:05:33 2012

    Here is my OpenVPN server configuration:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        keepalive 10 120
        comp-lzo
        user nobody
        group users
        persist-key
        persist-tun
        status /var/log/openvpn/openvpn-status.log
        verb 3
        push "redirect-gateway def1"
        push "dhcp-option DNS 213.133.98.98"
        push "dhcp-option DNS 213.133.99.99"
        push "dhcp-option DNS 213.133.100.100"
        client-to-client
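
    Since server 10.8.0.0 255.255.255.0 gives the VPN server itself 10.8.0.1, a sketch of the two pieces involved (the bind options file path is an assumption): push that address as the clients' DNS server, and have bind listen on it and accept queries from the tunnel subnet. The existing -A INPUT -i tun+ -j ACCEPT rule should already admit DNS traffic arriving on the tunnel interface.

        # in the OpenVPN server config: advertise the VPN-side address
        push "dhcp-option DNS 10.8.0.1"

        // in /etc/bind/named.conf.options: answer VPN clients
        options {
            listen-on { 127.0.0.1; 10.8.0.1; };
            allow-query { 127.0.0.1; 10.8.0.0/24; };
        };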

  • How can I roll back 1 commit?

    - by n179911
    I have 2 commits that I did not push:

        $ git status
        # On branch master
        # Your branch is ahead of 'faves/master' by 2 commits.

    How can I roll back my first one (the oldest one), but keep the second one?

        $ git log
        commit 3368e1c5b8a47135a34169c885e8dd5ba01af5bb
        ...
        commit baf8d5e7da9e41fcd37d63ae9483ee0b10bfac8e
        ...

    From here: http://friendfeed.com/harijay/742631ff/git-question-how-do-i-rollback-commit-just-want -- do I just need to do:

        git reset --hard baf8d5e7da9e41fcd37d63ae9483ee0b10bfac8e

    Is that it?
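
    Worth noting: git reset --hard baf8d5e... would do the opposite of what's asked -- it keeps the older commit and discards the newer one. To drop the older commit while keeping the newer, one approach is an interactive rebase, safe here because neither commit has been pushed. A sketch:

        git rebase -i HEAD~2
        # in the editor that opens, delete the line for baf8d5e... ,
        # keep the line for 3368e1c... , then save and quit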

  • How to sync (or at least view) public / team / shared calendar to Blackberry using BES?

    - by 3rdparty
    Trying to allow 3 people to view and ideally sync (create/edit) common (team) calendar events via BlackBerry and hosted Exchange 2007 BES. My understanding is that BES does not support wirelessly syncing anything other than a user's primary calendar. From what I've researched, the only supported workflow is for a user to create the event in the public calendar in Outlook and then invite team members individually as optional attendees, so the event displays in their calendars (and on their BlackBerrys). I've seen some 3rd-party utilities that claim to support syncing of public folders/calendars:

        Add2Outlook: http://www.diditbetter.com/add2outlook.aspx
        WICKSoft: http://www.wicksoft.com/contacts_calendars.htm (needs to be installed on the local Exchange server)

    I've also been told I can sync public/other calendars using Desktop Manager, but I need to avoid any tethered sync with this environment. Am I missing an easier workflow here? There must be tens of thousands of BES users who need to view or share a public, shared, or team calendar on their BlackBerry. How can I solve this?

  • Convert apache rewrite rules to nginx

    - by Shiyu Sekam
    I want to migrate an Apache setup to Nginx, but I can't get the rewrite rules working in Nginx. I had a look at the official Nginx documentation, but I still have some trouble converting it: http://nginx.org/en/docs/http/converting_rewrite_rules.html I've used http://winginx.com/en/htaccess to convert my rules, but this only partly works: the / part looks okay, the /library part as well, but the /public part doesn't work at all.

    Apache part:

        ServerAdmin webmaster@localhost
        DocumentRoot /srv/www/Web

        Order allow,deny
        Allow from all

        RewriteEngine On
        RewriteRule ^$ public/ [L]
        RewriteRule (.*) public/$1 [L]

        Order Deny,Allow
        Deny from all

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^pid=([0-9]*)$
        RewriteRule ^places(.*)$ index.php?url=places/view/%1 [PT,L]

        # Extract search query in /search?q={query}&l={location}
        RewriteCond %{QUERY_STRING} ^q=(.*)&l=(.*)$
        RewriteRule ^(.*)$ index.php?url=search/index/%1/%2 [PT,L]

        # Extract search query in /search?q={query}
        RewriteCond %{QUERY_STRING} ^q=(.*)$
        RewriteRule ^(.*)$ index.php?url=search/index/%1 [PT,L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Rewrite all other URLs to index.php/URL
        RewriteRule ^(.*)$ index.php?url=$1 [PT,L]

        Order deny,allow
        deny from all

        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        AddHandler php5-fcgi .php
        Action php5-fcgi /php5-fcgi
        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization
        CustomLog ${APACHE_LOG_DIR}/access.log combined

    Nginx config:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            root /srv/www/Web;
            index index.html index.php;
            server_name localhost;

            location / {
                rewrite ^/$ /public/ break;
                rewrite ^(.*)$ /public/$1 break;
            }

            location /library {
                deny all;
            }

            location /public {
                if ($query_string ~ "^pid=([0-9]*)$"){
                    rewrite ^/places(.*)$ /index.php?url=places/view/%1 break;
                }
                if ($query_string ~ "^q=(.*)&l=(.*)$"){
                    rewrite ^(.*)$ /index.php?url=search/index/%1/%2 break;
                }
                if ($query_string ~ "^q=(.*)$"){
                    rewrite ^(.*)$ /index.php?url=search/index/%1 break;
                }
                if (!-e $request_filename){
                    rewrite ^(.*)$ /index.php?url=$1 break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    I haven't written the original ruleset, so I'm having a hard time converting it. Would you mind giving me a hint on how to do it easily, or can you help me convert it, please? I really want to switch over to php5-fpm and nginx :) Thanks
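
    One concrete problem in the /public block: %1 and %2 are Apache back-references with no nginx equivalent, and a bare $1 would be recaptured by the rewrite's own pattern. A sketch of just the pid rule, saving the if-capture into a variable first (untested; last re-runs location matching so the result reaches the PHP block, and the trailing ? stops nginx from re-appending the original query string):

        location /public {
            if ($query_string ~ "^pid=([0-9]*)$") {
                set $pid $1;  # save before the rewrite pattern re-captures $1
                rewrite ^/places(.*)$ /index.php?url=places/view/$pid? last;
            }
        }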

  • Some search keywords in the omnibox have stopped working

    - by pinouchon
    My problem is the same as described here: http://productforums.google.com/forum/#!topic/chrome/HE7ak4y92YU When I type a search in the Chrome omnibox, after typing enfr [space], Chrome replaces it with a search in the history, and I end up googling my search prefixed by the keyword. For example, I end up googling "enfr former" instead of translating "former" from English into French. (In my case the search engine is named enfr, with keyword enfr and search URL http://translate.google.fr/#en/fr/%s in the other search engines options.) Chrome version: 21.0.1180.89. How do I restore the normal behavior (searching with the specified search engine instead of the default search)? Update: I have noticed that clearing browsing data for the period when I did the search fixes the problem, but I am not sure it won't reappear.

  • Motherboard not recognizing memory anymore

    - by root
    I bought some new RAM and installed it on my motherboard, but the BIOS would not POST. There's an LED on my motherboard that shows error codes, and it showed the error: "No usable memory detected." So, I removed the new memory and reinstalled the old memory, restoring the computer to its original configuration. But the BIOS still will not POST, still giving the error: "No usable memory detected." I've ensured that the memory and power headers are seated properly. I've tried all possible combinations of memory slots, and I've also reset the CMOS, but the error remains the same. The computer was working fine before I tried upgrading the memory, and I originally assembled the computer myself. What are some possible causes of this problem?

  • Nginx return 444 depending on upstream response code

    - by Mark
    I have nginx set up to pass requests to an upstream using proxy_pass. The upstream is written to return a 502 HTTP response on certain requests; rather than returning the 502 with all the headers, I would like nginx to recognise this and return 444 so nothing is returned. Is this possible? I also tried to return 444 on any 50x error, but that doesn't work either.

        location / {
            return 444;
        }

        location ^~ /service/v1/ {
            proxy_pass http://127.0.0.1:3333;
            proxy_next_upstream error timeout http_502;
            error_page 500 502 503 504 /50x.html;
        }

        location = /50x.html {
            return 444;
        }

        error_page 404 /404.html;

        location = /404.html {
            return 444;
        }
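
    One piece that appears to be missing: nginx passes upstream error responses straight through by default, and error_page only fires for them when proxy_intercept_errors is on. A sketch using a named location so no internal URI is exposed:

        location ^~ /service/v1/ {
            proxy_pass http://127.0.0.1:3333;
            proxy_intercept_errors on;         # let error_page handle upstream 50x
            error_page 500 502 503 504 @drop;
        }

        location @drop {
            return 444;                        # close the connection, send nothing
        }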

  • How to configure Apache to act as an SSL proxy to an application server?

    - by ripper234
    I have one physical server that runs: an Apache (httpd) server, and another web server (let's say Tomcat for the sake of argument) on port 1234. Can I configure the Apache server to act as a proxy for SSL traffic, while keeping the application server blissfully unaware of SSL? What I imagine is: traffic to http://myserver.com/app is redirected to https://myserver.com/app; traffic to https://myserver.com/app is proxied to the application server; my SSL certificate is only installed on the Apache server, not on the application server; and other traffic to the Apache server (http://myserver.com/anotherapp) is served directly from the Apache server. What's the best setup to achieve this? (On Ubuntu, if that matters.)
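
    A minimal sketch of the pattern described, assuming mod_ssl, mod_proxy, and mod_proxy_http are enabled and using placeholder certificate paths:

        <VirtualHost *:443>
            ServerName myserver.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/myserver.crt
            SSLCertificateKeyFile /etc/ssl/private/myserver.key
            # SSL terminates here; the app server sees plain HTTP
            ProxyPass        /app http://localhost:1234/app
            ProxyPassReverse /app http://localhost:1234/app
        </VirtualHost>

        <VirtualHost *:80>
            ServerName myserver.com
            # push /app onto HTTPS; everything else is served as usual
            Redirect permanent /app https://myserver.com/app
        </VirtualHost>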
