Search Results

Search found 12480 results on 500 pages for 'psd to xhtml css'.

Page 398/500

  • SimpleTest assertTags - loose matching? (for CakePHP)

    - by Arkaaito
    I'd like to use SimpleTest to set up some functionality tests for our project. In particular, we have a very busy page which has some random components and some static components, and I'd like to be able to write a simple test which only confirms the static bits (preferably only the one or two most important ones). In other words, I want to be able to leave out any tags on the page I don't care about, and write something like:

        $result = '<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"><head><title>...</title><meta .../></head><body><script type="text/javascript">...</script><div class="center-splash"><span>Welcome JohnDoe</span><p>Your progress:</p>...</div><div class="left-column">...</div><div class="right-column">...</div>...</body></html>';
        $expects = array(
            'html' => true,
            'body' => true,
            'div'  => array('class' => 'center_splash'),
            'span' => true,
            'Welcome JohnDoe',
            '/span', '/div', '/body', '/html'
        );
        $this->assertTagsButIgnoreExtras($result, $expects);

    When I try this with assertTags it fails. Is there a version of assertTags which allows this - something either officially part of the SimpleTest or CakePHP project, or unofficially released under the MIT license or similar?
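
    In the meantime, one workaround is to spot-check only the fragments you care about. The sketch below assumes a plain SimpleTest/CakePHP test case and uses nothing beyond strpos() and assertTrue(), so it doesn't depend on assertTags at all (assertTagsButIgnoreExtras above is wishful thinking, not a real API):

        // Minimal sketch: confirm only the one or two static fragments that
        // matter, ignoring the random parts of the page entirely.
        $staticBits = array(
            '<div class="center-splash">',
            '<span>Welcome JohnDoe</span>',
        );
        foreach ($staticBits as $bit) {
            $this->assertTrue(strpos($result, $bit) !== false,
                "Missing static fragment: $bit");
        }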

  • f:ajax not working on tomcat7/eclipse

    - by mntgoat
    I have this very simple code which works fine until I add an f:ajax tag. Code that works:

        <h:commandButton disabled="#{!feature.available}" class="featureButton"
            value="#{feature.selected ? 'selected': feature.available? 'available':'unavailable'} "
            style="vertical-align: top;"
            action="#{Bean.toggleFeature(feature)}">
        </h:commandButton>

    Code that doesn't work:

        <h:commandButton disabled="#{!feature.available}" class="featureButton"
            value="#{feature.selected ? 'selected': feature.available? 'available':'unavailable'} "
            style="vertical-align: top;"
            action="#{Bean.toggleFeature(feature)}">
            <f:ajax event="click" />
        </h:commandButton>

    As far as I can tell the jsf.js file is loaded fine; this is automatically added by the Facelets servlet to the head of my rendered document:

        <script type="text/javascript" src="/www/javax.faces.resource/jsf.js.xhtml?ln=javax.faces"></script>

    I was even able to do a jsf.ajax.request directly from JavaScript and got the page to re-render something. I am using Mojarra 2.1.13, Tomcat 7, Eclipse Juno and Java 7. Any thoughts on what I might be doing wrong or how I might troubleshoot this issue? Debugging it in JavaScript didn't help at all. Thanks.
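
    Two f:ajax defaults are worth knowing here: execute defaults to @this and render defaults to @none, so the AJAX request can fire without anything visible changing, which looks exactly like "not working". A variant that submits and re-renders the enclosing form, as a sketch (the attribute values are an assumption about the intent, not a confirmed fix):

        <h:commandButton disabled="#{!feature.available}" class="featureButton"
            value="#{feature.selected ? 'selected': feature.available? 'available':'unavailable'} "
            style="vertical-align: top;"
            action="#{Bean.toggleFeature(feature)}">
            <!-- execute/render the enclosing form so the update is visible -->
            <f:ajax execute="@form" render="@form" />
        </h:commandButton>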

  • How do I access a value of a nested Perl hash?

    - by st
    I am new to Perl and I have a problem that's very simple, but I cannot find the answer when consulting my Perl book. When printing the result of Dumper($request); I get the following result:

        $VAR1 = bless( {
            '_protocol' => 'HTTP/1.1',
            '_content' => '',
            '_uri' => bless( do{\(my $o = 'http://myawesomeserver.org:8081/counter/')}, 'URI::http' ),
            '_headers' => bless( {
                'user-agent' => 'Mozilla/5.0 (X11; U; Linux i686; en; rv:1.9.0.4) Gecko/20080528 Epiphany/2.22 Firefox/3.0',
                'connection' => 'keep-alive',
                'cache-control' => 'max-age=0',
                'keep-alive' => '300',
                'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'accept-language' => 'en-us,en;q=0.5',
                'accept-encoding' => 'gzip,deflate',
                'host' => 'localhost:8081',
                'accept-charset' => 'ISO-8859-1,utf-8;q=0.7,*;q=0.7'
            }, 'HTTP::Headers' ),
            '_method' => 'GET',
            '_handle' => bless( \*Symbol::GEN0, 'FileHandle' )
        }, 'HTTP::Server::Simple::Dispatched::Request' );

    How can I access the values of '_method' ('GET') or of 'host' ('localhost:8081')? I know it's an easy question, but Perl is somewhat cryptic at the beginning.
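
    A blessed reference is still a reference, so you can dereference it directly; and since _headers is an HTTP::Headers object, the accessor methods are the cleaner route. A short sketch (that the request class inherits from HTTP::Request is an assumption based on the field layout above):

        # Direct hash dereference - works because the object is a blessed hash ref:
        my $method = $request->{_method};          # 'GET'
        my $host   = $request->{_headers}{host};   # 'localhost:8081'

        # Cleaner - use the accessors from HTTP::Request / HTTP::Headers:
        my $method2 = $request->method;            # 'GET'
        my $host2   = $request->header('Host');    # 'localhost:8081'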

  • Is it possible to access the SMIL timer from javascript?

    - by Will
    I'm trying to use SMIL to animate the typing of text into a field embedded in an SVG. I tried the following code in both Chrome and a SMIL-enabled Firefox nightly, but it has no effect:

        <svg xmlns="http://www.w3.org/2000/svg" xmlns:html="http://www.w3.org/1999/xhtml">
          <foreignObject>
            <html:input type="text" value="">
              <set attributeName="value" to="Hello World" begin="0" dur="10s" fill="freeze" />
            </html:input>
          </foreignObject>
        </svg>

    The text field appears, but remains empty. So, I thought I would register for the beginEvent and do the substitution manually. To test the events, I added:

        <rect id="rect" x="0" y="0" width="10" height="10">
          <animate id="dx" attributeName="x" attributeType="XML" begin="0s" dur="1s" fill="freeze" from="0" to="-10" />
        </rect>

    As well as the JavaScript that made sense from the event model:

        window.addEventListener( 'load', function() {
          function listen( id ) {
            var elem = document.getElementById( id )
            elem.addEventListener( 'beginEvent', function() { console.log( 'begin ' + id ) }, false )
            elem.addEventListener( 'endEvent', function() { console.log( 'end ' + id ) }, false )
          }
          listen( 'rect' )
          listen( 'dx' )
        })

    But no events fire on either the rect or the animate in either browser. The next logical step seems to be to simulate the animation (à la FakeSmile), but I want to use the browser's animation timer if at all possible.
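
    On reading the browser's own clock: the SVG DOM does expose the SMIL document timeline through SVGSVGElement, via getCurrentTime()/setCurrentTime() and pauseAnimations()/unpauseAnimations(). A sketch of polling it to drive the typing manually (the 'field' id is a hypothetical one added to the html:input; the 10-second duration is carried over from the markup above):

        // Sketch: poll the SMIL document timeline instead of simulating a clock.
        var svg = document.getElementsByTagName('svg')[0];
        var timer = setInterval(function() {
            var t = svg.getCurrentTime();                  // seconds since timeline start
            var field = document.getElementById('field');  // the html:input (assumed id)
            field.value = 'Hello World'.substring(0, Math.ceil(t));
            if (t >= 10) clearInterval(timer);
        }, 100);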

  • jsf custom control strange behaviour

    - by Cristian Boariu
    Hi, I have a JSF custom control which contains this:

        <rich:column>
          <c:if test="#{not empty columnTitle}">
            <f:facet name="header">
              <rich:spacer/>
            </f:facet>
          </c:if>
          <s:link view="#{view}" value="#{messages['edit']}" propagation="#{propagation}">
            <f:param name="${paramName}" value="${paramValue}"/>
          </s:link>
          &#160;
          <h:commandLink action="#{entityHome.removeMethodName(entity)}" value="#{messages['remove']}"/>
        </rich:column>

    You see that commandLink action. I want it to call an action like this:

        action="#{documentHome.removeProperty(property)}"

    In order to do this I call the control like:

        <up:columnDetails view="/admin/property.xhtml" columnTitle="yes" entity="#{property}"
            paramValue="#{property.propertyId}" propagation="nest"
            entityHome="documentHome" removeMethodName="removeProperty"/>

    So I hard-code entityHome and removeMethodName, but an error fires:

        Caused by javax.servlet.ServletException with message:
        "#{entityHome.removeMethodName(entity)}: javax.el.MethodNotFoundException

    It seems that EL cannot interpret removeMethodName: if I print entityHome or removeMethodName, they correctly show the values I pass, but JSF apparently cannot accept that the name after the dot is itself a parameter. Can anyone guide me?
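
    One avenue to try, sketched under two assumptions - an EL implementation that supports parameterized method calls with bracket notation (EL 2.2+, or JBoss EL), and passing the bean itself rather than its name: bracket notation is resolved dynamically, so the method name can come from an attribute. Note that entityHome="documentHome" hands the control the literal String "documentHome", and calling a method on a String is one classic way to get a MethodNotFoundException.

        <!-- In the calling page: pass the bean, not its name -->
        <up:columnDetails ... entityHome="#{documentHome}" removeMethodName="removeProperty"/>

        <!-- In the control: look the method up by the string in removeMethodName -->
        <h:commandLink action="#{entityHome[removeMethodName](entity)}"
                       value="#{messages['remove']}"/>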

  • PHP & HTML Purifier Error: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given

    - by TaG
    I'm trying to integrate HTML Purifier (http://htmlpurifier.org/) to filter my user-submitted data, but I get the following error on line 22:

        mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given

    Line 22 is:

        if (mysqli_num_rows($dbc) == 0) {

    Here is the PHP code:

        if (isset($_POST['submitted'])) { // Handle the form.
            require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';
            $config = HTMLPurifier_Config::createDefault();
            $config->set('Core.Encoding', 'UTF-8'); // replace with your encoding
            $config->set('HTML.Doctype', 'XHTML 1.0 Strict'); // replace with your doctype
            $purifier = new HTMLPurifier($config);
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT users.*, profile.* FROM users INNER JOIN contact_info ON contact_info.user_id = users.user_id WHERE users.user_id=3");
            $about_me = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['about_me']));
            $interests = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['interests']));
            if (mysqli_num_rows($dbc) == 0) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $dbc = mysqli_query($mysqli, "INSERT INTO profile (user_id, about_me, interests) VALUES ('$user_id', '$about_me', '$interests')");
            }
            if ($dbc == TRUE) {
                $dbc = mysqli_query($mysqli, "UPDATE profile SET about_me = '$about_me', interests = '$interests' WHERE user_id = '$user_id'");
                echo '<p class="changes-saved">Your changes have been saved!</p>';
            }
            if (!$dbc) {
                // There was an error...do something about it here...
                print mysqli_error($mysqli);
                return;
            }
        }

    How can I fix this problem?
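
    For what it's worth, mysqli_query() returns FALSE when a query fails, and that boolean is what reaches mysqli_num_rows(). A likely suspect here is the SELECT itself: it lists profile.* but joins contact_info, so MySQL rejects the unknown table reference. A sketch of the guard (the corrected join is an assumption about the intended schema):

        $dbc = mysqli_query($mysqli,
            "SELECT users.*, profile.*
               FROM users
              INNER JOIN profile ON profile.user_id = users.user_id
              WHERE users.user_id = 3");

        if ($dbc === false) {
            // Surface the real SQL error instead of handing FALSE onward:
            die('Query failed: ' . mysqli_error($mysqli));
        }
        // From here on, $dbc is guaranteed to be a mysqli_result:
        if (mysqli_num_rows($dbc) == 0) {
            // ...
        }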

  • How do I select only the 4th and higher LIs in each UL?

    - by KatieK
    For this XHTML:

        <ul class="collapse">
          <li>One</li>
          <li>Two</li>
          <li>Three</li>
          <li>Four</li>
          <li>Five</li>
        </ul>
        <ul class="collapse">
          <li>1</li>
          <li>2</li>
          <li>3</li>
          <li>4</li>
        </ul>

    Using jQuery, how do I select only the 4th and higher LIs in each UL? I've tried:

        $("ul.collapse li:gt(2)").css("color", "red");

    But it selects the 4th and higher LIs in the whole document: "Four", "Five", and "1", "2", "3", "4" are all red.
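
    The snag is that :gt(2) filters the combined set of every matched LI, not each list's LIs separately. Two per-list alternatives, as a sketch:

        // Option 1: :nth-child() is evaluated against each parent, so this
        // reddens the 4th-and-later <li> of every <ul class="collapse">:
        $("ul.collapse li:nth-child(n+4)").css("color", "red");

        // Option 2: iterate the lists and apply :gt() inside each one:
        $("ul.collapse").each(function() {
            $(this).find("li:gt(2)").css("color", "red");
        });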

  • Where is a small, simple CMS that has no Front End done in PHP?

    - by user559469
    The keys are: small and simple, PHP, MySQL, no front end. By "no front end" I mean literally that I can control the look 100%. I just want a CMS on the back end to manage content (user login/security, upload images, update articles, etc.) that will not dictate in any way how the managed data is presented. Maybe it just keeps the info in a (MySQL) database which I can query and extract myself, or if it writes content, it is in super-clean XHTML fragments or even just XML I can parse myself. I have looked at WordPress and don't like the code it generates, not to mention the sites look too "canned" (you can usually spot a WP site a mile away). Joomla and Drupal look more customizable, but they are bloated now in my opinion, and really I just want something lightweight and simple, for one-user mom-and-pop sites. (No tiered publishing/approval systems, and all that.) I envision plugging this CMS into existing websites/web apps where most of the site is made and managed by me, but a few choice areas are managed by the site owner.

  • The page at [ Page URL ] could not be reached. (Urgent)

    - by Danial Sabagh
    I want to add the Like button on my website, but it does not work: whenever I click the Like button it says "The page at [ Page URL ] could not be reached." You can also check the URL to see the error: My Facebook page. Here is the code I used:

        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://ogp.me/ns/fb#">
        <head>
          <meta property="og:title" content="ALEXA BEAUTY" />
          <meta property="og:type" content="company" />
          <meta property="og:url" content="http://alexasalon.co.uk/" />
          <meta property="og:image" content="http://alexasalon.co.uk/images/logo.png" />
          <meta property="og:site_name" content="ALEXA BEAUTY" />
          <meta property="fb:admins" content="100002556535323" />
        </head>
        <body>
          <div id="fb-root"></div>
          <script>(function(d, s, id) {
            var js, fjs = d.getElementsByTagName(s)[0];
            if (d.getElementById(id)) {return;}
            js = d.createElement(s); js.id = id;
            js.src = "//connect.facebook.net/en_US/all.js#xfbml=1&appId=220687968005095";
            fjs.parentNode.insertBefore(js, fjs);
          }(document, 'script', 'facebook-jssdk'));</script>
          <div>
            <fb:like href="http://www.facebook.com/pages/Alexa-Beauty/205401152839187" send="true" width="450" show_faces="false" font="lucida grande"></fb:like>
          </div>

    Is the code wrong? Is the page URL correct? I checked the website in the Object Debugger and it seems there is no error (check link, please). I really do not know what is wrong. Does anyone know?

  • Mixing SSL and non-SSL content in an Apache2 virtual host

    - by gravyface
    I have a (hopefully) common scenario for one of my sites that I just can't seem to figure out how to deploy correctly. I have the following directories for example.com.

    These need to require SSL:

        /var/www/example.com/admin
        /var/www/example.com/order

    These need to be non-SSL:

        /var/www/example.com/maps

    These need to support both:

        /var/www/example.com/css
        /var/www/example.com/js
        /var/www/example.com/img

    I have two virtual host declarations for the one site in my /sites-available/example.com file; the top one is *:443, the second one is *:80. A request on port 443 uses the top virtual host, and a port 80 request uses the bottom one. However, I can't seem to enforce my SSL requirements using SSLRequireSSL, because I'm assuming a port 80 request to /admin or /order never even hits the *:443 vhost. Should I just Deny All to /order and /admin within the *:80 virtual host, so that requesting them on port 80 returns a 403 Forbidden?
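
    A friendlier alternative to a 403, as a sketch (assuming mod_alias is available and the paths above): redirect the protected paths from the *:80 vhost to their HTTPS counterparts, and leave the shared asset directories alone in both vhosts:

        # In the *:80 virtual host: bounce protected areas to HTTPS instead of denying.
        Redirect permanent /admin https://example.com/admin
        Redirect permanent /order https://example.com/order

        # In the *:443 virtual host: belt-and-braces enforcement.
        <Directory /var/www/example.com/admin>
            SSLRequireSSL
        </Directory>
        <Directory /var/www/example.com/order>
            SSLRequireSSL
        </Directory>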

  • Lighttpd 403 Errors on HTML and PHP pages

    - by Brian
    I installed lighttpd on CentOS 5.5 64-bit. Everything seems fine and running, except I cannot get past 403 errors on both HTML and PHP pages. I have used chmod and chown, changed ownership in the config file, done everything possible, and have been stuck for 2 days. Appreciate any help, and here's hoping it's a stupid error on my part. Here is the log file with debug options on:

        2011-02-21 11:23:13: (request.c.304) fd: 7 request-len: 408
        GET /index.html HTTP/1.1
        Host: 10.0.1.8
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Cache-Control: max-age=0
        2011-02-21 11:23:13: (response.c.241) run condition
        2011-02-21 11:23:13: (response.c.300) -- splitting Request-URI
        2011-02-21 11:23:13: (response.c.301) Request-URI : /index.html
        2011-02-21 11:23:13: (response.c.302) URI-scheme : http
        2011-02-21 11:23:13: (response.c.303) URI-authority: 10.0.1.8
        2011-02-21 11:23:13: (response.c.304) URI-path : /index.html
        2011-02-21 11:23:13: (response.c.305) URI-query :
        2011-02-21 11:23:13: (response.c.349) -- sanatising URI
        2011-02-21 11:23:13: (response.c.350) URI-path : /index.html
        2011-02-21 11:23:13: (response.c.470) -- before doc_root
        2011-02-21 11:23:13: (response.c.471) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.472) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.473) Path :
        2011-02-21 11:23:13: (response.c.521) -- after doc_root
        2011-02-21 11:23:13: (response.c.522) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.523) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.524) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.541) -- logical -> physical
        2011-02-21 11:23:13: (response.c.542) Doc-Root : /srv/www/lighttpd
        2011-02-21 11:23:13: (response.c.543) Rel-Path : /index.html
        2011-02-21 11:23:13: (response.c.544) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.561) -- handling physical path
        2011-02-21 11:23:13: (response.c.562) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.608) -- access denied
        2011-02-21 11:23:13: (response.c.609) Path : /srv/www/lighttpd/index.html
        2011-02-21 11:23:13: (response.c.128) Response-Header:
        HTTP/1.1 403 Forbidden
        Content-Type: text/html
        Content-Length: 345
        Date: Mon, 21 Feb 2011 16:23:13 GMT
        Server: lighttpd/1.4.28

    Here is the directory listing; I used chown to set ownership to lighttpd:lighttpd:

        [root@localhost lighttpd]# ls -al
        total 40
        drwxrwxrwx 2 lighttpd lighttpd 4096 Feb 21 10:48 .
        drwxrwxrwx 3 lighttpd lighttpd 4096 Feb 21 10:57 ..
        -rwxrwxrwx 1 lighttpd lighttpd   10 Feb 20 08:32 index.html
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:48 index.php
        -rwxrwxrwx 1 lighttpd lighttpd   20 Feb 21 10:39 info.php

    Requested commands:

        [root@localhost lighttpd]# ls -ld / /srv /srv/www
        drwxr-xr-x 22 root     root     4096 Feb 21 04:39 /
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 20 07:38 /srv
        drwxrwxrwx  3 lighttpd lighttpd 4096 Feb 21 10:57 /srv/www
        [root@localhost lighttpd]# ps auxZ | grep lighttpd
        root:system_r:httpd_t          lighttpd 3842 0.0 0.2 48368 896 ? S 12:24 0:00 /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf
        root:system_r:unconfined_t:SystemLow-SystemHigh root 3845 0.0 0.2 61152 764 pts/0 R+ 12:24 0:00 grep lighttpd
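
    One detail that stands out in the ps output: lighttpd runs in the SELinux httpd_t domain, while the document root lives under /srv, which the stock CentOS policy does not label as web content. That combination yields exactly this kind of 403 even with wide-open permissions. A sketch of the check and the relabel (assuming SELinux really is the culprit):

        # Is SELinux enforcing, and how is the docroot labeled?
        getenforce
        ls -Zd /srv/www/lighttpd

        # Relabel the tree as web content if httpd_sys_content_t is missing:
        chcon -R -t httpd_sys_content_t /srv/www/lighttpd

        # Quick confirmation test (re-enable afterwards with setenforce 1):
        setenforce 0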

  • Domain name rewriting and URL rewriting in the meantime in .htaccess

    - by Steven
    Ugly URLs:

        www.domainname.com/en/piece/piece.php?piece_id=1
        www.domainname.com/en/piece/piece.php?piece_id=2
        www.domainname.com/en/piece/piece.php?piece_id=3
        ...

    Friendly URLs:

        piece.domainname.com/en/1
        piece.domainname.com/en/2
        piece.domainname.com/en/3
        ...

    I want to present website users only friendly URLs. When I apply

        RewriteEngine On
        RewriteRule ^en/([^/]*)$ /en/piece/piece.php?piece_id=$1 [L]

    only the URL path gets rewritten; besides, the CSS file cannot be found on pages served at the friendly URL. How do I rewrite the domain name and the URL at the same time? Do I have to use RedirectMatch, and if so, how?
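
    A sketch of one mod_rewrite-only approach, assuming piece.domainname.com points at the same document root:

        RewriteEngine On

        # Serve friendly URLs on the subdomain:
        RewriteCond %{HTTP_HOST} ^piece\.domainname\.com$ [NC]
        RewriteRule ^en/([0-9]+)$ /en/piece/piece.php?piece_id=$1 [L]

        # Send anyone using the ugly form to the friendly one:
        RewriteCond %{HTTP_HOST} ^www\.domainname\.com$ [NC]
        RewriteCond %{QUERY_STRING} ^piece_id=([0-9]+)$
        RewriteRule ^en/piece/piece\.php$ http://piece.domainname.com/en/%1? [R=301,L]

    As for the missing CSS: /en/1 makes the browser resolve relative stylesheet links under /en/, so referencing the stylesheet with an absolute path (or a <base href>) usually cures that.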

  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log:

        upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "arch"

    Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project):

        fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True

    nginx config:

        server {
            listen 80;
            server_name arch;
            access_log /var/www/test/log/access.log;
            error_log /var/www/test/log/error.log;
            location / {
                root /var/www/test/public;
                index index.html index.htm default.aspx Default.aspx;
                fastcgi_index Default.aspx;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param PATH_INFO "";
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    Using xsp4 works fine; I can browse the site. I've enabled FastCGI logging, and this is the output:

        [2012-04-15 23:51:18Z] Debug Accepting an incoming connection.
        [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection.
        [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
        [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386)
        [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
        [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = )
        [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
        [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B)
        [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s
        [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)
        [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.
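
    One pattern visible in that log, offered as a sketch rather than a confirmed fix: the location block passes only PATH_INFO and SCRIPT_FILENAME, so standard CGI variables such as REQUEST_METHOD, SERVER_NAME and SERVER_PORT never reach the Mono server, and a missing string parameter is a plausible source of "Argument cannot be null. Parameter name: s". Including nginx's stock parameter file supplies the full set:

        location / {
            root /var/www/test/public;
            index index.html index.htm default.aspx Default.aspx;

            # Bring in the standard CGI variables (REQUEST_METHOD, SERVER_NAME,
            # SERVER_PORT, ...) before the two overrides below:
            include /etc/nginx/fastcgi_params;
            fastcgi_param PATH_INFO "";
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_index Default.aspx;
            fastcgi_pass 127.0.0.1:9000;
        }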

  • Accessing apache on virtual pc

    - by Rick Hensly
    I am using a virtual PC to test my website. I can access all my webpages on the virtual PC, but images don't load correctly. I can browse to my images folder and view all of them; however, if I click an image it will not load, and the Apache log shows the virtual PC's IP trying to view the image:

        192.168.0.55 - - [25/Jun/2009:20:10:41 -0400] "GET /images/pic.png HTTP/1.1" 302 220

    If I refresh the page, it loads:

        192.168.0.55 - - [25/Jun/2009:20:10:51 -0400] "GET /images/bg.png HTTP/1.1" 200 214

    Also, images won't load in HTML or CSS. It seems like a redirecting problem or something, but I have no clue how to fix it. Thanks for any advice.

  • Error headers: ap_headers_output_filter() after putting cache header in htaccess file

    - by Brad
    Receiving this error:

        [debug] mod_headers.c(663): headers: ap_headers_output_filter()

    after I included this within the .htaccess file:

        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
            Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
            Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    Any help is appreciated as to what I could do to fix this.

  • JMeter Stress testing

    - by mcondiff
    MAMP server hosting a Joomla instance. I'd like to hear the community's thoughts on the best way to stress test the server and find its breaking point for concurrent users. Currently I have set up a test plan which goes to the home page, grabs index.php, CSS, JS and all images, and I have run tests with 1 to 100 users and a varying number of loops. What I'd like to know is: what number of concurrent or looping requests is a good gauge of whether my server can handle the proposed increase in traffic? What are good values for KB/sec, Throughput, Average, Max and Min in the Aggregate Report, and at what number of threads/loops? I have googled and not found immediate answers to these questions, so I thought to come here. More or less I have just used http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf to guide me, and then I have been winging it in terms of thread and loop numbers. Any light shed on this subject would be much appreciated.

  • Squid 2.7.STABLE3-4.1 as a transparent proxy on Ubuntu Server 9.04

    - by E3 Group
    Can't get this to work at all! I'm trying to get this Linux box to act as a transparent proxy and, with the help of DHCP, force everyone on the network through the proxy. I have two ethernet connections, both to the same switch, and I'm trying to get 192.168.1.234 to become the default gateway. The actual WAN connection is to a gateway at 192.168.1.1.

        eth0 is 192.168.1.234
        eth1 is 192.168.1.2

    Effectively I'm trying to make eth0 a LAN-only interface and eth1 a WAN interface. I've only set eth0 to have a gateway address in /etc/network/interfaces; I'm not sure whether I should set the gateway for eth1 to point to 192.168.1.234. My squid.conf file has the following directives added at the bottom:

        http_port 3128 transparent
        acl lan src 192.168.1.0/24
        acl lh src 127.0.0.1/255.255.255.0
        http_access allow lan
        http_access allow lh

    I've added the following routing commands:

        iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.2:3128
        iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    I set a computer with TCP settings 192.168.1.234 as the gateway and opened up google.com, but it comes up with a request error. Any ideas why this isn't working? :( Been searching continuously for a solution to no avail.

    ----------------------------- EDIT -------------------------------

    Managed to get it to route properly to the squid; here's the error I get in the browser:

        ERROR
        The requested URL could not be retrieved

        While trying to process the request:

        GET / HTTP/1.1
        Host: www.google.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cache-Control: max-age=0

        The following error was encountered:

        * Invalid Request

        Some aspect of the HTTP Request is invalid. Possible problems:

        * Missing or unknown request method
        * Missing URL
        * Missing HTTP Identifier (HTTP/1.0)
        * Request is too large
        * Content-Length missing for POST or PUT requests
        * Illegal character in hostname; underscores are not allowed

        Your cache administrator is webmaster.
        Generated Mon, 26 Oct 2009 03:41:15 GMT by mjolnir.lloydharrington.local (squid/2.7.STABLE3)

  • .htaccess - Remove all cookies

    - by BlaM
    I want to make an existing domain a "CDN" domain that serves all images, CSS and JS files (i.e. static files). However, that domain was parked earlier, and some application on that domain has set cookies. As far as I can observe, with cookies present the "Expires" header doesn't seem to have much effect in some browsers (including Firefox): the browsers still request the file, even if they shouldn't do so for the next month. It would be possible to do some mod_rewrite tricks to detect whether there are any cookies and then call a PHP file that removes the cookies and serves the static file, so that on the next call there aren't any cookies left, but maybe you can give me a simpler method: is there an "Apache .htaccess only" way of removing all existing cookies?
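
    A sketch of the ".htaccess only" route, assuming mod_headers is enabled and you know (or can enumerate) the cookie names the old application set: the server cannot delete a cookie outright, but it can overwrite it with an already-expired one, and it can at least stop incoming cookies from reaching anything on this host while they die off:

        # Overwrite a known stale cookie with an expired one so browsers drop it.
        # "oldapp" is a placeholder - replace it with the real cookie name(s).
        Header always set Set-Cookie "oldapp=deleted; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/"

        # Meanwhile, strip incoming cookies before they reach any handler:
        RequestHeader unset Cookie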

  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so:

        <Files index.htm>
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Some Auth"
            AuthUserFile "C:/path/to/my/.htpasswd"
            Require valid-user
        </Files>

    When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header:

        GET http://myhost/ HTTP/1.1
        Host: myhost
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 401 Authorization Required
        Date: Tue, 21 Jun 2011 21:36:48 GMT
        Server: Apache/2.2.16 (Win32)
        Content-Length: 401
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>401 Authorization Required</title>
        </head><body>
        <h1>Authorization Required</h1>
        <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
        </body></html>

    Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives works fine if I set them for a whole directory; it is only when I scope them to a directory index that they do not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3, which leads me to believe this is actually a configuration problem on my end.

  • Apache returns the perl script source instead of executing the script when the request comes from chrome

    - by Kartoch
    I've just finished installing awstats on my web server, and it runs fine using Firefox. But when I try to open the awstats page with Chrome, the Perl source script is downloaded instead of being executed. It seems the MIME type requested by Chrome triggers a different behaviour compared to Firefox. Any idea? Interesting part of the Apache configuration file:

        <Directory "/var/www/cryptis-https-root/admin-awstats">
            Options Indexes FollowSymLinks MultiViews ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from X.Y
        </Directory>
        Alias /awstatsclasses "/var/www/awstats/wwwroot/classes/"
        Alias /awstatscss "/var/www/awstats/wwwroot/css/"
        Alias /awstatsicons "/var/www/awstats/wwwroot/icon/"
        ScriptAlias /admin-awstats/ "/var/www/awstats/wwwroot/cgi-bin/"
        <Directory "/var/www/awstats/wwwroot">
            Options None ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from X.Y
        </Directory>

    I've tried to add the following line to the Apache configuration file, but it has no effect:

        AddHandler cgi-script .pl

  • Nginx Password Protect Directory Downloads Source Code

    - by Pamela
    I'm trying to password protect a WordPress login page on my Nginx server. When I navigate to http://www.example.com/wp-login.php, this brings up the "Authentication Required" prompt (not the WordPress login page) for a username and password. However, when I input the correct credentials, it downloads the PHP source code (wp-login.php) instead of showing the WordPress login page. Permission for my htpasswd file is set to 644. Here are the directives in question within the server block of my website's configuration file:

        location ^~ /wp-login.php {
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    Alternately, here are the entire contents of my configuration file (including the above four lines):

        server {
            listen *:80;
            server_name domain.com www.domain.com;
            root /var/www/domain.com/web;
            index index.html index.htm index.php index.cgi index.pl index.xhtml;
            error_log /var/log/ispconfig/httpd/domain.com/error.log;
            access_log /var/log/ispconfig/httpd/domain.com/access.log combine$
            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }
            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }
            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }
            location /stats/ {
                index index.html index.php;
                auth_basic "Members Only";
                auth_basic_user_file /var/www/web/stats/.htp$
            }
            location ^~ /awstats-icon {
                alias /usr/share/awstats/icon;
            }
            location ~ \.php$ {
                try_files /b371b8bbf0b595046a2ef9ac5309a1c0.htm @php;
            }
            location @php {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_intercept_errors on;
            }
            location / {
                try_files $uri $uri/ /index.php?$args;
                client_max_body_size 64M;
            }
            location ^~ /wp-login.php {
                auth_basic "Restricted Area";
                auth_basic_user_file htpasswd;
            }
        }

    If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
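
    One likely explanation, sketched below: `location ^~ /wp-login.php` is a prefix match that, once selected, stops nginx from considering the regex locations at all, so the `~ \.php$` block never runs and nginx serves the PHP file as a static download. Repeating the FastCGI hand-off inside the protected location keeps both the prompt and PHP execution (the socket path is copied from the config above):

        location ^~ /wp-login.php {
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;

            # ^~ wins over the ~ \.php$ regex location, so PHP handling must be
            # repeated here - otherwise the file is served (downloaded) as-is.
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
        }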

  • Nginx location issue

    - by dave
    I'm trying to set a longer (30-day) 'expires' header for the images in the /misc-stuff/ directory. This is what I'm using for my site:

        # Serve static files directly from nginx
        location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|exe|html|htm|txt|css|js) {
            add_header Cache-Control public;
            add_header Cache-Control must-revalidate;
            expires 7d;
        }

    I want to keep that block to handle regular site images, but create a new block to handle the /misc-stuff/ directory. I have tried:

        location ^~ /misc-stuff/ { ... }

    The problem I'm having now is that my backup .php files in that directory show up as plain text if someone tries to access them. How do I set it up so ONLY .gif images in the /misc-stuff/ directory are affected?
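
    For what it's worth, `location ^~ /misc-stuff/` is a prefix match that suppresses all regex locations for those URIs, which is why the .php files in that directory stopped reaching the PHP handler and came back as plain text. A regex location scoped to GIFs sidesteps that; as a sketch, place it above the general static-file block, since the first matching regex wins:

        # Only GIFs under /misc-stuff/ get the 30-day policy; .php files there
        # keep falling through to whichever location normally handles them.
        location ~* ^/misc-stuff/.*\.gif$ {
            add_header Cache-Control public;
            expires 30d;
        }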

  • Why does using nginx as a reverse proxy break local links?

    - by tsvallender
    I've just set up nginx as a reverse proxy, so some sites served from the box are served directly by it and others are forwarded to a Node.js server. The site being served by Node.js, however, is displayed with no CSS or images, so I assume the links are somehow being broken, but I don't know why. The following is the only file in /etc/nginx/sites-enabled:

        server {
            listen 80;                             ## listen for ipv4
            listen [::]:80 default ipv6only=on;    ## listen for ipv6

            server_name dev.my.site;
            access_log /var/log/nginx/localhost.access.log;

            location / {
                root /var/www;
                index index.html index.htm;
            }

            location /myNodeSite {
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
            }
        }

    I had thought perhaps it was trying to find them in /var/www due to the first entry, but removing that doesn't seem to help.
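
    One plausible mechanism, sketched under the assumption that the Node app emits root-relative links such as /css/style.css: those asset requests match `location /`, so nginx looks for the files under /var/www instead of proxying them to Node. Forwarding the asset prefixes explicitly would test that theory:

        # Hypothetical asset paths - adjust to whatever the app's HTML links to.
        location ~ ^/(css|js|images)/ {
            proxy_pass http://127.0.0.1:8080;   # no URI part allowed in a regex location
            proxy_set_header Host $host;
        }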

  • Apache gives empty reply

    - by Jorge Bernal
    It happens randomly, and only on Moodle installations. Apache doesn't add a line to the logs when this happens, and I don't know where to look.

        koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001
        curl: (52) Empty reply from server
        koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001
        HTTP/1.1 200 OK

    The Apache conf is quite straightforward and works perfectly in the other vhosts:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /srv/apache/training.ebox-technologies.com/htdocs
            ServerName training.eboxhq.com
            ErrorLog /var/log/apache2/training.ebox-technologies.com-error.log
            CustomLog /var/log/apache2/training.ebox-technologies.com-access.log combined
            <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
                ExpiresActive On
                ExpiresDefault "access plus 1 week"
                Header add Cache-Control public
            </FilesMatch>
        </VirtualHost>

    Using Apache 2.2.9, PHP 5.2.6 and Moodle 1.9.5+ (Build: 20090722). Any ideas welcome :)
