Search Results

Search found 37204 results on 1489 pages for 'page validators'.


  • How seriously should I take ECC correctable error warnings?

    - by David Mackintosh
    I have a pile of Sun X2200-M2 servers. These servers have ECC memory. In some of these servers, I am getting warnings in the eLOM about "correctable ECC errors detected", e.g.:

        # ssh regress11 ipmitool sel elist
        1 | 05/20/2010 | 14:20:27 | Memory CPU0 DIMM2 | Correctable ECC | Asserted
        2 | 05/20/2010 | 14:33:47 | Memory CPU0 DIMM2 | Correctable ECC | Asserted

    ...some more frequently than others. The kernel on this particular system is throwing EDAC errors as well, though far more frequently than the eLOM is recording ECC events:

        EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
        MC0: CE page 0x42a194, offset 0x60, grain 8, syndrome 0xf654, row 4, channel 1, label "": k8_edac
        MC0: CE - no information available: k8_edac Error Overflow set
        EDAC k8 MC0: extended error code: ECC chipkill x4 error
        EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
        MC0: CE page 0x48cb94, offset 0x10, grain 8, syndrome 0xf654, row 5, channel 1, label "": k8_edac
        MC0: CE - no information available: k8_edac Error Overflow set
        EDAC k8 MC0: extended error code: ECC chipkill x4 error

    Now, if the server detects an uncorrectable ECC error, the system resets, so clearly that's bad, and removing/replacing the identified stick or pair corrects the issue. But I am thinking that if the error is correctable, then there's no immediate issue -- can I treat this as a warning and be prepared to pull the stick/pair if uncorrectable errors start occurring?
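    A hedged monitoring sketch (assumptions: an EDAC-enabled kernel exposing the older csrow-style sysfs layout, and ipmitool usable locally; exact paths vary by kernel version):

        # Count correctable ECC events recorded in the IPMI SEL
        ipmitool sel elist | grep -c 'Correctable ECC'

        # Per-DIMM correctable vs. uncorrectable counters from the EDAC sysfs tree
        grep . /sys/devices/system/edac/mc/mc*/csrow*/ce_count
        grep . /sys/devices/system/edac/mc/mc*/csrow*/ue_count

    Trending the ce_count values over time would show whether a stick is steadily degrading rather than just taking the occasional stray hit.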

    Read the article

  • Apache mod_proxy with SSL not redirecting

    - by simonszu
    I have a custom server running behind an Apache reverse proxy. Since the custom server can only handle HTTP traffic, I am trying to use Apache to wrap proper SSL around it, and for some kind of HTTP authentication. So I enabled mod_proxy and mod_ssl and modified sites-available/default-ssl. The config is as follows:

        <Location /server>
            Order deny,allow
            Allow from all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            ProxyPass http://192.168.1.102:8181/server
            ProxyPassReverse http://192.168.1.102:8181/server
        </Location>

    The custom server is accessible from the internal network via the location specified in the ProxyPass directive. However, when the proxy is accessed from the outside, it presents the login prompt, and after successfully authenticating, I get a blank page with the words "The resource can be found at http://192.168.1.102:8181/server". When I type the external URL again in an already authenticated browser instance, I am properly redirected to the server frontend. The access.log is full of entries stating that my browser makes successful GET requests and the proxy is happily serving the /server resource. However, the resource isn't the server's frontend, but this blank page with those words on it.
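    That "The resource can be found at ..." text is typically the body of a redirect the backend issued, with a Location header the proxy never rewrote. One hedged guess, assuming the backend redirects to the trailing-slash form of the URL: add slash-terminated mappings alongside the existing ones (a sketch, not a verified fix):

        <Location /server/>
            ProxyPass http://192.168.1.102:8181/server/
            ProxyPassReverse http://192.168.1.102:8181/server/
        </Location>
        # Normalise the bare path to the slash form so both variants match
        RedirectMatch ^/server$ /server/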

    Read the article

  • Can't login to phpMyAdmin on a WAMP server running Windows 2008

    - by Richard West
    I am setting up a new server. I have installed Apache 2.2.17, PHP 5.3.3, MySQL 5.1.53 and phpMyAdmin 3.3.8 running on Windows 2008 (32-bit). I have configured Apache and PHP so they appear to be working fine. I have created the standard test PHP page with the following code, and everything appears to be working:

        <?php
        // index.php
        phpinfo();
        ?>

    I also see the mysql and mysqli sections in the above page, so it appears that I have the proper extensions loaded for MySQL access. The problem I am having centers around phpMyAdmin. I have it installed and I can get to the login screen at http://localhost/pma. I log in using "root" and the password I have set up for root. After a delay of 30 seconds or so, the page goes to a blank screen, and the URL is now http://localhost/pma/index.php?token= No error is ever displayed -- but nothing usable is either. I have confirmed that MySQL is running by going to the command line and logging in from there. I have double-checked my configuration, but I am not having any luck getting this to work. I have also disabled the Windows firewall, but that did not change anything. I installed MySQL using the standard port 3306. Any advice would be greatly appreciated.
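    A hedged diagnostic sketch to separate raw PHP-to-MySQL connectivity from phpMyAdmin itself (hypothetical filename and password placeholder; the long delay before the blank page sometimes points at name resolution, so trying 127.0.0.1 instead of localhost is worth a shot):

        <?php
        // test-db.php -- minimal connectivity check
        $link = mysqli_connect('127.0.0.1', 'root', 'your-root-password');
        if (!$link) {
            die('Connect error: ' . mysqli_connect_error());
        }
        echo 'Connected to MySQL ' . mysqli_get_server_info($link);
        ?>

    If this connects instantly while phpMyAdmin still hangs, the problem is likely in phpMyAdmin's configuration (e.g. which host it connects to) rather than in the PHP/MySQL stack.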

    Read the article

  • Losing JSESSIONID when using ProxyHTMLURLMap

    - by Matthew Schmitt
    I've set up a reverse proxy between an Apache front-end and multiple Tomcat backends. The block of config below includes the ProxyHTMLURLMap param so that the HTML can be rewritten to remove the Tomcat context path. With this setup in place, after logging into my application, an initial JSESSIONID is set properly, but when navigating to any other page, this JSESSIONID is lost and another one is set by the application. I should mention that the initial login directs to a URL that includes the current context path (i.e. https://app.domain.com/context/home), but when navigating to another page, that context path is not present in the URL (i.e. https://app.domain.com/page2).

        <Proxy balancer://happcluster>
            BalancerMember ajp://happ01.h.s.com:8009 route=worker1 loadfactor=10 timeout=15 retry=5
            BalancerMember ajp://happ02.h.s.com:8009 route=worker2 loadfactor=10 timeout=15 retry=5
            BalancerMember ajp://happ03.h.s.com:8009 route=worker3 loadfactor=5 timeout=15 retry=5
            BalancerMember ajp://happ04.h.s.com:8009 route=worker4 loadfactor=5 timeout=15 retry=5
            BalancerMember ajp://happ05.h.s.com:8009 route=worker5 loadfactor=5 timeout=15 retry=5
            ProxySet lbmethod=bytraffic
            ProxySet stickysession=JSESSIONID
        </Proxy>

        ProxyPass /context balancer://happcluster/context
        ProxyPass / balancer://happcluster/context/

        <Location /context/>
            # Rewrite HTTP headers and HTML/CSS links for everything else
            ProxyPassReverse /
            ProxyPassReverseCookieDomain / app.domain.com
            ProxyPassReverseCookiePath / /context
            ProxyHTMLURLMap /context/ /
            # Be prepared to rewrite the HTML/CSS files as they come back
            # from Tomcat
            SetOutputFilter INFLATE;proxy-html;DEFLATE
        </Location>

    Has anyone ever run into a similar situation?
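    One thing worth double-checking (an observation, not a confirmed diagnosis): ProxyPassReverseCookiePath takes the internal path first and the public path second. As written above, it maps backend cookie path / to public path /context, which would leave the session cookie scoped to /context -- a cookie that the context-less URLs like /page2 would never send back, forcing Tomcat to mint a new session. Stripping the Tomcat context from the cookie path would look like this sketch:

        # internal-path first, public-path second
        ProxyPassReverseCookiePath /context /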

    Read the article

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes will make another request and a different image will be returned. Under Chrome, the request seems to be cached, and only opening the page in a new tab will cause it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");
        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';
        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach ($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore
        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along
        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg
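    A hedged workaround sketch: since Chrome appears to cache the redirect for the subresource URL, making each request URL unique forces a fresh lookup. This assumes the page embedding the banner is itself PHP:

        <img class="banner" src="/inc/banner.php?r=<?php echo mt_rand(); ?>" alt="">

    Alternatively, having banner.php emit the image bytes directly (readfile() plus an image Content-Type header) instead of issuing a redirect would sidestep redirect caching entirely.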

    Read the article

  • Apache Process question about RAM usage

    - by Andrew Fashion
    So every time I load a new page, I notice a new httpd process opens, and each process says it's using anywhere from 2-4.5% of memory. Does that mean every single process is running at that time using 2-4% of RAM? It's a brand new server and I'm the only one on it at the moment. Or does it mean all the other processes are dying and only the new one is active? Because 4% of my 2048MB of RAM is already 82MB for just one process! Let me know, because I am trying to determine what I need to beef my server up with in order to handle high traffic loads. I expect to get 20,000 uniques per day on launch. I am currently running a dual quad-core Xeon server with only 2GB of RAM; I will upgrade to 8GB or more shortly. Let me know what you suggest! Thank you.

        [root@D18634 log]# top | grep 'httpd'
        11315 apache 15 0 362m 82m 24m R 12.3 4.1 0:03.00 httpd
        11310 apache 16 0 322m 41m 21m S  5.7 2.1 0:02.98 httpd
        11315 apache 15 0 362m 83m 25m S 24.3 4.1 0:03.73 httpd
        11319 apache 16 0 324m 42m 20m R  1.0 2.1 0:01.85 httpd
        11319 apache 16 0 362m 82m 23m R 78.5 4.1 0:04.21 httpd
        11321 apache 16 0 323m 44m 23m S 35.3 2.2 0:04.13 httpd
        11319 apache 15 0 361m 82m 23m S  8.3 4.1 0:04.46 httpd
        11321 apache 15 0 323m 44m 23m S 35.9 2.2 0:05.21 httpd
        11313 apache 15 0 324m 41m 19m S 48.6 2.1 0:03.23 httpd
        11322 apache 16 0 354m 72m 20m R 11.0 3.6 0:05.11 httpd
        11322 apache 16 0 354m 72m 20m S 23.9 3.6 0:05.83 httpd
        11314 apache 16 0 355m 75m 22m R 18.3 3.7 0:04.64 httpd
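    For what it's worth, the 82m figure is the resident set (RES), and the 24m SHR column within it is shared pages -- code and libraries counted against every worker at once -- so summing %MEM across processes overstates real usage. A rough accounting sketch (RSS is the eighth column of ps -ly output; still an overestimate, since shared pages get counted once per process):

        # Sum resident memory across all httpd workers, in MB
        ps -ylC httpd | awk 'NR > 1 { sum += $8 } END { print sum/1024 " MB" }'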

    Read the article

  • Why can't I get rid of default index.html even if I disable the default virtual host in Apache2?

    - by Emre Sevinç
    I have created a virtual host settings file and disabled the default settings using a2dissite default (this is a pretty standard Ubuntu 10.04 installation). But no matter what I try, my Apache2 server simply keeps displaying the default index.html page instead of the index.php page that I set up in the virtual host file. Can someone help me figure out what I'm missing? Details follow.

    No default settings:

        ls -l /etc/apache2/sites-enabled/
        total 0
        lrwxrwxrwx 1 root root 51 May  5 13:32 webmin.1273066327.conf -> /etc/apache2/sites-available/webmin.1273066327.conf
        lrwxrwxrwx 1 root root 34 May 30 11:03 www.accontax.be -> ../sites-available/www.accontax.be

    Contents of the relevant virtual host:

        cat /etc/apache2/sites-enabled/www.accontax.be
        <VirtualHost *>
            ServerName www.accontax.be
            ServerAlias accontax.be
            DirectoryIndex index.php
            DocumentRoot /var/www/drupal/
            <Directory /var/www/drupal/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Contents of httpd.conf:

        cat /etc/apache2/httpd.conf
        Listen 80
        NameVirtualHost *

    I also have these relevant lines in my apache2.conf:

        # Include generic snippets of statements
        Include /etc/apache2/conf.d/
        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

    When I visit http://www.accontax.be I expect Apache to go to the /var/www/drupal subdirectory and start serving index.php, but it simply keeps serving index.html from the /var/www directory. I have reloaded the configuration, restarted the server, and deleted my browser cache. Nothing changed. Probably I'm missing a simple yet crucial step, but I just could not find it.
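    A hedged first diagnostic: ask Apache which virtual host it actually picks for each name. If www.accontax.be isn't listed as a name-based vhost on port 80, requests fall through to whichever vhost loads first:

        # Dump the parsed virtual-host configuration
        apache2ctl -S
        # On some Ubuntu setups the environment must be sourced first:
        # . /etc/apache2/envvars && apache2ctl -S

    One plausible mismatch to look for (an assumption, not a confirmed diagnosis): NameVirtualHost and the <VirtualHost> address must agree textually, so if another included file declares NameVirtualHost *:80, a bare <VirtualHost *> block won't be bound to it.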

    Read the article

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on an Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file -- I assume this is /etc/apache2/httpd.conf:

        <VirtualHost *:80>
            ServerName www.yourhost.com
            DocumentRoot /somewhere/public    # <-- be sure to point to 'public'!
            <Directory /somewhere/public>
                AllowOverride all             # <-- relax Apache security settings
                Options -MultiViews           # <-- MultiViews must be turned off
            </Directory>
        </VirtualHost>

    However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives:

        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with:

        <VirtualHost efate:80>
            ServerName efate
            DocumentRoot /root/jpf/public
            <Directory /root/jpf/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" -- presumably this is a default page, but I'm not sure where it is being served from. I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly -- any help would be most gratefully received!
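    A hedged sketch of the usual workaround when no domain exists yet. The "mixing * ports and non-* ports" error suggests a NameVirtualHost * line somewhere (often ports.conf) that doesn't textually match the vhost's address, and the "It works!" page is the stock default site served out of /var/www, which a2dissite default would disable. Keeping every declaration on *:80 with a placeholder ServerName should work, since with a single enabled vhost that vhost catches everything anyway:

        # ports.conf (or wherever NameVirtualHost lives) -- must match the vhost spec exactly
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName placeholder.local    # hypothetical name; harmless until DNS exists
            DocumentRoot /root/jpf/public
            <Directory /root/jpf/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    One caveat worth flagging: a DocumentRoot under /root is often unreadable by the www-data user, which by itself can produce a 500 once the vhost does match.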

    Read the article

  • How to detect/list rogue computers connected to a WIFI network without access to the Wifi Router interface?

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WIFI/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her wifi network to access the internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the wifi devices... and if the uninvited guest broke the password once, they can do it again... Her wifi router/internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's wifi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service... So, how could she check if there are guests on the wifi, given these restrictions, and in the most "point and click" way? In this case I'd probably use nmap to look for other devices on the network, but I'm not sure if that's the easiest way to do it. I'm not a wifi expert, so I don't know if there are any wifi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, and I guess that'll influence the choice of tools available. Any suggestions more than welcome. Regards!

    Read the article

  • Connecting a network printer via a Thecus N2100 - works in Vista, not in Windows 7

    - by Jon Skeet
    I have a Lexmark E250d printer attached to a Thecus N2100 NAS. On Windows Vista I've managed to configure this using an "Internet" printer port with the URL http://thecus:631/printers/usb-printer. I can add a printer in a similar way in Windows 7, but it never manages to print the test page. If I go to "Configure Port" in Vista, it just has "Security Options" -- on Windows 7 it asks about Raw mode vs LPR mode etc. On Vista I'm using an E250d-specific driver from Lexmark; on Windows 7 there's a Microsoft E250d driver, or a Universal PCL XL driver from Lexmark... I wouldn't expect this difference to be related to the problem, but I thought I'd mention it anyway. (Lexmark doesn't have a Windows 7 E250d-specific driver as far as I can see.) Any suggestions? I was thinking of upgrading my main laptop from Vista to Windows 7, but I'd really like to get this sorted first...

    EDIT: If I connect to http://thecus:631/printers/usb-printer via Chrome while capturing with Wireshark, I get this response:

        HTTP/1.1 200 OK
        Date: Wed, 06 Jan 2010 16:47:23 GMT
        Connection: Keep-Alive
        Keep-Alive: timeout=60
        Content-Language: C
        Transfer-Encoding: chunked
        Content-Type: text/html;charset=iso-8859-1

        0

    No idea what that's meant to be doing...

    EDIT: On further consultation, this would appear to be the Internet Printing Protocol, which is layered on HTTP. Printing a test page successfully from Vista POSTs to that URL. Will attempt the same on Windows 7...

    Read the article

  • Why do I often have to refresh pages I navigate to once for them (or content in them) to load?

    - by GetOutOfBox
    I have noticed a bizarre pattern when using my PC: when I open a link to a website, it will often take a very long time to load, or time out. Sometimes content on the website will be drawn, but again, it seems to get "stuck" for an unusual amount of time before finishing. Most affected is YouTube; almost every time I navigate to a YouTube video from another website such as Google, the video will not begin playing, but will instead just display the player controls with a black screen where the video should be and the buffering symbol, usually before displaying an error such as "The video failed to load". The unusual part of this problem is that whenever this happens, refreshing the page always causes it to load almost immediately the second time around, without any problems. Note that I'm not talking about how some browsers will briefly dump whatever has been cached to the "pallet" when the page is refreshed or loading stopped, but that the second load of the website is genuinely faster. I have done my best to rule out some of the obvious causes:

    - My Windows 7 desktop computer is the only device that seems to be affected. I use Firefox on it (latest version, Flash updated, etc.).
    - My connection has more than enough bandwidth (30 megabits down, 4 up), and I've even tried QoSing all other devices to make sure this isn't happening due to usage spikes.
    - Wireshark is not showing any clearly unusual network activity (i.e. frequently dropped packets).

    Read the article

  • memcache fast-cgi php apache 2.2 windows 7 creating problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine; the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say:

        [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations
        [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12
        [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80.

    and after accessing the page [the page just has echo phpinfo()], I get this error in error.log:

        [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error
        [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php

    I have php_memcache.dll in my ext directory, and httpd.conf is like this:

        LoadModule fcgid_module modules/mod_fcgid.so
        FcgidInitialEnv PHPRC "c:/php"
        FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;"
        FcgidInitialEnv SystemRoot "C:/Windows"
        FcgidInitialEnv SystemDrive "C:"
        FcgidInitialEnv TEMP "C:/WINDOWS/Temp"
        FcgidInitialEnv TMP "C:/WINDOWS/Temp"
        FcgidInitialEnv windir "C:/WINDOWS"
        FcgidIOTimeout 64
        FcgidConnectTimeout 32
        FcgidMaxRequestsPerProcess 500
        <Files ~ "\.php$">
            AddHandler fcgid-script .php
            FcgidWrapper "c:/php/php-cgi.exe" .php
        </Files>

    So the problem has to be related to memcache, because if I disable it, FastCGI seems to work fine. Any possible reasons for this? The memcache service is running -- I can check it through Control Panel > Services.

    Read the article

  • ntop to analyse bandwidth usage on multiple ASA 5505

    - by dunxd
    I have set up a netflow server at our data centre, which is connected via VPN to ~40 remote offices using Cisco ASA 5505s. The aim is to analyse usage data and find out exactly how the remote connections are being used. I followed http://techowto.files.wordpress.com/2008/09/ntop-guide.pdf to set up ntop and https://supportforums.cisco.com/docs/DOC-6114 to set up the ASAs. I can see from the Plugin Netflow Statistics page that netflow packets from my ASAs are being received -- the counter is increasing. However, I am not seeing any breakdown on the Global Traffic Statistics page after switching to the Netflow interface; I'm just seeing a pie chart showing 100% traffic for eth0. The interfaces and documentation are a little hard to follow, so I am not sure I have got things configured correctly. When setting up my NetFlow-device.2 I can specify a Virtual NetFlow Interface Network Address -- the web UI says: "This value is in the form of a network address and mask on the network where the actual NetFlow probe is located."

    - Is this a network address (e.g. 192.168.0.0/24) or an actual host IP address (e.g. 192.168.0.1/24)?
    - If it should be a network address, is this the network one of my ASAs is on, or the network my ntop server is on?
    - If a host IP address, is this the IP address used by eth0 on my ntop server, the IP address of an ASA, or something else?
    - Do I need a separate virtual interface for each ASA I am collecting netflow data from?

    Any guidance would be greatly welcome.

    Read the article

  • Docs for OpenSSH CA-based certificate based authentication

    - by Zoredache
    OpenSSH 5.4 added a new method for certificate authentication (from the release notes):

        * Add support for certificate authentication of users and hosts using a
          new, minimal OpenSSH certificate format (not X.509). Certificates
          contain a public key, identity information and some validity
          constraints and are signed with a standard SSH public key using
          ssh-keygen(1). CA keys may be marked as trusted in authorized_keys
          or via a TrustedUserCAKeys option in sshd_config(5) (for user
          authentication), or in known_hosts (for host authentication).
          Documentation for certificate support may be found in ssh-keygen(1),
          sshd(8) and ssh(1) and a description of the protocol extensions in
          PROTOCOL.certkeys.

    Are there any guides or documentation beyond what is mentioned in the ssh-keygen man page? The man page covers how to generate certificates and use them, but it doesn't really provide much information about the certificate authority setup. For example, can I sign the keys with an intermediate CA and have the server trust the parent CA? This comment about the new feature seems to mean that I could set up my servers to trust the CA, then set up a method to sign keys, and then users would not have to publish their individual keys on the server. This also seems to support key expiration, which is great, since getting rid of old/invalid keys is more difficult than it should be. But I am hoping to find more documentation describing the total CA, SSH server, and SSH client configuration needed to make this work.
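    For reference, a minimal end-to-end sketch of the flow the release notes describe (the file names and identity are placeholders; the flags are the documented ssh-keygen certificate options):

        # 1. Create the CA keypair (ideally kept offline)
        ssh-keygen -f user_ca

        # 2. Sign a user's existing public key, producing id_rsa-cert.pub
        #    -I = certificate identity, -n = allowed principals, -V = validity
        ssh-keygen -s user_ca -I alice -n alice -V +52w id_rsa.pub

        # 3. On each server, trust the CA in sshd_config:
        #    TrustedUserCAKeys /etc/ssh/user_ca.pub

    Note the trust model is flat: sshd trusts exactly the keys listed in TrustedUserCAKeys, so there is no intermediate/parent CA chaining in the X.509 sense.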

    Read the article

  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services we pay to access their websites, asking if we could authenticate our own users; some of them said yes and sent me specs on how to do so. (One of the sites called such a page a "portal"; I've never heard the term used in quite that way.) It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this and is readily extended, surely. In my searching, I've come across:

    - SimpleSAMLphp
    - JOSSO
    - RubyCAS-Server
    - Shibboleth
    - Pubcookie
    - OpenID

    [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Services is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.] Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SAML) ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point.

    Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).

    Read the article

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do?

    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome take it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that the file would get cached, so the next time there would be no need to go fetch it. This morning the page loaded successfully, but the file was not cached, because the next request failed in the same way. Here's a video showing the problem:

    Read the article

  • .htaccess rewrite rule to ignore a directory

    - by Kirk Strobeck
    I am running a Symphony installation out of the directory symphony, but I want to remove that word from the URL in specific cases. When a user visits http://domain.com/demo it should go to http://domain.com/symphony/demo, because I've added a specific rule for demo. If I haven't added a specific rule for demo in the .htaccess, then it should resolve to http://domain.com/demo as typed. This will route it to another part of our app. Here is my current rewrite rule:

        ### Symphony 2.3.x ###
        Options +FollowSymlinks -Indexes

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /

            ### SECURITY - Protect crucial files
            RewriteRule ^manifest/(.*)$ - [F]
            RewriteRule ^workspace/(pages|utilities)/(.*)\.xsl$ - [F]
            RewriteRule ^(.*)\.sql$ - [F]
            RewriteRule (^|/)\. - [F]

            ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
            RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
            RewriteRule .* - [S=14]

            ### IMAGE RULES
            RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [B,L,NC]

            ### CHECK FOR TRAILING SLASH - Will ignore files
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_URI} !/$
            RewriteCond %{REQUEST_URI} !(.*)/$
            RewriteRule ^(.*)$ $1/ [L,R=301]

            ### URL Correction
            RewriteRule ^(symphony/)?index.php(/.*/?) $1$2 [NC]

            ### ADMIN REWRITE
            RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]

            ### FRONTEND REWRITE - Will ignore files and folders
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
        </IfModule>
        ######

    How would I change the rewrite rule to support those cases?
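    A hedged sketch of what such a per-path rule could look like, placed above the frontend rewrite so it matches first (hypothetical; whether the target should be the /symphony/... path or the index.php front controller depends on how the symphony directory handles its own rewriting, so treat this as a starting point):

        ### DEMO - send this one path into the Symphony install
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^demo(\/.*)?$ /symphony/demo$1 [L]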

    Read the article

  • foswiki: hide some topic info when editing in WYSIWYG mode.

    - by Mica
    I have a Foswiki installation with a bunch of topic templates already defined. The problem is, when a user selects a topic template, they are presented with a bunch of extra information that they should not edit, and should not even see, really. Is there a way to hide this content in the WYSIWYG editor? Example: the topic template looks like this:

        <!--
        * Foswiki.GenPDFAddOn Settings
            * Set GENPDFADDON_TITLE = <font size="7"><center>Foo</center></font>
            * Set GENPDFADDON_HEADFOOTFONT = helvetica
            * Set GENPDFADDON_FORMAT = pdf14
            * Set GENPDFADDON_PERMISSIONS = print,no-copy
            * Set GENPDFADDON_ORIENTATION = portrait
            * Set GENPDFADDON_PAGESIZE = letter
            * Set GENPDFADDON_TOCLEVELS = 0
            * Set GENPDFADDON_HEADERSHIFT = 0
        -->
        <!-- PDFSTART -->
        <!-- HEADER LEFT "Foo:Bar" -->
        <!-- HEADER RIGHT "%BASETOPIC%" -->
        <!-- HEADER CENTER " " -->
        <!-- FOOTER RIGHT "Doc Rev %REVINFO{"r$rev - $date " web="%WEB%" topic="%BASETOPIC%"}%" -->
        <!-- FOOTER LEFT "F-xxx Rev A" -->
        <!-- FOOTER CENTER "Page $PAGE(1)" -->

        Header 1
        foo etc. etc. etc

        <!-- pdfstop -->

    And when the user selects the topic template, they get all of that in the WYSIWYG editor. I would like to hide it, so that when the user selects the topic template, they just get:

        Header 1
        foo etc etc etc

    without any of the other markup.

    Read the article

  • How to set only specific nginx server block into maintenance mode programmatically

    - by Ville Mattila
    I am looking for a solution to automate part of our application's deployment process. At the beginning of a deployment, I would like to programmatically put the specified server into maintenance mode, and at the end, once the deployment has completed, remove the maintenance-mode flag. By maintenance mode, I mean that nginx should respond with HTTP response code 503 to all requests (with a possible custom page). I know how to make a server block respond with a 503 (see http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/), but the question is how to do this programmatically and most efficiently. Two options have come to mind.

    Option 1: At the beginning of the deployment process, write a maintenance file into the document root, and conditionally check for its existence in the nginx server config:

        server {
            if (-f $document_root/in_maintenance_mode) {
                return 503;
            }
        }

    This method carries some overhead, as the file's existence is checked for each request. Is it possible to check the file's existence only when loading the nginx config?

    Option 2: The deployment script replaces the whole nginx server configuration file with a maintenance version and swaps it back at the end of the deployment. With this method, I am concerned that other automation processes, like puppet, may overwrite the maintenance configuration file.
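    On the question in option 1: as far as I know, nginx evaluates the if (-f ...) test per request; there is no built-in way to have it checked only at config load, so avoiding the per-request stat means swapping configs and reloading, as in option 2. A hedged sketch of option 1 with a custom page (paths are hypothetical):

        server {
            listen 80;
            root /var/www/app;

            error_page 503 /maintenance.html;

            location = /maintenance.html {
                root /var/www/maintenance;
                internal;
            }

            location / {
                if (-f $document_root/in_maintenance_mode) {
                    return 503;
                }
                # ... normal serving/proxying ...
            }
        }

    The deploy script then just touches /var/www/app/in_maintenance_mode to enter maintenance and removes it to leave -- no reload needed, which is the trade-off against the per-request check.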

    Read the article

  • Official end of support date for Internet Explorer 6 on Windows XP

    - by scunliffe
    If I read the docs on Windows Service Pack support policies, the specific Internet Explorer lifecycle support page, and the Wikipedia page, I've deduced that IE6 support ends/ended at:

    - Windows 2000: ended (date unknown)
    - Windows XP SP0 (RTM): ended -- Home: 30-Aug-2003; Pro: 30-Sep-2004
    - Windows XP SP1: ended -- Home: 11-Jul-2004; Pro: 11-Jul-2004
    - Windows XP SP2: Home: 13-Jul-2010; Pro: 13-Jul-2010
    - Windows XP SP3 (released April 21, 2008): Home: ???; Pro: ???

    What isn't clear is the Windows XP SP3 scenario. In "human" terms, when is the end of support for IE6 on Windows XP SP3? e.g. if there is never a Windows XP SP4... or, heaven forbid, an SP4 is released. I realize this doesn't force people to upgrade etc.; however, I'm trying to get a "semi" official word on when IE6 moves into the "not supported" category. I'm not interested in philosophical answers, e.g. "big enterprise won't upgrade but they will expect support into 2017" stuff... I just want the "clear answer" in terms of official Microsoft support.

    Read the article

  • Software to automate website screenshot capture

    - by Leniel Macaferi
    Do you know of any software that can automate the process of taking screenshots of every page of a website? It would act like a spider/crawler/robot, you name it. For example: I developed a website and now I'd like to get a screenshot of every page of the site. I could of course do it manually (a lot of work). For each module of the site (Student, Payment, etc.) I have pages with different forms (Create, Edit, Details, Delete, etc.). The thing I'm looking for is software that can visit every link of the site and then capture the screen -- software that can automate the whole process. It would also be good if the software allowed the user to pass in a list of URLs to capture, allowing even more fine-grained configuration.

    EDIT: I tried Selenium, mentioned by Aaron in his answer, but I managed to find an app that does exactly what I needed. It's called Paparazzi!. I wrote a blog post to showcase my attempt at Selenium and the findings regarding Paparazzi!'s batch capture functionality: Software to automate website screenshot capture

    Read the article

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager tool for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (hundreds of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level sysadmin tools such as Chef, Puppet, Cfengine, Bcfg2 and more. My understanding -- and the reason I'm calling them "low level" -- is that they are great for system-level administration tasks such as setting up a mount, file permissions, users, etc., but they aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task, it can get uncomfortable. OTOH, ControlTier seems to have been designed just for that -- Java application deployments; at least that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open-source CT product is very responsive (pushy...); however, I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters -- nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: one, it makes me suspicious about the reason for the poor adoption; and two, what do I do if I get stuck, there's no help page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing. So my question: I'd be interested in hearing whether anyone has real-world experience with CT for Java-based web app deployments, and whether they'd give the product a thumbs up. Any other comments that may enlighten me are welcome, of course...

    Read the article

  • Problem with single quotes in man pages

    - by Peter
    When I ssh into my Debian Lenny server and open a man page, single quotes appear to be messed up. An example from the man page of apt-get:

        If no package matches the given expression and the expression contains one
        of ´.´, ´?´ or ´*´ then it is assumed to be a POSIX regular expression, and
        it is applied to all package names in the database. Any matches are then
        installed (or removed). Note that matching is done by substring so ´lo.*´
        matches ´how-lo´ and ´lowest´. If this is undesired, anchor the regular
        expression with a ´^´ or ´$´ character, or create a more specific regular
        expression.

    I'm on Mac OS X and using xterm. If I use Terminal, the problem doesn't happen. My locale is configured correctly as far as I can see:

        $ locale
        LANG=en_US.UTF-8
        LC_CTYPE="en_US.UTF-8"
        LC_NUMERIC="en_US.UTF-8"
        LC_TIME="en_US.UTF-8"
        LC_COLLATE="en_US.UTF-8"
        LC_MONETARY="en_US.UTF-8"
        LC_MESSAGES="en_US.UTF-8"
        LC_PAPER="en_US.UTF-8"
        LC_NAME="en_US.UTF-8"
        LC_ADDRESS="en_US.UTF-8"
        LC_TELEPHONE="en_US.UTF-8"
        LC_MEASUREMENT="en_US.UTF-8"
        LC_IDENTIFICATION="en_US.UTF-8"
        LC_ALL=

    I'm not sure what's wrong with my environment, and I have no idea what to check next. I'd appreciate help.
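    Since Terminal works and xterm doesn't, a hedged guess is that the mismatch is on the client side: the man page is rendered for a UTF-8 locale, but the xterm session may not actually be running in UTF-8 mode. Two quick checks (assumptions, not a confirmed diagnosis):

        # inside the xterm, before ssh'ing -- should print UTF-8
        locale charmap

        # if it doesn't, start xterm with UTF-8 enabled
        xterm -u8        # or use the uxterm wrapper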

    Read the article

  • Kindle (client) for Mac

    - by doug
    So we're clear, I'm talking about the client/software version here -- i.e., what you install on your Mac or PC -- not the device. The Kindle client was recently released for the Mac. I bought a couple of Kindle-edition books and I'm reading them using this client. Astonishingly, two features I consider more or less essential to any ebook reader are missing from the Kindle client -- either that, or I can't find them: (i) text searching; and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window; the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location". "Location" requires that you type in an integer (but it doesn't correspond to a page number -- e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a button in the upper right-hand corner of the window, "Show Notes and Marks", yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

    Read the article

  • Apache 2.2.21 installed on Linux 6, but errors when accessing in a browser

    - by JRanjan
    I am very new to Linux. I have installed Apache 2.2.21 on a Linux 6 platform. When I use ./apachectl start or ./apachectl -k start, it reports that Apache is started. But when I try to access the Apache default page in any browser using " http://:8080 ", it shows "page cannot be displayed". Can anyone help me with this issue? It's urgent. I am also enclosing the error_log file below:

        [Thu Nov 24 08:57:23 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Fri Nov 25 01:45:58 2011] [notice] caught SIGTERM, shutting down
        [Fri Nov 25 01:46:12 2011] [notice] Digest: generating secret for digest authentication ...
        [Fri Nov 25 01:46:12 2011] [notice] Digest: done
        [Fri Nov 25 01:46:13 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Fri Nov 25 01:54:58 2011] [notice] caught SIGTERM, shutting down
        [Fri Nov 25 01:55:10 2011] [notice] Digest: generating secret for digest authentication ...
        [Fri Nov 25 01:55:10 2011] [notice] Digest: done
        [Fri Nov 25 01:55:11 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Fri Nov 25 01:58:10 2011] [notice] caught SIGTERM, shutting down
        [Fri Nov 25 01:59:41 2011] [notice] Digest: generating secret for digest authentication ...
        [Fri Nov 25 01:59:41 2011] [notice] Digest: done
        [Fri Nov 25 01:59:42 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Fri Nov 25 03:23:14 2011] [notice] caught SIGTERM, shutting down
        [Fri Nov 25 03:27:36 2011] [notice] Digest: generating secret for digest authentication ...
        [Fri Nov 25 03:27:36 2011] [notice] Digest: done
        [Fri Nov 25 03:27:37 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Fri Nov 25 08:52:27 2011] [notice] caught SIGTERM, shutting down
        [Fri Nov 25 08:52:43 2011] [notice] Digest: generating secret for digest authentication ...
        [Fri Nov 25 08:52:43 2011] [notice] Digest: done
        [Fri Nov 25 08:52:44 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Fri Nov 25 09:21:39 2011] [notice] caught SIGTERM, shutting down
        [Fri Nov 25 09:21:57 2011] [notice] Digest: generating secret for digest authentication ...
        [Fri Nov 25 09:21:57 2011] [notice] Digest: done
        [Fri Nov 25 09:21:58 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
        [Mon Nov 28 01:06:58 2011] [notice] caught SIGTERM, shutting down
        [Mon Nov 28 01:07:58 2011] [notice] Digest: generating secret for digest authentication ...
        [Mon Nov 28 01:07:58 2011] [notice] Digest: done
        [Mon Nov 28 01:07:59 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
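    The log shows clean startups and SIGTERM shutdowns, nothing fatal, so a hedged guess is that the problem sits in front of Apache rather than inside it. A few checks worth running on the server (assuming Apache really is configured to listen on 8080; RHEL 6 ships with iptables enabled by default, and the conf path below is a hypothetical source-install location):

        # Is anything actually listening on 8080?
        netstat -tlnp | grep -E ':(80|8080) '

        # Confirm the configured Listen port
        grep -i '^Listen' /usr/local/apache2/conf/httpd.conf

        # Is the firewall dropping the port?
        service iptables status
        # To open it temporarily (assumption: iptables is the blocker):
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT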

    Read the article
