Search Results

Search found 62215 results on 2489 pages for 'http basic authentication'.


  • Adding Facebook IPv6 to Centos, getting CurlException 7

    - by Nick
    I currently get the following error. After searching about this issue (correct me if I'm wrong), I believe that adding/configuring IPv6 should solve the problem.

        PHP Fatal error: Uncaught CurlException: 7: Failed to connect to 2a03:2880:10:8f02:face:b00c:0:26: Network is unreachable\n thrown in /var/www/vhosts/facedex.net/httpdocs/fb/apps/seemyfuture/src/base_facebook.php on line 886

    The problem is I don't know the right way to add it. There seem to be many methods:

    http://tldp.org/HOWTO/Linux+IPv6-HOWTO/x1035.html#AEN1044
    http://unix.stackexchange.com/questions/34093/static-ipv4-ipv6-configuration-on-centos-6-2

    My netstat shows this (the shell doesn't recognize -rn6, though; it reports "invalid option -- 6"):

        netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS Window  irtt Iface
        27.254.38.128   0.0.0.0         255.255.255.128  U        0 0          0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0      U        0 0          0 eth0
        0.0.0.0         27.254.38.254   0.0.0.0          UG       0 0          0 eth0

    FYI: I'm using CentOS 5.7. Thanks a lot in advance.
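
    A minimal sketch of a static IPv6 setup on CentOS 5, assuming the provider actually routes IPv6 to the box; the address and gateway below are placeholders, not values from the question. Without native IPv6 (or a tunnel) from the provider, no local configuration will make Facebook's IPv6 address reachable:

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        IPV6INIT=yes
        IPV6ADDR=2001:db8::2/64        # placeholder; use the prefix your provider assigned
        IPV6_DEFAULTGW=2001:db8::1     # placeholder gateway

    Then restart networking with "service network restart" and verify the routing table with "ip -6 route".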

    Read the article

  • How do I get around "Access is Denied" [Number: 5 (0x80070005)], with IIS6/FastCGI and PHP 5.2.3?

    - by Evan Carroll
    I'm getting this error with IIS 6.0 (I assume), PHP 5.2.3, and FastCGI:

        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        Error Number: 5 (0x80070005).
        Error Description: Access is denied.
        HTTP Error 500 - Server Error. Internet Information Services (IIS)

    Any ideas? There's nothing revealing in the logs (other than 500 errors); this is pretty much all I have to work with. The script has read and execute privileges for the Internet guest account, and I've added read/execute privileges to the whole of D:\PHP. I followed this tutorial http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/ to set it up. The only major deviation is that I installed PHP to D:\PHP.
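
    One way to double-check the ACLs from a command prompt; a sketch, assuming icacls is available (it ships with Server 2003 SP2; older systems only have cacls) and assuming the default anonymous account name IUSR_<MACHINENAME>, which you should verify in the site's Directory Security settings:

        rem grant read+execute on the PHP directory, inherited by subfolders and files
        icacls D:\PHP /grant IUSR_MYSERVER:(OI)(CI)RX

        rem review the resulting ACL
        icacls D:\PHP

    Note that on IIS 6 the worker process runs as a member of IIS_WPG, which typically needs read access too; "Access is denied" from the FastCGI handler often points at the worker identity rather than the anonymous user.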

    Read the article

  • How can I use VirtualDocumentRoot to serve the www subdomain with SSL enabled?

    - by mdgreenwald
    I am able to serve http://www.domain.com and http://domain.com; https://domain.com works fine too. But https://www.domain.com doesn't work, for some reason. I even created a www.domain.com site in my sites-available folder and enabled it. I reloaded the configuration, and yet it still doesn't work. I have a wildcard certificate, so that is NOT the problem.

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
        ServerAdmin [email protected]
        ServerName *.domain.com:443
        ServerAlias www.domain.com
        VirtualDocumentRoot /var/www/%0

    Thanks for any help.
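
    One likely culprit: ServerName does not accept wildcards; the wildcard belongs in ServerAlias. A minimal sketch of a mass-virtual-hosting SSL vhost, assuming Apache 2.2 with mod_vhost_alias and mod_ssl enabled (certificate paths are placeholders):

        NameVirtualHost *:443

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerName domain.com
            ServerAlias *.domain.com
            VirtualDocumentRoot /var/www/%0
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>
        </IfModule>

    Also note that %0 interpolates the full requested hostname, so /var/www/www.domain.com and /var/www/domain.com are distinct directories; both must exist (or redirect one name to the other).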

    Read the article

  • How do I generate a core dump from apache

    - by blockhead
    I have a script which intermittently returns a white screen of death in Firefox, and "Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error." in Chrome. When I try to access the script using a PHP HTTP client (like Zend_Http_Client), I intermittently get an exception (sorry, I don't have the exact message on me at the moment). I suspect a segfault. This is further buttressed by lines in my error log that look like this:

        [Thu Mar 18 16:03:02 2010] [notice] child pid 845 exit signal Segmentation fault (11)

    Now, I'm running RedHat, and I know that RedHat doesn't generate core dumps out of the box. I followed the instructions here http://kbase.redhat.com/faq/docs/DOC-5353, but I'm not seeing any core dumps. How do I generate a core dump?
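
    A minimal checklist sketch for getting cores out of httpd on RHEL; the dump directory is arbitrary, and the suid_dumpable step matters because the httpd children switch uid after starting as root, which suppresses dumps by default:

        # /etc/httpd/conf/httpd.conf
        CoreDumpDirectory /tmp/apache-cores

        # shell, before restarting Apache
        mkdir -p /tmp/apache-cores
        chmod 0777 /tmp/apache-cores
        echo 1 > /proc/sys/fs/suid_dumpable    # allow dumps from processes that changed uid
        service httpd restart

    The "ulimit -c unlimited" from the Red Hat article has to apply to the httpd processes themselves (e.g., by adding it to /etc/sysconfig/httpd, which the init script sources); setting it in an interactive shell is not enough, and that is the most common reason no core file ever appears.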

    Read the article

  • Apache default NameVirtualHost does not work

    - by Luc
    Hello, I have 4 name-based virtual hosts in my Apache configuration, each one using proxy_http to forward requests to the correct server. They work fine.

        <VirtualHost *:80>
        ServerName application_name.domain.tld
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass / http://server_ip/
        ProxyPassReverse / http://server_ip/
        </VirtualHost>

    I then tried to add a default NameVirtualHost to take care of the requests whose server name does not match one of the four others. Otherwise a request like some_weird_stuff.domain.tld would be forwarded to one of the 4 virtual hosts. So I added this one:

        <VirtualHost *:80>
        ServerAlias "*"
        DocumentRoot /var/www/
        </VirtualHost>

    At the beginning it seemed to work fine, but at some point it appears that requests that should be handled by one of the 4 regular hosts are "eaten" by the default one!!! If I a2dissite this default host, everything is back to normal... I do not really understand this. If you have any clue... thanks a lot, Luc
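
    With name-based virtual hosts, Apache already treats the first vhost defined as the default for unmatched Host headers, so an explicit ServerAlias "*" isn't needed; and since a2dissite suggests Debian/Ubuntu, where enabled sites load in alphabetical order, a wildcard alias in a file that happens to sort early can start matching requests meant for the named vhosts. A minimal sketch of the usual approach:

        # /etc/apache2/sites-available/000-default   (the 000- prefix makes it load first)
        <VirtualHost *:80>
            ServerName catchall.invalid    # hypothetical name that never matches a real request
            DocumentRoot /var/www/
        </VirtualHost>

    Because it is defined first and carries no wildcard alias, it only receives requests that no other vhost claims.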

    Read the article

  • Running a reverse proxy in front of Splunk 4.x

    - by sgerrand
    So, I have previously installed Splunk 3.x behind a reverse proxy, and I downloaded the latest version (4.0.6 at time of typing) expecting it to be as easy to set up as before. Sadly, this was not the case. There appear to be some elements which are not being translated correctly through the reverse proxy, causing Splunk to fail. I have used the following configuration in Apache2 to no avail:

        ServerName monitoringbox.com
        DocumentRoot /path/to/nowhere
        ProxyRequests off
        ProxyPass /splunk http://127.0.0.1:8000/splunk
        ProxyPassReverse /splunk http://127.0.0.1:8000/splunk
        Order allow,deny
        Allow from all

    Has anyone else had more luck than me in setting up Splunk 4.x behind a reverse proxy?
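
    If I remember correctly, Splunk's web UI has to be told what URL prefix it lives under, or it generates absolute links that bypass the proxy; a sketch, assuming the root_endpoint setting that Splunk's web.conf supports (verify the exact key against the 4.x documentation):

        # $SPLUNK_HOME/etc/system/local/web.conf
        [settings]
        root_endpoint = /splunk

        # then restart the web component
        $SPLUNK_HOME/bin/splunk restart splunkweb

    With that in place, the ProxyPass /splunk ... lines above line up with the prefix Splunk itself uses when emitting links.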

    Read the article

  • How to install Vips on CentOS?

    - by A4J
    I am trying to install VIPS on my CentOS box. I've downloaded the latest files:

        wget http://www.vips.ecs.soton.ac.uk/supported/current/vips-7.30.0.tar.gz
        wget http://www.vips.ecs.soton.ac.uk/supported/current/nip2-7.30.1.tar.gz

    And on ./configure I get:

        configure: error: Package requirements (glib-2.0 >= 2.6 gmodule-2.0 >= 2.4 libxml-2.0 gobject-2.0) were not met:
        No package 'glib-2.0' found
        No package 'gmodule-2.0' found
        No package 'gobject-2.0' found

    However, I have tried yum install glib2 and rerun ./configure, but I get the same error. Am I doing something wrong? Does anyone know how to install it correctly on CentOS 6?
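
    configure is looking for pkg-config metadata (.pc files), which on CentOS live in the -devel packages rather than the runtime ones, so installing glib2 alone doesn't help. A likely fix, assuming stock CentOS 6 repositories:

        yum install glib2-devel libxml2-devel
        # nip2 is a GUI and will additionally want GTK headers, e.g.:
        yum install gtk2-devel

    After that, "pkg-config --exists glib-2.0 && echo ok" should print "ok", and ./configure should get past this check.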

    Read the article

  • Windows 7 IIS problem

    - by Carlo
    Hi, the thing is that I just installed IIS on my Windows 7 computer to test it out, but I'm getting this error as soon as I try to load localhost in my web browser:

        Service Unavailable
        HTTP Error 503. The service is unavailable.

    I found this solution but it didn't work. Does anyone have an idea of what's going on? Thanks!

    More info: I checked the Application Pools; as soon as I go to http://localhost/, the application pool of the website I try stops.
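
    A 503 plus a pool that stops on the first request usually means the worker process fails to start (bad pool identity, corrupt applicationHost.config, or a module problem); the reason is normally recorded in the System event log with source WAS. A quick sketch for inspecting and restarting the pool from an elevated prompt (the pool name is an assumption; list first):

        %windir%\system32\inetsrv\appcmd list apppool
        %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"DefaultAppPool"

    If the pool stops again on the next request, check Event Viewer > Windows Logs > System for WAS events logged at that moment.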

    Read the article

  • Resolving System Restore error - Error 0x800423F3 - The writer experienced a transient error

    - by Gishu
    System Restore just stopped working, when I needed it most (I tried different restore points... same error). Now every time I run System Restore, it fails with the above error message. I cleared the event logs and retried, to isolate the relevant events. I see 5 warnings and 1 info event from VSS, and 1 error from System Restore. Here are the first warning from VSS and the error:

    http://dl.dropbox.com/u/11733224/SystemRestore-FirstWarning.txt
    http://dl.dropbox.com/u/11733224/SystemRestore-FirstError.txt

    I've tried a lot of stuff, but in vain; the error still persists.
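
    Since 0x800423F3 is a VSS writer error, a first diagnostic step is to ask VSS which writer is unhappy; a sketch using the stock vssadmin tool from an elevated prompt:

        vssadmin list writers
        vssadmin list providers

    A writer shown in a failed state can often be reset by restarting the service that owns it (for the System Writer, that is typically Cryptographic Services) or by rebooting, after which System Restore can be retried.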

    Read the article

  • How do you fix the Performance Dashboard datetime overfow error

    - by Mike L
    I'm a programmer/DBA by accident, and we're running SQL Server 2005 with Performance Dashboard for basic monitoring. The server has been up for a few weeks, and now we can't drill into certain reports. Is there any way to reset these reports without a complete reboot?

    Edit: I bet the error message would help. I get this when I drill into the CPU graph:

        Error: Difference of two datetime columns caused overflow at runtime.
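
    The timing ("up for a few weeks") is the giveaway: DATEDIFF with a millisecond datepart returns an int, which overflows once the interval passes roughly 24.9 days, and the Performance Dashboard queries compute some spans that way from instance start time. A hedged illustration of the arithmetic and the usual style of workaround (the Dashboard's own object names vary, so its setup.sql would be patched accordingly):

        -- overflows when the span exceeds ~24.9 days (2^31-1 milliseconds):
        SELECT DATEDIFF(ms, login_time, GETDATE()) FROM sys.dm_exec_sessions;

        -- safe equivalent: difference in seconds, widened to bigint, then scaled
        SELECT CAST(DATEDIFF(s, login_time, GETDATE()) AS bigint) * 1000
        FROM sys.dm_exec_sessions;

    Restarting the SQL Server service (not the whole machine) also resets the condition, since the spans are measured from instance startup.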

    Read the article

  • FastCGI Error Access to the script denied

    - by ArtWorkAD
    I have a Debian Squeeze server running nginx + php-fpm + FastCGI. I have a TYPO3 installation on this server, which runs well. Now I installed OTRS, and I get an error that I do not understand:

        2012/06/25 15:35:38 [error] 16510#0: *34 FastCGI sent in stderr: "Access to the script '/opt/otrs/bin/fcgi-bin/index.pl' has been denied (see security.limit_extensions)" while reading response header from upstream, client: ..., server: support.....com, request: "GET /otrs/index.pl HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "support.....com", referrer: "http://support.....com/"

    Why do I get this error? The otrs directory is writable by the webserver, so that is not the problem. Any ideas?
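
    The message comes from php-fpm: since PHP 5.3.9 its security.limit_extensions defaults to .php, so it refuses index.pl; and since OTRS's index.pl is Perl rather than PHP, handing it to php-fpm would not work even if allowed. A sketch of the usual approach, routing /otrs to a Perl-capable FastCGI wrapper such as fcgiwrap (the socket path is the Debian default for the fcgiwrap package, but verify it):

        # route OTRS's Perl entry point to fcgiwrap instead of php-fpm
        location /otrs {
            include fastcgi_params;
            # map the URL to the script on disk; adjust to your OTRS install
            fastcgi_param SCRIPT_FILENAME /opt/otrs/bin/fcgi-bin/index.pl;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
        }

    Alternatively, the script under bin/fcgi-bin/ can be run as a persistent FastCGI process (e.g., via spawn-fcgi) and pointed at from nginx the same way.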

    Read the article

  • How do I export calendar events in Mozilla lightning

    - by Andrew Grimm
    How do I export calendar events in Mozilla Lightning? I'm using Thunderbird 3.0.4. (Sorry for such a basic question, but clicking on "Help contents" takes me to http://support.mozillamessaging.com/en-US/kb/, searching the knowledge base for "lightning export" got zero hits, and searching for "export" got only one irrelevant hit.)

    Read the article

  • SCCM Client Push FAIL - Win2000 box

    - by ajp
    Hello, when trying to install the SCCM client onto a Windows 2000 box, the install fails. The install script is run through a batch file hosted on a public area of the network:

        \mdop\SCCM_client\ccmsetup.exe /mp:MDOP /logon smssitecode=MID smsslp=MDOP

    This script has worked for all machines (mostly Win2003 Server). I've tried enabling all the common services it requires (BITS, IIS Admin, Windows Installer), but it still only runs for a second or two and then quits. Here's the piece of the log file where it errors out:

        [LOG[Couldn't get directory list for directory 'http://MDOP/CCM_Client/ClientPatch'. This directory may not exist.]LOG]! time="13:55:53.618+300" date="06-30-2009" component="ccmsetup" context="" type="0" thread="1676" file="ccmsetup.cpp:6054"

    Full log: http://paste-it.net/public/gb11732/

    Read the article

  • Sony Vaio CPU fan runs at full speed after installing Windows 8.1 preview

    - by greg27
    I installed the Windows 8.1 preview on my Sony Vaio E Series 14P (http://www.sony.com.au/product/sve14a27cg) and now my CPU fan is running at full speed all the time. When I first start my computer it runs normally, but a few seconds after reaching the Windows login screen, the fan speed suddenly jumps up to max. It's pretty loud, so I'm hoping there's a solution! I've tried updating my BIOS to the latest version, but that didn't help. I've tried programs like SpeedFan, but that wasn't able to detect my CPU fan at all. My CPU usage and temperature are normal and don't warrant the max fan speed. Someone else has reported the issue here: http://answers.microsoft.com/en-us/windows/forum/windows8_1_pr-hardware/sony-vaio-e-series-sve14a35cxh100-fan-throttle/45ec823a-2bc8-43ea-8557-b1a5dd0a6870 Is there any way to fix this without having to refresh/reinstall Windows?

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests:

    - The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again.

    - Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user-defined maximum lifetime can be set in the Flash preferences either. Not being HTTP cookies, they are (of course) not blocked by a browser's cookie preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately there is no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: "[..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player."

    - Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default "Allow third-party Flash content to store data on your computer". Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works."

    - When selecting None (zero KB) for "Specify the amount of disk space that websites that you haven't yet visited can use to store information on your computer", and checking "Never ask again", then some sites do not work. However, the same site might work when setting it to None but without selecting "Never ask again", and then choosing Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs.

    - The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue.

    So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in:

        $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/

    Deleting the settings.sol file and all the folders in sys removes the trail from the settings panels. However, the actual Local Shared Objects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of:

        $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects

    But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all other browsers as well). I don't know if that also deletes the trail of website domain names.

    Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash from writing to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems, but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac:

        cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer"
        rm -r sys/*
        chmod u-w sys
        cd "$HOME/Library/Preferences/Macromedia/Flash Player"
        # preserve the randomly named subfolders (only preserving the latest would suffice; see below)
        rm -r \#SharedObjects/*/*
        chmod -R u-w \#SharedObjects

    I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone?

    Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X):

    - When blocking the sys folder (even when leaving the #SharedObjects folder writable), YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again.

    - When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites.

    - When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name.

    - For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling "Allow third-party Flash content to store data on your computer" might have the very same result. Any experience, anyone?

    (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
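
    For the "automatically delete" question above, a minimal sketch using cron on the Mac (a LaunchAgent would be the more idiomatic macOS mechanism); the paths are the ones identified above, and whether deleting while a browser is running is safe remains the open question from the post:

        # crontab -e: wipe the Flash history trail and Local Shared Objects daily at 03:00
        0 3 * * * rm -rf "$HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects"/* "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys"/*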

    Read the article

  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should load by default, but instead I get a 404 error when I try to access http://example.com/:

        The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.

    This has something to do with nginx and Unicorn, which I am using to power Rails 3. When I take Unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file:

        upstream unicorn {
          server unix:/tmp/.sock fail_timeout=0;
        }
        server {
          server_name example.com;
          root /www/example.com/current/public;
          index index.html;
          keepalive_timeout 5;
          location / {
            try_files $uri @unicorn;
          }
          location @unicorn {
            proxy_pass http://unicorn;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
          }
        }

    My config/routes.rb is pretty much empty:

        Advertise::Application.routes.draw do |map|
          resources :users
        end

    The index.html file is located in public/index.html, and it loads fine if I request it directly: http://example.com/index.html

    To reiterate: when I remove all references to Unicorn from the nginx conf, index.html loads without any problems. I have a hard time understanding why this occurs, because nginx should be trying to load that file on its own by default.

    Here is the error stack from production.log:

        Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700
        Processing by HomeController#index as HTML
        Completed in 1ms

        ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"):
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml'
        etc...

    The nginx error log for this virtual host comes up empty:

        2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection

    My guess is Unicorn is intercepting the request to index.html before nginx gets to process it.
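
    The behaviour follows from how try_files works: for a request to /, $uri is "/", which is a directory rather than an existing file, so the check fails and the request falls through to @unicorn; the index directive never gets a chance. A minimal sketch of the usual fix, adding the directory-index candidate explicitly:

        location / {
            # file match, then directory index, then the Rails app
            try_files $uri $uri/index.html @unicorn;
        }

    With that, nginx serves public/index.html itself for /, and Rails (whose root action has no template, hence the MissingTemplate error) is never consulted.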

    Read the article

  • Differences between FCKeditor and CKeditor?

    - by matt74tm
    Sorry, but I've not been able to locate anything (even on the official pages) informing me of the differences between the two versions. Even a query on their official forum has been unanswered for many months: http://cksource.com/forums/viewtopic.php?f=11&t=17986&start=0

    The license page talks a bit about the "internal" differences: http://ckeditor.com/license

        CKEditor Commercial License - CKSource Closed Distribution License - CDL
        ... This license offers a very flexible way to integrate CKEditor in your commercial application. These are the main advantages it offers over an Open Source license:
        * Modifications and enhancements doesn't need to be released under an Open Source license;
        * There is no need to distribute any Open Source license terms alongside with your product and no reference to it have to be done;
        * No references to CKEditor have to be done in any file distributed with your product;
        * The source code of CKEditor doesn't have to be distributed alongside with your product;
        * You can remove any file from CKEditor when integrating it with your product.

    Read the article

  • File association and msconfig broken. Malware?

    - by Moshe
    A friend of mine has an Acer laptop running Vista Home Basic. I can't get System Properties to open, and msconfig does not run. Also, the .exe file type is asking me what program to open it with. How can I fix that? I'm running AVG now; assuming nothing shows up, what are the fixes for the issues mentioned above?
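
    If the .exe association itself is broken, a common repair is a .reg file restoring the exefile handler; a minimal sketch (on a machine where .reg handling is also hijacked, regedit can usually still be launched via its full path from Task Manager's File > New Task):

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\.exe]
        @="exefile"

        [HKEY_CLASSES_ROOT\exefile\shell\open\command]
        @="\"%1\" %*"

    Save as fix-exe.reg and merge it; this only addresses the association, not whatever malware broke it, so a full scan (ideally from a rescue disk) should still follow.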

    Read the article

  • uploading a python site to httpdocs?

    - by daniel Crabbe
    OK, so we've agreed to host a Python site, got the files, and we're not sure where to go next. We use a dedicated server and manage it mainly with Plesk, which has a tick box for Python support, but I'm not sure what this does. This is all the info I have from the previous hosts:

        10,000 ft overview: The site is intended to run on a Linux host, specifically Ubuntu Server (tho it should be fine on most distros). The web framework is CherryPy ( http://cherrypy.org/ ), which is a Python based framework. There is no database as such; instead the data is kept in JS files and loaded by the front end. nicholasbarker.com.c6a4facf0192/www/js/video_content_items.js is a prime example of this. The main site templates are in nicholasbarker.com.c6a4facf0192/www/templates/ They are Cheetah templates ( http://www.cheetahtemplate.org/ )

    And here's the file structure I've been sent - Could someone explain to me how I'd go about uploading and running this site... Any help welcome! Dc
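
    CherryPy apps run their own HTTP server process, so Plesk's Python tick box (typically mod_python) is not what they need; the common pattern is to start the app on a local port and have Apache proxy to it. A sketch under stated assumptions: the entry-point file name and the port are guesses, so check the files you were sent for the script that calls cherrypy.quickstart():

        # start the CherryPy app (entry-point name is hypothetical)
        cd /var/www/vhosts/nicholasbarker.com && python start.py &

        # Apache vhost directives (e.g., via Plesk's vhost.conf include), assuming mod_proxy is enabled
        ProxyPass / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/

    For production you would also want the app supervised (an init script or similar) rather than backgrounded by hand.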

    Read the article

  • Rackspace Cloud Server DNS Add SPF Records

    - by user625435
    I've set up my new LAMP server on Rackspace Cloud, and the basic A, CNAME and MX DNS setup is no problem. I need to add an SPF record for a project I am migrating over to this new server that allows emails from a 3rd-party server, and I can't seem to figure out how to do this. There doesn't seem to be an option to add a TXT record in my Rackspace Cloud Server interface. I installed BIND DNS on my Apache server, but I am not sure how to get that to be seen, etc.
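
    For reference, SPF is published as an ordinary TXT record, so whichever DNS ends up authoritative (the Rackspace interface or your own BIND) only needs to support TXT; a sketch in BIND zone-file syntax, where the domain and the third-party sender are placeholders:

        ; allow this domain's A and MX hosts plus a third-party sender to mail as example.com
        example.com.    3600    IN    TXT    "v=spf1 a mx include:thirdparty-mailer.example ~all"

    Note that running your own BIND only takes effect once the domain's registrar delegation points at your nameservers; a zone that nothing delegates to is never consulted.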

    Read the article

  • Access Home Network Server via External Address (DSL vs Cable)

    - by Dominic Barnes
    For the last few months, I've been using a server on my home network for basic backups and hosting some small websites. Up until this past week, I've been using Comcast (cable) as an ISP and now that I've moved into an apartment, I'm using AT&T. (DSL) I've set up dynamic DNS and I can verify it works externally. However, I can't seem to access the public address from within the local network. Is there something DSL does differently from Cable that makes this frustration possible?
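
    What usually differs here is NAT loopback (hairpinning): some gateways, including many DSL modem/routers, won't route a LAN client back in through the WAN address, while the previous cable router apparently did. A minimal workaround sketch, overriding the dynamic-DNS name on LAN clients (hostname and address are placeholders for your own):

        # /etc/hosts on each LAN client (or the equivalent Windows hosts file)
        192.168.1.50    myserver.dyndns.org

    Alternatively, check whether the DSL gateway exposes a "NAT loopback" or "hairpin NAT" option that can be enabled.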

    Read the article

  • enabling gzip with htaccess...why is it hit or miss?

    - by adam-asdf
    I have shared hosting through JustHost. I use the HTML5 Boilerplate .htaccess (I have tried other methods from here and there without luck); the compression part is as follows:

        <IfModule mod_deflate.c>
        # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
        <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
        SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
        RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
        </IfModule>
        </IfModule>

        # Compress all output labeled with one of the following MIME-types
        <IfModule mod_filter.c>
        AddOutputFilterByType DEFLATE application/atom+xml \
                                      application/javascript \
                                      application/json \
                                      application/rss+xml \
                                      application/vnd.ms-fontobject \
                                      application/x-font-ttf \
                                      application/xhtml+xml \
                                      application/xml \
                                      font/opentype \
                                      image/svg+xml \
                                      image/x-icon \
                                      text/css \
                                      text/html \
                                      text/plain \
                                      text/x-component \
                                      text/xml
        </IfModule>
        </IfModule>

    However, it isn't working (at least I don't think so): my home page (HTML) isn't compressing, and the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS. However, some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/

    What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating (and no warnings about compression), yet YSlow and Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this.

    To reduce cookies I set up a subdomain on my site, and I thought maybe that was it, so I added an .htaccess there also, but no luck. To reduce HTTP requests I embedded some of the webfonts and images in CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed), so I thought maybe that was it, and I spent all day separating and asynchronously loading those portions (via Modernizr.load), but that hasn't helped either... if anything it made it worse, due to increasing HTTP requests (I realize speed scores of async resources may be misleading).

    Researching this, it seems to be a fairly common issue, but I haven't found an explanation/solution. I don't think it is a MIME-type issue; I have quadruple-checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22, and I have looked at everything I can find. What gives?
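
    A quick way to separate a server-side problem from a measurement artifact is to request a page with an explicit Accept-Encoding header and look at what comes back; a sketch using curl:

        # fetch the page, discard the body, print response headers, look for Content-Encoding
        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip,deflate" http://adaminfinitum.com/ | grep -i content-encoding

    If no Content-Encoding: gzip line appears for text/html, the directives are likely being ignored server-side; shared hosts sometimes compile Apache without mod_filter or disallow mod_deflate in .htaccess, in which case AddOutputFilterByType silently does nothing inside the <IfModule> guards, and the host's support would need to confirm which modules are loaded.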

    Read the article

  • How do I make kdump use a permissible range of memory for the crash kernel?

    - by Philip Durbin
    I've read the Red Hat Knowledgebase article "How do I configure kexec/kdump on Red Hat Enterprise Linux 5?" at http://kbase.redhat.com/faq/docs/DOC-6039 and http://prefetch.net/blog/index.php/2009/07/06/using-kdump-to-get-core-files-on-fedora-and-centos-hosts/

    The crashkernel=128M@16M kernel parameter works fine for me in a RHEL 6.0 beta VM, but not on the RHEL 5.5 hosts I've tried. dmesg shows me:

        Memory for crash kernel (0x0 to 0x0) notwithin permissible range
        disabling kdump

    Here's the line from grub.conf:

        kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@16M

    How do I make kdump use a permissible range of memory for the crash kernel?
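
    The "(0x0 to 0x0)" means the kernel never managed to reserve the requested region; on RHEL 5 this often comes down to the size/offset combination not fitting the machine's memory map. A hedged experiment is to vary the reservation in grub.conf and recheck dmesg after each boot (the values below are guesses to try, not known-good settings for this hardware):

        kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@32M
        # or a smaller reservation at the documented offset:
        kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=64M@16M

    Comparing the BIOS-provided memory map (the BIOS-e820 lines early in dmesg) against the requested range shows whether the region is actually free to reserve.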

    Read the article
