Search Results

Search found 10307 results on 413 pages for 'apache chainsaw'.

Page 328/413 | < Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >

  • nagiosgraph new services not showing

    - by Eleven-Two
    I am using Nagios Core with Nagiosgraph, and for a while had graphing enabled only for CPU usage. That worked fine, but now I want to add some more services (memory usage, for example). The new services are not working: no RRD data is generated, the Nagiosgraph site only says "no data available", and I get no error in the Apache log, nagiosgraph.log, or nagiosgraph-cgi.log. The new services are standard ones (NSClient++ MEMUSE, for example) and of course they are included in the map file. If I execute the checks manually, they also show perfdata. I added the services by enabling the "graphed-service" template via use. Did I miss something?
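
    A minimal sketch of the kind of definition in question; the host name, thresholds, and check command are assumptions, not taken from the poster's config:

        define service {
            use                   generic-service,graphed-service
            host_name             winhost1
            service_description   Memory Usage
            check_command         check_nt!MEMUSE!-w 80 -c 90
        }

    If a definition like this shows perfdata when run by hand, the remaining things to verify are that process_perf_data is enabled for the service and that the map file rule actually matches the MEMUSE perfdata format.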

    Read the article

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve the old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and redirect them to the new static ones, e.g.:

        http://www.example.org/log/?p=1234
        http://www.example.org/log/index.php?p=1234

    should redirect to

        http://www.example.org/log/archives/1234.html

    I've tried adding the following to the VirtualHost config for example.org, but to no effect; I just get the PHP page:

        RewriteCond %{REQUEST_URI} /log/
        RewriteCond %{QUERY_STRING} p=([^&;]*)
        RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

    I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?
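
    One thing worth noting: in VirtualHost context the RewriteRule pattern is matched against the full URL path, so ^/$ only ever matches the site root and can never match /log/. A sketch of a rule that accounts for that (the R=301 flag and the optional index.php alternative are choices, not from the original post):

        RewriteCond %{QUERY_STRING} (?:^|&)p=([^&;]+)
        RewriteRule ^/log/(?:index\.php)?$ http://%{SERVER_NAME}/log/archives/%1.html [R=301,L]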

    Read the article

  • Safari, IIS and optional Client Certificates

    - by Philipp
    I have an ASP.NET web app running on IIS 7.5. The web server is configured to accept client certificates. Unfortunately, visitors using the Safari browser are unable to view the page. It is the same problem as described under the following link: http://www.mnxsolutions.com/apache/safari-providing-an-ssl-error-client-certificate-rejected%E2%80%9D-when-other-browsers-work.html Does anyone know how to solve this? I'd really appreciate your help. Edit: this seems to be the same problem: http://superuser.com/questions/231695/iis7-5-ssl-question-safari-users-get-a-prompt-of-certificate-to-select

    Read the article

  • Centos OS resource footprint vs Ubuntu package refresh

    - by webworm
    I am trying to determine which distro to sink my teeth into. I am new to the Linux world and would like to choose one distro to focus on. I have read that CentOS uses fewer resources than Ubuntu, which matters to me since I am renting a VPS and resource cost is an issue. I have also read that Ubuntu has more up-to-date packages, which is a concern because I want to use PHP and some packages that have a fair number of dependencies. I am not using Linux as a desktop OS, just as a server for Apache, PHP, Perl, and Java development. What would be the best choice for a server OS, CentOS or Ubuntu? Are the resource requirements really that different? Are the packages that different between the two? Thanks.

    Read the article

  • backup, sync and search files over internet and intranet

    - by Cawas
    There are many online backup options out there: Dropbox, SugarSync, Mozy, Carbonite, Jungle Disk, and my favorite so far, CrashPlan. Some of them allow searching, all of them sync with their own online servers, but none of them (nor the many others I haven't listed here) has what I want. I am not looking for an online backup service here. Sure, some people might say "use rsync", "Linux", and/or "set up Apache" and so on, but that's just too much maintenance, if it's even feasible to build. It needs to be simple. So, does anyone know of a really good solution out there? Picture Google Desktop Search's excellent searching, mixed with CrashPlan Desktop, which is able to do everything by itself, plus something like Dropbox's file versioning, along with the ability to seamlessly sync over intranet and Internet like CrashPlan, switching between them when needed. I bet there's nothing like this yet, but I'm not sure. It would be great!

    Read the article

  • dig gets the right result from DNS server, but name still fails to resolve

    - by EMiller
    Under what conditions would the following occur? From a given OS X machine on an internal network:

        $~ cat /etc/resolv.conf
        nameserver 10.102.120.7
        nameserver 10.102.120.2

    From the same machine:

        $~ dig @10.102.120.7 in.local
        <snip> ...
        ;; QUESTION SECTION:
        ;in.local.              IN      A

        ;; ANSWER SECTION:
        in.local.       43200   IN      A       10.102.123.30
        <snip> ...

    And yet this workstation cannot ping in.local, nor load pages hosted by Apache on that machine. 10.102.123.30 is definitely up. Two OS X machines I know of fail to resolve in.local, but other machines on the network can. I have also checked their /etc/hosts to see if anything there might interfere. Not sure what else to check...
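
    One detail that often bites on OS X: dig talks straight to the name server, while ping and Safari go through the system resolver, which treats .local names specially (they are handed to multicast DNS/Bonjour rather than unicast DNS). A couple of diagnostic commands that exercise the same path the applications use (a sketch; in.local is the name from the question):

        # show the resolver configuration the OS actually uses, including any .local handling
        scutil --dns

        # query through the system resolver, as ping/Safari would
        dscacheutil -q host -a name in.local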

    Read the article

  • Is it possible to route *.example.com to a single machine without registering extra domains?

    - by oligofren
    I would like to achieve something similar to what wordpress.com does: giving each user their own subdomain. In the VirtualHost setup of Apache, user1.wordpress.com would have its DocumentRoot at /user/user1, for instance. Now, our hosting service provider charges a fee for creating each domain, and in our case this would mean a ridiculous number of domains with a matching price. After some googling on DNS I came across a description of a DNAME record, which seems to fit the bill precisely. Is there any reason why my service provider would not do this, or why I should not do this?
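
    For what it's worth, the combination usually used for this is a single wildcard DNS record plus Apache's mod_vhost_alias, rather than DNAME (which aliases one domain's entire subtree to another domain). A sketch, with the IP address as a placeholder:

        ; one wildcard record in the zone covers every subdomain
        *.example.com.   IN   A   203.0.113.10

        # Apache side: map the first hostname label to a directory
        <VirtualHost *:80>
            ServerAlias *.example.com
            VirtualDocumentRoot /user/%1
        </VirtualHost>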

    Read the article

  • dav_svn write access

    - by canavar
    Good day! I am configuring dav_svn and Apache with LDAP auth. What I want to do: allow anonymous READ access to the repo, and allow write access only to authenticated users. Here comes my config:

        # Uncomment this to enable the repository
        DAV svn
        SVNPath /home/svn/ldap-test-repo
        AuthType Basic
        AuthName "LDAP-REPO Repository"
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative on
        AuthLDAPBindDN "cn=svn,ou=applications,dc=company,dc=net"
        AuthLDAPBindPassword "pass"
        AuthLDAPURL ldap://ldap.company.net:389/ou=Users,dc=company,dc=net?uid?sub?(objectClass=person)
        <Limit GET PROPFIND OPTIONS REPORT>
            Allow from all
        </Limit>
        <LimitExcept GET PROPFIND OPTIONS REPORT>
            Require ldap-group cn=group,ou=services,dc=company,dc=net
        </LimitExcept>

    But when I test it, this config doesn't work: I can check out without auth and also commit without auth... What am I doing wrong? Thanks!

    Read the article

  • What is the Optimal Server Configuration for Split-Path Testing?

    - by doug
    I am far from an expert on Apache, or any server for that matter, so I apologize if this question is poorly worded, which it likely is. We have always relied on a vendor for split-path testing (aka "A/B testing"). If you're not familiar with the term, it's a form of marketing research in which you slightly modify one of your web pages (usually one nearest the point of conversion), for instance by changing the position of the "Buy Now" button or its color/contrast/texture, then serve one of the two versions to a given user based on random selection. By doing split-path testing ourselves, I suspect we can do it far more cheaply and shorten cycle times as well. What is the optimal setup for these tests? "Optimal" is based on the following criteria: how quickly and easily new tests can be set up and put online, and minimal disruption to overall site performance.
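
    One low-machinery way to do the random split in Apache itself is a rnd: RewriteMap, which picks randomly among |-separated values. This is a sketch, not a full A/B framework; file names and paths are made up:

        # httpd.conf (RewriteMap is not allowed in .htaccess)
        RewriteEngine On
        RewriteMap variants rnd:/etc/apache2/ab.map
        RewriteRule ^/buy\.html$ /${variants:buy} [PT]

        # /etc/apache2/ab.map
        # key   value1|value2   (one is chosen at random per request)
        buy     buy_a.html|buy_b.html

    A per-request random pick means the same visitor can flip between variants; in practice you would also set a cookie so each user sticks to one branch for the life of the test.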

    Read the article

  • Polling performance on shared host

    - by Azincourt
    I am planning on writing a small browser game. The web server is a shared server, with no root access and no installs possible. I want to use AJAX for client/server communication. There will be 12 players, and each player would poll the server for the current game status every X milliseconds (let's say 200 ms). That works out to 12 players × 5 requests per second each = 60 requests per second. Can Apache handle those requests? What might be the bottlenecks with this approach?
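
    The cheapest way to answer "can Apache handle it" is to measure on the actual shared host with ApacheBench, which ships with Apache (a sketch; the URL stands in for the real status endpoint):

        # 12 concurrent clients, 6000 requests total: roughly the steady-state
        # load of 12 players each polling every 200 ms
        ab -c 12 -n 6000 http://example.com/game/status.php

    Raw request rate is rarely the problem at 60 req/s; the usual bottlenecks are per-request PHP startup and any per-poll database query, which is why longer poll intervals or long polling are often preferred.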

    Read the article

  • "Countersigning" a CA with openssl

    - by Tom O'Connor
    I'm pretty used to creating the PKI used for X.509 authentication for whatever reason, SSL client verification being the main one. I've just started to dabble with OpenVPN (which I suppose does the same things Apache would do with the Certificate Authority (CA) certificate). We've got a whole bunch of subdomains and appliances which currently all present their own self-signed certificates. We're tired of having to accept exceptions in Chrome, and we think it must look pretty rough for our clients having our address bar come up red. For that, I'm comfortable buying an SSL wildcard, CN=*.mycompany.com. That's no problem. What I can't seem to find out is: can we have our internal CA root signed as a child of our wildcard certificate, so that installing that cert into guest devices/browsers/whatever doesn't warn about an untrusted root? Also, on a bit of a side point, why does the addition of a wildcard double the cost of the certificate purchase?
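
    Whether a certificate may sign other certificates at all is governed by its Basic Constraints extension (CA:TRUE) and key usage bits, which a commercial end-entity wildcard certificate will not carry. You can inspect what a given certificate is allowed to do with openssl (a sketch; the file name is a placeholder):

        openssl x509 -in wildcard.crt -noout -text | grep -A1 'X509v3 Basic Constraints'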

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share to which several frontend servers are connected, to make the files stored on it available for HTTP download. It looks like I have problems with the way Apache serves the files: there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory; with this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for it, because the documentation says it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Has anyone experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
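
    For reference, the nginx knob that matches this description is output_buffers, the number and size of the buffers used when sendfile is off. A sketch with deliberately large, made-up values:

        location /downloads/ {
            sendfile       off;     # read through nginx's own buffers instead of sendfile()
            output_buffers 1 4m;    # one 4 MB buffer per connection
        }

    With sendfile on, reads bypass these buffers and the read size is governed by the kernel, so the two settings need to be considered together.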

    Read the article

  • ErrorDocument 404 not found in non-existent subdomain

    - by Question Overflow
    I am trying to get the Apache server to issue a custom 404 error for invalid subdomains. The following is the relevant part of the httpd configuration:

        Alias /err/ "/var/www/error/"
        ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var

        <VirtualHost *:80>
            # the default virtual host
            ServerName site_not_found
            Redirect 404 /
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias ??.example.com
        </VirtualHost>

    What I get instead is this:

        Not Found
        The requested URL / was not found on this server.
        Additionally, a 404 Not Found error was encountered while trying
        to use an ErrorDocument to handle the request.

    I don't understand why a URL to non-existent-subdomain.example.com produces this bare 404 without the custom error page, while a URL to e.g. example.com/non-existent-file produces the full custom 404. Can someone advise on this? Thanks.
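
    A plausible explanation (an assumption, not verified against the poster's server): Redirect 404 / matches every path in that virtual host, including the internal subrequest for /err/HTTP_NOT_FOUND.html.var, so the ErrorDocument itself 404s, which is exactly the fallback message shown. A sketch that exempts the error directory inside the catch-all vhost:

        <VirtualHost *:80>
            ServerName site_not_found
            Alias /err/ "/var/www/error/"
            ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var
            # return 404 for everything except the error documents themselves
            RedirectMatch 404 ^/(?!err/)
        </VirtualHost>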

    Read the article

  • CentOS: OpsCenter does not see other node's agent

    - by Alice
    I'm new to Apache Cassandra. I am trying to set up a small sample cluster using two CentOS servers. I followed the documentation (tarball installation) and the nodes are up. However, in OpsCenter the nodes cannot see each other's agent: there is always "1 of 2 agents connected", and whatever I try, nothing changes. I tried both disabling and enabling SSL, I tried setting incoming_interface in opscenter.conf, and I tried almost everything the Internet suggested, but the problem persists. Right now I have SSL enabled, and the agent log tells me: "There was an error when attempting to load stored rollups." Is there someone who could help me, please?
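
    In case it helps to frame the question: each DataStax agent has its own address.yaml, separate from opscenter.conf, and a common cause of "1 of 2 agents connected" is an agent that doesn't know which interface to reach OpsCenter on. A sketch (the path is typical for tarball installs, and the addresses are placeholders):

        # agent/conf/address.yaml on each node
        stomp_interface: 10.0.0.1    # IP of the OpsCenter machine
        local_interface: 10.0.0.2    # this node's own IP, if it has several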

    Read the article

  • Best server OS for running up-to-date software

    - by rjstelling
    I need to configure a server (*nix) that runs our bespoke CMS and applications. In the past I have defaulted to using CentOS 5, but I find it difficult to upgrade the software to the versions we require. For example, we need PHP 5.3, but CentOS 5 ships 5.2. Updating is possible but breaks something else (normally MySQL support in PHP). Eventually it gets to a situation where I can't upgrade because of missing dependencies and incompatible versions:

        Error: Missing Dependency: httpd = 2.2.3-43.el5.centos.3 is needed
        by package httpd-devel-2.2.3-43.el5.centos.3.i386 (updates)

    Is there a better alternative OS for hassle-free updates? I need:

        Apache 2.2.17 (plus the development package, for apxs)
        MySQL 5.5.8
        PHP 5.3.5

    Read the article

  • Mod_rewrite issue with GoDaddy web hosting

    - by MrFoh
    I am trying to use Laravel to build a site, but my routes all redirect to the homepage. The Apache error log shows this:

        AH00124: Request exceeded the limit of 10 internal redirects due to
        probable configuration error. Use 'LimitInternalRecursion' to increase
        the limit if necessary. Use 'LogLevel debug' to get a backtrace.

    And the .htaccess file is this:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            Options +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    The web root has multiple sub-folders which are document roots for different domains; I am working in one of these sub-folders. What is causing this error, and how can it be fixed?
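
    A sketch of an alternative worth trying (this matches the rule Laravel itself ships, which passes the path via the request URI rather than PATH_INFO; whether it fixes this particular host is an assumption):

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ index.php [L]

    Appending /$1 relies on Apache's PATH_INFO handling, which some shared hosts configure differently; dropping it removes one common source of rewrite loops.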

    Read the article

  • Invalid command 'VirtualDocumentRoot'

    - by andy
    I'm unsure why I'm getting the following error when Apache is restarted:

        Invalid command 'VirtualDocumentRoot', perhaps misspelled or defined
        by a module not included in the server configuration
        Action 'start' failed.

    The snippet it is referring to is this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            VirtualDocumentRoot /local/www/staging/%1
            ServerAlias *.staging.mydomain.com
        </VirtualHost>

    I assumed it was a misspelling, as the error says, but the snippet was copied directly from another server of mine, where it works perfectly. Any ideas?
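
    As the error text hints, VirtualDocumentRoot comes from mod_vhost_alias, so the likely difference between the two servers is simply that the module is loaded on one and not the other. A sketch for Debian-style layouts (command names differ on other distributions):

        a2enmod vhost_alias
        apachectl configtest && service apache2 restart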

    Read the article

  • Squid stale-while-revalidate not working when max-age=0

    - by Wiliam
    Squid 2.7 always reaches the backend; the expected behaviour is to reach the backend via stale-while-revalidate only when the cache expires, not whenever the client sends max-age=0. Script:

        <?php
        header('Cache-Control: public, max-age=10, stale-if-error=200, stale-while-revalidate=500');
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        sleep(2);
        die("OK");

    And the Squid config:

        # http_port public_ip:port accel defaultsite=<default hostname, if not provided>
        http_port 80 accel defaultsite=mydomain.com

        # IP and port of your main application server (or multiple)
        cache_peer 127.0.0.1 parent 8000 0 no-query allow-miss originserver name=main

        # Do not tell the world which Squid version we're running
        httpd_suppress_version_string on

        # Remove the Cache-Control header for upstream servers
        header_access Cache-Control deny all
        #header_access Last-Modified deny all

        # log all incoming traffic in Apache format
        logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
        access_log /usr/local/squid/var/logs/squid.log combined all

        cache_effective_user squid
        refresh_pattern . 10080 90% 999999 ignore-no-cache override-expire ignore-private
        icp_port 0
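
    A hedged observation: a client request carrying Cache-Control: max-age=0 (what browsers send on reload) legitimately forces Squid to revalidate, regardless of stale-while-revalidate on the response. Squid 2.7's refresh_pattern has options specifically for overriding client reloads; a sketch adding one to the existing line:

        # ignore-reload: pretend the client did not send no-cache / max-age=0
        # (reload-into-ims is the gentler variant, turning reloads into If-Modified-Since checks)
        refresh_pattern . 10080 90% 999999 ignore-no-cache ignore-reload override-expire ignore-private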

    Read the article

  • Lighttpd referer issue

    - by Chris
    I have a problem blocking files from being accessed from domains other than my own. I have added the following to my lighty config in the relevant virtual host:

        $HTTP["referer"] !~ "^($|http://www\.my-site\.net)" {
            url.access-deny = ( "" )
        }

    But the site www.example.com can still access http://player.my-site.net/player.swf, and it can also be accessed directly, without a referrer. Any idea? Edit: here is my old Apache .htaccess with a rewrite rule that works perfectly, but I don't know how to convert it for lighty:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_REFERER} !^http://my-site\.net/ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www\.my-site\.net/ [NC]
        RewriteCond %{HTTP_REFERER} !^http://player\.my-site\.net/ [NC]
        RewriteCond %{HTTP_REFERER} !^http://stream\.my-site\.net/ [NC]
        RewriteRule .* - [L,R=404]
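
    A sketch of a lighttpd condition that mirrors the .htaccess more closely (case-insensitive matching, all four hostnames; the surrounding host match is an assumption about where the rule should live):

        $HTTP["host"] == "player.my-site.net" {
            $HTTP["referer"] !~ "(?i)^http://(www\.|player\.|stream\.)?my-site\.net/" {
                url.access-deny = ( "" )
            }
        }

    Note this version also denies requests with an empty Referer, matching the Apache rule's behaviour; the ^($|...) alternative in the original lighttpd regex is what allowed direct, referrer-less access.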

    Read the article

  • Thin web server - single or multiple instances per IP address:port?

    - by wchrisjohnson
    I'm deploying a Rack/Sinatra/WebSocket app onto several servers, and will use Thin as the web server (http://code.macournoyer.com/thin/). There are almost no views to show, so I am not front-ending it with a traditional web server like Apache or nginx. Typically, Thin's config file specifies the number of server instances to start, say 3, and the first port, say 5000; when Thin starts, it brings up three instances on a range of ports beginning at 5000. If I have a series of virtual machines, say 3, 6, 9, etc., that I treat as a cluster, should I start a single Thin instance on each VM, or multiple instances per VM? Why? Thanks - Chris
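
    For concreteness, a minimal sketch of the kind of cluster config being described (paths and values are placeholders):

        # /etc/thin/myapp.yml
        chdir: /srv/myapp
        environment: production
        port: 5000      # first instance on 5000, then 5001, 5002
        servers: 3
        pid: tmp/pids/thin.pid
        log: log/thin.log

    Started with thin -C /etc/thin/myapp.yml start. Since each Thin instance is a single evented process, one instance per CPU core on each VM is the usual starting point.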

    Read the article

  • is there any valid reason for users to request phpinfo()

    - by The Journeyman geek
    I'm working on writing a set of rules for fail2ban to make life a little more interesting for whoever is trying to brute-force their way into my system. A good majority of the attempts revolve around trying to reach phpinfo() via my web server, as below:

        GET //pma/config/config.inc.php?p=phpinfo(); HTTP/1.1
        GET //admin/config/config.inc.php?p=phpinfo(); HTTP/1.1
        GET //dbadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1
        GET //mysql/config/config.inc.php?p=phpinfo(); HTTP/1.1

    I'm wondering if there's any valid reason for a user to attempt to access phpinfo() via Apache. If not, I can simply use that, or more specifically the regex GET //[^>]+=phpinfo\(\), as a filter to eliminate these attacks.
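
    A sketch of what such a filter could look like in fail2ban's format (the filter name and log path are made up, and the failregex is a slightly more anchored variant that assumes a combined-format access log):

        # /etc/fail2ban/filter.d/phpinfo-probe.conf
        [Definition]
        failregex = ^<HOST> .* "GET /[^"]*=phpinfo\(\)
        ignoreregex =

        # jail.local snippet
        [phpinfo-probe]
        enabled = true
        filter  = phpinfo-probe
        logpath = /var/log/apache2/access.log
        action  = iptables-multiport[name=phpinfo, port="http,https"]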

    Read the article

  • I'll be setting up a dedicated web server at work soon, my first non-hobby server - what should I know?

    - by Rogue Coder
    I've been running my own dedicated server with CentOS and a LAMP stack for 2-3 years now, but it only hosts my own websites, which aren't super important. However, I will soon be setting up a Linux web server and a Linux database server at work, and I'm wondering what important things I should be doing. It's an internal server only, so only people in the company can access it. Should I get a slave server for both of my servers for backups? If so, how many backups should I keep, and how often should they be made? Right now on my current server I run a nightly cron job to back up my MySQL databases (usually 40 MB files once compressed) and bi-weekly cron jobs to back up my web root, and I just store these files on my local computer via FTP. Also, for an internal server like this, should I look at using lighttpd or nginx to increase performance, or will Apache be fine?
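
    For reference, the kind of nightly dump job described might look like this in the crontab (paths and retention are placeholders, credentials are assumed to live in ~/.my.cnf, and note that % must be escaped in crontab entries):

        # 02:30 nightly: dump all databases, dated and compressed
        30 2 * * * mysqldump --all-databases | gzip > /backup/mysql-$(date +\%F).sql.gz
        # 03:00 nightly: prune dumps older than 30 days
        0 3 * * * find /backup -name 'mysql-*.sql.gz' -mtime +30 -delete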

    Read the article

  • Let CGI-PHP load a non-default shared library.

    - by ralle
    In Apache 2 I configured PHP as CGI in a virtual host:

        SetEnv PHPRC "/usr/local/php5.3"
        ScriptAlias /php5.3 "/usr/local/php5.3/bin"
        Action application/php5.3 /php5.3/php-cgi
        AddType application/php5.3 .php

    Everything works fine. Now I have some issues with the standard version of GD, because it restricts the hinting and anti-aliasing settings I can use for fonts. I therefore want to modify the GD source and build a new shared library. Since I don't want a modified library in my system, I want only PHP to use that library. My question: how can I change the Apache configuration so that PHP uses this particular version of the library? Something like this does not work:

        ScriptAlias /php5.3 "LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH /usr/local/php5.3/bin"
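
    ScriptAlias only maps URLs to filesystem paths, so it cannot set environment variables. The usual workaround is a small wrapper script that exports LD_LIBRARY_PATH and execs the real binary (a sketch; /path/to/my/lib is the placeholder from the question):

        #!/bin/sh
        # /usr/local/php5.3/bin/php-cgi-wrapper
        LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH
        export LD_LIBRARY_PATH
        exec /usr/local/php5.3/bin/php-cgi "$@"

    Then point the Action directive at the wrapper instead of php-cgi, and make the script executable.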

    Read the article

  • Best design for a data server serving small (~40 KB) pictures

    - by Nicolas Manzini
    I'm designing the server structure for my application, in case things go well. I have one database server connected to multiple servers that process connections, all with lots of RAM and fast processors. (I'm still looking for a way to use multithreading, because right now it's plain Apache and PHP, so lots of RAM is needed.) After an answer from those servers, the client can then connect to another server to retrieve pictures, using the address it previously got from the DB. Is it a good idea to have one picture server, say with nginx and an SSD disk, sending all pictures to everybody? Or should I have multiple servers accessing a shared SSD drive, or multiple disks updating each other? Also, should I put a lot of RAM in the database server? Probably no picture will be much more popular than another.

    Read the article

  • IPv4 NameVirtualHost, IPv6 VirtualHost

    - by MadHatter
    Like many of us, I have an Apache server (2.2.15, plus patches) with a lot of virtual hosts on it. More than I have IPv4 addresses, to be sure, which is why I use NameVirtualHost to run lots of them on the same IPv4 address. I'm busily trying to get everything I do IPv6-enabled. This server now has a routed /64, which gives me an awful lot of v6 addresses to throw around. What I'm trying to find is a simple way to tell each v4 name-based virtual host that it should also function as a VirtualHost on a unique IPv6 address. I really, really don't want to have to define each virtual host twice. Does anyone know of an elegant way to do this? Or to do something comparable, in case I've embedded any dangerously ignorant assumptions in my question?
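
    One Apache feature that may help frame this: a single VirtualHost block accepts several addresses, so each site can listen on the shared v4 address (name-based) and its own v6 address without duplicating the definition. A sketch with documentation-prefix addresses standing in for real ones:

        NameVirtualHost 192.0.2.1:80

        <VirtualHost 192.0.2.1:80 [2001:db8::1]:80>
            ServerName site1.example.com
            DocumentRoot /srv/www/site1
        </VirtualHost>

        <VirtualHost 192.0.2.1:80 [2001:db8::2]:80>
            ServerName site2.example.com
            DocumentRoot /srv/www/site2
        </VirtualHost>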

    Read the article
