Search Results

Search found 10307 results on 413 pages for 'apache chainsaw'.

  • Rejecting new HTTP requests when server reaches a certain throughput

    - by user56221
    I have a requirement to run an HTTP server that rejects new HTTP requests (with a 503, or similar) when the global transfer rate of current HTTP responses exceeds a certain level. For example, if the web server is transferring at 98 Mbps and a new HTTP request arrives, we would want to reject it (as we couldn't guarantee a good speed). I've had a look at mod_cband for Apache, limit_req for nginx, and lighttpd's rate-limiting features, but none of them seem to handle my (rather contrived, granted) use case. I should add that I'm open to using pretty much any web server, and am open to implementing this in iptables rules if someone can craft such a rule! (Refusing the TCP connection is fine; it doesn't have to respond with an HTTP 503.) Any suggestions?
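
    For what it's worth, none of the stock rate-limiting modules meter aggregate throughput, but it can be approximated out of band: poll the interface's TX byte counter and toggle an iptables REJECT rule around a threshold. A rough sketch, not a tested solution; the interface name and threshold are assumptions, and iptables must be new enough to support -C:

        #!/bin/sh
        # Poll the TX counter once a second; reject new port-80 connections
        # (with a TCP reset) while the rate stays above the limit.
        IFACE=eth0
        LIMIT=12250000   # bytes/sec, roughly 98 Mbit/s
        RULE="INPUT -p tcp --dport 80 -m state --state NEW -j REJECT --reject-with tcp-reset"
        while :; do
            TX1=$(grep "$IFACE:" /proc/net/dev | tr -s ': ' ' ' | awk '{print $10}')
            sleep 1
            TX2=$(grep "$IFACE:" /proc/net/dev | tr -s ': ' ' ' | awk '{print $10}')
            if [ $((TX2 - TX1)) -gt "$LIMIT" ]; then
                iptables -C $RULE 2>/dev/null || iptables -I $RULE   # $RULE unquoted on purpose
            else
                iptables -C $RULE 2>/dev/null && iptables -D $RULE
            fi
        done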

    Read the article

  • Migrating a physical server to a virtual solution: what do I have to do?

    - by bibarse
    Hello, I'm new to this forum, so please forgive my mistakes and my poor English. I started as a trainee at a company a month ago, and my task is to migrate 3 physical servers to a virtualization technology. The company makes e-learning software, so there is a lot of data: videos, Flash, and compressed (zip) files. A rough inventory of the servers: OS: Debian and 2 Red Hat; Apache, PHP/MySQL, Sendmail/Dovecot, and Webmin with the Virtualmin template to create the websites dynamically, because there is no sysadmin... The future provider will be responsible for securing, updating, and creating the virtual machines (outsourcing), on Red Hat OSes. So I would like your help choosing a virtualization technology (I'd prefer KVM or Red Hat RHEV; VMware is expensive), evaluating the hardware needs (allowing for 4 or 5 years of growth), and drawing up a good plan so that nothing is forgotten. Thank you for your responses.

    Read the article

  • php.ini settings change not taking effect for large file uploads

    - by user51347
    My server was just reprovisioned, and my application, which uploads large (100 MB+) files, now breaks after reinstallation. The symptom is quite consistent: smaller files (8 MB in my tests) upload just fine, while larger files cut off at very nearly the same point every time on a given computer. A file that fails at 26% will fail at just about the same spot every time, and an upload that takes 1:40 to fail will fail within 2 seconds of that every time. I have set my php.ini settings extravagantly:
        post_max_size = 512M
        upload_max_filesize = 512M
        max_input_time = 3600
        max_execution_time = 3600
    Is there possibly a setting at the Apache level which would override PHP?
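
    One thing worth ruling out, offered as a guess rather than a diagnosis: Apache-side caps do override php.ini, and under mod_fcgid there are both size and timeout limits; the consistent time-to-failure (about 1:40 regardless of file) smells like a timeout. A sketch of directives to check, with illustrative values:

        # httpd.conf / vhost: a request-body cap trumps anything in php.ini
        # (0 = unlimited).
        LimitRequestBody 0
        # Only if PHP runs under mod_fcgid -- it has its own size and I/O limits:
        FcgidMaxRequestLen 536870912
        FcgidIOTimeout 3600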

    Read the article

  • Safe to set up a NAS in a virtual image with a web server?

    - by Erik
    My current setup is Ubuntu Desktop running an Apache + MySQL + PHP stack. I use this machine to host my small website to the outside world. I now want to run a virtual machine on the box that will act as the FreeNAS box for my internal network. This sounds like a bad idea if the website gets hacked or attacked. Should I just give up and get a dedicated machine instead? Or can I virtualize the web server and the NAS side by side on this machine?

    Read the article

  • Upgrading Fedora on Amazon to 12 but getting libssl.so.* & libcrypto.so.* are missing

    - by bateman_ap
    I am upgrading to Fedora 12 on an Amazon EC2 instance using the guide here: http://www.ioncannon.net/system-administration/894/fedora-12-bootable-root-ebs-on-ec2/ I managed to do a 64-bit instance OK, but I'm facing some problems with a standard one. On the final part of the install from 11 to 12 I get an error: Error: Missing Dependency: libcrypto.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed) Error: Missing Dependency: libssl.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed) This is referenced in the comments at the link above, but all it says is: "Q: Apache failed, or libssl.so.* & libcrypto.so.* are missing. A: These versions are missing the symlinks they require. Easy fix: go symlink them to the newest versions in /lib." However, I'm afraid I don't know how to do this. If it is any help, running locate libssl.so gives: /lib/libssl.so.0.9.8b /lib/libssl.so.6
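
    For reference, a sketch of the symlink fix the comment describes, assuming the 0.9.8b libraries that locate found are the ones to link against; verify the actual filenames with ls /lib/libssl* /lib/libcrypto* before running this:

        # Create the compatibility symlinks httpd-tools is asking for.
        ln -s /lib/libssl.so.0.9.8b /lib/libssl.so.8
        ln -s /lib/libcrypto.so.0.9.8b /lib/libcrypto.so.8
        ldconfig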

    Read the article

  • Restarting Nagios Using PHP

    - by X-Ware
    I am building a tool that interacts with Nagios; it adds some config files, so a restart is needed. What I need to know is how to restart Nagios from PHP code, since the tool is written in PHP. When I try to do this with: shell_exec("service nagios restart"); the changes do not take place, but when I restart manually from the console, all the changes I made with the PHP script are applied. After a little research I found that I am asking Linux to execute this command while logged in as the apache user, so I changed the command to: shell_exec('echo "mypass" | sudo -S service nagios restart'); Still the same problem: the new config files are not read until I restart manually. Any suggestions will be appreciated :)
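
    A common pattern here, sketched as a suggestion rather than a verified fix: grant the apache user passwordless sudo for exactly this one command, so no password needs to be piped in. On RHEL/CentOS, sudo's default requiretty also blocks sudo from a web process and must be relaxed:

        # /etc/sudoers -- edit with visudo. Paths and user name assumed.
        Defaults:apache !requiretty
        apache ALL=(root) NOPASSWD: /sbin/service nagios restart

    The PHP call then becomes shell_exec('sudo /sbin/service nagios restart 2>&1'); capturing stderr this way usually reveals why the earlier attempts failed silently.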

    Read the article

  • Running Debian as a Virtual Machine Host

    - by Tone
    I need to run a Linux server to host Subversion and Bugzilla over Apache and also act as a file and print server. I also need to host Windows Server 2008 virtual machines for development purposes (I'm a .NET guy). The machine I have is a dual-core AMD 2.5 GHz with 2 GB RAM. So here are my questions:
    1) Should I run Debian (or some other distro) as the base AND create a Debian VM to host Subversion, Bugzilla, and the file/print services, or is it OK to use Debian as both my VM host and for those other roles?
    2) VMware has a free Server edition, and VirtualBox is another free option. Which of these is better suited to what I need to do? Are there any other free (or inexpensive) alternatives out there?
    3) Will I need a GUI with Debian in order to manage my VMs?
    4) Can I run VMs without a GUI to conserve system resources? (See the sketch below.)
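
    On questions 3 and 4: if you go the KVM/libvirt route, no GUI is needed on the host; guests can be managed entirely from the shell. A minimal sketch, assuming the qemu-kvm and libvirt packages are installed; the guest name is a placeholder:

        # Does the CPU support hardware virtualization? (non-zero output = yes)
        egrep -c '(vmx|svm)' /proc/cpuinfo
        # Headless guest management via libvirt:
        virsh list --all
        virsh start win2008-dev
        virsh shutdown win2008-dev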

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases
    I'm noticing a drop in performance after successive load tests. Although our CPU and RAM numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, response time recovers to about 1,000 ms, but if we apply load every 3 minutes or so, it degrades to the point where a request takes 12,000 ms. None of the application servers show lingering Apache processes, and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • Squid always gives TCP_MISS as a reverse proxy

    - by JaakL
    I installed the latest squid3 in front of Apache as a reverse proxy. The problem is that it always logs TCP_MISS; in fact, I have not yet found a single TCP_HIT in the log file, even though most of the content is static. The relevant cache_dir and refresh_pattern values are the defaults, and the directory /var/spool/squid3 exists and contains some files/folders. I have 100+ GB of free storage, but a reconfigure gives the warning "WARNING cache_mem is larger than total disk cache space!", which does not make any sense to me. I have googled a lot and seen others with similar problems, but none of the fixes has helped.
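
    One hedged reading of that warning: no disk cache is actually configured (the Debian squid3 package ships with cache_dir commented out), so cache_mem exceeds a disk cache of effectively zero and nothing is cached to disk. Also check that the origin sends cacheable headers (Cache-Control/Expires); Squid won't cache what it is told not to. A sketch with explicit values, all of them assumptions to adapt:

        # squid.conf -- reverse-proxy (accelerator) sketch.
        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=myAccel
        # Give Squid an explicit disk cache; without a cache_dir line the
        # disk cache space is zero, which matches the warning.
        cache_dir ufs /var/spool/squid3 10000 16 256
        cache_mem 256 MB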

    Read the article

  • Determining where a file is being cached

    - by RoadieRich
    I've got a Debian web server running Apache, from which I regularly download an executable using IE 8 on an XP virtual machine (over the LAN). However, I realised somewhere along the line that I've been repeatedly running the same version (and wondering why my changes weren't showing up). Ctrl+F5 in IE will download the new version (although the page itself always updates with a simple F5). This makes me suspect that the caching is happening in Windows/IE, but I'm not certain. Wherever it's happening, is there an easy way to prevent it at the server? Eventually we're hoping to offer the software to the entire company, and we'd like to avoid having to tell everyone to press Ctrl+F5 every time there's an updated version.
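
    On the server side, one option (a sketch, assuming mod_headers is enabled and the download is an .exe) is to mark the file as requiring revalidation, so IE checks with the server instead of silently reusing its cached copy:

        # Apache config or .htaccess:
        <FilesMatch "\.exe$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>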

    Read the article

  • Rewrite 2 different directories with .htaccess?

    - by jason
    I have a tricky problem (for me at least). I'm trying to rewrite / to the folder /webroot/www. I have some simple code and it works: RewriteRule ^$ /webroot/www/ [L] However, if the URL starts with components followed by anything else (e.g. foo, as in /components/foo), and foo is an actual directory that exists inside components, I should rewrite to /components/foo/www instead. How can I achieve that? I can't seem to figure it out. I'm using Apache with .htaccess.
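
    A sketch of one way to do it, assuming the directory test should run against the filesystem; the extra REQUEST_URI condition stops the rule from re-matching its own output:

        RewriteEngine On
        RewriteRule ^$ /webroot/www/ [L]
        # If /components/foo is a real directory, serve from its www subfolder.
        RewriteCond %{REQUEST_URI} !^/components/[^/]+/www/
        RewriteCond %{DOCUMENT_ROOT}/components/$1 -d
        RewriteRule ^components/([^/]+)/?(.*)$ /components/$1/www/$2 [L]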

    Read the article

  • Setting Environment Variable for Tomcat 6 Servlet

    - by amaevis
    I'm using Ubuntu's default installation of Tomcat 6. I'm deploying a ROOT.war and trying to set an environment variable specific to it, i.e. accessible from System.getenv() in Servlet.init(config). According to the docs (http://tomcat.apache.org/tomcat-6.0-doc/config/context.html), I can specify this in a Context element in conf/Catalina/localhost/ROOT.xml. I've created that with these contents:
        <Context>
          <Environment name="FOO" value="bar" type="java.lang.String" override="false"/>
        </Context>
    And I've deployed the webapp as usual, i.e. to webapps/ROOT.war. System.getenv("FOO") in Servlet.init(config) still returns null. What am I missing?
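
    Worth noting, since it is the likely answer: a Context <Environment> element defines a JNDI env-entry, not an operating-system environment variable, so System.getenv() will never see it. It is read through JNDI, roughly like this:

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.servlet.ServletConfig;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;

        public class MyServlet extends HttpServlet {
            @Override
            public void init(ServletConfig config) throws ServletException {
                super.init(config);
                try {
                    // <Environment> entries live under java:comp/env in JNDI.
                    Context env = (Context) new InitialContext().lookup("java:comp/env");
                    String foo = (String) env.lookup("FOO"); // "bar"
                } catch (NamingException e) {
                    throw new ServletException(e);
                }
            }
        }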

    Read the article

  • nginx block URI request but allow internal directory

    - by Mike Anders
    I'm new to nginx, coming from Apache. I'm trying to block the URIs /_mydir/* (redirecting them to /) while rewriting /ex/(.*)$ to /_mydir/$1. I have tried:
        location /ex/ { rewrite ^/ex/(.*)$ /_mydir/$1 last; }
        location /_mydir { rewrite ^/_mydir/(.*)$ http://$http_host/ redirect; }
    But once I block the /_mydir directory, the rewrite is blocked as well. I have also tried:
        location /_mydir/ { internal; }
    This also ends up blocking the rewrite. All help is greatly appreciated, thanks. UPDATE: I fixed this using: rewrite ^/ex/(.*)$ /_mydir/$1 break;
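
    The difference between the two flags explains the fix: last restarts location matching, so the rewritten URI lands back in the /_mydir/ block and gets redirected or blocked, while break serves the rewritten URI inside the current location. A sketch combining both pieces (return with a URL needs nginx 0.8.42+):

        location /ex/ {
            rewrite ^/ex/(.*)$ /_mydir/$1 break;   # serve in place, no re-match
        }
        location /_mydir/ {
            return 301 /;                          # direct external hits go to /
        }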

    Read the article

  • SVN + Active Directory

    - by rudigrobler
    How do I set up SVN (on a Linux box, CentOS 5.2) to authenticate using Active Directory? Also: any tips or tricks? What should I watch out for? How fine-grained can I make the access (this group has access to these projects, etc.)? And how does this work if I use something like TortoiseSVN to access my repository? What I have learned so far: you need the following modules installed for Apache:
        mod_ldap
        mod_authnz_ldap
        mod_dav
        mod_dav_svn
        mod_authz_svn?
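
    A minimal sketch of the Apache side for reference; the AD server, DNs, bind credentials, and paths are all placeholders to adapt, and per-path (group/project) granularity lives in the authz file:

        # dav_svn.conf -- authenticate against Active Directory over LDAP.
        <Location /svn>
            DAV svn
            SVNParentPath /var/svn/repos
            AuthType Basic
            AuthName "Subversion"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ad.example.com:389/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)"
            AuthLDAPBindDN "CN=svc_svn,CN=Users,DC=example,DC=com"
            AuthLDAPBindPassword "secret"
            Require valid-user
            AuthzSVNAccessFile /etc/svn-authz
        </Location>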

    Read the article

  • How much RAM to convert large (5-6 MB) JPEGs? [closed]

    - by cosmicbdog
    I've got a project where we want to process large JPEGs (5-6 MB) with Apache and PHP (using the GD library). My understanding is that the server decodes the image into a bitmap, making it quite RAM-heavy, and currently we're unable to do it with our 1 GB of memory. Here's the error we get: Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 17408 bytes) How much RAM should we be running with to process images of this size? Edit: As Chris S the purist highlighted below, my post is apparently vague. I am doing the most basic and common manipulation of an image: turning a 4352 px x 3264 px JPEG of 5 MB in size into a 900 px x 675 px file.
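
    Worth noting: the 67108864 in the error is PHP's 64 MB memory_limit, not the machine's RAM. GD decodes to an uncompressed bitmap of roughly width x height x 4-5 bytes, so 4352 x 3264 x 4 is about 54 MB before the 900 x 675 target is even allocated, and 64 MB cannot hold both. A rough sizing sketch; the 5 bytes/pixel and the 1.2 fudge factor are assumptions, not exact GD internals:

        <?php
        // Estimate the memory GD needs to decode an image file.
        function gd_bytes_needed($file) {
            list($w, $h) = getimagesize($file);
            return (int) ($w * $h * 5 * 1.2);
        }
        // 4352 x 3264 comes out around 85 MB with the fudge factor, so raise
        // the limit before calling imagecreatefromjpeg():
        ini_set('memory_limit', '256M');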

    Read the article

  • Java and Sendmail: HELO requires domain address

    - by ealgestorm
    I am trying to set up email from a Java web application hosted under Apache on a Linux (CentOS) server. Sendmail works fine from the command line as root on localhost, but when the Java web app (on the same server) tries to send email, the following Java exception is thrown: 501 5.0.0 HELO requires domain address EDIT: I have read that some people have found this is due to an incorrect hosts entry. Currently the hosts file contains: 127.0.0.1 Centos-VPS localhost.localdomain localhost I'm not sure what the Centos-VPS bit at the start is for, but this is a client's hosted server, so I don't really want to break things. EDIT: see, the RFC is helpful... 501 Syntax error in parameters or arguments. Now I know what the problem is! (Note the sarcasm, people.)
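
    Two hedged options: give the box a fully qualified name in /etc/hosts (e.g. 127.0.0.1 Centos-VPS.example.com Centos-VPS localhost.localdomain localhost), or tell JavaMail what to announce, since it derives the HELO name from the local hostname. A sketch of the latter; the FQDN is a placeholder:

        Properties props = new Properties();
        props.put("mail.smtp.host", "localhost");
        // Name sent in HELO/EHLO; sendmail insists it contains a domain.
        props.put("mail.smtp.localhost", "centos-vps.example.com");
        Session session = Session.getInstance(props);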

    Read the article

  • How to secure a new server OS installation

    - by Pat R Ellery
    I bought (and just received) a new 1U Dell PowerEdge 860 (got it on eBay for $35). I finished installing Ubuntu Server (Ubuntu Server 12.04.3 LTS), and installing Apache/MariaDB/Memcache/PHP5 went great, but I am worried about security. So far I am the only one using the server, but eventually more people (friends, friends of friends) will use it, over SSH etc. I want to know what I can do to secure all the information and not get hacked, whether via the web, SSH, DDoS, or any other possible attack. Does Ubuntu Server do this for you right away, or do I have to fix it myself? Thank you. EDIT: I installed (so far): all dev tools, SSH server, LAMP. I didn't install: graphical interface.
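
    Ubuntu Server ships sane defaults but does not harden itself. A starting sketch of common first steps (package names current for 12.04; adjust to taste):

        # Keep packages patched and ban brute-force SSH attempts.
        sudo apt-get install unattended-upgrades fail2ban
        # Firewall: allow only what you actually serve.
        sudo ufw allow OpenSSH
        sudo ufw allow 80/tcp
        sudo ufw enable
        # In /etc/ssh/sshd_config, then restart ssh:
        #   PermitRootLogin no
        #   PasswordAuthentication no   (keys only)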

    Read the article

  • RedirectPermanent vs RewriteRule [R]

    - by notbrain
    I currently have a perm_redirects.conf file that gets included into my Apache config stack, with lines in the format RedirectPermanent /old/url/path /new/url/path It looks like I'm required to use an absolute URL for the new path, e.g. http://example.com/new/url/path; in the logs I'm getting "incomplete redirect target /new/url/path was corrected to http://example.com/new/url/path" (paraphrased). In the 2.2 docs for RewriteRule, at the bottom, they show the following as a valid redirect, with only a url-path instead of an absolute URL on the right-hand side: RewriteRule ^/old/url/path(.*) /new/url/path$1 [R] But I can't seem to get that format to replicate the RedirectPermanent behaviour. Is this possible?
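
    One hedged observation that usually explains this: in per-directory context (.htaccess or <Directory>), mod_rewrite strips the leading slash before matching, so a pattern anchored at ^/old/ never fires there; in server or <VirtualHost> context the slash is present. A sketch of both variants:

        # Server / <VirtualHost> context -- leading slash in the pattern:
        RewriteRule ^/old/url/path(.*)$ /new/url/path$1 [R=301,L]
        # Per-directory context (.htaccess) -- no leading slash:
        RewriteRule ^old/url/path(.*)$ /new/url/path$1 [R=301,L]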

    Read the article

  • Uploading files greater than 1 MB = connection resets

    - by Legit
    I'm using nginx on the front end as a proxy cache and Apache on the back end. I've set my PHP settings to the following:
        error_log = /var/www/site1/php_error.log
        error_reporting = 22527
        file_uploads = On
        log_errors = On
        max_execution_time = 0
        max_file_uploads = 20
        max_input_time = -1
        memory_limit = 512M
        post_max_size = 0
        upload_max_filesize = 1000M
    What's the problem? Uploading files smaller than 1 MB succeeds, but for anything larger Google Chrome reports: Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I checked for the PHP error log file, but it doesn't exist in that directory. I also checked /var/log/httpd/error_log, but there are no upload-related problems there. I don't know what else might be causing this, so I'm reaching out for your helping hand. Thanks!
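
    The 1 MB boundary matches nginx's default client_max_body_size of 1m, which applies on the proxy in front of Apache regardless of the PHP settings. A sketch of the likely fix; the size is an assumption chosen to match upload_max_filesize:

        # nginx.conf, at http, server, or location level:
        client_max_body_size 1000m;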

    Read the article

  • All HTTPS, or is it OK to accept HTTP and redirect (secure vs. user friendly)

    - by tharrison
    Our site currently redirects requests sent to http://example.com to https://example.com; everything beyond this is served over SSL. For now, the redirect is done with an Apache rewrite rule. Our site deals with money, however, so security is pretty important. Does allowing HTTP in this way pose any greater security risk than simply not opening or listening on port 80? Ideally, redirecting is a little more user-friendly. (I am aware that SSL is only one of a large set of security considerations, so please make the generous assumption that we have done at least a "very good" job of covering the various security bases.)
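
    If the redirect stays, one mitigation sketch (mod_headers assumed, and HSTS requires reasonably modern browsers) is to add Strict-Transport-Security so returning visitors skip the insecure hop entirely:

        <VirtualHost *:80>
            ServerName example.com
            Redirect permanent / https://example.com/
        </VirtualHost>
        # In the SSL vhost -- browsers that have seen this header go straight
        # to HTTPS on later visits:
        Header always set Strict-Transport-Security "max-age=31536000"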

    Read the article

  • Can't access link in network using fully qualified domain name

    - by user1033715
    I have installed Windows Server 2003 and configured a domain controller (domain name xyz.com) and the DNS service, setting the server's fully qualified domain name to server.xyz.com. I have also installed Apache Tomcat on port 8080 on that server and can reach it at "http://localhost:8080", "http://<ip address of server>:8080", and "http://server.xyz.com:8080". That works on the local machine. From another machine on the same network, "http://<ip address of server>:8080" works, but "http://server.xyz.com:8080" gives the error "Could not connect to server.xyz.com". Please guide me in getting this set up: I need to be able to access "http://<ip address of server>:8080" as "http://server.xyz.com:8080", including from outside my network. Any suggestions are highly appreciated.
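
    A hedged first check from the failing client: confirm the name resolves at all and that the client uses the domain controller's DNS server. Note that server.xyz.com will only work from outside the network if a public DNS zone for xyz.com points at a public IP:

        rem Run on the failing client:
        nslookup server.xyz.com
        ipconfig /all
        rem The DNS server shown should be the domain controller. If it cannot
        rem be, a hosts-file entry is a stopgap (IP is a placeholder):
        rem   192.168.1.10  server.xyz.com
        rem   (in C:\Windows\System32\drivers\etc\hosts)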

    Read the article

  • Getting started with server and system administration

    - by sid__
    I am a web developer, and I have been doing a lot of server configuration and system administration tasks lately. I was wondering if someone could recommend books and tutorials that would help me understand how things are done the right way. Though I mostly work with Apache and Nginx for configuring and maintaining my applications, my approach has been largely trial-and-error, based on information from Google and blogs. Could someone recommend books that explain how to get started with system administration tasks (preferably in tutorial fashion, so that I can try things and understand the workings of the system)? I work mostly on Linux systems on EC2. Thanks in advance.

    Read the article

  • PHP 5.3 FastCGI doesn't use the global config's values

    - by mega.venik
    There's a CentOS 6.3 system: Apache 2.2.15 + mod_fcgid + PHP 5.3.3. There's a problem with the date.timezone value. It's set in the global /etc/php.ini like this: date.timezone = "Europe/Moscow" and is not mentioned in the user's local php.ini. As a result, I'm getting lots of warnings like: Warning: date() [function.date]: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Europe/Helsinki' for 'EEST/3.0/DST' instead in ... Including the date.timezone parameter in the user's php.ini solves the problem, but I don't think that's the best solution. Maybe someone has faced this problem and can give advice? Thanks! P.S. Creating /etc/php.d/timezone.ini with the timezone info also does nothing :(
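
    A hedged diagnostic: under mod_fcgid each vhost's wrapper can point PHPRC at a different php.ini, and a per-user ini replaces rather than merges with the global one (which would also explain why /etc/php.d is ignored). Checking which files PHP actually loads usually settles it:

        <?php
        // Drop into the affected vhost to see what PHP really reads.
        echo php_ini_loaded_file(), "\n";    // which php.ini won
        echo php_ini_scanned_files(), "\n";  // whether /etc/php.d was scanned
        echo ini_get('date.timezone'), "\n";
        // Application-level workaround, independent of ini plumbing:
        date_default_timezone_set('Europe/Moscow');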

    Read the article

  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
    I am new to web development and server setup, and I'm looking for some advice or a link to a tutorial on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP): it receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can make a replica set of a primary and two secondary nodes, which are separate servers running Mongo, but do those also run PHP? Or is the primary the only one running PHP? I have read some docs on the Mongo site and watched a video of someone from 10gen going through it, but they are geared towards people who seem to already understand this stuff; I have no idea and need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!
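
    For orientation, the usual layout is: PHP runs on every web server, mongod runs on the database nodes, and the PHP driver is handed the whole replica set so it can find the primary itself. A rough sketch using the legacy PHP mongo driver of that era; hostnames and the set name are placeholders:

        <?php
        // Seed several members; the driver discovers the primary and fails
        // over automatically.
        $m = new MongoClient(
            'mongodb://db1.example.com:27017,db2.example.com:27017',
            array('replicaSet' => 'rs0')
        );
        $doc = $m->mydb->mycollection->findOne(array('_id' => 123));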

    Read the article

  • Editing PHP files remotely on a CentOS server

    - by Alex2012
    I have an intranet web server (CentOS 6, Apache, PHP) to which I would like to give a developer access. He will connect by remote desktop from Windows 7 to Ubuntu 12.04, and from there by SSH to the /var/www/html folder, where he has to create and edit files. This solution was chosen because:
    - I could not make a remote desktop connection from Windows to CentOS
    - the web developer needs an editor for PHP files and is not allowed to install software on the Windows 7 machine
    - it is more of a test setup (we are all learning to use Linux)
    When the developer connects from Ubuntu to CentOS by SSH (SFTP), he can save changes only if the account used to connect has ownership of that folder. Can you please tell me how to grant the required rights? I tried different solutions found on the Internet, but without much success. Is there another way to connect to the CentOS server?
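
    A common pattern for the permissions piece, sketched with placeholder names: put the developer in a group that owns the web root, and set the setgid bit on directories so new files inherit the group:

        # As root on the CentOS box ('devs' and 'developer1' are placeholders):
        groupadd devs
        usermod -aG devs developer1
        chgrp -R devs /var/www/html
        chmod -R g+rwX /var/www/html
        find /var/www/html -type d -exec chmod g+s {} \;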

    Read the article
