Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.


  • Lynx web browser usage

    - by Andrew
    Does anyone still use the Lynx text-only web browser? It would seem useful for certain classes of low-end mobile devices, especially if one is billed per KB of data transfer.
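    For bandwidth-sensitive use, Lynx can also be scripted; a minimal sketch, assuming the lynx package is installed (example.com is a placeholder):

        lynx -dump https://example.com/     # render the page as plain text and exit
        lynx -source https://example.com/   # fetch raw HTML only; no images, scripts or styles are requested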

    Read the article

  • Memory cache Ubuntu 9.10 server x86 doesn't work as expected

    - by Matthijs
    We're using an Ubuntu 9.10 server to transfer Ghost image files. We configured it only with Samba, and the DOS clients connect to Samba. The latest updates are installed, and so far the server is running fine. When we image 10 PCs with the same image of 2 files of 2 GB each, there's no disk activity: everything is served from RAM (the server has 4 GB). But when we use 2 PCs with 2 different image files of 500 MB (8x) files, there's a lot of continuous disk activity and the speed is lower. So it seems that Ubuntu doesn't cache more than one big file. Are there settings to change this behaviour?
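    Before changing anything, it may help to confirm what the page cache is doing while an image is being served; a diagnostic sketch using standard tools, nothing Samba-specific:

        free -m                        # the "cached" column is the page cache holding recently read files
        vmstat 1                       # a persistently non-zero "bi" column during imaging means reads hit the disk
        grep -i cached /proc/meminfo   # finer-grained cache figures

    If the two 500 MB image sets together fit comfortably in the roughly 3 GB of cacheable RAM, steady "bi" traffic suggests something is evicting or bypassing the cache rather than a one-file limit.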

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to monitor the average response time, and as soon as the response time from Tomcat gets above some threshold, display an error page. That way, users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks
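    mod_proxy_balancer cannot watch average response time directly, but a hard per-worker timeout plus a friendly 503 page approximates the behaviour: slow backends get marked in error and surplus requests fail fast. A sketch, assuming a pool named mycluster and two hypothetical Tomcat hosts:

        <Proxy balancer://mycluster>
            BalancerMember http://tomcat1:8080 timeout=5 retry=30
            BalancerMember http://tomcat2:8080 timeout=5 retry=30
        </Proxy>
        ProxyPass /app balancer://mycluster
        # a small static page, served immediately while workers are marked in error:
        ErrorDocument 503 /sorry.html

    The timeout= value is the threshold: a request whose backend takes longer than 5 seconds fails, and the worker is retried after 30 seconds.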

    Read the article

  • Abnormally high number of transmit discards reported by SolarWinds for multiple switches

    - by Jared
    I have several 3750X Cisco switches that, according to our Solarwinds NPM, are producing billions of transmit discards per day. I'm not sure why it's reporting these discards. Many of the ports on the 3750X's have 2960's connected to them and are hardcoded as trunk ports. Solarwinds NPM version 10.3 Cisco IOS version 12.2(58)SE2 Total output drops: 29139431: GigabitEthernet1/0/43 is up, line protocol is up (connected) Hardware is Gigabit Ethernet, address is XXXX (bia XXXX) Description: XXXX MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX input flow-control is off, output flow-control is unsupported ARP type: ARPA, ARP Timeout 04:00:00 Last input 00:00:47, output 00:00:50, output hang never Last clearing of "show interface" counters 1w4d Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 29139431 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 35000 bits/sec, 56 packets/sec 51376 packets input, 9967594 bytes, 0 no buffer Received 51376 broadcasts (51376 multicasts) 0 runts, 0 giants, 0 throttles 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored 0 watchdog, 51376 multicast, 0 pause input 0 input packets with dribble condition detected 115672302 packets output, 8673778028 bytes, 0 underruns 0 output errors, 0 collisions, 0 interface resets 0 unknown protocol drops 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier, 0 pause output 0 output buffer failures, 0 output buffers swapped out sh controllers gigabitEthernet 1/0/43 utilization: Receive Bandwidth Percentage Utilization : 0 Transmit Bandwidth Percentage Utilization : 0
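    On the 3750 series, "Total output drops" on a lightly loaded port are frequently egress buffer drops from microbursts, which is plausible here given a gigabit port negotiated down to 100 Mb/s. Per-queue counters can confirm this before blaming the monitoring; a diagnostic sketch using the interface from the output above:

        show mls qos interface gigabitEthernet 1/0/43 statistics
        show platform port-asic stats drop gigabitEthernet 1/0/43

    If the drops land in one egress queue, buffer tuning (or the effect of mls qos on queueing) is the usual lead; if the IOS counters stay flat while Solarwinds climbs, suspect a counter-polling bug instead.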

    Read the article

  • High-end CD/DVD burners?

    - by Robert Harvey
    Do such things exist? I wouldn't mind paying $100 to $200 for one, but it must: have a very fast spin-up-to-ready time (less than one second); have an even faster dismount time (say, half a second); go from a dead stop to laying down bits in two seconds or less; and be instantly abortable and resettable regardless of its current operational state. Does anyone know of such an animal?

    Read the article

  • Excel rows: find if included, low and high

    - by Malin Pedersen
    Link to full-size image. For what is marked in orange: as in the example in the picture, it says "headphones". I would like it to search through all the rows in column A to find entries with that name in them, then count them and put that number in "how many". For the "middle price" I want it to take the prices in column B (for every row where it found something called headphones) and compute their average. For "secured", I would like it to count how many of them (out of that number, or from the beginning) have "secured" as "no" and as "yes". I would like to use this on several things. For what is marked in pink: how would I find the average price of all the goods, and the name of the particular item? Same with the highest and lowest price. How can I do this?
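    Assuming product names are in column A, prices in column B and the secured flag in column C, the wildcard forms of COUNTIF/AVERAGEIF cover the orange part, and MAX/MIN/INDEX/MATCH the pink part (COUNTIFS and AVERAGEIF need Excel 2007 or later); a sketch:

        =COUNTIF(A:A,"*headphones*")               how many rows contain "headphones"
        =AVERAGEIF(A:A,"*headphones*",B:B)         average ("middle") price of those rows
        =COUNTIFS(A:A,"*headphones*",C:C,"no")     how many of them have secured = no
        =AVERAGE(B:B)                              average price of all goods
        =MAX(B:B)                                  highest price (MIN for the lowest)
        =INDEX(A:A,MATCH(MAX(B:B),B:B,0))          name of the most expensive item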

    Read the article

  • Apache heavy load VIRT vs RES memory

    - by pako
    I have a Debian 5 server, which gets a lot of traffic. Right now the server has 4 GB of RAM and no swap memory. I see in top that Apache processes consume roughly 180 MB virtual memory (VIRT) each, and 16 MB of real RAM (RES). So how many Apache threads can I have running at the same time? About 4 GB / 180 MB = 22 or 4 GB / 16 MB = 256?
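    Neither division is quite right: VIRT counts shared libraries and mappings that all children share, while RES counts shared pages once per child, so the true capacity lies between 22 and 256, much closer to 256. Summing the children's actual resident sizes gives a usable per-process figure; a sketch, assuming Debian's process name apache2:

        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {printf "%d procs, %.0f MB total, %.0f MB avg\n", n, sum/1024, sum/1024/n}'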

    Read the article

  • Limit download usage for clients

    - by Kumar P
    I am maintaining a few Windows XP machines behind a RHEL 5 server, and I want to set a quota on download file size. How can I do it? I mean, on the LAN, user A's maximum download file size should be 300 MB and user B's 200 MB: downloading should be blocked when a user tries to fetch a file larger than their limit. Alternatively, is it possible to set a quota for maximum download volume per day? How can I do this?
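    If the XP machines reach the outside world through the RHEL box, a Squid proxy can enforce a per-client cap on the size of a single download; per-day volume quotas are not built in and would need log-based accounting on top. A sketch with hypothetical client addresses (Squid 3.x syntax; the Squid 2.x shipped with RHEL 5 takes the size in bytes with an allow/deny keyword):

        acl userA src 192.168.1.10
        acl userB src 192.168.1.11
        reply_body_max_size 300 MB userA
        reply_body_max_size 200 MB userB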

    Read the article

  • Gateway MX6440 CPU Upgrade

    - by BPugh
    I received a Gateway MX6440 laptop as a freebie, but I'm interested in upgrading its AMD Turion 64 ML-32 (Socket 754) to something faster (and with more cache). I know the range of processors that could work, based on the family list in Wikipedia. However, this computer has the stock BIOS, and the updates available from Gateway don't specify processor support. I'm looking to go to at least a 2.2 GHz part (ML-40). Has anybody upgraded the processor in this model, or others in the series, with success or failure? And do you happen to have any guides handy for working with the heat sink? Any Googling I do keeps hitting RAM marketers. Update: the computer died before I had a chance to try this out.

    Read the article

  • AD LDS High availability

    - by user792974
    We are currently using CAS for multiple-directory authentication: AD for internal users, AD LDS for external users. I've read that NLB is a possible solution, but I'm wondering whether this is also possible with SRV records, and how you would correctly configure that. With our AD directory, I can bind with olddomain.local and hit any of the DCs in the domain. We don't want to hardcode server names into CAS, so the end goal is to bind with LDSdomain.gov. nslookup -type=srv _ldap._tcp.LDSdomain.gov returns _ldap._tcp.LDSdomain.gov SRV service location: priority = 0 weight = 100 port = 1025 svr hostname = server01 _ldap._tcp.LDSdomain.gov SRV service location: priority = 0 weight = 200 port = 1025 svr hostname = server02
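    A plain LDAP bind to "LDSdomain.gov" resolves A records, not SRV records, so SRV entries only help clients that explicitly perform DC-locator-style lookups; a common pattern is round-robin A records (or NLB) behind the name, with SRV records kept for SRV-aware clients. A sketch on a Windows DNS server, with hypothetical names and addresses:

        dnscmd /recordadd LDSdomain.gov _ldap._tcp SRV 0 100 1025 server01.LDSdomain.gov
        dnscmd /recordadd LDSdomain.gov _ldap._tcp SRV 0 100 1025 server02.LDSdomain.gov
        dnscmd /recordadd LDSdomain.gov @ A 10.0.0.11
        dnscmd /recordadd LDSdomain.gov @ A 10.0.0.12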

    Read the article

  • CPU fan twitching, phase LEDs blinking - computer won't boot

    - by SooX
    I have a big problem that started today. Yesterday the computer was on as normal, and I went to sleep without shutting it down. When I woke up, I heard a strange sound and was unable to bring it up from hibernation, so I unplugged the PSU. When I plugged the PSU back in, the sound came back. When I opened the case, I saw the CPU fan "twitching" as if it were about to start, and the fan LEDs were blinking. The motherboard LEDs were blinking in the same pattern, with the first green one brighter than the others. When I cut the power with the 0/1 switch on the PSU, the fans keep making sounds, as if the machine were trying to boot, until the capacitors drain and the power dies. Does anyone have a clue what to do? I tried disassembling everything, but that didn't work. I will try a friend's PSU later today.

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to debug it further, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4, and KVM virtualisation is used through LibVirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split across three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4, too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but also affects services running on the hypervisor itself. The setup seems complex, but I'm sure that the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests; Xen software virtualisation (therefore no Virtio either); no LibVirt management; different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore); we had no IPv6 connectivity; and neither in the hypervisor nor in the guests did we have noticeable I/O latency problems. According to the datasheets, the current hard drives and those of the old machine have an average latency of 4.12 ms.
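    One way to localise the latency is to run the same small synchronous-write test at each layer of the stack and compare the completion latencies; a sketch using fio (it writes only to the named scratch file):

        fio --name=lat --filename=/var/tmp/fio.test --size=256M \
            --rw=randwrite --bs=4k --ioengine=libaio --iodepth=1 \
            --direct=1 --runtime=30 --time_based

    Comparing the reported "clat" figures for a file on / (md1 + LUKS + LVM), on /boot (md0, no encryption) and inside a guest should show whether the extra latency appears at the RAID, the LUKS, or the virtualisation layer.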

    Read the article

  • Windows Server task manager displays much higher memory use than sum of all processes' working set sizes

    - by Sleepless
    I have a 16 GB Windows Server 2008 x64 machine mostly running SQL Server 2008. The free memory as seen in Task Manager is very low (128 MB at the moment), i.e. about 15.7 GB are used. So far, so good. Now when I try to narrow down the process(es) using the most memory, I get confused: none of the processes has more than a 200 MB working set size as displayed in the 'Processes' tab of Task Manager. Well, maybe the working set size isn't the relevant counter? To figure that out I used a PowerShell command [1] to sum up each individual property of the process object in sort of a brute-force approach - surely one of them must add up to the 15.7 GB, right? Turns out none of them does, with the closest being VirtualMemorySize (around 12.7 GB) and PeakVirtualMemorySize (around 14.7 GB). WTF? To put it another way: which of the numerous memory-related process counters is the "correct" one, i.e. counts towards the server's physical memory as displayed in the Task Manager's 'Performance' tab? Thank you all! [1] $erroractionpreference="silentlycontinue"; get-process | gm | where-object {$_.membertype -eq "Property"} | foreach-object {$_.name; (get-process | measure-object -sum $_.name).sum / 1MB}
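    Working sets are probably not the answer here: SQL Server configured with "locked pages in memory" allocates its buffer pool through AWE-style APIs that never show up in any process working set. A quick cross-check of what the working sets do account for (a PowerShell sketch):

        Get-Process | Measure-Object WorkingSet64 -Sum |
            ForEach-Object { "{0:N1} GB in all working sets" -f ($_.Sum / 1GB) }

    If this total is far below the 15.7 GB from the 'Performance' tab, the difference is held outside working sets - typically SQL Server's locked buffer pool, kernel pools, or the system file cache.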

    Read the article

  • Setting up MongoDB in High Performance Computing LSF linux cluster

    - by Dnaiel
    I am trying to run mongo in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run? Or could I run it locally? [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/ Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382 Tue Oct 2 21:33:48 [initandlisten] Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine. Tue Oct 2 21:33:48 [initandlisten] ** We suggest launching mongod like this to avoid performance problems: Tue Oct 2 21:33:48 [initandlisten] ** numactl --interleave=all mongod [other options] Tue Oct 2 21:33:48 [initandlisten] Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5 Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207 Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49 Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" } Tue Oct 2 21:33:48 [initandlisten] journal dir=users/dnaiel/ma/mongodb/journal Tue Oct 2 21:33:48 [initandlisten] recover begin Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0 Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0 Tue Oct 2 21:33:48 [initandlisten] recover cleaning up Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles Tue Oct 2 21:33:48 [initandlisten] recover done Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017 Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017 It basically waits forever, and mongodb never finishes starting. These servers are not webservers, but they do have network access; it's a cloud-computing LSF environment. Any advice would be welcome, thanks in advance.
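    Note that "waiting for connections on port 27017" is mongod running normally in the foreground, not hanging; under a batch system, the usual options are to daemonise it or to submit it as a job. A sketch (--fork and --logpath are standard mongod flags; the bsub line is a guess at the local LSF conventions):

        mongod --dbpath /users/dnaiel/ma/mongodb/ --fork --logpath /users/dnaiel/ma/mongod.log
        # or keep it in the foreground, but inside an LSF job slot:
        bsub -J mongod "mongod --dbpath /users/dnaiel/ma/mongodb/"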

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600, 16 GB DDR3 RAM, varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN. My problem is that after 55 hits per second, according to blitz.io, varnish starts giving out timeouts. CPU usage at this time is hardly 1%, and free memory at all times remains 10 GB+. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts, but after that the CPU usage goes to 100% and it stops responding. Can you help me optimize it to handle more? As I understand it, nginx has nothing to do here, so I am not including its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time.
    if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cacheable if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better.
    set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" I'm not sure where to start: should I first optimize php-fpm and then move on to varnish, or is php-fpm already at its maximum, so I should start looking for the problem in varnish?
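    Since php-fpm alone sustains ~150 hits/s, the 55 hits/s ceiling points at Varnish's thread or session handling rather than the backend; watching Varnish's own counters during a blitz.io run should narrow it down. A diagnostic sketch (counter names as in Varnish 3):

        varnishstat -1 | egrep 'n_wrk|client_drop|backend'
        # n_wrk_queued / n_wrk_drop climbing during the test means requests are
        # waiting for (or being dropped for lack of) worker threads
        varnishlog | grep -i -E 'timeout|error'   # see which timeout actually fires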

    Read the article

  • Idle processes and high memory - bad? uwsgi/django

    - by JimJimThe3rd
    I have a VPS with 256 MB of RAM. I'm running nginx, uwsgi and postgresql on Ubuntu 12.04 for a soon-to-be Django site. About 200 MB of RAM are in use despite the website not being active; the uwsgi processes seem to just be idling. Is this bad? I once heard that having a bunch of free memory isn't necessarily a good metric, because it is possible that the memory in use can easily be freed up. I mean, it is possible that the server is storing commonly used "stuff" in case it is accessed, but is more than happy to dump it if the RAM is needed. But I'm really not sure, hence me asking this question. If it is bad, I could set some of the application loading options for uwsgi, like "cheap" or "idle" mode. Screenshot of my htop
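    On a 256 MB VPS, uwsgi's cheaper subsystem can reap idle workers so the memory really is free rather than merely reclaimable; a sketch of the relevant ini options (option names are standard uwsgi, values illustrative):

        [uwsgi]
        # upper bound on workers under load
        processes = 4
        # let the cheaper engine shrink the pool to one worker when traffic stops
        cheaper = 1
        cheaper-initial = 1
        # after 60 seconds with no requests, put the instance into idle mode
        idle = 60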

    Read the article

  • Optimize Apache for 10K+ WordPress views a day on 2 GB RAM, E6500 CPU

    - by Broke artist
    I have a dedicated server with Apache/PHP on Ubuntu serving my WordPress blog with about 10K+ pageviews a day. I have the W3TC plugin installed with APC. But every now and then the server stops responding or goes dead slow, and I have to restart Apache to get it back. Here's my config; what am I doing wrong? ServerRoot "/etc/apache2" LockFile /var/lock/apache2/accept.lock PidFile ${APACHE_PID_FILE} TimeOut 40 KeepAlive on MaxKeepAliveRequests 200 KeepAliveTimeout 2 StartServers 5 MinSpareServers 5 MaxSpareServers 8 ServerLimit 80 MaxClients 80 MaxRequestsPerChild 1000 StartServers 3 MinSpareServers 3 MaxSpareServers 3 ServerLimit 80 MaxClients 80 MaxRequestsPerChild 1000 StartServers 3 MinSpareServers 3 MaxSpareServers 3 ServerLimit 80 MaxClients 80 MaxRequestsPerChild 1000 User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess Order allow,deny Deny from all Satisfy all DefaultType text/plain HostnameLookups Off ErrorLog /var/log/apache2/error.log LogLevel error Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf Include /etc/apache2/httpd.conf Include /etc/apache2/ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %O" common LogFormat "%{Referer}i - %U" referer LogFormat "%{User-agent}i" agent CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined Include /etc/apache2/conf.d/ Include /etc/apache2/sites-enabled/
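    With 2 GB of RAM, MaxClients 80 is the likeliest culprit: if each PHP-enabled prefork child grows to around 50 MB, 80 children want about 4 GB and the box swaps until Apache appears dead, which matches the stall-and-restart pattern. A conservative prefork sketch (the 50 MB per-child figure is an assumption; check your actual per-child RSS first):

        # rough rule: MaxClients ~ RAM available to Apache / per-child RSS
        #             ~ 1500 MB / 50 MB ~ 30
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          30
            MaxClients           30
            MaxRequestsPerChild 500
        </IfModule>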

    Read the article

  • Deleting a large number of files on Linux eats up CPU

    - by Sanjay
    I generate more than 50 GB of cache files on my RHEL server (the typical file size is 200 KB, so the number of files is huge). When I try to delete these files it takes 8-10 hours. However, the bigger issue is that the system load goes critical for those 8-10 hours. Is there any way I can keep the system load under control during the deletion? I tried using nice -n19 rm -rf * but that doesn't help with the system load. P.S. I asked the same question on superuser.com but didn't get a good enough answer, so I'm trying here.
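    nice only lowers CPU priority, and deleting millions of files is I/O-bound, so the load (mostly I/O wait) stays high regardless. ionice can deprioritise the disk side too, although its idle class only takes effect under the CFQ scheduler; a sketch with the cache path as a placeholder:

        ionice -c 3 nice -n 19 find /path/to/cache -type f -delete
        # -c 3 = idle I/O class: the deletion only gets disk time the system doesn't otherwise want
        # find ... -delete also avoids the huge glob expansion that rm -rf * triggers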

    Read the article

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running apache2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the Apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child processes swells to its current size + 200 MB. If 2 users simultaneously start uploading, my server crashes. Now, it is the virtual memory size which is increasing, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: Is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every Apache child accepting an upload. Thanks! Abhaya
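    With fcgid, PHP runs in separate processes, so an Apache child ballooning by the upload size suggests the request body is being buffered inside the child before being handed over. mod_fcgid 2.3+ can spool large bodies to disk instead; a sketch (directive names are real mod_fcgid options, values illustrative):

        # keep only the first 64 KB of a request body in RAM, spool the rest to a temp file
        FcgidMaxRequestInMem 65536
        # still allow uploads up to ~200 MB
        FcgidMaxRequestLen 209715200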

    Read the article
