Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

Page 126 of 530

  • How to find out which process is hogging the linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to ping queries from another server, and logging in over ssh took about 10 seconds. I was able to resolve the problem by guesswork: I killed one process I suspected was the culprit, which fixed it. Still, I would like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way to resolve such slowness. These were the conditions while the server was slow:

        # vmstat 3 3
        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b  swpd    free   buff   cache  si so bi  bo    in    cs us sy id wa st
         1  1   176 6730868 285052 4899676   0  0  3   4     0     0  1  1 97  1  0
         0  0   176 6751576 285064 4899704   0  0  0 115 15307 37171  1  1 96  3  0
         0  0   176 6751948 285068 4899700   0  0  0  23 14813 39559  1  1 98  1  0

        # top
        top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44
        Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
        Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
        Mem:  16620824k total, 9867124k used, 6753700k free, 287424k buffers
        Swap:  8193140k total,     176k used, 8192964k free, 4898996k cached

          PID USER PR NI VIRT RES SHR  S %CPU %MEM TIME+     COMMAND
        26258 khk  34 19 130m 47m 7088 S 11.2  0.3 385:32.42 edm
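
    A minimal triage sketch for cases like this (generic commands, not from the original post): rank processes by CPU and by memory, then check whether the bottleneck is actually disk I/O rather than CPU.

        # Rank processes by CPU, then by resident memory (GNU ps):
        ps aux --sort=-%cpu | head -n 6
        ps aux --sort=-rss | head -n 6
        # A high "wa" column in vmstat/top points at disk I/O, not CPU;
        # iotop (if installed, needs root) names the processes doing the I/O:
        iotop -o -b -n 2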

    Read the article

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "top" is not helpful for this, since its calculations for RES and VIRT are inconsistent. Instead I am matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and total up "bytes" for all of the segments owned by the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up: the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running this as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is this only shared memory (and not "un-shared" memory)? If my approach is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also, I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
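
    A sketch of that totaling step, assuming the stock /proc/sysvipc/shm layout (the first line is a header; column 4 is the segment size in bytes, column 8 the owning uid):

        # Skip the header line, then sum segment sizes per owning uid:
        awk 'NR > 1 { total[$8] += $4 }
             END { for (uid in total) printf "%s\t%.2f GB\n", uid, total[uid]/1024^3 }' \
            /proc/sysvipc/shm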

    Read the article

  • What does CPU Time consist of?

    - by Sid
    What exactly does CPU time consist of? For instance, is the time taken to access a page in RAM (at which point the CPU is most likely idling) part of the CPU time? I'm not talking about fetching the page from disk here, just fetching it from RAM. Thanks
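
    A small illustration of the accounting boundary: the shell's time builtin splits wall-clock time from CPU time. Time spent blocked on disk shows up in "real" but not in "user"/"sys", whereas stalls on RAM accesses are charged as CPU time, because from the scheduler's point of view the processor is busy waiting on the load, not descheduled.

        # "real" = wall clock; "user" + "sys" = CPU time charged to the
        # process. A disk-bound run shows real >> user + sys.
        time grep -r pattern /usr/share/doc > /dev/null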

    Read the article

  • Good free software for freeing up RAM in Windows 7 (64-bit)

    - by Flavius Frantz
    I am looking for good Windows 7 software to free up RAM on my PC. I tried a few programs I found on Google, but they were bad: viruses, spyware, etc. I want a free, clean, professional program; if you don't know a good free one, please recommend a paid version. Other tips or software to speed up my PC (Win7, 64-bit) would also be welcome, as would software to measure temperature. A "must have" list of such utilities would be great - thank you! I am a graphic designer and usually use Stack Exchange for graphic design questions; now I've realised there is this Super User site too... nice :) I usually have a lot of programs running at the same time - Photoshop, Flash, Illustrator, InDesign - with only 4 GB of RAM. Any tips to improve my PC's performance would be great. I have an Asus K50IP notebook.

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easy allowing or disallowing of dynamic IP addresses you can add them as hostnames in a .htaccess file. As I have read from ".htaccess allow from hostname?", Apache does a reverse lookup on the connecting IP address to see whether the response matches the allowed name. (Actually, Apache does a double lookup: first a reverse lookup, then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: it "sounds" quite heavy - two extra DNS lookups for every request. Is this indeed heavy, and would a reasonably busy server that would rather have less load than more get away with it? (E.g., how does this load compare to the rest of a request? If a request is 1000 times more expensive than the lookups, they might be negligible; on the other hand, they could be the final straw.) Are there other solutions? I can of course write a script that looks up the hostname and puts the result in the .htaccess files, but this feels a bit like a hack.
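
    A sketch of that script approach (all names here are placeholders, not from the original post): resolve the dynamic hostname on a schedule and rewrite the rule as a plain IP, so Apache does no per-request DNS work.

        #!/bin/sh
        # Hypothetical cron job; HOST and HTACCESS are assumptions.
        HOST="home.example.dyndns.org"
        HTACCESS="/var/www/site/.htaccess"
        IP=$(dig +short "$HOST" | tail -n 1)   # last line skips any CNAMEs
        [ -n "$IP" ] || exit 1                 # keep the old rule on failure
        sed -i "s/^Allow from .*/Allow from $IP/" "$HTACCESS"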

    Read the article

  • MySQL: table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema. Before I start, here is an extremely simplified picture of my database: http://i43.tinypic.com/2wp5lxz.png In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of each table's usage, here is what I expect:

        customer: ~5,000; shouldn't grow fast
        data:     ~5 million per customer; could double or triple for big customers
        tag:      ~1,000; fairly fixed in size
        data_tag: easily hundreds of millions per customer; each piece of data can carry many tags

    The harvesting process is permanent, meaning that around every 15 minutes new data arrives and is tagged, which requires constant index refreshing. A lot of my queries are a SELECT COUNT of data between specific dates, tagged with a specific tag, for a specific customer (very rarely will a query involve several customers). With this kind of volume, you can imagine I'm facing a challenge in terms of data organisation and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is: is it better to stick with this model and manage aggressive index optimisation (which potentially means billions of rows in the data_tag table), or to change the schema and use one data table and one data_tag table per customer (which means about 5,000 tables in my database)? I'm running all of this on a dedicated, replicated MySQL 5.0 server (quad-core, 8 GB of RAM). I only use InnoDB; I also have another server running Sphinx. Knowing all of this, I can't wait to hear your opinion. Thanks.
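
    Whichever way you go, COUNT queries of that shape live or die by one composite index. A sketch, assuming hypothetical column names (customer_id, tag_id, created_at) and assuming the date is denormalized onto data_tag so the count never touches the data table:

        mysql -e "
          ALTER TABLE data_tag
            ADD INDEX idx_customer_tag_date (customer_id, tag_id, created_at);
          SELECT COUNT(*) FROM data_tag
           WHERE customer_id = 42 AND tag_id = 7
             AND created_at BETWEEN '2010-01-01' AND '2010-03-31';" mydb

    With that index the count resolves as an index-only range scan, which tends to matter far more than the one-table-versus-5,000-tables question.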

    Read the article

  • Difference between all servers in one cluster and more than one cluster with servers?

    - by silla
    I'm not sure I understand the difference, or how it works, when servers run in one cluster versus spread over more than one cluster, with regard to high availability and load balancing. To me the two setups seem about the same. A simple example:

        1. Two servers in one cluster
        2. Two clusters with one server each

    In case 1, if one server fails, the other can continue the work; the same two servers can also balance the load between them. In case 2, the same seems to hold: if one server fails... The only extra risk I see in case 1 is the cluster itself failing (then both servers are dead). But is that even possible? I have been reading about clustering and high availability, but I don't think I really get this; probably I haven't understood how a cluster works. Are one cluster with two servers and two clusters with one server each effectively the same, or are there big differences? What should I know about this? Thank you

    Read the article

  • Top causes of slow ssh logins

    - by Peter Lyons
    I'd love for one of you smart and helpful folks to post a list of common causes of delays during an ssh login. Specifically, there are two spots where I see anything from an instantaneous response to multi-second delays: between issuing the ssh command and getting a login prompt, and between entering the passphrase and having the shell load. Now, I'm specifically looking at ssh details here. Obviously network latency, the speed of the hardware and OSes involved, complex login scripts, etc. can cause delays. For context, I ssh to a vast multitude of Linux distributions and some Solaris hosts, using mostly Ubuntu, CentOS, and Mac OS X as my client systems. Almost all of the time, the ssh server configuration is unchanged from the OS's default settings. Which ssh server configuration options should I be interested in? Are there OS/kernel parameters that can be tuned? Login shell tricks? Etc.?
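
    As a quick test sketch: two server-side settings cause the bulk of such delays in practice, and a verbose client run shows exactly where the handshake pauses.

        # Watch where the output stalls:
        ssh -vvv user@host
        # On the server, the usual culprits in sshd_config are
        # reverse-DNS lookups and GSSAPI negotiation:
        #   UseDNS no
        #   GSSAPIAuthentication no
        # then restart sshd and retry.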

    Read the article

  • What to do before trying to benchmark

    - by user23950
    What should I do before trying to benchmark my computer? I've got these tools for benchmarking: 3DMark, Cinebench, Geekbench, Juarez DX10, Open Source Mark. Do I need to run a full spyware and virus scan before proceeding? What else should I do in order to get accurate readings?

    Read the article

  • Use the same database or replicate it for reports and web

    - by developer
    I would like to know: if I have a website with a huge database and it runs reports that are expensive in time, is the best approach one database for the web and a replicated one for reports, or a single database for both? I'm worried that users will run reports covering 5 or more years because they need that information, and that the website will crash because of this.
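
    A minimal sketch of the replicated-reports setup, in the MySQL terminology of the era (host names and credentials are placeholders; the log file and position come from SHOW MASTER STATUS on the master):

        # master my.cnf:            replica my.cnf:
        #   [mysqld]                  [mysqld]
        #   server-id = 1             server-id = 2
        #   log-bin = mysql-bin
        # Then, on the replica:
        mysql -e "CHANGE MASTER TO MASTER_HOST='web-db.example.com',
                  MASTER_USER='repl', MASTER_PASSWORD='secret',
                  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
                  START SLAVE;"
        # Point the reporting jobs at the replica; the web app keeps
        # using the master, so a 5-year report cannot stall it.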

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? Now, "top" and similar tools aren't the answer, because they show either CPU or memory usage but not both at the same time. What I need is a single command that I might be able to type as it happens - something that will figure out any of "the system is trying to swap 8 GB of RAM to disk because process X ...", "process X seeks all over the disk", or "process X uses 400% CPU". So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235 cp        - disk thrashing
          87 chrome    - uses 2 GB of RAM
         137 nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me numbers to analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", and is quickly overwhelmed by terms like "resident size", "virtual memory" or "process life cycle". My argument goes like this: a user notices a problem; there can be thousands of reasons... well, almost :-); the user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what those numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. So the tool should look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this one allocates a lot of RAM (and it's still growing)". That would be a relatively short list. It would be much simpler for someone new to this to locate the culprit from such a list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M, but only 16 GB of RAM - the machine ought to be swapping itself to death, but of course this is a misinterpretation of the data that can happen quickly).
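
    The closest existing things to that meta tool, as a sketch: atop shows CPU, memory, disk and network per process on one screen and highlights the saturated resource, while dstat's "top" plugins print just the single heaviest process per resource.

        atop 2                                # one screen, all resources
        dstat --top-cpu --top-mem --top-io 5  # heaviest process per resource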

    Read the article

  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, nginx) and have Varnish in front of them; this works very well. However, once the cache timeout is reached, I see this: a new client requests the page; Varnish recognizes the cache timeout; the client waits; Varnish fetches the new page from the backend; Varnish delivers the new page to the client (and caches it, so the next request gets it instantly). What I would like instead: the client requests the page; Varnish recognizes the timeout; Varnish delivers the old page to the client; Varnish fetches the new page from the backend and puts it into the cache. In my case it's not a site where outdated information is a big problem, especially not when we're talking about a cache timeout of a few minutes. However, I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06 ...
        HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20 ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08 ...
        HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22 ...
        HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10 ...
        HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23 ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12 ...

    I left out the hundreds of requests completing in 0.02 s or so. It still concerns me that some users have to wait almost 2 seconds for their raw HTML. Can't we do better here? (I came across "Varnish send while cache"; it sounded similar, but not exactly what I'm trying to do.)
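
    What's described is Varnish's grace mode. A sketch for the Varnish 3 VCL of that era (the file path and grace times are assumptions); note that plain grace still makes the one request that triggers the refresh wait, while everyone else gets the stale copy - fully asynchronous background fetches only arrived in later Varnish versions.

        # Append to the active VCL, then reload it:
        cat >> /etc/varnish/default.vcl <<'EOF'
        sub vcl_recv  { set req.grace = 5m; }      # serve stale up to 5 min past TTL
        sub vcl_fetch { set beresp.grace = 30m; }  # keep expired objects around
        EOF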

    Read the article

  • How do I remove 1,000,000 directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a directory due to a bug. I want to remove all of these directories; let's say the parent directory is WebsiteCache. My first approach was to use the command line tool:

        cd WebsiteCache
        rmdir /Q /S .

    This removes all subdirectories except WebsiteCache itself, since it is the current working directory. After two hours I noticed that only the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is the fastest way to delete such a number of directories?
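
    On the ordering: rmdir isn't sorting anything itself; NTFS stores directory entries in sorted order, so enumeration comes back alphabetically. As for speed, a commonly cited faster route is sketched below (test on a copy first; C:\empty is a scratch name, and the speed-up is anecdotal rather than guaranteed):

        :: Mirror an empty directory over the tree; the /N* switches
        :: suppress per-file logging, which itself costs time here.
        mkdir C:\empty
        robocopy C:\empty C:\WebsiteCache /MIR /NFL /NDL /NJH /NJS
        rmdir C:\empty
        rmdir C:\WebsiteCache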

    Read the article

  • How can I pinpoint a USB file transfer bottleneck in Unix?

    - by HankHendrix
    I'm experiencing very slow data transfer speeds over USB 2.0 on my *nix box and was wondering how I can pinpoint the cause of the problem. I've looked into iotop and top, but the CPU and memory figures look normal (compared to guides I have checked). The affected box is Ubuntu 12.04 32-bit Server running on an Asus EEE 701 2G, and I am transferring from the OS over USB 2.0 to an external HDD (which transfers at 30 MB/s+ under Windows 7 on another machine). I get rsync write speeds of 1 MB/s from the OS to the USB HDD, which seems ridiculously slow. These speeds are consistent across other USB HDDs and sticks.
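
    A triage sketch (the device name /dev/sdb is an assumption): first confirm the drive actually negotiated USB 2.0 rather than falling back to full speed (12 Mbit/s), then rule out a "sync" mount, then measure the raw device to separate bus speed from filesystem overhead. On an EEE 701 it's also worth measuring the read side, since the internal flash is slow.

        lsusb -t                    # negotiated speed per device: 480M vs 12M
        dmesg | grep -i speed       # look for "full-speed" fallback messages
        mount | grep /dev/sd        # a "sync" mount option kills throughput
        sudo hdparm -t /dev/sdb     # raw sequential read, bypasses the FS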

    Read the article

  • Windows 2008 R2 on ESXi 4.1 cpu utilization kernel high

    - by MK.
    I have a Win2k8 guest running on ESXi 4.1. The host has 12 cores, and the problem happens even if the guest is the only VM on the host; 4 cores are dedicated to the guest. We noticed that the network starts choking when the CPU load goes up. After some testing we found that when a simple CPU-hogging tool runs 3 threads at 100%, the overall CPU load goes to 75% as it should, but the "kernel times" graph in Task Manager goes up to 25%. My intuition tells me that the network problem and the kernel-times problem are the same; this is supported by another similar VM we created on the same host, which has neither problem. VMware Tools are obviously installed. The NIC is e1000. What else can we do to troubleshoot this?
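
    One common lead, offered as a sketch rather than a confirmed fix: the e1000 vNIC is fully emulated, and its interrupt handling is charged to guest kernel time under load, whereas the paravirtual vmxnet3 adapter (supported on ESXi 4.x with VMware Tools installed) moves that work out of emulation.

        # With the VM powered off, via vSphere Client (remove the e1000
        # NIC, add one of type VMXNET 3), or equivalently in the .vmx:
        ethernet0.virtualDev = "vmxnet3"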

    Read the article

  • Lightweight ad-blocker for firefox

    - by student
    On an old machine (512 MB RAM) I am currently running Ubuntu Jaunty and Firefox 3.0.15. I tried the ad-blocking add-on Adblock Plus, but it eats a lot of RAM (300 MB). Is this add-on's high memory load a bug that is fixed in a newer version, or just normal? If it's normal, why is the memory usage so high? Is there another ad-blocking add-on for Firefox, or another browser/add-on combination for Linux (Ubuntu Jaunty), that uses significantly less RAM?

    Read the article

  • Applications start very slowly from a network path

    - by Snowfox
    Hi. We have a Windows 2008 server which hosts the network share \\srvcompany\lib. This share contains several applications needed for daily business. Every client/user (all Win XP) has desktop shortcuts to these apps. The problem is that on several (but not all) clients the apps start very slowly. If I copy an application's program files to a local folder, it starts quickly. When I watch memory usage in Task Manager on a "slow" machine while an application starts, I notice that memory usage grows much more slowly than when I start the app on a "fast" machine. Yet when I copy files from this share with Windows Explorer, the speed is nearly the same. I've also checked the network driver; both tested clients have the same network card with the same driver version. Does anyone have an idea what I should check next to solve this problem? Thanks for any answers.

    Read the article

  • Windows 7 slowing down during hard drive activity

    - by Iniquities of evil men
    Sometimes during normal use, my PC will (seemingly) randomly slow down, and sometimes even freeze for several seconds. During this slowdown it looks like a hard drive (I don't know which one) is constantly being written to. During the last slowdown I started Windows Resource Monitor and found that the System process was writing up to 10 MB/s to a drive (I suspect the system drive, C:\, but I don't know for sure). I'm not doing anything unusual (at least, I don't think I am), and most of the time everything works normally, but as I said, it randomly slows down for a while. Any ideas what might be causing this and how I can prevent it from happening again? (I have a triple-core processor and 4 GB of RAM. My system drive is a WD Caviar Black 500 GB; my secondary data drive is a Samsung whose model number I can look up if needed. I can also post my full PC specs.)

    Read the article

  • Mysql process goes over 100% of CPU usage

    - by Temnovit
    Hello! I'm experiencing some problems with my LAMP server. Recently everything became very slow, even though the visitor count on my websites didn't change much. When I run the top command, it says the mysql process has taken 150-200% of CPU. How is that possible? I always thought 100% was the maximum. I'm running Ubuntu 9.04 Server Edition with 1.5 GB RAM. My my.cnf settings:

        key_buffer = 64M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 200
        table_cache = 512
        table_definition_cache = 512
        thread_concurrency = 2
        read_buffer_size = 1M
        sort_buffer_size = 4M
        join_buffer_size = 1M
        query_cache_limit = 1M    # the maximum size of individual query results
        query_cache_size = 128M

    (The original post attached the MySQLTuner output and a top screenshot as images.) What could be the cause of this problem? Can I make changes to my my.cnf to prevent the server from hanging?
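
    On the 100% question, as a short illustration: top reports %CPU per core, so a multi-threaded process like mysqld can legitimately show up to 100% times the number of cores. To see how the load spreads:

        top -H -p "$(pidof mysqld)"          # per-thread CPU inside mysqld
        # (press "1" inside plain top to show each core separately)
        mysqladmin -u root -p processlist    # what the busy threads are running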

    Read the article

  • Rule of thumb in RAM estimate for static pages? [closed]

    - by IMB
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Web Sites

    I've seen tutorials saying you can run a decent website on 64 MB of RAM (Debian/lighttpd/PHP/MySQL); however, it's never clearly defined how many hits a "decent" site gets. Is there a rule of thumb for how much RAM a web server needs? To keep things simple, let's say you're running a site with static content averaging 100,000 hits per hour (HTML + images combined, no MySQL). How much RAM is the minimum requirement for that?
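
    A back-of-envelope sketch (the hot-set size below is an assumption, since the question doesn't give one): the request rate itself is modest; what RAM must cover is the OS, the server processes, and the page cache for the frequently served files.

        # 100,000 hits/hour:
        echo $((100000 / 3600))   # ~27 requests/second
        # lighttpd/nginx serve static files in a few MB of RSS, so the
        # rule of thumb is roughly: RAM >= OS base + server + hot file set.
        # E.g. a hot set of ~200 MB (assumption) wants 256-512 MB so it
        # stays in the page cache; 64 MB means constant disk reads.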

    Read the article

  • What is the maximum memory that an IIS6 web site/app pool can use?

    - by Robin M
    I have an IIS 6 server running on Windows 2003 SP2 x86. The server has 4 GB of RAM and runs consistently with 2 GB allocated. I realise that with x86 the server won't utilize all 4 GB of RAM and the application address space is also limited, but the IIS processes seem to be limited somewhere else: w3wp.exe never has more than 500 MB allocated, and I occasionally get OutOfMemory exceptions from a busy .NET application (there are several applications running, each with a separate application pool). What is the maximum memory that an IIS 6 web site/app pool can use?

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute, sometimes as long as 30 seconds. I use the following command to watch these queries being executed:

        watch -n1 mysqladmin proc stat

    We have still not been able to track down the root of this problem. I would appreciate any pointers as to what we can check or improve to resolve the issue. Thanks.
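
    A first diagnostic step, sketched with the MySQL 5.0-era option names: log every statement slower than two seconds, and watch for writes stuck in the "Locked" state (the classic MyISAM table-lock symptom on shared hosts).

        # my.cnf, [mysqld] section:
        #   log-slow-queries = /var/log/mysql/slow.log
        #   long_query_time  = 2
        mysqladmin -u root -p processlist   # waiting writes show "Locked"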

    Read the article

  • Is the XP VMM a bottleneck on a multi core machine?

    - by JeffV
    I have a dual-Xeon hex-core machine running an I/O-intensive application (Windows XP 32-bit). I am seeing a hardware driver (half user mode, half kernel, streaming data) that is incurring 6k delta page faults per second. When other applications load or allocate large amounts of memory, the driver's hardware buffer underruns (the application cannot feed it fast enough). Could this be because the kernel is only using one core to service page fault interrupts?

    Read the article

  • mysql is not using multiple cpus

    - by mhost
    Our MySQL server has been using a lot of CPU lately (it has reached 100% several times and stays there for a while), and I noticed that the CPU load all falls on one core of one CPU. I was hoping to spread it across all 4 cores in my server. I have been tweaking the MySQL settings to use more RAM and less CPU, but it still occasionally reaches very high CPU usage. Everything on the topic seems to refer to thread_concurrency (which I've read is a Solaris-only setting). What can I do on Linux? Thanks.
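
    The underlying limit, with a small sketch to confirm it: MySQL 5.x runs each connection (and therefore each query) on a single thread, so one busy query can only ever saturate one core. Spreading load across cores means spreading work across connections, not tuning thread_concurrency (which, as noted, is a no-op outside Solaris).

        mysql -e 'SHOW FULL PROCESSLIST\G'   # usually one long-running query
        top -H                               # per-thread view: one hot thread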

    Read the article
