Search Results

Search found 97855 results on 3915 pages for 'code performance'.


  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "Top" is not helpful for this since its calculations for RES and VIRT are inconsistent. Instead, I am matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up "bytes" for all of the segments belonging to the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running the check as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is this only shared memory (and not "un-shared" memory)? If this approach is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
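
    A minimal sketch, in Python, of the per-userid totaling described above. It assumes the usual /proc/sysvipc/shm layout where the header line names the columns (the segment-size column is called "size" on most kernels; the fallback to "bytes" is just a hedge), so adjust to whatever your header actually says:

        # Sum shared-memory segment sizes per owning uid from /proc/sysvipc/shm.
        # Column positions are taken from the file's own header line.
        from collections import defaultdict

        totals = defaultdict(int)
        with open("/proc/sysvipc/shm") as f:
            header = f.readline().split()
            size_col = header.index("size") if "size" in header else header.index("bytes")
            uid_col = header.index("uid")
            for line in f:
                fields = line.split()
                totals[fields[uid_col]] += int(fields[size_col])

        for uid, total in sorted(totals.items(), key=lambda kv: -kv[1]):
            print("uid %s: %.2f GB in shared segments" % (uid, total / 2.0 ** 30))

    Note that this file only records System V shared memory segments; process-private allocations (heap, stacks, mmap'd regions) will not appear here, which may account for part of the gap against the admin console's figure.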

    Read the article

  • How do I remove 1,000,000 directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a directory due to a bug. I want to remove all these directories, let's say in the directory WebsiteCache. My first approach was to use the command line tool:

        cd WebsiteCache
        rmdir /Q /S .

    This will remove all subdirectories except the directory WebsiteCache itself, since it is the current working directory. I noticed after two hours that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is the fastest way to delete such a number of directories?
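
    As a point of comparison (my own sketch, not from the question), here is a Python approach that deletes bottom-up in whatever order the filesystem returns entries, with no sorting step; the path is a placeholder:

        # Remove every file and subdirectory under WebsiteCache, bottom-up,
        # in raw directory order. Sketch only - try it on a copy first.
        import os

        ROOT = r"C:\inetpub\WebsiteCache"   # placeholder; point at the real folder

        for dirpath, dirnames, filenames in os.walk(ROOT, topdown=False):
            for name in filenames:
                os.remove(os.path.join(dirpath, name))
            if dirpath != ROOT:             # keep WebsiteCache itself
                os.rmdir(dirpath)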

    Read the article

  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, NGINX) and have Varnish in front of them, and this works very well. However, once the cache timeout is reached, I see this: a new client requests the page; Varnish recognizes the cache timeout; the client waits; Varnish fetches the new page from the backend; Varnish delivers the new page to the client (and has the page cached, too, for the next request, which gets it instantly). What I would like instead is: the client requests the page; Varnish recognizes the timeout; Varnish delivers the old page to the client; Varnish fetches the new page from the backend and puts it into the cache. In my case it's not a site where outdated information is a big problem, especially not when we're talking about a cache timeout of a few minutes. However, I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06 ...
        HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20 ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08 ...
        HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22 ...
        HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10 ...
        HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23 ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12 ...

    I left out the hundreds of requests completing in 0.02 seconds or so. But it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across "Varnish send while cache"; it sounded similar, but it's not exactly what I'm trying to do.)
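
    What is being asked for is essentially stale-while-revalidate (in Varnish itself this behaviour is usually discussed under the name "grace mode"). As a language-neutral illustration of the idea - a conceptual Python sketch, not Varnish configuration - here is a cache wrapper that serves the expired entry immediately and refreshes it in a background thread; the fetch callable and TTL are placeholders:

        # Conceptual sketch of stale-while-revalidate: an expired entry is served
        # immediately while a single background thread fetches a fresh copy.
        import threading, time

        class StaleWhileRevalidateCache:
            def __init__(self, fetch, ttl=60):
                self.fetch, self.ttl = fetch, ttl
                self.lock = threading.Lock()
                self.entries = {}   # key -> [value, fetched_at, refresh_in_flight]

            def get(self, key):
                with self.lock:
                    entry = self.entries.get(key)
                    if entry is not None:
                        value, fetched_at, refreshing = entry
                        if time.time() - fetched_at < self.ttl:
                            return value                  # fresh hit
                        if not refreshing:                # stale: refresh in background
                            entry[2] = True
                            threading.Thread(target=self._refresh, args=(key,)).start()
                        return value                      # serve stale immediately
                value = self.fetch(key)                   # cold miss: caller waits once
                with self.lock:
                    self.entries[key] = [value, time.time(), False]
                return value

            def _refresh(self, key):
                value = self.fetch(key)                   # the slow backend call
                with self.lock:
                    self.entries[key] = [value, time.time(), False]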

    Read the article

  • Top causes of slow ssh logins

    - by Peter Lyons
    I'd love for one of you smart and helpful folks to post a list of common causes of delays during an ssh login. Specifically, there are two spots where I see anything from instantaneous to multi-second delays: between issuing the ssh command and getting a login prompt, and between entering the passphrase and having the shell load. Now, specifically, I'm looking at ssh details only here. Obviously network latency, the speed of the hardware and OSes involved, complex login scripts, etc. can cause delays. For context, I ssh to a vast multitude of Linux distributions and some Solaris hosts, using mostly Ubuntu, CentOS, and Mac OS X as my client systems. Almost all of the time, the ssh server configuration is unchanged from the OS's default settings. Which ssh server configuration options should I be interested in? Are there OS/kernel parameters that can be tuned? Login shell tricks? Etc.?

    Read the article

  • C# and SQL data layer code generator

    I've created a simple yet efficient tool to help generate stored procedures and a C# data access layer from a table. Instead of using an ORM, this uses standard ADO.NET (SqlConnection, SqlDataReader, etc). Check it out at www.asteio.com. It's saved me a ton of time and I'm hoping it does the same for you.

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache webserver in front of Tomcat hosted on EC2; the instance type is extra large with 34GB of memory. Our application deals with a lot of external webservices, and we have a very lousy external webservice which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes (ps -ef | grep httpd | wc -l = 300). I have googled and found numerous suggestions but nothing seems to work. The following is the configuration I have done, taken directly from online resources. I have increased the limits for max connections and max clients in both Apache and Tomcat. Here are the configuration details:

        //apache
        <IfModule prefork.c>
            StartServers        100
            MinSpareServers     10
            MaxSpareServers     10
            ServerLimit         50000
            MaxClients          50000
            MaxRequestsPerChild 2000
        </IfModule>

        //tomcat
        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

        //Sysctl.conf
        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure the m2xlarge server should serve more than 300 requests; I am probably going wrong somewhere in my configuration. The server chokes only during peak hours, and only when there are 300 concurrent requests waiting for the (300-second-delayed) webservice to respond. Please help.
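
    As a small diagnostic aid (my suggestion, not part of the original post), a loop like the following logs the httpd worker count every few seconds so the stall can be correlated with a hard process ceiling rather than just the slow backend; it uses the third-party psutil package and assumes the workers are named "httpd":

        # Log the number of httpd worker processes every 5 seconds. Requires psutil.
        import time
        import psutil

        while True:
            workers = [p for p in psutil.process_iter(["name"])
                       if p.info["name"] == "httpd"]
            print("%s  httpd processes: %d" % (time.strftime("%H:%M:%S"), len(workers)))
            time.sleep(5)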

    Read the article

  • How to serialize a function depending on which object instance calls it (calls from the same instance should serialize; calls from different instances should not)

    - by LondonDreams
    I have a function which fetches and updates a record from the db, and I am trying to ensure that if the function is called on the same object instance (from the same or a different thread) it behaves as synchronized, while calls coming from different object instances need not be synchronized against each other. I have tried to use a lock per client - that is, instead of synchronizing the method directly, using explicit locking through lock objects held in a Map. The function looks like:

        getAndUpdateMyHitCount(myObjId) {
            // go to the db and get the unique record for myObjId
            // fetch the value, increment it, save the update
        }

    This function may get called on the same thread by the same or different object instances. But as fetching and matching from the Map is slow, is there a more optimized way to do this? I found something similar in another question but don't feel that it is optimized.
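
    A hedged sketch of the lock-per-key idea, in Python rather than the question's Java: one small lock per myObjId is created lazily under a guard lock, so calls for the same id serialize while calls for different ids run concurrently (the function body is a placeholder):

        # Per-key locking: calls with the same obj_id are serialized, calls with
        # different obj_ids proceed in parallel. Names are illustrative only.
        import threading

        _locks = {}
        _locks_guard = threading.Lock()

        def _lock_for(obj_id):
            with _locks_guard:                  # very short critical section
                return _locks.setdefault(obj_id, threading.Lock())

        def get_and_update_my_hit_count(obj_id):
            with _lock_for(obj_id):
                # go to the db, fetch the unique record for obj_id,
                # increment its value and save the update
                pass

    In Java, a ConcurrentHashMap with putIfAbsent (or computeIfAbsent) plays the same role as setdefault here and avoids the global guard lock; the per-key lookup is normally far cheaper than the database round trip it protects.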

    Read the article

  • Issue with aborted MySQL connections (error code: 4)

    - by arikfr
    Some of the connections between my application server (Ubuntu, Apache, PHP) and my DB server (Ubuntu, MySQL) are failing with error code 4. According to the documentation, error code 4 is: OS error code 4: Interrupted system call. At first I thought that maybe the issue is that the DB server has too many connections and fails because there are too many open files. But that seems not to be the case, because: "too many open files" has a different error code (24), and I've checked that during peak time the server had 497 files open (checked using the lsof command) while the maximum is 1024. The TCP settings were already checked (see my prior question). Any ideas what this can be or what I should check?
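
    For reference, the error-number-to-name mapping quoted above can be double-checked from Python's errno tables (just a convenience, not part of the original question):

        # OS error code 4 corresponds to EINTR ("Interrupted system call").
        import errno, os

        print(errno.errorcode[4])   # EINTR
        print(os.strerror(4))       # Interrupted system call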

    Read the article

  • How can I pinpoint a USB file transfer bottleneck in Unix?

    - by HankHendrix
    I'm experiencing very slow data transfer speeds over USB 2.0 on my *nix box and was wondering how I can pinpoint the cause of the problem. I've looked into iotop and top, but the CPU and memory figures look normal (compared to guides I have checked). The affected box is Ubuntu 12.04 32-bit Server running on an Asus EEE 701 2G model, and I am transferring from the OS over USB 2.0 to an external HDD (which transfers at 30MB/s+ under Windows 7 on another machine). I get rsync write speeds of 1MB/s from the OS to the USB HDD, which seems ridiculously slow. These speeds are consistent across other USB HDDs and sticks.
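
    One way to take rsync out of the picture (my suggestion, not from the original question) is a raw sequential-write timing straight to the mount point; the target path below is a placeholder:

        # Time a plain sequential write to the USB mount to see what the device
        # and filesystem manage on their own. The target path is a placeholder.
        import os, time

        TARGET = "/media/usbdisk/throughput_test.bin"   # adjust to your mount
        CHUNK = b"\0" * (1024 * 1024)                   # 1 MiB
        TOTAL_MB = 256

        start = time.time()
        with open(TARGET, "wb") as f:
            for _ in range(TOTAL_MB):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.time() - start
        print("wrote %d MiB in %.1f s (%.1f MB/s)" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))

    If this also lands around 1MB/s the problem is below rsync (USB stack, driver, or the drive's filesystem); if it is much faster, the rsync invocation itself deserves a closer look.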

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? Now, "top" and similar tools aren't the answer, because they either show CPU or memory usage, but not both at the same time. What I need is a single command which I might be able to type as it happens - something that will figure out any of "the system is trying to swap 8GB of RAM to disk because process X ...", or "process X seeks all over the disk", or "process X uses 400% CPU". So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235 cp        - Disk thrashing
          87 chrome    - Uses 2 GB of RAM
         137 nfs_bench - Uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", but the user is quickly overwhelmed when it comes to "resident size", "virtual memory" or "process life cycle". My argument goes like this: a user notices a problem. There can be thousands of reasons... well, almost :-) The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what these numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. So what the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this produces many IRQs, this process allocates a lot of RAM (and it's still growing)". This would be a relatively short list. It would be much simpler for someone new to this to locate the culprit from this list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM - the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
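
    A rough sketch of the kind of "meta tool" described above, using the third-party psutil package; the thresholds are arbitrary illustrative values. It samples per-process CPU and resident memory over a short window and prints only the worst offenders with a plain-language hint (disk hogs could be added the same way via io_counters(), which generally needs root):

        # Print only the processes hogging CPU or RAM, with a short hint.
        # Requires psutil; the thresholds are arbitrary and purely illustrative.
        import time
        import psutil

        procs = list(psutil.process_iter(["pid", "name"]))
        for p in procs:
            try:
                p.cpu_percent(None)        # prime the per-process CPU counter
            except psutil.Error:
                pass

        time.sleep(2)                      # sampling window

        for p in procs:
            try:
                cpu = p.cpu_percent(None)
                rss_gb = p.memory_info().rss / 2.0 ** 30
            except psutil.Error:
                continue
            hints = []
            if cpu > 90:
                hints.append("uses %.0f%% CPU" % cpu)
            if rss_gb > 1.0:
                hints.append("uses %.1f GB of RAM" % rss_gb)
            if hints:
                print("%6d %-12s - %s" % (p.info["pid"], p.info["name"], ", ".join(hints)))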

    Read the article

  • Windows 2008 R2 on ESXi 4.1 cpu utilization kernel high

    - by MK.
    I have a Win2k8 guest running on ESXi 4.1. The host has 12 cores, and the problem happens even if the guest is the only VM on the host. We have 4 cores dedicated to the guest. We noticed that the network starts choking when the CPU load goes up. After some testing we noticed that when running a simple CPU-hogging tool set up to run 3 threads at 100%, the regular CPU load goes to 75% like it should, and the "kernel times" graph in Task Manager goes up to 25%. My intuition tells me that the network problem and the kernel times problem are the same. This is confirmed by another similar VM we created on the same host, which doesn't have either of the problems. VMware Tools are obviously installed. The NIC is an e1000. What else can we do to troubleshoot this?

    Read the article

  • Lightweight ad-blocker for firefox

    - by student
    On an old machine (512 MB RAM) I am currently running Ubuntu Jaunty and Firefox 3.0.15. I tried the ad-blocker add-on Adblock Plus, but it eats lots of RAM (300 MB). Is the high memory load of this add-on a bug that is fixed in a newer version, or is it just normal? If it is normal, why is the memory usage so high? Is there another ad-blocker add-on for Firefox, or another browser/add-on combination for Linux (Ubuntu Jaunty), that uses significantly less RAM?

    Read the article

  • What Do You Think About This Smelly Test?

    - by panamack
    I caught a whiff of a smell emanating from one of my tests, in a scenario akin to the following:

        [TestFixture]
        public class CarPresenterTests {
            [Test]
            public void Throws_If_Cars_Wheels_Collection_Is_Null() {
                IEnumerable<Wheels> wheels = null;
                var car = new Car(wheels);
                Assert.That(() => new CarPresenter(car),
                    Throws.InstanceOf<ArgumentException>()
                          .With.Message.EqualTo("Can't create if cars wheels is null"));
            }
        }

        public class CarPresenter {
            public CarPresenter(Car car) {
                if (car.Wheels == null)
                    throw new ArgumentException("Can't create if cars wheels is null");
                _car = car;
                _car.Wheels.Rolling += WheelsRollingHandler;
            }
        }

    I was struggling to describe what the problem is, except that it seems wrong that a CarPresenter should attempt to dictate to a Car whether or not its Wheels are initialised correctly. I wondered what pointers people here might give me?

    Read the article

  • What to do before trying to benchmark

    - by user23950
    What are the things that I should do before trying to benchmark my computer? I've got these tools for benchmarking: 3DMark, Cinebench, Geekbench, Juarez DX10, and Open Source Mark. Do I need to run a full spyware and virus scan before proceeding? What else should I do in order to get accurate readings?

    Read the article

  • Use the same database or replicate it for reports and web

    - by developer
    I would like to know, if I have a website with a huge database and expensive (time-consuming) reports, whether the best setup is one database for the web and a replicated one for reports, or a single database for both. I'm worried that users will run reports covering 5 or more years because they need that information, and the website will crash because of this.

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it, or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question that I was asking myself at the end of 2010: how can I get my Silverlight code organized? My requirements for a snippet manager were: it needs to be free; an easy way to view XAML and C# code-behind together in one "view"; the ability to store the code snippets in the cloud in case my HDD dies; and searchable keywords to quickly find code snippets. I started looking for a snippet manager that would allow me to do just that and finally found Snippet Manager. Before going any further, I think that one of the most important things to note here is that this software supports 37 languages. It's not just for Silverlight developers or C#-only folks; the software supports Java, SQL and even COBOL. Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections: the top part is my XAML and the bottom is my C# code-behind. I've included a sample below of my code snippets so that you can get an idea of how I organized them. Another thing that's great about this software is that it supports plain text; I added some connection strings in the TEXT section below. Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called "snippets" on my FTP server and hit the upload button once I am finished adding my new code snippets. This will allow me to use the code snippets on another computer with this application on my USB key. See the screenshots below: enter your FTP credentials, hit the Upload button on the toolbar, then log in to your FTP server and verify that the files are now on the FTP server. Another great feature of the Snippet Manager is that you can also integrate it into VS2010 by clicking Tools -> External Tools and setting up the external tool to point to the executable. You can now launch it by going to Tools -> Snippet Manager. If you want, you could also add a shortcut to launch the program with hotkeys. As you can see, this is a nice little program that includes everything needed to organize your code snippets very cleanly. I didn't go over every feature, but this is something that you might want to download and give a shot.

    Read the article

  • Continuing to code on large projects

    - by user3487347
    I am a hobbyist programmer, and I've started many medium-sized projects to work on just by myself. These include games, a raytracer, physics simulations, etc. By the time these projects get to a certain size (around 5000 lines), I begin to slow down in adding features to the program. This is not because of a lack of ideas of what to implement, but rather a struggle with how to go about it. In particular, I'm afraid of breaking what I already have working in order to implement a new feature. I've tried using version control like Git and Subversion, but these seem unnecessary when you are a one-man team. I simply have a folder of "versions" of my program, one for each major change I make. How do I keep coding past this 5000-line mark? What about the 50,000-line mark?

    Read the article

  • Code better with JustCode Q1 SP1

    We've just uploaded Service Pack 1 for JustCode, so feel free to log in to your Telerik account and download JustCode. Earlier this week Visual Studio 2010 RTM was released, and we are happy to announce that this version of JustCode fully supports it. Other areas of interest in this release are the typing assistance behavior and JavaScript formatting. We also further optimized JustCode's memory usage and speed. You'll find the full release notes for the Service Pack here. Visual Studio 2010 changes: as Visual Studio is now officially out, we now fully support its final version as well as the new .NET 4.0 framework features. Typing Assistance improvements: as JustCode's first official release approached, we started getting an increasing number of requests for a typing assistance feature. In spite of being at a fairly advanced stage of our development cycle, we managed to squeeze in a basic ...

    Read the article

  • Updates to Silverlight Code Browser

    A couple of bits of news. I did a number of updates to the HackingSilverlightCodeBrowser here: http://www.hackingsilverlight.net/HackingSilverlightCodeBrowser.html, including things like MEF and IsolatedStorage. As for the community edition of the book, I'm still trying to get some contributors to finish. I've been slammed with 60-hour weeks, so chapters 2 and 4 and appendix A still need edits; maybe a week's worth of work, pending time, which tends to be limited in my life. :)

    Read the article

  • What is the maximum memory that an IIS6 web site/app pool can use?

    - by Robin M
    I have an IIS 6 server running on Windows 2003 SP2 x86. The server has 4GB of RAM and runs consistently with 2GB allocated. I realise that with x86 the server won't utilize all of the 4GB of RAM, and the application address space is also limited, but the IIS processes seem to be limited elsewhere: w3wp.exe never has more than 500MB allocated, and I occasionally get OutOfMemory exceptions from a busy .NET application (there are several applications running, each with a separate application pool). What is the maximum memory that an IIS6 web site/app pool can use?

    Read the article

  • Windows 7 slowing down during hard drive activity

    - by Iniquities of evil men
    Sometimes, during normal use of my PC, it will (seemingly) randomly slow down, and sometimes even freeze for several seconds. During this slow-down period, it looks like a hard drive (I don't know which drive it is) is constantly being written to. During the last slow-down, I started Windows Resource Monitor and found out that the System process was writing up to 10MB/s to a drive (I suspect it's the system drive, C:\, but I don't know for sure). I'm not doing anything unusual (at least, I don't think I am), and most of the time everything works normally, but, as I said, it just randomly slows down at times. Any ideas on what might be causing this and how I can prevent it from happening again? (I have a triple-core processor and 4GB of RAM. My system drive is a WD Caviar Black 500GB; my secondary 'data' drive is a Samsung drive, which I don't know the model number of, but I can look it up. I can also post my full PC specs if needed.)

    Read the article

  • Applications start very slowly from a network path

    - by Snowfox
    Hi. We have a Windows 2008 server which hosts the network share \\srvcompany\lib. This share contains several applications needed for daily business. Every client/user (all Win XP) has shortcuts on the desktop to these apps. The problem is that on several (but not all) clients the apps start very slowly. If I copy the application's program files to a local folder, they start quickly. When I watch the memory usage in Task Manager on such a "slow" machine while an application starts, I notice that the memory usage grows much more slowly than when I start the app on a "fast" machine. Yet when I copy files from this share with Windows Explorer, the speed is nearly the same on both. I've also checked the network driver; both tested clients have the same network card with the same driver version. Does anyone have an idea where or what I should check next to solve this problem? Thanks for any answers.

    Read the article

  • mysql is not using multiple cpus

    - by mhost
    Our MySQL server has been using a lot of CPU lately (it has reached 100% several times and stayed there for a while), and I noticed that the CPU load is all on one core of one CPU. I was hoping to spread that out across all 4 cores on my server. I have been tweaking the MySQL settings to use more RAM and less CPU, but it still occasionally reaches very high CPU usage. Everything I find on the topic refers to thread_concurrency (which I've read is a Solaris-only setting). What can I do on Linux? Thanks.

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute - sometimes as long as 30 seconds. I use the following command to watch these queries being executed: watch -n1 mysqladmin proc stat. We have still not been able to track down the root of this problem. I would appreciate it if anyone had any pointers as to what we can check or improve to resolve the issue. Thanks
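
    A slightly more detailed variant of the watch/mysqladmin loop above (a hedged sketch, not part of the question): it polls SHOW FULL PROCESSLIST and prints only statements that have been running longer than a threshold, including their State, which helps show whether the slow writes are stuck waiting on locks. It assumes the third-party PyMySQL package; the credentials are placeholders:

        # Poll SHOW FULL PROCESSLIST once a second and log statements that have
        # been running longer than THRESHOLD seconds. Credentials are placeholders.
        import time
        import pymysql

        THRESHOLD = 5   # seconds

        conn = pymysql.connect(host="localhost", user="monitor", password="secret")
        try:
            while True:
                with conn.cursor() as cur:
                    cur.execute("SHOW FULL PROCESSLIST")
                    for row in cur.fetchall():
                        # first columns: Id, User, Host, db, Command, Time, State, Info
                        _id, user, host, db, command, t, state, info = row[:8]
                        if command == "Query" and t >= THRESHOLD and info:
                            print("%3ds  %-20s %s@%s  %s" % (t, state or "", user, host, info))
                time.sleep(1)
        finally:
            conn.close()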

    Read the article
