Search Results

Search found 35507 results on 1421 pages for 'performance test'.


  • Multiple servers vs. 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're going to use blade servers. Now, I'm not an expert at all, but my question to the guys was: if we need, for example, three 2GHz CPU / 2GB RAM machines and you give me 1 machine with three 2GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of the two solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet / intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites - just a few - take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problem, as fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the Mozilla site, on a 10M ADSL connection. I tried many tweaks I found by googling, like disabling IPv6 and changing the HTTP pipelining settings in Firefox's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck; findings are in the screenshot. Well guys, any idea what the issue is? And how to solve it? I repeat, this only happens with Firefox (3.6.3 and prior versions), for a few sites only (even sites with many more requests, images, javascripts and stylesheets work fine), and the HTTP pipelining and IPv6 tweaks in about:config didn't work. Thanks

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing? Just so you know where I'm getting this (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:

    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined; it is one-half of the time taken for a full rotation, or L = (0.5/250) s, expressed in ms.
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O - for example, for an I/O with a block size of 32 KB, X = 32 KB / (40 MB/s).

    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) s + 32 KB / (40 MB/s) = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.
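
    To make the book's arithmetic above easy to re-check against other drive specs, here is a small worked sketch of the same calculation in Python (the numbers are the ones quoted in the excerpt; swap in a real drive's seek time, spindle speed and transfer rate):

      # Worked version of the IOPS estimate quoted above (same numbers as the book excerpt).
      avg_seek_s = 5e-3                                   # T: average seek time, 5 ms
      rpm = 15_000                                        # spindle speed
      rotational_latency_s = 0.5 / (rpm / 60)             # L: half a rotation = 0.5/250 s = 2 ms
      block_bytes = 32 * 1024                             # 32 KB I/O
      transfer_rate_bps = 40 * 1_000_000                  # 40 MB/s internal transfer rate
      transfer_time_s = block_bytes / transfer_rate_bps   # X = 32 KB / 40 MB/s, roughly 0.8 ms

      service_time_s = avg_seek_s + rotational_latency_s + transfer_time_s   # TS, roughly 7.8 ms
      iops = 1 / service_time_s                                              # roughly 128 IOPS

      print(f"TS = {service_time_s * 1000:.1f} ms, max IOPS = {iops:.0f}")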

    Read the article

  • Why such a dramatic difference in wireless router max. simultaneous connections?

    - by Jez
    Recently, I've needed to look into buying a wireless router for a mission-critical system at work that will need to support quite a few simultaneous connections (potentially a few hundred laptops). One thing I've noticed is that there seems to be a dramatic difference between the max. simultaneous connections different routers can support; see this page for example - anything from 32 to 35,000! Why is there this degree of difference? You'd have thought that if we know how to make routers that can handle thousands of connections, we wouldn't be making stuff that's limited to a pathetic 32 anymore. Is it a firmware thing? A hardware thing? Are low-end manufacturers purposely putting low arbitrary connection limits in so people can be "encouraged" to pay more for high-end routers?

    Read the article

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to come from the file:

    C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal

    I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale. My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions.

    Updates:
    - VACUUMing the Firefox profile databases does not help with this problem.
    - The SSD Optimizer in the Intel SSD Toolbox does not help either.

    Read the article

  • CPU and HD degradation on source-based Linux distributions

    - by danilo2
    I have wondered for a long time whether source-based Linux distributions, like Gentoo or Funtoo, "destroy" your system faster than binary ones (like Fedora or Debian). I'm talking about CPU and hard drive degradation. Of course, when you're updating your system, it has to compile everything from source, so it takes longer and your CPU is worked harder (it runs hotter and under more load). Such systems compile hundreds of packages weekly, so does it really matter? Does such a system degrade faster than binary-based ones?

    Read the article

  • Error when trying to run Activiti BPM Explorer

    - by test test
    I am facing the following problem: I have downloaded Activiti BPM, which runs under Apache Tomcat. I have installed both Java jre7 and Java jdk1.7.0_06, and set JAVA_HOME to C:\Program Files\Java\jdk1.7.0_06. But when I try to run Activiti BPM by typing the following at the Windows 7 command line:

    C:\activiti-5.10\activiti-5.10\setup>ant demo.start

    the Tomcat server starts and the demo builds successfully, but if I try to navigate to http://localhost:8080/activiti-explorer I get the following error:

    HTTP Status 404 - /activiti-explorer
    type: Status report
    message: /activiti-explorer
    description: The requested resource (/activiti-explorer) is not available.
    Apache Tomcat/6.0.32

    Read the article

  • MySQL maintenance - how to clear the buffer?

    - by Dougal
    We have a server running our web app (PHP / MySQL) which is SLOW. My predecessor says: "We used to do database maintenance, which used to clear the buffer, cache and unwanted variables." I wonder what on earth he means by that statement. Does he mean a simple OPTIMIZE of the tables? Or clearing the query cache? I understand MySQL but don't really know what he is describing. I would appreciate any pointers. Thanks.

    Read the article

  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following errors:

    2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
    2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    These errors simply keep repeating themselves until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but here is what I get when going through their steps. After telnetting to localhost 4949, I can see a list of plugins and a node at archstl.archstl.org, but I can't fetch anything. The output is as follows:

    > fetch cpu
    .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:

    user.value 100829130
    nice.value 3479880
    system.value 13969362
    idle.value 664312639
    iowait.value 12180168
    irq.value 14242
    softirq.value 199526
    steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this:

    "If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file."
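
    The telnet test above can also be scripted. Here is a minimal sketch (mine, not from the original post) that talks to the munin node the same way the manual "telnet localhost 4949" / "fetch cpu" check does; the host, port and plugin name are assumptions to adjust for your setup:

      import socket

      HOST, PORT, PLUGIN = "localhost", 4949, "cpu"

      def read_response(stream):
          # A munin node terminates each response with a line containing a single "."
          lines = []
          for line in stream:
              if line.strip() == ".":
                  break
              lines.append(line.rstrip("\n"))
          return lines

      with socket.create_connection((HOST, PORT), timeout=10) as sock:
          stream = sock.makefile("rw")
          print("banner:", stream.readline().strip())   # e.g. "# munin node at <hostname>"
          stream.write("fetch %s\n" % PLUGIN)
          stream.flush()
          for line in read_response(stream):
              print(line)   # expect lines like "user.value 100829130" when the plugin works

    If this prints only an empty response (the bare "." the poster is seeing) while munin-run works, that points at the node's plugin environment rather than the plugin itself, as the wiki tip suggests.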

    Read the article

  • Monitoring user login time

    - by beakersoft
    Hi, I have recently been given the task of trying to work out why the login time (not machine boot time) for some of our users seems slow. The vast majority of clients (95%) are running XP SP3 against Windows 2003 domain controllers, and most users have the same model of machine. I would like to be able to see how long each of the policies takes to load (if possible, split by user and computer policy) and any other info that might help (services starting, etc.). I changed the UserEnvDebugLevel registry option to generate the userenv.log file, but it didn't contain very much info. Thanks, Luke
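
    For reference, the UserEnvDebugLevel change mentioned above is normally made under the Winlogon key. The sketch below (an illustration on my part, not from the original post) sets the commonly documented "verbose logging to userenv.log under %windir%\debug\usermode" value of 0x30002 using Python's winreg module; double-check the value against Microsoft's documentation for your OS before relying on it, and run it as Administrator on the client being investigated:

      import winreg

      KEY_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
      VALUE_NAME = "UserEnvDebugLevel"
      VERBOSE_TO_LOGFILE = 0x30002   # assumed: verbose output written to userenv.log

      with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                          winreg.KEY_SET_VALUE | winreg.KEY_QUERY_VALUE) as key:
          winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, VERBOSE_TO_LOGFILE)
          value, _ = winreg.QueryValueEx(key, VALUE_NAME)
          print("UserEnvDebugLevel is now 0x%X" % value)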

    Read the article

  • Intel Core 2 Duo E6600 versus AMD Athlon II X2 3GHz

    - by Billy ONeal
    Hello :) I have an Intel Core 2 Duo E6600 (2.4GHz) in my current desktop, and I have a newer machine with an AMD Athlon II X2 at 3.0GHz. I'm wondering how the systems will perform in comparison to one another. I'd like to use the AMD because it's 45nm and uses less power, but I don't want to do so at a loss in performance. Which should perform better? Billy3

    Read the article

  • Brand new Seagate HDD has high raw read error rate

    - by kpax
    I've just purchased a brand new Seagate ST31000524AS 1TB HDD. The manufacture date shows as January 2012 (yes, that's as new as new can get), so it must be from one of the new post-flood batches out of Thailand. Anyway, I downloaded a copy of the Active Hard Disk Monitor tool to check the S.M.A.R.T. parameters, and I find that the parameter Raw Read Error Rate is very low. Should I be worried? Will this rectify itself over time? This HDD is just 7 hours old; what gives?

    Edit: I meant a high raw read error rate - title updated accordingly.

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5,000+), and I'm a bit cautious about adding that many rules to iptables. ipt_recent would be awesome for doing this, and it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
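
    For comparison, this is roughly what the ipset approach mentioned above looks like once a recent enough ipset/iptables is in place - a sketch on my part, not something from the post, and the set name, blocklist path and exact ipset/iptables syntax (which changed between old and new ipset releases) are assumptions to verify against the installed versions:

      import subprocess

      SET_NAME = "scrapers"                  # hypothetical set name
      BLOCKLIST_FILE = "/etc/blocklist.txt"  # hypothetical file: one IPv4 address per line

      def build_restore_script(path):
          # Build an "ipset restore" script that creates the set and fills it in one pass.
          lines = ["create %s hash:ip -exist" % SET_NAME]
          with open(path) as fh:
              for raw in fh:
                  ip = raw.strip()
                  if ip and not ip.startswith("#"):
                      lines.append("add %s %s -exist" % (SET_NAME, ip))
          return "\n".join(lines) + "\n"

      # Load all entries in one shot; the kernel keeps them in a hash, so lookups stay cheap
      # even with tens of thousands of addresses.
      subprocess.run(["ipset", "restore"], input=build_restore_script(BLOCKLIST_FILE),
                     text=True, check=True)

      # A single rule referencing the whole set replaces thousands of individual -s rules.
      subprocess.run(["iptables", "-I", "INPUT",
                      "-m", "set", "--match-set", SET_NAME, "src",
                      "-j", "DROP"], check=True)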

    Read the article

  • How to configure an ASUS motherboard

    - by Absolute0
    I have an ASUS P7P55-M motherboard with an Intel Core i5-750 processor and 4 GB of RAM rated at 1600 MT/s. For some reason the default settings of the motherboard make all the components run at half their optimum speeds. I have switched to the "D.O.C.P." profile and supposedly everything now runs as it's supposed to (verified with CPU-Z). There is also an "X.M.P." profile and a manual one. Is either DOCP or XMP safe to go with? I wouldn't use the manual mode, as I would likely mess something up badly. XMP seems to be more memory-oriented.

    Read the article

  • VMware Pre-Allocated vs. Growable: which is faster?

    - by tekiegreg
    In an effort to increase speed in my VMware setup, I was thinking about converting a 32-bit Windows XP guest I have from growable to pre-allocated. I'm currently running VMware Workstation 7 with Windows 7 64-bit as the host. Specs:

    - Dual-core CPU, one core allocated to the guest
    - 4GB of RAM, 2GB to the guest
    - HD max capacity is 500GB, 150GB allocated to the guest (I have 300GB left and don't mind parting with the space; currently the virtual HD is 80GB and converting would obviously add another 70GB)
    - The HD the guest is running on is separate from the host OS

    Either that or any other suggestions you might have would be appreciated, thanks!

    Read the article

  • How do I remove 1,000,000 WebsiteCache directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a WebsiteCache directory. I want to remove all of these directories. My first approach was to use the command-line tool:

    cd WebsiteCache
    rmdir /Q /S .

    This will remove all subdirectories except WebsiteCache itself, since it is the current working directory. I noticed after two hours that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? It must take additional effort to do this in order. What is a fast way to delete such a number of directories?
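
    As a point of comparison (my own sketch, not something from the question), the same cleanup can be done by walking the directory once and deleting each subdirectory as it is encountered, without building or sorting a full listing first; the path is an assumption:

      import os
      import shutil

      CACHE_ROOT = r"C:\WebsiteCache"   # assumed location of the WebsiteCache directory

      removed = 0
      with os.scandir(CACHE_ROOT) as entries:          # streams entries lazily
          for entry in entries:
              if entry.is_dir(follow_symlinks=False):
                  shutil.rmtree(entry.path, ignore_errors=True)
                  removed += 1
                  if removed % 10000 == 0:
                      print("removed %d directories so far" % removed)
      print("done, removed %d directories" % removed)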

    Read the article

  • Jumbo Frames, ISCSI and ESXi

    - by vlannoob
    I have enabled jumbo frames (MTU 9000) in ESXi for all my vmNICs, VMkernels, vSwitches, iSCSI bindings, etc. - basically anywhere in ESXi that has an MTU setting, I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for jumbo frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI. Each of these ports is set to jumbo frames (9000) as well. So now the questions:

    1. Do I need to log into each Windows Server VM I am running and manually set the NIC to jumbo frames in its properties in Device Manager as well?
    2. What's the best way of testing that jumbo frames are indeed working as intended?
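
    On the second question, one common check (an assumption of mine, not something from the post) is a don't-fragment ping sized just under the jumbo MTU - 9000 bytes minus 28 bytes of IP and ICMP headers leaves an 8972-byte payload - run from a host on the storage network against each iSCSI portal. A rough sketch using the Linux iputils flags (Windows uses "ping -f -l 8972" and ESXi has "vmkping -d -s 8972"; verify the flags on your own hosts):

      import subprocess

      TARGETS = ["192.168.130.101", "192.168.130.102"]  # hypothetical iSCSI portal IPs
      PAYLOAD = 9000 - 28   # jumbo MTU minus 20-byte IP header and 8-byte ICMP header

      for host in TARGETS:
          result = subprocess.run(
              ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", host],
              capture_output=True, text=True)
          status = "jumbo frames OK" if result.returncode == 0 else "FAILED (fragmentation needed?)"
          print("%s: %s" % (host, status))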

    Read the article

  • What are possible causes of keyboard lag on my desktop machine?

    - by Jer
    I am running Windows 7 and began experiencing keyboard lag in most applications, and it seems to be getting worse. Certain websites are the worst - on some, I can type a sentence, take my hands off the keyboard, and watch the characters continue to appear on the screen for several seconds. Others are not as bad, but still noticeable and annoying. I just started noticing it in non-browser applications (e.g. Outlook) as well. I've disabled all extensions in Firefox and rebooted my machine, and that did nothing. Nothing is using much memory or CPU cycles, even when the lag is occurring. This is a machine at work with very strict controls over what can be installed, so the chances of any kind of malware are very slim. I don't believe anything has been installed since before the problem started. What could be causing this, and/or what can I do to debug it?

    Read the article

  • Microsoft Mouse and Keyboard Center - Slow response for App-specific shortcuts

    - by Darrel Hoffman
    So a few months ago, I bought a new MS mouse, and was surprised that they'd discontinued Intellipoint in favor of this Microsoft Mouse and Keyboard Center. It seems to have the same functionality underneath all the bloat, but there's a very serious drawback - when I set up application-specific functions for the extra buttons on the mouse, they work, but sometimes with a very long delay, like up to a minute or more. For example, I often set up the left side button as an "Undo" in various programs for convenience. But sometimes, when I try to use that Undo button, nothing happens, so I'm forced to use the standard Ctrl-Z or whatever. But then, a minute or so later, it suddenly remembers that I hit that button a while back, and calls the Undo unexpectedly on something entirely different. It's infuriating. No modern computer function should be this slow. It's not the software or the computer itself, because doing an Undo via Ctrl-Z or the menu still works instantly. It's very definitely a side-effect of delayed response to the mouse button. Usually after it delays the first time, it'll work quickly after that, but if you haven't used a given shortcut in several minutes, it "forgets" again and you get another inexplicably long delay. Intellipoint never had this problem, but it's not supported any more, and not compatible with the newer mice. Has anyone else noticed slow-downs with MS M&K C and app-specific shortcuts? Any ideas how to get around this? I use these shortcuts extensively in my workflow and it's just entirely unacceptable to have such a long delay in what should be a pretty basic feature.

    Read the article

  • Reporting memory usage per process/program

    - by Nick Retallack
    How can I get the current memory usage (preferably in bytes, so the numbers can be added up accurately) for all running processes individually? Can I roll up the summaries for child processes into the process that spawned them (e.g. all Apache threads together)? Sometimes my server runs out of memory and becomes unresponsive, and I want to discover what is using up all the memory. Unfortunately, it's likely not to be a single process; some programs spawn hundreds of processes, each using very little memory, but it adds up. On a side note, is it normal for Apache to spawn 200+ processes?
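
    One way to get this kind of per-program roll-up (a sketch of my own, not from the question) is to aggregate the resident set size that ps reports for each process, grouped by command name; ps prints RSS in kilobytes, so it is converted to bytes here. Grouping by command name is only an approximation of "children rolled up into the program that spawned them":

      import subprocess
      from collections import defaultdict

      ps = subprocess.run(["ps", "-eo", "rss=,comm="],
                          capture_output=True, text=True, check=True)

      totals = defaultdict(lambda: [0, 0])          # command name -> [bytes, process count]
      for line in ps.stdout.splitlines():
          parts = line.split(None, 1)
          if len(parts) == 2 and parts[0].isdigit():
              rss_kb, comm = parts
              totals[comm.strip()][0] += int(rss_kb) * 1024   # kB -> bytes
              totals[comm.strip()][1] += 1

      for comm, (nbytes, count) in sorted(totals.items(), key=lambda kv: -kv[1][0]):
          print("%14d bytes  %5d procs  %s" % (nbytes, count, comm))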

    Read the article

  • Ultra-lightweight web browser?

    - by zildjohn01
    Are there any good super-lightweight graphical web browsers out there? I'd like to be able to browse the web on an old PC, but the mainstream crop of browsers is just too heavy, and I don't want to resort to something like Lynx. There must be something decent out there that'll fit in 16 or 32MB of RAM comfortably. 100% standards compliance isn't necessary, but I'd like something that supports the most widely used parts of CSS and JavaScript. The goal is to get 98% of sites usable in a nice, graphical format.

    Read the article

  • High-frequency trading: kernel bypass vs. kernel tuning?

    - by Keith
    I often hear tales about high-frequency trading shops using network cards which do kernel bypass. However, I also often hear about them using operating systems where they "tune" the kernel. If they are bypassing the kernel, do they need to tune the kernel? Is it the case that they do both because, whilst the network packets bypass the kernel thanks to the card, there is still all the other stuff going on which kernel tuning would help with? In other words, they use both approaches: one just speeds up network activity and the other makes the OS generally more responsive/faster? I ask because a friend of mine who works in this industry once said they don't really bother with kernel tuning anymore because they use kernel-bypass network cards. This didn't make too much sense to me, as I thought you would always want a faster kernel for all the work that isn't offloaded to the card.

    Read the article

  • Windows 2000 freezing during large disk write

    - by robert
    We have a Windows 2000 SP4 server which freezes up for about a minute while its web app does a ~500MB write operation. I can see the web app start to do I/O activity (through Process Explorer), then the RDP session becomes unresponsive: you can click on windows and buttons but nothing happens. When the disk write finally finishes, the session 'catches up' on all the mouse clicks you made during the freeze in a mad flurry of window activity, and the server returns to normal. During the freeze the web app stops as well. The same behaviour happens on the console of the server (so I know it's not a network thing). Nothing appears in the event logs; it's like nothing happened. I have upgraded all the HP hardware drivers to the latest ProLiant support pack and also run the HP hardware diagnostics, which found nothing wrong. What would cause a disk write to lock up the rest of the OS?

    Read the article
