Search Results

Search found 16787 results on 672 pages for 'mod disk cache'.


  • VBUG Spring Conference, 28th and 29th March in Reading

    - by Eric Nelson
    I presented at VBUG last year and can confirm that they put on a really good event. This year I stood aside for my “replacement” Steve Plank to work his magic. Worth checking out…
    VBUG SPRING CONFERENCE, 28/29 March 2011, Wokefield Park, Mortimer, Reading RG7 3AH
    Day One (Mon 28 March):
    - Developing SharePoint 2010 with Visual Studio 2010 – Dave McMahon
    - Cache Out with Windows Server AppFabric – Phil Pursglove
    - Extending your Corporate Network in to the Windows Azure Data Centre with Windows Azure Connect – Steve Plank
    - Silverlight Development on Windows Phone 7 – Andy Wigley
    Day Two (Tues 29 March):
    - Self Service BI for your users, but what does that mean for you? – Andrew Fryer
    - Design Patterns – Compare and Contrast – Gary Short
    - Projecting your corporate identity to the cloud – Steve Plank
    - May the Silverlight 4 be with you – Richard Costall
    - The Step up to ALM – an Introduction to Visual Studio 2010 TFS for the Visual Sourcesafe User – Richard Fennell
    For more information go to http://cms.vbug.net (It isn’t free but it is high quality)

    Read the article

  • Skipping intermediate Ubuntu OS upgrades: How do I upgrade from 9.04 to 10.04.2?

    - by Yadnesh
    I'm currently running Ubuntu 9.04 Jaunty. I want to upgrade to 10.04.2, but whenever I use Update Manager to do this, it fails with the error "An upgrade from 'jaunty' to 'lucid' is not supported with this tool". I also tried to run sudo do-release-upgrade -d, but it fails with the same error message:
    Checking for a new ubuntu release
    Done Upgrade tool signature
    Done Upgrade tool
    Done downloading
    extracting 'lucid.tar.gz'
    authenticate 'lucid.tar.gz' against 'lucid.tar.gz.gpg'
    tar: Removing leading `/' from member names
    Reading cache
    Checking package manager
    Can not upgrade
    An upgrade from 'jaunty' to 'lucid' is not supported with this tool.
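
    The tool refuses to go from jaunty straight to lucid because the upgrader only moves one release at a time (LTS-to-LTS is the exception, and 9.04 is not an LTS). A hedged sketch of the usual stepwise route, assuming Jaunty's repositories have already moved to old-releases:

      # Point sources at the archive for EOL releases, then bring jaunty current:
      sudo sed -i 's/\(archive\|security\)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo apt-get update && sudo apt-get dist-upgrade
      # Then step through the intermediate release:
      sudo do-release-upgrade    # jaunty to karmic (9.10)
      sudo do-release-upgrade    # karmic to lucid (10.04)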

    Read the article

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up at a new webhost. I have yet to move my sites over. Upon initial SSH connection, I checked the load and memory usage; they do seem rather higher than I would like:
    # uptime
    12:06:51 up 71 days, 23:23, 1 user, load average: 9.02, 9.49, 9.45
    # free
    total used free shared buffers cached
    Mem: 33014800 31927192 1087608 0 2384812 17729816
    -/+ buffers/cache: 11812564 21202236
    Swap: 16787916 8584 16779332
    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not? Also, there is no /etc/issue file, so I tried other methods to guess the OS:
    # uname -a
    Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
    # which yum
    /usr/bin/yum
    # which apt-get
    #
    That looks like CentOS / RHEL 6.2 possibly?
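
    A couple of stock commands can settle both guesses; a sketch (the release file is a Red Hat-family convention, so it should pin down the distro on a yum-based box like this):

      cat /etc/redhat-release          # e.g. "CentOS release 6.2 (Final)"
      ps aux --sort=-%cpu | head       # which processes are actually producing that load

    On a shared host, keep in mind the load average reflects every tenant on the machine, not just your account.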

    Read the article

  • How to safely purge in Varnish if backend is sick without losing content

    - by Highway of Life
    If the backend is sick, what is the preferable way to ensure that stale content can still be served when a PURGE request is made? By default, when a PURGE request is made, whether or not the backend is sick, the content is eliminated from the Varnish cache, and if the backend is down a 503 page is served to the user until the backend comes back online to serve a new version of the content. I'd like to be able to at least serve up a stale version of the content if a new version could not be retrieved from the backend. Is this possible without installing the softpurge Varnish mod?
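
    For the sick-backend half of this, Varnish's grace mode keeps stale objects around and serves them while the backend is unhealthy, though it cannot resurrect an object a hard PURGE has already evicted (that gap is exactly what softpurge fills). A minimal Varnish 3-era sketch:

      sub vcl_recv {
          if (req.backend.healthy) {
              set req.grace = 30s;   # healthy: tolerate slightly stale content while refreshing
          } else {
              set req.grace = 6h;    # sick: keep serving stale for hours
          }
      }

      sub vcl_fetch {
          set beresp.grace = 6h;     # retain objects long enough to be usable as stale copies
      }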

    Read the article

  • Building lirc package from source with patches

    - by joystick
    I'd like to build the latest lirc package for 12.04 with two patches from http://bit.ly/17779VW to make the USB Infrared Toy v2 work. Running sudo apt-build source lirc, then ll in /var/cache/apt-build/build, gave me:
    total 960
    drwxr-xr-x 10 root root 4096 Nov 5 07:07 lirc-0.9.0
    -rw-r--r-- 1 root root 113909 May 5 2011 lirc_0.9.0-0ubuntu1.debian.tar.gz
    -rw-r--r-- 1 root root 1553 May 5 2011 lirc_0.9.0-0ubuntu1.dsc
    -rw-r--r-- 1 root root 857286 May 5 2011 lirc_0.9.0.orig.tar.bz2
    Running sudo apt-build build-source lirc then gave me "Some error occured building package", which is not really informative. I have successfully built patched lirc from source, but now I would like to get a deb package. Where can I look for these errors in detail? Thank you, Alexei
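
    apt-build hides the build log, so one hedged workaround is to run the same build by hand, where the full output can be captured (the patch file name below is a placeholder for whatever the linked page provides):

      sudo apt-get build-dep lirc            # install lirc's build dependencies
      apt-get source lirc                    # fetch and unpack the Debian source tree
      cd lirc-0.9.0
      patch -p1 < ../usb-irtoy-v2.patch      # apply each downloaded patch
      dpkg-buildpackage -us -uc -b 2>&1 | tee ../build.log   # unsigned binary build, full log kept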

    Read the article

  • LAMP with mod_fastcgi

    - by Jonathan
    Hi! I am building a CGI application, and I would like it to run as a persistent application that parses each connection. With this, I can keep all session variables in memory instead of saving them to a file (or anywhere else) and loading them again on each new connection. I am using LAMP inside a Linux VMware image, but I can't seem to find how to install the module and what to change in httpd.conf. I tried to compile the module, but I couldn't, because my Apache isn't a regular installation; it's a pre-built LAMP stack, and the mod seems to need the Apache source directory to compile. I saw some coding examples out there, so I guess it's not that hard once it's running OK with Apache. Can you help me with this please? Thanks, Joe
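
    Two hedged pointers: apxs (from the Apache dev package) exists precisely so modules can be compiled without the full Apache source tree, and once the module is built the httpd.conf side is small. A sketch, with paths that are illustrative only:

      LoadModule fastcgi_module modules/mod_fastcgi.so

      <IfModule mod_fastcgi.c>
          # One persistent process, so in-memory session state survives across requests
          FastCgiServer /var/www/cgi-bin/myapp.fcgi -processes 1
          AddHandler fastcgi-script .fcgi
      </IfModule>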

    Read the article

  • Unable to locate package lightread

    - by TENG PENG
    I have changed my source to a local server. I'm using Ubuntu 12.10. When I type apt-cache search in the terminal, it shows nothing. When I install lightread, it shows "Unable to locate package lightread". When I install lightread manually with Python, it shows:
    python '/home/peng/Downloads/quickly_trunk/setup.py'
    Traceback (most recent call last):
      File "/home/peng/Downloads/quickly_trunk/setup.py", line 93, in <module>
        data_files=[('share/icons/hicolor/128x128/apps', ['data/media/lightread.png'])]
      File "/usr/lib/python2.7/dist-packages/DistUtilsExtra/auto.py", line 71, in setup
        src_mark(src, 'setup.py')
      File "/usr/lib/python2.7/dist-packages/DistUtilsExtra/auto.py", line 527, in src_mark
        src.remove(path)
    KeyError: 'setup.py'
    How do I solve this problem?

    Read the article

  • How to install Chrome browser properly via command line?

    - by Bad Learner
    Setting up and managing an Ubuntu server all by myself, in the coming months, is part of my current plans. Hence, I am planning a switch from Windows to Linux: Ubuntu. I now need to get some grip on the command line, since I am all used to Windows' GUI. Anyway... the most obvious start is installing apps on my computer, and I thought I should learn to do it via the CLI. And this is what I did:
    $ apt-cache search chrome browser
    The results suggested that the proper term is "chrome-browser", so...
    $ sudo apt-get install chrome-browser
    And then "Y" for the Y/n question. But the installation threw errors. (I do not have my PC at hand, so I can't say which error exactly.) Does someone see anything wrong with the commands I issued? I am probably missing some command(s) in between, I think.
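
    One likely wrinkle, offered as a guess without seeing the exact error: Ubuntu's own archive ships Chromium rather than Chrome, and Google's build is installed from Google's .deb. A sketch of both routes:

      # Open-source Chromium from the Ubuntu archive:
      sudo apt-get install chromium-browser

      # Google Chrome, stable channel, from Google's download server:
      wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
      sudo dpkg -i google-chrome-stable_current_amd64.deb
      sudo apt-get -f install    # pull in any dependencies dpkg could not resolve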

    Read the article

  • How do I use modulus for float/double?

    - by ShrimpCrackers
    I'm creating an RPN calculator for a school project. I'm having trouble with the modulus operator. Since we're using the double data type, modulus won't work on floating point numbers. For example, 0.5 % 0.3 should return 0.2 but I'm getting a division by zero exception. The instruction says to use fmod(). I've looked everywhere for fmod(), including javadocs but I can't find it. I'm starting to think it's a method I'm going to have to create? edit: hmm, strange. I just plugged in those numbers again and it seems to be working fine...but just in case. Do I need to watch out using the mod operator in Java when using floating types? I know something like this can't be done in C++ (I think).
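
    For the record, Java's % operator is defined for floating-point operands and behaves like C's fmod(), which is why the numbers work when plugged in again; there is no fmod() method to find. A quick check, with the usual floating-point caveats:

      public class FmodDemo {
          public static void main(String[] args) {
              // % on doubles: truncating remainder, same sign as the dividend
              System.out.println(0.5 % 0.3);                    // 0.2
              // The other built-in, Math.IEEEremainder, rounds the quotient to
              // nearest, so it can legitimately return a negative remainder
              System.out.println(Math.IEEEremainder(0.5, 0.3)); // about -0.1
          }
      }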

    Read the article

  • SVG images grow and create scrollbars when on the server

    - by zuko
    Okay, so I embedded some SVG images into my page and opened it locally in Chrome, and it looked fine. I uploaded the same file to the server, and when I look at the page online the SVG images have grown by maybe 5-10% and are surrounded by scroll bars, as if they are overflowing. I think it probably has to do with my lack of knowledge of how SVG and embed work. What's really puzzling me, though, is that it works fine locally. (I have cache disabled.) Help? Thanks.
    Edit: the HTML is:
    <embed type="image/svg+xml" src="content/web-logo.svg"/>
    There's no CSS on the image. I'm not sure if I was just wrong before or if I changed something I'm not aware of, but it doesn't appear to be actually changing size anymore. It just decides to stuff it into a scrollbox.
    pic: https://www.dropbox.com/s/wt1aufi7nl1fpyi/svg-problem.png
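
    One common cause, offered as a guess: when the SVG root has no width/height attributes (only a viewBox), the embed falls back to default replaced-element sizing, and any overflow gets scrollbars. Explicit dimensions on the embed usually avoid that (the values here are illustrative):

      <embed type="image/svg+xml" src="content/web-logo.svg"
             width="300" height="120" />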

    Read the article

  • How to search for packages that provides a virtual package?

    - by netvope
    How do you search for packages that provide a virtual package? For example, I want to search for packages that provide "x-terminal-emulator" in the "main" repository of Ubuntu 12.04. One way to do this is to parse the package index:
    curl http://archive.ubuntu.com/ubuntu/dists/precise/main/binary-amd64/Packages.gz | zcat | grep -B12 '^Provides: x-terminal-emulator' | grep ^Package:
    which gives me the following results:
    Package: gnome-terminal
    Package: konsole
    Package: xterm
    Is there a better way to do this? Can it be done with any of the official tools (apt-get/apt-cache/etc.)?
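
    apt-cache can answer this from the local index: showpkg prints a "Reverse Provides:" section listing the concrete packages behind a virtual name, and aptitude's ~P search term does the same. A sketch:

      apt-cache showpkg x-terminal-emulator      # look for the "Reverse Provides:" section
      aptitude search '~Px-terminal-emulator'    # equivalent, if aptitude is installed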

    Read the article

  • How can I add the version of a file to the file name with Tortoise-SVN?

    - by Eric Belair
    I would like to start giving unique names to "cache-able" files - i.e. *.css and *.js - in order to prevent stale caching, without requiring changes to the web-server settings (as is currently done in IIS). For instance, let's say I have a JavaScript file called global.js. Going forward, I would like it to have the name global.123.js when revision 123 is checked in. This would also require the following: The previous version of the file - perhaps it was global.115.js - is removed when the file is deployed. All references to the file are updated with the new file name. How do I go about doing this? What concerns do I need to consider?
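
    TortoiseSVN itself won't rename files per revision, but a deploy step can; a hedged sketch using svnversion, where the file layout and the sed pattern are illustrative:

      REV=$(svnversion -n .)                     # revision of the deployed working copy
      rm -f js/global.*.js                       # drop the previously versioned copy
      cp js/global.js "js/global.${REV}.js"
      sed -i "s/global\(\.[0-9]*\)\?\.js/global.${REV}.js/g" index.html

    A common alternative worth weighing: leave the file name alone and append a version query string (global.js?v=123), which busts caches without touching any on-disk names or references beyond the HTML.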

    Read the article

  • Adoption of Exadata - Gartner research note

    - by Javier Puerta
    Independent research note by Gartner acknowledges Oracle Exadata Database Machine has achieved significant early adoption and acceptance of its database appliance value proposition. Analyst Merv Adrian looks at some of the main issues that IT professionals have solved as they assess or deploy the Oracle Exadata solution, including:
    - OLTP and DSS workload support
    - workload consolidation
    - increasing performance and scalability demands
    - data compression improvements
    Gartner reports clients using Oracle Exadata experienced the following:
    - significant performance improvements
    - substantial amounts of cache memory, which greatly improves processing speed
    - Oracle Advanced Compression providing 2-4X data compression, delivering significant reductions in storage requirements and driving shorter times for backup operations; tables compressed with Oracle Advanced Compression automatically recompress as data is added/updated
    - one client specifically reported consolidating more than 400 applications onto the Oracle Exadata platform
    Read the full Gartner note

    Read the article

  • Installing Ubuntu

    - by Mister AR
    I ran into a problem when installing Ubuntu 12.04 on a VMware system on my Windows 7 x64 host: at the end of the installation, after retrieving files, it stopped and didn't move forward. Additionally, I got another problem when I wanted to install the packages I had updated, and got the error below:
    installArchives() failed: Error in function:
    Setting up libssl1.0.0 (1.0.1-4ubuntu5.2) ...
    locale: Cannot set LC_CTYPE to default locale: No such file or directory
    locale: Cannot set LC_MESSAGES to default locale: No such file or directory
    locale: Cannot set LC_ALL to default locale: No such file or directory
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error processing libssl1.0.0 (--configure): subprocess installed post-installation script returned error exit status 1
    Please help me soon! Thank you all.
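
    Two hedged recovery steps that map to those messages: generate the locale the LC_* warnings complain about, then let dpkg finish once nothing else is holding the debconf database (the locale name is an example):

      sudo locale-gen en_US.UTF-8                     # fixes the "Cannot set LC_*" warnings
      sudo fuser -v /var/cache/debconf/config.dat     # see which process holds the lock, and close it
      sudo dpkg --configure -a                        # re-run the interrupted configuration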

    Read the article

  • Cannot install nautilus elementary.

    - by coklatua
    When I try apt-cache policy nautilus it shows this:
    Installed: 1:2.32.0-0ubuntu1-ppa1
    Candidate: 1:2.32.0-0ubuntu1-ppa1
    Version table:
    *** 1:2.32.0-0ubuntu1-ppa1 0
        100 /var/lib/dpkg/status
    1:2.32.0-0ubuntu6~ppa160 0
        500 http://ppa.launchpad.net/am-monkeyd/nautilus-elementary-ppa/ubuntu/ maverick/main amd64 Packages
    1:2.32.0-0ubuntu1.1 0
        500 http://archive.ubuntu.com/ubuntu/ maverick-updates/main amd64 Packages
    1:2.32.0-0ubuntu1 0
        500 http://archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages
    As you can see, I already added the am-monkeyd PPA, but when I update & upgrade, nothing changes.
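
    A diagnostic sketch: per the policy output, apt considers the installed ppa1 build to already be the candidate, so upgrade is a no-op; asking for the PPA version explicitly (version string copied from the output above) will either install it or print why it is held back:

      sudo apt-get install nautilus=1:2.32.0-0ubuntu6~ppa160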

    Read the article

  • Dynamic MMap ran out of room when trying to sudo apt-get anything

    - by user1610406
    I was having an error in Update Manager that asked me to do a partial upgrade, which fails. Now I can't sudo apt-get install anything. I tried to fix it, and now I can't sudo apt-get anything. Every time, I get this output:
    Reading package lists... Error!
    E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
    E: Error occurred while processing libuptimed0 (NewVersion1)
    E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-i386_Packages
    W: Unable to munmap
    E: The package lists or status file could not be parsed or opened.
    I have no idea why this is happening or how to fix it, and I fear that if I try something that probably doesn't work it will make my problem worse. (Just for reference, I am currently running 10.04 (Lucid) on my machine.)
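
    The first E: line names the knob itself. A hedged sketch that raises APT's cache limit (the file name and value are arbitrary) and rebuilds the lists the MergeList error choked on:

      echo 'APT::Cache-Limit "100000000";' | sudo tee /etc/apt/apt.conf.d/90cache-limit
      sudo rm -rf /var/lib/apt/lists/*     # discard the cached package lists
      sudo apt-get update                  # re-download and rebuild them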

    Read the article

  • Attaching an HTML file to an email in VB 6.0

    - by Shax
    Hi, I am trying to attach an HTML file to an email using Visual Basic 6.0. When execution reaches the Open strFile For Binary Access Read line, it gives the error "Error encoding file - Bad file name or number". All your help and support would be highly appreciated.
    Dim handleFile As Integer
    Dim strValue As String
    Dim lEventCtr As Long
    handleFile = FreeFile
    Open strFile For Binary Access Read As #handleFile
    Do While Not EOF(hFile)
        ' read & Base 64 encode a line of characters
        strValue = Input(57, #handleFile)
        SendCommand EncodeBase64String(strValue) & vbCrLf
        ' DoEvents (occasionally)
        lEventCtr = lEventCtr + 1
        If lEventCtr Mod 50 = 0 Then DoEvents
    Loop
    Close #handleFile
    Exit Sub
    File_Error:
    Close #handleFile
    m_ErrorDesc = "Error encoding file - " & Err.Description
    Err.Raise Err.Number, Err.Source, m_ErrorDesc
    End Sub
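
    A detail worth checking, as a guess from the snippet alone: the loop condition tests EOF(hFile) while the file was opened as #handleFile. If hFile is a leftover variable that was never assigned, EOF(0) raises run-time error 52, "Bad file name or number", which the File_Error handler then wraps as "Error encoding file - ...". The one-line change to rule that out:

      Do While Not EOF(handleFile)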

    Read the article

  • regex in textfield

    - by klox
    Dear all, I have this code:
    <script>
    var str = "KD-R435MUN2D";
    var matches = str.match(/(EE|[EJU]).*(D)/i);
    if (matches) {
        var firstletter = matches[1];
        var secondletter = matches[2];
        var thirdletter = matches[3];
        alert(firstletter + secondletter + thirdletter);
    } else {
        alert(":(");
    }
    </script>
    I want it to run against a text field <input type="text" id="mod">. How should I do that?
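
    A sketch of one way to wire the same match to the field (the blur event is an illustrative choice; note the regex has only two capture groups, so matches[3] will always be undefined):

      <input type="text" id="mod" />
      <script>
      document.getElementById("mod").onblur = function () {
          var matches = this.value.match(/(EE|[EJU]).*(D)/i);
          alert(matches ? matches[1] + matches[2] : ":(");
      };
      </script>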

    Read the article

  • How can I delay dropbox from starting, but not disable it?

    - by jgbelacqua
    When I log into my user account on Ubuntu 10.10, there is an unsatisfying delay before my system becomes usable. Even launching a terminal, I have to wait a few seconds before the bash prompt appears. During this start-up period, the top process seems to be dropbox. I'm not sure what it's doing exactly (functionality is still fine as far as I can see), but I do know it really doesn't need to be doing it while I'm waiting for the desktop to appear. (This is the standard Ubuntu with GNOME desktop, by the way.) What I would like to do is to be able to have a static or even dependency-based delay for dropbox to start. It would be nice if it waited for, e.g., 10 minutes, or for my browser tabs to load and a typing pause. Then it could churn away on file status or cache-chewing, and I would be happy. Is there a way to do this? Thanks!
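
    For the static-delay variant, one hedged approach: disable the stock Dropbox entry in Startup Applications, then add a replacement that sleeps first (the 10-minute value and file name are arbitrary; dropbox start is the Linux client's own CLI):

      # ~/.config/autostart/dropbox-delayed.desktop
      [Desktop Entry]
      Type=Application
      Name=Dropbox (delayed)
      Exec=sh -c "sleep 600 && dropbox start"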

    Read the article

  • Fibonacci Sequence using loop and recur

    - by AdamJMTech
    I am doing the Project Euler challenge in Clojure, and I want to find the sum of all the even numbers in a Fibonacci sequence up to a certain number. The code for a function that does this is below. I know there are quicker and easier ways of doing this; I am just experimenting with recursion using loop and recur. However, the code doesn't seem to work: it never returns an answer.
    (defn fib-even-sum [upto]
      (loop [previous 1 nxt 1 sum 0]
        (if (or (<= upto 1) (>= nxt upto))
          sum)
        (if (= (mod nxt 2) 0)
          (recur nxt (+ previous nxt) (+ sum nxt))
          (recur nxt (+ previous nxt) sum))))
    I was not sure if I could do recur twice in the same loop or not. I'm not sure if this is causing the problem?
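
    Two recur forms in one loop are fine, since only one is evaluated per pass. The likely culprit is that the first (if ...) just computes sum and throws the result away: the loop body evaluates both expressions and returns the last, and the second expression always recurs, so the function can never return. A sketch with the termination test and the recursion folded into a single expression:

      (defn fib-even-sum [upto]
        (loop [previous 1 nxt 1 sum 0]
          (if (or (<= upto 1) (>= nxt upto))
            sum                                          ; terminate: return the sum
            (recur nxt
                   (+ previous nxt)
                   (if (even? nxt) (+ sum nxt) sum)))))  ; add nxt only when even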

    Read the article

  • cahoots - zend framework application is not running

    - by Gaurav Sharma
    Hello everyone, I downloaded cahoots from sourceforge.net. It is a Zend Framework application, very nicely done, and I must say it should be a nice tutorial for everyone who is struggling to learn Zend Framework. But my problem is that even after reading the instructions, this application is still not running; it just doesn't run at all, giving an error message. I have tried my best, with no success :( Also, I wanted to execute the application as "http://localhost/cahoots", but it only runs via "http://localhost/cahoots/public". Why is that? I am using XAMPP v1.7.1 with mod_rewrite enabled. Please guide me through the process. Any good tutorials on Zend Framework with Zend_Tool would be appreciated. I want to learn this framework. Thanks
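
    On the /public part: Zend Framework applications are meant to be served from their public/ directory, so when the DocumentRoot can't point there, the usual workaround is a small rewrite shim. A hedged sketch for an .htaccess placed in the cahoots/ folder itself:

      RewriteEngine On
      RewriteRule ^$ public/ [L]
      RewriteRule (.*) public/$1 [L]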

    Read the article

  • Installing Cairo to get FastRWeb working for R gWidgetsWWW2 -pkg

    - by hhh
    I want to install FastRWeb for R, but it requires Cairo. How can I install Cairo?
    compilation terminated.
    make: *** [xlib-backend.o] Error 1
    ERROR: compilation failed for package ‘Cairo’
    * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/Cairo’
    ERROR: dependency ‘Cairo’ is not available for package ‘FastRWeb’
    * removing ‘/home/xfz/R/i686-pc-linux-gnu-library/2.13/FastRWeb’
    The downloaded packages are in ‘/tmp/Rtmpno8hhF/downloaded_packages’
    Warning messages:
    1: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") : installation of package 'Cairo' had non-zero exit status
    2: In install.packages("FastRWeb", , "http://rforge.net/", type = "source") : installation of package 'FastRWeb' had non-zero exit status
    I cannot tell which Cairo is needed here; apt-cache search turns up 16 entries for it. It is apparently some library.
    $ apt-cache search libcairo|wc
    16 132 996
    Perhaps related:
    http://stackoverflow.com/questions/9826128/r-making-r-rook-program-into-rscript-program-r
    http://stackoverflow.com/questions/9812547/r-gui-vizualiser-with-command-line-access-browser-based-letting-users-to-s
    Some related packages: FastRWeb and RServe for the gWidgetsWWW2 -pkg.
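
    The failure is in compiling the R Cairo package's C code, so what's missing is development headers, not another R package. A sketch of the usual Debian/Ubuntu fix (the xlib-backend.o error typically points at missing X11 toolkit headers, hence libxt-dev):

      sudo apt-get install libcairo2-dev libxt-dev
      R -e 'install.packages("Cairo"); install.packages("FastRWeb", , "http://rforge.net/", type = "source")'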

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480's (one at x16 & and one at x8) one Nvidia GTX 460 running at x8 Is there some way, either by a function call in C or an option to lspci that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: lspci -vvv dumps out For the two GTX 480s. I don't see any differences that pertain to bus speed. 03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at df00 [disabled] [size=128] [virtual] Expansion ROM at b8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 5 Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] Capabilities: <access denied> 04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at cf00 [size=128] [virtual] Expansion ROM at c8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. 
Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin B routed to IRQ 5 Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K] Capabilities: <access denied> And the only differences I see relate specifically to the memory mapping: myComputer:~> diff card1 card2 3c3 < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 7,11c7,11 < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] < Region 5: I/O ports at df00 [disabled] [size=128] < [virtual] Expansion ROM at b8000000 [disabled] [size=512K] --- > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] > Region 5: I/O ports at cf00 [size=128] > [virtual] Expansion ROM at c8000000 [disabled] [size=512K] 18c18 < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 19a20 > Latency: 0, Cache Line Size: 64 bytes 21c22 < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] --- > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
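
    For the actual link-speed question: lspci keeps the PCI Express capability block, which carries the LnkCap/LnkSta fields, behind root privileges, which is why the dumps above end in "Capabilities: <access denied>". Re-running as root should expose it; a sketch for the first card:

      sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
      # LnkCap reports the maximum width (e.g. x16); LnkSta the currently negotiated one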

    Read the article

  • A quick overview of facebook's db?

    - by Matt
    Hey guys, I find it hard to believe that Facebook uses simple SQL; surely it would use some other method. But let's assume for now it does use SQL: how would the code assembling the 'wall' work? Let's say that there are three tables (just for the example):
    Friends: id (entry key) - uid (your id) - fid (your mate's id)
    Wall: id (entry key) - username - comment - time - commentcount
    comments: id (entry key) - wid (wall id (original comment)) - reply - time
    Let's forget about the like part, reporting, etc., as well as mod things (IP, ban, etc.). How would this work?
    SELECT wall.id, wall.username, wall.comment, wall.time, wall.commentcount, comments.wid, comments.reply, comments.time
    FROM wall INNER JOIN comments ON wall.id = comments.wid
    ORDER BY wall.time;
    That's your own wall, but how do they get friends'? A heap of unions?
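
    No unions needed under this toy schema; a second join through Friends restricts the wall rows to the viewer's friends (assuming wall.username holds the author's id, as the snippet implies; :viewer_id is a placeholder bind parameter):

      SELECT w.id, w.username, w.comment, w.time, w.commentcount
      FROM wall w
      INNER JOIN friends f ON w.username = f.fid
      WHERE f.uid = :viewer_id
      ORDER BY w.time DESC;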

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
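
    For context on the partition-count trade-off above: the count is fixed per service in the cache configuration, so this is where very large clusters make that compromise. A minimal sketch (the value shown is illustrative; Coherence's default is 257, and the documentation recommends a prime):

      <distributed-scheme>
        <scheme-name>large-cluster-distributed</scheme-name>
        <service-name>DistributedCache</service-name>
        <partition-count>8191</partition-count>
        <backing-map-scheme>
          <local-scheme/>
        </backing-map-scheme>
        <autostart>true</autostart>
      </distributed-scheme>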

    Read the article
