Search Results

Search found 19483 results on 780 pages for 'load average'.


  • Very uneven CPU utilization with SQL Server 2012 on a 2-processor computer with 16 cores per processor

    - by cooplarsh
    After installing SQL Server 2012 Enterprise with the Server + CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the 2nd CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server edition). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad.

    The Windows Server was running on a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server 2012 Enterprise Core not only allowed SQL Server to utilize the 12 previously unused cores on the 2nd processor, it also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90%, then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the newly upgraded version, and the performance issues went away.

    I was wondering if anyone knows what might cause SQL Server to distribute the load unevenly, relying almost exclusively on the first 4 cores of the 2nd processor (which had 12 cores idle) while allocating only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have distributed the load more evenly across the 20 cores that were being used, without the product edition upgrade? The flip side of that question is: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores it recognized? Thanks for any insight into these questions, and/or links that might help me make sense of what was happening.
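
    One way to see the core cap and the task spread from the inside is to query SQL Server's scheduler DMV. Below is a minimal sketch in Python using pyodbc; the driver and server name in the connection string are placeholders. Under the Server + CAL 20-core limit you would expect only 20 schedulers to report VISIBLE ONLINE, and load_factor hints at how unevenly work is queued across them.

        import pyodbc

        # Hypothetical connection string; adjust driver/server for the actual setup.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes")

        rows = conn.execute("""
            SELECT scheduler_id, cpu_id, status, current_tasks_count, load_factor
            FROM sys.dm_os_schedulers
            WHERE scheduler_id < 255   -- skip hidden/internal schedulers
            ORDER BY scheduler_id
        """).fetchall()

        online = [r for r in rows if r.status == 'VISIBLE ONLINE']
        print(f"{len(online)} of {len(rows)} schedulers VISIBLE ONLINE")
        for r in rows:
            print(r.scheduler_id, r.cpu_id, r.status, r.current_tasks_count, r.load_factor)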

    Read the article

  • Access Monit from PHP

    - by xian
    I have a remote server (CentOS 5.8) on which I have installed Monit as my monitoring tool:

        yum install monit

    Now, on my local machine (also CentOS 5.8), I want to get the system status of the said server as provided by Monit:

        -------------------------------------------------
        | Parameter         | Value                     |
        -------------------------------------------------
        | Name              | serverHostname            |
        | Status            | Running                   |
        | Monitoring mode   | active                    |
        | Monitoring status | Monitored                 |
        | Data collected    | Fri, 22 Jun 2012 16:47:01 |
        | Load average      | [0.32] [0.37] [0.43]      |
        | CPU usage         | 3.3%us 0.2%sy 0.0%wa      |
        | Memory usage      | 2005212 kB [53.9%]        |
        | Swap usage        | 893256 kB [21.8%]         |
        -------------------------------------------------

    This information is shown when you click on the system name link found on your Monit home page (http://localhost:2812). How can I do that in PHP? How can I retrieve that information? I was thinking of executing this Linux command from PHP:

        /usr/bin/monit status

    Its output is:

        The Monit daemon 5.4 uptime: 2h 55m

        System 'system_asi454'
          status                       Running
          monitoring status            Monitored
          load average                 [0.22] [0.34] [0.42]
          cpu                          3.3%us 0.2%sy 0.0%wa
          memory usage                 2005212 kB [53.9%]
          swap usage                   892928 kB [21.8%]
          data collected               Fri, 22 Jun 2012 16:47:01

    which is similar to the table content shown above.
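
    A minimal sketch of the exec-and-parse approach, shown in Python for illustration; the same idea carries straight over to PHP with shell_exec('/usr/bin/monit status') and a loop over the lines. It assumes the calling user is permitted to run monit. Monit's built-in web server can reportedly also return the same data as XML (the /_status?format=xml URL used by M/Monit), which may be cleaner than parsing text, but the text format above splits easily on runs of spaces:

        import subprocess

        out = subprocess.run(["/usr/bin/monit", "status"],
                             capture_output=True, text=True, check=True).stdout

        status = {}
        for line in out.splitlines():
            line = line.strip()
            # value lines look like: "load average    [0.22] [0.34] [0.42]"
            if "  " in line:
                key, _, value = line.partition("  ")
                status[key.strip()] = value.strip()

        print(status.get("load average"))
        print(status.get("memory usage"))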

    Read the article

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup:

        We have 2x Smart-UPS RT 6000 XL units with network management cards
        We are running PowerChute from a network server
        PowerChute is connected to the management cards of both UPSs
        UPSs are set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes
        We also have a command file that runs with PowerChute
        Although our setup is redundant, we do not have an equal load on each server due to APC switches for single power devices

    The problem is that, because we do not have an equal load on each server, the batteries drain at different rates, so the UPSs reach the specified low battery duration at completely different times. UPS 1 may have run down to 5 minutes and be in desperate need of initiating a PowerChute shutdown, while UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well! What we need to happen is one of two things:

        PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low battery duration setting, and doesn't wait for both; or
        The UPS with the heavier load expends its entire battery but does not shut down both UPSs, and lets the load be switched across to the UPS that still has runtime remaining. That way, when the remaining UPS reaches its low battery duration, it can proceed with the graceful shutdown via PowerChute.

    Hope that makes sense; any help is greatly appreciated!

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served, e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocks were occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get Requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150 and Requests/sec for ASP.NET was averaging 5-10, yet Current Anonymous Users was around 200. Then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes, while Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down.

    I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (with a Current Connections counter added):

        Current Anonymous Users: average 30
        Get Requests/sec: average 100
        Requests/sec for ASP.NET: 5
        Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image; then they flip back to their usual levels. So, my questions are:

        Thinking of the original performance issue: if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer than usual to be served?
        What other counters should I be looking at if this happens again?
        What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
        What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
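
    For re-collecting these counters from a script the next time it happens, a small sketch using typeperf (which ships with Windows) is below; the instance names (_Total, __Total__) are placeholders that usually need adjusting to the specific site and application. Counters often suggested for this symptom, alongside the ones above, are \ASP.NET\Requests Queued and \ASP.NET Applications(...)\Requests In Application Queue, since queued requests would explain slow pages while CPU, memory and the DB all look idle:

        import subprocess

        counters = [
            r"\Web Service(_Total)\Current Anonymous Users",
            r"\Web Service(_Total)\Get Requests/sec",
            r"\ASP.NET Applications(__Total__)\Requests/Sec",
            r"\ASP.NET\Requests Queued",
        ]
        # -si 1: one-second samples; -sc 60: stop after 60 samples; output is CSV
        result = subprocess.run(["typeperf", *counters, "-si", "1", "-sc", "60"],
                                capture_output=True, text=True)
        print(result.stdout)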

    Read the article

  • Excel rows: find matches for a keyword, count and average them, find low and high

    - by Malin Pedersen
    [Link to full-size image]

    For what is marked in orange: as in the example in the picture, the keyword is "headphones". I would like Excel to search through all the rows in column A to find anything with that name in it, count the matching people, and put that number in "how many". For the "middle price" I want it to take the prices in column B (for the rows where it found "headphones") and give their average. For "secured", I would like it to count how many of the matches have "secured" set to "no" and how many to "yes". I would like to use this on several keywords.

    For what is marked in pink: how would I find the average price of all the goods, and the name of the particular item with the highest and the lowest price? How can I do this?
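
    In Excel itself, the orange part is what COUNTIF and AVERAGEIF with wildcards (e.g. "*headphones*") are for. For comparison, here is a small sketch of the same aggregation in Python with pandas; the file name and the column names ("name", "price", "secured") are assumptions standing in for the real sheet layout:

        import pandas as pd

        df = pd.read_excel("products.xlsx")  # hypothetical file

        mask = df["name"].str.contains("headphones", case=False, na=False)
        matches = df[mask]

        print("how many:", len(matches))
        print("middle price:", matches["price"].mean())
        print("secured:", matches["secured"].str.lower().value_counts().to_dict())

        # the pink part: overall average, plus the cheapest and dearest item
        print("average price:", df["price"].mean())
        print("cheapest:", df.loc[df["price"].idxmin(), "name"])
        print("dearest:", df.loc[df["price"].idxmax(), "name"])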

    Read the article

  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while we see the load on one of the mailservers rise above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see it consuming a lot of CPU resources. The process seems to be stuck: after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the postfix service, the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems.

    I'm wondering if anyone else has encountered this problem and whether people have suggestions on how to prevent it. The problem shows up on all our mailservers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but that didn't solve it.

    We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix is installed through the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read one of the most accurate formulas appears to be the following:

        {disk service time} = {seek time} + {rotational latency} + ({block size} / {data transfer rate})

    which gives the time per IO in milliseconds, or what the book I've been reading calls "disk service time". Rotational latency is calculated as half of one rotation, in milliseconds. This was taken from the EMC book "Information Storage and Management" (arguably a pretty reliable source, right/wrong?).

    Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4kb:

        Seek Average (Write) = 9.5ms (I'll measure IOPS for writes)
        Spindle speed = 7200rpm
        Average Data Rate = 156MB/s

    So my variables are:

        Seek Time = 9.5ms
        Rotational latency = (.5 / (7200rpm / 60)) = 0.004s = 4ms
        Data Rate = 156MB/s = (0.156MB/ms / 0.004MB) = 39

        9.5ms + 4ms + 39 = 52.5 (the disk service time)
        1 / (52.5 * 0.001) = 19 IOPS

    19 IOPS for this drive clearly is not right, so what am I doing wrong?
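
    For comparison, here is the same arithmetic as a short Python sketch with every term kept in milliseconds. The transfer term (block size / data transfer rate) is a time, not the unitless 39 above; once it is converted consistently it is tiny, and the drive lands near the ballpark usually quoted for 7200rpm disks:

        # all terms in milliseconds
        seek_ms = 9.5                                      # average write seek (data sheet)
        rpm = 7200
        rotational_ms = 0.5 / (rpm / 60) * 1000            # half a rotation, about 4.17 ms
        transfer_ms = (4 / 1024) / 156 * 1000              # 4 KB at 156 MB/s, about 0.026 ms

        service_time_ms = seek_ms + rotational_ms + transfer_ms
        print(f"service time = {service_time_ms:.2f} ms")  # roughly 13.69 ms
        print(f"IOPS = {1000 / service_time_ms:.0f}")      # roughly 73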

    Read the article

  • Perfmon reporting higher IOPS than possible?

    - by BlueToast
    We created a monitoring report for IOPS using the Disk Reads/sec and Disk Writes/sec performance counters on four servers (physical boxes, no virtualization) that each have 4x 15k 146GB SAS drives in RAID10, set to check and record data every 1 second, and logged for 24 hours before stopping the reports. These are the results we got:

        Server1: maximum disk reads/sec 4249.437, maximum disk writes/sec 4178.946
        Server2: maximum disk reads/sec 2550.140, maximum disk writes/sec 5177.821
        Server3: maximum disk reads/sec 1903.300, maximum disk writes/sec 5299.036
        Server4: maximum disk reads/sec 8453.572, maximum disk writes/sec 11584.653

    The average disk reads and writes per second were generally low, e.g. for one particular server the average was around 33 writes/sec, but when monitoring in real time it would often spike into the hundreds and sometimes into the thousands. Could someone explain to me why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS?

    Additional details (RAID card): HP Smart Array P410i, total cache size 1GB, write cache disabled, array accelerator cache ratio 25% read / 75% write.
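
    For reference, the spindle-only ceiling for one of these boxes is small, as the sketch below shows; numbers above it usually mean the counters are measuring logical I/O that was absorbed by caches (the controller's read cache, the Windows file cache) or coalesced into larger physical transfers, rather than individual platter operations:

        # back-of-envelope ceiling, assuming ~180 IOPS per 15k spindle as above
        spindles = 4
        per_disk = 180

        read_iops = spindles * per_disk        # all four spindles serve reads in RAID10
        write_iops = spindles * per_disk // 2  # RAID10 write penalty of 2
        print(read_iops, write_iops)           # 720 and 360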

    Read the article

  • SOLR high CPU usage on Amazon EC2

    - by user644745
    I installed solr-3.6 on my local Windows box and it worked fine. I installed solr-4.0 on an Amazon EC2 Linux large instance and the CPU usage shot up to 100%, then settled at 80-90% average. I thought it could be because of 4.0, so I installed 3.6 on EC2 again, but the CPU usage was still 80-90% average. With both versions, Solr works on EC2; I don't know why the CPU usage is so high. I started the Solr server using:

        sudo nohup java -jar start.jar &

    On my local box Java 1.7 is installed, and on EC2 it is 1.6.0_24. I have mapped the Solr dir to an EBS volume:

        /dev/mapper/vg1-solr   8361916   1935928   6342128   24%   /home/ec2-user/SOLR/solr/example/solr

    Is there any known issue?

    Read the article

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 gigs in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, at some seemingly random point during the day, the log backup will go from its average size of 15 megs all the way up to 40 gigs. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs that occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pinpoint them to a particular application or client? Thanks in advance.
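
    One low-effort way to narrow the window is to poll log usage on a schedule and see what time of day the jump happens; a scheduled maintenance job (index rebuilds in particular) is the classic culprit for a one-off 40-gig log under Full recovery. Below is a sketch in Python with pyodbc using DBCC SQLPERF(LOGSPACE); the connection string is a placeholder:

        import time
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes")
        while True:
            for name, size_mb, used_pct, _status in conn.execute("DBCC SQLPERF(LOGSPACE)"):
                print(time.strftime("%H:%M:%S"), name,
                      f"{size_mb:.0f} MB", f"{used_pct:.1f}% used")
            time.sleep(300)   # sample every five minutes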

    Read the article

  • What file transfer protocols can be used for PXE booting besides TFTP?

    - by Stefan Lasiewski
    According to ISC's dhcpd manpage:

        The filename statement

            filename "filename";

        The filename statement can be used to specify the name of the initial boot file which is to be loaded by a client. The filename should be a filename recognizable to whatever file transfer protocol the client can be expected to use to load the file.

    My questions are:

        What file transfer protocols, besides TFTP, are available to load the file (i.e. what protocols "can be expected to" load the file)? How can I tell? Can I see a list of these protocols?
        Does my choice of DHCP server influence which file transfer protocols are in use? Pretend I want to use dnsmasq instead of ISC's dhcpd.
        Are these features dependent on the PXE ROM in use (e.g. my Intel NICs use an Intel ROM)?

    I know that some PXE variants, such as iPXE/gPXE/Etherboot, can also load files over HTTP. However, the PXE ROM needs to be replaced with the iPXE image, either by chainloading or by burning the iPXE image onto the NIC. For example, the iPXE howto "Using ISC dhcpd" says:

        ISC dhcpd is configured using the file /etc/dhcpd.conf. You can instruct iPXE to boot using the filename directive:

            filename "pxelinux.0";

        or

            filename "http://boot.ipxe.org/demo/boot.php";

    Read the article

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of 'top' and graphs of CPU load (drawn by MRTG, data collection via SNMP), I can see that CPU load is never more than 50%; most of the day, when those processes are busy, CPU load hits a ceiling at 50%. I mean, CPU load grows to 50% in the morning and stays there until late evening.

    My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But as there are three processes running, and 'top' shows that both cores are being loaded, this is not the case. schedtool shows that the CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process onto the first core (schedtool -a 0x01) and the two others onto the second (schedtool -a 0x02), cumulative usage grows beyond 50%.

    Why do three processes seem to consume only 50% of two cores? And why does forcing them onto different CPUs allow usage to grow higher? Any hints?

    P.S. The processes in question are Counter-Strike servers.
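
    To rule out the graphing/averaging layer (MRTG over SNMP can easily average two cores into an apparent 50% ceiling), it is worth sampling per-core utilization directly on the box. A minimal sketch with the psutil package (an assumed install; pip install psutil):

        import psutil

        # one percentage per logical CPU, sampled over a one-second window
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        for core, pct in enumerate(per_core):
            print(f"core {core}: {pct:.0f}%")
        print(f"overall: {sum(per_core) / len(per_core):.0f}%")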

    Read the article

  • Decreasing lag on the router while gaming

    - by user2699451
    I had absolutely no idea where to post this question and get a professional answer for it, but here goes. I guess everyone who is reading this has played online. I was playing LoL again tonight and my brother decided that now was a great time to go on YouTube and start watching a movie. My ping (connecting from South Africa to the EU West server) is around 190-220 on average, but it started spiking to 2000 with an average of 600-800. That raised the question: how the hell can I "kick" him off for the time being? I tried reasoning it out with him, but it's like playing chess with a pigeon; he's studying to be an engineer and I just can't win an argument with him, so I need to step it up a level.

    I have in the past used the aireplay method of sending deauth packets, but it only helped so much. Is there another way of either kicking a peer off the local wifi, decreasing the lag spikes while in session, or splitting the bandwidth equally in 2 or 3, etc.? What do I do?

    P.S. Sorry if this is off topic; if it is not appropriate, just say which website would be able to help or assist me.

    Read the article

  • ping alternative to measure routing distance (on Windows)

    - by Marco Demaio
    Hello, in order to measure approximately the routing distance (to see if a server is close to my country or too far away), I usually use the ping command. I'm in Italy:

        when I ping Italian servers I get 36ms
        when I ping US East servers I get an average of 120ms
        when I ping US West servers I get an average of 200ms
        etc.

    Unfortunately, some web hosts turn off ping replies on their servers. So my question is: how do I detect the routing distance then? Is there another easy-to-use command in Windows to accomplish the same task? Thanks!
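
    When ICMP echo is filtered, the TCP handshake time to an open port (80 on a web host) is a serviceable stand-in for ping, and needs nothing beyond a script; tracert also still shows per-hop latency up to the last answering router even when the final host is silent. A sketch in Python, with a placeholder host name:

        import socket
        import time

        def tcp_rtt_ms(host, port=80, samples=5):
            times = []
            for _ in range(samples):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(5.0)
                start = time.perf_counter()
                s.connect((host, port))          # handshake completes here
                times.append((time.perf_counter() - start) * 1000.0)
                s.close()
            return sum(times) / len(times)

        print(f"{tcp_rtt_ms('example.com'):.0f} ms average")   # hypothetical host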

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination, using 2 workers on Ubuntu with the latest stable nginx. Network throughput is around 2Mbps and the system load average is around 2 to 3. I am wondering if this system is in good health for now, e.g.:

        what is the queue length (I know nginx can handle a lot of concurrent requests, but I mean: before a request is being served, how many of them have to wait)
        what is the average queue time for a given request before it is served

    I want to know because if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
         90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470
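
    nginx does not expose a queue-length or queue-time counter as such; requests wait in the kernel's listen backlog before a worker accepts them, which is invisible to the status page. What the stub_status numbers above do tell you: accepts equals handled (90664283 both), so no connections were dropped for lack of resources, which is a good sign. A small sketch for tracking these figures over time, with a placeholder URL for wherever stub_status is configured:

        import re
        import urllib.request

        text = urllib.request.urlopen("http://127.0.0.1/nginx_status").read().decode()

        active = int(re.search(r"Active connections:\s+(\d+)", text).group(1))
        accepts, handled, requests = map(
            int, re.search(r"\n\s*(\d+)\s+(\d+)\s+(\d+)", text).groups())
        reading, writing, waiting = map(int, re.search(
            r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)", text).groups())

        print("dropped connections:", accepts - handled)     # >0 would be a red flag
        print("requests per connection:", requests / handled)
        print("active:", active, "idle keep-alive:", waiting)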

    Read the article

  • Video encoding is very slow on Amazon EC2 instance

    - by Timka
    We are using an Amazon EC2 m1.xlarge instance for video re-encoding, and it looks like the actual encoding process takes a very long time: for an average 250MB video file it takes about an hour to encode.

        Instance: m1.xlarge (Xeon E5645 x 15GB RAM)
        Windows Server 2008 R2 64-bit
        AviSynth version 2.5 (32-bit) + ffms2 plugin (FFmpegSource 1.21)
        FFmpeg SVN-r13712 (libavutil 3213056, libavcodec 3356930, libavformat 3411456, libavdevice 3407872)
        Number of parallel jobs is 3
        Average CPU utilization ~96%

    Update #1: The source video is mp4/h.264. FFmpeg was built with:

        --enable-memalign-hack --enable-avisynth --enable-libxvid --enable-libx264
        --enable-libgsm --enable-libfaac --enable-libfaad --enable-liba52
        --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-pthreads
        --enable-swscale --enable-gpl

    Video files are encoded to mp4/h.264 with the following extra command-line options:

        -threads 0 -coder 0 -bf 0 -refs 1 -level 30 -maxrate 10000000 -bufsize 10000000

    Read the article

  • When is it time to buy a new hard drive, and what considerations go into buying a new hard drive?

    - by user1125620
    I've had my current hard drive for about 4-5 years now and I've never had a problem with it before, but now it's making whirring noises. It has done this before and, last time, the noise did go away the next day, but I have accumulated quite a bit of information on the drive that I wouldn't want to lose. HD Tune Pro and Belarc Advisor both said the drive was healthy, and I wouldn't want to get a new one unless it was absolutely necessary or it would show drastic performance improvements. My only knock against the drive is that Visual Studio takes longer to load than I'd like it to. HD Tune Pro says the average read speed is 54.3MB/s. I'm not sure if that's good or bad, but it seems about average compared to similar drives on http://www.hdtune.com/testresults.html.

        Model #: WDC WD5000AAJS-22YFA0

    So, should hard drives be replaced after a certain amount of time? Has mine reached that point? Would a new hard drive be any faster?
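
    Beyond the GUI tools, the drive's own SMART attributes are the usual early-warning signal: a rising reallocated or pending sector count is a much stronger replacement cue than age or noise. A sketch using smartmontools (an assumed install; run with administrator rights, and the device path is a placeholder):

        import subprocess

        out = subprocess.run(["smartctl", "-H", "-A", "/dev/sda"],
                             capture_output=True, text=True).stdout

        for line in out.splitlines():
            if any(key in line for key in (
                "overall-health", "Reallocated_Sector_Ct",
                "Current_Pending_Sector", "Offline_Uncorrectable",
            )):
                print(line)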

    Read the article

  • Unable to install .NET Framework 3.5 on my XP machine

    - by CptanPanic
    I have an application that requires .NET 3.5, but I can't install it. The installer quits, saying it has encountered a problem during setup. Looking at the error logs in the tmp directory, I see "Error occurred while initializing fusion". It seems that I have 2.0 SP1 installed. Any ideas how I can get it to work? I went through the temp directory and found these references to the error:

        [04/17/12,18:55:09] Microsoft .NET Framework 2.0a: [2] Error: Installation failed for component Microsoft .NET Framework 2.0a. MSI returned error code 1603
        [04/17/12,18:55:27] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0a is not installed.
        04/19/12 19:08:48 DDSet_Status: Loading fusion.dll using LoadLibraryShim()
        04/19/12 19:08:48 DDSet_Error: Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700
        04/19/12 19:08:48 DDSet_Status: Loading fusion.dll using LoadLibraryShim()
        04/19/12 19:08:48 DDSet_Error: Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700
        MSI (s) (74!08) [19:08:48:062]: Product: Microsoft .NET Framework 2.0 Service Pack 1 -- Error 25007. Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700
        Error 25007. Error occurred while initializing fusion. Setup could not load fusion with LoadLibraryShim(). Error: 0x80131700

    Any ideas?

    Read the article

  • How to access original files from before a symlink gets updated, which have since been moved to another dir

    - by Luke Cousins
    We have a website, and our deployment process goes somewhat like the following (with lots of irrelevant steps excluded):

        echo "Remove previous, if it exists, we don't need that anymore"
        rm -rf /home/[XXX]/php_code/previous

        echo "Create the current dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/php_code/current

        echo "Create the var_www dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/var_www

        echo "Copy current to previous so we can use temporarily"
        cp -R /home/[XXX]/php_code/current/* /home/[XXX]/php_code/previous/

        echo "Atomically swap the symbolic link to use previous instead of current"
        ln -s /home/[XXX]/php_code/previous /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # Rsync latest code into the current dir, code not shown here

        echo "Atomically swap the symbolic link to use current instead of previous"
        ln -s /home/[XXX]/php_code/current /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

    The problem we are having, and would like help with, is that the first thing any page load does is work out the base dir of the application and define it as a constant (we use PHP). If a deployment occurs during that page load, the system tries to include() a file using the original full path and gets the new version of that file. We need it to get the old one, from the old dir, which has now moved, as in:

        1. The system starts a page load and determines SYSTEM_ROOT_PATH to be /home/[XXX]/var_www/live, or, by using PHP's realpath(), /home/[XXX]/php_code/current.
        2. The symlink /home/[XXX]/var_www/live gets updated to point to /home/[XXX]/php_code/previous instead of /home/[XXX]/php_code/current, where it pointed originally.
        3. The system tries to load /home/[XXX]/var_www/live/something.php and gets /home/[XXX]/php_code/current/something.php instead of /home/[XXX]/php_code/previous/something.php.

    I'm sorry if that is not explained very well. I'd really appreciate some ideas on how to get around this problem, if someone can. Thank you.
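
    The usual way out of this race is to stop reusing the previous/current directories and give every deploy its own immutable release directory; a request that resolved its real path at startup then keeps reading the release it started with, because no release is ever overwritten while live. Below is a sketch of the swap in Python (paths are illustrative; os.replace is the atomic rename, like the mv -Tf above). On the PHP side, SYSTEM_ROOT_PATH would be defined once per request from realpath() and every include built from that resolved path:

        import os
        import time

        RELEASES = "/home/XXX/php_code/releases"   # illustrative paths
        LIVE = "/home/XXX/var_www/live"

        release = os.path.join(RELEASES, time.strftime("%Y%m%d%H%M%S"))
        os.makedirs(release)
        # ... rsync the new code into `release` here ...

        tmp = LIVE + "_tmp"
        os.symlink(release, tmp)
        os.replace(tmp, LIVE)   # atomic swap; in-flight requests keep their old realpath

    Old releases can then be pruned after a grace period longer than the longest request, rather than at the start of the next deploy.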

    Read the article

  • TCP: 30 small packets per second floods the connection with the server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio-based software) over TCP. My connection to the server is already really poor: 120ms ping on average. I found that when I start to send packets of 2 bytes (without the TCP header) at 30 packets/s, the ping grows to 170-200ms on average. I think that's really bad, and that my bad connection and bad cloud provider are the reason for this high ping without any load. What do you think? (I tested my software; it can handle about 50k small packets/s, so the software is not the problem.) I measure my ping through the Flash client: I send a packet with a timestamp and the server immediately sends it back to the client.
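
    With 2-byte payloads at that rate, one thing worth ruling out before blaming the provider is Nagle's algorithm interacting with delayed ACKs, which batches tiny writes and can masquerade as added ping. Disabling it is one line per side; here is a sketch of the client side in Python (host and port are placeholders; boost::asio exposes the same switch as the ip::tcp::no_delay socket option):

        import socket
        import time

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect(("game.example.com", 9000))                    # hypothetical server
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

        start = time.perf_counter()
        sock.sendall(b"\x00\x01")         # a 2-byte probe, as in the question
        sock.recv(2)                      # assumes the server echoes it back
        print((time.perf_counter() - start) * 1000, "ms")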

    Read the article

  • Mounting Solaris UFS partition on Debian(with FreeBSD kernel)

    - by hayalci
    I have some disks that were being used on a Solaris system. The disks are formatted as UFS. I attached them to a Debian system (with a FreeBSD kernel, i.e. Debian/kFreeBSD), but I cannot mount them:

        $ mount -t ufs /dev/da2s1 /mnt/diska
        mount: /dev/da2s1 : Invalid argument

    tunefs.ufs does not work either:

        $ tunefs.ufs -p /dev/da2s1
        tunefs.ufs: /dev/da2s1: could not read superblock to fill out disk

    Is there an incompatibility between FreeBSD UFS and Solaris UFS? Is it possible to mount one under the other OS?

    Note: tunefs.ufs works on the root partition:

        $ tunefs.ufs -p /dev/da7s2
        tunefs.ufs: ACLs: (-a)                                         disabled
        tunefs.ufs: MAC multilabel: (-l)                               disabled
        tunefs.ufs: soft updates: (-n)                                 disabled
        tunefs.ufs: gjournal: (-J)                                     disabled
        tunefs.ufs: maximum blocks per file in a cylinder group: (-e)  2048
        tunefs.ufs: average file size: (-f)                            16384
        tunefs.ufs: average number of files in a directory: (-s)       64
        tunefs.ufs: minimum percentage of free space: (-m)             8%
        tunefs.ufs: optimization preference: (-o)                      time
        tunefs.ufs: volume label: (-L)

    Read the article

  • How to measure whether a host is good for users in Egypt?

    - by Sherif Buzz
    Hi all, I currently have a site that's hosted in Texas. The majority of my users are from Egypt, and I'm a bit concerned that the current hosting is not optimal in terms of performance. The site is not slow, but how can I know whether, for example, hosting it in Europe or Asia would be better? To clarify: I need a way to test different hosting options, e.g. how can I measure the average response time between Egypt and a host in Texas versus between Egypt and a host in the UK?
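
    Most hosts will publish a test IP or test file per data centre, so one approach is simply to ping each candidate from a machine in Egypt (or ask a user there to run a script) and compare the averages. A sketch that parses the summary line of iputils ping; on Windows the flag is -n rather than -c, and the host names are placeholders:

        import re
        import subprocess

        def avg_rtt_ms(host, count=10):
            out = subprocess.run(["ping", "-c", str(count), host],
                                 capture_output=True, text=True).stdout
            # iputils summary: "rtt min/avg/max/mdev = 35.1/36.2/39.0/1.1 ms"
            m = re.search(r"= [\d.]+/([\d.]+)/", out)
            return float(m.group(1)) if m else None

        for host in ("texas-host.example", "uk-host.example"):
            print(host, avg_rtt_ms(host), "ms")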

    Read the article

  • How to make a variable range of cells?

    - by Ertai
    In column A I have a set of numbers (over 1,000). I want to get the average of ten of them (A1:A10) and write it into the next column (B). Then I want to take the next ten numbers and average them (A11:A20), and so on. How do I do this if cell C1 holds the number giving the size of the range (i.e. 10 = A1:A10, A11:A20, ...; 25 = A1:A25, A26:A50, ...)? When I change the value of C1, I want column B to update automatically. Is this possible?
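
    In Excel this is normally done with AVERAGE over an OFFSET range whose height comes from C1. For checking the numbers off-sheet, here is a small pandas sketch of the same chunked average, with the chunk size as a variable (the file name and layout are assumptions):

        import pandas as pd

        chunk = 10   # the value that would live in C1
        col = pd.read_excel("data.xlsx", header=None)[0]   # hypothetical file, first column

        averages = col.groupby(col.index // chunk).mean()
        print(averages)   # entry 0 = mean of rows 1-10, entry 1 = rows 11-20, ...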

    Read the article

  • How many websites can my server potentially hold?

    - by Daniel Kindler
    Sorry for the "noob" question, but about how many medium-sized websites with average traffic could this server hold? Just the average website, kind of like a small business site. How many sites could this server host while still maintaining nice, decent speed?

        PowerEdge R510, chassis for up to four 3.5" cabled hard drives, LED
        Processor: Intel Xeon E5630 2.53GHz, 12M cache, Turbo, HT, 1066MHz max mem
        Memory: 8GB (4x2GB), 1333MHz single-ranked UDIMMs for 1 proc, optimized
        Operating system: SUSE Linux Enterprise Server 10, SP3, up to 32 CPU lic, 1 yr sub, DIB, media (Red Hat Enterprise Linux licensing also listed)
        Hard drives: 250GB 7.2K RPM SATA 3.5" cabled; 1TB 7.2K RPM SATA 3.5" cabled; 2x 2TB 7.2K RPM SATA 3.5" cabled
        Hard drive configuration: no RAID, embedded SATA controller for x4 chassis
        Power supply: 480 Watt non-redundant

    Thank you!

    Read the article
