Search Results

Search found 3392 results on 136 pages for 'average joe'.


  • Excel: how to get an average for a column for rows that meet multiple criteria

    - by Jess
    I would like to know the average number of days between open and close dates for items with a close date in a particular month. In the example below, items 2, 5 and 6 were closed in Jan 2013 (closed means a status of RESOLVED or CANCELLED); each was open for 26, 9 and 6 days respectively. So the jobs with a close date in Jan 2013 (between 01/01/2013 and 31/01/2013) have an average open time (open date to close date) of 13.67 days to 2dp. I have tried a few ways to get this to work and I think the issue I am having is with the AVERAGE function. First time using a forum, so apologies if my question is unclear. I was unable to post an image, so the data is comma separated below:

        Item_ID,Open_Date,Status,Close_Date
        1,1/06/2012,RESOLVED,16/07/2012
        2,20/12/2012,RESOLVED,16/01/2013
        3,2/01/2013,IN PROGRESS,
        4,3/01/2013,CANCELLED,7/05/2013
        5,3/01/2013,RESOLVED,12/01/2013
        6,4/01/2013,RESOLVED,10/01/2013
        7,1/02/2013,RESOLVED,15/02/2013
        8,2/02/2013,OPEN,
        9,7/02/2013,CANCELLED,26/02/2013

    Read the article

  • The spork/platypus average: shameless self promotion

    - by Roger Hart
    This is the video of the presentation I gave at UA Europe and TCUK this year. The actual sub-title was "Content strategy at Red Gate Software", but this heading feels more honest. For anybody who missed it, or is just vaguely interested, here's a link to me talking about de-suckifying the web. You can find the slideshare deck here, too.* Watching it back is more than a little embarrassing, and makes me really, really want to do a follow-up, so I can do three things: explain the rest of the big web project, now we've done it; give some data on the outcome of the content review; and make a grovelling apology to our marketing guys, who I've been unfairly mean to in a childish effort to look cool. There are a whole bunch of other TCUK presentations online, too. You can find them all here: http://tiny.cc/tcuk10_videos I'd particularly recommend Chris Atherton's "Everything you always wanted to know about psychology and technical communication" - it's full of cool stuff. You should probably also watch David Black's opening keynote, which managed to make my hour of precocious grandstanding look measured, meek, and helpful. He actually makes some interesting points, but you'd basically have to ship Richard Dawkins off to Utah if you wanted to go further out of your way to aggravate your audience. It does give an engaging account of running a large tech comms project, and raise some questions about how we propose to understand a world where increasing amounts of our stuff gets done by increasingly many, increasingly complicated tissues of APIs. Well, sort of. That's what all the notes I made were about, anyway. *Slideshare ate my fonts. Just so we're clear on this: I'd never use badly-kerned Arial in a presentation. Don't worry.

    Read the article

  • Insane load average after reboot

    - by Gazzer
    After doing a reboot of Ubuntu Server 12.04 LTS (after an apt-get dist-upgrade), my server load on a 16GB machine goes insane (around 80) for about 10 or 15 minutes. The only things I can think of are these two processes:

        /usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf --skip-column-names --batch -e
            select concat('select count(*) into @discard from `', TABLE_SCHEMA, '`.`', TABLE_NAME, '`')
            from information_schema.TABLES where ENGINE='MyISAM'

        /usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf --skip-column-names --silent --batch --force -e
            select count(*) into @discard from `information_schema`.`PARTITIONS`

    Is this normal?

    Read the article

  • Stock Analysis and Moving Average with PowerPivot

    - by Marco Russo (SQLBI)
    One week ago Alberto Ferrari wrote a post about how to do working days calculation in PowerPivot . You might think this is necessary only for the accounting department or something like that… but in reality the same techniques are really useful to implement calculations that might be useful when you want to implement some stock analysis using PowerPivot and Excel! As you might know, in PowerPivot it is important to have a Dates table containing all the days, without exceptions. But when you manage stock...(read more)

    Read the article

  • average screen ratio

    - by sam
    I'm building a portfolio website that uses a full-screen background image slideshow, cropped to fit using a JS plugin. To give the minimum amount of cropping, what's the best ratio to make the images? I.e. I know 13" MacBooks are around 13:7 (when taking into account about 100px for the browser bar), but does that scale up on 15", 24", 17" displays? I know there are charts showing the most common dimensions, but they just show a range of sizes, and that's categorized by groups rather than actual dimensions.

    Read the article

  • Number of malicious attacks defended/done on the average user daily [closed]

    - by DalexL
    As a web host, it is very easy to notice the large number of exploit/abuse attempts made against my servers. Out of curiosity, how often are these attempts made against the average user? I'm assuming almost all of them are prevented just by the simple security protocols in place in their browsers, local network, etc. How many attempts, on average, are committed against a single user daily, through any method (email, internet, downloads, etc.)? If known, what percentage of these are blocked by the average user's security? I tried googling, but I was having a hard time getting the right search terms together.

    Read the article

  • Strategy to store/average logs of pings

    - by José Tomás Tocino
    I'm developing a site to monitor web services. The most basic type of check is sending a ping and storing the response time in a CheckLog object. By default, PingCheck objects are triggered every minute, so in one hour you get 60 CheckLogs and in one day you get 1440. That's a lot of them, and I don't need to store that level of detail, so I've set up a collapsing mechanism that periodically takes the uncollapsed CheckLogs older than 24h and collapses (averages) them into intervals of 30 minutes. So, if you have 360 CheckLogs saved from 0:00 to 6:00, after collapsing you retain just 12 of them. The problem, well, is this: after averaging the response times, the graph changes drastically. What can I do to improve this? I guess one option could be narrowing the interval duration to 15 min. I've seen the graphs at the GitHub status page and they do not seem to suffer from this problem. I'd appreciate any kind of information you could give me about this area.
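
    For reference, the collapse described above amounts to bucketing each CheckLog by its timestamp divided by the interval and averaging each bucket; a minimal sketch in Python (field names are hypothetical, not taken from the actual models):

        from collections import defaultdict

        def collapse(logs, interval_seconds=30 * 60):
            """Average (unix_timestamp, response_time) pairs into fixed-size buckets."""
            buckets = defaultdict(list)
            for ts, response_time in logs:
                buckets[int(ts // interval_seconds)].append(response_time)
            # One collapsed CheckLog per bucket: (bucket start time, mean response time)
            return [(b * interval_seconds, sum(v) / len(v))
                    for b, v in sorted(buckets.items())]

    Averaging inevitably flattens spikes, which is why the graph changes; keeping the per-bucket minimum and maximum alongside the mean is one common way to preserve that information without keeping every CheckLog.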

    Read the article

  • python, wrapping class returning the average of the wrapped members

    - by João Portela
    The title isn't very clear, but I'll try to explain. Having this class:

        class Wrapped(object):
            def method_a(self):
                # do some operations
                return n

            def method_b(self):
                # also do some operations
                return n

    I want to have a class that performs the same way as this one:

        class Wrapper(object):
            def __init__(self):
                self.ws = [Wrapped(1), Wrapped(2), Wrapped(3)]

            def method_a(self):
                results = [Wrapped.method_a(w) for w in self.ws]
                sum_ = sum(results, 0.0)
                average = sum_ / len(self.ws)
                return average

            def method_b(self):
                results = [Wrapped.method_b(w) for w in self.ws]
                sum_ = sum(results, 0.0)
                average = sum_ / len(self.ws)
                return average

    Obviously this is not the actual problem at hand (it is not only two methods), and this code is also incomplete (I only included the minimum to explain the problem). So, what I am looking for is a way to obtain this behaviour: whichever method is called on the wrapper class, call that method on all the Wrapped objects and return the average of their results. Can it be done? How? Thanks in advance. PS: didn't know which tags to include...
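
    A common way to get this behaviour without writing every method by hand is __getattr__, which forwards unknown attribute lookups to the wrapped objects; a minimal sketch (the Wrapped(1) constructor arguments are kept from the snippet above, everything else is illustrative):

        class AveragingWrapper(object):
            """Forwards any method call to every wrapped object and averages the results."""

            def __init__(self, wrapped_objects):
                self.ws = wrapped_objects

            def __getattr__(self, name):
                # Only reached when 'name' is not an attribute of the wrapper itself.
                def averaged(*args, **kwargs):
                    results = [getattr(w, name)(*args, **kwargs) for w in self.ws]
                    return sum(results, 0.0) / len(results)
                return averaged

        # wrapper = AveragingWrapper([Wrapped(1), Wrapped(2), Wrapped(3)])
        # wrapper.method_a()  # calls method_a on each Wrapped and returns the mean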

    Read the article

  • How is load average related to CPU utilization?

    - by Kaustubh P
    I have been facing a load average of 3 for the past 2 days. The CPU utilization is never above 40% in any case. Here are some screenshots from the Server Density monitoring tool that I use. The process snapshot at the highest peak, @ 0:00, is as follows: [screenshot] And the process snapshot at the peak created at 12:00 is: [screenshot] My question is: even though CPU utilization is not 100%, why am I facing a high load average? PS: All snapshots are sorted by descending CPU utilization.

    Read the article

  • Calculate average gas prices by year in excel

    - by ghostryder111
    I have 3 columns, A=Date, B=Price, C=Grade, in Excel. I want to calculate the average price of fuel for each year and an overall average of all years, by grade. The data table looks like this:

        Date       | Price | Grade
        2012-05-01 | $3.49 | Regular
        2012-06-07 | $3.58 | Regular
        2012-04-01 | $3.98 | Premium
        2012-02-17 | $3.87 | Premium
        2013-01-01 | $3.49 | Regular
        2013-02-01 | $3.89 | Premium
        2013-03-06 | $3.89 | Premium
        2013-03-09 | $3.45 | Regular

    The output should look something like this:

        Year | Regular | Premium
        2012 | 3.43    | 3.67
        2013 | 3.45    | 3.73
        All  | 3.44    | 3.70

    Read the article

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon EC2 c1.xlarge instances, over the Amazon AMI. The servers are duplicates of each other, running exactly the same hardware and software. Each server spec is:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server started showing a high load average. Needless to say, we did not update the other servers, and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what it could be, or where else we can check? Many thanks for the help! Oz.

        #
        # proper server
        # w command
        #
        00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/1  82.80.137.29  00:28   14:05  0.01s  0.01s -bash
              pts/2  82.80.137.29  00:38   0.00s  0.02s  0.00s w

        #
        # proper server
        # iostat command
        #
        Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   9.03   0.02     4.26     0.17    0.13  86.39
        Device:   tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1    1.63   1.50        55.00       367236    13444008
        xvdfp1    4.41   45.93       70.48       11227226  17228552
        xvdfp2    2.61   2.01        59.81       491890    14620104
        xvdfp3    8.16   14.47       94.23       3536522   23034376
        xvdfp4    0.98   0.79        45.86       192818    11209784

        #
        # problematic server
        # w command
        #
        00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
        USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU  WHAT
              pts/0  82.80.137.29  00:28   15:04  0.02s  0.02s -bash
              pts/1  82.80.137.29  00:38   0.00s  0.05s  0.00s w

        #
        # problematic server
        # iostat command
        #
        Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   7.97   0.04     3.43     0.19    0.07  88.30
        Device:   tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
        xvdap1    2.10   1.49        76.54       374660    19253592
        xvdfp1    5.64   40.98       85.92       10308946  21612112
        xvdfp2    3.97   4.32        93.18       1087090   23439488
        xvdfp3    10.87  30.30       115.14      7622474   28961720
        xvdfp4    1.12   0.28        65.54       71034     16487112

    Read the article

  • Compute average distance from point to line segment and line segment to line segment

    - by Fred
    Hi everyone, I'm searching for an algorithm to calculate the average distance between a point and a line segment in 3D. So given two points A(x1, y1, z1) and B(x2, y2, z2) that represent line segment AB, and a third point C(x3, y3, z3), what is the average distance from each point on AB to point C? I'm also interested in the average distance between two line segments. So given segments AB and CD, what is the average distance from each point on AB to the closest point on CD? I haven't had any luck with the web searches I've tried, so any suggestions would be appreciated. Thanks.
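
    The first average is the integral of |A + t(B - A) - C| over t in [0, 1]; there is a closed form (the integrand is the square root of a quadratic in t), but a numerical sketch in Python makes the definition concrete and is easy to verify against it:

        import math

        def avg_dist_point_segment(a, b, c, samples=1000):
            """Approximate the mean distance from point c to segment ab by sampling
            points uniformly along the segment. a, b, c are (x, y, z) tuples."""
            total = 0.0
            for i in range(samples + 1):
                t = i / samples
                dx = a[0] + t * (b[0] - a[0]) - c[0]
                dy = a[1] + t * (b[1] - a[1]) - c[1]
                dz = a[2] + t * (b[2] - a[2]) - c[2]
                total += math.sqrt(dx * dx + dy * dy + dz * dz)
            return total / (samples + 1)

    The segment-to-segment version (the average over AB of the distance to the nearest point on CD) can be approximated the same way, with an inner loop that minimises the distance from each sample on AB over samples of CD.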

    Read the article

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application performance. New Relic is reporting that across our system "Memcached" is taking an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcached gets take up to 30ms; I can't find a single use of a Memcached get that returns in less than 10ms. More details on the system architecture: currently we have four application servers, each of which has a memcached member, and all four memcached members participate in a memcached cluster. We're running on a cloud hosting provider and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, the responses come back in ~0.5ms. Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong? Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE      USAGE  HIT %  CONN  TIME    EVICT/s  GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211  37.1%  62.7%  10    5.3ms   0.0      73      9       3958    84.6K
        cache2:11211  42.4%  60.8%  11    4.4ms   0.0      46      12      3848    62.2K
        cache3:11211  37.5%  66.5%  12    4.2ms   0.0      75      17      6056    170.4K

        AVERAGE:      39.0%  63.3%  11    4.6ms   0.0      64      13      4620    105.7K

        TOTAL:  0.1GB/ 0.4GB  33    13.9ms  0.0   193      38      13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcached):

        top - 21:48:56 up 1 day, 4:56, 1 user, load average: 0.01, 0.06, 0.05
        Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
        Mem: 501392k total, 424940k used, 76452k free, 66416k buffers
        Swap: 499996k total, 13064k used, 486932k free, 181168k cached

          PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         6519 nobody  20   0  384m  74m  880 S  1.0 15.3 18:22.97 memcached
            3 root    20   0     0    0    0 S  0.3  0.0  0:38.03 ksoftirqd/0
            1 root    20   0 24332 1552  776 S  0.0  0.3  0:00.56 init
            2 root    20   0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            4 root    20   0     0    0    0 S  0.0  0.0  0:00.00 kworker/0:0
            5 root    20   0     0    0    0 S  0.0  0.0  0:00.02 kworker/u:0
            6 root    RT   0     0    0    0 S  0.0  0.0  0:00.00 migration/0
            7 root    RT   0     0    0    0 S  0.0  0.0  0:00.62 watchdog/0
            8 root     0 -20     0    0    0 S  0.0  0.0  0:00.00 cpuset
            9 root     0 -20     0    0    0 S  0.0  0.0  0:00.00 khelper
        ...output truncated...
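
    One way to tell whether the 10-12ms is really spent in memcached (rather than in the application's instrumentation or serialization) is to time raw gets directly from an application server. A rough benchmark sketch, assuming the pymemcache client is installed (an assumption; any memcached client would do) and using the cache1 host from the memcache-top output:

        import time
        from pymemcache.client.base import Client

        client = Client(("cache1", 11211))
        client.set("latency-probe", b"x")

        samples_ms = []
        for _ in range(1000):
            start = time.perf_counter()
            client.get("latency-probe")
            samples_ms.append((time.perf_counter() - start) * 1000.0)

        samples_ms.sort()
        print("median %.2f ms, p95 %.2f ms"
              % (samples_ms[len(samples_ms) // 2], samples_ms[int(len(samples_ms) * 0.95)]))

    If this comes back close to the ~0.5ms ping time, the extra latency is probably being added above the network layer rather than by memcached itself.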

    Read the article

  • How to get average of multiple time series

    - by Supun Kamburugamuva
    I have a few computers running. I want to get the average CPU usage of these computers and plot it as a graph, so I've collected the CPU usage at regular intervals on each machine. For each computer I therefore have a data set of (time, CPU usage) points. But the times at which the CPU measurements are taken on different machines are not in sync. For example, on the 1st machine the CPU may be measured at times 1, 5, 9; on the second machine at times 2, 5, 8. I want to get an average data series from these different data sets. Could you point me to some resources? Thanks - Supun.
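
    A common approach, since the sample times differ per machine, is to interpolate every series onto a shared time grid and then average across machines; a small sketch using numpy (each series given as sorted (time, cpu) pairs):

        import numpy as np

        def average_series(series_list, step=1.0):
            """series_list: one list of sorted (time, cpu) pairs per machine.
            Returns (grid, mean_cpu) over the time range covered by every series."""
            start = max(s[0][0] for s in series_list)
            end = min(s[-1][0] for s in series_list)
            grid = np.arange(start, end + step, step)
            interpolated = [np.interp(grid, [t for t, _ in s], [v for _, v in s])
                            for s in series_list]
            return grid, np.mean(interpolated, axis=0)

        # average_series([[(1, 10), (5, 30), (9, 20)], [(2, 12), (5, 28), (8, 22)]])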

    Read the article

  • Calculating the average color between two colors in PHP, using an index number as reference value

    - by Roel Krottje
    Hi all! In PHP, I am trying to calculate the average color (in hex) between two different hex colors. However, I also need to be able to supply an index number between 0.0 and 1.0. So for example: I have $color1 = "#ffffff" and $color2 = "#0066CC". If I write a function to get the average color and supply 0.0 as the index number, the function needs to return "#ffffff". If I supply 1.0 as the index number, the function needs to return "#0066CC". However, if I supply 0.2, the function needs to return a color between the two, but still closer to $color1 than to $color2. If I supply index number 0.5, I get the exact average of both colors. I have been trying to accomplish this for several days now, but I can't seem to figure it out! Any help would therefore be greatly appreciated. Thanks!
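
    What is described here is linear interpolation per RGB channel, with the index as the blend weight. The arithmetic is the same in any language; sketched below in Python for illustration (a PHP version would do the same with hexdec, dechex and sprintf):

        def blend_hex(color1, color2, index):
            """index 0.0 returns color1, 1.0 returns color2, 0.5 the exact midpoint."""
            c1 = [int(color1[i:i + 2], 16) for i in (1, 3, 5)]
            c2 = [int(color2[i:i + 2], 16) for i in (1, 3, 5)]
            mixed = [int(round(a + (b - a) * index)) for a, b in zip(c1, c2)]
            return "#%02x%02x%02x" % tuple(mixed)

        # blend_hex("#ffffff", "#0066CC", 0.0) -> '#ffffff'
        # blend_hex("#ffffff", "#0066CC", 0.2) -> still much closer to white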

    Read the article

  • Power Pivot - Average time per item

    - by Username
    I'm trying to calculate, on average, how long it takes to make each item. Here is the data table:

        Date        Item   Quantity  Operator
        01/01/2014  Item1  3         John
        01/01/2014  Item2  5         John
        02/01/2014  Item1  7         Bob
        02/01/2014  Item2  4         John
        03/01/2014  Item1  2         Bob
        07/01/2014  Item2  3         John

    On 01/01/2014 John made 3 of Item1 and 5 of Item2. If we only had the first 2 rows, we could guess that it takes 0.375 days to make Item1 and 0.625 days to make Item2. I want to be able to calculate this on average using all the data, taking into account that the operators are obviously working on different items. Thank you

    Read the article

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each pull you towards their own demographics. However, just taking the average is poor, because it means that adding in a lot of generic data makes the number go down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give. Also, consider that most demographics (i.e. everything other than gender) do not have an average of 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status. What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap and easy to do in MySQL?
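
    One technique that matches the behaviour described (a site far from the baseline should dominate a pile of near-baseline sites) is a naive-Bayes-style combination rather than an average: convert each site's probability to log-odds relative to the overall baseline, sum the deviations, and convert back. This is a suggested approach, not something from the question; a sketch:

        import math

        def combine(probabilities, baseline=0.5, eps=1e-6):
            """Sites near the baseline barely move the estimate; strongly skewed
            sites dominate, no matter how many generic data points there are."""
            def logit(p):
                p = min(max(p, eps), 1.0 - eps)  # clamp so 0.0 / 1.0 don't blow up
                return math.log(p / (1.0 - p))
            score = logit(baseline) + sum(logit(p) - logit(baseline) for p in probabilities)
            return 1.0 / (1.0 + math.exp(-score))

        # Female probability: 50 sites at 0.48 plus one site at 0.0 (100% male).
        # combine([0.48] * 50 + [0.0], baseline=0.48) -> close to 0.0 (i.e. male),
        # where the straight average would sit near 0.47.

    Because the combination is just a sum of per-site terms, the MySQL side stays cheap: essentially SUM(LN(p / (1 - p))) per user (with the same clamping), plus the baseline correction applied afterwards.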

    Read the article

  • MySQL: Get average of time differences?

    - by Nebs
    I have a table called Sessions with two datetime columns: start and end. For each day (YYYY-MM-DD) there can be many different start and end times (HH:ii:ss). I need to find a daily average of all the differences between these start and end times. An example of a few rows would be:

        start: 2010-04-10 12:30:00   end: 2010-04-10 12:30:50
        start: 2010-04-10 13:20:00   end: 2010-04-10 13:21:00
        start: 2010-04-10 14:10:00   end: 2010-04-10 14:15:00
        start: 2010-04-10 15:45:00   end: 2010-04-10 15:45:05
        start: 2010-05-10 09:12:00   end: 2010-05-10 09:13:12
        ...

    The time differences (in seconds) for 2010-04-10 would be: 50, 60, 300, 5. The average for 2010-04-10 would be 103.75 seconds. I would like my query to return something like:

        day: 2010-04-10   ave: 103.75
        day: 2010-05-10   ave: 72
        ...

    I can get the time difference grouped by start date, but I'm not sure how to get the average. I tried using the AVG function, but I think it only works directly on column values (rather than on the result of another function). This is what I have:

        SELECT TIME_TO_SEC(TIMEDIFF(end, start)) AS timediff
        FROM Sessions
        GROUP BY DATE(start)

    Is there a way to get the average of timediff for each start date group? I'm new to aggregate functions, so maybe I'm misunderstanding something. If you know of an alternate solution, please share. I could always do it ad hoc and compute the average manually in PHP, but I'm wondering if there's a way to do it in MySQL so I can avoid running a bunch of loops. Thanks.
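
    For comparison, the ad hoc fallback mentioned at the end (group the rows by start date, then average the per-row differences) is only a few lines in any scripting language; here it is sketched in Python rather than PHP, with sessions given as (start, end) datetime pairs:

        from collections import defaultdict

        def daily_average_seconds(sessions):
            """Returns {date: average session length in seconds}."""
            per_day = defaultdict(list)
            for start, end in sessions:
                per_day[start.date()].append((end - start).total_seconds())
            return {day: sum(diffs) / len(diffs) for day, diffs in per_day.items()}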

    Read the article

  • Finding the average of two numbers using classes and methods

    - by Have alook
    I want to use methods inside a class. Q: find the average of two numbers using classes and methods.

        import java.util.*;

        class aaa {
            int a, b, sum, avrg;

            void average() {
                System.out.println("The average is =" + avrg);
                avrg = (sum / 2);
            }
        }

        class ave {
            public static void main(String args[]) {
                aaa n = new aaa();
                Scanner m = new Scanner(System.in);
                System.out.println("write two number");
                n.a = m.nextInt();
                n.b = m.nextInt();
                n.average();
            }
        }

    Read the article

  • Storm Trident 'Average' aggregator

    - by E Shindler
    I am a newbie to Trident and I'm looking to create an 'Average' aggregator similar to 'Sum()', but for 'Average'. The following does not work:

        public class Average implements CombinerAggregator<Long> {
            public Long init(TridentTuple tuple) {
                return (Long) tuple.getValue(0);
            }

            public Long combine(Long val1, Long val2) {
                return (val1 + val2) / 2;
            }

            public Long zero() {
                return 0L;
            }
        }

    It may not be exactly syntactically correct, but that's the idea. Please help if you can. Given 2 tuples with values [2,4,1] and [2,2,5] and fields 'a', 'b' and 'c', doing an average on field 'b' should return 3. I'm not entirely sure how init() and zero() work. Thank you so much for your help in advance. Eli

    Read the article

  • Linq Query - Average Time (DateTime data types)

    - by Jade
    I have a database that has the following records in a DateTime field:

        2012-04-13 08:31:00.000
        2012-04-12 07:53:00.000
        2012-04-11 07:59:00.000
        2012-04-10 08:16:00.000
        2012-04-09 15:11:00.000
        2012-04-08 08:28:00.000
        2012-04-06 08:26:00.000

    I want to run a LINQ to SQL query to get the average time from the records above. I tried the following:

        (From o In MYDATA Select o.SleepTo).Average()

    Since SleepTo is a DateTime field, I get an error on Average(). If I were trying to get the average of, say, an integer, the above LINQ query would work. What do I need to do to get it to work for DateTimes?

    Read the article
