Search Results

Search found 2042 results on 82 pages for 'average'.

  • Average of two strings in alphabetical/lexicographical order

    - by Bemmu
    Suppose you take the strings 'a' and 'z' and list all the strings that come between them in alphabetical order: ['a','b','c' ... 'x','y','z']. Take the midpoint of this list and you find 'm'. So this is kind of like taking an average of those two strings. You could extend it to strings with more than one character, for example the midpoint between 'aa' and 'zz' would be found in the middle of the list ['aa', 'ab', 'ac' ... 'zx', 'zy', 'zz']. Might there be a Python method somewhere that does this? If not, even knowing the name of the algorithm would help. I began making my own routine that simply goes through both strings and finds the midpoint of the first differing letter, which seemed to work great in that the midpoint of 'aa' and 'az' was 'am', but then it fails on 'cat' and 'doggie', whose midpoint it thinks is 'c'. I tried Googling for "binary search string midpoint" etc. but without knowing the name of what I am trying to do here I had little luck.
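
    One way to make the multi-character case work is to treat each string as a base-26 fraction ('a'=0 ... 'z'=25), average the two fractions, and convert back. A minimal sketch, assuming lowercase a-z only and accepting floating-point precision limits for long strings:

        def string_to_fraction(s):
            # 'a' maps to 0, 'z' to 25; each position is one base-26 digit
            return sum((ord(c) - ord('a')) / 26.0 ** (i + 1) for i, c in enumerate(s))

        def fraction_to_string(f, length):
            # peel off base-26 digits one at a time
            chars = []
            for _ in range(length):
                f *= 26
                digit = int(f)
                chars.append(chr(ord('a') + digit))
                f -= digit
            return ''.join(chars)

        def string_midpoint(a, b, length=6):
            mid = (string_to_fraction(a) + string_to_fraction(b)) / 2.0
            return fraction_to_string(mid, length)

        print(string_midpoint('aa', 'az'))       # begins with 'am'
        print(string_midpoint('cat', 'doggie'))  # a string sorting between the two

    For exact results on arbitrarily long strings, the same idea can be done with integer arithmetic on padded strings instead of floats.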

    Read the article

  • average case running time of linear search algorithm

    - by Brahadeesh
    Hi all. I am trying to derive the average case running time for the deterministic linear search algorithm. The algorithm searches for an element x in an unsorted array A in the order A[1], A[2], A[3]...A[n]. It stops when it finds the element x or proceeds until it reaches the end of the array. I searched on Wikipedia and the answer given was (n+1)/(k+1) where k is the number of times x is present in the array. I approached it in another way and am getting a different answer. Can anyone please give me the correct proof and also let me know what's wrong with my method? E(T) = 1*P(1) + 2*P(2) + 3*P(3) ... + n*P(n), where P(i) is the probability that the algorithm runs for 'i' time (i.e. compares 'i' elements). P(i) = (n-i)C(k-1) * (n-k)! / n! Here, (n-i)C(k-1) is (n-i) Choose (k-1). As the algorithm has reached the ith step, the remaining k-1 x's must be in the last n-i elements. Hence (n-i)C(k-1). (n-k)! is the total number of ways of arranging the rest of the non-x numbers, and n! is the total number of ways of arranging the n elements in the array. I am not getting (n+1)/(k+1) on simplifying.
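
    The claimed closed form is easy to sanity-check numerically. A quick Monte Carlo sketch in Python (an empirical check under the same model, not a proof):

        import random

        def comparisons(n, k):
            # place k copies of x uniformly among n slots; linear search stops
            # at the first one, so the cost is its 1-based position
            positions = random.sample(range(n), k)
            return min(positions) + 1

        n, k, runs = 20, 4, 100000
        avg = sum(comparisons(n, k) for _ in range(runs)) / float(runs)
        print(avg, (n + 1.0) / (k + 1.0))  # the two should agree closely (4.2 here)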

    Read the article

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync. I have found several techniques for determining the color, perceived luminosity etc of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha because the same color/luminosity looks different at different alpha values. Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches: Perform some "folding" operations i.e. combining pixels from one half of the area to the other half. Then repeat until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity? Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure. I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this and then share it around.
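
    The question is about UIKit, but the averaging step itself is simple. A sketch of the per-region math in Python, using the common Rec. 601 luma weights; the sampling stride and the alpha handling are assumptions:

        def region_luminance(pixels, stride=4):
            """pixels: list of (r, g, b, a) tuples, components 0-255;
            samples every stride-th pixel rather than all of them."""
            total, count = 0.0, 0
            for r, g, b, a in pixels[::stride]:
                luma = 0.299 * r + 0.587 * g + 0.114 * b  # perceived brightness
                total += luma * (a / 255.0)               # dimmer as alpha drops
                count += 1
            return total / count if count else 0.0

        # e.g. pick white text when region_luminance(...) < 128, black otherwise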

    Read the article

  • Java - Highest, Lowest and Average

    - by Emily
    Right, so why does Java come up with this error:

        Exception in thread "main" java.lang.Error: Unresolved compilation problem:
        Type mismatch: cannot convert from double to int
        at rainfall.main(rainfall.java:38)

    From this:

        public class rainfall {
            /**
             * @param args
             */
            public static void main(String[] args) {
                int[] numgroup;
                numgroup = new int[12];
                ConsoleReader console = new ConsoleReader();
                int highest;
                int lowest;
                int index;
                int tempVal;
                int minMonth;
                int minIndex;
                int maxMonth;
                int maxIndex;
                System.out.println("Welcome to Rainfall");
                // Input (index now 0-based)
                for (index = 0; index < 12; index = index + 1) {
                    System.out.println("Please enter the rainfall for month " + index + 1);
                    tempVal = console.readInt();
                    while (tempVal > 100 || tempVal < 0) {
                        System.out.println("The rating must be within 0...100. Try again");
                        tempVal = console.readInt();
                    }
                    numgroup[index] = tempVal;
                }
                lowest = numgroup[0];
                highest = numgroup[0];
                int total = 0.0; // rainfall.java:38 -- the reported error: a double literal assigned to an int
                // Loop over data (using 1 loop)
                for (index = 0; index < 12; index = index + 1) {
                    int curr = numgroup[index];
                    if (curr < lowest) {
                        lowest = curr;
                        minIndex = index;
                    }
                    if (curr > highest) {
                        highest = curr;
                        maxIndex = index;
                    }
                    total += curr;
                }
                float avg = (float) total / numgroup.length;
                System.out.println("The average monthly rainfall was " + avg);
                // +1 to go from 0-based index to 1-based month
                System.out.println("The lowest monthly rainfall was month " + minIndex + 1);
                System.out.println("The highest monthly rainfall was month " + maxIndex + 1);
                System.out.println("Thank you for using Rainfall");
            }

            private static ConsoleReader ConsoleReader() {
                return null;
            }
        }

    Read the article

  • Paying great programmers more than average programmers

    - by Kelly French
    It's fairly well recognized that some programmers are up to 10 times more productive than others. Joel mentions this topic on his blog. There is a whole blog devoted to the idea of the "10x productive programmer". In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000). Fred Brooks mentions the wide range in the quality of designers in his "No Silver Bullet" article: "The differences are not minor--they are rather like the differences between Salieri and Mozart. Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. The differences between the great and the average approach an order of magnitude." The study that Brooks cites is: H. Sackman, W.J. Erikson, and E.E. Grant, "Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, Vol. 11, No. 1 (January 1968), pp. 3-11. The way programmers are paid by employers these days makes it almost impossible to pay the great programmers a large multiple of the entry-level salary. When the starting salary for a just-graduated entry-level programmer, we'll call him Asok (from Dilbert), is $40K, even if the top programmer, we'll call him Linus, makes $120K, that is only a multiple of 3. I'd be willing to bet that Linus does much more than 3 times what Asok does, so why wouldn't we expect him to get paid more as well? Here is a quote from Stroustrup: "The companies are complaining because they are hurting. They can't produce quality products as cheaply, as reliably, and as quickly as they would like. They correctly see a shortage of good developers as a part of the problem. What they generally don't see is that inserting a good developer into a culture designed to constrain semi-skilled programmers from doing harm is pointless because the rules/culture will constrain the new developer from doing anything significantly new and better." This leads to two questions. I'm excluding self-employed programmers and contractors. If you disagree that's fine, but please include your rationale. It might be that the self-employed or contract programmers are where you find the top-10 earners, but please provide an explanation/story/rationale along with any anecdotes. [EDIT] I thought up some other areas in which talent/ability affects pay: financial traders (commodities, stock, derivatives, etc.), designers (fashion, interior decorators, architects, etc.), professionals (doctor, lawyer, accountant, etc.), and sales. Questions: Why aren't the top 1% of programmers paid like A-list movie stars? What would the industry be like if we did pay the "Smart and gets things done" programmers 6, 8, or 10 times what an intern makes? [Footnote: I posted this question after submitting it to the Stackoverflow podcast. It was included in episode 77 and I've written more about it as a Codewright's Tale post 'Of Rockstars and Bricklayers'] Epilogue: It's probably unfair to exclude contractors and the self-employed. One aspect of the highest earners in other fields is that they are free agents. The competition for their skills is what drives up their earning power. This means they cannot be interchangeable or otherwise treated as a plug-and-play resource.
    I liked the example in one answer of a major league baseball team trying to field two first-basemen. Also, something that Joel mentioned in the Stackoverflow podcast (#77): there are natural dynamics that shrink any extreme performance/pay ranges between the highs and lows. One is the peer pressure on organizations to pay within a given range; another is the likelihood that the high performer will realize their undercompensation and seek greener pastures.

    Read the article

  • Get the AVG of two SQL Access Queries

    - by reggiereg
    Hi, I'm trying to get the AVERAGE from the results of two separate SQL queries built in MS Access. The first query pulls the largest record:

        SELECT DISTINCTROW Sheet1.Tx_Date, Sheet1.LName, Sheet1.Patient_Name, Sheet1.MRN,
               Max(Sheet1.FEV1_ACT) AS [Max Of FEV1_ACT],
               Max(Sheet1.FEF_25_75_ACT) AS [Max Of FEF_25_75_ACT]
        FROM Sheet1
        GROUP BY Sheet1.Tx_Date, Sheet1.LName, Sheet1.Patient_Name, Sheet1.MRN;

    The second query pulls the second largest record:

        SELECT Sheet1.MRN, Sheet1.Patient_Name, Sheet1.Lname,
               Max(Sheet1.FEV1_ACT) AS 2ndLrgOfFEV1_ACT,
               Max(Sheet1.FEF_25_75_ACT) AS 2ndLrgOfFEF_25_75_ACT
        FROM Sheet1
        WHERE (((Sheet1.FEV1_ACT)<(SELECT MAX( FEV1_ACT ) FROM Sheet1 )))
        GROUP BY Sheet1.MRN, Sheet1.Patient_Name, Sheet1.Lname;

    These two queries work great, I just need some help on pulling the AVERAGE of the results of these two queries into one. Thanks.

    Read the article

  • adding up value of array and getting the average

    - by sea_1987
    I have an array that looks similar to this:

        [4] => Common_Model Object ( [id] => 4 [name] => [date_created] => [last_updated] => [user_id_updated] => [_table] => [_aliases] => Array ( [id] => 4 [name] => [date_created] => [date_updated] => [user_id_updated] => [rating] => 3 [recipe_id] => 5 ) [_nonDBAliases] => Array ( ) [_default] => Array ( ) [_related] => Array ( ) [_enums] => [_alsoDelete] => Array ( ) [_readOnly] => Array ( [0] => date_updated ) [_valArgs] => Array ( ) [_valArgsHash] => Array ( [default] => Array ( ) ) [_valAliases] => Array ( ) [_extraData] => Array ( ) [_inputs] => Array ( ) [_tableName] => jm_ratings [_tablePrefix] => [_niceDateUpdated] => 1st Jan 70 [_niceDateCreated] => 1st Jan 70 [_fetchAdminData] => [_mCache] => [_assets] => Array ( ) )
        [3] => Common_Model Object ( [id] => 3 [name] => [date_created] => [last_updated] => [user_id_updated] => [_table] => [_aliases] => Array ( [id] => 3 [name] => [date_created] => [date_updated] => [user_id_updated] => [rating] => 1 [recipe_id] => 5 ) [_nonDBAliases] => Array ( ) [_default] => Array ( ) [_related] => Array ( ) [_enums] => [_alsoDelete] => Array ( ) [_readOnly] => Array ( [0] => date_updated ) [_valArgs] => Array ( ) [_valArgsHash] => Array ( [default] => Array ( ) ) [_valAliases] => Array ( ) [_extraData] => Array ( ) [_inputs] => Array ( ) [_tableName] => jm_ratings [_tablePrefix] => [_niceDateUpdated] => 1st Jan 70 [_niceDateCreated] => 1st Jan 70 [_fetchAdminData] => [_mCache] => [_assets] => Array ( ) )
        [2] => Common_Model Object ( [id] => 2 [name] => [date_created] => [last_updated] => [user_id_updated] => [_table] => [_aliases] => Array ( [id] => 2 [name] => [date_created] => [date_updated] => [user_id_updated] => [rating] => 1 [recipe_id] => 5 ) [_nonDBAliases] => Array ( ) [_default] => Array ( ) [_related] => Array ( ) [_enums] => [_alsoDelete] => Array ( ) [_readOnly] => Array ( [0] => date_updated ) [_valArgs] => Array ( ) [_valArgsHash] => Array ( [default] => Array ( ) ) [_valAliases] => Array ( ) [_extraData] => Array ( ) [_inputs] => Array ( ) [_tableName] => jm_ratings [_tablePrefix] => [_niceDateUpdated] => 1st Jan 70 [_niceDateCreated] => 1st Jan 70 [_fetchAdminData] => [_mCache] => [_assets] => Array ( ) )

    I am wanting to add up the [rating] values and get the mean average, but I don't know how to do this with PHP. My attempt looks like this:

        <?php
        foreach ($rt as $rating) {
            $total = $rating->rating + $rating->rating
        }
        $total / count($rt);
        ?>
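
    A sketch of the intended accumulation in Python for comparison (the attempt above assigns to $total on each pass instead of accumulating into it, and never keeps the division result):

        def mean_rating(ratings):
            total = 0
            for r in ratings:
                total += r.rating          # accumulate, don't overwrite
            return total / float(len(ratings)) if ratings else 0
            # for the dump above: (3 + 1 + 1) / 3 = 1.67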

    Read the article

  • How to get a number closest to the average in c++?

    - by Alex Zielinski
    What I'm trying to achieve is to take the average of the numbers stored in the array and find the number which is closest to it. My code compiles, but has an error just after starting. I think it's something to do with the memory handling (I don't feel confident with pointers, etc. yet). Could some nice guy take a look at my code and tell me what's wrong with it? (Don't be hard on me, I'm a beginner.)

        #include <iostream>
        #include <cmath>
        using namespace std;

        double* aver(double* arr, size_t size, double& average);

        int main()
        {
            double arr[] = {1,2,3,4,5,7};
            size_t size = sizeof(arr)/sizeof(arr[0]);
            double average = 0;
            double* p = aver(arr, size, average);
            cout << *p << " " << average << endl;
        }

        double* aver(double* arr, size_t size, double& average)
        {
            int i, j, sum;
            double* m = 0;
            int tmp[7];
            for (i = 0; i < size; i++)
                sum += arr[i];
            average = sum/size;
            for (j = 0; j < size; j++) {
                tmp[j] = arr[j] - average;
                if (abs(tmp[j]) > *m)
                    *m = tmp[j];
            }
            return m;
        }
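
    For reference, the underlying task is small enough to state in a few lines; a sketch in Python (in the C++ above, note that sum is read before being initialized, and m is a null pointer that gets dereferenced):

        def closest_to_average(xs):
            avg = sum(xs) / float(len(xs))
            return min(xs, key=lambda x: abs(x - avg))

        print(closest_to_average([1, 2, 3, 4, 5, 7]))  # average is ~3.67, prints 4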

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars, that is useful, too. I can extrapolate what I need from that. NOTE: Your anecdotal evidence from your own job will not be very helpful. I'm looking for industry average statistics from a credible source. EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer) so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best. Firstly, let me respond to the idea that us "management types" shouldn't use these types of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive. And this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that on average a software shop spends 5% (making up that number) of their total development budget (salaries being the large portion of the other 95%, for argument's sake), and I'm only spending 3%, it helps in the justification process. So, it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the necessary justification to spend what I need to spend on developers to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling. Next, let's examine the idea that even considering the relationship between a spend on developer salaries and developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on.". Well, yes, it is like saying that, and from a budgeting perspective, this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies or less? Why might this be? If you are a coffee supply manager, that seems like a useful thought process. The same seems to hold true for developers. Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, the same business, the same application architecture, and the same development frameworks. I guess if I could find such a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic.
    Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation. But the generalized statistic does help in this evaluation. You can know if your budget is grossly off from what others are doing, and this can help you figure out why this is. So, I will concede the point that it would be better to find statistics that align to my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality because large enterprises have very heterogeneous IT environments. But if I was going to pick one technology that is our focus, that would be it. But, if you were to point me at some statistics that are related to a Linux shop doing embedded Java applications, I would still find it useful as a point of comparison. SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't so much care whether it is expressed as a percentage of total budget, as X dollars per dev, or as Y percentage of salary. Any metric would be useful. If there are metrics that are specific to ASP.NET dev shops in the Northeast US, all the better, but I would be happy to find anything.

    Read the article

  • Trend analysis using iterative value increments

    - by Dave Jarvis
    We have configured iReport to generate the following graph: the real data points are in blue, the trend line is green. The problems include:

      • Too many data points for the trend line
      • Trend line does not follow a Bezier curve (spline)

    The source of the problem is with the incrementer class. The incrementer is provided with the data points iteratively. There does not appear to be a way to get the set of data. The code that calculates the trend line looks as follows:

        import java.math.BigDecimal;
        import net.sf.jasperreports.engine.fill.*;

        /**
         * Used by an iReport variable to increment its average.
         */
        public class MovingAverageIncrementer implements JRIncrementer {
            private BigDecimal average;
            private int incr = 0;

            /**
             * Instantiated by the MovingAverageIncrementerFactory class.
             */
            public MovingAverageIncrementer() {
            }

            /**
             * Returns the newly incremented value, which is calculated by averaging
             * the previous value from the previous call to this method.
             *
             * @param jrFillVariable Unused.
             * @param object New data point to average.
             * @param abstractValueProvider Unused.
             * @return The newly incremented value.
             */
            public Object increment( JRFillVariable jrFillVariable, Object object,
                    AbstractValueProvider abstractValueProvider ) {
                BigDecimal value = new BigDecimal( ( ( Number )object ).doubleValue() );

                // Average every 10 data points
                //
                if( incr % 10 == 0 ) {
                    setAverage( ( value.add( getAverage() ).doubleValue() / 2.0 ) );
                }

                incr++;

                return getAverage();
            }

            /**
             * Changes the value that is the moving average.
             * @param average The new moving average value.
             */
            private void setAverage( BigDecimal average ) {
                this.average = average;
            }

            /**
             * Returns the current moving average average.
             * @return Value used for plotting on a report.
             */
            protected BigDecimal getAverage() {
                if( this.average == null ) {
                    this.average = new BigDecimal( 0 );
                }
                return this.average;
            }

            /** Helper method. */
            private void setAverage( double d ) {
                setAverage( new BigDecimal( d ) );
            }
        }

    How would you create a smoother and more accurate representation of the trend line?
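
    One way to get a smoother trend, sketched in Python since the windowing idea is language-neutral: keep the last N points and plot the mean of that window, rather than folding each new value into (old + new) / 2, which weights recent points far too heavily. The window size is a tuning assumption.

        from collections import deque

        def moving_average(points, window=10):
            buf, out = deque(maxlen=window), []
            for p in points:
                buf.append(p)                  # the deque drops the oldest point itself
                out.append(sum(buf) / float(len(buf)))
            return out

    Fitting a spline through every tenth windowed value could then give the Bezier-like curve the report wants.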

    Read the article

  • How to display my server's current response time to an average user

    - by Jason
    Sorry, I'm not really sure of the right way to ask this one so bear with me... We have a web application that runs on a set of servers at a data center (not in our offices) We want to be able to somehow 'advertise' to our clients/users that the availability or response time of our servers has met a standard throughout the day. I am being asked to come up with a standard metric that we can easily advertise on our login screen that shows current "standard response time" checked every x minutes. My thinking is that I need to capture something like the results of a traceroute from a server (either in our office, amazon, etc..) to one of the data center servers and come up with a Red/Yellow/Green type of a notifier for the login screen to let the user know that our tests are responding normally and if they are having delay issues it could be their network or connection to the internet. We have lots of clients in rural areas that have poor connectivity and we are trying to let them know any slowness might be on their end, not ours. I've got the LAMP stack to work with, but this could also be some other system all together as long as it can update the main server with the results. I already have pingdom reports that are available, but that's a bit more than people want to read sometimes. Any ideas on what I can do?
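
    A minimal sketch of the periodic probe idea in Python; the URL and thresholds are placeholders, and something equivalent could run from cron on any LAMP box and write the result where the login page can read it:

        import time
        import urllib.request

        def status(url, warn=0.5, fail=2.0):
            """Green/yellow/red based on how long one request takes (seconds)."""
            start = time.monotonic()
            try:
                urllib.request.urlopen(url, timeout=fail).read(1)
            except Exception:
                return 'red'                      # timed out or unreachable
            elapsed = time.monotonic() - start
            return 'green' if elapsed < warn else 'yellow'

        print(status('https://example.com/health'))  # hypothetical endpoint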

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - it has much more to do with colors and their representation. I am working on a very low level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel:

        // intensity is the weight I talked about: 0 to 255
        intensity = glyphs[text[i]][x + GLYPH_WIDTH*y];

        if (intensity == 255) continue; // Don't draw it, fully transparent
        else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color
        else {
            // Here's the tricky part
            // Get the pixel in the destination for averaging purposes
            pixel = getPixel(x + xi, y + yi, base);

            // transfer is an int for calculations
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2);
            // This is my attempt at averaging
            newPixel.red = (Byte) transfer;

            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2);
            newPixel.green = (Byte) transfer;

            // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2);
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2);
            newPixel.blue = (Byte) transfer;

            // Set the newpixel in the desired mem. position
            setPixel(x+xi, y+yi, newPixel, base);
        }

    The results, as you can see, are less than desirable. That is a very zoomed in image; at 1:1 scale it looks like the text has a green "aura". Any idea for how to properly compute this would be greatly appreciated. Thanks for your time!
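
    For comparison, conventional "source over" compositing weights each channel by the coverage value rather than halving the sum. A sketch in Python, assuming intensity 255 means fully opaque glyph coverage (the post's first description; note its code comments use the opposite convention):

        def blend_channel(src, dst, alpha):
            # standard "source over": result = a*src + (1-a)*dst
            return int(alpha * src + (1.0 - alpha) * dst)

        def blend_pixel(color, pixel, intensity):
            a = intensity / 255.0              # normalize coverage to 0..1
            return tuple(blend_channel(c, p, a) for c, p in zip(color, pixel))

        print(blend_pixel((0, 0, 0), (255, 255, 255), 128))  # mid-gray, no fringe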

    Read the article

  • Eliminate subquery for average numeric value

    - by Dave Jarvis
    Quest: A query selects locations that begin with Vancouver, which are in a 5 minute radius from one another.

    SQL Code: The following SQL abomination does the trick:

        SELECT NAME
        FROM STATION
        WHERE DISTRICT_ID = '110'
          AND NAME LIKE 'Vancouver%'
          AND LATITUDE BETWEEN
              (SELECT round((min(LATITUDE) + max(LATITUDE)) / 2)-5
               FROM STATION
               WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%')
          AND (SELECT round((min(LATITUDE) + max(LATITUDE)) / 2)+5
               FROM STATION
               WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%')
          AND LONGITUDE BETWEEN
              (SELECT round((min(LONGITUDE) + max(LONGITUDE)) / 2)-5
               FROM STATION
               WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%')
          AND (SELECT round((min(LONGITUDE) + max(LONGITUDE)) / 2)+5
               FROM STATION
               WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%')
        ORDER BY LATITUDE

    Question: How can this query be simplified to remove the redundancy, without using a view?

    Restrictions: The database is MySQL, but ANSI SQL is always nice. Thank you!

    Read the article

  • How to let an average user design a boolean expression graphically

    - by Svein Bringsli
    In our application there's a list of customers, and a list of keywords (among other things). Each customer can have a number of keywords, but it's not mandatory. So for instance, one customer can have the keywords "retail" and "chain", one can have only "contractor", and a third can have none at all. I want to let the user make a selection of customers based on these keywords, but without having to write something like "(retail AND chain) or contractor and not wholesale". I would like to make it as user-friendly as possible, and ideally with only "simple" controls, like checkboxes, comboboxes etc. Does anyone have any suggestions on how to design this? Or maybe some examples of applications where there is similar functionality?
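
    One common design is rows of AND-ed conditions grouped into OR-ed blocks, each row built from checkboxes or comboboxes, which maps onto a simple list-of-lists structure. A sketch of the evaluation side in Python, with negation left out for brevity:

        def matches(customer_keywords, query):
            """query is an OR-list of AND-lists, e.g.
            [['retail', 'chain'], ['contractor']] means
            (retail AND chain) OR contractor."""
            return any(all(term in customer_keywords for term in clause)
                       for clause in query)

        print(matches({'contractor'}, [['retail', 'chain'], ['contractor']]))  # True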

    Read the article

  • Calculate average in each group

    - by Gokul
    I am using the following classes:

        class Country {
            int CountryID {get; set;}
            List<City> city {get; set;}
        }

        class City {
            int CountryID {get; set; }
            string city {get; set;}
            int sqkm {get; set;}
        }

    Here is some sample data for Country and City:

        Country: US, UK, Canada
        City: CityC, CityF, CityA, CityB, CityG, CityD, CityE

    I am populating using:

        List<Country> countries = new List<Country> {
            new Country() {
                CountryID = "US",
                city = new List<City> {
                    new City() { CountryID = "US", City = "CityF", sqkm = 2803 },

    and so on. Question 1: I want to use LINQ to find the avg sq. km of land per country, e.g.:

        Canada - 2459
        UK - 3243
        US - 3564

    Read the article

  • 3x3 Average filter in matlab

    - by turingcomplete
    I've written code to smooth an image using a 3x3 averaging filter, however the output is strange, it is almost all black. Here's my code.

        function [filtered_img] = average_filter(noisy_img)
        [m,n] = size(noisy_img);
        filtered_img = zeros(m,n);
        for i = 1:m-2
            for j = 1:n-2
                sum = 0;
                for k = i:i+2
                    for l = j:j+2
                        sum = sum + noisy_img(k,l);
                    end
                end
                filtered_img(i+1,j+1) = sum/9.0;
            end
        end
        end

    I call the function as follows:

        img = imread('img.bmp');
        filtered = average_filter(img);
        imshow(uint8(filtered));

    I can't see anything wrong in the code logic so far, I'd appreciate it if someone can spot the problem.
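
    The filtering step itself, sketched in Python/NumPy for comparison. Note the cast to float before summing: one likely cause of an almost-black result is accumulating in the image's own uint8 type, which saturates at 255 before the division by 9.

        import numpy as np

        def mean_filter_3x3(img):
            img = np.asarray(img, dtype=np.float64)   # avoid uint8 saturation
            out = np.zeros_like(img)
            m, n = img.shape
            for i in range(1, m - 1):
                for j in range(1, n - 1):
                    out[i, j] = img[i-1:i+2, j-1:j+2].mean()  # 3x3 neighborhood
            return out.astype(np.uint8)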

    Read the article

  • MySQL: Is it possible to compute MAX( AVG (field) )?

    - by Brad
    My current query reads:

        SELECT entry_id, user_id, cat_id, AVG( rating ) as avg_rate
        FROM entry_rate
        WHERE 1
        GROUP BY entry_id

    cat_id relates to different categories: 1, 2, 3 or 4. Is there a way I can find the maximum average for each user in each category without setting up an additional table? The return could potentially be 4 maximum avg_rate values for each user_id. Visit the link below for an example: http://lh5.ggpht.com/_rvDQuhTddnc/S8Os_77qR9I/AAAAAAAAA2M/IPmzNeYjfCA/s800/table1.jpg

    Read the article

  • named_scope + average is causing the table to be specified more than once in the sql query run on postgres

    - by hadees
    I have a named scope like so...

        named_scope :gender, lambda { |gender|
          { :joins => { :survey_session => :profile },
            :conditions => { :survey_sessions => { :profiles => { :gender => gender } } } }
        }

    and when I call it everything works fine. I also have this average method I call...

        Answer.average(:rating, :include => {:survey_session => :profile}, :group => "profiles.career")

    which also works fine if I call it like that. However if I were to call it like so...

        Answer.gender('m').average(:rating, :include => {:survey_session => :profile}, :group => "profiles.career")

    I get...

        ActiveRecord::StatementInvalid: PGError: ERROR: table name "profiles" specified more than once :
        SELECT avg("answers".rating) AS avg_rating, profiles.career AS profiles_career
        FROM "answers"
        LEFT OUTER JOIN "survey_sessions" survey_sessions_answers ON "survey_sessions_answers".id = "answers".survey_session_id
        LEFT OUTER JOIN "profiles" ON "profiles".id = "survey_sessions_answers".profile_id
        INNER JOIN "survey_sessions" ON "survey_sessions".id = "answers".survey_session_id
        INNER JOIN "profiles" ON "profiles".id = "survey_sessions".profile_id
        WHERE ("profiles"."gender" = E'm')
        GROUP BY profiles.career

    which is a little hard to read, but says I'm including the table profiles twice. If I were to just remove the include from average it works, but that isn't really practical, because average is actually being called inside a method which gets passed the scope. So at times gender or average might get called without each other, and if either was missing the profile include it wouldn't work. So either I need to know how to fix this apparent bug in Rails, or figure out a way to know what scopes were applied to an ActiveRecord::NamedScope::Scope object, so that I could check to see if they have been applied and if not add the include for average.

    Read the article

  • Using exponential smoothing with NaN values

    - by Eric
    I have a sample of some kind that can create somewhat noisy output. The sample is the result of some image processing from a camera, which indicates the heading of a blob of a certain color. It is an angle from around -45° to +45°, or a NaN, which means that the blob is not actually in view. In order to combat the noisy data, I felt that exponential smoothing would do the trick. However, I'm not sure how to handle the NaN values. On the one hand, involving them in the math would result in a NaN average, which would then prevent any meaningful results. On the other hand, ignoring NaN values completely would mean that a "no-detection" scenario would never be reported. And just to complicate things, the data is also noisy in that it can contain false NaN values, which ideally would be smoothed somehow to prevent random noise. Any ideas about how I could implement such an exponential smoother?
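
    One possible scheme, sketched in Python: a NaN sample leaves the estimate alone, but a run of consecutive NaNs longer than a threshold makes the smoother report "no detection". The alpha and threshold values are tuning assumptions.

        import math

        def smooth(samples, alpha=0.3, max_missing=5):
            estimate, missing = float('nan'), 0
            for x in samples:
                if math.isnan(x):
                    missing += 1
                    # a lone false NaN is ignored; a long run means truly lost
                    yield float('nan') if missing >= max_missing else estimate
                else:
                    missing = 0
                    estimate = x if math.isnan(estimate) else alpha * x + (1 - alpha) * estimate
                    yield estimate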

    Read the article

  • Function for averages of tuples in a dictionary

    - by Billy Mann
    I have a string and dictionary in the form:

        ('the head', {'exploded': (3.5, 1.0), 'the': (5.0, 1.0), "puppy's": (9.0, 1.0), 'head': (6.0, 1.0)})

    Each parenthesized pair is a tuple which corresponds to (score, standard deviation). I'm taking the average of just the first number in each tuple. I've tried this:

        def score(string, d):
            for word in d:
                (score, std) = d[word]
                d[word] = float(score), float(std)
                if word in string:
                    word = string.lower()
                    number = len(string)
            return sum([v[0] for v in d.values()]) / float(len(d))
            if len(string) == 0:
                return 0

    When I run:

        print score('the head', {'exploded': (3.5, 1.0), 'the': (5.0, 1.0), "puppy's": (9.0, 1.0), 'head': (6.0, 1.0)})

    I should get 5.5 but instead I'm getting 5.875. I can't figure out what in my function is not allowing me to get the correct answer.
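
    Getting 5.5 requires averaging only over the words that actually occur in the input string, not over every key in the dictionary (which is what the return line above does). A sketch of that reading:

        def score(string, d):
            words = string.lower().split()
            scores = [d[w][0] for w in words if w in d]   # only words present in d
            if not scores:
                return 0
            return sum(scores) / float(len(scores))

        print(score('the head', {'exploded': (3.5, 1.0), 'the': (5.0, 1.0),
                                 "puppy's": (9.0, 1.0), 'head': (6.0, 1.0)}))  # 5.5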

    Read the article

  • R: What are the best functions to deal with concatenating and averaging values in a data.frame?

    - by John
    I have a data.frame from this code:

        my_df = data.frame("read_time" = c("2010-02-15", "2010-02-15", "2010-02-16",
                                           "2010-02-16", "2010-02-16", "2010-02-17"),
                           "OD" = c(0.1, 0.2, 0.1, 0.2, 0.4, 0.5))

    which produces this:

        > my_df
           read_time  OD
        1 2010-02-15 0.1
        2 2010-02-15 0.2
        3 2010-02-16 0.1
        4 2010-02-16 0.2
        5 2010-02-16 0.4
        6 2010-02-17 0.5

    I want to average the OD column over each distinct read_time (notice some are replicated, others are not) and I also would like to calculate the standard deviation, producing a table like this:

        > my_df
           read_time   OD stdev
        1 2010-02-15 0.15  0.05
        5 2010-02-16  0.3   0.1
        6 2010-02-17  0.5     0

    Which are the best functions to deal with concatenating such values in a data.frame?

    Read the article

  • About Load average in htop, how to decide if it's still doing ok?

    - by Joe Huang
    I use 'htop' to monitor my web server. It's been quite loaded recently, and the load average is showing something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and I found an article about it: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says if I have 2 CPUs, 2.0 means 100% CPU utilization. And my VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the loading now? But the performance seems totally fine, and this is a managed VPS; the hosting company has not notified me of any warning about it. During the day, the load average always shows these high numbers... here is another snapshot taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 minutes later

    So I am wondering how much room is left for this site to grow in the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.

    Read the article

  • need more complex index !

    - by silversky
    I'm sorry if it's not an appropriate question for this site; if necessary I'll close it. But maybe someone could give me an idea: I'm trying to find a more complex index to build a hierarchy. For example:

        5 votes out of 6 = 83%;  500 votes out of 600 = 83%;  10 votes out of 600 = 1.66%

    If I make a hierarchy with the percentages, the first two will be in the same place, but I think that 83% from 600 votes is more valuable than the first one. I could compare 5, 10, 500, but again it's not fair, because the third case (10 votes) would then rank ahead of the first case (5 votes), which is not fair because the third case has only 1.66%. Maybe someone could give me an idea how to give more weight to the second case but at the same time let the new entries have a fair chance.
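
    One standard answer is to rank by a confidence-interval lower bound rather than the raw percentage. A sketch of the Wilson score lower bound in Python, which ranks 500/600 above 5/6 and both far above 10/600:

        import math

        def wilson_lower_bound(pos, n, z=1.96):    # z=1.96 ~ 95% confidence
            if n == 0:
                return 0.0
            p = float(pos) / n
            centre = p + z * z / (2 * n)
            margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
            return (centre - margin) / (1 + z * z / n)

        for votes, total in [(5, 6), (500, 600), (10, 600)]:
            print(votes, total, round(wilson_lower_bound(votes, total), 3))
        # 5/6 -> ~0.44, 500/600 -> ~0.80, 10/600 -> ~0.009

    New entries with few votes start low but climb quickly as votes confirm their rating, which gives them the fair chance asked about.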

    Read the article
