Search Results

Search found 2210 results on 89 pages for 'sum'.


  • F# Objects – Integration with the other .Net Languages – Part 2

    - by MarkPearl
    So in part one of my posting I covered the real basics of object creation. Today I will hopefully dig a little deeper… My expert F# book brings up an interesting point – properties in F# are just syntactic sugar for method calls. This makes sense… for instance assume I had the following object with the property exposed called Firstname. type Person(Firstname : string, Lastname : string) = member v.Firstname = Firstname I could extend the Firstname property with the following code and everything would be hunky dory… type Person(Firstname : string, Lastname : string) = member v.Firstname = Console.WriteLine("Side Effect") Firstname   All that this would do is each time I use the property Firstname, I would see the side effect printed to the screen saying “Side Effect”. Member methods have a very similar look & feel to properties, in fact the only difference really is that you declare that parameters are being passed in. type Person(Firstname : string, Lastname : string) = member v.FullName(middleName) = Firstname + " " + middleName + " " + Lastname   In the code above, FullName requires the parameter middleName, and if viewed from another project in C# would show as a method and not a property. Precomputation Optimizations Okay, so something that is obvious once you think of it but that poses an interesting side effect of mutable value holders is pre-computation of results. All it is, is a slight difference in code but can result in quite a huge saving in performance. Basically pre-computation means you would not need to compute a value every time a method is called – but could perform the computation at the creation of the object (I hope I have got it right). In a way I battle to differentiate this from lazy evaluation but I will show an example to explain the principle. Let me try and show an example to illustrate the principle… assume the following F# module namespace myNamespace open System module myMod = let Add val1 val2 = Console.WriteLine("Compute") val1 + val2 type MathPrecompute(val1 : int, val2 : int) = let precomputedsum = Add val1 val2 member v.Sum = precomputedsum type MathNormalCompute(val1 : int, val2 : int) = member v.Sum = Add val1 val2 Now assume you have a C# console app that makes use of the objects with code similar to the following… using System; using myNamespace; namespace CSharpTest { class Program { static void Main(string[] args) { Console.WriteLine("Constructing Objects"); var myObj1 = new myMod.MathNormalCompute(10, 11); var myObj2 = new myMod.MathPrecompute(10, 11); Console.WriteLine(""); Console.WriteLine("Normal Compute Sum..."); Console.WriteLine(myObj1.Sum); Console.WriteLine(myObj1.Sum); Console.WriteLine(myObj1.Sum); Console.WriteLine(""); Console.WriteLine("Pre Compute Sum..."); Console.WriteLine(myObj2.Sum); Console.WriteLine(myObj2.Sum); Console.WriteLine(myObj2.Sum); Console.ReadKey(); } } } The output when running the console application would be as follows…. You will notice with the normal compute object that the system would call the Add function every time the method was called. With the Precompute object it only called the compute method when the object was created. Subtle, but something that could lead to major performance benefits. So… this post has gone off in a slight tangent but still related to F# objects.
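
    For illustration, the same precompute-at-construction versus recompute-per-access trade-off can be sketched outside F#; the following Python analogue (not the article's code, the class and member names are made up) uses the printed "Compute" to show when the work actually happens.

        def add(a, b):
            print("Compute")              # side effect so we can see when the addition runs
            return a + b

        class MathNormalCompute:
            def __init__(self, a, b):
                self.a, self.b = a, b

            @property
            def sum(self):                # recomputed on every access
                return add(self.a, self.b)

        class MathPrecompute:
            def __init__(self, a, b):
                self._sum = add(a, b)     # computed once, at construction time

            @property
            def sum(self):
                return self._sum

        normal = MathNormalCompute(10, 11)   # prints nothing yet
        pre = MathPrecompute(10, 11)         # prints "Compute" once, here
        print(normal.sum, normal.sum)        # prints "Compute" twice, then: 21 21
        print(pre.sum, pre.sum)              # prints only: 21 21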

    Read the article

  • Beginner to RUBY - require_relative problem

    - by WANNABE
    Hi, I'm learning Ruby (using version 1.8.6) on Windows 7. When I try to run the stock_stats.rb program below, I get the following error: C:\Users\Will\Desktop\ruby>ruby stock_stats.rb stock_stats.rb:1: undefined method `require_relative' for main:Object (NoMethodError) I have three v.small code files: stock_stats.rb require_relative 'csv_reader' reader = CsvReader.new ARGV.each do |csv_file_name| STDERR.puts "Processing #{csv_file_name}" reader.read_in_csv_data(csv_file_name) end puts "Total value = #{reader.total_value_in_stock}" csv_reader.rb require 'csv' require_relative 'book_in_stock' class CsvReader def initialize @books_in_stock = [] end def read_in_csv_data(csv_file_name) CSV.foreach(csv_file_name, headers: true) do |row| @books_in_stock << BookInStock.new(row["ISBN"], row["Amount"]) end end # later we'll see how to use inject to sum a collection def total_value_in_stock sum = 0.0 @books_in_stock.each {|book| sum += book.price} sum end def number_of_each_isbn # ... end end book_in_stock.rb require 'csv' require_relative 'book_in_stock' class CsvReader def initialize @books_in_stock = [] end def read_in_csv_data(csv_file_name) CSV.foreach(csv_file_name, headers: true) do |row| @books_in_stock << BookInStock.new(row["ISBN"], row["Amount"]) end end # later we'll see how to use inject to sum a collection def total_value_in_stock sum = 0.0 @books_in_stock.each {|book| sum += book.price} sum end def number_of_each_isbn # ... end end Thanks in advance for any help.

    Read the article

  • How to write a subquery using DB

    - by Rita
    SELECT SUM( amount_disbursed ) disbusedamount, (select SUM( loaninstallmentpaid_amount ) FROM ourbank_loan_repayment) paidamount, SUM( amount_disbursed )- (select SUM( loaninstallmentpaid_amount ) FROM ourbank_loan_repayment) differenceamount FROM ourbank_loan_disbursement Any Help would be appreciated

    Read the article

  • Why did I get this error?

    - by David
    here's the code: class Acount { int sum ; String owner ; //these seem to make sense //a constructor or two public Acount () { this.sum = 0 ; this.owner = "John Doe" ; } public Acount (String name) {this.sum = 0 ; this.owner = name ; } public Acount (String name, int sum) {this.sum = sum ; this.owner = name ; } //prints an acount in the format "owner" "sum" public static void printAcount (Acount Acount) {System.out.print (Acount.owner) ; System.out.print (" ") ; System.out.println (Acount.sum) ; } public static void main (String[]arg) { Acount Acount1 = new Acount ("david", 100) ; System.out.println ("heres the first acount as it was created:") ; printAcount (Acount1) ; System.out.println ("now i changed one of its instance varaibles with a static method") ; upOne (Acount1) ; printAcount (Acount1) ; } public static Acount upOne (Acount Acount) { Acount.sum = Acount.sum + 1 ; return Acount ; } } here's the error: Exception in thread "main" java.lang.NoClassDefFoundError: Acount/java What went wrong?

    Read the article

  • round off and displaying the values

    - by S.PRATHIBA
    Hi all, I have the following code: import java.io.*; import java.sql.*; import java.math.*; import java.lang.*; public class Testd1{ public static void main(String[] args) { System.out.println("Sum of the specific column!"); Connection con = null; int m=1; double sum,sum1,sum2; int e[]; e=new int[100]; int p; int decimalPlaces = 5; for( int i=0;i< e.length;i++) { e[i]=0; } double b2,c2,d2,u2,v2; int i,j,k,x,y; double mat[][]=new double[10][10]; try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection ("jdbc:mysql://localhost:3306/prathi","root","mysql"); try{ Statement st = con.createStatement(); ResultSet res = st.executeQuery("SELECT Service_ID,SUM(consumer_feedback) FROM consumer1 group by Service_ID"); while (res.next()){ int data=res.getInt(1); System.out.println(data); System.out.println("\n\n"); int c1 = res.getInt(2); e[m]=res.getInt(2); if(e[m]<0) e[m]=0; m++; System.out.print(c1); System.out.println("\t\t"); } sum=e[1]+e[2]+e[3]+e[4]+e[5]; System.out.println("\n \n The sum is" +sum); for( p=21; p<=25; p++) { if(e[p] != 0) e[p]=e[p]/(int)sum; //I have type casted sum to get output BigDecimal bd1 = new BigDecimal(e[p]); bd1 = bd1.setScale(decimalPlaces, BigDecimal.ROUND_HALF_UP); // setScale is immutable e[p] = bd1.intValue(); System.out.println("\n\n The normalized value is" +e[p]); mat[4][p-21]=e[p]; } } catch (SQLException s){ System.out.println("SQL statement is not executed!"); } } catch (Exception e1){ e1.printStackTrace(); } } } I have a table named consumer1. After calculating the sum I am getting the values as follows: mysql> select Service_ID, sum(consumer_feedback) from consumer1 group by Service_ID; Service_ID sum(consumer_feedback): 31 17, 32 0, 33 60, 34 38, 35 38. In my program I am getting the sum for each Service_ID correctly. But after normalization, i.e. while calculating 17/153=0.111, I am getting the normalized value as 0. I want the normalized values to be displayed correctly after rounding off. My output is as follows: C:\>javac Testd1.java C:\>java Testd1 Sum of the specific column! 31 17 32 0 33 60 34 38 35 38 The sum is153.0 The normalized value is0 The normalized value is0 The normalized value is0 The normalized value is0 The normalized value is0 After normalization I want to get 17/153=0.111, but I am getting the normalized value as 0. I want these values to be rounded off.
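
    The truncation happens before BigDecimal is ever involved: e[] is an int array, so e[p]/(int)sum is integer division and 17/153 is already 0. Keeping the normalized values in a double array (or dividing as doubles) preserves the fraction. A hedged Python sketch of just the intended normalize-and-round step, using the feedback sums quoted above:

        feedback_sums = [17, 0, 60, 38, 38]     # per-Service_ID sums taken from the question
        total = sum(feedback_sums)              # 153

        # Divide as floating point, then round to 5 decimal places.
        normalized = [round(v / total, 5) if v else 0.0 for v in feedback_sums]
        print(total)        # 153
        print(normalized)   # [0.11111, 0.0, 0.39216, 0.24837, 0.24837]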

    Read the article

  • Linear regression confidence intervals in SQL

    - by Matt Howells
    I'm using some fairly straight-forward SQL code to calculate the coefficients of regression (intercept and slope) of some (x,y) data points, using least-squares. This gives me a nice best-fit line through the data. However we would like to be able to see the 95% and 5% confidence intervals for the line of best-fit (the curves below). What these mean is that the true line has 95% probability of being below the upper curve and 95% probability of being above the lower curve. How can I calculate these curves? I have already read wikipedia etc. and done some googling but I haven't found understandable mathematical equations to be able to calculate this. Edit: here is the essence of what I have right now. --sample data create table #lr (x real not null, y real not null) insert into #lr values (0,1) insert into #lr values (4,9) insert into #lr values (2,5) insert into #lr values (3,7) declare @slope real declare @intercept real --calculate slope and intercept select @slope = ((count(*) * sum(x*y)) - (sum(x)*sum(y)))/ ((count(*) * sum(Power(x,2)))-Power(Sum(x),2)), @intercept = avg(y) - ((count(*) * sum(x*y)) - (sum(x)*sum(y)))/ ((count(*) * sum(Power(x,2)))-Power(Sum(x),2)) * avg(x) from #lr Thank you in advance.
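
    For reference, the standard least-squares result (textbook regression theory, not anything SQL-specific) gives the confidence band for the fitted mean at a point x as:

        \hat{y}(x) = \hat{\beta}_0 + \hat{\beta}_1 x, \qquad
        s^2 = \frac{1}{n-2} \sum_{i=1}^{n} \bigl( y_i - \hat{y}(x_i) \bigr)^2

        \hat{y}(x) \;\pm\; t_{n-2}\, s \, \sqrt{ \frac{1}{n} + \frac{(x - \bar{x})^2}{\sum_{i=1}^{n} (x_i - \bar{x})^2} }

    Here t_{n-2} is the relevant quantile of the t-distribution with n-2 degrees of freedom (roughly 1.645 for a one-sided 95% band once n is reasonably large). Everything under the square root comes from the same aggregates already used for the slope (count, sum(x), sum(x*x), plus the sum of squared residuals), so the two curves can be evaluated per x in SQL with the t value supplied as a constant.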

    Read the article

  • Summation loop program in Pascal

    - by user2526598
    I am having a bit of an issue with this problem. I am taking a Pascal programming class and this problem was in my logic book. I am required to have the user enter a series of (+) numbers and once he/she enters a (-) number, the program should find the sum of all the (+) numbers. I accomplished this, but now I am attempting part two of this problem, which requires me to utilize a nested loop to run the program x amount of times based on the user's input. The following code is what I have so far and honestly I am stumped: program summation; //Define main program's variables var num, sum, numRun : integer; //Design procedure that will promt user for number of runs procedure numRunLoop ( var numRun : integer ); begin writeln('How many times shall I run this program?'); readln(numRun); end; //Design procedure that will sum a series of numbers //based on user input procedure numPromptLoop( numRun : integer; var num : integer ); var count : integer; begin //Utilize for to establish run limit for count := 1 to numRun do begin //Use repeat to prompt user for numbers repeat writeln('Enter a number: '); readln(num); //Tells program when to sum if num >= 0 then sum := sum + num; until num < 0; end; end; //Design procedure that will display procedure addItion( sum : integer ); begin writeln('The sum is; ', sum); end; begin numRunLoop(numRun); numPromptloop(numRun, num); addItion(sum); readln(); end.
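
    The structural point of part two is an outer loop over the number of runs with the running sum reset at the top of each run (note that in the code as posted, sum is never reset, so later runs would keep accumulating earlier totals). A language-agnostic sketch of that shape in Python, with illustrative names; the Pascal version follows the same pattern:

        def summation_runs(num_runs):
            for run in range(1, num_runs + 1):
                total = 0                      # reset the running sum for every run
                while True:
                    value = int(input("Enter a number: "))
                    if value < 0:              # a negative number ends this run
                        break
                    total += value
                print("The sum is:", total)

        summation_runs(int(input("How many times shall I run this program? ")))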

    Read the article

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is, can we insist on using do and end and make it work with puts or p? This works: a = [1,2,3,4] b = a.inject do |sum, x| sum + x end puts b # prints out 10 so, is it correct to say, inject is a class method of the Array class, which takes a block of code, and then returns a number. If so, then it should be no different from calling a function and getting back a return value: b = foo(3) puts b or b = circle.getRadius() puts b In the above two cases, we can directly say puts foo(3) puts circle.getRadius() so, there is no way to make it work directly by using the following 2 ways: a = [1,2,3,4] puts a.inject do |sum, x| sum + x end but it gives ch01q2.rb:7:in `inject': no block given (LocalJumpError) from ch01q2.rb:4:in `each' from ch01q2.rb:4:in `inject' from ch01q2.rb:4 grouping the method call using ( ) doesn't work either: a = [1,2,3,4] puts (a.inject do |sum, x| sum + x end) and this gives: ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')' puts (a.inject do |sum, x| ^ ch01q3.rb:4: syntax error, unexpected '|', expecting '=' puts (a.inject do |sum, x| ^ ch01q3.rb:6: syntax error, unexpected kEND, expecting $end end) ^ finally, the following version works: a = [1,2,3,4] puts a.inject { |sum, x| sum + x } but why doesn't the grouping of the method invocation using ( ) work in the earlier example? What if a programmer insist that he uses do and end, can it be made to work?

    Read the article

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to http://stackoverflow.com/questions/2127836/ruby-print-inject-do-syntax The question is, can we insist on using DO and END and make it work with puts or p? This works: a = [1,2,3,4] b = a.inject do |sum, x| sum + x end puts b # prints out 10 so, is it correct to say, inject is a class method of the Array class, which takes a block of code, and then returns a number. If so, then it should be no different from calling a function and getting back a return value: b = foo(3) puts b or b = circle.getRadius() puts b In the above two cases, we can directly say puts foo(3) puts circle.getRadius() so, there is no way to make it work directly by using the following 2 ways: a = [1,2,3,4] puts a.inject do |sum, x| sum + x end but it gives ch01q2.rb:7:in `inject': no block given (LocalJumpError) from ch01q2.rb:4:in `each' from ch01q2.rb:4:in `inject' from ch01q2.rb:4 grouping the method call using ( ) doesn't work either: a = [1,2,3,4] puts (a.inject do |sum, x| sum + x end) and this gives: ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')' puts (a.inject do |sum, x| ^ ch01q3.rb:4: syntax error, unexpected '|', expecting '=' puts (a.inject do |sum, x| ^ ch01q3.rb:6: syntax error, unexpected kEND, expecting $end end) ^ finally, the following version works: a = [1,2,3,4] puts a.inject { |sum, x| sum + x } but why doesn't the grouping of the method invocation using ( ) work? What if a programmer insists that he uses do and end, can it be made to work directly with p or puts, without an extra temporary variable?

    Read the article

  • MySql selecting default value if there are no results?

    - by Kenan
    I have 2 tables: members and comments. I select all members and then join comments. But from comments I'm selecting a SUM of points, and if a user never commented, I can't get that user in the listing. So how do I select a default value for the SUM, or is there some other solution? SELECT c.comment_id AS item_id, m.member_id AS member_id, m.avatar, SUM(c.vote_value) AS vote_value, SUM(c.best) AS best, SUM(c.vote_value) + SUM(c.najbolji)*10 AS total FROM members m LEFT JOIN comments c ON m.member_id = c.author_id GROUP BY c.author_id ORDER BY m.member_id DESC LIMIT {$sql_start}, {$sql_pokazi}

    Read the article

  • In Ruby, why is a method invocation not able to be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is, can we insist on using do and end and make it work with puts or p? This works: a = [1,2,3,4] b = a.inject do |sum, x| sum + x end puts b # prints out 10 so, is it correct to say, inject is an instance method of the Array object, and this instance method takes a block of code, and then returns a number. If so, then it should be no different from calling a function or method and getting back a return value: b = foo(3) puts b or b = circle.getRadius() puts b In the above two cases, we can directly say puts foo(3) puts circle.getRadius() so, there is no way to make it work directly by using the following 2 ways: a = [1,2,3,4] puts a.inject do |sum, x| sum + x end but it gives ch01q2.rb:7:in `inject': no block given (LocalJumpError) from ch01q2.rb:4:in `each' from ch01q2.rb:4:in `inject' from ch01q2.rb:4 grouping the method call using ( ) doesn't work either: a = [1,2,3,4] puts (a.inject do |sum, x| sum + x end) and this gives: ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')' puts (a.inject do |sum, x| ^ ch01q3.rb:4: syntax error, unexpected '|', expecting '=' puts (a.inject do |sum, x| ^ ch01q3.rb:6: syntax error, unexpected kEND, expecting $end end) ^ finally, the following version works: a = [1,2,3,4] puts a.inject { |sum, x| sum + x } but why doesn't the grouping of the method invocation using ( ) work in the earlier example? What if a programmer insist that he uses do and end, can it be made to work?

    Read the article

  • How do I write recursive anonymous functions?

    - by James T Kirk
    In my continued effort to learn scala, I'm working through 'Scala by example' by Odersky and on the chapter on first class functions, the section on anonymous function avoids a situation of recursive anonymous function. I have a solution that seems to work. I'm curious if there is a better answer out there. From the pdf: Code to showcase higher order functions def sum(f: Int => Int, a: Int, b: Int): Int = if (a > b) 0 else f(a) + sum(f, a + 1, b) def id(x: Int): Int = x def square(x: Int): Int = x * x def powerOfTwo(x: Int): Int = if (x == 0) 1 else 2 * powerOfTwo(x-1) def sumInts(a: Int, b: Int): Int = sum(id, a, b) def sumSquares(a: Int, b: Int): Int = sum(square, a, b) def sumPowersOfTwo(a: Int, b: Int): Int = sum(powerOfTwo, a, b) scala> sumPowersOfTwo(2,3) res0: Int = 12 from the pdf: Code to showcase anonymous functions def sum(f: Int => Int, a: Int, b: Int): Int = if (a > b) 0 else f(a) + sum(f, a + 1, b) def sumInts(a: Int, b: Int): Int = sum((x: Int) => x, a, b) def sumSquares(a: Int, b: Int): Int = sum((x: Int) => x * x, a, b) // no sumPowersOfTwo My code: def sumPowersOfTwo(a: Int, b: Int): Int = sum((x: Int) => { def f(y:Int):Int = if (y==0) 1 else 2 * f(y-1); f(x) }, a, b) scala> sumPowersOfTwo(2,3) res0: Int = 12

    Read the article

  • VB6 Require some help with looping

    - by k80sg
    Hi, I am trying to convert a source from C++ to vb6: C++: static double mdArray[3][3]; static double mdArray2[3][3]; for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) { double sum = 0; for(k = 0; k < 3; k++) sum = sum + mdArray[k][i] * mdArray[k][k]; mdArray2[i][j] = sum } VB6: dim mdArray(0 to 2, 0 to 2) as integer dim mdArray2(0 to 2, 0 to 2) as integer for i = 0 to 2 for j = 0 to 2 dim a as double sum = 0 for k = 0 to 2 sum = sum + mdArray(k,i) * mdArray(k,j) mdArray2(i,j) = sum Next Next Next Will the vb6 version yield the same result as the C++ version? Thanks.

    Read the article

  • division problems

    - by David
    This line of code: System.out.println ("aray[j], "+aray[j]+", divided by sum, "+sum+", equals: aray[j]/sum: "+ aray[j]/sum) ; is yielding this line of text: aray[j], 21, divided by sum, 100, equals: aray[j]/sum: 0 Why is it doing this? (Everything is right except that the answer should be .21)
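
    If aray[j] and sum are both ints (which the printed 0 suggests), this is integer division: in Java, dividing one int by another truncates toward zero, so 21/100 is 0; casting either operand to double (or declaring sum as a double) gives 0.21. A rough Python analogy, where // truncates the way Java's int/int division does and / does true division:

        aray_j, total = 21, 100

        print(aray_j // total)    # 0    -> truncating division, what Java int/int gives
        print(aray_j / total)     # 0.21 -> true division, what a (double) cast would give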

    Read the article

  • Help with mysql sum and group query and managing jquery graph results.

    - by Scarface
    Hey guys, I have a system I am trying to design that will retrieve information from a database, so that it can be plotted in a jquery graph. I need to retrieve the information and somehow put it in the necessary format (for example two coordinates var d = [[1269417600000, 10],[1269504000000, 15]];). My table that I am selecting from is a table that stores user votes with fields: points_id (1=vote up ,2=vote down), user_id, timestamp, and topic_id. What I need to do is select all the votes and somehow group them into respective days and then sum the difference between 1 votes and 2 votes for each day. I then need to somehow display the data in the appropriate plotting format shown earlier. For example April 1, 4 votes. The data needs to be separated by commas, except the last plot entry, so I am not sure how to approach that. I showed an example below of the kind of thing I need but it is not correct, echo "var d=["; $query=mysql_query("SELECT *, SUM(IF(points_id = \"1\", 1,0))-SUM(IF([points_id = \"2\", 1,0)) AS 'total' FROM points LEFT JOIN topic on topic.topic_id=points.topic_id WHERE topic.creator='$user' GROUP by timestamp HAVING certain time interval"); while ($row=mysql_fetch_assoc($query)){ $timestamp=$row['timestamp']; $votes=$row['total']; echo "[$timestamp,$vote],"; } echo "];";
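
    For the trailing-comma worry, one option is to collect the pairs first and join them, instead of printing a comma after every row. A hedged Python sketch of just that formatting step (it assumes each row has already been reduced to a (timestamp-in-milliseconds, net-votes) pair, which is not yet what the query above returns; the data is illustrative):

        rows = [(1269417600000, 10), (1269504000000, 15), (1269590400000, -3)]   # illustrative data

        pairs = ", ".join("[%d, %d]" % (ts, votes) for ts, votes in rows)
        print("var d = [%s];" % pairs)
        # -> var d = [[1269417600000, 10], [1269504000000, 15], [1269590400000, -3]];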

    Read the article

  • Help with mysql sum and group query and managing results for jquery graph.

    - by Scarface
    I have a system I am trying to design that will retrieve information from a database, so that it can be plotted in a jquery graph. I need to retrieve the information and somehow put it in the necessary coordinates format (for example two coordinates var d = [[1269417600000, 10],[1269504000000, 15]];). My table that I am selecting from is a table that stores user votes with fields: points_id (1=vote up, 2=vote down), user_id, timestamp, topic_id What I need to do is select all the votes and somehow group them into respective days and then sum the difference between 1 votes and 2 votes for each day. I then need to somehow display the data in the appropriate plotting format shown earlier. For example April 1, 4 votes. The data needs to be separated by commas, except the last plot entry, so I am not sure how to approach that. I showed an example below of the kind of thing I need but it is not correct, echo "var d=["; $query=mysql_query( "SELECT *, SUM(IF(points_id = \"1\", 1,0))-SUM(IF([points_id = \"2\", 1,0)) AS 'total' FROM points LEFT JOIN topic ON topic.topic_id=points.topic_id WHERE topic.creator='$user' GROUP by timestamp HAVING certain time interval" ); while ($row=mysql_fetch_assoc($query)){ $timestamp=$row['timestamp']; $votes=$row['total']; echo "[$timestamp,$vote],"; } echo "];";

    Read the article

  • How to find the sum of node values at a given depth in a binary tree?

    - by masato-san
    I've been scratching my head for several hours over this problem: Binary Tree (0) depth 0 / \ 10 20 depth 1 / \ / \ 30 40 50 60 depth 2 I am trying to write a function that takes a depth as argument and returns the sum of the values of the nodes at the given depth. For instance, if I pass 2, it should return 180 (i.e. 30+40+50+60). I decided to use breadth-first search and, when I find a node at the desired depth, add its value to the sum, but I just can't figure out how to determine which depth a node is at. With this approach I feel like I'm going in totally the wrong direction. function level_order($root, $targetDepth) { $q = new Queue(); $q->enqueue($root); while(!$q->isEmpty) { //how to determine the depth of the node??? $node = $q->dequeue(); if($currentDepth == $targetDepth) { $sum = $node->value; } if($node->left != null) { $q->enqueue($node->left); } if($node->right != null) { $q->enqueue($node->right); } //need to reset this somehow $currentDepth ++; } }
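
    A common way to keep track of depth during a breadth-first traversal is either to enqueue (node, depth) pairs or to process the queue one whole level at a time. A Python sketch of the level-at-a-time variant (the Node class and the sample tree are illustrative):

        from collections import deque

        class Node:
            def __init__(self, value, left=None, right=None):
                self.value, self.left, self.right = value, left, right

        def sum_at_depth(root, target_depth):
            if root is None:
                return 0
            queue, depth = deque([root]), 0
            while queue:
                if depth == target_depth:                 # every node in the queue is at this depth
                    return sum(node.value for node in queue)
                for _ in range(len(queue)):               # consume exactly one level
                    node = queue.popleft()
                    if node.left:
                        queue.append(node.left)
                    if node.right:
                        queue.append(node.right)
                depth += 1
            return 0                                      # tree is shallower than target_depth

        # The example tree from the question: depth 2 sums to 30+40+50+60 = 180.
        root = Node(0, Node(10, Node(30), Node(40)), Node(20, Node(50), Node(60)))
        print(sum_at_depth(root, 2))  # 180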

    Read the article

  • How can I order my entries by sum from a separate table?

    - by bgadoci
    I am wondering how I can order posts in my PostController#index to display by a column total in a separate table. Here is how I have it set up. class Post < ActiveRecord::Base :has_many :votes end and Class Vote < ActiveRecord::Base :belongs_to :post end A user can either vote up or down a particular post. I know there are likely better ways to do what I am currently doing, but I'm looking for a fix given my current situation. When a user votes up a post, a value of 1 is passed to the Vote table via a hidden field. When a user votes down a post, a value of -1 is passed to the same column (named vote). I am wondering how I can display my posts in order of the sum of the vote column (in the vote table) for a particular post. Another way to say that is, if a particular post has a net vote sum of 5, I want it to appear above a post with a net vote sum of 4. I am assuming that I need to affect the PostController#index action in some fashion, but I'm not sure how to do that.

    Read the article

  • how to sum the amount of cells with the same number in a column - Microsoft Excel 2010

    - by jerlebrink
    My English isn't that good; I hope you understand what I want to accomplish. I have a column (A) with different zip codes (a total of 3583 rows). I need a formula/function to go through each cell and then come up with a sum of how many instances (column B) there are of the same zip code (column C). There are probably more than a hundred different zip codes, so I can't do it manually. Thanks in advance.

    Read the article

  • SQL SERVER – Finding Memory Pressure – External and Internal

    - by pinaldave
    The following query provides details of external and internal memory pressure. It returns how much of the existing memory is assigned to each memory clerk type. SELECT TYPE, SUM(single_pages_kb) InternalPressure, SUM(multi_pages_kb) ExternalPressure FROM sys.dm_os_memory_clerks GROUP BY TYPE ORDER BY SUM(single_pages_kb) DESC, SUM(multi_pages_kb) DESC GO What is your method to find memory pressure? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning): Obvious Count Sum Mean Max* Min* Median** Less Obvious Variance Standard Deviation Skewness Kurtosis Mode*** Weighted Average Weighted Moving Average**** OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now. *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections. **Recalculated in O(1) for a sorted, indexed collection only. ***Requires a fairly complex data structure to recalculate in O(1). ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure. Original Question Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could... Option 1: Iterate through the collection, adding all the values: double sum = 0.0; for (int i = 0; i < values.Count; i++) sum += values[i]; Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum: void Add(double value) { values.Add(value); sum += value; } void Remove(double value) { values.Remove(value); sum -= value; } EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation: Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said, "11" you thought "11", when I said "16", you thought, "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and them summing up 11 + 13 + 12. So my question is, what other calculations, other than the obvious ones like sum and average, are like this? SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "how many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again. 
So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
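
    As a concrete illustration of the "maintainable" category, here is a small Python sketch that keeps count, sum, mean and variance current in O(1) per add or remove by carrying the sum of squares alongside the sum (min, max and median are left out deliberately, since removing the current extreme needs a sorted structure or a rescan, as noted above). Welford's online algorithm is the numerically safer way to maintain variance, but the bookkeeping idea is the same.

        class RollingStats:
            def __init__(self):
                self.count = 0
                self.total = 0.0
                self.total_sq = 0.0     # sum of squares, lets us recover variance in O(1)

            def add(self, x):
                self.count += 1
                self.total += x
                self.total_sq += x * x

            def remove(self, x):        # caller guarantees x was previously added
                self.count -= 1
                self.total -= x
                self.total_sq -= x * x

            @property
            def mean(self):
                return self.total / self.count

            @property
            def variance(self):         # population variance
                return self.total_sq / self.count - self.mean ** 2

        stats = RollingStats()
        for x in (11, 16, 13, 12):
            stats.add(x)
        print(stats.total)   # 52.0
        stats.remove(16)
        print(stats.total)   # 36.0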

    Read the article
