Search Results

Search found 4689 results on 188 pages for 'no average geek'.

Page 118 of 188

  • Are there any worse sorting algorithms than Bogosort (a.k.a Monkey Sort)?

    - by womp
    My co-workers took me back in time to my university days with a discussion of sorting algorithms this morning. We reminisced about our favorites like StupidSort, and one of us was sure we had seen a sorting algorithm that was O(n!). That got me started looking around for the "worst" sorting algorithms I could find. We postulated that a completely random sort would be pretty bad (i.e., randomize the elements -- is it in order? No? Randomize again), and I looked around and found out that it's apparently called BogoSort, or Monkey Sort, or sometimes just Random Sort. Monkey Sort appears to have a worst-case performance of O(∞), a best-case performance of O(n), and an average performance of O(n * n!). Are there any named algorithms that have a worse average performance than O(n * n!)? Or ones that are just sillier than Monkey Sort in general?
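
    For reference, a minimal sketch of the algorithm being described (randomize, check, repeat), in Python:

        import random

        def bogosort(items):
            """Shuffle until the list happens to be sorted.

            Average case is O(n * n!) comparisons; the worst case is unbounded,
            which is what makes it a contender for the "worst" sort."""
            while any(items[i] > items[i + 1] for i in range(len(items) - 1)):
                random.shuffle(items)
            return items

        print(bogosort([3, 1, 2]))  # keep the input tiny!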

    Read the article

  • Database Network Latency

    - by Karl
    Hi all, I am currently working on an n-tier system and battling some database performance issues. One area we have been investigating is the latency between the database server and the application server. In our test environment the average ping time between the two boxes is in the region of 0.2 ms, but on the client's site it is more in the region of 8.2 ms. Is that something we should be worried about? For your average system, what do you consider a reasonable latency, and how would you go about testing/measuring it? Karl
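
    One rough way to measure the application-to-database round trip (rather than a bare ICMP ping) is to time a trivial query from the application server; a sketch, assuming a generic PEP 249 (DB-API) connection rather than any particular driver:

        import time

        def average_roundtrip_ms(conn, samples=100):
            """Time a trivial query repeatedly and return the average round trip
            in milliseconds. The query itself does almost no work, so the number
            is dominated by network latency plus per-call overhead."""
            cur = conn.cursor()
            timings = []
            for _ in range(samples):
                start = time.perf_counter()
                cur.execute("SELECT 1")
                cur.fetchone()
                timings.append((time.perf_counter() - start) * 1000.0)
            return sum(timings) / len(timings)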

    Read the article

  • Is there any performance difference between Debug and Release?

    - by Lazlo
    I'm using MySql Connector .NET to load an account and transfer it to the client. This operation is rather intensive, considering the child elements of the account to load. In Debug mode, it takes, at most, 1 second to load the account. The average would be 500ms. In Release mode, it takes from 1 to 4 seconds to load the account. The average would be 1500ms. Since there is no #if DEBUG directive or the like in my code, I'm wondering where the difference is coming from. Is there a project build option I could change? Or does it have to do with MySql Connector .NET that would have different behaviors depending on the build mode?

    Read the article

  • SQL select statement with increment

    - by Matt
    Currently I'm using a for loop in PHP to get all the months for this SQL statement, but I would like to know if I can do it all in SQL. Basically I have to get the average listing price and the average selling price for each month going back one year, where the selling date falls in that month. That's simple with PHP, but it creates 12 database hits. I'm trying the SQL statement below, but it returns the months totally out of order:

        SELECT avg(ListingPrice), avg(SellingPrice), count(ListingDate), DATE(SellingDate) as date, MONTH(SellingDate) as month, YEAR(SellingDate) as year FROM `rets_property_resi` WHERE Area = '5030' AND Status = 'S' AND SellingDate

    Output:

        867507.142857 877632.492063 63 1996-12-24 12 1996
        971355.833333 981533.333333 60 1997-11-18 11 1997
        949334.328358 985453.731343 67 1997-10-23 10 1997
        794150.000000 806642.857143 70 1996-09-20 9 1996
        968371.621622 988074.702703 74 1997-08-21 8 1997
        1033413.366337 1053018.534653 101 1997-07-30 7 1997
        936115.054795 962787.671233 73 1996-06-07 6 1996
        875378.735632 906921.839080 87 1996-05-16 5 1996
        926635.616438 958561.643836 73 2010-04-13 4 2010
        1030224.472222 1046332.291667 72 2010-03-31 3 2010
        921711.458333 924177.083333 48 1997-02-28 2 1997
        799484.615385 791551.282051 39 1997-01-15 1 1997

    As you can see, it pulls from random dates; I need it to pull from 2010-03, 2010-02, 2010-01, etc. Any help is appreciated!
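
    If a single statement is acceptable, grouping by year and month and ordering by those same keys avoids the 12 round trips; a hedged sketch (MySQL syntax assumed, table and column names taken from the question, `conn` a hypothetical DB-API connection):

        MONTHLY_AVERAGES = """
            SELECT YEAR(SellingDate)  AS yr,
                   MONTH(SellingDate) AS mo,
                   AVG(ListingPrice)  AS avg_listing,
                   AVG(SellingPrice)  AS avg_selling,
                   COUNT(*)           AS num_sales
            FROM rets_property_resi
            WHERE Area = '5030'
              AND Status = 'S'
              AND SellingDate >= DATE_SUB(CURDATE(), INTERVAL 1 YEAR)
            GROUP BY YEAR(SellingDate), MONTH(SellingDate)
            ORDER BY yr DESC, mo DESC
        """

        def monthly_averages(conn):
            cur = conn.cursor()
            cur.execute(MONTHLY_AVERAGES)
            return cur.fetchall()  # one row per month, newest first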

    Read the article

  • Math: How to sum each row of a matrix

    - by macek
    I have a 1x8 matrix of students where each student is a 4x1 matrix of scores. Something like:

        SCORES
        S [62, 91, 74, 14]
        T [59, 7 , 59, 21]
        U [44, 9 , 69, 6 ]
        D [4 , 32, 28, 53]
        E [78, 99, 53, 83]
        N [48, 86, 89, 60]
        T [56, 71, 15, 80]
        S [47, 67, 79, 40]

    Main question: Using sigma notation, or some other mathematical function, how can I get a 1x8 matrix where each student's scores are summed?

        # expected result
        TOTAL OF SCORES
        S [241]
        T [146]
        U [128]
        D [117]
        E [313]
        N [283]
        T [222]
        S [233]

    Sub-question: To get the average, I will multiply the matrix by 1/4. Would there be a quicker way to get the final result?

        AVERAGE SCORE
        S [60.25]
        T [36.50]
        U [32.00]
        D [29.25]
        E [78.25]
        N [70.75]
        T [55.50]
        S [58.25]

    Note: I'm not looking for programming-related algorithms here. I want to know if it is possible to represent this with pure mathematical functions alone.
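
    One way to write this with matrix algebra alone (a sketch, treating the scores as an 8x4 matrix S and writing 1_4 for the 4x1 column vector of ones):

        T_i = \sum_{j=1}^{4} S_{ij}
        \qquad\Longleftrightarrow\qquad
        T = S\,\mathbf{1}_4,
        \qquad
        A = \tfrac{1}{4}\,S\,\mathbf{1}_4,
        \qquad
        \mathbf{1}_4 = \begin{bmatrix}1\\1\\1\\1\end{bmatrix}

    So the totals are a single matrix-vector product, and folding the 1/4 into the vector (a 4x1 vector whose entries are all 0.25) gives the averages in one product as well.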

    Read the article

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update

    Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning):

    Obvious: Count, Sum, Mean, Max*, Min*, Median**

    Less obvious: Variance, Standard Deviation, Skewness, Kurtosis, Mode***, Weighted Average, Weighted Moving Average****

    OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now.

    *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections.

    **Recalculated in O(1) for a sorted, indexed collection only.

    ***Requires a fairly complex data structure to recalculate in O(1).

    ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure.

    Original question

    Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could...

    Option 1: Iterate through the collection, adding all the values:

        double sum = 0.0;
        for (int i = 0; i < values.Count; i++) sum += values[i];

    Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum:

        void Add(double value) { values.Add(value); sum += value; }
        void Remove(double value) { values.Remove(value); sum -= value; }

    EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation. Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said, "11" you thought "11", when I said "16", you thought, "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and then summing up 11 + 13 + 12.

    So my question is, what other calculations, other than the obvious ones like sum and average, are like this?

    SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "How many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again.

    So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
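
    To illustrate a couple of the "less obvious" entries above, a minimal sketch of maintaining mean and variance with O(1) additions and removals (Welford's algorithm and its inverse), in Python rather than the C# of the snippets above:

        class RunningStats:
            """Count, mean and variance maintained in O(1) per addition/removal."""

            def __init__(self):
                self.n = 0
                self.mean = 0.0
                self.m2 = 0.0  # sum of squared deviations from the current mean

            def add(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            def remove(self, x):
                # Inverse of add(); assumes x really is in the collection.
                if self.n == 1:
                    self.__init__()
                    return
                old_delta = x - self.mean
                self.mean = (self.n * self.mean - x) / (self.n - 1)
                self.m2 -= (x - self.mean) * old_delta
                self.n -= 1

            def variance(self):
                return self.m2 / self.n if self.n else 0.0

        rs = RunningStats()
        for x in (11, 16, 13, 12):
            rs.add(x)
        rs.remove(16)                          # "OK, now forget the number 16"
        print(rs.n, rs.mean, rs.variance())    # 3 12.0 0.666...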

    Read the article

  • ISBNdb Retrieving and Managing Info

    - by Pierre Sylvestre
    Given a set of databases, how can one get information on a book, showing a price first (which is the average of a hidden list of prices coming from different web sites), followed by an optional second step that shows the list of the different web sites with their pages? To take an example, a query for ISBN 9785554443331 returns "Chemistry: The Central Science, 11th edition":

        new: $50
        used, good condition: $35
        used, poor condition: $20

    If the result does not match our product list, a "click here to visit our partner" option appears, which returns:

        Atextbook: $10
        Btextbook: $10
        Ctextbook: $9
        Dtextbook: $8.50

    I understand that the first search would be done simultaneously on our database, to determine whether or not we have the book, and on the web, to get the average price over a given list of web sites. Thank you in advance for the help.

    Read the article

  • Need help with SQL query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005.

    Version A:

        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = 1234
        ORDER BY name ASC

    Version B:

        DECLARE @Id AS INT;
        SET @Id = 1234;
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @Id
        ORDER BY name ASC

    Both queries return 1000 rows; version A takes on average 15s; version B on average takes 4s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server Profiler:

        EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234;

    ...and this tends to perform as badly as version A.

    Read the article

  • min() and max() give error: TypeError: 'float' object is not iterable

    - by PythonUser3.3
    My code so far:

        markList = []
        Lmark = 0
        Hmark = 0
        while True:
            mark = float(input("Enter your marks here(Click -1 to exit)"))
            if mark == -1:
                break
            markList.append(mark)
        markList.sort()
        mid = len(markList) // 2
        if len(markList) % 2 == 0:
            median = (markList[mid] + markList[mid - 1]) / 2
            print("Median:", median)
        else:
            print("Median:", markList[mid])
        Lmark == (min(mark))
        print("The lowest mark is", Lmark)
        Hmark == (max(mark))
        print("The highest mark is", Hmark)

    My program is a basic grade calculator using lists. It asks the user to input their grades into a list, then calculates the average and finds the lowest and highest mark. I have found the average, but I can't seem to figure out how to find the lowest and highest grade. Can you please show me or tell me what to do?
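
    A hedged sketch of the likely fix: min() and max() expect an iterable, so they should be applied to the whole markList rather than to the last single float read into mark, and == compares rather than assigns:

        # After the input loop, once markList has been filled:
        if markList:                      # guard against an empty list
            Lmark = min(markList)         # assignment, not ==
            Hmark = max(markList)
            average = sum(markList) / len(markList)
            print("The lowest mark is", Lmark)
            print("The highest mark is", Hmark)
            print("The average mark is", average)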

    Read the article

  • AVG time spent on multiple rows SQL-server?

    - by seo20
    I have a table tblSequence with 3 columns in MS SQL: ID, IP, [Timestamp]. Content could look like this:

        ID    IP             [Timestamp]
        ----------------------------------------------
        4347  62.107.95.103  2010-05-24 09:27:50.470
        4346  62.107.95.103  2010-05-24 09:27:45.547
        4345  62.107.95.103  2010-05-24 09:27:36.940
        4344  62.107.95.103  2010-05-24 09:27:29.347
        4343  62.107.95.103  2010-05-24 09:27:12.080

    ID is unique; there can be n number of IPs. I would like to calculate the average time spent per IP, in a single row. I know you can do something like this:

        SELECT CAST(AVG(CAST(MyTable.MyDateTimeFinish - MyTable.MyDateTimeStart AS float)) AS datetime)

    But how on earth do I find the first and last entry for each unique IP so I can have a start and finish time? I'm stuck.
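
    In T-SQL the usual approach is to GROUP BY IP and take MIN([Timestamp]) and MAX([Timestamp]) as the start and finish of each visit. The same per-IP first/last idea, sketched in Python on the sample rows above:

        from collections import defaultdict
        from datetime import datetime

        # Sample rows from the question: (ID, IP, Timestamp)
        rows = [
            (4347, "62.107.95.103", "2010-05-24 09:27:50.470"),
            (4346, "62.107.95.103", "2010-05-24 09:27:45.547"),
            (4345, "62.107.95.103", "2010-05-24 09:27:36.940"),
            (4344, "62.107.95.103", "2010-05-24 09:27:29.347"),
            (4343, "62.107.95.103", "2010-05-24 09:27:12.080"),
        ]

        stamps_by_ip = defaultdict(list)
        for _id, ip, ts in rows:
            stamps_by_ip[ip].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f"))

        for ip, stamps in stamps_by_ip.items():
            span = (max(stamps) - min(stamps)).total_seconds()
            print(ip, span, "seconds between first and last request")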

    Read the article

  • Performance optimization for SQL Server: decrease stored procedure execution time or unload the server?

    - by tim
    We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of the time is spent in the database executing stored procedures. During the request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel, the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, bearing in mind that the goal is to provide a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to unload the server? It would be interesting to hear from people who deal with booking sites.

    Read the article

  • Using XCode and instruments to improve iPhone app performance

    - by MrDatabase
    I've been experimenting with Instruments off and on for a while, and I still can't do the following (with any sensible results): determine or estimate the average runtime of a function that's called many times. For example, if I'm driving my gameLoop at 60 Hz with a CADisplayLink, I'd like to see how long the loop takes to run on average... 10 ms? 30 ms? etc. I've come close with the "CPU activity" instrument, but the results are inconsistent or don't make sense. The Time Profiler seems promising, but all I can get is "% of runtime"... and I'd like an actual runtime.
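
    When the profiler view stays inconclusive, the number can also be measured by hand around the loop body; a minimal, platform-agnostic sketch of the idea (shown in Python, with `step` standing in for the per-frame work):

        import time

        def average_step_ms(step, frames=600):
            """Drive a per-frame callback and return its average runtime in ms.

            The point is simply to measure the number directly when the
            profiler output is too coarse to read it off."""
            timings = []
            for _ in range(frames):
                start = time.perf_counter()
                step()
                timings.append((time.perf_counter() - start) * 1000.0)
            return sum(timings) / len(timings)

        print(average_step_ms(lambda: sum(range(10000))))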

    Read the article

  • C++ Linked List - Reading data from a file with a sentinel

    - by Nick
    So I've done quite a bit of research on this and can't get my output to work correctly. I need to read in data from a file and have it stored in a linked list. The while loop used should stop once it hits the $$$$$ sentinel. Then I am to display the data (by searching by ID number [user input]). I am not that far yet; I just want to properly read in the data and display it for right now. My problem is that when it displays the data, it isn't stopping at the $$$$$ (even if I use inFile.peek() != EOF and omit the $$$$$); I am still getting an extra garbage record. I know it has something to do with my while loop and how I am creating a new Node, but I can't get it to work any other way. Any help would be appreciated.

    students.txt:

        Nick J Cooley 324123 60 70 80 90
        Jay M Hill 412254 70 80 90 100
        $$$$$

    assign6.h file:

        #pragma once
        #include <iostream>
        #include <string>
        using namespace std;

        class assign6
        {
        public:
            assign6();  // constructor
            void displayStudents();
        private:
            struct Node
            {
                string firstName;
                string midIni;
                string lastName;
                int idNum;
                int sco1;  // Test score 1
                int sco2;  // Test score 2
                int sco3;  // Test score 3
                int sco4;  // Test score 4
                Node *next;
            };
            Node *head;
            Node *headPtr;
        };

    assign6Imp.cpp (implementation file):

        #include "assign6.h"
        #include <fstream>
        #include <iostream>
        #include <string>
        using namespace std;

        assign6::assign6()  // constructor
        {
            ifstream inFile;
            inFile.open("students.txt");
            head = NULL;
            head = new Node;
            headPtr = head;
            while (inFile.peek() != EOF)  // reading in from file and storing in linked list
            {
                inFile >> head->firstName >> head->midIni >> head->lastName;
                inFile >> head->idNum;
                inFile >> head->sco1;
                inFile >> head->sco2;
                inFile >> head->sco3;
                inFile >> head->sco4;
                if (inFile != "$$$$$")
                {
                    head->next = NULL;
                    head->next = new Node;
                    head = head->next;
                }
            }
            head->next = NULL;
            inFile.close();
        }

        void assign6::displayStudents()
        {
            int average = 0;
            for (Node *cur = headPtr; cur != NULL; cur = cur->next)
            {
                cout << cur->firstName << " " << cur->midIni << " " << cur->lastName << endl;
                cout << cur->idNum << endl;
                average = (cur->sco1 + cur->sco2 + cur->sco3 + cur->sco4) / 4;
                cout << cur->sco1 << " " << cur->sco2 << " " << cur->sco3 << " " << cur->sco4 << " " << "average: " << average << endl;
            }
        }

    Read the article

  • Ruby on Rails, PHP or C++ for web social network

    - by faya
    Hello, I have chosen my diploma project at university. It's a mini social network. But now I am really stuck on which technology I should stick with. I am average at C++ ISAPI web service development, below average at PHP (I've had a few projects with it), and new to Ruby and its framework Rails. I have a deadline of 1.5 months to develop it (about 5 hours every day after my full-time job). Also, I heard that it's very easy to learn and develop with Ruby on Rails. With C++ I know that I would have to do lots of coding and work by myself, and PHP looks almost the same to me. So I am asking you skilled developers for advice: what would you do in my position? Learn RoR, stick with C++ or PHP, or maybe use something else?

    Read the article

  • Python: Count lines and differentiate between them

    - by Mister X
    I'm using an application that gives timed output based on how many times something is done in a minute, and I want to manually take that output (copy/paste) into my program and count how many times it is done each minute. An example output is this:

        13:48 An event happened.
        13:48 Another event happened.
        13:49 A new event happened.
        13:49 A random event happened.
        13:49 An event happened.

    So, the program would need to understand that 2 things happened at 13:48, and 3 at 13:49. I'm not sure how the information would be stored, but I need to average them afterwards, to determine how often it happens on average. Sorry for being so complicated!
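
    A minimal sketch of one way to store and summarise this (the HH:MM prefix as a dictionary key, with a Counter doing the tallying), in Python:

        from collections import Counter

        lines = [
            "13:48 An event happened.",
            "13:48 Another event happened.",
            "13:49 A new event happened.",
            "13:49 A random event happened.",
            "13:49 An event happened.",
        ]

        per_minute = Counter(line.split()[0] for line in lines)
        print(per_minute)  # Counter({'13:49': 3, '13:48': 2})

        average = sum(per_minute.values()) / len(per_minute)
        print("average events per minute:", average)  # 2.5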

    Read the article

  • How to implement Voting for Grails Domain Classes?

    - by userWebMobile
    I have a Book class and need to implement yes/no voting functionality. My domain classes look like this:

        class Book {
            String title
            static hasMany = [votes: Vote]
        }

        class User {
            String name
            static hasMany = [votes: Vote]
        }

        class Vote {
            boolean yesVote
            static belongsTo = [user: User, book: Book]
        }

    What is the best way to implement voting for the Book class? I need the following information: What is the average yesVote for a book over all votes (either yes or no)? How do I check whether a specific user has already voted? What is the best way to implement the computation of the average yesVote so that performance does not drop?

    Read the article

  • Methodologies or algorithms for filling in missing data

    - by tbone
    I am dealing with datasets with missing data and need to be able to fill forward, fill backward, and fill gaps. So, for example, if I have data from Jan 1, 2000 to Dec 31, 2010, and some days are missing, when a user requests a timespan that begins before, ends after, or encompasses the missing data points, I need to "fill in" these missing values. Is there a proper term to refer to this concept of filling in data? Imputation is one term; I don't know if it is "the" term for it, though. I presume there are multiple algorithms and methodologies for filling in missing data (use the last measured value, use the median/average/moving average between two known numbers, etc.). Does anyone know the proper term for this problem, any online resources on this topic, or ideally links to open source implementations of some algorithms (C# preferably, but any language would be useful)?
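
    A minimal sketch of the simplest of those methods (forward-fill, back-fill, and linear interpolation between two known neighbours), in Python rather than C#:

        def fill_gaps(series):
            """Fill None gaps in an ordered list of values: interpolate between
            known neighbours, forward-fill after the last known value, and
            back-fill before the first."""
            values = list(series)
            known = [i for i, v in enumerate(values) if v is not None]
            if not known:
                return values
            for i in range(len(values)):
                if values[i] is not None:
                    continue
                prev = max((k for k in known if k < i), default=None)
                nxt = min((k for k in known if k > i), default=None)
                if prev is None:
                    values[i] = values[nxt]                 # back-fill
                elif nxt is None:
                    values[i] = values[prev]                # forward-fill
                else:                                       # interpolate the gap
                    frac = (i - prev) / (nxt - prev)
                    values[i] = values[prev] + frac * (values[nxt] - values[prev])
            return values

        print(fill_gaps([None, 2.0, None, None, 8.0, None]))
        # [2.0, 2.0, 4.0, 6.0, 8.0, 8.0]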

    Read the article

  • JPQL: What kind of objects contains a result list when querying multiple columns?

    - by Bunkerbewohner
    Hello! I'm trying to do something which is easy as pie in PHP & Co:

        SELECT COUNT(x) as numItems, AVG(y) as average, ... FROM Z

    In PHP I would get a simple array like [{ numItems: 0, average: 0 }], which I could use like this:

        echo "Number of Items: " . $result[0]['numItems'];

    Usually in JPQL you only query single objects or single columns and get List types back (a List of the entity, or a List of the single column's type). But what do you get when querying multiple columns?

    Read the article

  • Problem with accessing the DOM with jQuery

    - by Tristan
    Hello, I changed the tree of my JSON-P output, and I cannot access my object anymore. Here's my output:

        jsonp1271634374310( {"Inter-Medias": {"name":"Inter-Medias","idGSP":"14","average":"80","services":"8.86"} });

    And here's my jQuery script:

        success: function(data, textStatus, XMLHttpRequest){
            widget = data.name;
            widget += data.average;
            ....

    I know one level is missing, but if I try data.Inter-Medias.name or data.name.name it's still not working. Any idea, please? I will have cases where I'll have multiple objects, so I want to display all of them; how do I do that?

        for (i=0; i < data.length; i++)

    Does that look right? Bonus question: I understand that function(data, textStatus, XMLHttpRequest) is triggered in case of success, and that data is what the script received back from its request, but I have no idea what textStatus or XMLHttpRequest are here, or why they are there. Thank you.

    Read the article

  • SQL Time(2) to Array in C#?

    - by Jacob Huggart
    Hello all, I am using ASP.NET MVC and SQL Server, and I have a database that is updated intermittently with expected wait times for some event. Also, I am using some Ajax and jQuery. I need to display the average and maximum wait times. How can I take the entire list of times from the server and get the average time? Also, what would be the best method to simply grab a new time from the server when it is updated, without having to pull the whole list again? Thanks!

    Read the article
