Search Results

Search found 2156 results on 87 pages for 'weighted average'.

Page 11/87 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that sprung up today: I had to book-keep and emit diagnostics for the average thread performance in highly-threaded code, over a period of the last X number of calls and no more. Need of the day: a thread-safe, self-managing stats container. .NET 4.0 introduced the new thread-safe 'Collections.Concurrent' objects and I've been using them frequently - one in particular seemed like a good fit for storing each thread's performance data: ConcurrentQueue. But I wanted to store only the most recent X# of calls, and since ConcurrentQueue currently does not support a size constraint I had to come up with my own generic version, which attempts to restrict usage to numeric types only. Unfortunately there is no IArithmetic-like interface which constrains to only numeric types, so the constraints here aren't as elegant as they could be. (Note the use of the Average() method; of course you can use others, as well as write your own.)

    FIFO FixedSizedConcurrentQueue

    using System;
    using System.Collections.Concurrent;
    using System.Linq;

    namespace xxxxx.Data.Infrastructure
    {
        [Serializable]
        public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>
        {
            private FixedSizedConcurrentQueue() { }

            public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)
            {
                _queue = queue;
            }

            ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

            public int Size { get { return _queue.Count; } }
            public double Average { get { return _queue.Average(arg => Convert.ToInt32(arg)); } }
            public int Limit { get; set; }

            public void Enqueue(T obj)
            {
                _queue.Enqueue(obj);
                lock (this)
                {
                    T @out;
                    // Trim the oldest entries until we are back under the size limit.
                    while (_queue.Count > Limit) _queue.TryDequeue(out @out);
                }
            }
        }
    }

    The usage case is straightforward; here I'm using a FIFO queue of maximum size 200 to store doubles, to which I simply Enqueue() the calculated rates:

    Usage

    var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */

    That's about it. Happy coding!

    Read the article

  • PHP paging and the use of the LIMIT clause

    - by Average Joe
    Imagine you have a 1M-record table and you want to cap the search results at, say, 10,000 and not more than that. So what do I use for that? Well, the answer is the LIMIT clause. Example: select recid from mytable order by recid asc limit 10000. This gives me the first 10,000 records entered into this table. So far no paging, but the LIMIT phrase is already in use. That takes the question to the next level: what if I want to page through this particular record set 100 records at a time? Since the LIMIT phrase is already part of the original query, how do I use it again, this time to take care of the paging? If the original query did not have a LIMIT clause to begin with, I'd write it as limit 0,100, then adjust it to limit 100,100, then limit 200,100 and so on as the paging takes its course. But here I cannot. You'd almost think you want to use two LIMIT phrases one after the other - which is not going to work; limit 10000 limit 0,100 would surely error out. So what's the solution in this particular case?
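
    One common fix - a sketch, not something from the question itself - is to wrap the capped query in a derived table and let the outer query do the paging. Here it is in Python with an in-memory sqlite3 database standing in for MySQL (table and column names are from the question; the data is made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the real MySQL database
    conn.execute("CREATE TABLE mytable (recid INTEGER PRIMARY KEY)")
    conn.executemany("INSERT INTO mytable (recid) VALUES (?)",
                     [(i,) for i in range(1, 1001)])

    page, per_page = 2, 100  # zero-based page index
    rows = conn.execute(
        """
        SELECT recid
        FROM (SELECT recid FROM mytable ORDER BY recid ASC LIMIT 10000) AS capped
        LIMIT ? OFFSET ?
        """,
        (per_page, page * per_page),
    ).fetchall()
    print(rows[0], rows[-1])  # (201,) (300,)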

    Read the article

  • How can I strip non-XHTML tags from a string in C#?

    - by No Average Geek
    I need to be able to remove non-XHTML tags from a string containing XHTML that has been stored in a database. The string also contains references for controls (e.g. ) inside the XHTML, but I need clean XHTML with all standard tag contents unchanged. These control tags are varied (they could be any ASP.NET control), so there are too many to go looking for each one and remove them. The way they are closed also varies: not all of them have closing tags, and some are self-closing. How can I go about doing this? I've found some HTML cleaners online for including in my project, but they either remove everything or just HTML-encode the entire string. Also, I'm dealing with parts of XHTML documents, not entire documents - I don't know if that makes a difference. Any help would be appreciated.
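
    A sketch of the usual regex approach, in Python for brevity (the assumption - not stated in the question - is that the server-control tags all use a namespace-style prefix such as asp: or uc1:, which standard XHTML tag names never do; a C# port would use the same pattern with System.Text.RegularExpressions):

    import re

    # Hypothetical input: XHTML with ASP.NET control tags mixed in.
    xhtml = ('<p>Hello <asp:Label ID="lbl" runat="server">world</asp:Label> '
             '<uc1:Banner runat="server" /></p>')

    # Remove opening, closing, and self-closing tags whose names contain a
    # colon, keeping their inner text.
    clean = re.sub(r'</?\w+:\w+[^>]*>', '', xhtml)
    print(clean)  # <p>Hello world </p>

    Matching on runat="server" instead of the colon is another common heuristic.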

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to find the most popular posts right now, but to find posts that are popular compared to other posts on the same blog.

    For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets far more readership than the other two, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two average 50 likes per post. Then when a sports blog post has 200 FB likes and a gossip blog post has 300, while today's tech blog posts have 500 likes, I want to highlight the sports and gossip posts (more likes than their blog's average, versus the tech blog post that has a higher raw count but is merely average for its blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every X minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

    after 15 min: 10 FB likes, 4 tweets, 6 g+
    after 30 min: 15 FB likes, 15 tweets, 10 g+
    ...
    after 48 hours: 200 FB likes, 25 tweets, 15 g+

    By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given time interval. So if, for example, the average number of FB likes for all blog posts 48 hrs after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is being able to easily query the values (likes/shares) for a specific time-frame, e.g. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? I am brand new to databases; from some basic reading I see that it is advisable to aim for 3NF. I have come up with the following possible schemas.

    Schema 1 - DB: Popular Posts

    Table: Post
    - post_id (pk)
    - url
    - title

    Table: Social Activity
    - activity_id (pk)
    - url (fk)
    - type (i.e. facebook, twitter, g+)
    - value
    - timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording users' weight/height over time). Taking the accepted answer from that question and applying it to my model results in something like:

    Schema 2 (same as above, but the social activity is broken into 2 tables) - DB: Popular Posts

    Table: Post
    - post_id (pk)
    - url
    - title

    Table: Social Measurement
    - measurement_id (pk)
    - post_id (fk)
    - timestamp

    Table: Social Stat
    - stat_id (pk)
    - measurement_id (fk)
    - type (i.e. facebook, twitter, g+)
    - value

    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time: when making a measurement 30 min after a post is published, I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, g+, LinkedIn. With this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time X.

    Another thought I had: since it doesn't make sense to compare the number of FB likes with the number of tweets or g+ shares, maybe each social measurement belongs in its own table?

    Schema 3 - DB: Popular Posts

    Table: Post
    - post_id (pk)
    - url
    - title

    Table: fb_likes
    - fb_like_id (pk)
    - post_id (fk)
    - timestamp
    - value

    Table: fb_shares
    - fb_shares_id (pk)
    - post_id (fk)
    - timestamp
    - value

    Table: tweets
    - tweets_id (pk)
    - post_id (fk)
    - timestamp
    - value

    Table: google_plus
    - google_plus_id (pk)
    - post_id (fk)
    - timestamp
    - value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical kind of database problem (storing measurements over time, like temperature statistics) that must have a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
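
    For concreteness, here is Schema 2 as actual DDL - a sketch only, with assumed column types, using Python's sqlite3 so it runs anywhere (any engine would do):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE post (
        post_id INTEGER PRIMARY KEY,
        url     TEXT NOT NULL,
        title   TEXT
    );
    CREATE TABLE social_measurement (
        measurement_id INTEGER PRIMARY KEY,
        post_id        INTEGER NOT NULL REFERENCES post(post_id),
        taken_at       TIMESTAMP NOT NULL
    );
    CREATE TABLE social_stat (
        stat_id        INTEGER PRIMARY KEY,
        measurement_id INTEGER NOT NULL REFERENCES social_measurement(measurement_id),
        type           TEXT NOT NULL,   -- 'facebook', 'twitter', 'g+', ...
        value          INTEGER NOT NULL
    );
    """)

    # All social stats for one measurement (one post at one point in time):
    rows = conn.execute(
        "SELECT type, value FROM social_stat WHERE measurement_id = ?", (1,)
    ).fetchall()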

    Read the article

  • Ranking - an Introduction

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Ranking

    Ranking is quite common on the internet. Readers are asked to rank their latest reading by clicking on one of 5 (sometimes 10) stars. The number of stars is then converted to a number, and the average number of stars as selected by all the readers is proudly (or shamefully) displayed for future readers. SharePoint 2007 lacked this feature altogether. SharePoint 2010 allows users to rank items in a list or documents in a library (the two are actually the same, because a library is actually a list). But in SP2010 the computation of the average is done later on a timer, rather than on the spot as it should be. I suspect that the reason for this shortcoming is that they did not involve a mathematician! Let me explain.

    Ranking is kept in a related list. When a user rates a document, an item is added to the rank-list with the item id, the user name, and his number of stars. The fact that a user has already ranked an item prevents him from ranking it again. This prevents the creator of the item from asking his mother to rank it a 5 and do it 753 times, thus stuffing the ballot. Some systems will allow a user to change his rating, and this is done by updating the rank-list item. Now, when the timer kicks off, the list is scanned, and for each item the rank-list items containing its id are summed up and divided by the number of votes, thus yielding the new average. This is obviously very time consuming and very server intensive.

    In the 18th century, an early actuary named James Dodson used what the great Augustus De Morgan (of De Morgan's law) later named commutation tables. The labor involved in computing a life insurance premium was staggering and also very error prone. Clerks with pencil and paper would multiply and add mountains of numbers to do the task. The more steps, the greater the probability of error and the more expensive the process. Commutation tables created a "summary" of many steps and reduced the work 100-fold. So had Microsoft taken a lesson from the history of computation, they would have developed a much faster way of rating that may be done in real time and is also 100 times faster and less CPU intensive.

    How do we do this? We use a form of commutation. We always keep the number of votes and the total of stars; one simple division gives us the average. So we write an event receiver. When a vote is added, we just add the stars to the total-stars, add 1 to the number of votes, and recompute the average. When a vote is updated, we reduce the total by the old vote, increase it by the new vote, and leave the number of votes the same. Again we do the division to get the new average. When a vote is deleted (highly unlikely and maybe even prohibited), we reduce the total by that vote and reduce the number of votes by 1. Gone are the days of scanning lists, counting items, and tallying votes, and we have no need for a timer process to run it all.

    This is the first of a few treatises on ranking. Even though I discussed the math and the history thereof, here I am only going to solve the presentation issue. I wanted to create the CSS and JScript needed to display the stars, create the various effects like hovering and clicking (onmouseover, onmouseout, onclick, etc.), and I wanted to create a general solution with any number of stars. When I had it all done, I created the ranking game so that I could test it. The game is interesting in and of itself, so here it is (or go to the games page and select "rank the stars").
    BTW, when you play it, look at the source code and see how it was all done. Next: how the 5 stars are displayed in the New and Update forms. When the whole set of articles is done, you'll be able to create the complete solution. That's all folks!
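
    The running-total idea the article describes is tiny in code. A sketch in Python (the SharePoint event-receiver wiring is omitted; this is just the commutation-style arithmetic):

    class Rating:
        """Keep the vote count and star total, so the average is one division."""

        def __init__(self):
            self.votes = 0
            self.total_stars = 0

        @property
        def average(self):
            return self.total_stars / self.votes if self.votes else 0.0

        def add_vote(self, stars):
            self.votes += 1
            self.total_stars += stars

        def update_vote(self, old_stars, new_stars):
            self.total_stars += new_stars - old_stars  # vote count unchanged

        def delete_vote(self, stars):
            self.votes -= 1
            self.total_stars -= stars

    r = Rating()
    r.add_vote(5)
    r.add_vote(3)
    print(r.average)     # 4.0
    r.update_vote(3, 4)
    print(r.average)     # 4.5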

    Read the article

  • Are CK Metrics still considered useful? Is there an open source tool to help?

    - by DeveloperDon
    Chidamber & Kemerer proposed several metrics for object-oriented code. Among them: depth of inheritance tree, weighted number of methods, number of member functions, number of children, and coupling between objects. Using a base of code, they tried to correlate these metrics with defect density and maintenance effort, using covariant analysis. Are these metrics actionable in projects? Perhaps they can guide refactoring. For example, weighted number of methods might show which God classes need to be broken into more cohesive classes that each address a single concern. Has this approach been superseded by a better method, and is there a tool that can identify problem code, particularly in a moderately large project being handed off to a new developer or team?
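
    For a sense of how simple the raw counting is, here is a rough sketch that computes two CK-flavored numbers for Python source with the standard ast module (weighted methods per class with all weights set to 1 - i.e. a plain method count - plus the number of direct bases; the file name is hypothetical):

    import ast

    source = open("some_module.py").read()  # hypothetical module to analyze
    tree = ast.parse(source)

    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            # WMC with unit weights is just the method count.
            wmc = sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                      for n in node.body)
            print(f"{node.name}: WMC={wmc}, direct bases={len(node.bases)}")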

    Read the article

  • How to make "Chameleonic Ambiance Script" select a lighter hue for dark wallpapers?

    - by Nirmik
    I just started using an amazing script that re-colors the default "Ambiance" theme based on the current wallpaper. More details can be found here. I find this really amazing, but with my wallpaper being as shown below, the selection color and progress-bar color after running the script are too dark (as can be seen in the screenshot). I've learnt that the script takes the average color of the wallpaper and then uses a darker shade of it. So can I make this algorithm select a lighter tint of the average color? Or can it be made to select the lightest color in the wallpaper instead of the average color?
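
    For the lightening step itself, the standard trick is to blend the average color toward white. A sketch, assuming the script hands you the average color as an RGB triple (the sample color is made up):

    def lighten(rgb, amount=0.5):
        """Blend an RGB color toward white; amount=0 keeps it, amount=1 gives white."""
        return tuple(round(c + (255 - c) * amount) for c in rgb)

    average_color = (62, 44, 41)        # hypothetical dark average from a wallpaper
    print(lighten(average_color, 0.5))  # (158, 150, 148) - a lighter tint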

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like:

    /about/history
    /about/team
    /contact/email-us
    /contact

    I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this:

    { pagePathLevel1: /about, avgTimeOnPage: 28 },
    { pagePathLevel1: /contact, avgTimeOnPage: 10 }

    This looks roughly like what I want, but I'm not sure how to interpret it: is the value of avgTimeOnPage the average time spent by any user on all pages that match that path? Or is it the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.
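
    If the metric turns out to be a per-page average, one way to get "time across all pages under a path" is to weight each page's avgTimeOnPage by its pageviews - a sketch, assuming you also request the ga:pageviews metric (the numbers are made up):

    # (pagePath, avgTimeOnPage in seconds, pageviews) - hypothetical API rows
    rows = [
        ("/about/history", 40, 120),
        ("/about/team",    20, 300),
    ]

    total_time  = sum(avg * views for _, avg, views in rows)
    total_views = sum(views for _, _, views in rows)
    print(round(total_time / total_views, 1))  # 25.7 - a pageview-weighted average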

    Read the article

  • Google Analytics: understanding dimensions and metrics?

    - by flossfan
    If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this: { pagePathLevel1: /about, avgTimeOnPage: 28 }, { pagePathLevel1: /contact, avgTimeOnPage: 10 } I'm not completely sure how to interpret this. Is the value of avgTimeOnPage the average time spent by any user on all pages that match that path? Or is 28 seconds the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect. I hope that question makes sense! Please tell me if it doesn't.

    Read the article

  • Estimating file transfer time over network?

    - by rocko
    I am transferring a file from one server to another. To estimate the time it would take to transfer some GBs of files over the network, I ping the destination IP and take the average round-trip time. For example: I ping 172.26.26.36 and get an average round-trip time of x ms. Since ping sends 32 bytes of data each time, I estimate the speed of the network to be 2*32*8 (bits) / x = y Mbps - the multiplication by 2 is because x is the round-trip time. So transferring 5 GB of data will take 5000/y seconds. Am I correct in my method of estimating the time? If you find any mistake, or have any other good method, please share.
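
    A sketch of that arithmetic in Python, with units made explicit - note that a ping round-trip mostly measures latency, not bandwidth, so an estimate derived this way is usually far off; timing the transfer of a reasonably large test file measures actual throughput much better (the RTT below is hypothetical):

    rtt_ms = 2.0            # hypothetical average ping round-trip time
    payload_bits = 32 * 8   # ping payload: 32 bytes

    # The method from the question: 2 * payload / RTT. With RTT in ms this
    # comes out in bits per millisecond, i.e. kbit/s - not Mbit/s.
    speed_kbps = 2 * payload_bits / rtt_ms   # 256 kbit/s
    speed_mbps = speed_kbps / 1000           # 0.256 Mbit/s

    file_megabits = 5 * 1000 * 8             # 5 GB (decimal) = 40,000 Mbit
    hours = file_megabits / speed_mbps / 3600
    print(round(hours, 1), "hours")          # 43.4 hours - clearly not realistic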

    Read the article

  • Opening programs very slow on Ubuntu 12.10

    - by Mislav Blaževic
    When I open any program for the first time after boot, it loads VERY slowly (no matter how long after boot). For example, the terminal takes 2-3 seconds, and Skype and Firefox often take much longer. If I close a program and open it again, it loads in a reasonable time (<1 second). I have an Intel Core i5-2500K CPU @ 3.30GHz, so it shouldn't be a hardware issue... What is causing this?

    Disk benchmark:
    Average read rate: 115.5 MB/s (100 samples)
    Average write rate: 98.7 MB/s (100 samples)
    Average access time: 11.79 ms (1000 samples)

    Read the article

  • MySQL MAX() function usage

    - by Simon
    The table videos has the following fields: id, average, name. How can I write a query to select the name of the video which has the max average? I can do that with two queries, by selecting max(average) from the table and then finding the name where the average equals that max, but I want to do it in one query! Help me please!
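
    One common single-query answer is to sort by average and take the first row. A sketch, with Python's sqlite3 standing in for MySQL and made-up data (on ties this returns one arbitrary row; add a tiebreaker to the ORDER BY if that matters):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, average REAL, name TEXT)")
    conn.executemany("INSERT INTO videos (average, name) VALUES (?, ?)",
                     [(3.2, "cats"), (4.9, "dogs"), (4.1, "birds")])

    name = conn.execute(
        "SELECT name FROM videos ORDER BY average DESC LIMIT 1"
    ).fetchone()[0]
    print(name)  # dogs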

    Read the article

  • PHP array: simple question about multidimensional arrays

    - by Tristan
    Hello, I've got a SQL query which returns multiple rows, and I have:

    $data = array(
        "nom" => $row['nom'],
        "prix" => $row['rapport'],
        "average" => "$moyenne_ge"
    );

    which is perfect, but only if my query returns one row. I tried this:

    $data = array();
    $data[$row['nom']]["nom"] = $row['nom'];
    ...
    $data[$row['nom']]['average'] = "$moyenne_ge";

    in order to have:

    $data[brand1][nom] = brand1
    $data[brand1][average] = 150$
    $data[brand2][nom] = brand2
    $data[brand2][average] = 20$
    ...

    but when I do json_encode($data) I only get the latest JSON object instead of all the JSON objects from my request, as if my array held only one brand instead of 10. I guess I did something stupid somewhere. Thanks for your help
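
    The classic cause of "only the last row survives" is re-initializing the container inside the fetch loop, so each iteration throws away the previous rows. Here is the intended shape sketched in Python (hypothetical rows; in the PHP version, make sure $data = array(); runs once, before the while loop):

    import json

    rows = [  # hypothetical result set
        {"nom": "brand1", "rapport": 150, "moyenne": 4.2},
        {"nom": "brand2", "rapport": 20,  "moyenne": 3.1},
    ]

    data = {}  # initialize once, BEFORE the loop
    for row in rows:
        data[row["nom"]] = {
            "nom": row["nom"],
            "prix": row["rapport"],
            "average": row["moyenne"],
        }

    print(json.dumps(data))  # every brand is present, not just the last one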

    Read the article

  • update MySQL table

    - by Simon
    How can I write a query to update the table videos and set the value of the field name to 'something' where the average is the max()? Or to UPDATE the table where average has the second-highest value? I think the query must look like this:

    UPDATE videos SET name = 'something'
    WHERE average IN (SELECT `average` FROM `videos` ORDER BY `average` DESC LIMIT 1)

    but it doesn't work!
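
    Two MySQL restrictions usually bite here: LIMIT is not allowed inside an IN subquery (error 1235), and a statement may not update a table it also selects from directly (error 1093). Wrapping the subquery in a derived table is the usual workaround - a sketch, with sqlite3 standing in for MySQL and made-up data (for the second-highest row, change the inner query to LIMIT 1 OFFSET 1):

    import sqlite3

    conn = sqlite3.connect(":memory:")  # the point here is the SQL shape
    conn.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, average REAL, name TEXT)")
    conn.executemany("INSERT INTO videos (average, name) VALUES (?, ?)",
                     [(3.2, "a"), (4.9, "b"), (4.1, "c")])

    conn.execute("""
        UPDATE videos SET name = 'something'
        WHERE average = (SELECT average
                         FROM (SELECT average FROM videos
                               ORDER BY average DESC LIMIT 1) AS t)
    """)
    print(conn.execute("SELECT name, average FROM videos "
                       "ORDER BY average DESC").fetchall())
    # [('something', 4.9), ('c', 4.1), ('a', 3.2)]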

    Read the article

  • Why is my code not printing anything to stdout?

    - by WM
    I'm trying to calculate the average of a student's marks:

    import java.util.Scanner;

    public class Average {
        public static void main(String[] args) {
            int mark;
            int countTotal = 0; // counts the number of entered marks
            int avg = 0;        // running average (int arithmetic truncates)
            Scanner scan = new Scanner(System.in);
            System.out.print("Enter your average: ");
            String name = scan.next();
            // hasNextInt() waits for more input; the loop only ends once a
            // non-integer token (or end-of-input) arrives, so nothing prints
            // until then.
            while (scan.hasNextInt()) {
                mark = scan.nextInt();
                countTotal++;
                avg = avg + ((mark - avg) / countTotal);
            }
            System.out.print(name + " " + avg);
        }
    }

    Read the article

  • PHP is not returning me a number type

    - by Tristan
    Hello, I tried to follow that great tutorial (star rating with CSS: http://stackoverflow.com/questions/1987524/turn-a-number-into-star-rating-display-using-jquery-and-css) but I have just one big problem. When I do

    <span class="stars">1.75</span>

    or

    $foo = '1.75';
    echo '<span class="stars">'.$foo.'</span>';

    the stars are shown correctly, but as soon as I do:

    while ($val = mysql_fetch_array($result)) {
        $average = ($val['services'] + $val['serviceCli'] + $val['interface'] + $val['qualite'] + $val['rapport']) / 5;
        <span class="stars">.$average.</span>
    }

    the stars stop working. I double-checked the data types in MySQL: they're all TINYINT(2). And I tried $average = intval($average); but it's still not working. Thank you

    Read the article

  • Many users, many CPUs, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of:

    1) pricing
    2) latency resulting from:
       - slow CPUs, instance creations, JIT compiles, etc.
       - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
       - communication between cloud and end user
    3) ease of deployment

    The usage scenario I am expecting:
    - A typical user sends a query (XML of size around 1K) once every 30 seconds on average.
    - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
    - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds, and in general no more than 5 seconds.
    - A background save of the response to a DB should occur (not time critical).
    - There can be up to 30,000 simultaneous users - i.e., on average 1,000 queries a second, each requiring an average 0.2 sec calculation, which would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc. BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

    Read the article

  • I'm writing this code so the user would enter his/her name followed by his/her marks to get the average

    - by WM
    What if the user wanted to enter his/her grades in the following way: Will 23, 34, 45, 45 - how would I get rid of the commas?

    import java.util.Scanner;

    public class StudentAverage {
        public static void main(String[] args) {
            int markSum = 0;
            double average = 0;
            Scanner input1 = new Scanner(System.in);
            System.out.print("Enter student record : ");
            String name = input1.next();
            Scanner input2 = new Scanner(input1.nextLine());
            int countTotal = 0;
            while (input2.hasNextInt()) {
                countTotal++;
                markSum += input2.nextInt();
            }
            average = markSum / countTotal; // to calculate the total average (note: int division truncates)
            System.out.print(name + " " + average);
        }
    }
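
    The idea is to treat commas as part of the delimiter instead of the tokens. In Java that would be something like input2.useDelimiter("[,\\s]+") before the loop; the same approach sketched in Python (the input line is made up):

    line = "Will 23, 34, 45, 45"  # hypothetical input

    name, rest = line.split(None, 1)
    marks = [int(tok) for tok in rest.replace(",", " ").split()]
    average = sum(marks) / len(marks)
    print(name, average)  # Will 36.75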

    Read the article

  • How can I strip line breaks from my XML with XSLT?

    - by Eric
    I have this XSLT:

    <xsl:strip-space elements="*" />
    <xsl:template match="math">
      <img class="math">
        <xsl:attribute name="src">http://latex.codecogs.com/gif.latex?<xsl:value-of select="text()" /></xsl:attribute>
      </img>
    </xsl:template>

    Which is being applied to this XML (notice the line break):

    <math>\text{average} = \alpha \times \text{data} + (1-\alpha) \times \text{average}</math>

    Unfortunately, the transform creates this:

    <img class="math" src="http://latex.codecogs.com/gif.latex?\text{average} = \alpha \times \text{data} + (1-\alpha) \times&#10;&#9;&#9;&#9;&#9;&#9;\text{average}" />

    Notice the whitespace character literals. Although it works, it's awfully messy. How can I prevent this?
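
    A likely fix - an assumption, not something from the question - is to emit normalize-space(.) instead of text(), which collapses internal whitespace runs to single spaces. Sketched here with Python's lxml so it can actually be run:

    from lxml import etree

    transform = etree.XSLT(etree.XML("""\
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="math">
        <img class="math">
          <xsl:attribute name="src">http://latex.codecogs.com/gif.latex?<xsl:value-of select="normalize-space(.)"/></xsl:attribute>
        </img>
      </xsl:template>
    </xsl:stylesheet>
    """))

    doc = etree.XML("<math>\\text{average} = \\alpha \\times\n\t\t\\text{data}</math>")
    print(str(transform(doc)))  # the newline-and-tabs run becomes a single space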

    Read the article

  • averaging matrix efficiently

    - by user248237
    In Python, given an n x p matrix, e.g. 4 x 4, how can I return a matrix that's 4 x 2, which simply averages the first two columns and the last two columns for all 4 rows? E.g. given:

    a = array([[1, 2, 3, 4],
               [5, 6, 7, 8],
               [9, 10, 11, 12],
               [13, 14, 15, 16]])

    return a matrix that has the average of a[:, 0] and a[:, 1], and the average of a[:, 2] and a[:, 3]. I want this to work for an arbitrary n x p matrix, assuming p is evenly divisible by the number of columns being averaged together. Let me clarify: for each row, I want the average of the first two columns, then the average of the last two columns. So it would be:

    (1 + 2) / 2, (3 + 4) / 2 <- row 1 of new matrix
    (5 + 6) / 2, (7 + 8) / 2 <- row 2 of new matrix, etc.

    which should yield a 4 x 2 matrix rather than 4 x 4. Thanks.
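
    A sketch of the usual numpy answer: reshape each row into groups of columns, then take the mean over the group axis (the group width is 2 here, but any width that divides p works):

    import numpy as np

    a = np.arange(1, 17).reshape(4, 4)   # the 4 x 4 example from the question

    group = 2
    b = a.reshape(a.shape[0], -1, group).mean(axis=2)
    print(b)
    # [[ 1.5  3.5]
    #  [ 5.5  7.5]
    #  [ 9.5 11.5]
    #  [13.5 15.5]]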

    Read the article

  • ORA-00937 error during select subquery

    - by RedRaven
    I am attempting to write a query that returns the number of employees, the average salary, and the number of employees paid below the average. The query I have so far is:

    select trunc(avg(salary)) "Average Pay",
           count(salary) "Total Employees",
           (select count(salary) from employees
            where salary < (select avg(salary) from employees)) UnderPaid
    from employees;

    But when I run this I get the ORA-00937 error in the subquery. I had thought that maybe the COUNT function was causing the issue, but even running a simpler subquery such as:

    select trunc(avg(salary)) "Average Pay",
           count(salary) "Total Employees",
           (select avg(salary) from employees) UnderPaid
    from employees;

    still returns the same error. As both AVG and COUNT seem to be aggregate functions, I'm not sure why I'm getting the error. Thanks
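
    ORA-00937 ("not a single-group group function") fires when a select list mixes group functions with expressions Oracle does not treat as aggregated - and the scalar subquery appears to count as such an expression here. One commonly suggested workaround (a sketch only, not tested against a live Oracle instance) is to join the precomputed average in as a derived table so everything in the select list aggregates:

    # Untested sketch; in practice this string would be executed through an
    # Oracle driver such as cx_Oracle.
    query = """
    SELECT TRUNC(AVG(e.salary))  AS average_pay,
           COUNT(e.salary)       AS total_employees,
           SUM(CASE WHEN e.salary < a.avg_sal THEN 1 ELSE 0 END) AS underpaid
    FROM employees e
         CROSS JOIN (SELECT AVG(salary) AS avg_sal FROM employees) a
    """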

    Read the article

  • Drupal Views: Render Null Result for Relationship as 0

    - by Kyle S
    I have a View configured in Drupal to return nodes, sorting them by their average vote in descending order. For the purposes of the View, the average-vote value is a Relationship. I noticed that nodes with no votes are displayed after nodes with a negative average. Nodes with no votes should have an average of 0, but I believe the MySQL JOIN is causing NULL values to be returned (as there are no matching rows in the joined table; a row is created only after the first vote is cast for an item). I discovered that with MySQL it is possible to output NULL values in a column as another value with IFNULL(column_name, 'other value'). I feel like I would need to modify the Views module in order to obtain this functionality, but I'm hoping there is some sort of option that returns NULL values in a relation (when no relation exists for the item) as 0 instead of NULL, so that I can sort the nodes properly. The modules I am using include Views, Voting API, Vote Up/Down, and CTools. Thanks.

    Read the article

  • Question about Little's Law

    - by davit-datuashvili
    I know that Little's Law states (paraphrased): the average number of things in a system is the product of the average rate at which things leave the system and the average time each one spends in the system. Or:

    n = x * (r + z)

    where x is throughput, r is response time, z is think time, and r + z is the average time a user spends in the system. Now I have a question about a problem from Programming Pearls: suppose that a system makes 100 disk accesses to process a transaction (although some systems require fewer, some systems will require several hundred disk accesses per transaction). How many transactions per hour per disk can the system handle? Please help.
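
    A back-of-the-envelope answer in the spirit of Programming Pearls, under the assumption (mine, not the book's exact figure) that one random disk access takes about 10 ms:

    \[
    \frac{3600\ \text{s/hour}}{0.01\ \text{s/access}} = 360{,}000\ \text{accesses/hour},
    \qquad
    \frac{360{,}000\ \text{accesses/hour}}{100\ \text{accesses/transaction}} = 3{,}600\ \text{transactions/hour}
    \]

    per disk at 100% utilization; since a disk driven near saturation queues badly, a safer planning figure is about half that, roughly 1,800 transactions per hour per disk.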

    Read the article

  • Help with enums in Java

    - by devoured elysium
    Is it possible to have an enum change its value (from inside itself)? Maybe it's easier to understand what I mean with code:

    enum Rate {
        VeryBad(1), Bad(2), Average(3), Good(4), Excellent(5);

        private int rate;

        private Rate(int rate) {
            this.rate = rate;
        }

        public void increaseRating() {
            // is it possible to make the enum variable increase?
            // that is, if right now this enum has the value Average, after
            // calling this method have it change to Good?
        }
    }

    This is what I want to achieve:

    Rate rate = Rate.Average;
    System.out.println(rate); // prints Average
    rate.increaseRating();
    System.out.println(rate); // prints Good

    Thanks
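
    Enum constants are shared singletons, so a method cannot rebind the caller's variable to a different constant; the usual pattern is to return the next constant and reassign. The same idea sketched in Python (a Java version would do the equivalent with ordinal() and values()):

    from enum import Enum

    class Rate(Enum):
        VERY_BAD = 1
        BAD = 2
        AVERAGE = 3
        GOOD = 4
        EXCELLENT = 5

        def increased(self):
            """Return the next-better Rate; the members themselves never change."""
            members = list(type(self))
            idx = members.index(self)
            return members[min(idx + 1, len(members) - 1)]

    rate = Rate.AVERAGE
    print(rate.name)         # AVERAGE
    rate = rate.increased()  # reassign; Rate.AVERAGE itself is untouched
    print(rate.name)         # GOOD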

    Read the article

  • How to count NULL values in MySQL?

    - by abbr
    I want to know how I can find all the values that are NULL in a MySQL database. For example, I'm trying to display all the users who don't have an average yet. Here is the MySQL code:

    SELECT COUNT(average) as num FROM users WHERE user_id = '$user_id' AND average IS_NULL
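
    Two details trip this query up: the operator is spelled IS NULL (no underscore), and COUNT(average) by definition counts only non-NULL values, so it would return 0 even once the filter worked - COUNT(*) counts the rows the WHERE clause keeps. A sketch with Python's sqlite3 standing in for MySQL (made-up data):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (user_id INTEGER, average REAL)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, None), (1, 3.5), (2, None)])

    num = conn.execute(
        "SELECT COUNT(*) AS num FROM users WHERE user_id = ? AND average IS NULL",
        (1,),
    ).fetchone()[0]
    print(num)  # 1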

    Read the article
