Search Results

Search found 142 results on 6 pages for 'weighted'.

Page 1/6 | 1 2 3 4 5 6  | Next Page >

  • Compute weighted averages for large numbers

    - by Travis
    I'm trying to get the weighted average of a few numbers. Basically I have:

        Price - 134.42      Quantity - 15236545

    There can be as few as one or two or as many as fifty or sixty pairs of prices and quantities. I need to figure out the weighted average of the price. Basically, the weighted average should give very little weight to pairs like

        Price - 100000000.00    Quantity - 3

    and more to the pair above. The formula I currently have is:

        (price*quantity + price*quantity + ...) / totalQuantity

    So far I have this done:

        double optimalPrice = 0;
        int totalQuantity = 0;
        double rolling = 0;
        System.out.println(rolling);
        Iterator it = orders.entrySet().iterator();
        while (it.hasNext()) {
            System.out.println("inside");
            Map.Entry order = (Map.Entry) it.next();
            double price = (Double) order.getKey();
            int quantity = (Integer) order.getValue();
            System.out.println(price + " " + quantity);
            rolling += price * quantity;
            totalQuantity += quantity;
            System.out.println(rolling);
        }
        System.out.println(rolling);
        return totalQuantity / rolling;

    The problem is I very quickly max out the "rolling" variable. How can I actually get my weighted average? Thanks!
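
    A common language-independent fix here is to maintain a running mean instead of a running sum, so no intermediate value grows with the size of the data. A minimal Python sketch of the incremental weighted mean (the same update works in Java with double for the mean and long for the total quantity):

        def weighted_mean(pairs):
            """Incremental weighted mean over (price, quantity) pairs.

            The running value stays on the order of the prices themselves,
            so it never blows up the way an accumulated sum of
            price * quantity products can.
            """
            mean = 0.0
            total_qty = 0
            for price, qty in pairs:
                total_qty += qty
                # Fold the new pair into the mean without keeping a huge sum.
                mean += (price - mean) * qty / total_qty
            return mean

        print(weighted_mean([(134.42, 15236545), (100000000.00, 3)]))  # ~154.11

    Note also that the Java above returns totalQuantity / rolling, which is the reciprocal of the stated formula.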

    Read the article

  • Weighted Average with LINQ

    - by jsmith
    My goal is to get a weighted average from one table, based on another table's primary key. Example data:

        Table1
        Key   WEIGHTED_AVERAGE
        0200  0

        Table2
        ForeignKey  LENGTH  PCR
        0200        105     52
        0200        105     60
        0200        105     54
        0200        105     -1
        0200        47      55

    I need to get a weighted average based on the length of a segment, and I need to ignore values of -1. I know how to do this in SQL, but my goal is to do this in LINQ. It looks something like this in SQL:

        SELECT Sum(t2.PCR * t2.LENGTH) / Sum(t2.LENGTH) AS WEIGHTED_AVERAGE
        FROM Table1 t1, Table2 t2
        WHERE t2.PCR <> -1 AND t2.ForeignKey = t1.Key;

    I am still pretty new to LINQ and having a hard time figuring out how I would translate this. The resulting weighted average should come out to roughly 55.3. Thank you.

    Read the article

  • Weighted Average and Ratings

    - by Danten
    Maths isn't my strong point and I'm at a loss here. Basically, all I need is a simple formula that will give a weighted rating on a scale of 1 to 5. If there are very few votes, they carry less influence and the rating is pressed more towards the average (in this case I want it to be 3, not the average of all other ratings). I've tried a few different Bayesian implementations but these haven't worked out. I believe the graphical representation I am looking for could be shown as:

            ___
           /
        ___/

    Cheers
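
    A standard formula with exactly this behaviour is the Bayesian (damped) average used by many rating sites: blend the item's own mean with a prior mean, weighted by the number of votes. A minimal sketch, where the prior of 3 comes from the question and the damping constant m is a tuning assumption:

        def bayesian_rating(ratings, prior=3.0, m=10):
            """Weighted rating that shrinks toward `prior` when votes are few.

            m acts as a count of "virtual" votes at the prior: with no real
            votes the result is exactly `prior`, and with many votes it
            approaches the plain mean.
            """
            n = len(ratings)
            return (m * prior + sum(ratings)) / (m + n)

        print(bayesian_rating([]))           # 3.0   - no votes, pure prior
        print(bayesian_rating([5, 5, 5]))    # ~3.46 - a few votes barely move it
        print(bayesian_rating([5] * 200))    # ~4.90 - many votes dominate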

    Read the article

  • Create a fast algorithm for a "weighted" median

    - by Hameer Abbasi
    Suppose we have a set S with k elements of 2-dimensional vectors, (x, n). What would be the most efficient algorithm to calculate the median of the weighted set? By "weighted set", I mean that the number x has a weight n. Here is an example (inefficient due to sorting) algorithm, where Sx is the x-part and Sn is the n-part. Assume that all coordinate pairs are already arranged in order of Sx, with the respective changes also being made in Sn, and that the sum of n is sumN:

        sum <- 0; i <- 0
        while (sum < sumN/2)
            sum <- sum + Sn(i)
            ++i
        if (sum > sumN/2)
            return Sx(i)
        else
            return (Sx(i)*Sn(i) + Sx(i+1)*Sn(i+1)) / (Sn(i) + Sn(i+1))

    EDIT: Would this hold in two or more dimensions, if we were to calculate the median first in one dimension, then in another, with n being the sum along that dimension in the second pass?
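
    For reference, a direct Python sketch of the sort-based version; the scan itself is O(k) after an O(k log k) sort, and a quickselect-style partition (recursing into whichever side contains the half-weight point) brings the whole thing down to expected O(k):

        def weighted_median(pairs):
            """Weighted median of (x, n) pairs: the x at which the running
            weight first passes half of the total weight.
            """
            pairs = sorted(pairs)                     # sort by x
            half = sum(n for _, n in pairs) / 2.0
            acc = 0.0
            for i, (x, n) in enumerate(pairs):
                acc += n
                if acc > half:
                    return x
                if acc == half:                       # boundary: average neighbours
                    return (x + pairs[i + 1][0]) / 2.0

        print(weighted_median([(1, 1), (2, 3), (10, 2)]))  # 2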

    Read the article

  • is Boost Library's weighted median broken?

    - by user624188
    I confess that I am no expert in C++. I am looking for a fast way to compute a weighted median, which Boost seemed to have. But it seems I am not able to make it work.

        #include <iostream>
        #include <boost/accumulators/accumulators.hpp>
        #include <boost/accumulators/statistics/stats.hpp>
        #include <boost/accumulators/statistics/median.hpp>
        #include <boost/accumulators/statistics/weighted_median.hpp>

        using namespace boost::accumulators;

        int main() {
            // Define an accumulator set
            accumulator_set<double, stats<tag::median > > acc1;
            accumulator_set<double, stats<tag::median >, float> acc2;

            // push in some data ...
            acc1(0.1); acc1(0.2); acc1(0.3); acc1(0.4); acc1(0.5); acc1(0.6);

            acc2(0.1, weight = 0.); acc2(0.2, weight = 0.); acc2(0.3, weight = 0.);
            acc2(0.4, weight = 1.); acc2(0.5, weight = 1.); acc2(0.6, weight = 1.);

            // Display the results ...
            std::cout << "         Median: " << median(acc1) << std::endl;
            std::cout << "Weighted Median: " << median(acc2) << std::endl;
            return 0;
        }

    This produces the following output, which is clearly wrong:

        Median: 0.3
        Weighted Median: 0.3

    Am I doing something wrong? Any help will be greatly appreciated.

    However, the weighted sum works correctly. @glowcoder: The weighted sum works perfectly fine like this:

        #include <iostream>
        #include <boost/accumulators/accumulators.hpp>
        #include <boost/accumulators/statistics/stats.hpp>
        #include <boost/accumulators/statistics/sum.hpp>
        #include <boost/accumulators/statistics/weighted_sum.hpp>

        using namespace boost::accumulators;

        int main() {
            // Define an accumulator set
            accumulator_set<double, stats<tag::sum > > acc1;
            accumulator_set<double, stats<tag::sum >, float> acc2;
            // accumulator_set<double, stats<tag::median >, float> acc2;

            // push in some data ...
            acc1(0.1); acc1(0.2); acc1(0.3); acc1(0.4); acc1(0.5); acc1(0.6);

            acc2(0.1, weight = 0.); acc2(0.2, weight = 0.); acc2(0.3, weight = 0.);
            acc2(0.4, weight = 1.); acc2(0.5, weight = 1.); acc2(0.6, weight = 1.);

            // Display the results ...
            std::cout << "         Median: " << sum(acc1) << std::endl;
            std::cout << "Weighted Median: " << sum(acc2) << std::endl;
            return 0;
        }

    and the result is

        Sum: 2.1
        Weighted Sum: 1.5

    Read the article

  • Best way to choose random element from weighted list

    - by Qqwy
    I want to create a simple game. Every so often, a power up should appear. Right now the different kinds of power ups are stored in an array. However, not every power up should appear equally often: For instance, a score multiplier should appear much more often than an extra life. What is the best/fastest way to pick an element at random from a list where some of the elements should be picked more often than others?
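
    A common approach is to draw a random number in [0, total weight) and walk the cumulative weights until it is passed. A minimal Python sketch, where the power-up names and weights are made up for illustration:

        import random

        # Hypothetical power-ups with relative spawn weights (higher = more common).
        POWERUPS = [("score_multiplier", 10), ("shield", 5), ("extra_life", 1)]

        def pick_weighted(items):
            """Pick one item with probability proportional to its weight; O(n) per draw."""
            total = sum(w for _, w in items)
            r = random.uniform(0, total)
            acc = 0.0
            for item, w in items:
                acc += w
                if r < acc:
                    return item
            return items[-1][0]   # guard against floating-point edge cases

        print(pick_weighted(POWERUPS))

    If draws are frequent and the weights rarely change, precomputing the cumulative array and binary-searching it makes each draw O(log n), and the alias method (see the Walker's Alias Method result further down this page) gets it to O(1).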

    Read the article

  • Estimate gaussian (mixture) density from a set of weighted samples

    - by Christian
    Assume I have a set of weighted samples, where each sample has a corresponding weight between 0 and 1. I'd like to estimate the parameters of a Gaussian mixture distribution that is biased towards the samples with higher weight. In the usual non-weighted case, Gaussian mixture estimation is done via the EM algorithm. Does anyone know an implementation (any language is ok) that permits passing weights? If not, does anyone know how to modify the algorithm to account for the weights? If not, can someone give me a hint on how to incorporate the weights in the initial formula of the maximum-log-likelihood formulation of the problem? Thanks!
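
    The usual modification is to maximize the weighted log-likelihood, sum_i w_i * log p(x_i): the E-step is unchanged except that each sample's responsibilities are scaled by its weight, and every count in the M-step becomes a weighted count. A minimal sketch of one such iteration for a 1-D mixture (illustrative only; no initialization or convergence logic):

        import math

        def weighted_em_step(x, w, params):
            """One EM iteration for a 1-D Gaussian mixture with sample weights w.

            params: list of (pi_k, mu_k, var_k) per component. The weights
            enter by scaling each point's responsibilities, so heavily
            weighted samples pull the fit harder.
            """
            def norm_pdf(v, mu, var):
                return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

            # E-step: responsibilities, scaled by the sample weight.
            r = []
            for xi, wi in zip(x, w):
                dens = [pi * norm_pdf(xi, mu, var) for pi, mu, var in params]
                z = sum(dens)
                r.append([wi * d / z for d in dens])

            # M-step: weighted counts replace plain counts everywhere.
            W = sum(w)
            new_params = []
            for k in range(len(params)):
                Nk = sum(ri[k] for ri in r)
                mu = sum(ri[k] * xi for ri, xi in zip(r, x)) / Nk
                var = sum(ri[k] * (xi - mu) ** 2 for ri, xi in zip(r, x)) / Nk
                new_params.append((Nk / W, mu, max(var, 1e-9)))
            return new_params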

    Read the article

  • Find the centroid of a polygon with weighted vertices

    - by Calle Kabo
    Hi, I know how to find the centroid (center of mass) of a regular polygon. This assumes that every part of the polygon weighs the same. But how do I calculate the centroid of a weightless polygon (made from aerogel perhaps :), where each vertex has a weight? Simplified illustration of what I mean, using a straight line:

        5kg-----------------5kg
                 ^ center of gravity

        10kg---------------5kg
              ^ center of gravity offset due to weight of vertices

    Of course, I know how to calculate the center of gravity on a straight line with weighted vertices, but how do I do it on a polygon with weighted vertices? Thanks for your time!
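
    If only the vertices carry mass (edges and interior weightless), the center of gravity is simply the mass-weighted mean of the vertex positions, exactly as on the line, applied per coordinate. A minimal sketch:

        def weighted_centroid(vertices):
            """Center of mass of point masses sitting at polygon vertices.

            vertices: list of (x, y, mass). The polygon's shape is irrelevant
            here; only the vertex positions and masses matter.
            """
            total = sum(m for _, _, m in vertices)
            cx = sum(x * m for x, _, m in vertices) / total
            cy = sum(y * m for _, y, m in vertices) / total
            return cx, cy

        # The question's 1-D example: 10kg at x=0, 5kg at x=1.
        print(weighted_centroid([(0, 0, 10), (1, 0, 5)]))  # (0.333..., 0.0)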

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

        ;; ANSWER SECTION:
        orion.2x.to.    116    IN    A    80.237.201.41
        orion.2x.to.    116    IN    A    87.230.54.12
        orion.2x.to.    116    IN    A    87.230.100.10
        orion.2x.to.    116    IN    A    87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle them through. Some just propagate the first entry; others try to determine which is best (regionally near) by looking at the IP address. However, if the userbase is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy from the highest to the lowest loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and not all of them have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240 second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers bump a lower specified TTL up to 120, or so I have heard), I think something like this should occur in an ideal scenario:

        First 120 seconds:
        50% of requests get server A -> keep it for 240 seconds
        50% of requests get server B -> keep it for 120 seconds

        Second 120 seconds:
        50% of requests still have server A cached -> keep it for another 120 seconds
        25% of requests get server A -> keep it for 240 seconds
        25% of requests get server B -> keep it for 120 seconds

        Third 120 seconds:
        25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
        25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
        25% will have server A cached for another 120 seconds
        12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
        12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

        Fourth 120 seconds:
        25% will have server A cached -> cache for another 120 secs
        12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
        12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
        6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
        6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
        12.5% will have server A cached -> cache another 120 secs

    ... I think I lost something at this point, but I think you get the idea. As you can see this gets pretty complicated to predict, and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution! I know that weighted round robin exists and is just controlled by the root server. It just cycles through DNS records when responding, and returns DNS records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution, and it doesn't require a DNS server that controls this dynamically, which saves resources; in my opinion that is the whole point of DNS load balancing versus hardware load balancers. My question now is: are there any best practices / methods / rules of thumb to weight a round robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck without eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
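
    As a sanity check on the idea, below is a small simulation under deliberately idealised assumptions (every client makes one request per tick, re-resolves the instant its cache expires, and the resolver answers 50/50). Under that model the steady-state share of traffic a record receives is proportional to its TTL, so a 240s record gets roughly twice the share of a 120s one; the achievable weighting grows only linearly with TTL:

        import random

        def simulate(ttls, clients=500, ticks=3000):
            """Share of requests each server receives for the given record TTLs.

            Idealised: uniform 50/50 resolution on expiry and a steady
            request rate per client. Real resolver behaviour is messier,
            as the question itself notes.
            """
            expires = [0] * clients
            choice = [0] * clients
            hits = [0] * len(ttls)
            for t in range(ticks):
                for c in range(clients):
                    if t >= expires[c]:          # cache expired: re-resolve
                        s = random.randrange(len(ttls))
                        choice[c] = s
                        expires[c] = t + ttls[s]
                    hits[choice[c]] += 1
            total = sum(hits)
            return [round(h / total, 3) for h in hits]

        print(simulate([240, 120]))   # roughly [0.667, 0.333]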

    Read the article

  • weighted RNG speed problem in C++

    - by supert
    I have a (fast) bit of C++ code that samples cards from a 52-card deck:

        void sample_allcards(int table[5], int holes[], int players) {
            int temp[5 + 2 * players];
            bool try_again;
            int c, n, i;

            for (i = 0; i < 5 + 2 * players; i++) {
                try_again = true;
                while (try_again == true) {
                    try_again = false;
                    c = fast_rand52();
                    // reject collisions
                    for (n = 0; n < i + 1; n++) {
                        try_again = (temp[n] == c) || try_again;
                    }
                    temp[i] = c;
                }
            }
            copy_cards(table, temp, 5);
            copy_cards(holes, temp + 5, 2 * players);
        }

    I am implementing code to sample the hole cards according to a known distribution (stored as a 2d table). My code for this looks like:

        void sample_allcards_weighted(double weights[][HOLE_CARDS],
                                      int table[5], int holes[], int players) {
            // weights are distribution over hole cards
            int temp[5 + 2 * players];
            int n, i;

            // table cards
            for (i = 0; i < 5; i++) {
                bool try_again = true;
                while (try_again == true) {
                    try_again = false;
                    int c = fast_rand52();
                    // reject collisions
                    for (n = 0; n < i + 1; n++) {
                        try_again = (temp[n] == c) || try_again;
                    }
                    temp[i] = c;
                }
            }

            for (int player = 0; player < players; player++) {
                // hole cards according to distribution
                i = 5 + 2 * player;
                bool try_again = true;
                while (try_again == true) {
                    try_again = false;
                    // weighted-sample c1 and c2 at once
                    double w[1326];
                    memcpy(w, weights[player], sizeof(w));
                    // h is a number < 1325
                    int h = weighted_randi(w, HOLE_CARDS);
                    // i2h uses h and sets temp[i] to the 2 cards implied by h
                    i2h(&temp[i], h);
                    // reject collisions
                    for (n = 0; n < i; n++) {
                        try_again = (temp[n] == temp[i]) ||
                                    (temp[n] == temp[i+1]) || try_again;
                    }
                }
            }
            copy_cards(table, temp, 5);
            copy_cards(holes, temp + 5, 2 * players);
        }

    My problem? The weighted sampling algorithm is a factor of 10 slower. Speed is very important for my application. Is there a way to improve the speed of my algorithm to something more reasonable? Am I doing something wrong in my implementation? Thanks.
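
    Two things stand out in the weighted path: the per-sample memcpy of 1326 doubles and the linear scan inside weighted_randi, both repeated on every rejection. Since the weights don't change between draws, the standard fix is to precompute the cumulative distribution once per player and binary-search it per draw. A Python sketch of that structure (the C++ translation is mechanical):

        import bisect
        import itertools
        import random

        def make_sampler(weights):
            """O(n) setup, O(log n) per draw over indices 0..n-1.

            Nothing is copied per draw; the cumulative array is built once.
            The alias method would get draws to O(1) at the cost of a more
            involved setup.
            """
            cum = list(itertools.accumulate(weights))
            total = cum[-1]
            def draw():
                r = random.uniform(0, total)
                return min(bisect.bisect_right(cum, r), len(cum) - 1)
            return draw

        draw = make_sampler([0.1, 2.0, 0.4, 1.5])
        print([draw() for _ in range(10)])   # mostly 1s and 3s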

    Read the article

  • Finding the heaviest length-constrained path in a weighted Binary Tree

    - by Hristo
    UPDATE: I worked out an algorithm that I think runs in O(n*k) running time. Below is the pseudo-code:

        routine heaviestKPath( T, k )
            // create a 2D matrix with n rows and k+1 columns, each element = -infinity
            // we make it size k+1 because the 0th column must be all 0s for a later
            // function to work properly, and for simplicity in our algorithm
            matrix = new array[ T.getVertexCount() ][ k + 1 ] (-infinity)

            // set all elements in the first column of this matrix = 0
            matrix[ n ][ 0 ] = 0

            // fill our matrix by traversing the tree
            traverseToFillMatrix( T.root, k )

            // consider a path that would arc over a node
            globalMaxWeight = -infinity
            findArcs( T.root, k )
            return globalMaxWeight
        end routine

        // node = the current node; k = the path length; node.lc = node's left child;
        // node.rc = node's right child; node.idx = node's index (row) in the matrix;
        // node.lc.wt/node.rc.wt = weight of the edge to the left/right child
        routine traverseToFillMatrix( node, k )
            if (node == null) return
            traverseToFillMatrix( node.lc, k )   // recurse left
            traverseToFillMatrix( node.rc, k )   // recurse right
            // in the case that a left/right child doesn't exist, or both,
            // let's assume the code is smart enough to handle these cases
            matrix[ node.idx ][ 1 ] = max( node.lc.wt, node.rc.wt )
            for i = 2 to k {
                // max returns the heavier of the 2 paths
                matrix[ node.idx ][ i ] = max( matrix[ node.lc.idx ][ i-1 ] + node.lc.wt,
                                               matrix[ node.rc.idx ][ i-1 ] + node.rc.wt )
            }
        end routine

        // node = the current node, k = the path length
        routine findArcs( node, k )
            if (node == null) return
            nodeMax = matrix[ node.idx ][ k ]
            longPath = path[ node.idx ][ k ]
            i = 1; j = k - 1
            while ( i + j == k AND i < k ) {
                left  = node.lc.wt + matrix[ node.lc.idx ][ i-1 ]
                right = node.rc.wt + matrix[ node.rc.idx ][ j-1 ]
                if ( left + right > nodeMax ) {
                    nodeMax = left + right
                }
                i++; j--
            }
            // if this node's max weight is larger than the global max weight, update
            if ( globalMaxWeight < nodeMax ) {
                globalMaxWeight = nodeMax
            }
            findArcs( node.lc, k )   // recurse left
            findArcs( node.rc, k )   // recurse right
        end routine

    Let me know what you think. Feedback is welcome.

    I think I have come up with two naive algorithms that find the heaviest length-constrained path in a weighted binary tree. First, the description of the problem is as follows: given an n-vertex binary tree with weighted edges and some value k, find the heaviest path of length k. For both algorithms, I'll need a reference to all vertices, so I'll just do a simple traversal of the tree to get a reference to all vertices, with each vertex having a reference to its left, right, and parent nodes in the tree.

    Algorithm 1: For this algorithm, I'm basically planning on running DFS from each node in the tree, with consideration to the fixed path length. In addition, since the path I'm looking for has the potential of going from left subtree to root to right subtree, I will have to consider 3 choices at each node. But this will result in an O(n*3^k) algorithm and I don't like that.

    Algorithm 2: I'm essentially thinking about using a modified version of Dijkstra's algorithm in order to consider a fixed path length. Since I'm looking for the heaviest path and Dijkstra's algorithm finds the lightest, I'm planning on negating all edge weights before starting the traversal. Actually... this doesn't make sense, since I'd have to run Dijkstra's from each node, and that doesn't seem very efficient, not much better than the above algorithm.

    So I guess my main questions are several. Firstly, do the algorithms I've described above solve the problem at hand? I'm not totally certain the Dijkstra's version will work, as Dijkstra's is meant for positive edge values. Now, I am sure there exist more clever/efficient algorithms for this... what is a better algorithm? I've read about "Using spine decompositions to efficiently solve the length-constrained heaviest path problem for trees", but that is really complicated and I don't understand it at all. Are there other algorithms that tackle this problem, maybe not as efficiently as spine decomposition but easier to understand? Thanks.

    Read the article

  • Minimizing distance to a weighted grid

    - by Andrew Tomazos - Fathomling
    Let's suppose you have a 1000x1000 grid of positive integer weights W. We want to find the cell that minimizes the average weighted distance to each cell. The brute force way to do this would be to loop over each candidate cell and calculate the distance:

        int best_x, best_y, best_dist;
        for x0 = 1:1000,
            for y0 = 1:1000,
                int total_dist = 0;
                for x1 = 1:1000,
                    for y1 = 1:1000,
                        total_dist += W[x1,y1] * sqrt((x0-x1)^2 + (y0-y1)^2);
                if (total_dist < best_dist)
                    best_x = x0;
                    best_y = y0;
                    best_dist = total_dist;

    This takes ~10^12 operations, which is too long. Is there a way to do this in or near ~10^8 or so operations?
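
    If a near-optimal answer is acceptable, this objective is the weighted geometric median, and Weiszfeld's iteration converges to it in a few dozen O(n) passes, i.e. on the order of 10^8 operations for a 1000x1000 grid. A sketch (caveat: the continuous optimum can land a cell away from the best grid cell, so it is worth brute-forcing the rounded result's neighbourhood afterwards):

        import math

        def weiszfeld(points, iters=50):
            """Weighted geometric median via Weiszfeld's iteration.

            points: list of (x, y, w). Starts from the weighted mean; each
            pass re-weights every point by 1/distance to the current guess.
            """
            W = sum(w for _, _, w in points)
            x = sum(px * w for px, _, w in points) / W
            y = sum(py * w for _, py, w in points) / W
            for _ in range(iters):
                num_x = num_y = den = 0.0
                for px, py, w in points:
                    d = math.hypot(x - px, y - py)
                    if d < 1e-12:        # sitting exactly on a point: skip it
                        continue
                    num_x += w * px / d
                    num_y += w * py / d
                    den += w / d
                x, y = num_x / den, num_y / den
            return x, y

        print(weiszfeld([(0, 0, 1), (10, 0, 1), (0, 10, 1), (10, 10, 5)]))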

    Read the article

  • Generate random number from an arbitrary weighted list

    - by Fernando
    Here's what I need to do; I'll be doing this both in PHP and JavaScript. I have a list of numbers that will range from 1 to 300-500 (I haven't set the limit yet). I will be running a drawing where 10 numbers will be picked at random from the given range. Here's the tricky part: I want some numbers to be less likely to be drawn. A small set of those 300-500 will be flagged as "lucky numbers". For example, out of 100 drawings, most numbers have equal chances of being drawn, except for a few that will only be picked once every 30-50 drawings. Basically I need to artificially set the probability of certain numbers being picked while maintaining an even distribution among the rest of the numbers. The only similar thing I've found so far is this question: Generate A Weighted Random Number. The problem is that my spec has considerably more numbers (up to 500), so the weights would get very small, and supposedly this could be a problem with that solution (rejection sampling). I'm still trying it, though, but I wonder if there are other solutions. Math is not my thing so I appreciate any input. Thanks.

    Read the article

  • Weighted random selection using Walker's Alias Method (c# implementation)

    - by Chuck Norris
    I was looking for this algorithm (an algorithm which will randomly select from a list of elements where each element has a different probability of being picked, i.e. a weight) and found only Python and C implementations. After I wrote a C# one, a bit different (but I think simpler), I thought I should share it and ask your opinion. This is it:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace ChuckNorris
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var oo = new Dictionary<string, int>
                    {
                        {"A", 7}, {"B", 1}, {"C", 9}, {"D", 8}, {"E", 11},
                    };
                    var rnd = new Random();
                    var pick = rnd.Next(oo.Values.Sum());
                    var sum = 0;
                    var res = "";
                    foreach (var o in oo)
                    {
                        sum += o.Value;
                        if (sum >= pick)
                        {
                            res = o.Key;
                            break;
                        }
                    }
                    Console.WriteLine("result is " + res);
                }
            }
        }

    If anyone can remake it in F#, please post your code.
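
    For comparison, the snippet above is a linear cumulative scan (O(n) per draw) rather than Walker's alias method proper, which pays O(n) setup once and then draws in O(1). A Python sketch of Vose's variant of the alias method, using the same weights as the C# example:

        import random

        def build_alias(weights):
            """Vose's alias method: O(n) setup, O(1) per draw.

            Each of the n columns holds at most two outcomes: itself (with
            probability prob[i]) and its alias. A draw picks a column
            uniformly, then flips a biased coin between the two.
            """
            n = len(weights)
            total = sum(weights)
            scaled = [w * n / total for w in weights]
            small = [i for i, s in enumerate(scaled) if s < 1.0]
            large = [i for i, s in enumerate(scaled) if s >= 1.0]
            prob, alias = [0.0] * n, [0] * n
            while small and large:
                s, l = small.pop(), large.pop()
                prob[s], alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]          # give the remainder to l
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:                   # leftovers have probability 1
                prob[i] = 1.0
            return prob, alias

        def draw(prob, alias):
            i = random.randrange(len(prob))
            return i if random.random() < prob[i] else alias[i]

        prob, alias = build_alias([7, 1, 9, 8, 11])   # weights of A..E above
        print("ABCDE"[draw(prob, alias)])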

    Read the article

  • weighted matching algorithm in Perl

    - by srk
    Problem: We have an equal number of men and women. Each man has a preference score toward each woman, as does each woman toward each man. Each of the men and women has certain interests, and based on the interests we calculate the preference scores. So initially we have an input file with x columns. The first column is the person (man/woman) id; the ids are just the numbers 0..n (the first half are men, the next half women). The remaining x-1 columns hold the interests, which are integers too. Now, using this n by x-1 matrix, we come up with an n by n/2 matrix. The new matrix has all men and women as its rows, and scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know the id of the person each score relates to after sorting. So here I wanted to use a hash table. Once we get the scores we need to make up pairs, for which we need to follow some rules. My trouble is with the second matrix of n by n/2 that needs to give information on which man/woman has how much preference for a woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the second preferred, and so on for each man/woman. I hope to get good suggestions on the data structures I use. I prefer PHP or Perl. Thank you in advance.

    Read the article

  • postgresql weighted average?

    - by milovanderlinden
    Say I have a PostgreSQL table with the following values:

        id | value
        ---+------
         1 |     4
         2 |     8
         3 |   100
         4 |     5
         5 |     7

    If I use PostgreSQL to calculate the average, it gives me an average of 24.8, because the high value of 100 has a great impact on the calculation. While in fact I would like to find an average somewhere around 6 and eliminate the extreme(s). I am looking for a way to eliminate extremes and want to do this "statistically correctly". The extremes cannot be fixed; I cannot say "if a value is over X, it has to be eliminated". I have been bending my head over the PostgreSQL aggregate functions but cannot put my finger on what is right for me to use. Any suggestions?
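
    One statistically conventional option is a trimmed mean: sort the values, drop a fixed fraction from each tail, and average what remains (the median is the extreme case of this idea). A minimal sketch of the computation itself; whether you do it client-side like this or push it into SQL with percentile/window functions is a separate choice:

        def trimmed_mean(values, trim=0.2):
            """Mean after dropping the lowest and highest `trim` fraction.

            With trim=0.2 on the question's data this drops 4 and 100,
            leaving (5 + 7 + 8) / 3 = 6.67, close to the hoped-for ~6.
            """
            vals = sorted(values)
            k = int(len(vals) * trim)
            kept = vals[k:len(vals) - k] if k else vals
            return sum(kept) / len(kept)

        print(trimmed_mean([4, 8, 100, 5, 7]))   # 6.666...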

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large'. I want the ObjectIDs in order of the sum of weights (so objects having both tags will be on top).

        select objectid,
               row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index (objectid, tag, type). Any suggestions?

    Read the article

  • Weighted random numbers in MATLAB

    - by yuk
    How do I randomly pick N numbers from a vector a, with a weight assigned to each number? Let's say:

        a = 1:3;                 % possible numbers
        weight = [0.3 0.1 0.2];  % corresponding weights

    In this case the probability of picking 1 should be 3 times higher than that of picking 2. The sum of all weights can be anything.
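
    For reference, if the Statistics Toolbox is available, randsample(a, N, true, weight) does this directly in MATLAB. The NumPy equivalent is a one-liner too, except that the probabilities must be normalized explicitly; a tiny sketch:

        import numpy as np

        a = np.arange(1, 4)                 # possible numbers: 1, 2, 3
        weight = np.array([0.3, 0.1, 0.2])

        # p must sum to 1, so divide by the total weight first.
        picks = np.random.choice(a, size=10, replace=True, p=weight / weight.sum())
        print(picks)   # 1 appears about three times as often as 2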

    Read the article

  • Gomoku array-based AI-algorithm?

    - by Lasse V. Karlsen
    Way, way back (think 20+ years) I encountered a Gomoku game source code in a magazine that I typed in for my computer and had a lot of fun with. The game was difficult to win against, but the core algorithm for the computer AI was really simple and didn't account for a lot of code. I wonder if anyone knows this algorithm and has some links to some source or theory about it. The things I remember: it basically allocated an array that covered the entire board. Then, whenever I, or it, placed a piece, it would add a number of weights to all locations on the board that the piece would possibly impact. For instance (note that the weights are definitely wrong, as I don't remember those):

        1   1   1
         2  2  2
          3 3 3
           444
        1234X4321
          3 3 3
         2  2  2
        1   1   1

    Then it simply scanned the array for an open location with the lowest or highest value. Things I'm fuzzy on: perhaps it had two arrays, one for me and one for itself, and there was a min/max weighting? There might've been more to the algorithm, but at its core it was basically an array and weighted numbers. Does this ring a bell with anyone at all? Anyone got anything that would help?
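
    The mechanism described (stamp additive weights around every move, then scan for the best open cell) fits in a few lines. A hedged sketch; the board size, decay weights, and single shared score array are all assumptions, and the magazine version may well have kept separate attack/defence arrays combined min/max style:

        N = 15
        board = [[None] * N for _ in range(N)]
        score = [[0] * N for _ in range(N)]

        def stamp(x, y, weights=(4, 3, 2, 1)):
            """Add decaying weights along all 8 rays from a newly placed piece."""
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == dy == 0:
                        continue
                    for dist, w in enumerate(weights, start=1):
                        nx, ny = x + dx * dist, y + dy * dist
                        if 0 <= nx < N and 0 <= ny < N:
                            score[nx][ny] += w

        def best_open_cell():
            """Scan for the empty cell with the highest accumulated weight."""
            return max(((x, y) for x in range(N) for y in range(N)
                        if board[x][y] is None),
                       key=lambda c: score[c[0]][c[1]])

        board[7][7] = "X"   # opponent plays the centre
        stamp(7, 7)
        print(best_open_cell())   # one of the adjacent cells, where weight peaks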

    Read the article

  • PHP Game weapon accuracy

    - by noko
    I'm trying to come up with a way for players to fire their weapons and only hit for a certain percentage. For example, one gun can only hit 70% of the time while another only hits 34% of the time. So far all I could come up with is weighted arrays.

    Attempt 1:

        private function weighted_random(&$weight)
        {
            $weights = array(($weight / 100), (100 - $weight) / 100);
            $r = mt_rand(1, 1000);
            $offset = 0;
            foreach ($weights as $k => $w) {
                $offset += $w * 1000;
                if ($r <= $offset) return $k;
            }
        }

    Attempt 2:

        private function weapon_fired(&$weight)
        {
            $hit = array();
            for ($i = 0; $i < $weight; $i++)
                $hit[] = true;
            for ($i = $weight; $i < 100; $i++)
                $hit[] = false;
            shuffle($hit);
            return $hit[mt_rand(0, 100)];
        }

    It doesn't seem that the players are hitting the correct percentages, but I'm not really sure why. Any ideas or suggestions? Is anything glaringly wrong with these? Thanks
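
    One thing to note: in Attempt 2, mt_rand(0, 100) produces 101 possible indices for a 100-element array, so index 100 reads an undefined entry that evaluates as a miss, skewing every weapon down slightly. For a plain fixed hit percentage, a single comparison is enough; a sketch in Python (the PHP equivalent would be mt_rand(1, 100) <= $weight):

        import random

        def weapon_fired(accuracy):
            """True with probability accuracy/100, e.g. 70 hits ~70% of the time."""
            return random.randint(1, 100) <= accuracy

        # Quick distribution check:
        trials = 100000
        hits = sum(weapon_fired(70) for _ in range(trials))
        print(hits / trials)   # ~0.70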

    Read the article

  • Ranking/weighing search results

    - by biso
    I am trying to build an application that has a smart adaptive search engine (let's say for cars). If I search for 4x4, then the DB will return all the 4x4 cars I have (100 cars). But as time goes by and I start checking out cars, liking them, commenting on them, etc., the order of the search results should change. That means that a month later, when searching for 4x4, I should get the same result set ordered differently based on my previous interaction with the site. If I was mainly liking and commenting on German cars, a BMW should be at the top and a Land Cruiser further down. This ranking should be based on attributes that I captured during user interaction (e.g. car origin, user age, user location, car type [4x4, coupe, hatchback], price range). So for each car result I get, I will be weighing it based on how well it performs on the 5 attributes above. I intend to use the DB just as a repository and do the ranking and the thinking on the server. My question is: what kind of algorithm should I be using to weigh/rank my search results? Thanks.

    Read the article

  • finding the total number of distinct shortest paths between 2 nodes in undirected weighted graph in linear time?

    - by logan
    I was wondering: if there is a weighted graph G(V,E) and I need to find a single shortest path between any two vertices S and T in it, I could use Dijkstra's algorithm. But I am not sure how this can be done when we need to find all the distinct shortest paths from S to T. Is it solvable in O(n) time? I had one more question: if we assume that the weights of the edges in the graph can take values only in a certain range, let's say 1 <= w(e) <= 2, will this affect the time complexity?
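
    One distinction worth making: counting the distinct shortest paths needs only a small change to Dijkstra (keep a per-vertex path count, reset it when a strictly shorter distance is found, add to it on ties), whereas listing them can take exponential output size, so no enumeration algorithm can be O(n) in general. A minimal counting sketch:

        import heapq

        def count_shortest_paths(adj, s, t):
            """Dijkstra with path counting.

            adj: {u: [(v, w), ...]}; an undirected graph lists each edge
            both ways. Returns (distance, number of distinct shortest
            s-t paths). Requires positive weights, as Dijkstra does.
            """
            dist = {s: 0}
            count = {s: 1}
            pq = [(0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue                       # stale queue entry
                for v, w in adj.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], count[v] = nd, count[u]
                        heapq.heappush(pq, (nd, v))
                    elif nd == dist[v]:
                        count[v] += count[u]       # another shortest way in
            return dist.get(t), count.get(t, 0)

        # A square a-b-c-d: two distinct shortest paths of length 2 from a to c.
        adj = {"a": [("b", 1), ("d", 1)], "b": [("a", 1), ("c", 1)],
               "d": [("a", 1), ("c", 1)], "c": [("b", 1), ("d", 1)]}
        print(count_shortest_paths(adj, "a", "c"))   # (2, 2)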

    Read the article

  • Enumerate all paths in a weighted graph from A to B where path length is between C1 and C2

    - by awmross
    Given two points A and B in a weighted graph, find all paths from A to B where the length of the path is between C1 and C2. Ideally, each vertex should only be visited once, although this is not a hard requirement. I suppose I could use a heuristic to sort the results of the algorithm to weed out "silly" paths (e.g. a path that just visits the same two nodes over and over again). I can think of simple brute force algorithms, but are there any more sophisticated algorithms that will make this more efficient? I can imagine that as the graph grows this could become expensive. In the application I am developing, A and B are actually the same point (i.e. the path must return to the start), if that makes any difference. Note that this is an engineering problem, not a computer science problem, so I can use an algorithm that is fast but not necessarily 100% accurate. That is, it is OK if it returns most of the possible paths, or if most of the paths returned are within the given length range.
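
    The usual exact baseline is a depth-first search over simple paths that prunes any branch whose running length already exceeds C2 (valid when all weights are positive, since extending a path can then only make it longer). A hedged sketch that also handles the round-trip case A == B; note it still reports trivial out-and-back paths like A-B-A, which can be filtered afterwards:

        def paths_in_range(adj, start, goal, c1, c2):
            """Enumerate simple paths from start to goal with weight in [c1, c2].

            adj: {u: [(v, w), ...]} with each undirected edge listed both
            ways. The goal vertex only ever appears as the final step, so
            passing goal == start enumerates round trips.
            """
            results = []

            def dfs(u, length, path, visited):
                for v, w in adj.get(u, []):
                    nl = length + w
                    if nl > c2:
                        continue                 # prune: branch already too long
                    if v == goal:
                        if nl >= c1:
                            results.append((path + [v], nl))
                        continue                 # goal is terminal, don't pass through
                    if v not in visited:
                        visited.add(v)
                        dfs(v, nl, path + [v], visited)
                        visited.remove(v)

            dfs(start, 0, [start], {start})
            return results

        adj = {"A": [("B", 2), ("C", 1)],
               "B": [("A", 2), ("C", 2)],
               "C": [("A", 1), ("B", 2)]}
        print(paths_in_range(adj, "A", "A", 4, 6))
        # [(['A', 'B', 'A'], 4), (['A', 'B', 'C', 'A'], 5), (['A', 'C', 'B', 'A'], 5)]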

    Read the article
