Search Results

Search found 5638 results on 226 pages for 'scheduling algorithm'.

  • Finding cities close to one another using longitude and latitude

    - by Jamie
    Each user in my db is associated with a city (with its longitude and latitude). How would I go about finding out which cities are close to one another? For example, in England, Cambridge is fairly close to London, so if I have a user who lives in Cambridge, users close to them would be users living in nearby surrounding cities such as London, Hertford, etc. Any ideas how I could go about this? And also, how would I define what "close" is? In the UK, "close" would be much closer than it would be in the US, as the US is far more spread out. Ideas and suggestions are welcome. Also, do you know of any services that provide this sort of functionality? Thanks
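
    A minimal sketch of one common way to do this (my addition, not from the post): store each city's coordinates, compute great-circle distances with the haversine formula, and treat "close" as a configurable radius (e.g. 50 km in the UK, larger in the US). The function names and the radius are illustrative.

      import math

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between two lat/lon points, in kilometres.
          r = 6371.0  # mean Earth radius in km
          dphi = math.radians(lat2 - lat1)
          dlam = math.radians(lon2 - lon1)
          a = (math.sin(dphi / 2) ** 2
               + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
               * math.sin(dlam / 2) ** 2)
          return 2 * r * math.asin(math.sqrt(a))

      def nearby_cities(user_city, cities, radius_km=50):
          # user_city and each entry of cities are (name, lat, lon) tuples.
          name0, lat0, lon0 = user_city
          return [name for name, lat, lon in cities
                  if name != name0 and haversine_km(lat0, lon0, lat, lon) <= radius_km]

    For a hosted alternative, gazetteers such as GeoNames publish city coordinates that this kind of radius query can run against.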

  • Can my tortoise vs. hare race be improved?

    - by FredOverflow
    Here is my code for detecting cycles in a linked list:

      do {
          hare = hare.next();
          if (hare == back) return;
          hare = hare.next();
          if (hare == back) return;
          tortoise = tortoise.next();
      } while (tortoise != hare);
      throw new AssertionError("cyclic linkage");

    Is there a way to get rid of the code duplication inside the loop? Am I right in assuming that I don't need a check after making the tortoise take a step forward? As I see it, the tortoise can never reach the end of the list before the hare (contrary to the fable). Any other ways to simplify/beautify this code?
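
    For comparison, here is the textbook Floyd cycle-detection loop with no duplicated step, written as a Python sketch (it assumes a None-terminated list rather than the sentinel node "back" used above, so it illustrates the restructuring rather than being a drop-in replacement):

      def has_cycle(head):
          # Floyd's tortoise-and-hare: advance the hare two nodes per iteration
          # and the tortoise one; they can only meet if the list is cyclic.
          slow = fast = head
          while fast is not None and fast.next is not None:
              slow = slow.next
              fast = fast.next.next
              if slow is fast:
                  return True
          return False

    Checking only the hare against the end is indeed sufficient, since the tortoise always trails behind it.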

  • O(log N) == O(1) - Why not?

    - by phoku
    Whenever I consider algorithms/data structures I tend to replace the log(N) parts with constants. Oh, I know log(N) diverges - but does it matter in real world applications? log(infinity) < 100 for all practical purposes. I am really curious about real-world examples where this doesn't hold.

    To clarify:
    - I understand O(f(N)).
    - I am curious about real-world examples where the asymptotic behaviour matters more than the constants of the actual performance.
    - If log(N) can be replaced by a constant, it still can be replaced by a constant in O(N log N).

    This question is for the sake of (a) entertainment and (b) gathering arguments to use if I run (again) into a controversy about the performance of a design.
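
    A quick numeric illustration of the "log of anything practical is small" claim (my addition):

      import math

      # Even for inputs far beyond anything that fits in memory,
      # the log factor stays well below 100:
      for n in (10**6, 10**9, 10**12, 2**64):
          print(n, round(math.log2(n), 1))
      # 1000000 19.9
      # 1000000000 29.9
      # 1000000000000 39.9
      # 18446744073709551616 64.0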

  • How to find validity of a string of parentheses, curly brackets and square brackets?

    - by Rajendra
    I recently came in contact with this interesting problem. You are given a string containing just the characters '(', ')', '{', '}', '[' and ']', for example "[{()}]", and you need to write a function which will check the validity of such an input string; the function may be like this:

      bool isValid(char* s);

    The brackets have to close in the correct order; for example, "()" and "()[]{}" are valid but "(]", "([)]" and "{{{{" are not!

    I came up with the following solution, with O(n) time and O(n) space complexity, which works fine:

    1. Maintain a stack of characters.
    2. Whenever you find an opening bracket '(', '{' or '[', push it onto the stack.
    3. Whenever you find a closing bracket ')', '}' or ']', check whether the top of the stack is the corresponding opening bracket; if yes, pop the stack, else break the loop and return false.
    4. Repeat steps 2-3 until the end of the string.

    This works, but can we optimize it for space, maybe down to constant extra space? I understand that the time complexity cannot be less than O(n), as we have to look at every character. So my question is: can we solve this problem in O(1) space?
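
    A minimal Python sketch of the stack-based check described in steps 1-4 above (the original asks for a C-style bool isValid(char*); this is just the same idea in a different language):

      def is_valid(s):
          pairs = {')': '(', ']': '[', '}': '{'}
          stack = []
          for ch in s:
              if ch in '([{':
                  stack.append(ch)              # remember the opening bracket
              elif ch in pairs:
                  if not stack or stack.pop() != pairs[ch]:
                      return False              # mismatched or unopened bracket
          return not stack                      # leftovers mean unclosed brackets

      assert is_valid("()[]{}") and is_valid("[{()}]")
      assert not is_valid("(]") and not is_valid("([)]") and not is_valid("{{{{")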

  • How does the Ableton warp algorithm work exactly?

    - by pepperdreamteam
    I'm looking for any documentation or definitive information on Ableton's warp feature. I understand that it has something to do with finding transients, aligning them with an even rhythm and shifting audio samples accordingly. I'm hoping to find ways to approximate warping with more basic audio editing tools. I understand that this is Ableton's own proprietary device; really, any information about how it works would be helpful. So... does anyone have any 411?

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
    I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool.

    Consider the following "blueprint" which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage.

      . d . d c c
      d d d d c c
      . d d d . c
      d d d d d c
      . d . d d c

    To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions. For the above blueprint, the solution looks like this:

      . 9 . 0 3 2
      8 1 1 1 3 2
      . 1 1 1 . 2
      7 1 1 1 4 2
      . 6 . 5 4 2

    Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant.

    My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle.

    I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough.

    More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However, I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead:

      . 3 . 5 6 8
      1 3 4 5 6 8
      . 3 4 5 . 8
      2 3 4 5 7 8
      . 3 . 5 7 8

    This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-hard, based on my limited math knowledge.

    Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor-path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
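
    For illustration, here is a straightforward (and deliberately unoptimized) Python sketch of the greedy largest-rectangle-first strategy described above. It is my paraphrase of the idea, not Quickfort's actual code, and it recomputes candidate rectangles from scratch on every pass, which is exactly the kind of wasted work the post complains about.

      def largest_uniform_rect(grid, used):
          # Return (area, row, col, height, width) of the largest rectangle of
          # identical, not-yet-used, plottable cells; None if nothing is left.
          rows, cols = len(grid), len(grid[0])
          best = None
          for r in range(rows):
              for c in range(cols):
                  ch = grid[r][c]
                  if ch == '.' or used[r][c]:
                      continue
                  max_w = cols - c
                  for h in range(1, rows - r + 1):
                      # Shrink the allowed width as each new bottom row is added.
                      w = 0
                      while (w < max_w and grid[r + h - 1][c + w] == ch
                             and not used[r + h - 1][c + w]):
                          w += 1
                      max_w = w
                      if max_w == 0:
                          break
                      if best is None or h * max_w > best[0]:
                          best = (h * max_w, r, c, h, max_w)
          return best

      def cover_with_rectangles(grid):
          # Greedily cover all plottable cells with the largest uniform rectangles.
          rows, cols = len(grid), len(grid[0])
          used = [[False] * cols for _ in range(rows)]
          rects = []
          while True:
              best = largest_uniform_rect(grid, used)
              if best is None:
                  break
              _, r, c, h, w = best
              for i in range(r, r + h):
                  for j in range(c, c + w):
                      used[i][j] = True
              rects.append((grid[r][c], r, c, h, w))
          return rects

      grid = [". d . d c c".split(),
              "d d d d c c".split(),
              ". d d d . c".split(),
              "d d d d d c".split(),
              ". d . d d c".split()]
      for cmd, r, c, h, w in cover_with_rectangles(grid):
          print(cmd, "at row", r, "col", c, "size", h, "x", w)

    Caching each cell's best rectangle and invalidating only the cells touched by the most recent plot is one way to avoid recomputing everything on every iteration.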

  • Adding two long positive integers & searching for an element in an array

    - by AGeek
    Hi, I have two questions for which I cannot find answers by googling, but I find these questions very important for preparation. Kindly explain only the logic; I will be able to code it. I'm in search of efficient logic, in terms of memory and time.

    1. Write a program to add two long positive integers. What data structure / data type can we use to store the numbers and the result?
    2. What is the best way to search for an element in an array in the shortest time? The size of the array could be large, and any elements could be stored in the array (i.e. no range).

    Thanks.
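
    For the first question, a minimal Python sketch of grade-school addition over digit strings (my addition; Python's built-in int is already arbitrary precision, but the digit-array-plus-carry logic below is the language-neutral answer the question is after):

      def add_decimal_strings(a, b):
          # Add two non-negative integers given as decimal strings,
          # most significant digit first, e.g. "987" + "2345" -> "3332".
          i, j, carry = len(a) - 1, len(b) - 1, 0
          digits = []
          while i >= 0 or j >= 0 or carry:
              da = ord(a[i]) - ord('0') if i >= 0 else 0
              db = ord(b[j]) - ord('0') if j >= 0 else 0
              carry, d = divmod(da + db + carry, 10)
              digits.append(str(d))
              i -= 1
              j -= 1
          return ''.join(reversed(digits))

      assert add_decimal_strings("987", "2345") == "3332"

    For the second question, with no structure on the elements a single linear scan is the best possible for one lookup; if many lookups are needed, sorting once for binary search or building a hash set is the usual trade-off.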

  • Fast, Vectorizable method of taking floating point number modulus of special primes?

    - by caffiend
    Is there a fast method for taking the modulus of a floating point number? With integers, there are tricks for Mersenne primes, so that it's possible to calculate y = x MOD 2^31 without needing division. Can any similar tricks be applied to floating point numbers? Preferably in a way that can be converted into vector/SIMD operations, or moved into GPGPU code. The primes I'm interested in would be 2^7 and 2^31, although if there are more efficient ones for floating point numbers, those would be welcome.
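
    For reference, the integer trick the post alludes to is usually stated for moduli of the form 2^k - 1 (e.g. the Mersenne prime 2^31 - 1): since 2^k is congruent to 1 modulo 2^k - 1, the high and low k-bit halves can simply be added. A Python sketch of that reduction (my addition):

      def mod_mersenne(x, k):
          # Reduce a non-negative integer x modulo m = 2**k - 1
          # using only shifts, masks and adds -- no division.
          m = (1 << k) - 1
          while x > m:
              x = (x & m) + (x >> k)
          return 0 if x == m else x

      assert mod_mersenne(12345678901, 31) == 12345678901 % (2**31 - 1)

    For floating-point values, a common vector-friendly substitute for a true modulus (not specific to Mersenne moduli) is x - floor(x * inv_m) * m with inv_m = 1.0 / m precomputed, which maps directly onto SIMD/GPU multiply, floor and fused multiply-add operations.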

  • Finding the position of the max element

    - by Faken
    Is there a standard function that returns the position (not the value) of the max element of an array of values? For example, say I have an array like this:

      sampleArray = [1, 5, 2, 9, 4, 6, 3]

    I want a function that returns the integer 3, telling me that sampleArray[3] is the largest value in the array.
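
    The post doesn't name a language; as one illustration, in Python the index of the maximum can be obtained directly (in C++ the analogue would be std::max_element combined with std::distance):

      sample_array = [1, 5, 2, 9, 4, 6, 3]

      # Index of the largest element: 3 here, because sample_array[3] == 9.
      max_index = max(range(len(sample_array)), key=sample_array.__getitem__)

      # Equivalent, but scans the list twice:
      max_index_alt = sample_array.index(max(sample_array))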

  • Check if a string substitution rule will ever generate another string.

    - by Mgccl
    Given two strings S and T of the same length, and given a set of replacement rules, each of which finds a substring A in S and replaces it with a string B (where A and B have the same length): is there a sequence of rule applications that turns string S into string T? I believe there is no better way to answer this than to try every single rule in every single state, which would take exponential time. But I don't know if there are better solutions to it.
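
    A Python sketch of the brute-force search the post describes, written as a breadth-first search (my addition); a visited set at least avoids re-exploring repeated states, though the state space can still be exponential:

      from collections import deque

      def reachable(s, t, rules):
          # rules: list of (a, b) pairs with len(a) == len(b).
          seen = {s}
          queue = deque([s])
          while queue:
              cur = queue.popleft()
              if cur == t:
                  return True
              for a, b in rules:
                  start = cur.find(a)
                  while start != -1:
                      nxt = cur[:start] + b + cur[start + len(a):]
                      if nxt not in seen:
                          seen.add(nxt)
                          queue.append(nxt)
                      start = cur.find(a, start + 1)
          return False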

  • Space requirements of a merge-sort

    - by Arkaitz Jimenez
    I'm trying to understand the space requirements for a mergesort, O(n). I see that the time requirement is basically the number of levels (log n) times merge(n), which makes O(n log n). Now, we are still allocating n per level, in 2 different arrays, left and right. I do understand that the key here is that when the recursive functions return, the space gets deallocated, but I'm not seeing it as too obvious. Besides, all the info I find just states that the space required is O(n) but doesn't explain it. Any hint?

      function merge_sort(m)
          if length(m) = 1
              return m
          var list left, right, result
          var integer middle = length(m) / 2
          for each x in m up to middle
              add x to left
          for each x in m after middle
              add x to right
          left = merge_sort(left)
          right = merge_sort(right)
          result = merge(left, right)
          return result
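
    One way to see the O(n) bound (my explanation, not from the post): the recursion explores one branch at a time, so at any instant only a single root-to-leaf chain of calls is live; the left/right temporaries held along that chain sum to at most n/2 + n/4 + ... < n, and the merge output adds at most another n, so the extra space beyond the input stays O(n). A plain Python version of the same pseudocode, where those slices are exactly the temporaries in question:

      def merge(left, right):
          # Merge two sorted lists into a new list of size len(left) + len(right).
          out, i, j = [], 0, 0
          while i < len(left) and j < len(right):
              if left[i] <= right[j]:
                  out.append(left[i]); i += 1
              else:
                  out.append(right[j]); j += 1
          out.extend(left[i:])
          out.extend(right[j:])
          return out

      def merge_sort(m):
          if len(m) <= 1:
              return m
          mid = len(m) // 2
          left = merge_sort(m[:mid])    # freed once merged and the call returns
          right = merge_sort(m[mid:])
          return merge(left, right)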

  • arbitrary vire connection / search and replace

    - by fatai
    input :["vire_connection",[1, 2, [ 3, [ 4, "connect"]]], ["connect", [3 , 5] ] ] output:["vire_connection",[ 1, 2, [ 3, [ 4, [ 3, 5 ] ] ] ] ], [ [ 3 , 5] ] ] after connection ( simply copying [3,5] to other wanted position ) , remove connect word input :["vire_connection", [ [ [ ["connect", [ 3, 4 ] ] ] ] ], [ 2, "connect"]] output :["vire_connection",[[[[[3,4]]]]], [ 2, [ 3 , 4 ]]] after connection ( simply copying [3,4] to other wanted position ) , remove connect word how can I do ?

  • Given a word, how do I get the list of all words, that differ by one letter?

    - by user187809
    Let's say I have the word "CAT". These words differ from "CAT" by one letter (not the full list): CUT, CAP, PAT, FAT, COT, etc. Is there an elegant way to generate this? Obviously, one way to do it is through brute force. Pseudocode:

      while (0 to length of word)
          while (A to Z)
              replace one letter at a time, and check if the resulting word is a valid word

    If I had a 10-letter word, the loop would run 26 * 10 = 260 times. Is there a better, more elegant way to do this?
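
    A sketch of one common alternative (my addition): bucket every dictionary word under each of its single-letter "wildcard" patterns, so the neighbours of a word are simply the other members of its buckets.

      from collections import defaultdict

      def build_buckets(dictionary):
          # Map wildcard patterns like "C_T" to all dictionary words matching them.
          buckets = defaultdict(set)
          for word in dictionary:
              for i in range(len(word)):
                  buckets[word[:i] + "_" + word[i + 1:]].add(word)
          return buckets

      def one_letter_neighbours(word, buckets):
          neighbours = set()
          for i in range(len(word)):
              neighbours |= buckets[word[:i] + "_" + word[i + 1:]]
          neighbours.discard(word)
          return neighbours

      # Example with a tiny dictionary, just for illustration:
      words = {"CAT", "CUT", "CAP", "PAT", "FAT", "COT", "DOG"}
      buckets = build_buckets(words)
      print(one_letter_neighbours("CAT", buckets))  # {'CUT', 'CAP', 'PAT', 'FAT', 'COT'}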

  • How does Photoshop (Or drawing programs) blit?

    - by user146780
    I'm getting ready to make a drawing application in Windows. I'm just wondering: do drawing programs keep a memory bitmap which they lock, set each pixel in, and then blit? I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also, in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help. I don't think super-efficient algorithms alone could explain that. Thanks

  • Using C#, C/C++ or Java to improve a BBN with a GA

    - by madicemickael
    I have a little problem in my little project, and I wish that someone here could help me! I am planning to use a Bayesian network as a decision factor in my game AI, and I want to improve the decision making every step of the way. Does anyone know how to do that? Any tutorials / existing implementations would be very good; I hope some of you can help me. I heard that a programmer in this community put together a good implementation of this for a poker game AI. I am planning to use it like he did, but for another poker variant (Texas) or maybe Rentz. Looking for C/C++, C# or Java code. Thanks, Mike

  • Tracking/Counting Word Frequency

    - by Joel Martinez
    I'd like to get some community consensus on a good design to be able to store and query word frequency counts. I'm building an application in which I have to parse text inputs and store how many times a word has appeared (over time). So given the following inputs:

      "To Kill a Mocking Bird"
      "Mocking a piano player"

    Would store the following values:

      Word     Count
      -------------
      To       1
      Kill     1
      A        2
      Mocking  2
      Bird     1
      Piano    1
      Player   1

    And later be able to quickly query for the count value of a given arbitrary word. My current plan is to simply store the words and counts in a database, and rely on caching word count values... but I suspect that I won't get enough cache hits to make this a viable solution long term. Can anyone suggest algorithms, or data structures, or any other idea that might make this a well-performing solution?
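
    For the in-memory counting part, a minimal Python sketch using collections.Counter (my addition; the tokenizer is deliberately simplistic):

      from collections import Counter
      import re

      word_counts = Counter()

      def ingest(text):
          # Tokenize on letters/apostrophes and count case-insensitively.
          word_counts.update(re.findall(r"[a-z']+", text.lower()))

      ingest("To Kill a Mocking Bird")
      ingest("Mocking a piano player")
      print(word_counts["mocking"])  # 2
      print(word_counts["piano"])    # 1

    If the counts must survive restarts, one option is to flush such a counter periodically to a table keyed by word with an integer count column, which keeps single-word lookups cheap.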

  • Picking apples off a tree

    - by John Retallack
    I have the following problem: I am given a tree with N apples; for each apple I am given its weight and height. I can pick apples up to a given height H, and each time I pick an apple the height of every apple is increased by U. I have to find out the maximum weight of apples I can pick.

      1 <= N <= 100000
      0 < {H, U, apples' weight and height, maximum weight} < 2^31

    Example:

      N=4  H=100  U=10

      height  weight
      82      30
      91      10
      93      5
      94      15

    The answer is 45: first pick the apple with the weight of 15, then the one with the weight of 30. Could someone help me approach this problem?
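
    One possible approach (my suggestion, not from the post): treat it as unit-time scheduling with deadlines. An apple of height h can only be the (j+1)-th pick if h + j*U <= H, so it has a "deadline" of floor((H - h) / U) + 1 picks. Sorting by deadline and keeping a min-heap of chosen weights, evicting the lightest apple whenever the schedule overflows, is the classic greedy for this kind of problem and reproduces the example answer of 45:

      import heapq

      def max_apple_weight(H, U, apples):
          # apples: list of (height, weight) pairs.
          with_deadline = []
          for h, w in apples:
              if h <= H:
                  deadline = (H - h) // U + 1   # latest pick position for this apple
                  with_deadline.append((deadline, w))
          with_deadline.sort()
          chosen = []                           # min-heap of weights of picked apples
          for deadline, w in with_deadline:
              heapq.heappush(chosen, w)
              if len(chosen) > deadline:        # too many picks scheduled by this point
                  heapq.heappop(chosen)         # drop the lightest one
          return sum(chosen)

      print(max_apple_weight(100, 10, [(82, 30), (91, 10), (93, 5), (94, 15)]))  # 45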

  • Find the centroid of a polygon with weighted vertices

    - by Calle Kabo
    Hi, I know how to find the centroid (center of mass) of a regular polygon. This assumes that every part of the polygon weighs the same. But how do I calculate the centroid of a weightless polygon (made from aerogel perhaps :), where each vertex has a weight? Simplified illustration of what I mean, using a straight line:

      5kg-----------------5kg
               ^ center of gravity

      10kg---------------5kg
            ^ center of gravity offset due to the weight of the vertices

    Of course, I know how to calculate the center of gravity on a straight line with weighted vertices, but how do I do it on a polygon with weighted vertices? Thanks for your time!
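
    If all of the mass sits at the vertices (the polygon itself being weightless), the centroid is just the weighted average of the vertex positions; a small Python sketch (my addition):

      def weighted_vertex_centroid(vertices, weights):
          # vertices: list of (x, y) points; weights: matching list of masses.
          total = sum(weights)
          cx = sum(w * x for (x, y), w in zip(vertices, weights)) / total
          cy = sum(w * y for (x, y), w in zip(vertices, weights)) / total
          return cx, cy

      # The 1-D example from the post: 10 kg at x=0 and 5 kg at x=1
      print(weighted_vertex_centroid([(0, 0), (1, 0)], [10, 5]))  # (0.333..., 0.0)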

  • Generate encoding String according to creation order.

    - by Tony
    I need to generate an encoding string for each item I insert into the database, for example:

      x00001 for the first item
      x00002 for the second item
      x00003 for the third item

    The way I chose to do this is by counting the rows. Before I insert the third item, I count against the database; I know there are already 2 rows, so the next encoding ends with 3. But there is a problem: if I delete the second item, the fourth item will not be x00004 but x00003. I could add an additional column to the table to store the next encoding, but I don't know if there are other, better solutions?
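
    One common fix (my suggestion, not from the post): derive the code from a value that only ever increases and is never reused, such as an auto-increment primary key or a dedicated counter/sequence, instead of counting the surviving rows. The formatting is then trivial:

      def encoding_for(sequence_value):
          # sequence_value: a monotonically increasing integer, e.g. an
          # auto-increment id or a dedicated sequence (assumed to exist).
          return "x%05d" % sequence_value

      print(encoding_for(1))    # x00001
      print(encoding_for(42))   # x00042

    Because the sequence never moves backwards, deleting the second item cannot cause x00003 to be handed out twice.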

  • An O(1) Sort ~~~

    - by FlySwat
    Before you stone me for being a heretic: there is a sort that proclaims to be O(1), called "Bead Sort" (http://en.wikipedia.org/wiki/Bead_sort), however that is pure theory; when actually applied I found that it was actually O(N * M), which is pretty pathetic. That said, let's list out some of the fastest sorts, and their best-case and worst-case speeds in Big O notation.
