Search Results

Search found 12 results on 1 page for 'asymptotically'.

Page 1 of 1

  • Function keys on an external keyboard

    - by asymptotically
    So I bought an external keyboard for my laptop. Unfortunately, it doesn't have an Fn (function) key (though I know many people say it's useless). On my laptop, I control the volume with Fn plus F9–F11. How can I get the same functionality on the external keyboard? The advanced keyboard settings don't have any option related to the Fn key. More specifically, it would be great if I could map it to the 'Menu' key, which I'm never going to use. Or is there another way to get the full functionality without it?

    Read the article

  • Efficiency of purely functional programming

    - by Sid
    Does anyone know the worst possible asymptotic slowdown that can happen when programming purely functionally as opposed to imperatively (i.e. allowing side effects)? Clarification from a comment by itowlson: is there any problem for which the best known non-destructive algorithm is asymptotically worse than the best known destructive algorithm, and if so, by how much?
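
    To make the destructive/non-destructive distinction concrete, here is a minimal sketch (illustrative only, not from the article) of a single "update element i" operation on a sequence of length n: the imperative version mutates in place in O(1), while a naive persistent version copies in O(n). Balanced-tree sequences bring the persistent cost down to O(log n), which is the kind of gap the question asks about.

        // Illustrative sketch only: destructive vs. naive non-destructive update.
        static void UpdateInPlace(int[] xs, int i, int v)
        {
            xs[i] = v;                        // imperative: O(1), mutates the array
        }

        static int[] UpdateCopy(int[] xs, int i, int v)
        {
            var copy = (int[])xs.Clone();     // persistent: O(n) copy, original untouched
            copy[i] = v;
            return copy;
        }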

    Read the article

  • Most interesting and challenging programming tasks

    - by dsimcha
    Some programmers enjoy optimizing code to make the implementation as fast as humanly possible, or golfing to make code as compact as possible. Others enjoy metaprogramming to make code generic, or designing algorithms to be asymptotically efficient. What do you find most interesting and challenging as a programmer?

    Read the article

  • which is best algorithm?

    - by Lopa
    Consider two algorithms A and B which solve the same problem and have time complexities (in terms of the number of elementary operations they perform) given respectively by a(n) = 9n + 6 and b(n) = 2n^2 + 1. (i) Which algorithm is the best asymptotically? (ii) Which is the best for small input sizes n, and for what values of n is this the case? (You may assume where necessary that n ≥ 0.) I think it's 9n + 6. Could you please tell me whether that's right or wrong? And what's the answer for part (ii)? What exactly do they want?
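
    One way to read the two parts, worked out here rather than taken from the article: set the two running times equal to find the crossover point.

        2n^2 + 1 = 9n + 6   =>   2n^2 - 9n - 5 = 0   =>   n = (9 + sqrt(81 + 40)) / 4 = 5

        n < 5 :  b(n) < a(n)   (B is cheaper for small inputs, e.g. a(2) = 24 vs b(2) = 9)
        n = 5 :  a(5) = b(5) = 51
        n > 5 :  a(n) < b(n)   (A wins, and asymptotically, since a(n) grows linearly while b(n) grows quadratically)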

    Read the article

  • Is regex too slow? Real life examples where simple non-regex alternative is better

    - by polygenelubricants
    I've seen people here make comments like "regex is too slow!" or "why would you do something so simple using regex!" (and then present a 10+ line alternative instead), etc. I haven't really used regex in an industrial setting, so I'm curious whether there are applications where regex is demonstrably just too slow, AND where a simple non-regex alternative exists that performs significantly (maybe even asymptotically!) better. Obviously many highly specialized string manipulations with sophisticated string algorithms will outperform regex easily, but I'm talking about cases where a simple solution exists and significantly outperforms regex. What counts as simple is subjective, of course, but I think a reasonable standard is that if it uses only String, StringBuilder, etc., then it's probably simple.
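
    As an illustration of the shape of comparison the question is after (not an example from the article, and whether the regex is meaningfully slower here depends entirely on the engine and the input), a fixed-substring test is the classic case where the plain string API is the "simple non-regex alternative":

        using System.Text.RegularExpressions;

        // Both return true when the line mentions "ERROR"; only the first pays
        // for compiling and running a pattern matcher.
        static bool HasErrorRegex(string line) => Regex.IsMatch(line, "ERROR");
        static bool HasErrorPlain(string line) => line.Contains("ERROR");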

    Read the article

  • How well do zippers perform in practice, and when should they be used?

    - by Rob
    I think that the zipper is a beautiful idea; it elegantly provides a way to walk a list or tree and make what appear to be local updates in a functional way. Asymptotically, the costs appear to be reasonable. But traversing the data structure requires memory allocation at each iteration, whereas a normal list or tree traversal is just pointer chasing. This seems expensive (please correct me if I am wrong). Are the costs prohibitive? And under what circumstances would it be reasonable to use a zipper?
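
    For readers who haven't met the structure, here is a minimal list-zipper sketch (illustrative only, not taken from the linked article): the focus sits between two immutable stacks, and each move or edit allocates one small node, which is exactly the per-step cost the question worries about.

        // Minimal immutable cons cell.
        sealed class Node<T>
        {
            public readonly T Head;
            public readonly Node<T> Tail;
            public Node(T head, Node<T> tail) { Head = head; Tail = tail; }
        }

        // A list zipper: Left holds the elements before the focus (nearest first),
        // Right holds the focus (Right.Head) and everything after it.
        sealed class ListZipper<T>
        {
            public readonly Node<T> Left;
            public readonly Node<T> Right;
            public ListZipper(Node<T> left, Node<T> right) { Left = left; Right = right; }

            // O(1) move: one new Node is allocated to push the old focus onto Left.
            public ListZipper<T> MoveRight() =>
                new ListZipper<T>(new Node<T>(Right.Head, Left), Right.Tail);

            // O(1) "local update": replaces the focused element, shares everything else.
            public ListZipper<T> Set(T value) =>
                new ListZipper<T>(Left, new Node<T>(value, Right.Tail));
        }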

    Read the article

  • How to judge the relative efficiency of algorithms given runtimes as functions of 'n'?

    - by Lopa
    Consider two algorithms A and B which solve the same problem and have time complexities (in terms of the number of elementary operations they perform) given respectively by a(n) = 9n + 6 and b(n) = 2n^2 + 1. (i) Which algorithm is the best asymptotically? (ii) Which is the best for small input sizes n, and for what values of n is this the case? (You may assume where necessary that n ≥ 0.) I think it's 9n + 6. Could you please tell me whether that's right or wrong? And what's the answer for part (ii)? What exactly do they want?

    Read the article

  • Optimality of Binary Search

    - by templatetypedef
    Hello all. This may be a silly question, but does anyone know of a proof that binary search is asymptotically optimal? That is, if we are given a sorted list of elements where the only permitted operation on those objects is a comparison, how do you prove that the search can't be done in o(lg n)? (That's little-o of lg n, by the way.) Note that I'm restricting this to elements where the only permitted operation is a comparison, since there are well-known algorithms that can beat O(lg n) in expectation if you're allowed to do more complex operations on the data (see, for example, interpolation search). Thanks so much! This has really been bugging me since it seems like it should be simple but has managed to resist all my best efforts. :-)
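
    The usual route (a sketch of the standard counting argument, not the article's answer): any deterministic comparison-based search is a binary decision tree, and it must distinguish at least n + 1 outcomes ("the key sits at position i" for each of the n positions, plus "not present"). The worst-case number of comparisons is the tree's height, so:

        leaves >= n + 1   =>   height >= ceil(log2(n + 1))   =>   Omega(lg n) comparisons, matching binary search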

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than that of another known algorithm, but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form: there are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe. Example highlights from the answers: the simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems; a naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm; backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson-NFA-based engines. All these examples exploit the gap between worst-case and average scenarios. Are there examples that do not rely on the difference between the worst case and the average case? Related: The Rise of "Worse is Better". (For the purpose of this question, the phrase "Worse is Better" is used in a narrower sense -- namely, algorithmic time complexity -- than in that article.) Python's design philosophy: the ABC group strove for perfection; for example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing such large collections (in other words, large is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example: it is the asymptotically fastest known (as of 2008) but is inferior in practice to asymptotically worse algorithms. From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."

    Read the article

  • Where to find algorithms for standard math functions?

    - by dsimcha
    I'm looking to submit a patch to the D programming language standard library that will allow much of std.math to be evaluated at compile time using the compile-time function evaluation facilities of the language. Compile-time function evaluation has several limitations, the most important ones being: you can't use assembly language, and you can't call C code or code for which the source is otherwise unavailable. Several std.math functions violate these, so compile-time versions need to be written. Where can I get information on good algorithms for computing things such as logarithms, exponents, powers, and trig functions? I prefer high-level descriptions of algorithms to actual code, for two reasons: to avoid legal ambiguity and the need to make my code look "different enough" from the source to make sure I own the copyright, and because I want simple, portable algorithms. I don't care about micro-optimization as long as they're at least asymptotically efficient. Edit: D's compile-time function evaluation model allows floating-point results computed at compile time to differ from those computed at runtime anyhow, so I don't care if my compile-time algorithms don't give exactly the same result as the runtime version, as long as they aren't less accurate to a practically significant extent.
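
    As a taste of the sort of "simple, portable" algorithm being asked about (an illustrative sketch, not from the article, and written in C# rather than D): the exponential function evaluated from its Taylor series. This is adequate near zero; real implementations first perform an argument reduction such as x = k*ln 2 + r.

        // exp(x) = sum over n >= 0 of x^n / n!, truncated; no assembly, no C calls.
        static double ExpSeries(double x)
        {
            double term = 1.0, sum = 1.0;
            for (int n = 1; n < 30; n++)
            {
                term *= x / n;   // term becomes x^n / n!
                sum += term;
            }
            return sum;
        }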

    Read the article

  • Most efficient method to query a Young Tableau

    - by Matthieu M.
    A Young tableau is a 2D matrix A of dimensions M*N such that, for all (i, j) in [0,M) x [0,N): for each p in (i,M), A[i,j] <= A[p,j], and for each q in (j,N), A[i,j] <= A[i,q]. That is, it is sorted row-wise and column-wise. Since it may contain fewer than M*N numbers, the bottom-right values might be represented either as missing or (in algorithm theory) using infinity to denote their absence. Now the (elementary) question: how do you check whether a given number is contained in the Young tableau? It's trivial to produce an algorithm in O(M*N) time, of course, but what's interesting is that it is very easy to provide an algorithm in O(M+N) time.

    Bottom-left search: let x be the number we are looking for.
    1. Initialize i, j as M-1, 0 (the bottom-left corner).
    2. If x == A[i,j], return true.
    3. If x < A[i,j]: if i is 0, return false; else decrement i and go to 2.
    4. Else: if j is N-1, return false; else increment j and go to 2.
    This algorithm makes no more than M+N moves; correctness is left as an exercise. It is possible, though, to obtain a better asymptotic runtime.

    Pivot search: let x be the number we are looking for.
    1. Initialize i, j as floor(M/2), floor(N/2).
    2. If x == A[i,j], return true.
    3. If x < A[i,j], search (recursively) in A[0:i-1, 0:j-1], A[i:M-1, 0:j-1] and A[0:i-1, j:N-1].
    4. Else, search (recursively) in A[i+1:M-1, 0:j], A[i+1:M-1, j+1:N-1] and A[0:i, j+1:N-1].
    This algorithm proceeds by discarding one of the 4 quadrants at each step and recursing on the 3 that remain (divide and conquer); the master theorem yields a complexity of O((N+M)**(log 3 / log 4)), which is better asymptotically. However, this is only a big-O estimate. So, here are the questions: 1. Do you know (or can you think of) an algorithm with a better asymptotic runtime? 2. As introsort demonstrates, it is sometimes worth switching algorithms depending on the input size or input topology -- do you think that would be possible here? For 2., I am notably thinking that for small inputs the bottom-left search should be faster because of its O(1) space requirement and lower constant term.
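
    A direct transcription of the bottom-left search into code (a sketch under the conventions above: the array is indexed [row, column], the tableau is assumed full, and missing/infinity entries are not handled):

        static bool ContainsYoung(int[,] a, int x)
        {
            int m = a.GetLength(0), n = a.GetLength(1);
            int i = m - 1, j = 0;               // start at the bottom-left corner
            while (i >= 0 && j < n)
            {
                if (a[i, j] == x) return true;
                if (x < a[i, j]) i--;           // row i (columns j..N-1) is all >= a[i,j] > x: drop the row
                else j++;                       // column j (rows 0..i) is all <= a[i,j] < x: drop the column
            }
            return false;                       // walked off the tableau after at most about M + N steps
        }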

    Read the article

  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ And here's a test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static Random rnd = new Random();
                const int ITERATION_COUNT = 20;

                static void Main(string[] args)
                {
                    List<double> rndTimes = new List<double>();
                    List<double> orderedTimes = new List<double>();

                    rndTimes.Add(TimeIt(50, RandomInsert));
                    rndTimes.Add(TimeIt(100, RandomInsert));
                    rndTimes.Add(TimeIt(200, RandomInsert));
                    rndTimes.Add(TimeIt(400, RandomInsert));
                    rndTimes.Add(TimeIt(800, RandomInsert));
                    rndTimes.Add(TimeIt(1000, RandomInsert));
                    rndTimes.Add(TimeIt(2000, RandomInsert));
                    rndTimes.Add(TimeIt(4000, RandomInsert));
                    rndTimes.Add(TimeIt(8000, RandomInsert));
                    rndTimes.Add(TimeIt(16000, RandomInsert));
                    rndTimes.Add(TimeIt(32000, RandomInsert));
                    rndTimes.Add(TimeIt(64000, RandomInsert));
                    rndTimes.Add(TimeIt(128000, RandomInsert));
                    string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                    orderedTimes.Add(TimeIt(50, OrderedInsert));
                    orderedTimes.Add(TimeIt(100, OrderedInsert));
                    orderedTimes.Add(TimeIt(200, OrderedInsert));
                    orderedTimes.Add(TimeIt(400, OrderedInsert));
                    orderedTimes.Add(TimeIt(800, OrderedInsert));
                    orderedTimes.Add(TimeIt(1000, OrderedInsert));
                    orderedTimes.Add(TimeIt(2000, OrderedInsert));
                    orderedTimes.Add(TimeIt(4000, OrderedInsert));
                    orderedTimes.Add(TimeIt(8000, OrderedInsert));
                    orderedTimes.Add(TimeIt(16000, OrderedInsert));
                    orderedTimes.Add(TimeIt(32000, OrderedInsert));
                    orderedTimes.Add(TimeIt(64000, OrderedInsert));
                    orderedTimes.Add(TimeIt(128000, OrderedInsert));
                    string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                    Console.WriteLine("Done");
                }

                static double TimeIt(int insertCount, Action<int> f)
                {
                    Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                    List<double> times = new List<double>();
                    for (int i = 0; i < ITERATION_COUNT; i++)
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        f(insertCount);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    return times.Average();
                }

                static void RandomInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(rnd.NextDouble());
                    }
                }

                static void OrderedInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(i + rnd.NextDouble());
                    }
                }
            }
        }

    And here's a chart comparing random and ordered insertion times in milliseconds:

        Insertions    Random         Ordered        RandomTime / OrderedTime
        50            1.031665       0.261585       3.94
        100           0.544345       1.377155       0.4
        200           1.268320       0.734570       1.73
        400           2.765555       1.639150       1.69
        800           6.089700       3.558350       1.71
        1000          7.855150       4.704190       1.67
        2000          17.852000      12.554065      1.42
        4000          40.157340      22.474445      1.79
        8000          88.375430      48.364265      1.83
        16000         197.524000     109.082200     1.81
        32000         459.277050     238.154405     1.93
        64000         1055.508875    512.020310     2.06
        128000        2481.694230    1107.980425    2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?

    Read the article
