Search Results

Search found 5819 results on 233 pages for 'compiler theory'.

Page 34/233 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java with a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v) })
                            <|> (do { v <- float; return (FPLiteral v) })
                     ; return (Literal x) }

    will not work: inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding problems like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure this out for me. Is that possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.

    Read the article

  • Throughput measurements

    - by dotsid
    I wrote a simple load-testing tool for measuring the performance of Java modules. One problem I faced is the algorithm for throughput measurement. Tests are executed in several threads (the client configures how many times each test should be repeated), and the execution time is logged. So, when the tests are finished, we have the following history:

        4 test executions, 2 threads, 36ms overall time
        (- = idle, * = test execution)

              5ms      9ms     4ms       13ms
        T1 |-*****-*********-****-*************-|
              3ms    6ms      7ms      11ms
        T2 |-***-******-*******-***********-----|
           <-----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

              3ms 3ms 3ms 3ms
        T1 |-***-***-***-***----------------|
              3ms    6ms      7ms      11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is much better, because the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?
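    One common way around the bias described above is to put a single wall-clock window around all the worker threads and divide the total number of completed executions by it, instead of deriving the figure from the slowest thread. A minimal Python sketch of that idea (the harness and the sleep standing in for the Java module under test are illustrative assumptions, not the poster's tool):

        import threading
        import time

        def measure_throughput(test, repetitions, thread_count):
            # Run `test` `repetitions` times in each of `thread_count` threads and
            # report completed executions per second over one wall-clock window.
            completed = []                      # list.append is atomic in CPython

            def worker():
                done = 0
                for _ in range(repetitions):
                    test()
                    done += 1
                completed.append(done)

            threads = [threading.Thread(target=worker) for _ in range(thread_count)]
            start = time.perf_counter()
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            elapsed = time.perf_counter() - start

            # Total work divided by total wall-clock time: a thread that finishes
            # early still contributes all of its executions, so the figure is not
            # bounded by the slowest thread alone.
            return sum(completed) / elapsed

        print(measure_throughput(lambda: time.sleep(0.005), repetitions=4, thread_count=2))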

    Read the article

  • curious ill conditioned numerical problem

    - by aaa
    Hello. Somebody today showed me this curious ill-conditioned problem (apparently pretty famous), which looks relatively simple:

        ƒ = (333.75 - a^2) b^6 + a^2 (11 a^2 b^2 - 121 b^4 - 2) + 5.5 b^8 + a/(2^b)

    where a = 77617 and b = 33096. Can you determine the correct answer?
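    This is the well-known example, due to Rump, of an expression whose huge terms cancel almost completely, so naive floating-point evaluation loses essentially every significant digit. The sketch below (an illustration, not part of the question) evaluates it once in double precision and once with exact rational arithmetic; note that it writes the final term as a/(2b), the form this example is usually quoted in, whereas the question writes a/(2^b).

        from fractions import Fraction

        def f(a, b, c1, c2):
            # f = (c1 - a^2) b^6 + a^2 (11 a^2 b^2 - 121 b^4 - 2) + c2 b^8 + a/(2b)
            # (the last term follows the usual statement of Rump's example;
            #  the question writes it as a/(2^b))
            return ((c1 - a**2) * b**6
                    + a**2 * (11 * a**2 * b**2 - 121 * b**4 - 2)
                    + c2 * b**8
                    + a / (2 * b))

        a, b = 77617, 33096

        # Plain double precision: every term is around 1e36 and the answer is the
        # tiny remainder left after they cancel, so almost all digits are lost.
        print(f(float(a), float(b), 333.75, 5.5))

        # Exact rational arithmetic: no rounding at any step, so the cancellation
        # is carried out exactly.
        exact = f(Fraction(a), Fraction(b), Fraction(1335, 4), Fraction(11, 2))
        print(exact, "=", float(exact))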

    Read the article

  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function. However, now I am writing a program to play a completely new board game. How do I choose a good or even decent evaluation function? The main challenges are that the same pieces are always on the board, so a usual material function won't change based on position, and the game has been played less than a thousand times or so, so humans haven't necessarily played it well enough yet to give insight. (PS. I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas? Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with or better than current human players.
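    For context, a static evaluation function for alpha-beta search is most often a hand-tuned weighted sum of game-specific features. The sketch below shows only that general shape; the feature names, weights and sample values are hypothetical placeholders, not suggestions for this particular game.

        def evaluate(mine, theirs, weights):
            # Linear evaluation: weighted difference of per-side feature values,
            # positive when the side "mine" is better off.
            return sum(w * (mine[name] - theirs[name]) for name, w in weights.items())

        # Hypothetical features and weights, stand-ins for whatever matters in the game.
        weights = {
            "pieces_on_target_squares": 10.0,   # progress toward the winning condition
            "mobility": 0.5,                    # number of legal moves available
            "blocked_pieces": -1.0,             # pieces that currently cannot move
        }

        white = {"pieces_on_target_squares": 2, "mobility": 14, "blocked_pieces": 1}
        black = {"pieces_on_target_squares": 1, "mobility": 11, "blocked_pieces": 0}
        print(evaluate(white, black, weights))   # positive: the position favours White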

    Read the article

  • Gaining information from nodes of tree

    - by jainp
    I am working with the tree data structure and trying to come up with a way to calculate the information I can gain from the nodes of the tree. I am wondering if there are any existing techniques which can assign higher numerical importance to a node which appears less frequently at a lower level (smaller distance from the root of the tree) than to the same node appearing at a higher level with high frequency. To give an example, I want to give more significance to the node Book appearing once at level 2 than to it appearing three times at level 3. I will appreciate any suggestions/pointers to techniques which achieve something similar. Thanks, Prateek

    Read the article

  • count of distinct acyclic paths from A[a,b] to A[c,d]?

    - by Sorush Rabiee
    I'm writing a Sokoban solver for fun and practice. It uses a simple algorithm (something like BFS with a bit of difference). Now I want to estimate its running time (O and Omega), but to do that I need to know how to calculate the count of acyclic paths from one vertex to another in a network. Specifically, I want an expression that calculates the count of valid paths between two vertices of an m*n matrix of vertices. A valid path visits each vertex 0 or 1 times and has no circuits. (The original question illustrated one valid and one invalid path with images.) What is needed is a method to find the count of all acyclic paths between two vertices a and b. Comments on solving methods and tricks are welcome.
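    As far as I know there is no convenient closed-form expression for the number of self-avoiding paths between two cells of an m*n grid, so such counts are normally obtained by enumeration. Below is a brute-force sketch (exponential in the grid size, usable only as a reference on small grids, and not part of the original question):

        def count_simple_paths(m, n, start, goal):
            # Count acyclic (vertex-simple) paths between two cells of an m x n grid,
            # moving between 4-neighbours, by exhaustive depth-first search.
            def neighbours(cell):
                r, c = cell
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < m and 0 <= nc < n:
                        yield (nr, nc)

            visited = set()

            def dfs(cell):
                if cell == goal:            # a simple path must stop at the goal
                    return 1
                visited.add(cell)
                total = sum(dfs(nxt) for nxt in neighbours(cell) if nxt not in visited)
                visited.remove(cell)
                return total

            return dfs(start)

        print(count_simple_paths(3, 3, (0, 0), (2, 2)))   # corner to corner of a 3x3 grid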

    Read the article

  • Examples of useful or non-trivial dual interfaces

    - by Scott Weinstein
    Recently Erik Meijer and others have shown how IObservable/IObserver is the dual of IEnumerable/IEnumerator. The fact that they are dual means that any operation on one interface is valid on the other, thus providing a theoretical foundation for the Reactive Extensions for .NET. Do other dual interfaces exist? I'm interested in any example, not just .NET-based ones.

    Read the article

  • What is an XYZ-complete problem?

    - by TheMachineCharmer
    EDIT: Diagram: http://www.cs.umass.edu/~immerman/complexity_theory.html
    There must be some meaning to the word "complete"; it's used every now and then. Look at the diagram. I tried reading previous posts about NP-. My question is: what does the word "COMPLETE" mean? Why is it there? What is its significance? N - Non-deterministic - makes sense. P - Polynomial - makes sense. But the "COMPLETE" is still a mystery for me.

    Read the article

  • What exactly are administrative redexes after CPS conversion?

    - by eljenso
    In the context of Scheme and CPS conversion, I'm having a little trouble deciding what administrative redexes (lambdas) exactly are:
    - all the lambda expressions that are introduced by the CPS conversion
    - only the lambda expressions that are introduced by the CPS conversion but you wouldn't have written if you did the conversion "by hand" or through a smarter CPS-converter
    If possible, a good reference would be welcome.

    Read the article

  • What is a data structure for quickly finding non-empty intersections of a list of sets?

    - by Andrey Fedorov
    I have a set of N items, which are sets of integers; let's assume it's ordered and call it I[1..N]. Given a candidate set, I need to find the subset of I which has a non-empty intersection with the candidate. So, for example, if:

        I = [{1,2}, {2,3}, {4,5}]

    I'm looking to define valid_items(items, candidate), such that:

        valid_items(I, {1})   == {1}
        valid_items(I, {2})   == {1, 2}
        valid_items(I, {3,4}) == {2, 3}

    I'm trying to optimize for one given set I and variable candidate sets. Currently I am doing this by caching items_containing[n] = {the sets which contain n}. In the above example, that would be:

        items_containing = [{}, {1}, {1,2}, {2}, {3}, {3}]

    That is, 0 is contained in no items, 1 is contained in item 1, 2 is contained in items 1 and 2, 3 is contained in item 2, and 4 and 5 are contained in item 3. That way, I can define valid_items(I, candidate) = union(items_containing[n] for n in candidate). Is there any more efficient data structure (of a reasonable size) for caching the result of this union? The obvious example of space 2^N is not acceptable, but N or N*log(N) would be.
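    For reference, the items_containing cache described above is essentially an inverted index; here is a minimal sketch of it in Python (the dict instead of a dense list is just an implementation choice, not something from the question):

        from collections import defaultdict

        def build_index(items):
            # items_containing[n] = set of item indices (1-based, as in the question)
            # whose sets contain n.
            index = defaultdict(set)
            for i, item in enumerate(items, start=1):
                for n in item:
                    index[n].add(i)
            return index

        def valid_items(index, candidate):
            # Union of the postings for every element of the candidate set.
            result = set()
            for n in candidate:
                result |= index.get(n, set())
            return result

        I = [{1, 2}, {2, 3}, {4, 5}]
        idx = build_index(I)
        print(valid_items(idx, {1}))      # {1}
        print(valid_items(idx, {2}))      # {1, 2}
        print(valid_items(idx, {3, 4}))   # {2, 3}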

    Read the article

  • Git directed acyclic graph - children know their parents but not the other way around

    - by dayscott
    Git is implemented as a directed acyclic graph. Children know their parents, but not the other way round. This makes sense because I can reach every commit only through a branch or a tag (generally speaking, through a reference); that's how I traverse the tree. What other reasons did the developers of Git have to make "the children know their parents but not the other way around"? What are the key benefits of this?
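    As a toy illustration (not Git's actual object model) of what "children know their parents" means in practice: each commit records only its parents, and history is recovered by starting from a reference and following those pointers.

        class Commit:
            # Toy commit object: like Git, it records its parents but not its children.
            def __init__(self, message, parents=()):
                self.message = message
                self.parents = tuple(parents)

        def log(ref):
            # Walk history from a ref by following parent pointers, which is the
            # only direction the data structure supports.
            seen, stack = set(), [ref]
            while stack:
                commit = stack.pop()
                if id(commit) in seen:
                    continue
                seen.add(id(commit))
                print(commit.message)
                stack.extend(commit.parents)

        root = Commit("initial commit")
        feature = Commit("add feature", [root])
        fix = Commit("fix bug", [root])
        merge = Commit("merge feature and fix", [feature, fix])   # merges have two parents
        log(merge)   # reaches every ancestor; nothing points "down" from root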

    Read the article

  • What does the q in a q-grammar stand for?

    - by Aru
    So I've been reading websites and the classic books on compilers. Reading about s-grammars and q-grammars, I wondered what the s and q stand for. I think the s stands for "simple" grammar, while the q... well, I have no idea. What does the q in a q-grammar stand for?

    Read the article

  • determine if intersection of a set with conjunction of two other sets is empty

    - by koen
    For any three given sets A, B and C: is there a way to determine (programmatically) whether there is an element of A that is part of the conjunction of B and C? Example:

        A: all numbers greater than 3
        B: all numbers less than 7
        C: all numbers that equal 5

    In this case there is an element in set A, namely the number 5, that fits. I'm implementing this as specifications, so this numerical range is just an example; A, B and C could be anything.
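    When one of the three specifications happens to be enumerable, as C is in the example, the check reduces to testing its members against the other two predicates. The sketch below covers only that special case and is not a general answer for arbitrary (possibly infinite) A, B and C:

        A = lambda x: x > 3          # all numbers greater than 3
        B = lambda x: x < 7          # all numbers less than 7
        C = {5}                      # all numbers that equal 5: finite, so enumerable

        def some_element_satisfies_all(enumerable, *predicates):
            # True if any member of the enumerable set satisfies every predicate.
            return any(all(p(x) for p in predicates) for x in enumerable)

        print(some_element_satisfies_all(C, A, B))   # True: 5 lies in A and in B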

    Read the article

  • Parent Child Relationships with Fluent NHibernate?

    - by ElHaix
    I would like to create a cascading tree/list of N children for a given parent, where a child can also become a parent. Given the following data structure:

        CountryType=1; ColorType=3; StateType=5
        6, 7, 8    = {Can, US, Mex}
        10, 11, 12 = {Red, White, Blue}
        20, 21, 22 = {California, Florida, Alberta}

        TreeID  ListTypeID  ParentTreeID  ListItemID
        1       1           Null          6            (Canada is a Country)
        2       1           Null          7            (US is a Country)
        3       1           Null          8            (Mexico is a Country)
        4       3           3             10           (Mexico has Red)
        5       3           3             11           (Mexico has White)
        6       5           1             22           (Alberta is in Canada)
        7       5           7             20           (California is in US)
        8       5           7             21           (Florida is in US)
        9       3           6             10           (Alberta is Red)
        10      3           6             12           (Alberta is Blue)
        11      3           2             10           (US is Red)
        12      3           2             11           (US is Blue)

    How would this be represented in Fluent NHibernate classes? Some direction would be appreciated. Thanks.

    Read the article

  • How to randomize a sorted list?

    - by Faken
    Here's a strange question for you guys: I have a nice sorted list that I wish to randomize. How would I go about doing that? In my application, I have a function that returns a list of points that describe the outline of a discretized object. Due to the way the problem is solved, the function returns a nice ordered list. I have a second boundary described mathematically, and I want to determine if the two objects intersect each other. I simply iterate over the points and determine if any one point is inside the mathematical boundary. The method works well, but I want to increase speed by randomizing the point data. Since it is likely that my mathematical boundary will be overlapped by a series of points that are right beside each other, I think it would make sense to check a randomized list rather than iterating over a nice sorted one (as it only takes a single hit to declare an intersection). So, any ideas on how I would go about randomizing an ordered list?
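    For the shuffling step itself, the standard tool is a Fisher-Yates shuffle, which most standard libraries already provide. A sketch in Python (the question does not name a language, so this is purely illustrative):

        import random

        points = [(x, 0.0) for x in range(10)]   # stand-in for the ordered outline points

        # Library one-liner: an in-place Fisher-Yates shuffle.
        random.shuffle(points)

        # The same algorithm spelled out: walk backwards from the end, swapping each
        # element with a uniformly chosen element at or before its position.
        def fisher_yates(seq):
            for i in range(len(seq) - 1, 0, -1):
                j = random.randint(0, i)
                seq[i], seq[j] = seq[j], seq[i]

        fisher_yates(points)
        print(points)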

    Read the article

  • Are mathematical Algorithms protected by copyright?

    - by analogy
    I wish to implement an algorithm which I read in a journal paper in my (commercial) software. I want to know if this is allowed or not. The algorithm in question is described in http://arxiv.org/abs/0709.2938 and it is a very simple algorithm; a number of implementations exist in Python (http://igraph.sourceforge.net/) and Java. One of them is under the GPL; another, which I got from a different researcher, had no license attached. There are significant differences between the two implementations, e.g. the second one uses threads and multiple cores. It is possible to rewrite (not translate) the algorithm. So can I use it in my software or on a server for commercial purposes? Thanks. UPDATE: I am completely aware of the copyright on the text of the paper; it was published in Phys. Rev. E. I am concerned with use of the algorithm in commercial software. Also, the publication means that, unless a patent has already been filed, the method has been disclosed publicly, hence barring a patent in the future. Also, the GPL implementation is not by the authors themselves but comes from a third party. Finally, I am not using the GPL implementation but creating my own in C++.

    Read the article

  • Could a truly random number be generated using pings to pseudo-randomly selected IP addresses?

    - by _ande_turner_
    The question posed came about during a 2nd-year computer science lecture while discussing the impossibility of generating random numbers on a deterministic computational device. This was the only suggestion which didn't depend on non-commodity-class hardware. Subsequently nobody would put their reputation on the line to argue definitively for or against it. Would anyone care to make a stand for or against? If so, how about a mention of a possible implementation?

    Read the article

  • The Immerman-Szelepcsenyi Theorem

    - by Daniel Lorch
    In the Immerman-Szelepcsenyi theorem, two algorithms are specified that use non-determinism. There is a rather lengthy algorithm using "inductive counting", which determines the number of reachable configurations for a given non-deterministic Turing machine. The algorithm looks like this:

        Let m_{i+1} = 0
        For all configurations C
            Let b = 0, r = 0
            For all configurations D
                Guess a path from I to D in at most i steps
                If found
                    Let r = r + 1
                    If D = C or D goes to C in 1 step
                        Let b = 1
            If r < m_i halt and reject
            Let m_{i+1} = m_{i+1} + b

    I is the starting configuration. m_i is the number of configurations reachable from the starting configuration in i steps. This algorithm only calculates the "next step", i.e. m_{i+1} from m_i. This seems pretty reasonable, but since we have non-determinism, why don't we just write:

        Let m_i = 0
        For all configurations C
            Guess a path from I to C in at most i steps
            If found
                m_i = m_i + 1

    What is wrong with this algorithm?
    - I am using non-determinism to guess a path from I to C, and I verify reachability
    - I am iterating through the list of ALL configurations, so I am sure not to miss any configuration
    - I respect space bounds
    - I can generate a certificate (the list of reachable configs)
    I believe I have a misunderstanding of the "power" of non-determinism, but I can't figure out where to look next. I have been stuck on this for quite a while and I would really appreciate any help.

    Read the article

  • Calculating Divergent Paths on Subtending Rings

    - by Russ
    I need to calculate two paths from A to B in the following graph, with the constraint that the paths can't share any edges. (Hmm, okay, can't post images; here's a link.) All edges have positive weights; for this example I think we can assume that they're equal. My naive approach is to use Dijkstra's algorithm to calculate the first path, shown in the second graph in the above image. Then I remove its edges from the graph and try to calculate the second path, which fails. Is there a variation of Dijkstra, Bellman-Ford (or anything else) that will calculate the paths shown in the third diagram above? (Without special knowledge and removal of the subtending link, I mean.)

    Read the article

  • A two way minimum spanning tree of a directed graph

    - by mvid
    Given a directed graph with weighted edges, what algorithm can be used to give a sub-graph that has minimum weight but allows movement from any vertex to any other vertex in the graph (under the assumption that paths between any two vertices always exist)? Does such an algorithm exist?

    Read the article

  • Indentation control while developing a small python like language

    - by sap
    Hello, I'm developing a small Python-like language using flex, byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just like Python it uses white space (or tabs) for indentation. Not only that, but I want to implement index breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only from the innermost loop but from the outer loop as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
                'hello world'!!        # will never reach this. "!!" outputs with a newline
            end
            'hello world again'!!      # also will never reach this. again "!!" is used for cout
        end
        # after break 2 it would jump right here

    But since I don't have an "anti" tab character to check when a scope ends (in C, for example, I would just use the '}' char), I was wondering if this method would be the best: I would define a global variable, say "int tabIndex", in my yacc file that I would access in my lex file using extern. Then every time I find a tab character in my lex file I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement the amount typed after it from the tabIndex variable, and when I reach an EOF after compiling and get tabIndex != 0 I would output a compilation error. Now the problem is: what's the best way to see if the indentation got reduced? Should I read \b (backspace) chars from lex and then reduce the tabIndex variable (when the user doesn't use break)? Is there another method to achieve this? Also, just another small question: I want every executable to have its starting point in the function called start(); should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone can provide a yacc file for Python it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
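    One common way to notice that indentation has decreased, without any "anti-tab" character, is to keep a stack of open indentation levels and emit synthetic INDENT/DEDENT events whenever the level at the start of a line changes; this is roughly what Python's own tokenizer does. The sketch below illustrates the stack technique in Python and is not tied to the flex/byacc setup described above:

        def indent_events(lines, tab_width=4):
            # Emit ('INDENT' | 'DEDENT' | 'LINE', text) events using a stack of
            # open indentation levels.
            stack = [0]
            for line in lines:
                if not line.strip():
                    continue                                  # blank lines leave scope alone
                stripped = line.lstrip(" \t")
                prefix = line[: len(line) - len(stripped)]
                level = sum(tab_width if ch == "\t" else 1 for ch in prefix)
                if level > stack[-1]:                         # deeper: one new scope opens
                    stack.append(level)
                    yield ("INDENT", stripped)
                else:
                    while level < stack[-1]:                  # shallower: close scopes
                        stack.pop()
                        yield ("DEDENT", stripped)
                yield ("LINE", stripped)

        src = [
            "while 1",
            "    while 1",
            "        break 2",
            "    end",
            "end",
        ]
        for event in indent_events(src):
            print(event)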

    Read the article

  • Can Goldberg algorithm in ocamlgraph be used to find Minimum Cost Flow graph?

    - by Tautrimas
    I'm looking for an implementation of the minimum-cost flow graph problem in OCaml. The OCaml library ocamlgraph has a Goldberg algorithm implementation. The paper called "Efficient implementation of the Goldberg-Tarjan minimum-cost flow algorithm" notes that the Goldberg-Tarjan algorithm can find a minimum-cost flow. The question is: does the ocamlgraph algorithm also find the minimum cost? The library documentation only states that it is suitable at least for the maximum flow problem. If not, does anybody have a good link to a nice minimum-cost optimization algorithm? I will manually translate it into OCaml then. Forgive me if I missed it on Wikipedia: there are too many algorithms on flow networks for the first day!

    Read the article
