Search Results

Search found 6839 results on 274 pages for 'functional tests'.


  • Yet another Haskell vs. Scala question

    - by Travis Brown
    I've been using Haskell for several months, and I love it: it's gradually become my tool of choice for everything from one-off file-renaming scripts to larger XML-processing programs. I'm definitely still a beginner, but I'm starting to feel comfortable with the language and the basics of the theory behind it.

    I'm a lowly graduate student in the humanities, so I'm not under much institutional or administrative pressure to use specific tools for my work. In many ways, though, it would be convenient for me to switch to Scala (or Clojure). Most of the NLP and machine learning libraries that I work with on a daily basis (and that I've written in the past) are Java-based, and the primary project I'm working on uses a Java application server.

    I've been mostly disappointed by my initial interactions with Scala. Many aspects of the syntax (partial application, for example) still feel clunky to me compared to Haskell, and I miss libraries like Parsec and HXT and QuickCheck. I'm familiar with the advantages of the JVM platform, so practical questions like this one don't really help me. What I'm looking for is a motivational argument for moving to Scala. What does it do (that Haskell doesn't) that's really cool? What makes it fun or challenging or life-changing? Why should I get excited about writing it?

    Read the article

  • posmax: like argmax but gives the position(s) of the element x for which f[x] is maximal

    - by dreeves
    Mathematica has a built-in function ArgMax for functions over infinite domains, based on the standard mathematical definition. The analog for finite domains is a handy utility function: given a function and a list (call it the domain of the function), return the element(s) of the list that maximize the function. Here's an example of finite argmax in action: http://stackoverflow.com/questions/471029/canonicalize-nfl-team-names/472213#472213

    And here's my implementation of it (along with argmin for good measure):

        (* argmax[f, domain] returns the element of domain for which f of
           that element is maximal -- breaks ties in favor of first occurrence. *)
        SetAttributes[{argmax, argmin}, HoldFirst];
        argmax[f_, dom_List] := Fold[If[f[#1] >= f[#2], #1, #2]&, First[dom], Rest[dom]]
        argmin[f_, dom_List] := argmax[-f[#]&, dom]

    First, is that the most efficient way to implement argmax? What if you want the list of all maximal elements instead of just the first one? Second, how about the related function posmax that, instead of returning the maximal element(s), returns the position(s) of the maximal elements?
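
    The question is about Mathematica, but the all-positions variant is easy to state in any functional language. Here is a minimal Haskell sketch for comparison (posmax is a hypothetical name, not a library function):

        import Data.List (elemIndices)

        -- Positions (0-based) of all elements on which f attains its maximum.
        posmax :: Ord b => (a -> b) -> [a] -> [Int]
        posmax f xs = elemIndices (maximum ys) ys
          where ys = map f xs

        -- posmax negate [3, 1, 2, 1]  ==  [1, 3]

    Computing map f xs once also addresses the efficiency worry: each element is measured exactly once, and ties are all reported rather than broken.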

    Read the article

  • Chain call in clojure?

    - by Konrad Garus
    I'm trying to implement the sieve of Eratosthenes in Clojure. One approach I would like to test is this:

    1. Get the range (2 3 4 5 6 ... N)
    2. For 2 <= i <= N, pass the range through a filter that removes multiples of i
    3. For the (i+1)-th iteration, use the result of the previous filtering

    I know I could do it with loop/recur, but this is causing stack overflow errors (for some reason tail call optimization is not applied). How can I do it iteratively? I mean invoking N calls to the same routine, passing the result of the i-th iteration to the (i+1)-th (see the fold sketched below).
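
    "N calls to the same routine, each consuming the previous result" is exactly a fold over the range of divisors, which needs no explicit recursion of its own. A Haskell sketch of that shape (Clojure's reduce, which is eager, plays the same role):

        -- Repeatedly filter the candidate list, threading the result of each
        -- pass into the next; the fold does the iteration for us.
        sieve :: Int -> [Int]
        sieve n = foldl step [2 .. n] [2 .. n]
          where step xs i = [x | x <- xs, x == i || x `mod` i /= 0]

        -- sieve 30 == [2,3,5,7,11,13,17,19,23,29]

    One caveat: a lazy fold piles up pending filters, which is the same trap that blows the stack in naive lazy sieves; an eager reduce (or Haskell's foldl' with forcing) avoids it.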

    Read the article

  • Closures and universal quantification

    - by Apocalisp
    I've been trying to work out how to implement Church-encoded data types in Scala. It seems to require rank-n types, since you would need a first-class const function of type forAll a. a -> (forAll b. b -> b). However, I was able to encode pairs thusly:

        import scalaz._

        trait Compose[F[_],G[_]] { type Apply[A] = F[G[A]] }

        trait Closure[F[_],G[_]] { def apply[B](f: F[B]): G[B] }

        def pair[A,B](a: A, b: B) =
          new Closure[Compose[PartialApply1Of2[Function1,A]#Apply,
                              PartialApply1Of2[Function1,B]#Apply]#Apply, Identity] {
            def apply[C](f: A => B => C) = f(a)(b)
          }

    For lists, I was able to encode cons:

        def cons[A](x: A) = {
          type T[B] = B => (A => B => B) => B
          new Closure[T,T] {
            def apply[B](xs: T[B]) = (b: B) => (f: A => B => B) => f(x)(xs(b)(f))
          }
        }

    However, the empty list is more problematic, and I've not been able to get the Scala compiler to unify the types. Can you define nil so that, given the definition above, the following compiles?

        cons(1)(cons(2)(cons(3)(nil)))
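
    For comparison (not an answer to the Scala unification problem itself): the rank-n type the question mentions can be written directly in Haskell with RankNTypes, and there nil falls out naturally. A sketch, with ChurchList and its helpers as illustrative names:

        {-# LANGUAGE RankNTypes #-}

        -- A Church-encoded list is its own fold.
        newtype ChurchList a =
          ChurchList { fold :: forall b. b -> (a -> b -> b) -> b }

        nil :: ChurchList a
        nil = ChurchList (\z _ -> z)

        cons :: a -> ChurchList a -> ChurchList a
        cons x xs = ChurchList (\z f -> f x (fold xs z f))

        toList :: ChurchList a -> [a]
        toList xs = fold xs [] (:)

        -- toList (cons 1 (cons 2 (cons 3 nil))) == [1,2,3]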

    Read the article

  • Explain ML type inference to a C++ programmer

    - by Tsubasa Gomamoto
    How does ML perform type inference in the following function definition?

        let add a b = a + b

    Is it like C++ templates, where no type-checking is performed until the point of template instantiation, after which, if the type supports the necessary operations, the function works, or else a compilation error is thrown? For example, the following function template

        template <typename NumType>
        NumType add(NumType a, NumType b) {
            return a + b;
        }

    will work for

        add<int>(23, 11);

    but won't work for

        add<ostream>(cout, fout);

    Is what I'm guessing correct, or does ML type inference work differently?

    PS: Sorry for my poor English; it's not my native language.
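
    The short answer is no: ML checks the definition once, at the definition site, and assigns it a type then; call sites are checked against that type rather than triggering a fresh check of the body. In Standard ML the overloaded + defaults to int, so add gets type int -> int -> int. Haskell, a close relative on this point, generalizes the same definition with a class constraint, which makes the contrast with templates easy to see:

        -- GHC infers this signature from the body alone; the definition is
        -- type-checked once, not re-checked per call site.
        add :: Num a => a -> a -> a
        add a b = a + b

        ok :: Int
        ok = add 23 11

        -- add "a" "b" is rejected because String has no Num instance; the
        -- error is about the constraint, not a late failure inside the body.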

    Read the article

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits.

    For instance, someone once pointed me to the library aterm and mentioned that the authors had clearly thought about this, and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be too, so that it's possible to do everything in a single pass.

    I am in the process of describing the difficulties encountered in our own context and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations.

    I have enough references on hash-consing alone; I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.

    Read the article

  • Are Scala "continuations" just a funky syntax for defining and using Callback Functions?

    - by Alex R
    And I mean that in the same sense that a C/Java for loop is just a funky syntax for a while loop.

    I still remember, when first learning about the for loop in C, the mental effort that had to go into understanding the execution sequence of the three control expressions relative to the loop statement. It seems to me the same sort of effort has to be applied to understand continuations (in Scala, and I guess probably in other languages too).

    And then there's the obvious follow-up question: if so, then what's the point? It seems like a lot of pain (language complexity, programmer errors, unreadable programs, etc.) for no gain.
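
    One answer: a callback is registered and then invoked by someone else, whereas a captured continuation is the entire rest of the computation, reified as a value you can drop, store, or invoke more than once. A minimal Haskell sketch using callCC (Scala's shift/reset captures the same kind of value):

        import Control.Monad (when)
        import Control.Monad.Cont (Cont, callCC, runCont)

        -- `exit` is the continuation of the whole block; calling it aborts
        -- the rest of the computation, which a plain registered callback
        -- cannot express this directly.
        safeDiv :: Int -> Int -> Cont r (Maybe Int)
        safeDiv x y = callCC $ \exit -> do
          when (y == 0) (exit Nothing)
          return (Just (x `div` y))

        main :: IO ()
        main = do
          print (runCont (safeDiv 10 2) id)  -- Just 5
          print (runCont (safeDiv 10 0) id)  -- Nothing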

    Read the article

  • Scheme: what are the benefits of letrec?

    - by Ixmatus
    While reading "The Seasoned Schemer" I've begun to learn about letrec. I understand what it does (can be duplicated with a Y-Combinator) but the book is using it in lieu of recurring on the already defined function operating on arguments that remain static. An example of an old function using the defined function recurring on itself (nothing special): (define (substitute new old lat) (cond ((null? l) '()) ((eq? (car l) old) (cons new (substitute new old (cdr l)))) (else (cons (car l) (substitute new old (cdr l)))))) Now for an example of that same function but using letrec: (define (substitute new old lat) (letrec ((replace (lambda (l) (cond ((null? l) '()) ((eq? (car l) old) (cons new (replace (cdr l)))) (else (cons (car l) (replace (cdr l)))))))) (replace lat))) Aside from being slightly longer and more difficult to read I don't know why they are rewriting functions in the book to use letrec. Is there a speed enhancement when recurring over a static variable this way because you don't keep passing it?? Is this standard practice for functions with arguments that remain static but one argument that is reduced (such as recurring down the elements of a list)? Some input from more experienced Schemers/LISPers would help!

    Read the article

  • Problem determining how to order F# types due to circular references

    - by James Black
    I have some types that extend a common type; these are my models. I then have DAO types for each model type for CRUD operations. I now need a function that will find an id given any model type, so I created a new type for some miscellaneous functions.

    The problem is that I don't know how to order these types. Currently I have models before DAOs, but I somehow need DAOMisc before CityDAO and CityDAO before DAOMisc, which isn't possible. The simple approach would be to put this function in each DAO, referring only to the types that can come before it. Since State comes before City (State has a foreign key relationship with City), the miscellaneous function would be very short. But this just strikes me as wrong, so I am not certain how best to approach it.

    Here is my miscellaneous type, where BaseType is a common type for all my models:

        type DAOMisc =
            member internal self.FindIdByType item =
                match (item : BaseType) with
                | :? StateType as i ->
                    let a = (StateDAO()).Retrieve i
                    a.Head.Id
                | :? CityType as i ->
                    let a = (CityDAO()).Retrieve i
                    a.Head.Id
                | _ -> -1

    Here is one DAO type. CommonDAO actually contains the code for the CRUD operations, but that is not important here:

        type CityDAO() =
            inherit CommonDAO<CityType>(
                "city",
                ["name"; "state_id"],
                (fun reader ->
                    [ while reader.Read() do
                        let s = new CityType()
                        s.Id <- reader.GetInt32 0
                        s.Name <- reader.GetString 1
                        s.StateName <- reader.GetString 3
                        yield s ]),
                list.Empty)

    This is my model type:

        type CityType() =
            inherit BaseType()
            let mutable name = ""
            let mutable stateName = ""
            member this.Name
                with get() = name
                and set restnameval = name <- restnameval
            member this.StateName
                with get() = stateName
                and set stateidval = stateName <- stateidval
            override this.ToSqlValuesList = [this.Name]
            override this.ToFKValuesList = [StateType(Name = this.StateName)]

    The purpose of this FindIdByType function is to find the id for a foreign key relationship, so I can set the value in my model and then have the CRUD functions operate with all the correct information. City needs the id for the state name, so I would get the state name, put it into a StateType, then call this function to get the id for that state, so my city insert also includes the id for the foreign key. This seems to be the best approach, in a very generic way, to handle inserts, which is the current problem I am trying to solve.
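
    One language-independent way out of such a cycle is dependency inversion: instead of having the miscellaneous type name every DAO, pass the lookup it needs in as a function value. A Haskell sketch of the shape, with hypothetical names (findStateId stands in for whatever StateDAO provides):

        data City = City { cityName :: String, cityState :: String }

        -- insertCity never mentions a StateDAO type at all; the caller
        -- supplies the lookup, so neither definition must come "before"
        -- the other.
        insertCity :: (String -> IO Int) -> City -> IO ()
        insertCity findStateId c = do
          sid <- findStateId (cityState c)
          putStrLn ("INSERT city " ++ cityName c ++ " state_id=" ++ show sid)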

    Read the article

  • Please Help me add up the elements for this structure in Scheme/Lisp

    - by kunjaan
    I have an input which is of this form:

        (((lady-in-water . 1.25) (snake . 1.75) (run . 2.25) (just-my-luck . 1.5))
         ((lady-in-water . 0.8235294117647058) (snake . 0.5882352941176471)
          (just-my-luck . 0.8235294117647058))
         ((lady-in-water . 0.8888888888888888) (snake . 1.5555555555555554)
          (just-my-luck . 1.3333333333333333)))

    (Context: each word denotes a movie and each number a weighted rating submitted by a user.)

    I need to add up all the quantities and return a list which looks something like this:

        ((lady-in-water 2.5) (snake 2.5) (run 2.25) (just-my-luck 2.6))

    How do I traverse the list and add up the quantities? I am really stumped. Please help me. Thanks.
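
    Whatever the language, the shape of the answer is: flatten the nested lists into one list of pairs, then fold them into a table keyed by title. A Haskell sketch of that shape (a Scheme fold over an association list is analogous):

        import qualified Data.Map as M

        -- Flatten the nested rating lists, then sum per movie title.
        sumRatings :: [[(String, Double)]] -> M.Map String Double
        sumRatings = M.fromListWith (+) . concat

        -- M.toList (sumRatings [[("snake",1.75)], [("snake",0.59)]])
        --   == [("snake",2.34)]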

    Read the article

  • Truly declarative language?

    - by gjvdkamp
    Hi all,

    Does anyone know of a truly declarative language? The behaviour I'm looking for is kind of what Excel does, where I can define variables and formulas and have a formula's result change when the input changes (without having to set the answer again myself). The behaviour I'm looking for is best shown with this pseudo code:

        X = 10      // define and assign two variables
        Y = 20
        Z = X + Y   // declare a formula that uses these two variables
        X = 50      // change one of the input variables
        ?Z          // asking for Z should now give 70 (50 + 20)

    I've tried this in a lot of languages like F#, Python, Matlab etc., but every time I try it they come up with 30 instead of 70. Which is correct from an imperative point of view, but I'm looking for a more declarative behaviour, if you know what I mean. And this is just a very simple calculation; when things get more difficult it should handle stuff like recursion and memoization automagically.

    The code below would obviously work in C#, but it's just so much code for the job; I'm looking for something a bit more to the point without all that technical noise:

        class BlaBla {
            public int X { get; set; } // this used to be even worse before 3.0
            public int Y { get; set; }
            public int Z { get { return X + Y; } }
        }

        static void Main() {
            BlaBla bla = new BlaBla();
            bla.X = 10;
            bla.Y = 20;
            // can't define anything here
            bla.X = 50; // bit pointless here but I'll do it anyway.
            Console.WriteLine(bla.Z); // 70, hurray!
        }

    This just seems like so much code, curly braces and semicolons that add nothing. Is there a language/application (apart from Excel) that does this? Maybe I'm not doing it right in the mentioned languages, or I've completely missed an app that does just this.

    I prototyped a language/application that does this (along with some other stuff) and am thinking of productizing it. I just can't believe it's not there yet. Don't want to waste my time.

    Thanks in advance,
    Gert-Jan
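
    The reason F#, Python, etc. print 30 is that Z = X + Y evaluates the right-hand side once and stores the result. To get spreadsheet behaviour, Z has to remain a formula, i.e. something re-evaluated when asked. A minimal Haskell sketch of that reading, where IORef merely stands in for "a cell you can update":

        import Data.IORef

        main :: IO ()
        main = do
          x <- newIORef (10 :: Int)
          y <- newIORef 20
          -- z is a formula (an action), not a stored number: it reads its
          -- inputs at the moment it is asked for.
          let z = (+) <$> readIORef x <*> readIORef y
          writeIORef x 50
          print =<< z  -- 70, the spreadsheet answer

    Automatic dependency tracking and memoization on top of this idea is what FRP and incremental-computation (self-adjusting computation) libraries provide.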

    Read the article

  • How well does Scala Perform Compared to Java?

    - by Teja Kantamneni
    The question actually says it all. The reason behind this question is that I am about to start a small side project and want to do it in Scala. I have been learning Scala for the past month and am now comfortable working with it. The Scala compiler itself is pretty slow (unless you use fsc). So how well does the compiled code perform on the JVM? I previously worked with Groovy and had sometimes seen it outperform Java. My question is: how well does Scala perform on the JVM compared to Java? I know Scala has some very good features (FP, dynamic lang, statically typed...), but at the end of the day we need performance...

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about its laziness or performance, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (printing the outcome 'feels' about as long in both). Doing a take x on the function only limits the number of functions evaluated, which is probably of limited applicability, and limiting the other parameters by take should be just as lazy in normal juxt.

    Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a composition step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))
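
    A note on the timings: map returns a lazy seq, so the 0.07 ms measures only building it; the work happens later, when the seq is realized (wrapping the call in dorun would time it honestly). The same point in Haskell, where the one-line juxt is lazy by default:

        -- Apply every function in the list to the same argument, lazily.
        juxt :: [a -> b] -> a -> [b]
        juxt fs x = map ($ x) fs

        -- Laziness means unused results are never computed:
        demo :: [Int]
        demo = take 1 (juxt [(+ 1), error "never forced"] 5)  -- [6]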

    Read the article

  • C++ Program Flow: Sockets in an Object and the Main Function

    - by jfm429
    I have a rather tricky problem regarding C++ program flow using sockets. Basically what I have is a simple command-line socket server program that listens on a socket and accepts one connection at a time; when that connection is lost it opens up for further connections. The socket communication system is contained in a class. The class is fully capable of receiving connections and mirroring the received data back to the client. However, the class uses UNIX sockets, which are not object-oriented.

    My problem is that in my main() function I have one line: the one that creates an instance of that object. The object then initializes and waits. But as soon as a connection is gained, the object's initialization function returns, and when that happens, the program quits. How do I somehow wait until this object is deleted before the program quits?

    Summary:

    1. main() creates an instance of the object
    2. The object listens
    3. A connection is received
    4. The object's initialization function returns
    5. main() exits (!)

    What I want is for main() to somehow delay until the object is finished with what it's doing (i.e. it will delete itself) before it quits. Any thoughts?
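
    The question is about C++, but the missing piece is language-independent: main must block on a synchronization primitive that the connection-handling code signals when it is done, rather than returning as soon as construction finishes. A Haskell sketch of that shape (in C++ the analogue would be a condition variable, or simply running the accept loop in main itself):

        import Control.Concurrent

        main :: IO ()
        main = do
          done <- newEmptyMVar
          _ <- forkIO $ do
            putStrLn "accepting connections..."  -- stands in for the socket loop
            putMVar done ()                      -- signal: the server has finished
          takeMVar done  -- main blocks here instead of falling off the end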

    Read the article

  • Haskell - Parsec Parsing <p> element

    - by Martin
    I'm using Text.ParserCombinators.Parsec and Text.XHtml to parse an input like this:

        This is the first paragraph example\n
        with two lines\n
        \n
        And this is the second paragraph\n

    And my output should be:

        <p>This is the first paragraph example\n with two lines\n</p>
        <p>And this is the second paragraph\n</p>

    I defined:

        line = do { t <- manyTill anyChar newline
                  ; return t
                  }

        paragraph = do { t <- many1 line
                       ; return (p << t)
                       }

    But it returns:

        <p>This is the first paragraph example\n with two lines\n\n And this is the second paragraph\n</p>

    What is wrong? Any ideas? Thanks!
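
    The problem is that a blank line still satisfies line: manyTill anyChar newline happily matches zero characters before the newline, so many1 line swallows the blank separator and the second paragraph along with it. One possible fix is to require paragraph lines to be non-blank and treat runs of newlines as separators; a sketch (wrap each chunk with p << as before):

        import Text.ParserCombinators.Parsec

        -- A line with at least one character before its newline.
        nonBlankLine :: Parser String
        nonBlankLine = do
          c  <- noneOf "\n"
          cs <- manyTill anyChar newline
          return (c : cs)

        -- Paragraphs are runs of non-blank lines separated by blank lines.
        paragraphs :: Parser [[String]]
        paragraphs = many1 nonBlankLine `sepEndBy` many1 newline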

    Read the article

  • cons operator (::) in F#

    - by Max
    The :: operator in F# always prepends elements to a list. Is there an operator that appends to the list? I'm guessing that using the @ operator, as in

        [1; 2; 3] @ [4]

    would be less efficient than appending a single element directly.
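
    There is no mirror image of :: because a singly linked list only has cheap access at its head: any append has to copy the whole spine, so xs @ [x] is already as good as a dedicated operator could be. A Haskell sketch of why (F#'s list has the same structure):

        -- "snoc" must rebuild every cons cell in front of the new element,
        -- so it is O(length xs), exactly like xs ++ [x].
        snoc :: [a] -> a -> [a]
        snoc xs x = foldr (:) [x] xs

        -- snoc [1, 2, 3] 4 == [1, 2, 3, 4]

    The usual idiom when building left-to-right is to cons onto an accumulator and reverse once at the end.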

    Read the article

  • (Rails) Assert_Select's Annoying Warnings

    - by CalebHC
    Does anyone know how to make assert_select not output all those nasty HTML warnings during a rake test? You know, like this stuff:

        ignoring attempt to close body with div
          opened at byte 1036, line 5
          closed at byte 5342, line 42
          attributes at open: {"class"=>"inner02"}
          text around open:  "</script>\r\t</head>\r\t<body class=\"inner02"
          text around close: "\t</div>\r\t\t\t</div>\r\t\t</div>\r\t</body>\r</ht"

    Thanks

    Read the article

  • Using MonadPlus in FRP.Reactive.FieldTrip

    - by ony
    I'm studying FRP at the moment through the FieldTrip adaptor, and I've hit a problem with the strange way frames are scheduled and integrated. So now I'm trying to build my own marker Event for aligning Behaviour stepping. So...

        flipflop :: Behavior String
        flipflop = stepper "none" (xflip 2)
          where
            xflip t0 = do t <- withTimeE_ (atTime t0)
                          return "flip" `mplus` xflop (t+3)
            xflop t0 = do t <- withTimeE_ (atTime t0)
                          return "flop" `mplus` xflip (t+2)

        txtGeom = ((uscale2 (0.5::Float) *%) . utext . show <$>)

        main = anim2 (txtGeom . pure flipflop)

    Questions: Why does this example lead to a memory leak? Is there a safe way to build a sequence of events where each next one is scheduled depending on the previous one?

    Read the article

  • Explicit method tables in C# instead of OO - good? bad?

    - by FunctorSalad
    Hi! I hope the title doesn't sound too subjective; I absolutely do not mean to start a debate on OO in general. I'd merely like to discuss the basic pros and cons of different ways of solving the following sort of problem.

    Let's take this minimal example: you want to express an abstract datatype T with functions that may take T as input, output, or both:

        f1 : Takes a T, returns an int
        f2 : Takes a string, returns a T
        f3 : Takes a T and a double, returns another T

    I'd like to avoid downcasting and any other dynamic typing. I'd also like to avoid mutation whenever possible.

    1: Abstract-class-based attempt

        abstract class T {
            abstract int f1();

            // We can't have abstract constructors, so the best we can do, as I see it, is:
            abstract void f2(string s);
            // The convention would be that you'd replace calls to the original f2 by
            // invocation of the nullary constructor of the implementing type, followed
            // by invocation of f2. f2 would need to have side-effects to be of any use.

            // f3 is a problem too:
            abstract T f3(double d);
            // This doesn't express that the return value is of the *same* type as the
            // object whose method is invoked; it just expresses that the return value
            // is *some* T.
        }

    2: Parametric polymorphism and an auxiliary class (all implementing classes of TImpl will be singleton classes):

        abstract class TImpl<T> {
            abstract int f1(T t);
            abstract T f2(string s);
            abstract T f3(T t, double d);
        }

    We no longer express that some concrete type actually implements our original spec; an implementation is simply a type Foo for which we happen to have an instance of TImpl. This doesn't seem to be a problem: if you want a function that works on arbitrary implementations, you just do something like:

        // Say we want to return a Bar given an arbitrary implementation of our abstract type
        Bar bar<T>(TImpl<T> ti, T t);

    At this point, one might as well skip inheritance and singletons altogether and use a

    3: First-class function table

        class /* or struct, even */ TDict<T> {
            readonly Func<T,int> f1;
            readonly Func<string,T> f2;
            readonly Func<T,double,T> f3;

            TDict( ... ) { this.f1 = f1; this.f2 = f2; this.f3 = f3; }
        }

        Bar bar<T>(TDict<T> td, T t);

    Though I don't see much practical difference between #2 and #3.

    Example implementation:

        class MyT { /* raw data structure goes here; this class needn't have any methods */ }

        // It doesn't matter where we put the following; it could be a static method
        // of MyT, or live in some static class collecting dictionaries.
        static readonly TDict<MyT> MyTDict = new TDict<MyT>(
            (t)    => /* body of f1 goes here */,
            (s)    => /* body of f2 goes here */,
            (t, d) => /* body of f3 goes here */
        );

    Thoughts? #3 is unidiomatic, but it seems rather safe and clean. One question is whether there are any performance concerns with it. I don't usually need dynamic dispatch, and I'd prefer these function bodies to be statically inlined in places where the concrete implementing type is known statically. Is #2 better in that regard?
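
    Option #3 is not unidiomatic everywhere: a record of functions is precisely what a Haskell type class compiles down to (GHC passes such a "dictionary" as a hidden argument), so the design has a long pedigree. A sketch of the correspondence:

        -- The question's TDict<T>, as a plain record of functions; a Haskell
        -- type class with these members would desugar to the same thing.
        data TDict t = TDict
          { f1 :: t -> Int
          , f2 :: String -> t
          , f3 :: t -> Double -> t
          }

        -- Code generic over any implementation just takes the record:
        bar :: TDict t -> t -> Int
        bar d t = f1 d (f3 d t 2.0)

    On the performance worry: in GHC, at least, statically known dictionaries are routinely specialized and inlined away; whether the .NET JIT does the same for readonly delegate fields is exactly the thing to benchmark.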

    Read the article
