Search Results

Search found 762 results on 31 pages for 'haskell'.

Page 14 of 31

  • How to functionally generate a tree breadth-first (with Haskell)

    - by Dennetik
    Say I have the following Haskell tree type, where "State" is a simple wrapper:

        data Tree a = Branch (State a) [Tree a]
                    | Leaf (State a)
                    deriving (Eq, Show)

    I also have a function "expand :: Tree a -> Tree a" which takes a leaf node and expands it into a branch, or takes a branch and returns it unaltered. This tree type represents an N-ary search tree. Searching depth-first is a waste, as the search space is obviously infinite: I can easily keep expanding it by applying expand to all the tree's leaf nodes, and the chances of accidentally missing the goal state are huge. Thus the only solution is a breadth-first search, implemented pretty decently over here, which will find the solution if it's there. What I want to generate, though, is the tree traversed up to finding the solution. This is a problem because I only know how to do this depth-first, which could be done by simply calling the "expand" function again and again on the first child node... until a goal state is found. (This would really not generate anything other than a really uncomfortable list.) Could anyone give me any hints on how to do this (or an entire algorithm), or a verdict on whether or not it's possible with a decent complexity? (Or any sources on this, because I found rather few.)
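
    A minimal sketch of one level-by-level approach, reusing the question's Tree type and expand function; isGoal is an assumed goal test on states, and the loop diverges if no goal is reachable:

        bfsTree :: (State a -> Bool) -> (Tree a -> Tree a) -> Tree a -> Tree a
        bfsTree isGoal expand = go
          where
            -- stop as soon as some frontier state is a goal,
            -- returning the tree exactly as traversed so far
            go t | any isGoal (frontier t) = t
                 | otherwise               = go (expandLeaves t)

            -- the states at the leaves form the current BFS frontier
            frontier (Leaf s)      = [s]
            frontier (Branch _ ts) = concatMap frontier ts

            -- expanding every leaf once advances the whole tree one level
            expandLeaves leaf@(Leaf _) = expand leaf
            expandLeaves (Branch s ts) = Branch s (map expandLeaves ts)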

    Read the article

  • Can Haskell's monads be thought of as using and returning a hidden state parameter?

    - by AJM
    I don't understand the exact algebra and theory behind Haskell's monads. However, when I think about functional programming in general, I get the impression that state would be modelled by taking an initial state and generating a copy of it to represent the next state. This is like when one list is appended to another: neither list gets modified, but a third list is created and returned. Is it therefore valid to think of monadic operations as implicitly taking an initial state object as a parameter and implicitly returning a final state object? These state objects would be hidden so that the programmer doesn't have to worry about them and to control how they get accessed. So, the programmer would not try to copy the object representing the IO stream as it was ten minutes ago. In other words, if we have this code:

        main = do
            putStrLn "Enter your name:"
            name <- getLine
            putStrLn ("Hello " ++ name)

    ...is it OK to think of the IO monad and the "do" syntax as representing this style of code?

        putStrLn :: IOState -> String -> IOState
        getLine :: IOState -> (IOState, String)
        main :: IOState -> IOState

        -- main returns an IOState we can call "state3"
        main state0 = putStrLn state2 ("Hello " ++ name)
          where (state2, name) = getLine state1
                state1 = putStrLn state0 "Enter your name:"
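
    For intuition, that reading can be written down as a toy type. This is a sketch only, with World as a hypothetical stand-in for "the state of the world" (GHC's real IO is built around a similar hidden state token, but you never get to touch it):

        data World = World  -- opaque token, never inspected

        newtype IO' a = IO' (World -> (a, World))

        -- sequencing threads the world through both actions
        bindIO' :: IO' a -> (a -> IO' b) -> IO' b
        bindIO' (IO' m) f = IO' $ \w0 ->
            let (a, w1) = m w0
                IO' m'  = f a
            in m' w1

        -- returning a value leaves the world untouched
        returnIO' :: a -> IO' a
        returnIO' a = IO' $ \w -> (a, w)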

    Read the article

  • Haskell function composition (.) and function application ($) idioms: correct use.

    - by Robert Massaioli
    I have been reading Real World Haskell and I am nearing the end, but a matter of style has been niggling at me to do with the (.) and ($) operators. When you write a function that is a composition of other functions, you write it like:

        f = g . h

    But when you apply something to the end of those functions, I write it like this:

        k = a $ b $ c $ value

    But the book would write it like this:

        k = a . b . c $ value

    Now to me they look functionally equivalent; they do the exact same thing in my eyes. However, the more I look, the more I see people writing their functions in the manner that the book does: compose with (.) first and then only at the end use ($) to append a value to evaluate the lot (nobody does it with many dollar compositions). Is there a reason for using the book's way that is much better than using all ($) symbols? Or is there some best practice here that I am not getting? Or is it superfluous and I shouldn't be worrying about it at all? Thanks.
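
    One concrete difference, shown in a small sketch: the (.) chain is a value in its own right, so it can be named and reused, while a chain of ($) is only ever an application:

        k1, k2 :: Int
        k1 = negate $ abs $ subtract 10 $ 4   -- negate (abs (subtract 10 4))
        k2 = negate . abs . subtract 10 $ 4   -- (negate . abs . subtract 10) 4

        -- the composition itself can be factored out; ($) alone can't do this
        pipeline :: Int -> Int
        pipeline = negate . abs . subtract 10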

    Read the article

  • In Haskell, why do I need to specify type constraints? Why can't the compiler figure them out?

    - by Steve
    Consider the function:

        add a b = a + b

    This works:

        *Main> add 1 2
        3

    However, if I add a type signature specifying that I want to add things of the same type:

        add :: a -> a -> a
        add a b = a + b

    I get an error:

        test.hs:3:10:
            Could not deduce (Num a) from the context ()
              arising from a use of `+' at test.hs:3:10-14
            Possible fix:
              add (Num a) to the context of the type signature for `add'
            In the expression: a + b
            In the definition of `add': add a b = a + b

    So GHC clearly can deduce that I need the Num type constraint, since it just told me so. This:

        add :: Num a => a -> a -> a
        add a b = a + b

    works. Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator? In C++ template programming, you can do this easily:

        #include <string>
        #include <cstdio>

        using namespace std;

        template<typename T>
        T add(T a, T b) { return a + b; }

        int main() {
            printf("%d, %f, %s\n",
                   add(1, 2),
                   add(1.0, 3.4),
                   add(string("foo"), string("bar")).c_str());
            return 0;
        }

    The compiler figures out the types of the arguments to add and generates a version of the function for that type. There seems to be a fundamental difference in Haskell's approach; can you describe it and discuss the trade-offs? It seems to me like it would be resolved if GHC simply filled in the type constraint for me, since it obviously decided it was needed. Still, why the type constraint at all? Why not just compile successfully as long as the function is only used in a valid context where the arguments are in Num? Thank you.
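
    A rough sketch of why the constraint must appear in the type: GHC compiles add once, and a constraint like Num a becomes an extra, implicit "dictionary" argument carrying the operations, resolved at each call site. Made explicit (NumDict and addWith are illustrative names, not GHC internals), it looks like this:

        -- the constraint, reified as an ordinary record of operations
        data NumDict a = NumDict { plus :: a -> a -> a }

        -- `Num a => a -> a -> a` is morally this:
        addWith :: NumDict a -> a -> a -> a
        addWith d a b = plus d a b

        intDict :: NumDict Int
        intDict = NumDict (+)

        -- addWith intDict 1 2 == 3

    C++ templates instead re-typecheck and re-instantiate add for every argument type at the call site, which is why no constraint needs to be declared up front.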

    Read the article

  • How does Haskell do pattern matching without us defining an Eq on our data types?

    - by devoured elysium
    I have defined a binary tree:

        data Tree = Null | Node Tree Int Tree

    and have implemented a function that'll yield the sum of the values of all its nodes:

        sumOfValues :: Tree -> Int
        sumOfValues Null = 0
        sumOfValues (Node Null v Null) = v
        sumOfValues (Node Null v t2) = v + (sumOfValues t2)
        sumOfValues (Node t1 v Null) = v + (sumOfValues t1)
        sumOfValues (Node t1 v t2) = v + (sumOfValues t1) + (sumOfValues t2)

    It works as expected. I had the idea of also trying to implement it using guards:

        sumOfValues2 :: Tree -> Int
        sumOfValues2 Null = 0
        sumOfValues2 (Node t1 v t2)
            | t1 == Null && t2 == Null = v
            | t1 == Null = v + (sumOfValues2 t2)
            | t2 == Null = v + (sumOfValues2 t1)
            | otherwise = v + (sumOfValues2 t1) + (sumOfValues2 t2)

    but this one doesn't work, because I haven't implemented Eq, I believe:

        No instance for (Eq Tree)
          arising from a use of `==' at zzz3.hs:13:3-12
        Possible fix: add an instance declaration for (Eq Tree)
        In the first argument of `(&&)', namely `t1 == Null'
        In the expression: t1 == Null && t2 == Null
        In a stmt of a pattern guard for
          the definition of `sumOfValues2': t1 == Null && t2 == Null

    The question, then, is: how can Haskell pattern-match without knowing whether a passed argument matches, without resorting to Eq?
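
    Pattern matching needs no Eq because it only inspects constructor tags structurally (Null vs Node), never user-defined equality. So one fix for the guard version is simply to derive Eq (data Tree = Null | Node Tree Int Tree deriving (Eq)); but note the special cases are redundant anyway, since sumOfValues Null = 0 already covers them:

        sumOfValues' :: Tree -> Int
        sumOfValues' Null           = 0
        sumOfValues' (Node t1 v t2) = v + sumOfValues' t1 + sumOfValues' t2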

    Read the article

  • Are there any purely functional Schemes or Lisps?

    - by nickname
    Over the past few months, I've put a lot of effort into learning (or attempting to learn) several functional programming languages. I really like math, so they have been very natural for me to use. To be more specific, I have tried Common Lisp, Scheme, Haskell, OCaml, and (a little bit of) Erlang. I did not like the syntax of OCaml and do not have enough Erlang knowledge to make a judgment on it yet. Because of its consistent and beautiful (non-)syntax, I really like Scheme. However, I really do appreciate the stateless nature of purely functional programming languages such as Haskell. Haskell looks very interesting, but the amount of inconsistent and non-extensible syntax really bothered me. In the interest of preventing a Lisp vs Haskell flame war, just pretend that I can't use Haskell for some other reason. Therefore, my question is: are there any purely functional Schemes (or Lisps in general)?

    Read the article

  • Best way to convert between [Char] and [Word8]?

    - by cmars232
    I'm new to Haskell and I'm trying to use a pure SHA1 implementation in my app (Data.Digest.Pure.SHA) with a JSON library (AttoJSON). AttoJSON uses Data.ByteString.Char8 bytestrings, SHA uses Data.ByteString.Lazy bytestrings, and some of my string literals in my app are [Char]. This article seems to indicate this is something still being worked out in the Haskell language/Prelude: http://hackage.haskell.org/trac/haskell-prime/wiki/CharAsUnicode And this one lists a few libraries, but it's a couple of years old: http://blog.kfish.org/2007/10/survey-haskell-unicode-support.html What is the current best way to convert between these types, and what are some of the trade-offs? I don't want to pick something that is obsolete... Thanks!
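
    A sketch of the common conversions between these types. Note that the Char8 modules simply truncate each Char to 8 bits, so they are only safe for ASCII/Latin-1 data; for real Unicode, encode to UTF-8 first (e.g. with the utf8-string package):

        import qualified Data.ByteString       as S
        import qualified Data.ByteString.Char8 as SC
        import qualified Data.ByteString.Lazy  as L

        -- String -> strict ByteString (truncates non-Latin-1 characters!)
        strictFromString :: String -> S.ByteString
        strictFromString = SC.pack

        -- strict -> lazy is cheap: a one-chunk lazy string
        lazyFromStrict :: S.ByteString -> L.ByteString
        lazyFromStrict s = L.fromChunks [s]

        -- lazy -> strict forces and copies the whole string
        strictFromLazy :: L.ByteString -> S.ByteString
        strictFromLazy = S.concat . L.toChunks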

    Read the article

  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
    I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this:

        data ShowBox = forall s. Show s => SB s

    (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used, and I simply cannot wrap my mind around its use in things like this:

        runST :: forall a. (forall s. ST s a) -> a

    Or explaining why these are different:

        foo :: (forall a. a -> a) -> (Char,Bool)
        bar :: forall a. ((a -> a) -> (Char, Bool))

    Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.) The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt into practice. I also tend to prefer clear, jargon-free English rather than the kinds of language which are normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems:

    - They're incomplete. They explain one part of the use of this keyword (like "existential types") which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above).
    - They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.)
    - They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics.

    (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much a dullard to comprehend them.) It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience in figuring out one stupid little keyword -- a keyword that is increasingly ubiquitous in the libraries being written these days -- is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure. So... on to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?
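
    Not a complete explanation, but a sketch of the foo/bar difference above that can be checked in GHC (with RankNTypes enabled): the position of the forall decides who chooses the type.

        {-# LANGUAGE RankNTypes #-}

        -- here foo's *body* chooses `a`, as often as it likes,
        -- so the argument must work at every type:
        foo :: (forall a. a -> a) -> (Char, Bool)
        foo f = (f 'c', f True)   -- f used at Char and at Bool

        -- here bar's *caller* fixes one `a` up front, so the argument
        -- only has to work at that single type:
        bar :: forall a. ((a -> a) -> (Char, Bool))
        bar g = ('c', True)

    So foo id type-checks because id really is polymorphic, while bar merely receives a function at whatever single type its caller picked. runST's type uses the same trick to keep the state thread s from escaping.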

    Read the article

  • Critique of the IO monad being viewed as a state monad operating on the world

    - by Petr Pudlák
    The IO monad in Haskell is often explained as a state monad where the state is the world. So a value of type IO a is viewed as something like worldState -> (a, worldState). Some time ago I read an article (or a blog/mailing-list post) that criticized this view and gave several reasons why it's not correct. But I can remember neither the article nor the reasons. Anybody know? Edit: The article seems lost, so let's start gathering various arguments here. I'm starting a bounty to make things more interesting.

    Read the article

  • Is there any practical use for the empty type in Common Lisp?

    - by Pedro Rodrigues
    The Common Lisp spec states that nil is the name of the empty type, but I've never found any situation in Common Lisp where I felt like the empty type was useful or necessary. Is it there just for completeness' sake (and removing it wouldn't cause any harm to anyone)? Or is there really some practical use for the empty type in Common Lisp? If yes, then I would prefer an answer with a code example. For example, in Haskell the empty type can be used when binding foreign data structures, to make sure that no one tries to create values of that type without using the data structure's foreign interface (although in this case, the type is not really empty).
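
    A sketch of the Haskell idiom mentioned above, using an empty (constructor-less) data type as a phantom tag for foreign data; CDatabase and db_open are hypothetical names:

        {-# LANGUAGE EmptyDataDecls, ForeignFunctionInterface #-}
        import Foreign.Ptr (Ptr)

        data CDatabase   -- no constructors: no Haskell code can make one

        -- only the foreign interface can produce such a pointer:
        foreign import ccall "db_open" dbOpen :: IO (Ptr CDatabase)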

    Read the article

  • Naming conventions for newtype deconstructors (destructors?)

    - by Petr Pudlák
    Looking into Haskell's standard library, we can see:

        newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }
        newtype WrappedMonad m a = WrapMonad { unwrapMonad :: m a }
        newtype Sum a = Sum { getSum :: a }

    Apparently, there are (at least) 3 different prefixes used to unwrap a value inside a newtype: un-, run- and get-. (Moreover, run- and get- capitalize the next letter while un- doesn't.) This seems confusing. Are there any reasons for that, or is it just a historical thing? If I design my own newtype, what prefix should I use and why?

    Read the article

  • Why isn't functional language syntax closer to human language?

    - by JohnDoDo
    I'm interested in functional programming and decided to go head to head with Haskell. My head hurts... but I'll eventually get it... I have one curiosity though: why is the syntax so cryptic (for lack of a better word)? Is there a reason why it isn't more expressive, closer to human language? I understand that FP is good at modelling mathematical concepts and it borrowed some of their concise means of expression, but still, it's not math... it's a language.

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding:

    - One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even actually possible to express the specs in runnable form in a separate way that adds value?
    - The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis. Type-safe functional code is a good tool for coding extremely close to what your static analyzer understands; however, a simple mistake like using x instead of y (both being coordinates) in your code cannot be covered. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort (see the property-test sketch after this question).
    - Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead of course is about constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't compared to the benefits, but given how statically typed functional programming covers a lot of the ground unit tests are intended for, it feels like a constant overhead one can simply remove without penalty.

    From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests and why (what quality is it you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer, and hence needing no unit test coverage?
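
    On the x-instead-of-y point: property-based tests are the usual complement to the type checker here. A sketch with QuickCheck, where dist is a hypothetical function under test:

        import Test.QuickCheck

        dist :: (Double, Double) -> (Double, Double) -> Double
        dist (x1, y1) (x2, y2) = sqrt ((x2 - x1)^2 + (y2 - y1)^2)

        -- along the x-axis, distance must equal the coordinate difference;
        -- an accidental x/y swap inside dist breaks this immediately
        prop_xAxis :: Double -> Double -> Bool
        prop_xAxis x1 x2 =
            abs (dist (x1, 0) (x2, 0) - abs (x2 - x1))
                <= 1e-9 * (1 + abs (x2 - x1))

        -- run with: quickCheck prop_xAxis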

    Read the article

  • Functional programming and stateful algorithms

    - by bigstones
    I'm learning functional programming with Haskell. In the meantime I'm studying automata theory, and as the two seem to fit well together, I'm writing a small library to play with automata. Here's the problem that made me ask the question. While studying a way to evaluate a state's reachability, I got the idea that a simple recursive algorithm would be quite inefficient, because some paths might share some states and I might end up evaluating them more than once. For example, here, evaluating the reachability of g from a, I'd have to exclude f both while checking the path through d and through c. So my idea is that an algorithm working in parallel on many paths and updating a shared record of excluded states might be great, but that's too much for me. I've seen that in some simple recursion cases one can pass state as an argument, and that's what I have to do here, because I pass forward the list of states I've gone through to avoid loops. But is there a way to pass that list also backwards, like returning it in a tuple together with the boolean result of my canReach function? (Although this feels a bit forced.) Besides the validity of my example case, what other techniques are available to solve this kind of problem? I feel like these must be common enough that there have to be solutions, like what happens with fold* or map. So far, reading learnyouahaskell.com, I haven't found any, but consider that I haven't touched monads yet. (If interested, I posted my code on codereview.)
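
    A sketch of exactly that "pass the list backwards" idea, using the State monad to thread a shared set of visited states through every branch; next is a hypothetical successor function for the automaton:

        import           Control.Monad.State
        import qualified Data.Set as Set

        canReach :: Ord s => (s -> [s]) -> s -> s -> Bool
        canReach next goal start = evalState (go start) Set.empty
          where
            go s
              | s == goal = return True
              | otherwise = do
                  seen <- get
                  if s `Set.member` seen
                    then return False        -- already explored via another path
                    else do
                      modify (Set.insert s)  -- visible to all later branches
                      results <- mapM go (next s)
                      return (or results)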

    Read the article

  • Is LINQ to objects a collection of combinators?

    - by Jimmy Hoffa
    I was just trying to explain the usefulness of combinators to a colleague, and I told him LINQ to objects are like combinators, as they exhibit the same value: the ability to combine small pieces to create a single large piece. Though I don't know that I can call LINQ to objects combinators. I've seen 2 levels of definition for combinator, which I generalize as such:

    - A combinator is a function which only uses things passed to it.
    - A combinator is a function which only uses things passed to it and other standard atomic functions, but not state.

    The first is very rigid and can be seen in the combinatory calculus systems; in Haskell, things like ($) and (.) and various similar functions meet this rule. The second is less rigid and would allow something like sum, as it uses the + function, which was not passed in but is standard and not stateful. The LINQ extensions in C#, though, use state in their iteration models, so I feel I can't say they're combinators. Can someone who understands the definition of a combinator more thoroughly, and with more experience in these realms, give a distinct ruling on this? Are my definitions of 'combinator' wrong to begin with?

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming, in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software:

    - Functional programming languages, e.g. Haskell, Scala, etc.
    - The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages.

    My questions:

    - What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored, or are there well-established practices already?
    - Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices?
    - How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multicore processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multicore processor due to higher parallelism?

    Read the article

  • How do you encode Algebraic Data Types in a C#- or Java-like language?

    - by Jörg W Mittag
    There are some problems which are easily solved by Algebraic Data Types; for example, a List type can be very succinctly expressed as:

        data ConsList a = Empty | ConsCell a (ConsList a)

        consmap f Empty = Empty
        consmap f (ConsCell a b) = ConsCell (f a) (consmap f b)

        l = ConsCell 1 (ConsCell 2 (ConsCell 3 Empty))
        consmap (+1) l

    This particular example is in Haskell, but it would be similar in other languages with native support for Algebraic Data Types. It turns out that there is an obvious mapping to OO-style subtyping: the datatype becomes an abstract base class, and every data constructor becomes a concrete subclass. Here's an example in Scala:

        sealed abstract class ConsList[+T] {
          def map[U](f: T => U): ConsList[U]
        }

        object Empty extends ConsList[Nothing] {
          override def map[U](f: Nothing => U) = this
        }

        final class ConsCell[T](first: T, rest: ConsList[T]) extends ConsList[T] {
          override def map[U](f: T => U) = new ConsCell(f(first), rest.map(f))
        }

        val l = new ConsCell(1, new ConsCell(2, new ConsCell(3, Empty)))
        l.map(1+)

    The only thing needed beyond naive subclassing is a way to seal classes, i.e. a way to make it impossible to add subclasses to a hierarchy. How would you approach this problem in a language like C# or Java? The two stumbling blocks I found when trying to use Algebraic Data Types in C# were:

    - I couldn't figure out what the bottom type is called in C# (i.e. I couldn't figure out what to put into class Empty : ConsList< ??? >)
    - I couldn't figure out a way to seal ConsList so that no subclasses can be added to the hierarchy

    What would be the most idiomatic way to implement Algebraic Data Types in C# and/or Java? Or, if it isn't possible, what would be the idiomatic replacement?

    Read the article

  • Why are transaction monitors in decline? Or are they?

    - by mrkafk
    http://www.itjobswatch.co.uk/jobs/uk/cics.do
    http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do

    Look at the demand for programmers (% of job ads the keyword appears in), in the first graph under the table. It seems like demand for CICS and Tuxedo has fallen from 2.5% and 1% respectively to almost zero. To me, this seems bizarre: we now have more networked and internet-enabled machines than ever before, and most of them are talking to some kind of database. So it would seem that use of products whose developers spent the last 20-30 years working on distributing, coordinating and optimizing transactions should be on the rise. And it appears they're not. I can see a few possible causes but can't tell whether they are true:

    - We forgot that concurrency and distribution are really hard, and we are redoing it all by ourselves, in Java, badly.
    - Erlang killed them all.
    - Projects nowadays have changed character: most business software has already been built, and we're all doing internet services, using stuff like Node.js, Erlang, Haskell. (I've used RabbitMQ, which is written in Erlang, but it was a "small specialized side project" kind of thing.)
    - BigData is the emphasis now, and BigData doesn't need transactions very much (?).

    None of these explanations seem particularly convincing to me, which is why I'm looking for a better one. Anyone?

    Read the article

  • How to write Haskell function to verify parentheses matching?

    - by Rizo
    I need to write a function par :: String -> Bool to verify whether a given string of parentheses is matched, using a stack module. For example:

        par "(((()[()])))" = True
        par "((]())" = False

    Here's my stack module implementation:

        module Stack (Stack, push, pop, top, empty, isEmpty) where

        data Stack a = Stk [a] deriving (Show)

        push :: a -> Stack a -> Stack a
        push x (Stk xs) = Stk (x:xs)

        pop :: Stack a -> Stack a
        pop (Stk (_:xs)) = Stk xs
        pop _ = error "Stack.pop: empty stack"

        top :: Stack a -> a
        top (Stk (x:_)) = x
        top _ = error "Stack.top: empty stack"

        empty :: Stack a
        empty = Stk []

        isEmpty :: Stack a -> Bool
        isEmpty (Stk []) = True
        isEmpty (Stk _)  = False

    So I need to implement a par function that tests a string of parentheses and says whether the parentheses in it match or not. How can I do that using a stack?
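
    A sketch of par built on the Stack module above: push every opener, and on each closer require that the matching opener is on top.

        import Stack

        par :: String -> Bool
        par = go empty
          where
            go stk []     = isEmpty stk       -- matched iff nothing left open
            go stk (c:cs)
              | c == '(' || c == '[' = go (push c stk) cs
              | c == ')'             = close '(' stk cs
              | c == ']'             = close '[' stk cs
              | otherwise            = go stk cs
            close open stk cs =
                not (isEmpty stk) && top stk == open && go (pop stk) cs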

    Read the article

  • Why is Haskell used so little in the industry?

    - by bugspy.net
    It is a wonderful, very fast, mature and complete language. It has existed for a very long time and has a big set of libraries. Yet, it appears not to be widely used. Why? I suspect it is because it is pretty rough and unforgiving for beginners, and maybe because its lazy evaluation makes it even harder.

    Read the article

  • In Haskell, will calling length on a Lazy ByteString force the entire string into memory?

    - by me2
    I am reading a large data stream using lazy ByteStrings, and want to know whether at least X more bytes are available while parsing it. That is, I want to know whether the ByteString is at least X bytes long. Will calling length on it result in the entire stream getting loaded, hence defeating the purpose of using a lazy ByteString? If yes, then the follow-up would be: how do I tell whether it has at least X bytes without loading the entire stream? EDIT: Originally I asked in the context of reading files, but I understand that there are better ways to determine file size. The ultimate solution I need, however, should not depend on the lazy ByteString's source.
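
    On the follow-up: length does force the whole stream, but take doesn't, so a check like this sketch demands at most X bytes' worth of chunks:

        import           Data.Int (Int64)
        import qualified Data.ByteString.Lazy as L

        -- True iff the stream has at least n bytes, forcing no more than n
        hasAtLeast :: Int64 -> L.ByteString -> Bool
        hasAtLeast n bs = L.length (L.take n bs) == n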

    Read the article

  • In Haskell, what does it mean if a binding "shadows an existing binding"?

    - by Alistair
    I'm getting a warning from GHC when I compile:

        Warning: This binding for 'pats' shadows an existing binding
                 in the definition of 'match_ignore_ancs'

    Here's the function:

        match_ignore_ancs (TextPat _ c) (Text t) = c t
        match_ignore_ancs (TextPat _ _) (Element _ _ _) = False
        match_ignore_ancs (ElemPat _ _ _) (Text t) = False
        match_ignore_ancs (ElemPat _ c pats) (Element t avs xs) = c t avs && match_pats pats xs

    Any idea what this means and how I can fix it? Cheers.
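
    The warning means the pats bound in that last equation hides another binding named pats that is already in scope, presumably a top-level definition elsewhere in the module. It is harmless, but one way to silence it is to rename the local binding, e.g.:

        match_ignore_ancs (ElemPat _ c ps) (Element t avs xs) = c t avs && match_pats ps xs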

    Read the article

  • What is wrong with my definition of Zip in Haskell?

    - by kunjaan
        -- e.g. myzip ['a', 'b', 'c'] [1, 2, 3, 4] -> [('a', 1), ('b', 2), ('c', 3)]
        myzip :: Ord a => [a] -> [a] -> [(a,a)]
        myzip list1 list2 = [(x,y) | [x, _] <- list1, [y, _] <- list2]

    I get this error message:

        Occurs check: cannot construct the infinite type: a = [a]
        When generalising the type(s) for `myzip'
        Failed, modules loaded: none.
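
    For contrast, a standard definition: in the comprehension above, the patterns [x, _] and [y, _] try to match individual list elements against two-element lists, which is where the infinite type a = [a] comes from (and the two generators would pair every x with every y anyway). Explicit recursion walks the lists in lockstep:

        myzip :: [a] -> [b] -> [(a, b)]
        myzip (x:xs) (y:ys) = (x, y) : myzip xs ys
        myzip _      _      = []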

    Read the article

  • What's the way to determine if an Int is a perfect square in Haskell?

    - by valya
    I need a simple function

        is_square :: Int -> Bool

    which determines whether an Int N is a perfect square (is there an integer x such that x*x = N). Of course I can just write something like

        is_square n = sq * sq == n
            where sq = floor $ sqrt $ (fromIntegral n :: Double)

    but it looks terrible! Maybe there is a common, simple way to implement such a predicate?
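
    A sketch that hardens the Double version slightly: it rejects negatives and checks the neighbours of the rounded root, guarding against sqrt landing just under or over the true root for large inputs:

        is_square :: Int -> Bool
        is_square n
          | n < 0     = False
          | otherwise = any (\x -> x * x == n) [sq - 1, sq, sq + 1]
          where
            sq = floor (sqrt (fromIntegral n :: Double))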

    Read the article

  • Why do compiled Haskell libraries see invalid static FFI storage?

    - by John Millikin
    I am using GHC 6.12.1, on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly. Compiled modules have invalid pointers, and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue. Given the following three modules:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C

        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a " siglist_a
            peekSiglist "b " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, where all pointer values are identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a [2] = Just "Interrupt"
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a [2] = Nothing
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article
