Search Results

Search found 762 results on 31 pages for 'haskell'.

Page 24/31 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • Serialization of a TChan String

    - by J Fritsch
    I have declared the following: type KEY = (IPv4, Integer) type TPSQ = TVar (PSQ.PSQ KEY POSIXTime) type TMap = TVar (Map.Map KEY [String]) data Qcfg = Qcfg { qthresh :: Int, tdelay :: Rational, cwpsq :: TPSQ, cwmap :: TMap, cwchan :: TChan String } deriving (Show) and would like this to be serializable, in the sense that a Qcfg can either be written to disk or sent over the network. When I compile this I get the error: No instances for (Show TMap, Show TPSQ, Show (TChan String)) arising from the 'deriving' clause of a data type declaration. Possible fix: add instance declarations for (Show TMap, Show TPSQ, Show (TChan String)), or use a standalone 'deriving instance' declaration so you can specify the instance context yourself. When deriving the instance for (Show Qcfg), I am now not quite sure whether there is any chance of serializing my TChan at all, even though all the individual elements in it are members of the Show class. For TMap and TPSQ, I wonder whether there are ways to show the values in the TVar directly (they do not get changed, so there should be no need to lock them) without having to declare an instance that does a readTVar.
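
    One common workaround, sketched below on the assumption that the real goal is a disk/network snapshot rather than Show itself (the QcfgSnapshot type and its field names are hypothetical, not from the question): copy the mutable state atomically into a plain record that has ordinary Show/Read instances, draining the TChan in the same STM transaction.

        import Control.Concurrent.STM

        -- Trimmed version of the question's record: just the fields needed here.
        data Qcfg = Qcfg { qthresh :: Int, tdelay :: Rational, cwchan :: TChan String }

        -- Plain, serializable snapshot (hypothetical type).
        data QcfgSnapshot = QcfgSnapshot
          { sQthresh :: Int
          , sTdelay  :: Rational
          , sMsgs    :: [String]   -- messages drained from the TChan
          } deriving (Show, Read)

        -- Atomically drain the channel and copy the plain fields.
        -- Note: readTChan removes messages, so this empties the channel.
        snapshot :: Qcfg -> STM QcfgSnapshot
        snapshot q = QcfgSnapshot (qthresh q) (tdelay q) <$> drain (cwchan q)
          where
            drain ch = do
              done <- isEmptyTChan ch
              if done then pure [] else (:) <$> readTChan ch <*> drain ch

    A caller in IO could then do atomically (snapshot cfg) >>= writeFile "qcfg.dump" . show; the TVar-held PSQ and Map could be copied into the snapshot the same way with readTVar.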

    Read the article

  • C++11 support for higher-order list functions

    - by Giorgio
    Most functional programming languages (e.g. Common Lisp, Scheme / Racket, Clojure, Haskell, Scala, OCaml, SML) support some common higher-order functions on lists, such as map, filter, takeWhile, dropWhile, foldl, foldr (see e.g. the Common Lisp, Scheme / Racket, Clojure side-by-side reference sheet, and the Haskell, Scala, OCaml, and SML documentation). Does C++11 have equivalent standard methods or functions on lists? For example, consider the following Haskell snippet: let xs = [1, 2, 3, 4, 5] let ys = map (\x -> x * x) xs How can I express the second expression in modern standard C++? std::list<int> xs = ... // Initialize the list in some way. std::list<int> ys = ??? // How to translate the Haskell expression? What about the other higher-order functions mentioned above? Can they be directly expressed in C++?

    Read the article

  • Introducing functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately and, since I haven't found a convincing answer to it, I would like to know if other users of this site have thought about it as well. In recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?), but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest in FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has better support for FP? For example, why should I be interested in having lambdas in Java if I can switch to Scala, where I have many more FP concepts and can access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java? So, my main point is: why are non-functional programming languages being extended with functional concepts? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for them than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Or can it be that many non-FP programmers are not really interested in understanding and using functional programming, but find it practically convenient to adopt certain FP idioms in their non-FP language?
    IMPORTANT NOTE: Just in case (because I have seen several language wars on this site): I mentioned the languages I know best, and this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it, even though there exist languages that were / are specifically designed to support FP.

    Read the article

  • Functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately and, since I haven't found a convincing answer to it, I would like to know if other users of this site have thought about it as well. In recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?), but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest in FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has better support for FP? For example, why should I be interested in having lambdas in Java if I can switch to Scala, where I have many more FP concepts and can access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java? So, my main point is: why are non-functional programming languages being extended with functional concepts? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for them than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Also, I asked myself whether my impression is just plain wrong due to a lack of knowledge. E.g., do C# and C++11 support FP as extensively as, say, Scala or Caml do? In that case, my question would simply not arise. Or can it be that many non-FP programmers are not really interested in using functional programming, but find it practically convenient to adopt certain FP idioms in their non-FP language?
    IMPORTANT NOTE: Just in case (because I have seen several language wars on this site): I mentioned the languages I know best, and this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it, even though there exist languages that were / are specifically designed to support FP.

    Read the article

  • How do you keep down your urge to learn many things [closed]

    - by devsundar
    One of the difficulties I have is lowering my urge to learn new things (languages, tools, frameworks, etc.). I know it's good to stay on the bleeding edge, but at the same time I want to learn things properly. I really see that I need to strike a balance between staying on the bleeding edge and knowing things properly. For example: before choosing Arch (desktop), Ubuntu (server) and Knoppix (portable) as my favourite distributions, depending on the situation, I had tried virtually all the popular Linux distributions. You name any popular Linux distribution (Red Hat, Ubuntu, Arch, SUSE, Knoppix, Slax, Slackware) and I have tried it for some time. In fact I have spent a few years experimenting with operating systems. Before choosing Python and JavaScript (Node.js), I tried all the languages I came across: Scala, Haskell, Erlang, Ruby, Python, Perl, Scheme. The same applies to databases: all the popular RDBMSs (Oracle, MySQL, Postgres, SQLite [favourite], etc.) and NoSQL stores (Mongo, Couch, Neo4j, etc.). Advantages I see: we get an overall picture of the technologies/tools/languages, which is useful for selecting the right tool for the job, and we develop a taste and choose the one we like. Disadvantages: I feel that I spend so much time on this and see a need to strike a balance. In summary: if I see a blog post on Hacker News about CoffeeScript, I will try it out irrespective of what I am currently learning (say, Haskell). I switch back to learning Haskell, then I see Dart and I check it out, and this continues. Effectively I take more time to learn Haskell, but I learn about other new stuff on the way. The question I have is: how do you strike a balance between staying on the bleeding edge and learning properly?

    Read the article

  • Succinct introduction to C++/CLI for C#/Haskell/F#/JS/C++/... programmer

    - by Henrik
    Hello everybody, I'm trying to write integrations with the operating system and with things like Active Directory and Ocropus. I know a bunch of programming languages, including those listed in the title. I'm trying to learn exactly how C++/CLI works, but can't find succinct, exact and accurate descriptions online from the searching that I have done. So I ask here: could you tell me the pitfalls and features of C++/CLI? Assume I know all of C# and start from there. I'm not an expert in C++, so some of my questions' answers might be "just like C++", but let's say I'm starting from C#. I would like to know things like:
    - Converting C++ pointers to CLI pointers.
    - Any differences in passing by value / doubly indirect pointers / CLI pointers from C#/C++, and what is 'recommended'.
    - How do gcnew, __gc and __nogc work with polymorphism, structs, inner classes and interfaces?
    - The "fixed" keyword; does that exist?
    - Is compiling DLLs loaded into the kernel with C++/CLI possible? Loaded as device drivers? Invoked by the kernel? What does this mean anyway (i.e. what does it mean exactly to load something into the kernel, and how do I know if it is)?
    - L"my string" versus "my string"? wchar_t? How many types of chars are there? Are we safe in treating chars as uint32s, or what should one treat them as to guarantee language indifference in code?
    - Finalizers (~ClassName() {}) are discouraged in C# because there are no guarantees they will run deterministically, but since in C++ I have to use "delete" or use copy constructors to stack-allocate memory, what are the recommendations for C#/C++ interactions?
    - What are the pitfalls when using reflection in C++/CLI?
    - How well does C++/CLI work with the IDisposable pattern and with SafeHandle and SafeHandleZeroOrMinusOneIsInvalid?
    - I've read briefly about asynchronous exceptions when doing DMA operations; what are these?
    - Are there limitations you impose upon yourself when using C++ with CLI integration rather than just doing plain C++?
    - Are there attributes in C++ similar to attributes in C#?
    - Can I use the full meta-programming patterns available in C++ through templates and still have it compile like ordinary C++?
    - Have you tried writing C++/CLI with Boost? What are the optimal ways of interfacing the Boost library with C++/CLI; can you give me an example of passing a lambda expression to an iterator/foldr function?
    - What is the preferred way of exception handling? Can C++/CLI catch managed exceptions now?
    - How well does dynamic IL generation work with C++/CLI? Does it run on Mono?
    - Any other things I ought to know about?

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best: “A language that doesn’t affect the way you think about programming is not worth knowing.” With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems.
    Clojure: Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state.
    Haskell: Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed.
    Erlang: Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state.
    The Benefits of Multilingualism: By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state.
    I often find myself refactoring methods like this…

        void SomeMethod(bool doThis, bool doThat)
        {
            if (!(doThis || doThat))
                throw new ArgumentException("At least one arg should be true");
            if (doThis) DoThis();
            if (doThat) DoThat();
        }

    …into a type-based solution, like this:

        enum Action { DoThis, DoThat, Both };

        void SomeMethod(Action action)
        {
            if (action == Action.DoThis || action == Action.Both) DoThis();
            if (action == Action.DoThat || action == Action.Both) DoThat();
        }

    At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but it is just one of many ideas that I’ve taken from one language and implemented in another.
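
    To make the laziness point concrete, here is a small Haskell sketch in the spirit of the game-tree example above (illustrative only, not taken from the original post): the tree below is conceptually infinite, but children are only computed when something actually inspects them, so bounding the search with prune never builds the whole thing.

        -- A lazily generated game tree: each node carries a position and
        -- the (unevaluated) subtrees reachable from it.
        data GameTree pos = Node pos [GameTree pos]

        gameTree :: (pos -> [pos]) -> pos -> GameTree pos
        gameTree moves p = Node p (map (gameTree moves) (moves p))

        -- Cut the tree off at a fixed depth; only the visited part is ever built.
        prune :: Int -> GameTree pos -> GameTree pos
        prune 0 (Node p _)        = Node p []
        prune n (Node p children) = Node p (map (prune (n - 1)) children)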

    Read the article

  • Programming concepts taken from the arts and humanities

    - by Joey Adams
    After reading Paul Graham's essay Hackers and Painters and Joel Spolsky's Advice for Computer Science College Students, I think I've finally gotten it through my thick skull that I should not be loath to work hard in academic courses that aren't "programming" or "computer science" courses. To quote the former: I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. — Paul Graham, "Hackers and Painters" There are certainly other, much stronger reasons to work hard in the "boring" classes. However, it'd also be neat to know that these classes may someday inspire me in programming. My question is: what are some specific examples where ideas from literature, art, humanities, philosophy, and other fields made their way into programming? In particular, ideas that weren't obviously applied the way they were meant to be (like most math and domain-specific knowledge), but instead gave utterance or inspiration to a program's design and choice of names.
    Good examples:
    - The term endian comes from Gulliver's Travels by Jonathan Swift (see here), where it refers to the trivial matter of which side people crack open their eggs.
    - The terms journal and transaction refer to nearly identical concepts in both filesystem design and double-entry bookkeeping (financial accounting). mkfs.ext2 even says: "Writing superblocks and filesystem accounting information: done"
    Off-topic:
    - Learning to write English well is important, as it enables a programmer to document and evangelize his/her software, as well as appear competent to other programmers online.
    - Trigonometry is used in 2D and 3D games to implement rotation and direction aspects.
    - Knowing finance will come in handy if you want to write an accounting package. Knowing XYZ will come in handy if you want to write an XYZ package.
    Arguably on-topic:
    - The Monad class in Haskell is based on a concept by the same name from category theory. Actually, Monads in Haskell are monads in the category of Haskell types and functions. Whatever that means...
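
    On the "arguably on-topic" Monad point, the category-theory content shows up in Haskell as three equations that every instance is expected to satisfy. A small sketch using the standard Maybe monad (the names and the lookup example are illustrative, not from the question):

        import qualified Data.Map as Map

        -- The monad laws, written as equations (the category-theory part):
        --   return a >>= f   ==  f a                        (left identity)
        --   m >>= return     ==  m                          (right identity)
        --   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)    (associativity)

        -- Using the Maybe monad to chain lookups that may each fail:
        extensionOf :: String -> Map.Map String String -> Map.Map String Int -> Maybe Int
        extensionOf name emails extensions = do
          addr <- Map.lookup name emails     -- Nothing here short-circuits the rest
          Map.lookup addr extensions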

    Read the article

  • Why do I not get the correct answer for Euler 56 in J?

    - by Gregory Higley
    I've solved 84 of the Project Euler problems, mostly in Haskell. I am now going back and trying to solve in J some of those I already solved in Haskell, as an exercise in learning J. Currently, I am trying to solve Problem 56. Let me stress that I already know what the right answer is, since I've already solved it in Haskell. It's a very easy, trivial problem. I will not give the answer here. Here is my solution in J: digits =: ("."0)":"0 eachDigit =: adverb : 'u@:digits"0' NB. I use this so often I made it an adverb. cartesian =: adverb : '((#~ #) u ($~ ([:*~#)))' >./ +/ eachDigit x: ^ cartesian : i. 99 This produces a number less than the desired result. In other words, it's wrong somehow. Any J-ers out there know why? I'm baffled, since it's pretty straightforward and totally brute force.
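
    For comparison, a Haskell brute force in the spirit the question describes (a sketch of the usual approach, not the asker's own code, and the numeric answer is deliberately not quoted here):

        import Data.Char (digitToInt)

        -- Digital sum of a number's decimal representation.
        digitSum :: Integer -> Int
        digitSum = sum . map digitToInt . show

        -- Problem 56: maximum digital sum of a^b for natural numbers a, b < 100.
        euler56 :: Int
        euler56 = maximum [ digitSum (a ^ b) | a <- [1 .. 99], b <- [1 .. 99] ]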

    Read the article

  • Learning functional programming [closed]

    - by Oni
    This question is similar to Choosing a functional programming language. I want to learn functional programming but I am having trouble choosing the right language. At university I studied Haskell for two months, so I have a basic idea of what a functional language is. I have read in many places that functional programming changes the way you think. I started to take a look at Clojure, which I like for several reasons (code as data, the JVM, etc.). What stops me from continuing to learn Clojure is that it is not a pure functional language, and I am afraid of ending up using an imperative/OO style. Should I learn Haskell or keep on learning Clojure? Thanks in advance. P.S.: I am open to any other language.

    Read the article

  • Languages with C/C++ output [closed]

    - by Vag
    Which languages have compilers able to emit plain standard C/C++ code? For a start:
    - Haxe // uses the Boehm GC
    - Haskell (JHC)
    - Haskell (old GHC) // -fvia-C, removed recently (the emitted code is super ugly)
    - Clay
    - ATS
    - Cython
    - RPython (Shed Skin) // experimental
    - RPython (PyPy)
    - Python (Nuitka) // although the author claims there are no speedups
    - Common Lisp (ECL)
    - COBOL (OpenCobol)
    - Scheme (Chicken)
    - APL // so far I've not found a working implementation available for free download
    - Ur/Web // GCC-specific output, intended only for web development (included for completeness only)
    I'd like to build a comprehensive, up-to-date list but have only found these so far. I've tested only Haxe and it works pretty well and quite fast. What about the others? What is your experience? How ugly is the generated code? Update: any language chain (e.g. X - Scheme - C) is perfectly OK as an answer if its use is practical enough and suited for production use.

    Read the article

  • Are C or C++ The Only Viable Languages for a GC

    - by user95312
    Background: I have just finished writing a compiler for a functional language targeting the JVM, as a learning project. However, since I'm just doing this to learn, I thought it might be interesting to write a native backend and an RTS for it. As I've been planning out what this new backend will look like, the one point I'm stumbling on is the garbage collector. I've implemented the compiler in Haskell, but I have no desire to write the GC in Haskell since, while it may be possible, it'd suck. Question: I've looked at several FOSS garbage collectors prior to posting and most of them were implemented in good old ANSI C. Is this still the most accepted choice for writing a GC nowadays? I've seen that this site tends to frown upon questions with multiple answers, so I hope this makes it more specific: if some startup were writing a professional-grade GC today, are C and C++ the only viable choices for them? It's my first question here, so please comment and let me know if this question is ill-suited for Programmers.

    Read the article

  • Manual memory allocation and purity

    - by Eonil
    Languages like Haskell have a concept of purity: in a pure function I can't mutate any global state. Haskell fully abstracts memory management, so memory allocation is not a problem there. But in languages that can handle memory directly, like C++, this seems very ambiguous to me. In these languages, memory allocation is a visible mutation. Yet if I treat making a new object as an impure action, then almost nothing can be pure, and the purity concept becomes almost useless. How should I handle purity in languages that expose memory as a visible, global object?
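
    For what it's worth, Haskell's own answer to this tension can serve as an illustration (a sketch, not from the question): allocation and mutation that cannot be observed from outside a function do not count as impurity. The ST monad makes that explicit:

        import Control.Monad.ST
        import Data.STRef

        -- Allocates and mutates a cell internally, yet the function is pure:
        -- no caller can observe the mutation, and equal inputs give equal outputs.
        sumTo :: Int -> Int
        sumTo n = runST $ do
          acc <- newSTRef 0
          mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
          readSTRef acc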

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me from my experimenting with Haskell, Erlang and Scheme that functional programming languages are a fantastic way to answer scientific questions, for example taking a small set of data and performing some extensive analysis on it to return a significant answer. It's great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time it seems that, by their very nature, they are more suited to finding analytical solutions than actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-polish-notation version of Python. So I am just curious whether I have missed something in these languages or whether I am just way off the mark in how I am asking this question. Does anyone have examples of functional languages that are great at performing practical tasks, or of practical tasks that are best performed by functional languages?
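
    As a small counterpoint, the "accept a request, find and process the requested data, return it formatted" shape translates fairly directly into Haskell's IO as well. A minimal sketch (the data.txt file name is hypothetical):

        import Data.Char (toUpper)
        import Data.List (isInfixOf)

        -- Read a query from stdin, search a file for it case-insensitively,
        -- and print the numbered matches.
        main :: IO ()
        main = do
          query    <- getLine
          contents <- readFile "data.txt"
          let matches = [ l | l <- lines contents
                            , map toUpper query `isInfixOf` map toUpper l ]
          mapM_ putStrLn (zipWith (\n l -> show n ++ ": " ++ l) [1 :: Int ..] matches)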

    Read the article

  • Advice for a computer science sophomore in college?

    - by RDas
    Hi everyone! I'm a sophomore in college majoring in Computer Science and Math. I have always loved programming. I started programming in C when I was nine years old and over the years I've picked up Visual Basic, C#, Java, C++, JavaScript, Objective-C, Python, Ruby, elementary Haskell and elementary Erlang, and I learned Perl back in the day, which I've mostly forgotten. I have not done much network programming. I have done CGI programming, but that was about six or seven years ago. I've done some socket programming and written (school) programs to do interprocess communication, which I understood and liked. I'm taking a course on client/server programming and another one on network security next semester, which I am really looking forward to. I'm seeking advice on how to proceed with future learning. I've mostly done application (mobile and desktop) development, not much web development, and I'd like to pick up some web development this coming semester. Since I know Ruby and Python, should I start by learning Django and/or Rails? Any other suggestions on starting web development? I have a good understanding of HTML and CSS. I'd also like to know how hard it is to pick up and become good (read: productive) with functional programming languages, coming from a purely structured/object-oriented background. I've been reading up on Erlang and Haskell, and I'd like to know your opinions on whether it's worth my time trying to learn them. What about Lisp, Scheme and other functional languages? Any help/ideas would be really appreciated.

    Read the article

  • Is there a modified LGPL license that allows static linking?

    - by Petr Pudlák
    The LGPL requires that if a program uses an LGPL-ed library, users must be able to re-link the program with a different version of the library: ... d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. ... However, in some cases this can pose considerable difficulties. In particular, Haskell programs are almost always statically compiled. Moreover, the compiler does cross-module optimizations, so it's very hard to satisfy this condition. (See this link at the Haskell Wiki.) Therefore, I'm looking for a standard LGPL-like license that wouldn't require the possibility of re-linking. Some projects use their own modification of the LGPL, for example wxWidgets, but I'd rather use a standard license that is somewhat more official, perhaps checked by legal experts, and (L)GPL-compatible. Is there one like that? (I'd also be interested to know whether there are unforeseen consequences of such a modification of the LGPL.)

    Read the article

  • Tmux causes Emacs glitch

    - by killy9999
    Recently I started using tmux, but I noticed that it causes a strange Emacs glitch. When I open source code for Elisp or Haskell, the comments aren't highlighted; only the comment delimiter is (; in the case of Elisp, -- in the case of Haskell). The rest of the commented line is in the normal colour. When I run Emacs outside of tmux everything works as expected: the whole commented line is highlighted in the colour denoting a comment. Any ideas why this is happening? SOLUTION: Based on Stefan's comment I added this to my .emacs file: (custom-set-faces '(font-lock-comment-face ((((class color) (min-colors 8) (background dark)) (:foreground "red"))))) Now the comments are displayed in red, just like the comment delimiters.

    Read the article

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut: import Foreign.Marshal.Alloc(alloca) import Foreign.Ptr(Ptr) import Foreign.Storable(Storable, peek) import Graphics.Rendering.OpenGL.Raw import qualified Graphics.UI.GLUT as GLUT import Graphics.UI.GLUT(($=)) main :: IO () main = do (_progName, _args) <- GLUT.getArgsAndInitialize GLUT.initialDisplayMode $= [GLUT.SRGBMode] _window <- GLUT.createWindow "sRGB Test" -- To prove that I actually have freeglut working correctly. -- This will fail at runtime under classic GLUT. GLUT.closeCallback $= Just (return ()) glEnable gl_FRAMEBUFFER_SRGB colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING print colorEncoding allocaOut :: Storable a => (Ptr a -> IO b) -> IO a allocaOut f = alloca $ \ptr -> do f ptr peek ptr On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

    Read the article

  • Thoughts on type aliases/synonyms?

    - by Rei Miyasaka
    I'm going to try my best to frame this question in a way that doesn't result in a language war or list, because I think there could be a good, technical answer to this question. Different languages support type aliases to varying degrees. C# allows type aliases to be declared at the beginning of each code file, and they're valid only throughout that file. Languages like ML/Haskell use type aliases probably as much as they use type definitions. C/C++ are sort of a Wild West, with typedef and #define often being used seemingly interchangeably to alias types. The upsides of type aliasing don't provoke too much dispute: It makes it convenient to define composite types that are described naturally by the language, e.g. type Coordinate = float * float or type String = [Char]. Long names can be shortened: using DSBA = System.Diagnostics.DebuggerStepBoundaryAttribute. In languages like ML or Haskell, where function parameters often don't have names, type aliases provide a semblance of self-documentation. The downside is a bit more iffy: aliases can proliferate, making it difficult to read and understand code or to learn a platform. The Win32 API is a good example, with its DWORD = int and its HINSTANCE = HANDLE = void* and its LPHANDLE = HANDLE FAR* and such. In all of these cases it hardly makes any sense to distinguish between a HANDLE and a void pointer, or a DWORD and an integer, etc. Setting aside the philosophical debate of whether a king should give complete freedom to their subjects and let them be responsible for themselves, or whether all of their questionable actions should be policed, could there be a happy medium that would allow the benefits of type aliasing while mitigating the risk of its abuse? As an example, the issue of long names can be solved by good autocomplete features. Visual Studio 2010, for instance, will allow you to type DSBA in order to refer IntelliSense to System.Diagnostics.DebuggerStepBoundaryAttribute. Could there be other features that would provide the other benefits of type aliasing more safely?
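
    For reference, the ML/Haskell family already offers one "happy medium" of the kind the question asks about: a type alias for pure convenience, and newtype when the compiler should keep the names apart at zero runtime cost. A small illustrative sketch (the names here are invented, not from the question):

        -- A plain alias: freely interchangeable with (Double, Double),
        -- so mixing up two aliases of the same shape is not caught.
        type Coordinate = (Double, Double)

        -- Newtypes: zero runtime overhead, but genuinely distinct types,
        -- so accidental mixing is a compile-time error.
        newtype Metres = Metres Double deriving (Show, Eq, Ord)
        newtype Feet   = Feet   Double deriving (Show, Eq, Ord)

        metresToFeet :: Metres -> Feet
        metresToFeet (Metres m) = Feet (m * 3.28084)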

    Read the article

  • How are objects modelled in a functional programming language?

    - by Giorgio
    In an answer to this question (written by Pete) there are some considerations about OOP versus FP. In particular, it is suggested that FP languages are not very suitable for modelling (persistent) objects that have an identity and a mutable state. I was wondering if this is true or, in other words, how one would model objects in a functional programming language. From my basic knowledge of Haskell I thought that one could use monads in some way, but I really do not know enough on this topic to come up with a clear answer. So, how are entities with an identity and a mutable persistent state normally modelled in a functional language? EDIT Here are some further details to clarify what I have in mind. Take a typical Java application in which I can (1) read a record from a database table into a Java object, (2) modify the object in different ways, (3) save the modified object to the database. How would this be implemented e.g. in Haskell? I would initially read the record into a record value (defined by a data definition), perform different transformations by applying functions to this initial value (each intermediate value is a new, modified copy of the original record) and then write the final record value to the database. Is this all there is to it? How can I ensure that at each moment in time only one copy of the record is valid / accessible? One does not want to have different immutable values representing different snapshots of the same object to be accessible at the same time.
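
    A minimal Haskell sketch of the read/modify/save flow described in the EDIT (all names are hypothetical and the database calls are stubbed out; real code would go through an actual database binding): the "object" is a plain record, the modifications are pure functions returning new values, and identity plus persistence live at the edges in IO.

        data User = User { userId :: Int, name :: String, visits :: Int }
          deriving (Show)

        -- Pure "modifications": each returns a new value; nothing is updated in place.
        rename :: String -> User -> User
        rename n u = u { name = n }

        recordVisit :: User -> User
        recordVisit u = u { visits = visits u + 1 }

        -- Identity and persistence live in IO (stubs standing in for a real database).
        loadUser :: Int -> IO User
        loadUser uid = pure (User uid "unknown" 0)

        saveUser :: User -> IO ()
        saveUser _ = pure ()

        -- Read the record, apply pure transformations, write the result back.
        -- Only the value finally passed to saveUser is ever persisted.
        updateUser :: Int -> IO ()
        updateUser uid = do
          u <- loadUser uid
          saveUser (recordVisit (rename "new name" u))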

    Read the article

  • Windows 7 PATH not expanding

    - by trinithis
    I am using the following to create and edit environment variables on Windows 7: Control Panel\All Control Panel Items\System -> Advanced system settings -> Environment Variables. Under System variables I have the following pertinent variables: PROG32=C:\Program Files (x86) REALDWG_SDK_DIR=%PROG32%\Autodesk\RealDWG 2011 Path=%REALDWG_SDK_DIR%;%PROG32%\Haskell\bin However, the following happens: C:\>echo %PROG32% C:\Program Files (x86) C:\>echo %Path% %REALDWG_SDK_DIR%;C:\Program Files (x86)\Haskell\bin Is it possible to have a chain of variables expand? If I rename Path to something else, I sometimes get the problem, and sometimes I don't.

    Read the article
