Search Results

Search found 15300 results on 612 pages for 'programming languages'.


  • How to determine if a programming language is verbose or terse?

    - by sunpech
    Programming languages can often be described as verbose or terse. From my understanding, a verbose language is easy to read and understand, while a terse language is concise and neat but more difficult to read. Should other things be considered in these definitions? It seems that many of the popular programming languages of today are verbose, and these two terms are mostly used to describe one language as more or less verbose relative to another. How do we determine whether one programming language is more verbose/terse than another? Example: Is C# more verbose than Java?
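    One way to make the comparison concrete is to write the same small piece of code the idiomatic way in each language and count what it takes. The sketch below is a rough illustration only (a hypothetical Person value object): Java's explicit getter/setter boilerplate next to the C# auto-property people usually point to when calling Java the more verbose of the two.

        // Hypothetical example: the same value object, written idiomatically in Java.
        public class Person {
            private String name;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        // For comparison, the roughly equivalent C# is usually a single auto-property:
        //
        //     public class Person {
        //         public string Name { get; set; }
        //     }
        //
        // Counting the tokens or lines needed for the same behaviour is one crude but
        // objective way to compare verbosity between two languages.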

    Read the article

  • What are the options for setting up a UNIX environment to learn C using Kernighan and Ritchie's The C Programming Language?

    - by ssbrewster
    I'm a novice programmer and have been experimenting with JavaScript, jQuery and PHP, but felt I wasn't getting a real depth of understanding of what I was doing. So, after reading Joel Spolsky's response to a question on this site (which I can't find now!), I took it back to basics, read Charles Petzold's 'Code', and am about to move on to Kernighan and Ritchie's The C Programming Language. I want to learn this in a UNIX environment but only have access to a Windows system. I have Ubuntu 12.04 running in a virtual machine via VMware Player, and have done some coding in the terminal. Is using a Linux distro the only option for programming in a UNIX environment on Windows? And what are the next steps to start programming in C in UNIX, and where do I get a compiler from?
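    For what it's worth, on an Ubuntu VM the usual route is the GNU toolchain: sudo apt-get install build-essential pulls in gcc and make, and the first K&R program can then be compiled straight from the terminal. A minimal sketch (the file name and flags here are just one reasonable choice):

        /* hello.c -- the traditional first program from K&R */
        #include <stdio.h>

        int main(void)
        {
            printf("hello, world\n");
            return 0;
        }

        /*
         * Compile and run from the terminal:
         *   gcc -Wall -std=c89 -o hello hello.c
         *   ./hello
         */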

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this: I create a new branch (with Git, in my case), I write a few unit tests (with JUnit, for example), and I write my code until it passes my tests. So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant, so I cannot always choose my preferred language/platform/toolkit; I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by trial and error), and this makes me anxious. Please note that, at some point in the learning process, I usually have already: read one or more five-star books, followed one or more web tutorials (writing working code a line at a time), and created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans and others as well). Despite all my efforts, at this point I usually have only a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I can no longer use an independent small project for my experiments because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?
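    As a concrete picture of the branch-plus-tests workflow described above, here is a minimal sketch (the class and method names are invented) of the kind of first JUnit test one might write before poking at an unfamiliar toolkit, so the experiment at least has a safety net, together with the Git commands for the disposable branch:

        // A hypothetical first test, written before the code under test exists.
        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class PriceCalculatorTest {

            @Test
            public void appliesTenPercentDiscount() {
                PriceCalculator calc = new PriceCalculator();
                assertEquals(90.0, calc.applyDiscount(100.0, 0.10), 0.001);
            }
        }

        // ...and the minimal implementation written afterwards to make it pass:
        class PriceCalculator {
            double applyDiscount(double price, double rate) { return price * (1.0 - rate); }
        }

        // Git side of the same workflow:
        //   git checkout -b spike/new-toolkit     # experiment on a throwaway branch
        //   git checkout master                   # bail out...
        //   git branch -D spike/new-toolkit       # ...and discard the mess if needed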

    Read the article

  • How to get better at solving Dynamic programming problems

    - by newbie
    I recently came across this question: "You are given a boolean expression consisting of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of ways to parenthesize the expression such that it will evaluate to true. For example, there is only 1 way to parenthesize 'true and false xor true' such that it evaluates to true." I knew it is a dynamic programming problem, so I tried to come up with a solution on my own, which is as follows. Suppose we have an expression A.B.C.....D, where '.' represents any of the operations and, or, xor, and the capital letters represent true or false. Let's say the number of ways for this expression of size K to produce true is N. When a new boolean value E is added to this expression, there are two ways to parenthesize the new expression: 1. ((A.B.C.....D).E), i.e. with all possible parenthesizations of A.B.C.....D we add E at the end; 2. (A.B.C.(D.E)), i.e. evaluate D.E first and then find the number of ways this expression of size K can produce true. Suppose T[K] is the number of ways the expression of size K produces true; then T[K] = val1 + val2 + val3, where val1, val2 and val3 are calculated as follows. 1) When E is grouped with D: i) if it does not change the value of D, then val1 = T[K] = N (as this reduces to the initial A.B.C.....D expression); ii) if it inverts the value of D, re-evaluate T[K] with the value of D reversed, and that is val1. 2) When E is grouped with the whole expression: val2 counts the 'true' results E produces with the parenthesizations of A.B.C.....D that gave 'true': i) if true.E = true then val2 = N; ii) if true.E = false then val2 = 0. val3 counts the 'true' results E produces with the parenthesizations of A.B.C.....D that gave 'false': iii) if false.E = true then val3 = (2^(K-2) - N) = M, i.e. the number of ways the expression of size K produces false (2^(K-2) being the number of ways to parenthesize an expression of size K); iv) if false.E = false then val3 = 0. This is the basic idea I had in mind, but when I checked the solution at http://people.csail.mit.edu/bdean/6.046/dp/dp_9.swf the approach there was completely different. Can someone tell me what I am doing wrong, and how I can get better at solving DP problems so that I can come up with solutions like that one myself? Thanks in advance.
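    For comparison, here is a sketch of the usual interval formulation of this problem: keep two tables, T[i][j] and F[i][j], counting the parenthesizations of the sub-expression from operand i to operand j that evaluate to true and to false respectively, and combine them across every possible last operator. (Python is used here only as convenient pseudocode; the names are my own.)

        def count_true(tokens):
            """tokens alternates operands and operators, e.g.
            ['true', 'and', 'false', 'xor', 'true']."""
            vals = [t == 'true' for t in tokens[0::2]]   # operand truth values
            ops = tokens[1::2]                           # operators between them
            n = len(vals)
            # T[i][j] / F[i][j]: ways to parenthesize operands i..j so the result is true / false
            T = [[0] * n for _ in range(n)]
            F = [[0] * n for _ in range(n)]
            for i, v in enumerate(vals):
                T[i][i] = 1 if v else 0
                F[i][i] = 0 if v else 1
            for length in range(2, n + 1):
                for i in range(n - length + 1):
                    j = i + length - 1
                    for k in range(i, j):                # ops[k] is the last operator applied
                        lt, lf = T[i][k], F[i][k]
                        rt, rf = T[k + 1][j], F[k + 1][j]
                        total = (lt + lf) * (rt + rf)
                        if ops[k] == 'and':
                            T[i][j] += lt * rt
                            F[i][j] += total - lt * rt
                        elif ops[k] == 'or':
                            F[i][j] += lf * rf
                            T[i][j] += total - lf * rf
                        elif ops[k] == 'xor':
                            T[i][j] += lt * rf + lf * rt
                            F[i][j] += lt * rt + lf * rf
            return T[0][n - 1]

        # usage: count_true(['true', 'and', 'false', 'xor', 'true'])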

    Read the article

  • HTML5 game programming style

    - by fnx
    I am currently trying to learn JavaScript in the form of HTML5 games. Stuff that I've done so far isn't too fancy, since I'm still a beginner. My biggest concern so far has been that I don't really know what the best way to code is, since I don't know the pros and cons of different methods, nor have I found any good explanations about them. So far I've been using the worst (and probably easiest) method of all (I think), since I'm just starting out, for example like this:

        var canvas = document.getElementById("canvas");
        var ctx = canvas.getContext("2d");
        var width = 640;
        var height = 480;
        var player = new Player("pic.png", 100, 100, ...);
        // also some other global vars...

        function Player(imgSrc, x, y, ...) {
            this.sprite = new Image();
            this.sprite.src = imgSrc;
            this.x = x;
            this.y = y;
            ...
        }
        Player.prototype.update = function() {
            // blah blah...
        }
        Player.prototype.draw = function() {
            // yada yada...
        }

        function GameLoop() {
            player.update();
            player.draw();
            setTimeout(GameLoop, 1000/60);
        }

    However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples, but hopefully you'll get the point:

    1:

        Game = {
            variables: {
                width: 640,
                height: 480,
                stuff: value
            },
            init: function(args) {
                // some stuff here
            },
            update: function(args) {
                // some stuff here
            },
            draw: function(args) {
                // some stuff here
            },
        };
        // from http://codeincomplete.com/posts/2011/5/14/javascript_pong/

    2:

        function Game() {
            this.Initialize = function () {
            }
            this.LoadContent = function () {
                this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval);
            }
            this.RunGameLoop = function (game) {
                this.Update();
                this.Draw();
            }
            this.Update = function () {
                // update
            }
            this.Draw = function () {
                // draw game frame
            }
        }
        // from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/

    3:

        var engine = {};
        engine.canvas = document.getElementById('canvas');
        engine.ctx = engine.canvas.getContext('2d');
        engine.map = {};
        engine.map.draw = function() {
            // draw map
        }
        engine.player = {};
        engine.player.draw = function() {
            // draw player
        }
        // from http://that-guy.net/articles/

    So I guess my questions are: Which is the most CPU efficient, and is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the safest, or at least hardest to hack? Are there any good websites where stuff like this is explained? Or does it all come down to just personal preference? :)

    Read the article

  • Is there any evidence that one of the current alternate JVM languages might catch on?

    - by FarmBoy
    There's been a lot of enthusiasm about JRuby, Jython, Groovy, and now Scala and Clojure as the successor to Java on the JVM. But currently only Groovy and Scala are in the TIOBE top 100, and none are in the top 50. Is there any reason to think that any of this bunch will ever gain significant adoption? My question is not primarily about TIOBE, but about any evidence you might see that could indicate that one of these languages could get significant backing beyond the enthusiasts.

    Read the article

  • What are the canonical problem sets or problem domains for the different types of languages?

    - by SnOrfus
    I'm just wondering what some of the canonical problem sets are for certain types of languages, i.e. fill in the blanks: __ is the perfect language to use for solving __. The question arose from reading or hearing people say things like "such and such module in our codebase would be much easier to implement using a functional language." Feel free to include examples that would seem obvious to you.

    Read the article

  • Something similar to Objective-C categories in other languages?

    - by adig
    I understand Objective-C categories and how they become useful, but I always have a hard time explaining the concept to other programmers who are not familiar with Objective-C. Maybe I'm just bad at explaining things, but I was thinking of another way to explain it: by comparing it to similar features offered by other (more popular) languages. (For example, I can explain the similarities between Objective-C protocols and Java interfaces.) Are there any examples of features similar to categories?
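    If it helps as a talking point, the closest mainstream analogues are usually C# extension methods, Kotlin extension functions, and Swift extensions: all of them let you add methods to a type you don't own, which is the core of what a category does (though, unlike categories, Kotlin extension functions are resolved statically and cannot replace existing members). A small Kotlin sketch, with an invented method name:

        // Adding a method to a class we don't own, much like an Objective-C category on NSString.
        fun String.reversedWords(): String =
            this.split(" ").reversed().joinToString(" ")

        fun main() {
            println("hello functional world".reversedWords())  // prints: world functional hello
        }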

    Read the article

  • Do any OO languages support a mechanism to guarantee an overridden method will call the base?

    - by Aaron Anodide
    I think this might be a useful language feature and was wondering if any languages already support it. The idea is that if you have:

        class C
            virtual F
                statement1
                statement2

    and

        class D inherits C
            override F
                statement1
                statement2
                C.F()

    there would be a keyword applied to C.F such that removing the last line of code above would cause a compiler error, because it says "this method can be overridden, but the implementation here needs to run no matter what".
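    One common workaround, independent of direct language support, is the template-method pattern: the base class keeps the must-run code in a final method and exposes a separate hook for subclasses, so the base code cannot be skipped. A rough Java sketch of that workaround (names invented):

        // The base class guarantees its own code runs; subclasses can only fill in the hook.
        abstract class C {
            public final void f() {
                // statement1, statement2 -- always executed
                doF();              // subclass contribution
            }
            protected abstract void doF();
        }

        class D extends C {
            @Override
            protected void doF() {
                // subclass-specific statements; there is no way to skip C's part of f()
            }
        }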

    Read the article

  • What is the most efficient way to study multiple languages, frameworks, and APIs as a developer?

    - by Akromyk
    I know there are those out there who have read a slurry of books on a specific technology and only code in that one particular language, but this question is aimed at those who need to bounce around between multiple technologies and yet still manage to be productive. What is the most efficient way to study multiple languages, frameworks, and APIs as a developer without becoming a cheap Swiss Army knife? And how much time should one dedicate to a particular subject before moving on to another?

    Read the article

  • What are the tools, programming languages and development processes of AAA games?

    - by Pan.student
    The only thing I am able to find about "big" games like AC, HL, BF, and CoD is the engine used to run the game. But I am interested in what software development methodology and which programming and scripting languages were used, as well as the tools for creating models, music, animations and other media. Further, what were the team organisations and so on for a certain game (or game series)? Is this information even available to the public?

    Read the article

  • Why do programming languages allow shadowing/hiding of variables and functions?

    - by Simon
    Many of the most popular programming languages (such as C++, Java, Python, etc.) have the concept of hiding or shadowing of variables or functions. When I've encountered hiding or shadowing, it has been the cause of hard-to-find bugs, and I've never seen a case where I found it necessary to use these features of the languages. To me it would seem better to disallow hiding and shadowing. Does anybody know of a good use for these concepts?
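    For what it's worth, the classic hard-to-find bug looks something like the hypothetical Java snippet below, where a constructor parameter shadows a field and the assignment silently goes to the wrong variable:

        class Account {
            private double balance;

            Account(double balance) {
                balance = balance;   // assigns the parameter to itself; the field stays 0.0
                // intended: this.balance = balance;
            }

            double getBalance() { return balance; }   // always returns 0.0
        }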

    Read the article

  • What could be some objective criteria to compare languages? [closed]

    - by rvcoutinho
    I am performing a study of different programming languages (and their related technologies) for a mature corporate architecture. In order to conduct this study, I need to formulate some criteria prior to the evaluation. Some general (and well-known) criteria are: readability, writability, reliability, cost and others (such as well-definedness, generality and portability). That said, I present the following questions: What criteria should I not forget? How can these criteria be made objective?

    Read the article

  • Using different languages in one project

    - by Tarbal
    I recently heard about the use of several different languages in a (big) project. I also read about famous services such as Twitter using Rails as the frontend, mixed with some other languages, and Scala, I think it was, as the backend. Is this common practice? Who does that? I'm sure there are disadvantages to this. I imagine you will have problems with the different interpreters/compilers and with seamlessly connecting the different languages. Is this true? Why is this actually done? For performance?

    Read the article

  • Typical tasks/problems to demonstrate differences between programming languages

    - by Space_C0wb0y
    Somewhere someone said (I honestly do not know where I got this from) that one should learn one programming language per year. I can see where that might be a good idea, because you learn new patterns and ways to look at the same problems by solving them in different languages. Typically, when learning a new language, I look at how certain problems are supposed to be solved in that language. My question now is: what, in your experience, are good, simple, and clearly defined tasks that demonstrate the differences between programming languages? The idea here is to have a set of tasks that, when I solve all of them in the language I am learning, gives me a good overview of how things are supposed to be done in that language. I do not know if that is even possible, but it sure would be a useful thing to have. A typical example one often sees, especially in tutorials for functional languages, is the implementation of quicksort.
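    To give one concrete instance of the quicksort example mentioned above, the version usually shown in Haskell tutorials is the short list-comprehension one (not an in-place sort, but it shows the declarative style the task is meant to highlight):

        -- the textbook Haskell "quicksort": filter and recurse on lists
        qsort :: Ord a => [a] -> [a]
        qsort []     = []
        qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]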

    Read the article

  • Interpreted languages: The higher-level the faster?

    - by immersion
    I have designed around five experimental languages and interpreters for them so far, for education, as a hobby and for fun. One thing I noticed: the assembly-like language, featuring only subroutines and conditional jumps as control structures, was much slower than the high-level language featuring if, while and so on. I developed them both simultaneously and both were interpreted languages. I wrote the interpreters in C++ and I tried to optimize the code-execution part to be as fast as possible. My hypothesis: in almost all cases, the performance of an interpreted language rises with its level. Am I basically right about this? (If not, why?)
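    One common explanation is dispatch overhead: in a low-level interpreted language, every iteration of a loop pays for several fetch/decode/dispatch steps, while a high-level interpreter can execute an entire while construct inside one native loop of the host language. A toy C++ sketch of the two dispatch patterns (not your interpreters, just an illustration under that assumption):

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Low-level style: one dispatch per tiny instruction, on every iteration.
        enum Op { INC, CMP_LT, JIF_FALSE, JMP, HALT };
        struct Instr { Op op; int64_t arg; };

        int64_t run_bytecode(const std::vector<Instr>& code) {
            int64_t x = 0;
            std::size_t pc = 0;
            bool flag = false;
            for (;;) {
                const Instr& in = code[pc++];
                switch (in.op) {                       // paid for every single instruction
                    case INC:       x += 1; break;
                    case CMP_LT:    flag = (x < in.arg); break;
                    case JIF_FALSE: if (!flag) pc = (std::size_t)in.arg; break;
                    case JMP:       pc = (std::size_t)in.arg; break;
                    case HALT:      return x;
                }
            }
        }

        // High-level style: a single "while" node is dispatched once, then runs natively.
        int64_t run_while_node(int64_t limit) {
            int64_t x = 0;
            while (x < limit) x += 1;
            return x;
        }

        int main() {
            const int64_t N = 10000000;
            // "while (x < N) x += 1" expressed as low-level instructions:
            std::vector<Instr> prog = {
                {CMP_LT, N},        // 0: flag = (x < N)
                {JIF_FALSE, 4},     // 1: if !flag goto 4
                {INC, 0},           // 2: x += 1
                {JMP, 0},           // 3: goto 0
                {HALT, 0},          // 4: done
            };
            std::printf("%lld %lld\n", (long long)run_bytecode(prog),
                                       (long long)run_while_node(N));
            return 0;
        }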

    Read the article

  • STL algorithms and concurrent programming

    - by Andrew
    Hello everyone. Can any of the STL algorithms/container operations, like std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or maybe there are other ways to make them concurrent? Thanks.
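    As far as I know, the standard STL algorithms do not become parallel just because OpenMP is enabled (GCC's libstdc++ has an opt-in "parallel mode", and much later compilers add C++17 parallel execution policies, but neither applies to MSVC 2008). The usual workaround there is to write the loop yourself and put an OpenMP pragma on it. A minimal sketch of doing std::transform's job in parallel, assuming /openmp is enabled in the project settings:

        #include <cmath>
        #include <cstdio>
        #include <vector>

        int main() {
            const int n = 1000000;
            std::vector<double> in(n, 2.0), out(n);

            // roughly equivalent to std::transform(in.begin(), in.end(), out.begin(), sqrt),
            // but the iterations are split across threads by OpenMP
            #pragma omp parallel for
            for (int i = 0; i < n; ++i)
                out[i] = std::sqrt(in[i]);

            std::printf("%f\n", out[0]);
            return 0;
        }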

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer; some action had to be taken. Without a sophisticated guide like the article above, I have since started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them. I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity? I've already deployed: 1a, b, c; 2a; 3a, b, c; 4b, c; 5a, d; 6a, b; 7a, b. I'm about to have a go at: 5b, c. Not yet: 2b, c; 4a; 7c; 8a, b, c. (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)
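    On the GNU make question (2c): the usual benefit is that make tracks which outputs are older than their inputs, so after you change one data file or script only the affected steps re-run, and the whole pipeline stays reproducible with a single command. A hypothetical sketch of how that might look for a MATLAB workflow (file and function names are invented; the matlab command-line flags may differ with your version, and recipe lines must be indented with a real tab character):

        # Hypothetical Makefile for a MATLAB analysis pipeline.
        # 'make all' re-runs only the steps whose inputs have changed.

        .PHONY: all
        all: results/figure1.pdf

        data/clean.mat: data/raw.csv scripts/clean_data.m
            matlab -nodisplay -nosplash -r "clean_data('data/raw.csv','data/clean.mat'); exit"

        results/figure1.pdf: data/clean.mat scripts/make_figure.m
            matlab -nodisplay -nosplash -r "make_figure('data/clean.mat','results/figure1.pdf'); exit"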

    Read the article

  • What Functional features are worth a little OOP confusion for the benefits they bring?

    - by bonomo
    After learning functional programming in Haskell and F#, the OOP paradigm seems ass-backwards with classes, interfaces, and objects. Which aspects of FP can I bring to work that my co-workers can understand? Are any FP styles worth talking to my boss about, so that we can retrain the team to use them? Possible aspects of FP: immutability; partial application and currying; first-class functions (function pointers / functional objects / Strategy pattern); lazy evaluation (and monads); pure functions (no side effects); expressions (vs. statements: each line of code produces a value instead of, or in addition to, causing side effects); recursion; pattern matching. Is it a free-for-all where we can do whatever the programming language supports, to the limit the language supports it? Or is there a better guideline?
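    As one small, low-friction example of the "first-class functions vs. Strategy pattern" aspect listed above: in Java 8 the strategy object collapses to a lambda, which is often the easiest FP idea to sell to an OOP team. A sketch with invented names:

        import java.util.Arrays;
        import java.util.List;
        import java.util.function.Predicate;
        import java.util.stream.Collectors;

        public class StrategyVsLambda {
            // Instead of an OrderFilterStrategy interface with several one-method classes,
            // the varying behaviour is passed in as a function value.
            static List<Integer> select(List<Integer> orders, Predicate<Integer> keep) {
                return orders.stream().filter(keep).collect(Collectors.toList());
            }

            public static void main(String[] args) {
                List<Integer> orders = Arrays.asList(5, 120, 48, 300);
                System.out.println(select(orders, o -> o > 100));   // [120, 300]
            }
        }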

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. of a piece of code (a block or a function) that (1) can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, and defined in one part of a program and invoked in a totally different part of the same program; and (2) can capture variables from the context in which it is defined and access them when it is later invoked (possibly in a totally different context). Here is an example of a closure written in Scala:

        def filterList(xs: List[Int], lowerBound: Int): List[Int] = xs.filter(x => x >= lowerBound)

    The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function. I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads: "In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state." And further on: "[...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming." On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side effects). Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic? IMPORTANT NOTE: I am not questioning the usefulness of side effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages / disadvantages of closures with or without side effects. I am only interested to know whether using such closures is still considered functional style by the proponents of functional programming or whether, on the contrary, their use is discouraged when using a functional style.
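    For contrast with the pure filterList example above, here is a sketch of the other kind of closure the question is about: one that captures a var and mutates it, so every call has a side effect on the environment it closed over (names are my own):

        def makeCounter(): () => Int = {
          var count = 0                 // captured, mutable state
          () => { count += 1; count }   // each call changes the environment it closed over
        }

        val next = makeCounter()
        next()   // 1
        next()   // 2 -- same (empty) argument list, different result: not referentially transparent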

    Read the article

  • Different programming languages possibilities

    - by b-gen-jack-o-neill
    Hello. This should be a very simple question. There are many programming languages out there, compiled into machine code or managed code. I first started with ASM back in high school. Assembler is very nice, since you know exactly what the CPU does. Next (as you can see from my other questions here), I decided to learn C and C++. I chose C because, from what I have read, it is the language whose output is closest to assembler-written programs. But what I want to know is: can any other Windows programming language out there call the Win32 API? To be exact, just as C has its special headers and functions for Win32 API interaction, is this assumed to be an important part of a programming language? Or are there languages that have no support for calling the Win32 API, and just use the console for I/O plus some functions for basic file I/O? Because, for Windows programming with graphical output, it is essential to have access to the Win32 API. I know this question might seem silly, but please help me; I ask for study purposes. Thanks.
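    Broadly, any language that can call into a C DLL can reach the Win32 API: C and C++ go through windows.h directly, while most other languages use a foreign-function interface (P/Invoke in .NET, ctypes in Python, and so on). For reference, a minimal C example of a direct Win32 call (compile with MinGW, e.g. gcc hello_win.c -o hello_win.exe, adding -luser32 if your toolchain needs it):

        #include <windows.h>

        int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
        {
            /* MessageBoxA is an ordinary Win32 API function exported by user32.dll */
            MessageBoxA(NULL, "Hello from the Win32 API", "Demo", MB_OK);
            return 0;
        }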

    Read the article

  • Scheme vs Haskell for an Introduction to Functional Programming?

    - by haziz
    I am comfortable programming in C and C#, and will explore C++ in the future. I may be interested in exploring functional programming as a different programming paradigm. I am doing this for fun; my job does not involve computer programming, and I am somewhat inspired by the use of functional programming, taught fairly early, in computer science courses in college. Lambda calculus is certainly beyond my mathematical abilities, but I think I can handle functional programming. Which of Haskell or Scheme would serve as a good intro to functional programming? I use Emacs as my text editor and would like to be able to configure it more easily in the future, which would entail learning Emacs Lisp. My understanding, however, is that Emacs Lisp is fairly different from Scheme and is also more procedural as opposed to functional. I would likely use "The Little Schemer", which I have already bought, if I pursue Scheme (it seems a little weird to me from my limited leafing through it), or "Learn You a Haskell for Great Good" if I pursue Haskell. I would also watch the Intro to Haskell videos by Dr. Erik Meijer on Channel 9. Any suggestions, feedback or input appreciated. Thanks. P.S. BTW I also have access to F#, since I have Visual Studio 2010 which I use for C# development, but I don't think that should be my main criterion for selecting a language.

    Read the article

  • Why are IOC containers unnecessary with dynamic languages

    - by mikemay
    Someone on the Herding Code podcast No. 68, http://herdingcode.com/?p=231, stated that IOC containers had no place with Python or Javascript, or words to that effect. I'm assuming this is conventional wisdom and that it applies to all dynamic languages. Why? What is it about dynamic languages that makes IOC containers unnecessary?
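    The usual argument, sketched below in Python, is that constructor injection needs no framework when types are dynamic: any object with the right methods can be passed in, and tests can hand in a stub without a container or interface declarations. (The class names here are invented for illustration.)

        class SmtpMailer:
            def send(self, to, body):
                print(f"sending real mail to {to}")

        class FakeMailer:                      # duck typing: no interface declaration needed
            def __init__(self):
                self.sent = []
            def send(self, to, body):
                self.sent.append((to, body))

        class SignupService:
            def __init__(self, mailer):        # plain constructor injection
                self.mailer = mailer
            def register(self, email):
                self.mailer.send(email, "welcome!")

        # production wiring is a couple of lines, so no container is felt to be needed:
        service = SignupService(SmtpMailer())

        # and in a test, a stub is swapped in just as easily:
        fake = FakeMailer()
        SignupService(fake).register("a@example.com")
        assert fake.sent == [("a@example.com", "welcome!")]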

    Read the article

  • Which conveniences does CEDET bring to dynamic languages?

    - by julien
    I've been looking into CEDET, but it seems that most of its features would appeal more to developers working in statically typed languages, and I'm kind of getting cold feet from the amount of tinkering it seems to require. As I work mainly with Ruby and JavaScript, I'm wondering what kind of improvements it could bring when working with these dynamic languages, over a plain TAGS file?

    Read the article

  • Comparisons of web programming languages (on speed, etc.)

    - by Dave
    I'm looking for a site / report / something that compares "identical" programs (programs that do the same thing) written in different web programming languages and then compares their speeds. I agree that there are MANY MANY criteria by which this information can be sliced and diced, but has anyone done any real comparison of this? I am interested in web-based languages only, i.e. PHP, Perl, C, C++, Java, ASP, ASP.NET, etc.

    Read the article
