Search Results

Search found 36520 results on 1461 pages for 'default editing language'.


  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming/design, dependency injection involves a dependent consumer, a declaration of the component's dependencies defined as interface contracts, and an injector that, on request, creates instances of classes that implement a given dependency interface. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller, e.g. the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the inverse direction, can dependency injection be compared to using some kind of higher-order function?
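
    To make the analogy concrete outside Haskell, here is a minimal C# sketch; the IPredicate, EvenPredicate, and Filterer names are invented for illustration, and LINQ's Where stands in for filter:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // The interface contract of classic dependency injection.
        interface IPredicate { bool Test(int x); }

        class EvenPredicate : IPredicate          // a concrete implementation
        {
            public bool Test(int x) => x % 2 == 0;
        }

        class Filterer                            // the dependent consumer
        {
            private readonly IPredicate predicate;
            public Filterer(IPredicate predicate) { this.predicate = predicate; } // injection point
            public IEnumerable<int> Filter(IEnumerable<int> xs) => xs.Where(predicate.Test);
        }

        class Program
        {
            static void Main()
            {
                // OO style: the caller is the injector of the contract's implementation.
                var evens1 = new Filterer(new EvenPredicate()).Filter(Enumerable.Range(1, 10));

                // HOF style: the delegate type Func<int, bool> is the contract,
                // and the lambda is the injected implementation.
                var evens2 = Enumerable.Range(1, 10).Where(x => x % 2 == 0);

                Console.WriteLine(string.Join(" ", evens1)); // 2 4 6 8 10
                Console.WriteLine(string.Join(" ", evens2)); // 2 4 6 8 10
            }
        }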

    Read the article

  • XPath & XML EDI B2B

    - by PearlFactory
    GoodToGo :) Best XML editor is Altova XMLSpy 2011 http://www.torrenthound.com/hash/bfdbf55baa4ca6f8e93464c9a42cbd66450bb950/torrent-info/Altova-XMLSpy-Enterprise-Edition-SP1-2011-v13-0-1-0-h33t-com-Full For whatever reason Piratebay has trojans and other nasties. Search in torrent.eu for Altova XMLSpy Enterprise Edition SP1 2011 v13.0.1.0. Also, if you like the product, purchase it in a commercial environment. Any well-structured/complex XML can be parsed at the speed of light using XPath queries, rather than the C# objects XPathNodeIterator and the like. Never write loops or generics or whatever high-level language technology; use the power of XPath. I will use a simple loop as an example (we could have many different techniques all achieving the same result). Instead of

        xmlNI2 = xmlNav.Select("/p:BookShop");
        if (xmlNI2.Count != 0)
        {
            while (xmlNI2.MoveNext())
            {
                string aNode = xmlNI2.Current.SelectSingleNode("Book", nsmgr).Value;
                if (aNode == "The Book I am after")
                    Console.WriteLine("Found My Book");
            }
        }

    this lengthy, cumbersome task can be achieved with a simple XPath query:

        Console.WriteLine(xmlNav.SelectSingleNode("/p:BookShop/Book[.='The Book I am after']", nsmgr).Value);

    Use the power of the parser and eliminate the middleman (C#/MSIL/JIT etc.). Get started fast and use the parser as outlined: 1) open the XML and go to Grid mode; 2) select the XPath tab on the bottom viewer/window as shown. From here you get IntelliSense and can quickly learn how to navigate/find the data using XPath. A key component of navigation with XPath is the "../" command, which basically says: from where I am now, go up one level. With XPath all commands are cumulative, i.e. you can search for a book title at the 2nd level of the XML and from there traverse 15 layers to paragraphs or words on a page, with expression validation occurring throughout this process (so in essence you may have arrived at a node within the XML having met 15 conditions along the way). Given 1-2 days with XMLSpy and XPath you unlock a technology that is super fast and simple to use. XML is a core component of what lies under the hood of so many techs, so it is no wonder that you want to be able to go down to the atomic level to achieve the result you want. Justin P.S. For a long time I saw XML as slow and a bit boring, but now I'm converted.
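
    For reference, a self-contained C# sketch of the same idea, with a made-up in-memory document and namespace standing in for the real BookShop file:

        using System;
        using System.IO;
        using System.Xml;
        using System.Xml.XPath;

        class XPathDemo
        {
            static void Main()
            {
                // Assumed sample document; any namespaced XML works the same way.
                const string xml =
                    "<BookShop xmlns='urn:example:shop'>" +
                    "  <Book>The Book I am after</Book>" +
                    "  <Book>Some other book</Book>" +
                    "</BookShop>";

                var doc = new XPathDocument(new StringReader(xml));
                XPathNavigator nav = doc.CreateNavigator();

                // Bind the 'p' prefix used in the query to the document's namespace.
                var nsmgr = new XmlNamespaceManager(nav.NameTable);
                nsmgr.AddNamespace("p", "urn:example:shop");

                // One XPath query replaces the whole iterate-and-compare loop.
                XPathNavigator hit =
                    nav.SelectSingleNode("/p:BookShop/p:Book[.='The Book I am after']", nsmgr);
                Console.WriteLine(hit != null ? hit.Value : "not found");
            }
        }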

    Read the article

  • How do references work in R?

    - by djechlin
    I'm finding R confusing because it has such a different notion of reference than I am used to in languages like C, Java, JavaScript... Ruby, Python, C++, well, pretty much any language I have ever programmed in. So one thing I've noticed is that variable names are not irrelevant when passing them to something else; the reference can be part of the data. E.g., per this tutorial,

        a <- factor(c("A","A","B","A","B","B","C","A","C"))
        results <- table(a)

    leads to the name a showing up in the result's attributes, as $dimnames$a. We've also witnessed that calling a function like a <- foo(alpha=1, beta=2) can create attributes named alpha and beta in a, or it can assign or otherwise compute on 1 and 2 for properties that already exist. (Not that there's a computer science distinction here - it just doesn't really happen in something like JavaScript, unless you want to emulate it by passing in an object and using key=value there.) Functions like names(...) return lvalues, so assigning to them affects their input. And the one that most got me is this:

        x <- c(3, 5, 1, 10, 12, 6)
        y = x[x <= 5]
        x[y] <- 0

    is different from

        x <- c(3, 5, 1, 10, 12, 6)
        x[x <= 5] <- 0

    Color me confused. Is there a consistent theory for what's going on here?

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game into a browser, and the server in C++ (an abstract part of which is already written and used with other games). Networking models may differ from each other, but currently I'm looking toward running the game's logic on both the client and the server, even though they're written in different languages; that is not the main problem, though. My previous game (a pretty big one, implemented with the efforts of ~5 programmers in 1.5 years) was mainly "written" within electronic tables (spreadsheets) as structured objects with implemented inheritance: a standalone tool was written which generated AS3 and C++ (the languages of the platforms to which the game was published) from a specified spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each and was mainly written by game designers who do not know any programming languages. But that game was single-player. Having stated the problem with the MMO I am currently implementing, I'm looking toward some broad pipeline in which problems like these are resolved:

    - game object descriptions (which starships exist within the game, how many HP they have, how fast they move, what damage they deal...)
    - action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells); actions are transmitted through the server between clients
    - influences (what happens when a specified action is applied to a specified object, i.e. "Ship A attacked Ship B: field 'HP' of Ship B reduced by amount of field 'damage' of Ship A"); influences can be much more complex, i.e. "damage is doubled when the Ship has >= 5 allies around him in a 200-unit range during night", and so on

    If such logic can be written within some "design document", it will easily be possible to: let designers do their job without programmers' intervention or any bug-prone programming; validate the described logic; and transfer (transform, convert) it to any programming language where it will be executed. Has somebody worked on something like that? Are there tools/engines/pipelines concerned with this? How do I handle all of these problems simultaneously in the best way, or am I imagining my tasks and problems properly?
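
    A hedged C# sketch of what such a data-driven "influence" could look like; every type, field, and number here is invented for illustration, not taken from any real pipeline:

        using System;
        using System.Collections.Generic;

        // In a real pipeline these records would be generated from the
        // designers' spreadsheet rather than typed in by hand.
        record ObjectType(string Name, int MaxHp, int Speed, int Damage);

        // An "influence": which field of the target changes, by how much,
        // expressed in terms of a field of the source.
        record Influence(string Action, string TargetField, string SourceField, int Multiplier);

        class GameObject
        {
            public ObjectType Type;
            public Dictionary<string, int> Fields = new();
            public GameObject(ObjectType t)
            {
                Type = t;
                Fields["HP"] = t.MaxHp;
                Fields["Damage"] = t.Damage;
            }
        }

        class Program
        {
            static void Apply(Influence inf, GameObject source, GameObject target)
            {
                // "field HP of Ship B reduced by amount of field Damage of Ship A"
                target.Fields[inf.TargetField] -= source.Fields[inf.SourceField] * inf.Multiplier;
            }

            static void Main()
            {
                var shipA = new GameObject(new ObjectType("ShipA", 100, 5, 15));
                var shipB = new GameObject(new ObjectType("ShipB", 120, 4, 10));
                var attack = new Influence("Attack", "HP", "Damage", 1);

                Apply(attack, shipA, shipB);
                Console.WriteLine(shipB.Fields["HP"]); // 105
            }
        }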

    Read the article

  • Review of TechEd 2012 - so far

    - by Stefan Barrett
    Disclaimer: probably going to next year's TechEd (but not 100% sure). As with most TechEds, this is not one of the best - but it's not bad. Some impressions so far: The food is not bad, though perhaps not as much choice as in previous years. The snacks, while a bit limited, are at least available. The alumni lounge is OK, though perhaps not as good as last year's. Wifi is a bit worse than in previous years - not really working in the big room, and a bit sporadic in the rest of the building. The device seems to make a big difference - the iPad seems to connect the easiest, while the iPhone & Lumia 800 are really struggling. The real problem is the content - not as developer-focused as in previous years. This shows up in a number of different ways; for example, while there is a Visual Studio booth, there is not much sign of anybody from the language teams. This is one of the few TechEds where I don't feel very surprised about anything - I've seen most of the developer stuff in previews. One example where I was surprised was the pre-conf on C++ - it's been years since I did any C++, but based on that session perhaps I should start again. While there are sessions, I'm not finding my schedule very challenged. For each time-slot there only seem to be 1, or rarely 2, interesting sessions. The focus seems to be on Windows 8, Azure and the phone, which while interesting (might give Win8 a go), are not enough.

    Read the article

  • Is there a LOGO interpreter that actually has a turtle?

    - by Tim Post
    This is not a repeat of the now infamous "How do I move the turtle in LOGO?" Recently, I had the following conversation with my five-year-old daughter: Daughter: Daddy, do you write programs? Me: Yes! Daughter: Daddy, what's a program? Me: A program is a set of instructions that a computer follows. Daughter: Daddy, can I write a program too? Me: Sure! This got me scrambling to think of a very basic language that a five-year-old could get some satisfaction from mastering rather quickly. I'm ashamed to admit that the first thing that came to mind was this:

        10 INPUT "Tell me a secret"; A$
        20 PRINT "Wow really? :"; A$
        30 GOTO 10

    That isn't going to hold a five-year-old's attention for very long, and it requires too much of a lecture. However, moving a turtle around and drawing neat pictures might just work. Sadly, my search for a LOGO interpreter yielded nothing but ad-ridden sites, flight simulators and a whole bunch of other stuff that I really don't want. I'm hoping to find a cross-platform (Java / Python) LOGO interpreter (dare I call it a simulator?) with the following features:

    - Can save / replay commands (stored programs)
    - Has an actual turtle
    - Sound effects are a plus

    Have you stumbled across something like this? If so, can you provide a link? I hate to ask a 'shopping' sort of question, but it seemed much better than "Is LOGO appropriate for a five year old?"

    Read the article

  • What are `Developmental Milestones` for programming skills?

    - by Holmes
    I studied in the field of Computer Science for 6 years: a bachelor's degree and a master's degree. I have studied all the basic programming languages like C, Java, VB, C#, Python, etc. When I have free time, I learn new programming languages and follow new programming trends by myself, such as PHP, HTML5, CSS3, LESS, Bootstrap, Symfony2, and GitHub. So, if someone wants me to write some instructions using these languages, I'm certain that I can do it; not so well, but I can get the job done. However, I don't have any favorite programming language. Moreover, I have also studied algorithms, databases, and so on. Everything I just wrote makes it seem that I know a lot in this field. In fact, I feel I am very stupid: I cannot answer 80% of the questions on SO, in spite of those languages I have studied. Perhaps it is because I have never worked before. As there are Developmental Milestones for children, which refer to how a child becomes able to do more complex things as they get older, I would like to evaluate the same thing but for programming skills. What is the set of functional skills or age-specific tasks that most programmers can do at a certain age range? In order to evaluate myself, I would like to ask your opinion: are all of the skills I mentioned above enough for a programmer to know at 25 years old? What are your suggestions for improving my skills in this field?

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code: var x = DoSomethingWith(y); How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it dependent only on y? Will it change the global or local state? How do closures affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: are there any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)
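
    For illustration, a hedged C# sketch of how far a mainstream signature can go in this direction: the [Pure] attribute from System.Diagnostics.Contracts documents (but does not enforce) freedom from side effects, while readonly structs and 'in' parameters constrain mutation of arguments. This is an approximation by convention, not the compile-time guarantee the question asks about:

        using System;
        using System.Diagnostics.Contracts;

        readonly struct Point
        {
            public readonly double X, Y;
            public Point(double x, double y) { X = x; Y = y; }
        }

        static class Geometry
        {
            // [Pure] documents "no observable state change"; the C# compiler
            // does not verify it, so this is convention rather than a guarantee.
            [Pure]
            public static double Distance(in Point a, in Point b)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y;
                return Math.Sqrt(dx * dx + dy * dy);
            }
        }

        class Program
        {
            static void Main()
            {
                var p = new Point(0, 0);
                var q = new Point(3, 4);
                // 'in' passes by read-only reference: the callee cannot reassign
                // the parameter, and 'readonly struct' rules out field mutation.
                Console.WriteLine(Geometry.Distance(in p, in q)); // 5
            }
        }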

    Read the article

  • How do you keep your basic skills from atrophy?

    - by kojiro
    I've been programming for about 10 years, and I've started to migrate to more of a project management position. I still do coding, but less often now. One of the things that I think is holding me back in my career is that I can't "let go". I think I fear letting hard-won programming skills atrophy while I sit in meetings and annotate requirements. (Not to mention I don't trust people to write requirements who don't understand the code.) I can't just read books and magazines about coding. I'm involved in some open source projects in my free time, and stackoverflow and friends help a bit, because I get the opportunity to help people solve their programming problems without micromanaging, but neither of these are terribly structured, so it's tempting to work first on the problems I can solve easily. I guess what I'd like to find is a structured set of exercises (don't care what language or environment) that:

    - I can do periodically
    - has some kind of time requirement so I can tell if I've been goofing off
    - has some kind of scoring so I can tell if I'm making mistakes

    Is there such a thing? What would you do to keep your skills fresh?

    Read the article

  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C# (based largely on Dan Vanderboom's generic implementation). I am now considering how to handle a Count property, which Dan does not implement. The obvious and easy way would be to use a recursive call that traverses the tree, happily adding up nodes (or iteratively traversing the tree with a queue and counting nodes, if you prefer). It just seems expensive. (I also may want to lazy-load some of my nodes down the road.) I could maintain a count at the root node: all children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. Generally less expensive, and it puts the heavy lifting in what I think will be less frequently called functions. It seems a little brute-force, and that usually means exception cases I haven't thought of yet (or bugs, if you prefer). Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure, rather than counting on the fly? Don't worry about the lazy load or the language; I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
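
    This is not Vanderboom's implementation, just a minimal C# sketch of the cached-count idea: each node stores the size of its own subtree and propagates deltas up the parent chain on add/remove, so Count is O(1) to read and O(depth) to maintain:

        using System;
        using System.Collections.Generic;

        class TreeNode<T>
        {
            private readonly List<TreeNode<T>> children = new();
            public T Value;
            public TreeNode<T> Parent { get; private set; }

            // Number of nodes in this subtree, including this node.
            public int Count { get; private set; } = 1;

            public TreeNode(T value) { Value = value; }

            public TreeNode<T> AddChild(T value)
            {
                var child = new TreeNode<T>(value) { Parent = this };
                children.Add(child);
                PropagateDelta(+child.Count);     // child.Count is 1 here
                return child;
            }

            public void RemoveChild(TreeNode<T> child)
            {
                if (children.Remove(child))
                {
                    PropagateDelta(-child.Count); // subtracts the whole branch
                    child.Parent = null;
                }
            }

            private void PropagateDelta(int delta)
            {
                for (var node = this; node != null; node = node.Parent)
                    node.Count += delta;
            }
        }

        class Program
        {
            static void Main()
            {
                var root = new TreeNode<string>("root");
                var a = root.AddChild("a");
                a.AddChild("a1");
                a.AddChild("a2");
                Console.WriteLine(root.Count); // 4
                root.RemoveChild(a);           // breaking off a branch is O(depth), not O(n)
                Console.WriteLine(root.Count); // 1
            }
        }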

    Read the article

  • Do you think Scala will be the dominant JVM language, i.e. be the next Java? [on hold]

    - by user1037729
    From what I've read about Scala so far, I think it has some nice features, but I do not think it should be "the next Java". It might however end up being the next Java (due to fashion rather than fact), but let's hope it does not... To me it adds a lot of complexity over Java, which is a simple and scalable language. Scala pattern matching allows you to perform some type/value checking in a more concise way; this is possible in Java, and Scala's pattern matching has a limit to it: you cannot continuously match deeper and deeper down the object graph, so why not just stick to Java and use decent invariants? Scala provides tuples; these are easy enough to make in Java: create a static factory method and it all reads nicely too. Scala provides mixins; why not just use composition? I believe Scala implicits are bad: they can lead to code becoming complex and hard to maintain; explicitness is good. Scala provides closures, but they will be in Java 8 too. Scala has the lazy keyword for lazy instantiation; this is easy enough to do in Java by calling a getter which creates the instance when needed, no hidden magic here. Scala can be used with Akka; well, so can Java, there is a Java Akka implementation. Scala offers additional functional features, but these can all be created in Java; there are many frameworks which have implemented functional features in Java. All in all, all Scala seems to offer is additional complexity, and that's it...

    Read the article

  • How to start and maintain an after-work project

    - by Sam
    I work as a full-time developer. My workplace, however, is very limiting in the technologies and programming languages I can use: all of the work is done in C++. It is clear that C++ is rapidly losing (or maybe has already lost) its leading position. (Please don't flame me; I have years and years of C++ experience, and I love this language; I am merely stating a fact.) I have a few ideas for Java/Android projects, as well as a project I would like to implement in C#. I see this as a way for me to stay current with the job market's trends, and I hope that it will help me find my next job in a more up-to-date area. So here's the problem: my normal workday is 10-11 hours; after finishing with the kids and house chores I get about 1-2.5 hours before I am too tired to think, much less code. At that point I go to bed frustrated, disappointed with myself for not being able to stick with my plans, and then I wake up the next morning to do it all again. I have a few more hours during the weekends, but clearly I would need to do something different if I want to reach any of my goals. Is there any way for me to make better use of the time I have? Did any of you have a similar problem and successfully resolve it?

    Read the article

  • Writing Java in Java

    - by Skeith
    I have been using Java for several months at work now and am becoming mildly competent in it. The problem, I think, is that I program C++ in Java. By that I mean I have always used C++, and I am treating Java as a simple syntax change instead of appreciating it as its own language. For instance, a function-local static variable in C++ plays the role of an ordinary member variable in Java: since Java is all classes, fields maintain their values between method calls. Little things like this trip me up constantly, as I am self-taught. What I want is to invest the time to become a good Java programmer, not just a C++ programmer who can write in Java. The problem is I do not know how to do this. I have tried reading the Java doc pages, but I find them very clinical and hard to understand. So what I am looking for is recommendations on how I can learn to think in Java: books that teach Java concepts, not Java syntax; online tutorials that I can work through that give it a context; established Java traditions/best practices; and any other thing that you could recommend.

    Read the article

  • What are some good courses to take my programming to the next level?

    - by absentx
    I am in search of either some in-person or online training that could take my coding to the next level. I am looking to attack two specific areas: JavaScript: While I have been getting by with JavaScript for three or four years, I still feel like it takes a back seat to my other programming. I use jQuery a lot but would prefer to be proficient in pure JS also. PHP: I feel pretty proficient at PHP, but I know there is only room to improve. Here I am interested in something that can teach me the more advanced aspects of the language, improve my code writing, and perhaps cover object-oriented PHP in depth also. I have looked into Netcom's training courses before, but I can't tell if their Advanced Webmaster Professional course would be a good fit or not. It seems kind of like a force-fed course, but I am interested in it because I am looking for something in the one-to-two-week range that is targeted at what I am looking for. I have zero experience with any type of online course in terms of programming. It appears lots are available, but I am not sure of the quality.

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    "Sucking Less Every Year": a trail of thought that has been on my mind for a while. Quoting directly from the post: I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers. How often does this happen (or not happen) to you? How long before you see an actual improvement in your coding - a month, a year? Do you ever revisit your old code? How often does your old code plague you, or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written quickly to meet a deadline, and in some cases those quick fixes mean we have to rewrite most of the application/code. No arguments about that. Some of the developers I have come across argued that they were already at the evolved stage where their coding doesn't need improvement or can't be improved anymore. Does this happen? If so, how many years into coding in a particular language does one expect this to happen?

    Read the article

  • My 2012 Professional Development Goals

    - by kerry
    Once again I am going to declare some professional goals for my upcoming year.

    - Convert my blog to Jekyll hosted on GitHub - I am tired of WordPress, tired of spam, and would like to try something new. I have already started on this; just need to finish it up.
    - Launch my GWT / Google App Engine application - I am currently developing a GWT application to be deployed to Google App Engine.
    - Do another presentation at the user group - At least a few lightning talks. I have a few ideas.
    - Attend a tech conference - Dev Nexus is the likely target.
    - Post more often - I did 10 posts last year; I would like to maybe double that next year (including this one).
    - Attend a user group meeting outside of Nashville JUG - A rollover from last year; I will probably regularly attend the Interactive Developers meeting.
    - Study another language - I have been thinking about looking into Dart or perhaps Go.
    - Launch an Android app - Another holdover from last year. I am thinking of doing a small app having to do with managing the silent state of the phone.

    Read the article

  • Is it better to specialize in a single field I like, or expand into other fields to broaden my horizons?

    - by Oak
    This is a dilemma about which I have been thinking for quite a while. I'm a graduate student and my topics of interest are programming language design, code analysis, compilation, etc. So far, this field has been very interesting and rewarding for me, so I was thinking about finding a job in that field and continuing to specialize in it. I feel like it's a relatively solid field which won't "go out of style" anytime soon. I've always thought that in such complex fields it's better to be a real expert than just another guy who superficially understands what the experts are talking about. On the other hand, I feel that by specializing this way I really limit my future options. I have always been a strong believer in multidisciplinary approaches to problems. Maybe I should go search for a general programming job in which I could gain experience in other fields, as well as occasionally apply my favorite field to solving problems. Specializing in only one or two fields could prevent me from thinking outside the box and cause stagnation. I would really like to hear more opinions about this choice. The truth is I'm already leaning towards one of the choices, so basic psychology says nothing will change my mind, but I would still love to hear some feedback.

    Read the article

  • Which architecture should I choose for this project?

    - by Jichao
    I have a project: the server, which is based on a telephony PCI board, is responsible for receiving phone calls from customers and then redirecting the calls to operators. I have decided to code the server using the C++ programming language and the Qt framework, because the PCI board SDK's interface is C/C++ and for the sake of portability. The server needs to send the customer's information to the operator while ringing the operator, and the UI of the operator client should be browser-based. Now the key problem is how the server can notify the operator that there is a phone call for him or her. One architecture I have considered is like this: the operator's browser client uses Ajax polling to ask the web server whether there is a call for the client; the web server polls the database server to check whether there is a call; the desktop server (C++) waits for phone calls and stores the call information in the database. The other operations, such as hanging up the call from the client or transferring the call to another operator, also use this architecture. So, is there any way other than polling the server (JS code setInterval('getDail', 1000)) to decide whether there is a call for the operator? Is this architecture feasible, or should I use some techniques that I don't know well, such as web services, XML-RPC, or SOAP?
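
    One alternative to fixed-interval polling is long polling (Comet): the browser issues a request that the web server holds open until a call event arrives or a timeout expires. Below is a minimal C# sketch using .NET's HttpListener; the port, URL path, and the thread simulating the telephony side are all invented for illustration, and a production server would not block one thread per waiting operator:

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class LongPollServer
        {
            // Signaled by the telephony side when a call arrives for an operator.
            static readonly AutoResetEvent CallArrived = new(false);
            static volatile string pendingCaller = "";

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/wait-for-call/"); // illustrative URL
                listener.Start();

                // Simulate the C++ desktop server reporting a call after 5 seconds.
                new Thread(() =>
                {
                    Thread.Sleep(5000);
                    pendingCaller = "+1-555-0100";
                    CallArrived.Set();
                }) { IsBackground = true }.Start();

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext(); // operator's long-poll request

                    // Hold the request until a call arrives or 30 s pass, instead
                    // of making the browser re-ask every second.
                    bool gotCall = CallArrived.WaitOne(TimeSpan.FromSeconds(30));
                    string body = gotCall ? "{\"call\":\"" + pendingCaller + "\"}" : "{\"call\":null}";

                    byte[] bytes = Encoding.UTF8.GetBytes(body);
                    ctx.Response.ContentType = "application/json";
                    ctx.Response.ContentLength64 = bytes.Length;
                    ctx.Response.OutputStream.Write(bytes, 0, bytes.Length);
                    ctx.Response.Close();
                }
            }
        }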

    Read the article

  • Best multi-platform mobile development tool, or use iPhone tools?

    - by Jesse Millikan
    I may be building a mobile app for a client soon. Their primary focus is the iPhone, but my boss would like to be able to target multiple platforms if it's feasible. The app will probably be a large but technically simple business application backed by a web service. So, here's the question as I see it: What is currently the strongest cross-platform mobile development tool that supports iOS? Would you choose it over native development tools? If you choose native, contrast it with a cross-platform tool you've used. In addition, For a project of the type we're expecting, what's the level of effort for your chosen tool versus other tools? What's the actual level of support of the tool for other platforms and their unique look and feel, capabilities, etc.? How thorough is the documentation of the product? How well do you like the development experience itself, e.g. the language, tools, documentation? Is it something you would choose to do long-term? I'll put a bounty out unless I get fantastic answers.

    Read the article

  • Understanding Application binary interface (ABI)

    - by Tim
    I am trying to understand the concept of an application binary interface (ABI). From The Linux Kernel Primer: An ABI is a set of conventions that allows a linker to combine separately compiled modules into one unit without recompilation, such as calling conventions, machine interface, and operating-system interface. Among other things, an ABI defines the binary interface between these units. ... The benefits of conforming to an ABI are that it allows linking object files compiled by different compilers. From Wikipedia: an application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application. ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on. I was wondering whether an ABI depends on both the instruction set and the OS. Are those two all that an ABI depends on? What kinds of roles does an ABI play in the different stages of compilation: preprocessing, conversion of code from C to assembly, conversion of code from assembly to machine code, and linking? From the first quote above, it seems to me that an ABI is needed only for the linking stage, not the other stages. Is that correct? When does an ABI need to be considered? Does an ABI need to be considered during programming in C, assembly, or other languages? If yes, how are ABI and API different? Or is it only for the linker or compiler? Is an ABI specified for/in machine code, assembly language, and/or C?
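
    One place where an ABI becomes visible from application code is the interop boundary. A hedged C# illustration: a P/Invoke declaration has to state the calling convention and rely on matching data sizes, which are exactly the details an ABI pins down. The example assumes a Unix-like system where the name "libc" can be resolved to the C library:

        using System;
        using System.Runtime.InteropServices;

        class AbiDemo
        {
            // The declaration must agree with the C library's ABI: the symbol
            // name, the calling convention, and the size/representation of each
            // type. 'int' works here because C's int is 32-bit on common ABIs.
            [DllImport("libc", EntryPoint = "abs", CallingConvention = CallingConvention.Cdecl)]
            private static extern int c_abs(int value);

            static void Main()
            {
                // If the types or convention were declared wrongly, this call
                // could corrupt the stack or return garbage; the compiler has
                // no way to check it, which is the point about ABIs.
                Console.WriteLine(c_abs(-42)); // 42
            }
        }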

    Read the article

  • Oracle is Child's Play…in NSW

    - by divya.malik
    A few weeks ago, my colleague Michael Seback posted a blog entry on Oracle's acquisition of Haley. We recently read an interesting report from Down Under, and here was our press release on the implementation of Oracle's policy automation software in New South Wales, which I thought I would share. We always love hearing about our software "at work", and especially in the public sector social services area, where it makes a big difference to people's lives. Here were some of the reasons why NSW chose Oracle software: "One of the things Oracle's Policy Automation system is good at is allowing you to take decision trees and rules that are obviously written in English and code them up using very much a natural language approach," said Holling (CIO for Human Services). "So it was quite a short process to translate the final set of rules that were written on paper into business rules that were actually embedded in the system." "Another reason why we chose Oracle's automation tool is because with future versions of Siebel it comes very tightly integrated with that. It allows us then to basically take the results of the Policy Automation survey and actually populate our client management system database with that information," said Holling. As per Surend Dayal, North America VP, Oracle's Policy Automation has applications across a wide range of industries, including the public sector - especially health and human services - and also financial services, insurance, and even airline rewards programs. In other words, any business process that requires consistent, accurate decision-making where complex legislation and/or internal policies are involved. Click here to read more about Oracle and Haley.

    Read the article

  • Legacy Code Retreat Questions

    - by MarkPearl
    I recently heard of the concept of a Legacy Code Retreat. Since I have attended and helped facilitate some normal Code Retreats, I thought it might be interesting to try a Legacy Code Retreat, but I have a few questions on how a legacy CR differs from a normal one. If anyone has attended a Legacy CR and has some suggestions on how best to host these events, please leave a comment on what has worked for you in the past or if you have any answers to my questions below... Should you restrict the languages that people can do the sessions in? In the normal CRs I have been involved in in the past, we have had people attend and code in their programming language of choice. A normal CR lends itself to this because each session starts with no code. With a legacy CR, each session seems to start with an existing code base. Is there some sort of limitation on the languages that people can work in during the sessions? If not, how do you give them a base to start from? What happens at the beginning of each session? In the normal CRs that I have attended, each session would have a constraint set on it - i.e. no if statements used, no primitives, etc. With a legacy CR it seems more like patterns for refactoring are learned. Does the facilitator explain the pattern to be used before the session starts, or are participants just given a code base to start from and an objective to achieve?

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from the simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant; I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of a main abstract Buff class, where only the relevant events are overridden. Then each character could have a vector of buffs currently applied. Does this solution make sense? I can certainly see dozens of event types being necessary; it feels like making a new subclass for each buff is overkill, and it doesn't seem to allow for any buff "interactions". That is, if I wanted to implement a cap on damage boosts, so that even if you had 10 different buffs which all give 25% extra damage, you would only do 100% extra instead of 250% extra. And there are more complicated situations that ideally I could control. I'm sure everyone can come up with examples of how more sophisticated buffs can potentially interact with each other in a way that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
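
    Not a definitive design, just a small C# sketch of one common approach (all names invented): buffs override only the hooks they need, and stat queries aggregate modifiers in one place where a cap can be enforced:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        abstract class Buff
        {
            // Buffs override only the hooks they care about.
            public virtual void OnTurnStart(Character c) { }
            public virtual double DamageMultiplier() => 0.0; // additive bonus, e.g. 0.25
        }

        class DamageBoost : Buff
        {
            private readonly double bonus;
            public DamageBoost(double bonus) { this.bonus = bonus; }
            public override double DamageMultiplier() => bonus;
        }

        class Character
        {
            public List<Buff> Buffs { get; } = new();
            public int BaseDamage = 20;

            public int EffectiveDamage()
            {
                // Aggregating in one place lets us cap stacked bonuses:
                // ten +25% buffs still total at most +100%.
                double bonus = Math.Min(Buffs.Sum(b => b.DamageMultiplier()), 1.0);
                return (int)(BaseDamage * (1.0 + bonus));
            }

            public void StartTurn()
            {
                foreach (var b in Buffs) b.OnTurnStart(this);
            }
        }

        class Program
        {
            static void Main()
            {
                var hero = new Character();
                for (int i = 0; i < 10; i++) hero.Buffs.Add(new DamageBoost(0.25));
                Console.WriteLine(hero.EffectiveDamage()); // 40, not 70: capped at +100%
            }
        }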

    Read the article

  • Installation causing broken packages

    - by AWE
    Here I come: I am so determined to use Ubuntu that I paid a professional to install it for me (dual boot). When I got it, I got a lot of things from the Software Center. Skype did not have a download button, so I googled it, and Ubuntu help told me to do this: sudo add-apt-repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner" and then this: sudo apt-get update && sudo apt-get install skype. The terminal told me that "this is potentially harmful...", but I thought it was Ubuntu language meaning "are you sure?". Now items cannot be installed or removed until the package catalog is repaired, so I want to repair it, but the package operation fails. "sudo aptitude -f install" - command not found. Synaptic Package Manager tells me that I have two broken packages, libc6 and libc6-dev, but it doesn't help, only makes life complicated. What the *#$%&!!! I don't want to be forced to become a computer scientist just to be able to use a free, open-source OS. P.S. The sound stopped working.

    Read the article

  • Question regarding Readability vs Processing Time

    - by Jordy
    I am creating a flowchart for a program with multiple sequential steps. Every step should be performed only if the previous step succeeded. I use a C-based programming language, so the layout would be something like this:

    METHOD 1:

        if (step_one_succeeded())
        {
            if (step_two_succeeded())
            {
                if (step_three_succeeded())
                {
                    // etc. etc.
                }
            }
        }

    If my program had 15+ steps, the resulting code would be terribly unfriendly to read. So I changed my design and implemented a global error code that I keep passing by reference, to make everything more readable. The resulting code would be something like this:

    METHOD 2:

        int _no_error = 0;
        step_one(_no_error);
        if (_no_error == 0) step_two(_no_error);
        if (_no_error == 0) step_three(_no_error);
        if (_no_error == 0) step_four(_no_error);

    The cyclomatic complexity stays the same. Now let's say there are N steps, and let's assume that checking a condition takes 1 clock and performing a step takes no time. The number of checks in Method 1 can be anywhere between 1 and N; in Method 2, however, it is always N-1. So Method 1 will be faster most of the time. Which brings me to my question: is it bad practice to sacrifice time in order to make the code more readable? And why (not)?
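
    For comparison, a hedged sketch (in C#, since the question isn't tied to one C-family language) of the early-return style, which keeps the flat readability of Method 2 while stopping at the first failure like Method 1; the step names are placeholders:

        using System;

        class Pipeline
        {
            // Placeholder steps: each returns true on success.
            static bool StepOne()   => true;
            static bool StepTwo()   => true;
            static bool StepThree() => false;
            static bool StepFour()  => true;

            static bool RunAll()
            {
                // Early returns: no nesting, and no checks after the first
                // failure, so readability and running time need not trade off.
                if (!StepOne())   return false;
                if (!StepTwo())   return false;
                if (!StepThree()) return false;
                if (!StepFour())  return false;
                return true;
            }

            static void Main()
            {
                Console.WriteLine(RunAll() ? "all steps succeeded" : "a step failed");
            }
        }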

    Read the article
