Search Results

Search found 15912 results on 637 pages for 'cross language'.


  • Most suited technology for browser games?

    - by Tingle
    I was thinking about making a 2D MMO which I would, in the long run, support on various platforms like desktop, Mac, browser, Android and iOS. The server will be C++/Linux based and the first client would go in the browser. I have done some research and found that WebGL and Flash 11 support hardware-accelerated rendering, and I saw some other options like plain HTML5 canvas drawing. So my question is, which technology should I use for such a project? My main goal is that users have a hassle-free experience, getting whatever their hardware can give them through hardware acceleration. The client should also work on the most basic out-of-the-box PCs that any casual PC or Mac user has. Another criterion is that it should be developer friendly. I've messed with WebGL a bit, for example, and that would require writing an engine from scratch - which is acceptable but not preferred. Also, if I go the non-ActionScript route, which language is most preferred in terms of speed and flexibility? I'm not too fond of JavaScript due to the garbage collector, but I have learned to work around it. Thank you for your time.

    Read the article

  • Creating new games on Android and/or iPhone

    - by James Clifton
    I have a successful Facebook poker game that is running very nicely, and now some people have asked if I can port this to other platforms - mainly mobile devices (and I have been asked to make a tablet version; do I really need a separate version?). I am currently a PHP programmer (and game designer) and I simply don't have the time to learn Android and other languages - so I have decided to pay third parties to program them (if viable). What I need to know is what programming language is needed for the following four devices: Android mobile phone, iPhone, iPad and tablets? Can they all run off a central SQL database? If they can't, then I'm not interested :( Do any of these run Flash? Have I covered all my main bases here? For example, if a person programs for an Android mobile phone, is that too different from an Android tablet? They will have slightly different graphics (since the tablet has a greater screen area, might as well use it), but do they need to be started from scratch? Same goes for iPhone/iPad: do they really need to be programmed differently if the only difference is the graphics?

    Read the article

  • How do you avoid name similarities between your classes and the native ones?

    - by Oscar
    I just ran into an "interesting problem" which I would like your opinion about: I am developing a system, and for many reasons (meaning: abstraction, technology independence, etc.) we create our own types for exchanging information. For instance: if there is a method called SendEmail which is invoked by the business logic, it may have a parameter of type OurCompany.EMailMessage, which is an object that is completely technology independent and contains only "business relevant data" (for instance, no information about header encoding). Inside the SendEmail function, we get this information from our EMailMessage object and create a MailMessage (this one is technology specific) object so it can be sent over the network. As you can already notice, our class has a very similar name to the "native" language class. The problem is: this is exactly what they are, email messages, so it is hard to find another meaningful name for them. Do you have this problem often? How do you manage it? Edit: @mgkrebbs just commented about using fully qualified names. This is our current approach, but a little bit too verbose, IMHO. I would like something cleaner, if possible.
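
    One middle ground between fully qualified names and renaming the business type is a namespace alias per file, which keeps both names short while making the origin of each explicit. A minimal C# sketch, assuming .NET's System.Net.Mail.MailMessage and a hypothetical OurCompany.EMailMessage with From/To/Subject/Body properties:

        using Biz = OurCompany;
        using Net = System.Net.Mail;

        public static class EmailSender
        {
            // Translate the technology-independent message into the framework type.
            public static Net.MailMessage ToMailMessage(Biz.EMailMessage source)
            {
                return new Net.MailMessage(source.From, source.To)
                {
                    Subject = source.Subject,
                    Body = source.Body
                };
            }
        }

    At the call site the two types then read as Biz.EMailMessage and Net.MailMessage, which tends to be less noisy than their fully qualified forms.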

    Read the article

  • Java and what to do with it

    - by SterAllures
    I've been browsing through several websites and several topics on this website. I'm just a starting programmer and I want to make a good decision. From what I understand, Java is used a lot for server stuff and web applets, but not really for desktop applications running on a client; it's also used for Android programming and several other mobile platforms. I'm really interested in Android programming. I really love to program for mobile devices, in this case Android, because I think it has a lot of potential and I don't like the iPhone. If I want to program on Android I have to learn Java (aside from Mono). But if my decision changes over the next couple of years, I don't think Java is the right language for getting a job writing desktop applications; I think I would end up in a job programming server stuff rather than desktop applications. That's why I think C# is a good choice: I can program for Windows Phone 7 (I hope that will get big), and I have the feeling C# is more widely used for desktop applications. So I think C# is more versatile when looking at both mobile programming and desktop programming. Or am I totally wrong in thinking this?

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    Sucking Less Every Year - a train of thought that has been on my mind for a while. Quoting directly from the post: "I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers." How often does this happen or not happen to you? How long before you see an actual improvement in your coding - a month, a year? Do you ever revisit your old code? How often does your old code plague you, or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written quickly to meet a deadline, and in some cases those quick fixes mean we have to rewrite most of the application/code. No arguments about that. Some of the developers I have come across argued that they were already at the evolved stage where their coding doesn't need improvement or can't be improved anymore. Does this happen? If so, how many years into coding in a particular language does one expect this to happen?

    Read the article

  • Question regarding Readability vs Processing Time

    - by Jordy
    I am creating a flowchart for a program with multiple sequential steps. Every step should only be performed if the previous step was successful. I use a C-based programming language, so the layout would be something like this:

        METHOD 1:

        if (step_one_succeeded()) {
            if (step_two_succeeded()) {
                if (step_three_succeeded()) {
                    // etc. etc.
                }
            }
        }

    If my program had 15+ steps, the resulting code would be terribly unfriendly to read. So I changed my design and implemented a global error code that I keep passing by reference, to make everything more readable. The resulting code would be something like this:

        METHOD 2:

        int _no_error = 0;
        step_one(_no_error);
        if (_no_error == 0) step_two(_no_error);
        if (_no_error == 0) step_three(_no_error);
        if (_no_error == 0) step_four(_no_error);
        // etc.

    The cyclomatic complexity stays the same. Now let's say there are N steps, and let's assume that checking a condition costs 1 clock and performing a step takes no time. The number of checks in Method 1 can be anywhere between 1 and N, while in Method 2 it is always N-1, so Method 1 will be faster most of the time. Which brings me to my question: is it bad practice to sacrifice time in order to make the code more readable? And why (not)?
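
    For what it's worth, when each step can be written to return a success flag, short-circuit evaluation gives a third shape that keeps Method 1's early exit without the nesting. A minimal C# sketch (the Step*Succeeded names stand in for the hypothetical steps from the question):

        using System;

        static class Pipeline
        {
            // Hypothetical steps; each returns true on success.
            static bool StepOneSucceeded()   => true;
            static bool StepTwoSucceeded()   => true;
            static bool StepThreeSucceeded() => true;

            static void Main()
            {
                // && short-circuits: later steps never run once one fails,
                // so this keeps the early exit of Method 1 without the nesting.
                bool ok = StepOneSucceeded()
                       && StepTwoSucceeded()
                       && StepThreeSucceeded();   // ...and so on for the remaining steps

                Console.WriteLine(ok ? "all steps succeeded" : "a step failed");
            }
        }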

    Read the article

  • I've inherited 200K lines of spaghetti code -- what now?

    - by kmote
    I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics). The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is no consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc.). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck"? So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?

    Read the article

  • Unable to debug an encoded JavaScript?

    - by miles away
    I'm having some problems debugging an encoded JavaScript. The script I'm referring to is given in this link over here. The encoding is simple and works by shifting the Unicode values by whatever CodeKey was used during encoding. The code that does the decoding is given below:

        <script language="javascript">
        function dF(s) {
            var s1 = unescape(s.substr(0, s.length - 1));
            var t = '';
            for (i = 0; i < s1.length; i++)
                t += String.fromCharCode(s1.charCodeAt(i) - s.substr(s.length - 1, 1));
            document.write(unescape(t));
        }
        </script>

    I'm interested in knowing or understanding the values (e.g. s1, t). For example, when the value of i is 0, what values would the expressions s1.charCodeAt(i) and s.substr(s.length - 1, 1) hold? The reason I'm doing this is to understand how the CodeKey actually works. I don't see anything in the code above which tells it to decode on the basis of the CodeKey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending upon the CodeKey selected during the encoding process. One can verify this using the link I have given above. To debug, I'm using the Firebug add-on, with the script running as localhost on my WAMP server. I'm able to put a breakpoint on the JS using Firebug, but I'm unable to retrieve any of the user-defined parameters or functions I mentioned above. In this context, what would be the best way to debug this encoded JS?

    Read the article

  • Engine for 2D Top-Down Physics-Based Skeletal Animation

    - by RylandAlmanza
    I just watched the Sui Generis video, and was completely amazed. Specifically, the part where the big troll thing is beating up the player with his flail. This got me really excited, and I would like to try implementing something like this in a 2D top-down format. Something like this. That atloria example seems simple enough, but it's not exactly what I'm looking to make. I think atloria is using predefined animations, whereas I would like to make something more physics-based, like the Sui Generis engine does. So, I'm wondering what physics engines might work for something like this, and whether I'd need to implement my own skeletal system, or if I could just use "joints" and such from the engine. The only experience I have with physics engines is Box2D, which I've heard shouldn't be used for top-down settings, and I can think of a few reasons it wouldn't work out well. One of those reasons is gravity: in Box2D, gravity pulls towards a side of the screen (usually the bottom), and I wouldn't want my player's forearms constantly being pulled to one side. :) I should also mention that the programming language doesn't matter all that much to me. I'm currently playing with HTML5 stuff, though. :) Thanks in advance!

    Read the article

  • To PHP or Not to PHP? [closed]

    - by Vad
    Should I learn PHP in depth for my smaller projects or not? My main knowledge is Java/JavaScript for the web. My old small projects were written in classic ASP. However, ASP has had its day, so now I am looking into going deeper with another scripting language which I can use for small website projects. Though I know PHP at a basic level, I never liked it. But I have to admit it is so widely used that I had better start liking it, and all hosting services offer mostly PHP solutions. However, there are quite a number of issues with PHP when I google for them; developers seem not to like it much. I wish I could use server-side JavaScript for all my needs, but hosting is an issue, plus many small businesses already want to improve their existing PHP sites. And lastly, say I want to create a web app for distribution: PHP sounds like the best bet. Or am I wrong?

    Read the article

  • Are there legitimate reasons for returning exception objects instead of throwing them?

    - by stakx
    This question is intended to apply to any OO programming language that supports exception handling; I am using C# for illustrative purposes only. Exceptions are usually intended to be raised when a problem arises that the code cannot immediately handle, and then to be caught in a catch clause in a different location (usually an outer stack frame). Q: Are there any legitimate situations where exceptions are not thrown and caught, but simply returned from a method and then passed around as error objects? This question came up for me because .NET 4's System.IObserver<T>.OnError method suggests just that: exceptions being passed around as error objects. Let's look at another scenario, validation. Let's say I am following conventional wisdom, and am therefore distinguishing between an error object type IValidationError and a separate exception type ValidationException that is used to report unexpected errors:

        partial interface IValidationError { }

        abstract partial class ValidationException : System.Exception
        {
            public abstract IValidationError[] ValidationErrors { get; }
        }

    (The System.ComponentModel.DataAnnotations namespace does something quite similar.) These types could be employed as follows:

        partial interface IFoo { }        // an immutable type

        partial interface IFooBuilder     // mutable counterpart to prepare instances of the above type
        {
            bool IsValid(out IValidationError[] validationErrors); // true if no validation error occurs
            IFoo Build();                 // throws ValidationException if !IsValid(…)
        }

    Now I am wondering, could I not simplify the above to this:

        partial class ValidationError : System.Exception { }   // = IValidationError + ValidationException

        partial interface IFoo { }        // (unchanged)

        partial interface IFooBuilder
        {
            bool IsValid(out ValidationError[] validationErrors);
            IFoo Build();                 // may throw ValidationError or sth. like AggregateException<ValidationError>
        }

    Q: What are the advantages and disadvantages of these two differing approaches?
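
    For reference, the IObserver<T> case mentioned above really does treat the exception as a value rather than something to rethrow. A minimal sketch of such an observer (the class name and the logging are made up for illustration; the IObserver<T> members are the real .NET interface):

        using System;

        // Treats the exception it receives as an error object: it records and
        // reports it, but never throws it.
        sealed class LoggingObserver<T> : IObserver<T>
        {
            public Exception LastError { get; private set; }

            public void OnNext(T value) => Console.WriteLine("got: " + value);

            public void OnError(Exception error)
            {
                LastError = error;                        // exception passed around as data
                Console.WriteLine("failed: " + error.Message);
            }

            public void OnCompleted() => Console.WriteLine("done");
        }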

    Read the article

  • Java Magazine: Developer Tools and More

    - by Tori Wieldt
    The May/June issue of Java Magazine explores the tools and techniques that can help you bring your ideas to fruition and make you more productive. In “Seven Open Source Tools for Java Deployment,” Bruno Souza and Edson Yanaga present a set of tools that you can use now to drastically improve the deployment process on projects big or small—enabling you and your team to focus on building better and more-innovative software in a less stressful environment. We explore the future of application development tools at Oracle in our interview with Oracle’s Chris Tonas, who discusses plans for NetBeans IDE 9, Oracle’s support for Eclipse, and key trends in the software development space. For more on NetBeans IDE, don’t miss “Quick and Easy Conversion to Java SE 8 with NetBeans IDE 8” and “Build with NetBeans IDE, Deploy to Oracle Java Cloud Service.” We also give you insight into Scrum, an iterative and incremental agile process, with a tour of a development team’s Scrum sprint. Find out if Scrum will work for your team. Other article topics include mastering binaries in Maven-based projects, creating sophisticated applications with HTML5 and JSF, and learning to program with BlueJ. At the end of the day, tools don’t make great code—you do. What tools are vital to your development process? How are you innovating today? Let us know. Send a tweet to @oraclejavamag. The next big thing is always just around the corner—maybe it’s even an idea that’s percolating in *your* brain. Get started today with this issue of Java Magazine. Java Magazine is a FREE, bi-monthly, online publication. It includes technical articles on the Java language and platform; Java innovations and innovators; JUG and JCP news; Java events; links to online Java communities; and videos and multimedia demos. Subscriptions are free, registration required.

    Read the article

  • Installation causing broken packages

    - by AWE
    Here I come. I am so determined to use Ubuntu that I paid a professional to install it for me (dual boot). When I got it, I installed a lot of things from the Software Center. Skype did not have a download button, so I googled it and Ubuntu help told me to do this: sudo add-apt-repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner" and then this: sudo apt-get update && sudo apt-get install skype The terminal told me "that this is potentially harmful..." but I thought it was Ubuntu language meaning "are you sure?" Now items cannot be installed or removed until the package catalog is repaired, so I want to repair it, but the package operation fails. "sudo aptitude -f install" - command not found. Synaptic Package Manager tells me that I have two broken packages, libc6 and libc6-dev, but doesn't help, only makes life complicated. What the *#$%&!!! I don't want to be forced to become a computer scientist just to be able to use a free, open source OS. P.S. the sound stopped working.

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code: var x = DoSomethingWith(y); How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: Do there exist any languages today that make symbolic distinctions between these kinds of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)
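
    To make the complaint concrete in C#: the two methods below have identical signatures, yet one is pure and the other silently mutates hidden state, and nothing at the call site or in the type system tells them apart (the names are made up for illustration):

        using System.Collections.Generic;

        static class SignatureTellsYouNothing
        {
            static readonly List<int> Log = new List<int>();

            // Pure: depends only on its argument and touches nothing else.
            static int DoSomethingWithPure(int y) => y * 2;

            // Identical signature, but it also changes global state.
            static int DoSomethingWithSneaky(int y)
            {
                Log.Add(y);            // hidden side effect
                return y * 2;
            }
        }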

    Read the article

  • Is it better to specialize in a single field I like, or expand into other fields to broaden my horizons?

    - by Oak
    This is a dilemma about which I have been thinking for quite a while. I'm a graduate student and my topics of interest are programming language design, code analysis, compilation, etc. So far, this field has been very interesting and rewarding for me, so I was thinking about finding a job in that field and continuing to specialize in it. I feel like it's a relatively solid field which won't "get out of style" anytime soon. I've always thought that in such complex fields it's better to be a real expert than just another guy who superficially understand what the experts are talking about. On the other hand, I feel that by specializing this way I really limit my future option. I have always been a strong believer in multidisciplinary approaches to problems. Maybe I should go search for a general programming job in which I could gain experience in other fields, as well as occasionally apply my favorite field for solving problems. Specializing in only one or two fields can prevent me from thinking outside the box and cause stagnation. I would really like to hear more opinions about this choice. The truth is I'm already leaning towards one of the choices, so basic psychology says nothing will change my mind, but I would still love to hear some feedback.

    Read the article

  • Laptop keyboard and touchpad disabled on startup

    - by JAM
    I use Ubuntu 14.04 LTS on my Toshiba Satellite L775D laptop. 14.04 is the only operating system installed. I am new to Linux and only barely scratching the surface of doing things in the terminal. When I boot my laptop, the keyboard and touchpad are disabled (99.99% of the time) if I do nothing. The only direct effect I can have is to keep pressing the Num Lock key during boot when I notice the Num Lock light go off. If I do this, then I have a 95% chance of the keyboard and touchpad working once I am in the operating system. I am able to use my wireless mouse regardless. I have not seen any messages during boot. Previously I have tried playing with input method settings and utilities as well as language support settings. This same problem exists with the 12.... and 13... versions of Ubuntu. With everything I have tried (from looking at other posts/suggestions) it seems I can have only a temporary effect. Please help me find a permanent solution to this problem. Thank you.

    Read the article

  • Extracting Frustum Planes (Hartmann & Gribb method)

    - by DAVco
    I have been grappling with the Hartmann/Gribb method of extracting the frustum planes for some time now, with little success. There doesn't appear to be a "definitive" topic or tutorial which combines all the necessary information, so perhaps this can be it. First of all, I am attempting to do this in C# (for PlayStation Mobile), using OpenGL-style column-major matrices in a right-handed coordinate system, but obviously the math will work in any language. My projection matrix has a near plane at 1.0, a far plane at 1000, an FOV of 45.0 and an aspect ratio of 1.7647. I want to get my planes in world space, so I build my frustum from the view-projection matrix (that's projectionMatrix * viewMatrix). The view matrix is the inverse of the camera's world transform. The problem is: regardless of what I tweak, I can't seem to get a correct frustum. I think I may be missing something obvious. Focusing on the near and far planes for the moment (since they have the most obvious normals when correct): when my camera is positioned looking down the negative z-axis, I get two planes facing in the same direction, rather than opposite directions. If I strafe my camera left and right (while still looking along the z-axis), the x value of the normal vector changes. Obviously, something is fundamentally wrong here; I just can't figure out what - maybe someone here can?
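
    For comparison, here is a minimal C# sketch of the row-combination extraction for a column-major, column-vector (OpenGL-style) setup, assuming the combined matrix projection * view is available as a flat float[16] with element (row r, column c) stored at index c * 4 + r. With this convention the six plane normals should all point into the frustum, which makes the near/far symptom described above easy to check:

        using System;

        static class FrustumPlanes
        {
            // Returns six planes as (a, b, c, d) with a*x + b*y + c*z + d = 0,
            // normalized, in the order left, right, bottom, top, near, far.
            public static float[][] Extract(float[] m)   // m = projection * view, column-major
            {
                float[] Row(int r) => new[] { m[r], m[4 + r], m[8 + r], m[12 + r] };

                float[] Combine(float[] a, float[] b, float sign)
                {
                    var p = new float[4];
                    for (int i = 0; i < 4; i++) p[i] = a[i] + sign * b[i];
                    float len = (float)Math.Sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
                    for (int i = 0; i < 4; i++) p[i] /= len;
                    return p;
                }

                float[] r0 = Row(0), r1 = Row(1), r2 = Row(2), r3 = Row(3);

                return new[]
                {
                    Combine(r3, r0, +1f),   // left
                    Combine(r3, r0, -1f),   // right
                    Combine(r3, r1, +1f),   // bottom
                    Combine(r3, r1, -1f),   // top
                    Combine(r3, r2, +1f),   // near
                    Combine(r3, r2, -1f),   // far
                };
            }
        }

    If the near and far normals still come out parallel with this layout, the usual culprits are a row-major matrix (combine columns instead of rows) or multiplying view * projection instead of projection * view.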

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars; I then have to know other things linked on that page, and the links on all the linked pages, and so on for a specified number of levels. I want to write a web crawler like HTTrack or something similar that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible. Here is an example, pretending we are bending the rules by grabbing links from only the first sentence of each page, and archiving and "processing" two levels deep. The page is Ternary operation.

    The first level: "In mathematics a ternary operation is an N-ary operation."

    The second level:

    Under Mathematics: "Mathematics (from Greek μάθημα máthēma, 'knowledge, study, learning') is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition."

    Under N-ary: "In logic, mathematics, and computer science, the arity (/ˈærɪti/) of a function or operation is the number of arguments or operands that the function takes."

    Under Operation: "In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values."

    I need some way to determine in what order to approach all these wiki pages to learn the concept (in this case, ternary operations). Following along with this example, one way to show the reading order would be a printout or flowchart like so: it shows that the first sentence of the Mathematics page doesn't link to the first sentences of the pages linked from the Ternary operation page two levels deep (please tell me how I should explain this). In other words, the child node of the top page's first sentence, Mathematics, does not have any child nodes that reference the children of the top page's other child nodes, N-ary and Operation; thus it is safe to read it first. Since N-ary has a link to Operation, we should read the Operation page second and finally read the N-ary page last. Again, I wish to use as much prewritten code as possible, and I was wondering what language to use and what would be the simplest way to go about doing this, if there isn't already something out there? Thank you!
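
    As a rough starting point, a depth-limited crawl that records which article links to which, followed by a dependencies-first (post-order) walk, produces exactly the kind of reading order described above. A minimal C# sketch, assuming plain HTTP fetches and a naive regex for /wiki/ links; real code would want an HTML parser plus the "first sentence only" restriction, and all names here are made up for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net.Http;
        using System.Text.RegularExpressions;
        using System.Threading.Tasks;

        static class WikiReadingOrder
        {
            static readonly HttpClient Http = new HttpClient();

            // title -> titles it links to (its "prerequisites")
            static async Task<Dictionary<string, List<string>>> Crawl(string start, int maxDepth)
            {
                var graph = new Dictionary<string, List<string>>();
                var frontier = new Queue<(string Title, int Depth)>();
                frontier.Enqueue((start, 0));

                while (frontier.Count > 0)
                {
                    var (title, depth) = frontier.Dequeue();
                    if (graph.ContainsKey(title) || depth > maxDepth) continue;

                    string html = await Http.GetStringAsync("https://en.wikipedia.org/wiki/" + title);
                    var links = Regex.Matches(html, "href=\"/wiki/([^\"#:]+)\"")
                                     .Cast<Match>()
                                     .Select(m => m.Groups[1].Value)
                                     .Distinct()
                                     .Take(5)              // crude stand-in for "first sentence only"
                                     .ToList();

                    graph[title] = links;
                    foreach (var link in links) frontier.Enqueue((link, depth + 1));
                }
                return graph;
            }

            // Post-order walk: emit prerequisites before the page that needs them.
            static void ReadingOrder(string title, Dictionary<string, List<string>> graph,
                                     HashSet<string> done, List<string> order)
            {
                if (!done.Add(title)) return;
                foreach (var dep in graph.TryGetValue(title, out var deps) ? deps : new List<string>())
                    ReadingOrder(dep, graph, done, order);
                order.Add(title);
            }

            static async Task Main()
            {
                var graph = await Crawl("Ternary_operation", maxDepth: 2);
                var order = new List<string>();
                ReadingOrder("Ternary_operation", graph, new HashSet<string>(), order);
                Console.WriteLine(string.Join(Environment.NewLine, order));
            }
        }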

    Read the article

  • Unit testing to prove balanced tree

    - by Darrel Hoffman
    I've just built a self-balancing tree (red-black) in Java (language should be irrelevant for this question though), and I'm trying to come up with a good means of testing that it's properly balanced. I've tested all the basic tree operations, but I can't think of a way to test that it is indeed well and truly balanced. I've tried inserting a large dictionary of words, both pre-sorted and un-sorted. With a balanced tree, those should take roughly the same amount of time, but an unbalanced tree would take significantly longer on the already-sorted list. But I don't know how to go about testing for that in any reasonable, reproducible way. (I've tried doing millisecond tests on these, but there's no noticeable difference - probably because my source data is too small.) Is there a better way to be sure that the tree is really balanced? Say, by looking at the tree after it's created and seeing how deep it goes? (That is, without modifying the tree itself by adding a depth field to each node, which is just wasteful if you don't need it for anything other than testing.)
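
    One way to check balance directly, without timing and without adding a depth field to the nodes, is to walk the finished tree from the outside and assert the red-black height bound, height <= 2 * log2(n + 1). The question is in Java, but the idea is the same in any language; a minimal sketch in C#, where INode stands in for whatever node accessors the hypothetical tree exposes:

        using System;

        // Hypothetical read-only view of the tree under test.
        interface INode
        {
            INode Left { get; }
            INode Right { get; }
        }

        static class BalanceChecks
        {
            static int Height(INode node) =>
                node == null ? 0 : 1 + Math.Max(Height(node.Left), Height(node.Right));

            static int Count(INode node) =>
                node == null ? 0 : 1 + Count(node.Left) + Count(node.Right);

            // A valid red-black tree with n nodes has height at most 2 * log2(n + 1),
            // so inserting a large (even pre-sorted) word list and asserting this bound
            // catches a tree that has degenerated toward a linked list.
            public static bool LooksBalanced(INode root)
            {
                int n = Count(root);
                return Height(root) <= 2.0 * Math.Log(n + 1, 2) + 1e-9;
            }
        }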

    Read the article

  • I have an MIS degree. How do I sell myself as a programmer?

    - by hydroparadise
    So, I graduated with a BSBA in Management Information Systems with honors almost 2 years ago, which is more of a business degree. As of right now, I do have the job title of "Programmer", but it's more of a report-writing position in an arbitrary, proprietary language called PowerOn, with the occasional interesting project using more mainstream technologies like .NET and Java. I am also somewhat isolated, being the only programmer in the workplace, which I believe is a detriment to my career path. The only people I have to bounce ideas off are those on the various SE sites. I don't regret going MIS, but over the past couple of years I have discovered my passion for coding, even though I have been doing some form of coding, professionally and as an enthusiast, for years. I do want to pursue my Masters in CS (at a later time), but I am not sure if I necessarily need a CS degree to get in with a team of programmers. In addition, I have taken a number of classes in different languages along the way (C++, Java, SQL, and VB.NET). I believe my strength is in problem solving, where code is just a tool for tackling the problem if needed. My question: How do I best sell myself as a programmer? Should I continue pounding out reports and wait till I have my Masters in CS? Or am I viable as a programmer as I stand?

    Read the article

  • Costs/profit of/when starting an indie company

    - by Jack
    In short, I want to start a game company. I do not have much coding experience (just a basic understanding and the ability to write basic programs), any graphics design experience, any audio mixing experience, or whatever else technical. However, I do have a lot of ideas, great analytical skills and a very logical approach to life. I do not have any friends who are even remotely technical (or creative in regards to games, for that matter). So now that we've cleared that up, my question is this: how much, minimally, would it cost me to start such a company? I know that a game could be developed in under half a year, which means the company would have to operate for half a year prior, and that's assuming that the people working on the first project do their jobs well, don't leave game-breaking bugs or a bunch of minor bugs, etc. So how much would it cost me, and what would be the likely profit in half a year? I'm looking at minimal costs here, as to do it I would have to sell my current apartment and buy a new, smaller one, pay taxes, and likely move to the US/CA/UK to be closer to technologically advanced people (and be able to speak the language, of course). EDIT: I'm looking at a small project for starters, not a huge AAA title.

    Read the article

  • What am I missing about PHP?

    - by Aerovistae
    It's like this mythical thing that a dominating portion of developers say is just the best option for back-end development, a part of development about which I know virtually nothing beyond the absolute basics. So I've looked up PHP tutorials a bunch of times, trying to figure out why it's so powerful and common, but it's frustrating: all the tutorials treat you like a new programmer. You know, this is how you make an if/else statement, here's a for loop, etc. The "advanced topics" show you how to handle POST and GET requests and whatnot. But there must be more to it! I don't get it! That's practically no different from JavaScript. What am I missing about this language? What else can it do? Where's the power and versatility? I've heard it called a function soup; where are all the functions? Please chide me. I'm clearly missing something.

    Read the article

  • Getting started on Large Projects

    - by Mercfh
    So I just graduated from my college with a B.S. in Comp. Science (although it was a good school, we're the only accredited CS department in our state.....for w/e that means lol). I feel like I'm a decent programmer, not amazing....but not terrible. Anyways, I got my first job about 2 weeks ago; it's a pretty entry-level job: firmware development/tester (I know, I know, people look down on testers...but I gotta start somewhere). Anyways, there isn't a whole lot of coding to be had right now (mostly simple stuff), but here soon I have the option of helping out with development (which is what I want to do). Thing is....I have NEVER worked on a huge project. I mean, in school sure we had "group" projects, but nothing really big. So I'm not super familiar with HUGE classes and such (main language was C++)....Is this something I'll just get used to with time? Some fellow students were used to that from internships and such...but I never got that chance. My job was mostly a "one man job" kind of thing. Mostly little things. Plus, in class we never did huge projects anyway. So how do you guys, I guess, "plan" out these things? Do you use a whiteboard and plan out classes and such....or what? Also...another worry of mine is that I have to use Google......a LOT for examples of code, because sometimes I just don't get how something works. Is this normal? It makes me feel sorta.....stupid I guess. I mean "technically" I've had 4-5 years of coding experience......but it really only feels like I've had 2 years of REAL experience. If that makes any sense? Thanks

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. This thinking seems to invoke possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there really be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states: During the inspection of the code for the emergency braking system of a new high-speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    - The code contains three recursive functions (well, that one is obvious).
    - The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    - The code contains one linked list that uses dynamic memory allocation (second obvious problem).
    - All inputs are checked to determine that they are within expected bounds before they are used.
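
    To make the accuracy concern concrete, here is a minimal C# sketch of the classic accumulation effect; single precision is used to exaggerate it, and with double the error is far smaller but still nonzero, which is presumably what the text's "cannot be guaranteed" is pointing at:

        using System;

        static class FloatAccumulation
        {
            static void Main()
            {
                float sum = 0.0f;
                for (int i = 0; i < 1_000_000; i++)
                    sum += 0.1f;                       // 0.1 has no exact binary representation

                Console.WriteLine(sum);                // visibly off from the exact 100000
                Console.WriteLine(sum == 100000.0f);   // False
            }
        }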

    Read the article

  • Assignments in mock return values

    - by zerkms
    (I will show examples using PHP and PHPUnit, but this may be applied to any programming language.) The case: let's say we have a method A::foo that delegates some work to class M and returns the value as-is. Which of these solutions would you choose?

        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue('baz'));

        $obj = new A($mock);
        $this->assertEquals('baz', $obj->foo());

    or

        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue($result = 'baz'));

        $obj = new A($mock);
        $this->assertEquals($result, $obj->foo());

    or

        $result = 'baz';
        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue($result));

        $obj = new A($mock);
        $this->assertEquals($result, $obj->foo());

    Personally I always follow the 2nd solution, but just 10 minutes ago I had a conversation with a couple of developers who said that it is "too tricky" and chose the 3rd or the 1st. So what would you usually do? And do you have any conventions to follow in such cases?

    Read the article
