Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Shouldn't storage classes be taught early in a C class or book?

    - by Adam Mendoza
    Shouldn't storage classes be taught early in a C class or book? I notice that a lot of books, even some of the better ones, cover them toward the end of the book, and some books just add them as an appendix. I would teach them together with variables. They are so foundational, and unfortunately I think many readers do not make it that far into a book. Now that auto has a different meaning (versus simply being optional), it may confuse people who didn't realize it has always been there. For example, in C Programming: A Modern Approach the topic doesn't appear until chapter 18:
        18.2 Storage Classes (p. 401)
             Properties of Variables (p. 401)
             The auto Storage Class (p. 402)
             The static Storage Class (p. 403)
             The extern Storage Class (p. 404)
             The register Storage Class (p. 405)
             The Storage Class of a Function (p. 406)
             Summary (p. 407)
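    As a quick illustrative sketch (file and variable names are invented), the storage classes that chapter lists fit in a dozen lines of C; the auto here is the historically redundant C keyword the question alludes to:

        /* storage_demo.c - illustration only */
        #include <stdio.h>

        int file_scope = 1;           /* external linkage by default      */
        extern int other_unit;        /* defined in some other .c file    */
        static int private_to_file;   /* internal linkage: this file only */

        static void count_calls(void)
        {
            auto int temporary = 0;   /* 'auto' is the (redundant) default */
            static int calls = 0;     /* keeps its value between calls     */
            register int i;           /* hint: keep in a CPU register      */

            for (i = 0; i < 3; i++)
                temporary++;
            calls++;
            printf("temporary=%d calls=%d\n", temporary, calls);
        }

        int main(void)
        {
            count_calls();  /* prints temporary=3 calls=1 */
            count_calls();  /* prints temporary=3 calls=2 */
            return 0;
        }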

    Read the article

  • How to explain that writing universally cross-platform C++ code and shipping products for all OSes is not that easy?

    - by sharptooth
    Our company ships a range of desktop products for Windows, and lots of Linux users complain on forums that we should have written versions of our products for Linux years ago, and that the reasons we don't are that:
    - we're a greedy corporation
    - all our technical specialists are underqualified idiots
    Our average product is something like 3 million lines of C++ code. My colleagues' and my analysis is the following:
    - writing cross-platform C++ code is not that easy
    - preparing a lot of distribution packages and maintaining them for all widespread versions of Linux takes time
    - our estimate is that the Linux market is something like 5-15% of all users, and those users will likely not want to pay for our effort
    When this is brought up, the response is again that we're greedy underqualified idiots and that when everything is done right, all this is easy and painless. How reasonable is our assessment that writing cross-platform code and maintaining numerous distribution packages takes a lot of effort? Where can we find some easy yet detailed analysis, with real-life stories, that shows beyond the shadow of a doubt exactly what amount of effort it takes?

    Read the article

  • Know Thy Operating System?

    - by AdityaGameProgrammer
    As developers, how much time do you spend (if any) learning the hidden features and tricks of your operating system? How important do you feel this is for productivity in day-to-day programming tasks? What do you mean when you list knowledge of an OS on your resume? What are your favorite hidden or lesser-known features? For example: a common problem like "How can I open a cmd window in a specific location?", a do-it-yourself solution in, say, XP, and what to do if something breaks. Are these things you only look into as and when you find the need to do so?

    Read the article

  • Does your organization still use the term "screens" to describe a user interface?

    - by bit-twiddler
    I have been in the field long enough to remember when the term "screen" entered our lexicon. As difficult as it is to believe, the early systems on which I worked had no user interface (UI), that is, unless one counts a keypunch machine and job listings as a user interface. These systems ran as "card image" production jobs back in the day when being a computer operator required a reasonably deep understanding of how computers worked. Flashing forward to today: I cringe every time I hear a systems practitioner use the term "screen." The metaphor no longer fits the medium. The term somewhat fit back when the user dialog consumed 100% of available monitor real estate; however, it lost its relevance the moment we moved to windowed environments. With the above said, does your organization still use the term "screens" to describe an application's UI? Has anyone successfully purged the term from an organization? For those who do not use the term to describe UI dialog elements, what term do you use in place of "screen"?

    Read the article

  • Facebook android app changes

    - by jogabonito
    I am referring to this article about how Facebook has rolled out a native app for Android, replacing their previous HTML5-based one. From my usage, things have definitely become much faster. I was wondering whether this native app is purely Java-based, or whether it involves some JNI. Image loading, for one, has become faster, which is generally not thought of as a Java strong point (IMHO). Are there any details on what Facebook has done?

    Read the article

  • How do I pitch ASP.NET over PHP to a potential client?

    - by roman m
    I work at a Microsoft shop doing mainly web development. We had a client who asked us to review (improve) the data model for his web app, but said that he wants to develop his app in PHP (he knows "a guy" who can do it). When I asked him why he wants to go with PHP, he gave me the standard set of arguments from the 90's:
    - Microsoft is evil, and PHP is free
    - Writing an ASP.NET app is more expensive (software-wise)
    - Why would Facebook use PHP if it was a bad idea? [classic]
    He had a few more comments about the costs associated with going .NET. The truth is that "Microsoft is expensive" does not hold water any longer: with their "Express" suite, you can develop an ASP.NET app without paying anything for software. When it comes to hosting, you can save a few bucks with PHP over .NET, but that's a small fraction of the projected development costs (we quoted 10-15k). Going back to my question, what arguments would I give to a client in favor of ASP.NET over PHP? [please provide sources for quantitative claims]

    Read the article

  • TDD: Write a separate test for object initialization, or rely on other tests exercising it?

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't be easily altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad" and all we put in there is exercising the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you take it out and let other tests simply exercise the loading, or leave it in so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in the case of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest" and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
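    Since CppUnit is mentioned: one common compromise is to keep a single explicit initialization test and move the shared loading into the fixture's setUp(), so the other tests rely on it without repeating it. A minimal sketch, assuming a hypothetical Loader class (stubbed here so the example stands alone):

        #include <cppunit/TestFixture.h>
        #include <cppunit/extensions/HelperMacros.h>

        // Minimal stand-in for the legacy class under test (illustration only).
        class Loader {
            bool loaded;
        public:
            Loader() : loaded(false) {}
            void load(const char*) { loaded = true; }
            bool isLoaded() const  { return loaded; }
            int  answer()   const  { return 42; }
        };

        class LoaderTest : public CppUnit::TestFixture {
            CPPUNIT_TEST_SUITE(LoaderTest);
            CPPUNIT_TEST(testLoadSucceeds);      // the one explicit "init" test
            CPPUNIT_TEST(testBehaviourAfterLoad);
            CPPUNIT_TEST_SUITE_END();

            Loader loader;

        public:
            void setUp()                         // runs before every test
            {
                loader.load("valid_input");      // shared initialization lives here
            }

            void testLoadSucceeds()              // fails first and loudly if loading breaks
            {
                CPPUNIT_ASSERT(loader.isLoaded());
            }

            void testBehaviourAfterLoad()        // relies on setUp, no duplication
            {
                CPPUNIT_ASSERT_EQUAL(42, loader.answer());
            }
        };

        CPPUNIT_TEST_SUITE_REGISTRATION(LoaderTest);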

    Read the article

  • Looking for a real-world example illustrating that composition can be superior to inheritance

    - by Job
    I watched a bunch of lectures on Clojure and functional programming by Rich Hickey as well as some of the SICP lectures, and I am sold on many concepts of functional programming. I incorporated some of them into my C# code at a previous job, and luckily it was easy to write C# code in a more functional style.
    At my new job we use Python, and multiple inheritance is all the rage. My co-workers are very smart, but they have to produce code fast given the nature of the company. I am learning both the tools and the codebase, but the architecture itself slows me down as well. I have not written the existing class hierarchy (nor would I be able to remember everything about it), and so, when I started adding a fairly small feature, I realized that I had to read a lot of code in the process. On the surface the code is neatly organized and split into small functions/methods and not copy-paste-repetitive, but the flip side of not being repetitive is that there is some magic functionality hidden somewhere in the hierarchy chain that glues things together and does work on my behalf, and it is very hard to find and follow. I had to fire up a profiler and run it through several examples and plot the execution graph, as well as step through a debugger a few times, search the code for some substring, and just read pages at a time.
    I am pretty sure that once I am done, my resulting code will be short and neatly organized, and yet not very readable. What I write feels declarative, as if I were writing an XML file that drives some other magic engine, except that there is no clear documentation on what the XML should look like or what the engine does, other than the existing examples that I can read along with the source code for the 'engine'. There has got to be a better way.
    IMO using composition over inheritance can help quite a bit. That way the computation will be linear rather than jumping all over the hierarchy tree. Whenever the functionality does not quite fit into the inheritance model, it will need to be mangled to fit in, or the entire inheritance hierarchy will need to be refactored/rebalanced, sort of like an unbalanced binary tree needs reshuffling from time to time in order to improve the average seek time.
    As I mentioned before, my co-workers are very smart; they just have been doing things a certain way and probably have an ability to hold a lot of unrelated crap in their heads at once. I want to convince them to give composition, and a functional as opposed to OOP approach, a try. To do that, I need to find some very good material. I do not think that an SICP lecture or one by Rich Hickey will do - I am afraid it would be flagged as too academic. Simple examples of Dog and Frog and AddressBook classes do not really convince one way or the other - they show how inheritance can be converted to composition, but not why it is truly and objectively better. What I am looking for is a real-world example of code that was written with a lot of inheritance, then hit a wall and was re-written in a different style that uses composition. Perhaps there is a blog post or a book chapter. I am looking for something that can summarize and illustrate the sort of pain that I am going through.
    I have already been throwing the phrase "composition over inheritance" around, but it was not received as enthusiastically as I had hoped. I do not want to be perceived as the new guy who likes to complain and bash existing code while looking for a perfect approach and not contributing fast enough. At the same time, my gut is convinced that inheritance is often an instrument of evil, and I want to show a better way in the near future. Have you stumbled upon any great resources that can help me?
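    Not the requested battle-tested case study, but a tiny hypothetical Python sketch of the shape of the argument - the collaborator is passed in and visible at the call site instead of being found somewhere up the hierarchy:

        # Inheritance: behaviour is inherited from somewhere up the tree.
        class JsonReportMixin:
            def render(self, data):
                import json
                return json.dumps(data)

        class SalesReport(JsonReportMixin):        # where does render() come from?
            def build(self):
                return self.render({"total": 100})

        # Composition: the collaborator is handed in and named explicitly.
        class Report:
            def __init__(self, renderer):
                self.renderer = renderer           # explicit dependency

            def build(self):
                return self.renderer({"total": 100})

        import json
        print(SalesReport().build())               # {"total": 100}
        print(Report(json.dumps).build())          # {"total": 100}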

    Read the article

  • Languages on a resume: Is it better to put "C/C++" or "C, C++"?

    - by Kevin
    I'm graduating in a couple of weeks, and my resume (as expected) lists the languages that I've had experience with. Previously I put "C/C++", but back then I didn't have as much experience with these two languages as I do now. Now that I've formally learned both, it has become evident to me (and anyone who really knows these languages) that they are similar, and completely dissimilar at the same time. Sure, most C code is compilable C++ code, but syntax and the incorporation of library functions are pretty much where the similarities end. In most non-trivial problems, chances are that the desirable C++ solution will be different from the desirable C solution. My question: will recruiters take note or care about whether you put "C/C++" as opposed to "C, C++"? Will they assume a lack of knowledge of the workings of either because of the first form, or perhaps see the second form as a potential "resume beefer" (listing them as two languages instead of "one")? Furthermore, for jobs that you've applied to that were particularly interested in these two languages, did the interview process include questions about the differences between C programming and C++ programming (that is, about actual programming techniques, not only the extra paradigms in the latter)?

    Read the article

  • What have you learned from the bugs you helped discover and fix?

    - by Ethel Evans
    I liked the core of this question, and wanted to re-ask it in a way that made it less about 'fun' and more about 'What do these past mistakes tell us about how we can write and test software better?' As an SDET, I'm always looking for anecdotes about new and interesting ways that programs can fail. I've learned a lot from these tales in the past, and would like to get that from the intelligent people in this community as well. I'd be interested in hearing what the issue was, how it was caught, whether you think there was anything that could reasonably have been done to catch it earlier or to avoid the same issue on later projects, and any other interesting lessons you took away from this bug. Please only write about bugs you personally were involved with, ideally on a project you worked on (e.g., no "10 years before I was born, this happened and it was FUNNY!" answers). Please vote up answers that are thought-provoking or could change how you develop or test in some way, so this isn't just 'social fun'. Try to avoid voting up something just because it was funny.

    Read the article

  • Automated testing tool development challenges (for embedded software)

    - by Karthi prime
    My boss wants to come up with a proposal for the following tool:
    - An IDE: able to build, compile, and debug, with JTAG programming for the micro-controller.
    - A test suite that reads the code in the IDE, auto-generates the test cases, and reports the in-target unit testing results (which is done by controlling code execution in the micro-controller via the IDE).
    - A no-overhead code coverage tool which interacts with the test suite and the IDE.
    My work is to obtain the high-level architecture of this tool, so as to proceed further. My current knowledge:
    - There are tool-chains available from the chip manufacturer for the micro-controllers which can be utilized along with an open-source IDE like Eclipse; together with an open-source burner, a complete IDE for a micro-controller can be put together.
    - Test cases can be auto-generated by reading the source file through a process of parsing and scripting, based on keywords.
    - The test suite must be able to command the IDE to control execution through breakpoints and read the register contents from the micro-controller - this enables the in-target unit testing.
    - No-overhead code coverage should be done via no-overhead code instrumentation, so that it can execute in the resource-constrained environment of the micro-controller.
    I have the following questions:
    - Any advice on the validity of my understanding?
    - What challenges will I face during development?
    - What are the helpful open-source tools in this area?
    - What is the development time for this software?
    Thanks

    Read the article

  • How to adopt the Scrum agile methodology for a small .NET team

    - by Thabo
    I am working at a small product-based company developing .NET applications. There is a small team with 5-6 developers. I am the person responsible for planning everything, but my primary role is software developer. Our current project is very unstable because of poor organization. Today my boss called me and told me to submit a report about the required resources, an appropriate methodology, the required man power, and their salary scales to make the current project a success. I know I don't have enough organizational skills, and I need to go deeper into my programming skills, so I need to focus only on development. I can't manage the project anymore, and I am now searching for other ways to make the ongoing development a success. My questions are:
    - What is a suitable agile methodology for my team? Is Scrum suitable for the above-mentioned scenario?
    - If we adopt Scrum, what do we have to do next? (I think hiring someone to manage the project is more suitable, so we would have to get a Scrum master and some other developers.)
    - Are there any resources (books, blogs, etc.) with tips and advice for solving this problem?
    - If Scrum is not a suitable methodology for our scenario, what else would be more suitable to adopt?
    Can anyone give a good solution for my problem?

    Read the article

  • Studying parallel programming

    - by mort
    I'm currently finishing my Bachelor's degree in Computer Science and thinking a lot about which specialisation to choose in my Master's degree. One subject I'm particularly interested in is parallel programming. However, this does not seem to be a standard topic in Computer Science degrees, although it is something that is used more and more - new processors nowadays are usually dual or quad core. So I was wondering: does anybody know a good study program in this field? I was mostly looking at universities in Germany, but they tend to combine the application side with some type of engineering or natural science. Thus, programs are more of the "Computational Engineering" or "Computational Science" type, but I'm more interested in the Computer Science part of it, i.e. parallel programming, languages and compilers, algorithms and hardware.

    Read the article

  • Good book for improving C# skills?

    - by JMarsch
    Hello: I was asked to recommend a good book for a mid-level experienced developer who wants to improve their coding skills (a C# developer). I was thinking about:
    - Code Complete: http://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1291221928&sr=8-1
    - The Pragmatic Programmer: http://www.amazon.com/Pragmatic-Programmer-Journeyman-Master/dp/020161622X/ref=sr_1_3?ie=UTF8&qid=1291221928&sr=8-3
    - Effective C#: http://www.amazon.com/Effective-Covers-4-0-Specific-Development/dp/0321658701/ref=sr_1_1?s=books&ie=UTF8&qid=1291222038&sr=1-1
    What do you think about those? Any other suggestions?

    Read the article

  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In "When to use C over C++, and C++ over C?" there is a statement with regard to code size / C++ exceptions. Jerry answers (among other points): "(...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)", to which I asked why that would be, to which Jerry responded: "the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)" I do not really doubt this on a technical, real-world level. Therefore I'm interested (purely out of curiosity) to hear of real-world examples where a project chose C++ as a language and then chose to disable exceptions. (Not merely "not using" exceptions in user code, but disabling them in the compiler, so that you can't throw or catch exceptions.) Why does a project choose to do so (still using C++ and not C, but no exceptions) - what are/were the (technical) reasons?
    Addendum: for those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled:
    - STL collections (vector, ...) do not work properly (allocation failure cannot be reported)
    - new can't throw
    - constructors cannot fail
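    As a hedged illustration of the addendum (names invented, not from any particular project): with exceptions disabled, failure reporting typically moves to factories, two-phase initialization, and nothrow allocation, roughly like this:

        #include <cstdio>
        #include <new>      // std::nothrow

        class Device {
            bool ok_;
            Device() : ok_(false) {}          // the constructor cannot fail...
            bool init(int id) {               // ...so failure moves to init()
                ok_ = (id >= 0);
                return ok_;
            }
        public:
            // Two-phase construction behind a factory that reports failure
            // with a null pointer instead of an exception.
            static Device* create(int id) {
                Device* d = new (std::nothrow) Device;   // never throws
                if (d && !d->init(id)) { delete d; d = 0; }
                return d;
            }
        };

        int main() {
            if (Device* d = Device::create(-1)) {
                std::printf("device ready\n");
                delete d;
            } else {
                std::printf("device creation failed\n"); // the error-code path
            }
            return 0;
        }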

    Read the article

  • Android, OpenGL and extending GLSurfaceView?

    - by Spoon Thumb
    This question is part technical, part meta, part subjective, and very specific: I'm an indie game dev working on Android, and for the past 6 months I've struggled with and finally succeeded in making my own 3D game app for Android. So I thought I'd hop on SO and help out others struggling with Android and OpenGL ES. However, the vast majority of questions relate to extending GLSurfaceView. I made my whole app without extending GLSurfaceView (and it runs fine). I can't see any reason at all to extend GLSurfaceView for the majority of questions I come across. Worse, the Android documentation implies that you ought to, but gives no detailed explanation of why, or of what the pros/cons are versus not extending it and doing everything by implementing your own GLSurfaceView.Renderer, as I did. Still, the sheer volume of questions where the problem is purely to do with extending GLSurfaceView is making me wonder whether there actually is some really good reason for doing it that way versus the way I've been doing it (and suggesting in my answers to others). So, is there something I'm missing? Should I stop answering questions in the meantime? Android OpenGL documentation
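    For reference, the no-subclassing route described above looks roughly like this (a from-memory sketch, not a drop-in activity):

        // Plain GLSurfaceView + your own Renderer, no subclass needed.
        import android.app.Activity;
        import android.opengl.GLSurfaceView;
        import android.os.Bundle;
        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;

        public class GameActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                GLSurfaceView view = new GLSurfaceView(this);
                view.setRenderer(new GLSurfaceView.Renderer() {
                    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                        gl.glClearColor(0f, 0f, 0f, 1f);      // one-time GL setup
                    }
                    public void onSurfaceChanged(GL10 gl, int w, int h) {
                        gl.glViewport(0, 0, w, h);            // react to resize
                    }
                    public void onDrawFrame(GL10 gl) {
                        gl.glClear(GL10.GL_COLOR_BUFFER_BIT); // draw each frame
                    }
                });
                setContentView(view);
            }
        }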

    Read the article

  • I'm Sick of Web Development - What avenues are open to me ...?

    - by 5arx
    I've been working as a web developer since 1998, mostly with ASP/ASP.NET/C#, but I began as a Java coder (JSP mostly). It's been fun, but now I'm feeling jaded and can see the need for a change. I've considered iOS/Cocoa development and Android development recently (Objective-C looks hard, while I'm a former Java developer), but I'm not sure of:
    - the career opportunities that being proficient in either would afford
    - the incline of the learning curve - I just turned 40 and I know I'm not as sharp or as quick as I once was.
    Does anyone have any experiences/opinions/advice for me? Thanks.

    Read the article

  • What are some high quality Enterprise Architecture conferences or training programs?

    - by Stimy
    I am looking for a conference or training program which will give me broad exposure to enterprise-level software architecture. I've been with the same company for 10 years and we've grown to the size where we really need to lay out a framework for the applications which support our company's business. The organic growth over the last 10 years has left us with a tightly coupled and fairly messy set of applications. We need to do a better job of componentizing our business entities and have more rigorous control over the interfaces between those entities and our business processes. I'm looking to get broad yet practical exposure to design patterns that support such an architecture (SOA, messaging, ESBs, etc.). I'm hoping to gain insight from folks who have direct experience with implementing or working with what would be considered an enterprise-class architecture.

    Read the article

  • Deprecate a web API: Best Practices?

    - by TheLQ
    Eventually you need to deprecate parts of your public web API. However, I'm confused about what would be the best way to do it. If you have a large third-party app base, just yanking old versions of the API seems like the wrong way to do it, as almost all apps would fail overnight. However, you can't keep ancient web APIs available forever, as they might be outdated or there are significant changes that make working with them impossible. What are some best practices for deprecating old web APIs?
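    One widely used pattern (an illustrative sketch, not the only answer): version the URL, keep the old version answering for a published grace period, and announce the retirement in response headers so client authors see it coming. A rough Node/Express sketch; the routes, payloads, and dates are invented, and the Sunset header is the one described in RFC 8594:

        const express = require('express');
        const app = express();

        // Old version still answers, but every response warns about removal.
        app.get('/v1/widgets', (req, res) => {
          res.set('Warning', '299 - "v1 is deprecated; migrate to /v2/widgets"');
          res.set('Sunset', 'Sat, 01 Nov 2014 00:00:00 GMT'); // planned removal date
          res.json([{ id: 1, name: 'legacy shape' }]);
        });

        // New version lives at a new path, so both can coexist during migration.
        app.get('/v2/widgets', (req, res) => {
          res.json([{ id: 1, title: 'new shape' }]);
        });

        app.listen(3000);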

    Read the article

  • Penny auction concept and how the timer works

    - by madi
    I am creating a penny auction site using the PHP Yii framework. The main consideration of the system is to update the database records of all active auctions (max 15 auctions) with the current ticker timer. I am seeking advice on how I should design the system, where every auction item will have its own countdown timer stored in the database, and when someone bids on the auction item, the counter resets to 2 minutes. Every user who is connected to the system should see the same countdown timer for that particular auction. I am a little confused about how I should design this. Will there be a performance issue with frequent updates to the database (MySQL), where 15 active auctions are updated every second and the countdown timer decreases by a second in the database table for each auction? Schema sample for auction_lots: auction_id, startdatetime, counter_timer, status. I am seeking advice on how I should design this. Please help. Thank you!
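    One common design avoids per-second writes entirely: store an absolute end time, derive the remaining seconds on every read, and write only when a bid arrives. A sketch under those assumptions - the ends_at column and helper names are invented, and it is shown in Python for brevity, though the same two SQL statements work from PHP/Yii:

        # One UPDATE per bid, zero writes per tick (MySQL-style SQL).
        # 'db' is assumed to be any DB-API connection; names are illustrative.

        RESET_ON_BID = """
            UPDATE auction_lots
               SET ends_at = NOW() + INTERVAL 2 MINUTE
             WHERE auction_id = %s AND status = 'active'
        """

        REMAINING = """
            SELECT auction_id,
                   GREATEST(0, TIMESTAMPDIFF(SECOND, NOW(), ends_at)) AS seconds_left
              FROM auction_lots
             WHERE status = 'active'
        """

        def place_bid(db, auction_id):
            cur = db.cursor()
            cur.execute(RESET_ON_BID, (auction_id,))
            cur.close()
            db.commit()

        def current_timers(db):
            cur = db.cursor()
            cur.execute(REMAINING)
            rows = cur.fetchall()   # every client computes from the same ends_at
            cur.close()
            return rows

    Because every client derives the countdown from the same stored ends_at value, they all see the same timer without the database ever being touched once per second.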

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based JavaScript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that JavaScript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual JavaScript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of JavaScript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my JavaScript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, need to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!
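    A concrete contrast using Node's fs module may help: nothing in the language itself decides this - the author of each API chooses whether the call does the work inline or hands the I/O to the runtime and takes a callback:

        // Blocking: the call does the I/O itself and returns the result.
        const fs = require('fs');
        const config = fs.readFileSync('config.json', 'utf8'); // JS thread waits here
        console.log('sync result:', config.length);

        // Non-blocking: the call only *starts* the I/O, registers a callback,
        // and returns immediately; the callback runs on a later tick.
        fs.readFile('config.json', 'utf8', (err, data) => {
          if (err) throw err;
          console.log('async result:', data.length);
        });

        console.log('this line runs before the async result'); // proof of ordering

    Plain computation (assignments, loops over arrays) always runs to completion on the single JS thread; only calls documented to take a callback or return a promise get their work queued outside it, which is also what keeps ordinary code from turning into a race condition.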

    Read the article

  • Communicating with a remote host via HTTPS

    - by user619818
    I have developed a solution where a Java applet makes a socket connection to a port on a socket server (which happens to run on a web server). But a new client has implemented HTTPS within their LAN, and so I am told communication must be via HTTPS. With standard socket communication you connect to a port on a host, but the client's HTTPS uses port 443. So will it be possible to connect to a socket server using a different port? I assume it must be possible? Any help would be much appreciated.
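    If the requirement is really just "encrypted, on port 443" rather than actual HTTP semantics, the applet side can switch from a plain Socket to an SSLSocket. A rough sketch (the host name is a placeholder, the server would have to terminate TLS as well, and a proxy that expects real HTTP traffic on 443 may still reject a custom protocol):

        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;
        import java.io.OutputStream;

        public class TlsClient {
            public static void main(String[] args) throws Exception {
                SSLSocketFactory factory =
                    (SSLSocketFactory) SSLSocketFactory.getDefault();
                // Same socket API as before, but the stream is now encrypted.
                SSLSocket socket =
                    (SSLSocket) factory.createSocket("example.com", 443);
                socket.startHandshake();                 // explicit TLS handshake
                OutputStream out = socket.getOutputStream();
                out.write("HELLO\n".getBytes("UTF-8"));  // your own protocol, not HTTP
                out.flush();
                socket.close();
            }
        }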

    Read the article

  • Case convention - why the variation between languages?

    - by Jason
    Coming from a Java background, I'm very used to camelCase. When writing C, using the underscore wasn't a big adjustment, since it was only used sparingly when writing simple Unix apps. In the meantime, I stuck with camelCase as my style, as did most of the class. However, now that I'm teaching myself C# in preparation for my upcoming Usability Design class in the fall, the PascalCase convention of the language is really tripping me up, and I'm having to rely on IntelliSense a great deal in order to make sure the correct API method is being used. To be honest, switching to the PascalCase layout hasn't quite sunk into muscle memory just yet, and that is frustrating from my point of view. Since C# and Java are considered to be brother languages, as both are descended from C++, why the variation in their conventions? Was it a personal decision by the creators based on their comfort level, or was it just to play mind games with newcomers to the language?

    Read the article

  • Balancing full-time work and personal coding projects

    - by pllee
    I am nearing the end of developing the major pieces of my website, which I have been working on in my spare time for the last 3 months. My goal is to get it released by the end of next month and hopefully start making some money from it. Unfortunately, the next step will be to write a lot of specific data handling and UI code that I can see becoming very tedious and boring. When I first started the project I was able to find time for working on it easily; it was interesting, and writing the back-end was new. Once I got to the start of writing the stuff that I know and do at work (UI), it seemed harder and harder to make myself work on the project; sometimes the last thing I want to do when I get home from work is code again. Anyone in the same situation? Any tips on how to find time and effort for side projects without burning out? Any tips on staying on the right track?

    Read the article

  • Functional programming compared to OOP with classes

    - by luckysmack
    I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now. I can see how I would build a fairly complex app in OOP. Each object would know how to do the things that object does, or anything its parent class does as well. So I can simply tell Person().speak() to make the person talk. But how do I do similar things in functional programming? I see how functions are first-class items, but a function only does one specific thing. Would I simply have a say() method floating around and call it with the equivalent of a Person() argument so I know what kind of thing is saying something? So I can see the simple things; but how would I do the equivalent of OOP and objects in functional programming, so that I can modularize and organize my code base? For reference, my primary experience with OOP is Python, PHP, and some C#. The languages that I am looking at that have functional features are Scala and Haskell, though I am leaning towards Scala. Basic example (Python):

        class Animal(object):
            def say(self, what):
                print(what)

        class Dog(Animal):
            def say(self, what):
                super().say('dog barks: {0}'.format(what))

        class Cat(Animal):
            def say(self, what):
                super().say('cat meows: {0}'.format(what))

        dog = Dog()
        cat = Cat()
        dog.say('ruff')
        cat.say('purr')
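    For contrast, a small functional-style sketch of the same example: the data is inert, and say is just a function over it, with no hierarchy to dispatch through:

        from collections import namedtuple

        Animal = namedtuple('Animal', ['kind', 'verb'])

        def say(animal, what):
            # behaviour lives in a free function; the 'object' is just data
            print('{0} {1}s: {2}'.format(animal.kind, animal.verb, what))

        dog = Animal('dog', 'bark')
        cat = Animal('cat', 'meow')
        say(dog, 'ruff')   # dog barks: ruff
        say(cat, 'purr')   # cat meows: purr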

    Read the article
