Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Where should I put bindings for dependency injection?

    - by Mike G
    I'm new to dependency injection, and though I've really liked it so far, I'm not sure where bindings should go. I'm using Guice in Java, so some of what I say might be specific to Guice. As I see it, there are two options. (1) Bindings accompany the class they are needed for; then you just write install(OtherClassModule.class) in whatever other modules want to be able to use said class. As I see it, the advantage of this is that classes that want to use it (or manage classes that want to use it) don't need to know any of the implementation details. The issue I see is: what if two classes want to use two different versions of the same class? There's a lot of customization possible because of DI, and this seems to restrict it a lot. (2) Bindings are implemented in the module of the class that needs them. This is the flip of the above: now you have customization, but not encapsulation. Is there a third option? Am I misunderstanding something obvious? What's the best practice?
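
    A minimal sketch of what option (1) might look like in Guice. OtherClassModule is the question's name; Mailer, SmtpMailer, Greeter, AppModule, and Demo are made up for illustration, and note that AbstractModule.install() takes a module instance rather than a class literal:

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Inject;
        import com.google.inject.Injector;

        // Module shipped alongside the implementation: it binds the interface to one impl,
        // so consumers never see the implementation class directly.
        class OtherClassModule extends AbstractModule {
            @Override
            protected void configure() {
                bind(Mailer.class).to(SmtpMailer.class);
            }
        }

        // A consumer's module just installs the accompanying module.
        class AppModule extends AbstractModule {
            @Override
            protected void configure() {
                install(new OtherClassModule());
            }
        }

        interface Mailer { void send(String msg); }

        class SmtpMailer implements Mailer {
            public void send(String msg) { System.out.println("sending: " + msg); }
        }

        class Greeter {
            private final Mailer mailer;
            @Inject Greeter(Mailer mailer) { this.mailer = mailer; }
            void greet() { mailer.send("hello"); }
        }

        public class Demo {
            public static void main(String[] args) {
                Injector injector = Guice.createInjector(new AppModule());
                injector.getInstance(Greeter.class).greet();
            }
        }

    The tension the question describes shows up here: two consumers that install OtherClassModule both get SmtpMailer, and giving one of them a different Mailer means either a second module or binding annotations.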

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather for general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief explanation: My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient (sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like). If you don't want to read the details, you can skip the rest.

    Long explanation: Specifically, I am writing a program that creates a library of image files and meta-data about those files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged the image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3; instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which automatically sets all images to "dirty" with respect to the new tag). The program must be fast: it must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs, and I have never worked with databases.

    With respect to the meta-data storage, there seem to be a few design choices. Choice 1: individual meta-data vs. centralized meta-data. Individual meta-data: keep a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized meta-data: keep a single file that holds the meta-data for every image. This would probably require meta-data writes at intervals rather than after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
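
    A minimal sketch, in Java, of the tri-state tag flag described above (TagState and ImageRecord are hypothetical names, and this only models the in-memory state, not the storage choice):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Set;

        // Each image/tag pair is either known-tagged, known-untagged, or undecided ("dirty").
        enum TagState { TAGGED, UNTAGGED, DIRTY }

        class ImageRecord {
            final String path;
            final Map<String, TagState> tags = new HashMap<>();

            ImageRecord(String path, Set<String> existingTags) {
                this.path = path;
                // A newly added image starts out dirty with respect to every existing tag.
                for (String tag : existingTags) {
                    tags.put(tag, TagState.DIRTY);
                }
            }

            // Introducing a new tag later marks this image dirty for that tag only.
            void onNewTag(String tag) {
                tags.putIfAbsent(tag, TagState.DIRTY);
            }

            // Viewing the image in the Viewer resolves every dirty tag to a definite value.
            void resolveAfterViewing(Set<String> tagsUserApplied) {
                tags.replaceAll((tag, state) ->
                        state == TagState.DIRTY
                                ? (tagsUserApplied.contains(tag) ? TagState.TAGGED : TagState.UNTAGGED)
                                : state);
            }

            boolean isDirtyFor(String tag) {
                return tags.getOrDefault(tag, TagState.DIRTY) == TagState.DIRTY;
            }
        }

    Whether these per-image states end up in one file per image or in a central store is exactly the Choice 1 trade-off above; an embedded database with an index on the tag state would make queries like "all images dirty for tag T" cheap either way.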

    Read the article

  • Best Practices in Setting up a Build and Deployment environment for the Java Platform

    - by Genadinik
    I have a project for which "quick and dirty" isn't the best solution. What is the most stable and currently accepted set of procedures/tools that I should look into when setting up my build/deploy dev (and later production) environment? What I mean is: Should I use ANT? Or has there been something better that has evolved? In what instances should I use Maven? What are some best practices to create a continuous integration/deployment environment? What are best practices for doing test-driven development? Anything else?

    Read the article

  • Haskell: Best tools to validate textual input?

    - by Ana
    In Haskell, there are a few different options for parsing text. I know of Alex & Happy, Parsec, and Attoparsec, and probably some others. I'd like to put together a library where the user can input pieces of a URL (scheme e.g. HTTP, hostname, username, port, path, query, etc.), and I'd like to validate the pieces according to the ABNF specified in RFC 3986. In other words, I'd like to put together a set of functions such as:

        validateScheme :: String -> Bool
        validateUsername :: String -> Bool
        validatePassword :: String -> Bool
        validateAuthority :: String -> Bool
        validatePath :: String -> Bool
        validateQuery :: String -> Bool

    What is the most appropriate tool to use to write these functions? Alex's regexp syntax is very concise, but it's a tokenizer and doesn't straightforwardly let you parse using specific rules, so it's not quite what I'm looking for, though perhaps it can be wrangled into doing this easily. I've written Parsec code that does some of the above, but it looks very different from the original ABNF and is unnecessarily long. So there must be an easier and/or more appropriate way. Recommendations?

    Read the article

  • Moving from Windows to Linux

    - by rincewind
    I need to reconcile two facts: I don't feel comfortable working on Linux, and I need to develop software for Linux. Some background: I have 10+ years of programming experience on Windows (almost exclusively C/C++, but some .NET as well), I was a user of FreeBSD at home for about 3 years or so (then had to go back to Windows), and I've never had much luck with Linux. And now I have to develop software for Linux. I need a plan. On Windows, you can get away with just knowing a programming language, the API you're coding against, your IDE (Visual Studio), and some very basic tools for troubleshooting (Depends, Process Explorer, DebugView, WinDbg); everything else comes naturally. On Linux, it's a very different story. How the hell would I know what DLL (sorry, shared object) would load if I link to it from a Firefox plugin? What's the Linux equivalent of inserting __asm int 3 / DebugBreak() in the source, running the program, and letting the OS call a debugger? Why the hell do release builds use something called appLoader, while debug builds work somehow differently? Worst of all: how do I provision a Linux development environment? So, taking into account that hatred is usually associated with not knowing enough, what would you recommend? I'm OK with Emacs and GCC. I need to educate myself as a Linux admin/user, and I need to learn proper troubleshooting tools (strace is cool, by the way) equivalent to the ones I mentioned above. Do I need to do Linux From Scratch? Or do I just need to read some books (I've read "The UNIX Programming Environment" by Kernighan and "Advanced Programming..." by Stevens, but I need to learn something more practical)? Or do I need to run some Linux distro on my home computer?

    Read the article

  • Does the LGPL allow me to do this?

    - by user1229892
    I am planning to develop commercial software using software licensed under the LGPL. In the LGPL software that I am using, some functions in a class are not fully implemented. I want to modify the LGPL code so that the class and the not-implemented functions are made visible outside the DLL, by adding dllexport in front of the class and the virtual keyword in front of the functions. Then I plan to implement those functions in my proprietary software. I am ready to distribute the modified LGPL code, but not the proprietary software that implements the functions in the way I want. Does that violate the LGPL's terms and conditions?

    Read the article

  • How can I thoroughly evaluate a prospective employer?

    - by glenviewjeff
    We hear much about code smells, test smells, and even project smells, but I have heard no discussion about employer "smells" outside of the Joel Test. After much frustration working for employers with a bouquet of unpleasant corporate-culture odors, I believe it's time for me to actively seek a more mature development environment. I've started assembling a list of questions to help vet employers by identifying issues during a job interview, and am looking for additional ideas. I suppose this list could easily be modified by an employer to vet an employee as well, but please answer from the interviewee's perspective. I think it would be important to ask many of these questions of multiple people to find out whether consistent answers are given. For the most part, I tried to put the questions in each section in the order they could be asked; an undesired answer to an early question will often make the follow-ups moot.

    Values: What constitutes "well-written" software? What attributes does a good developer have? Same question for a manager.

    Process: Do you have a development process? How rigorously do you follow it? How do you decide how much process to apply to each project? Describe a typical project lifecycle. Ask the following if they don't come up otherwise: Waterfall/iterative: how much time is spent on upfront requirements gathering? On upfront design?

    Testing: Who develops tests (developers or separate test engineers)? When are they developed? When are the tests executed? How long do they take to execute? What makes a good test? How do you know you've tested enough? What percentage of code is tested?

    Review: What is the review process like? What percentage of code is reviewed? Of design? How frequently can I expect to participate as code/design reviewer/reviewee? What criteria are applied to reviews, and where do the criteria come from?

    Improvement: What new tools and techniques have you evaluated or deployed in the past year? What training courses have your employees been given in the past year? What will I be doing for the first six months in your company (hinting at what kind of organized mentorship/training has been thought through, if any)? What changes to your development process have been made in the past year? How do you improve and learn from your mistakes as an organization? What was your organization's biggest mistake in the past year, and how was it addressed? What feedback have you given to management lately? Was it implemented? If not, why not? How does your company use "best practices"? How do you seek them out, from outside or within, and how do you share them with each other?

    Ethics: Tell me about an ethical problem you or your employees experienced recently, and how it was resolved. Do you use open-source software? What open-source contributions have you made?

    Follow-ups: I liked what @jim-leonardo said on this Stack Overflow question: Really a thing to ask yourself: "Does this person seem like they are trying to recruit me and make me interested?" I think this is one of the most important bits. If they seem to be taking the attitude that the only one being interviewed is you, then they probably will treat you poorly. Good interviewers understand they have to sell the position as much as the candidate needs to sell themselves. @SethP added: Glassdoor.com is a good web site for researching potential employers. It contains information about how specific companies conduct interviews...

    Read the article

  • Is it OK to learn an algorithm from an open source project, and then implement it in a closed source project?

    - by Chris Barry
    Reference: The post that started it all. In order to clear up the original question, which I asked in a provocative manner, I have posed this one. If you learn an algorithm from an open source project, is it OK to use that algorithm in a separate closed-source project? And if not, does that imply that you cannot use that knowledge ever again? If you can use it, under what circumstances? Just to clarify, I am not trying to evade a licence, otherwise I would not have asked the question in the first place. I believe this presents a difficult question, and it is interesting to know where the debate can end up.

    Read the article

  • What type of career path / jobs for a developer to have best work life balance?

    - by programmx10
    I know some people may look down on a question like this, but lately I've been thinking a lot about what direction I can take my career in to have a good work-life balance. I have been working for a startup where the hours tend to drag on, and I find it often drains the life out of me. I have been going to interviews, and some of the other companies are also startups or new companies that seem to have a similar attitude about working long hours. Maybe it's the technologies I use or the type of development, I don't know, but I'm curious whether anyone can offer advice on a path that lets me be a programmer/developer while working for a company that respects a regular work week and only rarely needs to go past it. I realize this won't lead to being the highest paid in my field, but I'm okay with that and feel the tradeoff would be worth it, as it would also give me time for my own projects. I know some people may say this is too general, but I believe it is a programmer-specific question: there tends to be a higher-than-average rate of overtime and of "startup" venture situations than in many other fields, and there is definitely a mindset among a lot of people in this field of working long hours that doesn't exist in every industry.

    Read the article

  • Is the C programming language still used?

    - by Pankaj Upadhyay
    I am a C# programmer, and most of my development is for websites, along with a few Windows applications. As far as C goes, I haven't used it in a long time, as there was no need to. It came as a surprise when one of my friends said that she needs to learn C for testing jobs, while I was helping her learn C#. I figured that someone would learn C for testing only if there is development being done in C. To my knowledge, all the development related to COM and hardware design is done in C++. Therefore, learning C doesn't make sense if you need to use C++, and I don't believe in learning it just for its historic significance, so why waste time and money on learning C? Is C still used in any kind of new software development, or anything else?

    Read the article

  • Persisting NLP parsed data

    - by tjb1982
    I've recently started experimenting with NLP using Stanford's CoreNLP, and I'm wondering what some of the standard ways are to store NLP-parsed data for something like a text-mining application. One way I thought might be interesting is to store the children as an adjacency list and make good use of recursive queries (Postgres supports this, and I've found it works really well). Something like this:

        Component ( id, POS, parent_id )
        Word ( id, raw, lemma, POS, NER )
        CW_Map ( component_id, word_id, position int )

    But I assume there are probably many standard ways to do this, depending on what kind of analysis is being done, that have been adopted by people working in the field over the years. So what are the standard persistence strategies for NLP-parsed data, and how are they used?
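
    For what it's worth, a minimal sketch of the kind of recursive query alluded to above, written in Java with JDBC against PostgreSQL and assuming the adjacency-list schema sketched in the question (table and column names come from that sketch; the connection string and root id are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class SubtreeQuery {
            // Fetch a parse-tree component and all of its descendants with WITH RECURSIVE,
            // then join through CW_Map to pull the words attached to each component.
            static void printSubtree(Connection conn, long rootId) throws Exception {
                String sql =
                    "WITH RECURSIVE subtree AS ( " +
                    "  SELECT id, pos, parent_id FROM component WHERE id = ? " +
                    "  UNION ALL " +
                    "  SELECT c.id, c.pos, c.parent_id " +
                    "  FROM component c JOIN subtree s ON c.parent_id = s.id " +
                    ") " +
                    "SELECT s.id, s.pos, w.raw " +
                    "FROM subtree s " +
                    "LEFT JOIN cw_map m ON m.component_id = s.id " +
                    "LEFT JOIN word w ON w.id = m.word_id " +
                    "ORDER BY m.position";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, rootId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.printf("%d %s %s%n",
                                rs.getLong("id"), rs.getString("pos"), rs.getString("raw"));
                        }
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/nlp", "user", "password")) {
                    printSubtree(conn, 1L);  // hypothetical root component id
                }
            }
        }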

    Read the article

  • Does software rot refer primarily to performance, or to messy code?

    - by Kazark
    Wikipedia's definition of software rot focuses on the performance of the software. This is a different usage than I am used to; I had thought of it much more in terms of the cleanliness and design of the code—in terms of the code having all the standard quality characteristics: readability, maintainability, etc. Now, performance is likely to go down when the code becomes unreadable, because no one knows what is going on. But does the term software rot refer specifically to performance? Or am I right in thinking it refers to the cleanliness of the code? Or is this perhaps a case of multiple senses of the term being in common usage—from the user's perspective it has to do with performance, but for the software craftsman it has to do more specifically with how the code reads?

    Read the article

  • Do immutable objects and DDD go together?

    - by SnOrfus
    Consider a system that uses DDD (or, for that matter, any system that uses an ORM). The point of such a system, realistically and in nearly every use case, is to manipulate those domain objects; otherwise there's no real effect or purpose. Modifying an immutable object means generating a new object, which, once persisted, produces a new record and creates massive bloat in the data source (unless you delete the previous records after modifications). I can see the benefit of immutable objects in general, but in this context I can't ever see a useful case for them. Is this wrong?
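
    For reference, a minimal Java sketch of the copy-on-change style the question is describing (the Money type is hypothetical, not from the question):

        // An immutable value object: "modification" returns a new instance
        // instead of changing state in place.
        public final class Money {
            private final long cents;
            private final String currency;

            public Money(long cents, String currency) {
                this.cents = cents;
                this.currency = currency;
            }

            public Money add(Money other) {
                if (!currency.equals(other.currency)) {
                    throw new IllegalArgumentException("currency mismatch");
                }
                return new Money(cents + other.cents, currency);  // old instance stays untouched
            }

            public long cents() { return cents; }
            public String currency() { return currency; }
        }

    In DDD this style is usually discussed for value objects like the one above, while entities keep their identity and are updated in place, which is part of what the question is probing.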

    Read the article

  • Software Management Tools for Agile Process Development

    - by Graviton
    We would like to implement the Agile/Scrum process in our daily software management, so as to provide better progress visibility and feature management. Here are some of the activities that we want to do:

        - Daily stand-up.
        - Release cycles of 6 weeks with three 2-week iterations.
        - A product backlog of tasks (integrated with Bugzilla) and bugs, estimated out.
        - Printing a daily burndown to make velocity visible; when used as a motivator, it's great.
        - Easy feature-development tracking and full visibility, especially for sales and stakeholders (this means it must be a web-based tool).

    My team is distributed, so physical whiteboards aren't feasible. Is there a web-based tool that meets our needs? I've heard icescrum may be one, but I've never used it, so I don't know. There are a few more suggestions here, but I've never heard of them; would anyone care to elaborate or suggest new tools?

    Read the article

  • What to watch out for when writing code at an interview?

    - by Philip
    Hi, I have read that at a lot of companies you have to write code at an interview. On the one hand, I see that it makes sense to ask for a work sample. On the other hand: what kind of code do you expect to be written in 5 minutes? And what if they tell me "Write an algorithm that does this and that" but I cannot think of a smart solution, or the code I write doesn't even work semantically? I am particularly interested in this question because I do not have that much commercial programming experience: 2 years part-time, one year full-time. (But I have been interested in programming languages for nearly 15 years, though usually I was more focused on playing with the language than on writing large applications...) Actually, I consider my debugging and problem-solving skills much better than my coding skills. Looking back, I sometimes see myself not writing the most beautiful code, but on the other hand I often come up with solutions for hard problems. I think I am very good at optimizing, fixing, and restructuring existing code, but I have problems writing new applications from scratch; the software design sucks... ;-) Therefore I don't feel comfortable when thinking about this code-writing situation at an interview. So what do the interviewers expect? What kind of information about my code writing are they interested in? Philip

    Read the article

  • Why doesn't the Visual Studio C compiler like this? [migrated]

    - by justin
    The following code compiles fine on Linux using gcc -std=c99, but gets the following errors from the Visual Studio 2010 C compiler:

        Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.40219.01 for 80x86
        Copyright (C) Microsoft Corporation. All rights reserved.

        fib.c
        fib.c(42) : error C2057: expected constant expression
        fib.c(42) : error C2466: cannot allocate an array of constant size 0
        fib.c(42) : error C2133: 'num' : unknown size

    The user inputs the number of Fibonacci numbers to generate. I'm curious as to why the Microsoft compiler doesn't like this code. http://pastebin.com/z0uEa2zw

    Read the article

  • When are Getters and Setters Justified?

    - by Winston Ewert
    Getters and setters are often criticized as being not proper OO. On the other hand, most OO code I've seen has extensive getters and setters. When are getters and setters justified? Do you try to avoid using them? Are they overused in general? If your favorite language has properties (mine does), then such things are also considered getters and setters for this question; they are the same thing from an OO methodology perspective, just with nicer syntax.

    Sources for getter/setter criticism (some taken from comments to give them better visibility):

        http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html
        http://typicalprogrammer.com/?p=23
        http://c2.com/cgi/wiki?AccessorsAreEvil
        http://www.darronschall.com/weblog/2005/03/no-brain-getter-and-setters.cfm
        http://www.adam-bien.com/roller/abien/entry/encapsulation_violation_with_getters_and

    To state the criticism simply: getters and setters allow you to manipulate the internal state of objects from outside the object. This violates encapsulation; only the object itself should care about its internal state.

    An example. Procedural version of the code:

        struct Fridge {
            int cheese;
        }

        void go_shopping(Fridge fridge) {
            fridge.cheese += 5;
        }

    Mutator version of the code:

        class Fridge {
            int cheese;

            void set_cheese(int _cheese) { cheese = _cheese; }
            int get_cheese() { return cheese; }
        }

        void go_shopping(Fridge fridge) {
            fridge.set_cheese(fridge.get_cheese() + 5);
        }

    The getters and setters made the code much more complicated without affording proper encapsulation. Because the internal state is accessible to other objects, we don't gain a whole lot by adding these getters and setters.

    The question has been previously discussed on Stack Overflow:

        http://stackoverflow.com/questions/565095/java-are-getters-and-setters-evil
        http://stackoverflow.com/questions/996179

    Read the article

  • Why do Beta versions have so many bugs?

    - by Eugene
    When I worked at Microsoft on the MSE antivirus, we had several pre-release stages: Alpha, Beta, RC. The biggest difference between these stages was the bug fixes: each time we fixed more bugs and usually updated some minor features. Apple works the same way. Look here: http://www.techrepublic.com/article/pro-tip-how-to-create-a-bootable-usb-installer-for-os-x-yosemite/ They released a beta of the Yosemite OS, but the beta is full of bugs. Why do these big companies work like this? Isn't it better to release a beta version that has been tested very thoroughly, but that just doesn't provide all the features the final version will provide?

    Read the article

  • Complex access control system

    - by Atul Gupta
    I want to build a complex access control system, just like Facebook's. How should I design my database so that:

        - A user may select which streams to see (via liking a page) and may further select to see all streams or only the important ones.
        - A user also gets to see a friend's posts, but if the friend changes a post's visibility he may or may not see it.
        - A user may be an owner or a member of a group and has access accordingly.

    So for each user there is a lot of access-related information, and likewise for each data point. I use Perl/MySQL/Apache. Any help will be appreciated.

    Read the article

  • What is meant by Scope of a variable?

    - by Appy
    I think of the scope of a variable as: "The scope of a particular variable is the range within a program's source code in which that variable is recognized by the compiler." That statement is from "Scope and Lifetime of Variables in C++", which I read many months ago. Recently I came across this in LeMoyne-Owen College course material: what exactly is the difference between the scope of variables in C# and in C99, C++, and Java, given that in all of them a variable must still be declared before it can be used?
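
    As a point of reference, a small Java example of the usual block-scope rule (this sketch is illustrative, not taken from the question or the quoted course material):

        public class ScopeDemo {
            public static void main(String[] args) {
                {
                    int x = 1;               // x is in scope from here to the end of this inner block
                    System.out.println(x);
                }
                // System.out.println(x);    // would not compile: x is out of scope here

                int x = 2;                   // legal in Java: the earlier x no longer exists
                System.out.println(x);
            }
        }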

    Read the article

  • Agile as our first project management methodology [closed]

    - by Hasan Khan
    We are a small web development company that has until now been working on client projects. We employed little to no project management, and that has cost us a lot. We've used only the barest of tools (wireframing, prototyping, etc.), but no formal project management process has been put into place. We've learnt from our mistakes and want to prevent them from happening in the future. Also, we are looking to develop our own products, and we understand that putting a proper project management paradigm in place will help. After a lot of research, we've sort of settled on Agile for a few reasons: Agile seems to scale well with team size; our team is small right now, we hope to grow, and Agile seems to be a process we can put in place now and grow with. Agile will also help us with customers who just can't seem to make up their minds and keep changing requirements. We'd appreciate the community's thoughts on this. Is this a correct way to think? Will Agile be a good system to put into place where there has been none till now? Are there any resources that may help us in our position? Pretty much all of the resources we've found start by comparing Agile to X (where X = any management methodology), explaining why it's better than X and how Agile can be implemented in place of X. We're looking for resources that can help us in our particular situation. Thanks for all your help!

    Read the article

  • Code Review process

    - by Rubio
    I'm looking for a lightweight code review process. A couple of requirements: the reviewer must be able to do the review alone at a time of his/her choosing (not tied to check-ins), the reviewer must be able to easily find the target code, and the review has to leave some document showing what was reviewed. I know there are tools available for code review, but I work in a very rigid environment and introducing new tools is not an option. One idea I've been thinking about is to create a new Visual Studio Task List token called REVIEW and use it to mark the code that needs reviewing. Something like:

        // REVIEW doe_john: New method, not sure about the exception.

    Then we would add a Review work item in TFS (we're using the CMM template). Another possibility, which I would actually prefer, would be to have developers create a TFS Review work item and add links to the code, but I don't know if this is possible. Obviously you can add a link to a file, but I'd like to have a link to a particular method.

    Read the article

  • Forking a dual licensed app: How to license on my end?

    - by TheLQ
    I forked a project that was dual licensed under the GPL and a commercial license. Since my code was open source and the GPL being what it is, I started by releasing my app under the GPL. But now I'm thinking about dual licensing the project and can't figure out what to do. Since I have copyright on a majority of the code (most of the code was either rewritten or new), can I just pick a commercial license or do I have to buy the upstream commercial license since I'm technically a "derivative" of the project?

    Read the article

  • Is it possible to create and distribute an app for the BlackBerry Playbook that doesn't go into App World?

    - by Drackir
    My company is looking to create an app that we'll use internally on several (about 20) BlackBerry Playbooks. We don't want it to be put up on App World because it's just an internal application. I'm wondering if there are any:

        - Costs involved with this outside of paying a programmer to develop it, i.e. are there any license fees, deployment fees, etc.?
        - License issues involved with deploying the app to multiple Playbooks without deploying it to App World.
        - Limitations on the functionality of the app.
        - Other things we should be taking into consideration.

    If it matters, the app will be collecting information and downloading it to a computer via USB.

    Read the article

  • How should I take being told that I was wrong?

    - by Chris
    On a fairly important project with short timelines, I decided to use SubSonic for straightforward data access. I wired up a handful of forms, created matching database tables and POCOs for each, and used SubSonic's simple repository mode for the data access. Everything worked well; I was able to bang these forms out pretty quickly and I moved on to other things. Since that time I have heard that using SubSonic was a 'cowboy move', that it was implemented 'incorrectly', and that 'the person who used it didn't even know how to use SubSonic'. What I would like to know is: how should I take this? There were and still are no standards for data access at this company, so there is no violation of a standard. The forms worked exactly as requested and saved the data to the database correctly. And spending only a few days on the forms instead of weeks saved a lot of time, which was used for other functionality in the project. So in light of all of this, I am confused as to what was 'incorrect'. Am I missing something here? Thanks for your answers.

    Read the article
