Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • What is a user-friendly solution to editing email templates with replacement variables?

    - by Daniel Magliola
    I'm working on a system where we rely a lot on "admins / managers" emailing users from the database. One of the key features is being able to email several people at the same time, with specific information relevant to each of them. Another key feature is being able to hand-craft emails, because it tends to be necessary to slightly modify them each time, but having a basic template saves a lot of time. For this, we have the typical "templates" solution, where we have a template that looks something like this:

        Hello {{recipient.full_name}},
        Your application to {{activity.title}} has been accepted.
        You have requested to participate on dates {{application.dates}}, in role {{application.role}}
        Blah blah blah

    The problem we are having is, obviously, that (as we expected) managers don't get the whole "variables" idea: they do things like overwriting the variables (which prevents emailing more than one person at a time), assuming the placeholders are not going to get replaced and that the system is broken, or even inexplicable things like "Hello {{John}}". The big problem is that this isn't relegated, as usual, to an "admin" section where only a few power users have access to editing the templates that are automatically sent out, and where they're expected to know what they are doing. Every user of the system gets exposed to this problem. The obvious solution would be to replace the variables before showing the template for the user to edit, but that doesn't work when emailing several people. This seems like a reasonably common problem, and we are rather hoping that someone has already solved it. Have you seen anywhere/created/can think of good solutions to this problem?
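    For context, the mechanics being described are plain per-recipient string substitution; a minimal sketch in C# (the {{...}} pattern follows the post, while the method and parameter names are illustrative):

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class Templates {
            // Replaces every {{key}} in the template with the recipient-specific value.
            // Unknown keys are left in place, which is exactly what confuses the managers.
            public static string RenderTemplate(string template, IDictionary<string, string> values) {
                return Regex.Replace(template, @"\{\{(.+?)\}\}", match => {
                    string value;
                    return values.TryGetValue(match.Groups[1].Value, out value) ? value : match.Value;
                });
            }
        }

    The UX problem is then that this rendering step happens after editing rather than during it.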


  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major/mainstream programming language has introduced them or plans to introduce them in an upcoming revision of its standard. Yet, anonymous functions are a very old and very well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, introducing them only later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon?

    IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because that is not the topic of the question.
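    To make the construct under discussion concrete, here is the same operation written with a named method and then with an anonymous function, in C# (one of the mainstream languages in question; lambda syntax arrived with C# 3.0):

        using System;
        using System.Linq;

        class LambdaDemo {
            // Traditional named function, available since the language's first version.
            static int Square(int x) { return x * x; }

            static void Main() {
                var numbers = new[] { 1, 2, 3, 4 };

                var a = numbers.Select(Square);      // passing the named method
                var b = numbers.Select(x => x * x);  // passing an anonymous (lambda) function inline

                Console.WriteLine(string.Join(", ", b)); // 1, 4, 9, 16
            }
        }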


  • What effects do various drugs have on coding style / productivity? [closed]

    - by codecraft
    Can anyone tell me what the effects of various drugs are on coding style, and whether coding on drugs can be more productive, or more fun? Are some types of drugs better suited to certain tasks and phases of software development? And which programming languages are best suited to coding on drugs? It would be great if you could back up your answers with data, perhaps even code snippets showcasing the effect of the drug experience.


  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to clearly explain the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified):

        There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services.

    The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how does one determine whether something is new knowledge or existing knowledge which is merely rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How does one qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or to R&D to:

        - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which need to access the database?
        - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
        - Design a new communication protocol to allow faster replication of data between two data centers of the company?
        - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
        - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
        - Enhance an existing application by adding gestures on touch screens, after studies and testing show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
        - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
        - Create a domain-specific language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?


  • Relative encapsulation design

    - by taher1992
    Let's say I am writing a 2D application with the following design: there is a Level object that manages the world, and there are world objects which are entities inside the Level object. A world object has a location and velocity, as well as a size and a texture. However, a world object only exposes get properties; the set properties are private (or protected) and are only available to inherited classes. But of course, Level is responsible for these world objects, and must somehow be able to manipulate at least some of their private setters. As of now, Level has no access, meaning the world objects would have to change their private setters to public (violating encapsulation). How do I tackle this problem? Should I just make everything public? Currently what I'm doing is having an inner class inside the game object that does the set work. So when Level needs to update an object's location, it goes something like this:

        void ChangeObject(GameObject targetObject, int newX, int newY) {
            // targetObject.SetX and targetObject.SetY cannot be called directly
            var setter = new GameObject.Setter(targetObject);
            setter.SetX(newX);
            setter.SetY(newY);
        }

    This code feels like overkill, but it doesn't feel right to have everything public, so that anything can change an object's location, for example.
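    One conventional middle ground in C# is assembly-level visibility: mark the setters internal, so that Level can mutate world objects while code outside the assembly sees read-only properties. A sketch, assuming Level and GameObject are compiled into the same engine assembly:

        public class GameObject {
            // Readable by everyone; writable only within the engine's own assembly.
            public int X { get; internal set; }
            public int Y { get; internal set; }
        }

        public class Level {
            public void MoveObject(GameObject obj, int newX, int newY) {
                obj.X = newX; // legal here because Level shares the assembly
                obj.Y = newY;
            }
        }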


  • How does one handle sensitive data when using Github and Heroku?

    - by Jonas
    I am not yet accustomed to the way Git works (and wonder if anyone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you are likely to share this repo on GitHub or another Git host. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc. But these things still need to be part of the Git repo, since you use it to push your code to Heroku. How does one work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. My question is more about how to "hide" parts of the code.
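    The pattern the note alludes to is keeping secrets out of tracked files entirely and reading them from the environment, which Heroku exposes as config vars. A minimal sketch in C# (the variable name DATABASE_PASSWORD is illustrative):

        using System;

        static class Config {
            // Secrets live in the process environment, never in tracked files.
            // On Heroku they are set with: heroku config:set DATABASE_PASSWORD=...
            public static string DatabasePassword() {
                var value = Environment.GetEnvironmentVariable("DATABASE_PASSWORD");
                if (value == null)
                    throw new InvalidOperationException("DATABASE_PASSWORD is not set");
                return value;
            }
        }

    Anything that must stay file-based (certificates, for example) can live in an untracked file listed in .gitignore, with a committed *.example placeholder documenting the expected format.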


  • What is the meaning of 'high cohesion'?

    - by Max
    I am a student who recently joined a software development company as an intern. Back at the university, one of my professors used to say that we have to strive to achieve "low coupling and high cohesion". I understand the meaning of low coupling: it means keeping the code of separate components separate, so that a change in one place does not break the code in another. But what is meant by high cohesion? If it means integrating the various pieces of the same component well with each other, I don't understand how that becomes advantageous. What is meant by high cohesion? Could an example be given to illustrate its benefits?
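    As a rough illustration (a contrived sketch, not from the original post): a class is cohesive when all of its members serve a single responsibility, and incohesive when it is a grab-bag of unrelated operations.

        using System;

        // Low cohesion: three unrelated responsibilities lumped into one class,
        // so a change made for any one reason ripples through unrelated code.
        class Utility {
            public void SaveInvoice(Invoice invoice) { /* ... */ }
            public string FormatDate(DateTime date) { return date.ToShortDateString(); }
            public void SendEmail(string to, string body) { /* ... */ }
        }

        // High cohesion: every member works toward the same purpose, so a change
        // to how invoices are stored touches only this class.
        class InvoiceRepository {
            public void Save(Invoice invoice) { /* ... */ }
            public Invoice FindById(int id) { /* ... */ return null; }
            public void Delete(int id) { /* ... */ }
        }

        class Invoice { /* fields omitted */ }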


  • Payments for Android through Checkout/AdSense

    - by David Cesarino
    To those that don't know: Android developers in some countries recently transitioned from AdSense to Checkout for Play Store payments. This is what existing seller accounts are told:

        Q: What happens if I have funds in my AdSense account but am not eligible for a payout yet?
        A: AdSense accounts have minimum thresholds for payouts. If you're not eligible for a payout through AdSense for [month of migration], the funds will be automatically transferred back to your Google Checkout account. Once you enter bank account information through your Checkout account and have accrued at least $100 USD, your first wire transfer will be issued during the next monthly payout cycle.

    However, AdSense is still holding my funds, and since Checkout already paid me directly, following the new directives, I'm afraid the funds will be held in AdSense forever (I used AdSense only for Play Store payments, as required). Obviously, this is no replacement for Google support (it's a crusade to reach them, but never mind...); I'm just asking whether someone experienced this problem during the transition and how it was fixed.


  • Popular programming books which have been translated into Russian

    - by arikfr
    I'm looking for recommendations of popular programming books that have been translated into Russian. I'm talking about books like:

        - Test-Driven Development: By Example by Kent Beck
        - Code Complete
        - The Pragmatic Programmer

    And other books like them. Also, recommendations for books in Russian by other authors but about similar topics (TDD, BDD, general programming methodologies) will be appreciated.


  • Best industry to work for as a developer.

    - by The Elite Gentleman
    Hi guys, Hmmm, StackOverflow now warns me: "The question you're asking appears subjective and is likely to be closed." I've been an avid Java (J2SE, JEE) developer for over 5 years now (and I'm not complaining, even though I want to go back to Delphi & C++). My contract has just ended and I'm wondering which possible jobs to tackle next. I've worked in the banking and insurance industries for my whole career (with 1 year at a Fortune 500 company), and frankly banking is the slowest (and most boring) industry to work in (IMHO), since banks are strict in their business practices (fair enough). The upside is that they pay. My question is: what is the best industry to work in for a developer who tends to get bored relatively quickly? Is it also worthwhile for me to do consultancy (and if so, what type of consultancy)? PS: There's no gaming industry in South Africa, so suggesting it requires that I travel to a country where gaming is alive! I don't see the Community Wiki checkbox, so I don't know how to make this a wiki.


  • Motivating developers in a project perceived as "dull"?

    - by Fanatic23
    As a manager, I can't always end up generating work that's cutting edge. Some of the projects do run in maintenance mode, and generate a healthy free cash flow for the company. As a developer, what would it take for you to stick around in such a project? I have been thinking of re-branding the work, but I could do with a lot of help here. I'd appreciate a single response per post. Please don't suggest an increased pay packet; this creates more problems than it solves.


  • How to avoid getting carried away with details?

    - by gablin
    When I program, I often get too involved with details. There might be some little thing that doesn't do exactly what I want it to, or maybe there's some little feature I want to add. Either way, none of these are really essential to the application - it's just a minor nuisance (if that). However, when trying to fix it, I may happen to spend way more time on it than I planned, and there are things much more important that I should be doing instead of dealing with this little detail. What can I do to avoid getting carried away with details when there are more essential things that need doing? (I didn't know how to tag this question, so feel free to add whatever appropriate tags are missing.)


  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024: a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilobytes, megabytes, or gigabytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would seem that using the binary definition of "gigabyte" artificially inflates the byte count required of a device, making drive-makers pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
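    For reference, the "931 GB" figure mentioned above is just the unit conversion; a quick sketch:

        using System;

        class DiskUnits {
            static void Main() {
                double bytes = 1e12;                  // "1 TB" as drive vendors define it (10^12 bytes)
                double gib = bytes / Math.Pow(2, 30); // the same capacity in binary gigabytes (GiB)
                Console.WriteLine("{0:F0} GiB", gib); // prints 931 GiB
            }
        }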


  • How does session middleware generally verify browser sessions?

    - by BBnyc
    I've been using session middleware to build web apps for years: from PHP's built-in session handling layer to node's connect session middleware. However, I've never tried (or needed) to roll my own session handling layer. How would one go about it? What sort of checks are necessary to provide at least some modicum of security against HTTP session hijacking? I figure one would set a cookie with a token to keep track of the session, and then perhaps add some check that the originating IP address of the session doesn't change and that the client browser software remains consistent. Hoping to hear about current best practices...
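    The core of most session layers is exactly that cookie token: a long random value that is meaningless on its own and merely keys a server-side record. A minimal sketch in C# (class and member names are illustrative; a real store would also need persistence and locking):

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;

        class SessionStore {
            private readonly Dictionary<string, DateTime> sessions = new Dictionary<string, DateTime>();

            // Issues a new session token to be sent to the browser in a cookie.
            public string CreateSession() {
                var bytes = new byte[32]; // 256 bits of entropy, infeasible to guess
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(bytes);
                string token = Convert.ToBase64String(bytes);
                sessions[token] = DateTime.UtcNow.AddHours(1); // expiry time
                return token;
            }

            // Called on every request with the token read back from the cookie.
            public bool Verify(string token) {
                DateTime expiry;
                return token != null
                    && sessions.TryGetValue(token, out expiry)
                    && expiry > DateTime.UtcNow;
            }
        }

    Binding the session to the client IP or User-Agent, as the post suggests, adds friction for attackers but also breaks legitimate users behind rotating proxies, so it is usually offered as an optional strictness setting.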


  • .mdf Database Filetype

    - by James Izzard
    Would somebody be kind enough to correct my understanding of the following (if incorrect)?

        - Microsoft's .mdf file type can be used by both the LocalDB and the full SQL Server database engines (apologies if "engine" is not the correct word).
        - The .mdf file does not care which of these two options is accessing it, so you could use either to access any given .mdf file, provided you had permissions and passwords etc.
        - LocalDB and SQL Server are two options that can be chosen interchangeably to access .mdf files, depending on the application requirements.

    I'd appreciate any clarification. Thanks
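    As a point of reference, the two engines are addressed through different connection strings while pointing at the same file (though only one engine can have the file attached at a time). A sketch; the paths and instance names are illustrative:

        using System.Data.SqlClient;

        class MdfDemo {
            static void Main() {
                // Attaching the .mdf through LocalDB...
                var localDb = new SqlConnection(
                    @"Server=(localdb)\MSSQLLocalDB;" +
                    @"AttachDbFilename=C:\Data\MyDb.mdf;Integrated Security=true");

                // ...or through a full SQL Server instance (not at the same time).
                var fullServer = new SqlConnection(
                    @"Server=.\SQLEXPRESS;" +
                    @"AttachDbFilename=C:\Data\MyDb.mdf;Integrated Security=true");

                localDb.Open();
                // ... run queries ...
                localDb.Close();
            }
        }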


  • Which is more important in a web application code promotion hierarchy: production environment to repo equivalence, or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment need to go into the repository. Which of the following two practices/approaches do you find better?

        1. Committing these non-developed file modifications made in the production environment, so that the repository reflects the production environment as closely and as often as possible.
        2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing changes through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they are absolutely necessary in other environments for development.

    So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production environment to repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.


  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. Here is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start creating 10 new features for the next iteration. During this time the test team finds defects and raises some bugs. The bugs are prioritised and some of them have to be fixed before the next iteration. The catch is that the testers will not accept a new release containing any new features or changes to existing features until all those bugs are fixed. The test team's argument is: how can we guarantee a stable release for testing if we also introduce new features along with the bug fixes? They also cannot regression-test all their test cases each iteration. Apparently this is proper testing process according to ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development, which brings merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your projects?


  • How to coach a developer with dyslexia to improve his spelling and grammar capabilities?

    - by Uwe Keim
    Having just read this question regarding developers with dyslexia, I still have some open questions on how to deal with it: I have been working for approximately 6 months on a project with a new developer who just finished university. I see that his code quality is high; what he's missing is the ability to write texts (even short ones) in an error-free manner (both syntax and grammar errors). He is working on some UI stuff (VS.NET 2010, ASP.NET 4) and, besides coding, has to write short texts for labels, buttons, grid view headers, page titles, etc. Since even those texts contain errors, and since no matter how much I try to discuss the need for a professional, text-error-free UI he does not seem to manage to get this right (although he really tries), my questions are:

        - Are there any hints on how he (or I) should proceed to enhance the text quality?
        - Do you know any tools (like inline spell checkers) for VS.NET to highlight syntax and grammar errors?

    (We are working on a German-only UI, if this is important to know.)


  • How are you using the Managed Extensibility Framework?

    - by dboarman
    I have been working with MEF for about 2 weeks. I started thinking about what MEF is for, researching to find out how to use it, and finally implementing a host with 3 modules. The contracts are proving easy to grasp and the modules are easily managed. Although MEF has a very practical use, I am wondering to what extent it will actually be adopted. I mean, is everyone going to be rewriting existing applications for extensibility? Yes, that sounds - and is - insanely impractical. Rhetorically speaking:

        - How is MEF affecting the current trends in programming?
        - Have you begun looking for opportunities to use MEF?
        - Have you begun planning a major rewrite of an existing app that may benefit from extensibility?

    That said, my questions are: how do I know when I should plan a new project with extensibility? How will I know if an existing project needs to be rewritten for extensibility? Is anyone using MEF?
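    For readers who haven't seen it, the shape of a MEF host is small; a minimal sketch using the standard System.ComponentModel.Composition attributes (the IModule contract and class names are illustrative):

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface IModule { void Run(); }

        [Export(typeof(IModule))]
        public class HelloModule : IModule {
            public void Run() { Console.WriteLine("Hello from a module."); }
        }

        public class Host {
            [ImportMany] // collects every IModule export found in the catalog
            public IEnumerable<IModule> Modules { get; set; }

            public void Compose() {
                var catalog = new AssemblyCatalog(typeof(Host).Assembly);
                using (var container = new CompositionContainer(catalog)) {
                    container.ComposeParts(this); // satisfies [ImportMany]
                    foreach (var module in Modules) module.Run();
                }
            }
        }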


  • How to integrate technical line/functional manager into Scrum team?

    - by thegreendroid
    We have recently had a new line manager start who is managing our Scrum team. He is immensely experienced in our field but is relatively inexperienced with Agile/Scrum. He has extensive technical expertise in embedded software (the team's domain) that would go to waste if not utilised properly. However, the team is wary of making a line manager part of the Scrum team. The general consensus is that the line manager should not be part of the Scrum team at all. There are a number of issues that may crop up, e.g. the team may start "reporting" to the manager (i.e. giving a daily status update!), the manager may start to micro-manage team members, etc. As it currently stands, he has already said that he feels like an outsider within the team. We really want to make use of his technical skills; we'd be foolish not to, because we are a relatively inexperienced and young team of twenty-somethings. What would be the best approach to integrate a senior "technical" line manager into a Scrum team and make him feel like he is part of the team?


  • Help us with our git workflow

    - by Brandon Cordell
    We have a web application that gets deployed to multiple regions around our state, with an instance of the application for each region. We maintain a staging and a production (master) branch in our repository, but we were wondering what the best way is to maintain each instance's codebase. The application is similar at the core, but we have to give each region the ability to make specific requests that may not make it into the core of the application. Right now we have branches for each region, like region_one_staging and region_one_production. At the rate we're growing, we'll have hundreds of branches here in the next few years. Is there a better way to do this?


  • Should a polling framework be closed source?

    - by samquo
    I was having a chat with a coworker who is working on a polling app and framework. He was asking technical questions, and I suggested he open source the application to get more quality opinions from developers who are interested in this problem and are willing to give it serious thought. He has a different point of view, which I think is still valid, so I want to open this question for discussion here. He says he believes something like a polling framework should not be open sourced, because doing so will reduce its security and validity as people reveal loopholes through which they can cheat. I can't say I completely disagree. I see a somewhat valid point there, but I have always believed that solutions from a group of people are almost always better than a solution thought up by a single person asking a small number of coworkers, no matter how smart that person is. Again, I'm willing to accept that maybe some types of applications are different. Does anyone have an argument in his favor? I'd really like to present your responses to him.


  • Can I release complementary Windows 8 and WP8 apps on their respective stores?

    - by Clay Shannon
    I am creating a pair of apps, one to run preferably on tablets (but also laptops and PCs), and the other for WP8. These apps are complementary - having one is of no use without the other. I know there is a Windows Store and a Windows Phone Store, so one would be released on each. My question is: as these apps are useless by themselves (although in most cases it won't be the same people running both apps), will there be a problem with offering apps that are useless when used alone? IOW: person A will use the Windows 8 app to interact with some people that have the WP8 app installed; those with the WP8 app will interact with a person or people who have the Windows 8 app installed. What I'm worried about is whether these apps go through a certification process where they must be useful "standalone" - is that the case?


  • Actor library / framework for C++

    - by Giorgio
    In the C++ project I am working on, we would like to use something like Scala actors and remote actors (see e.g. this tutorial). Being able to use remote actors (actors living in different processes, possibly on different machines, communicating via TCP/IP) has higher priority for us, because we have an application consisting of several processes deployed on different machines. Being able to use several actors living in the same process (possibly in different threads) is also interesting, but has lower priority for the moment. On Wikipedia I have found some links to actor libraries for C++, and I have started to look at Theron. Before I dive too deep into the details and build an extended example with Theron, I wanted to ask if anybody has experience with any of these libraries and which one they would recommend.


  • How do you choose to use a specific programming language?

    - by Jesús Bracamonte
    I was having some small talk with teammates about how you choose a programming language for a project, which led me to think that there are many criteria for choosing one at the beginning of a project, but no real standard. Do you choose a programming language for its syntax and semantics? Or do you choose one because it has the best support for doing certain things? Or because it has better libraries? Or do you choose it for the paradigm? What criteria do you use to choose a language when you are starting a project?

