Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 196 of 663

  • Why is C++ predominant in programming contests and competitions?

    - by daniels
    I understand that C++ is a very fast language, but isn't C just as fast, or faster in some cases? Then you might say that C++ has OOP, but the amount of OOP you need for most programming puzzles is not that big, and in my opinion C would be able to handle that. Here's why I am asking this: I am very interested in programming contests and competitions, and I am used to coding in C on those. However, I noticed that the vast majority of people use C++ (e.g., 17 out of 25 finalists on Google Code Jam 2011 used it, while no one used C), so I am wondering if I am at a disadvantage going with C. Apart from the object orientation, what makes C++ a more suitable language for programming competitions? What are the features of the language I should learn and use to perform better in the competitions? For background, I consider myself pretty proficient in C, but I am just starting to learn C++.

    Read the article

  • Separating a "wad of stuff" utility project into individual components with "optional" dependencies

    - by romkyns
    Over the years of using C#/.NET for a bunch of in-house projects, we've had one library grow organically into one huge wad of stuff. It's called "Util", and I'm sure many of you have seen one of these beasts in your careers. Many parts of this library are very much standalone, and could be split up into separate projects (which we'd like to open-source). But there is one major problem that needs to be solved before these can be released as separate libraries. Basically, there are lots and lots of cases of what I might call "optional dependencies" between these libraries. To explain this better, consider some of the modules that are good candidates to become stand-alone libraries. CommandLineParser is for parsing command lines. XmlClassify is for serializing classes to XML. PostBuildCheck performs checks on the compiled assembly and reports a compilation error if they fail. ConsoleColoredString is a library for colored string literals. Lingo is for translating user interfaces. Each of those libraries can be used completely stand-alone, but if they are used together then there are useful extra features to be had. For example, both CommandLineParser and XmlClassify expose post-build checking functionality, which requires PostBuildCheck. Similarly, the CommandLineParser allows option documentation to be provided using the colored string literals, requiring ConsoleColoredString, and it supports translatable documentation via Lingo. So the key distinction is that these are optional features. One can use a command line parser with plain, uncolored strings, without translating the documentation or performing any post-build checks. Or one could make the documentation translatable but still uncolored. Or both colored and translatable. Etc. Looking through this "Util" library, I see that almost all potentially separable libraries have such optional features that tie them to other libraries. If I were to actually require those libraries as dependencies then this wad of stuff isn't really untangled at all: you'd still basically require all the libraries if you want to use just one. Are there any established approaches to managing such optional dependencies in .NET?
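
    One pattern that is sometimes used for exactly this kind of "optional dependency" is to probe for the optional library at runtime instead of referencing it at compile time, and fall back to a plain implementation when it is absent. The question is about .NET; the Java sketch below only illustrates the shape of the idea, and the class and feature names are assumptions for illustration, not code from the Util library.

        // Minimal sketch: keep a dependency optional by probing for it at runtime.
        // "consolecolor.ColoredString" is a hypothetical optional library, not a
        // real package from the question.
        public final class ColoredOutput {
            private static final boolean COLOR_AVAILABLE = detectOptionalLibrary();

            private static boolean detectOptionalLibrary() {
                try {
                    Class.forName("consolecolor.ColoredString"); // present on the classpath?
                    return true;
                } catch (ClassNotFoundException e) {
                    return false;                                // fall back silently
                }
            }

            // Callers always use this; the "[colored] " marker stands in for a
            // reflective call into the optional library, attempted only when the
            // dependency was found.
            public static String render(String text) {
                return COLOR_AVAILABLE ? "[colored] " + text : text;
            }
        }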

    Read the article

  • What guidelines do you suggest for using Objective-C Properties?

    - by adarsha
    Objective-C 2.0 introduced properties. While I personally think properties are a nice addition to the language, I have seen a trend of making every instance variable a property. Apple's sample code is no exception to this. I believe this is against the spirit of OOP, since it exposes far more of a class's implementation details to the client than they need to know. What guidelines do you suggest for the proper usage of properties in Objective-C?

    Read the article

  • ColdFusion Search with Excel Sheet creation

    - by user71602
    I created a search form with two buttons - one to show results on the web page and one to create an Excel sheet. Problem: after a user conducts a search and sees results on the web page, they have to select the criteria again and then click the create-Excel button to generate the sheet. How can I arrange it so that, after conducting a search, the user can just click one button to create the Excel sheet? (The form uses "Post".)

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point), it was made clear to me that Gartner's hype cycle is a biased metric from a programmer's perspective. This is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given the exposure and enthusiasm around DVCS (which "could" be deemed hype, or at least attacked as such), think about the following question when reading this one: "how could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready enough) for us, and that it is not just hype?" Just asking whether DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than simply asking that (even if the instrument is regarded as biased). If you know any other instrument, please, by all means, mention it. Edit #2: I agree that Gartner's hype cycle is not for every technology, but I consider that DVCS may have generated enough buzz to be considered hype by some, so it perhaps deserves to be at least evaluated as such, using this instrument to prove or disprove it to whatever degree. I'm an advocate of DVCS, BTW. I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to show that the social proof of DVCS adoption is not necessarily cargo cult, and while doing further research I stumbled upon Gartner's hype cycle, which describes technology maturity in 5 phases. My question is: what could be an indicator of the current location of Distributed Version Control Systems (I mean git, mercurial, bazaar, etc. in general) at a particular phase in the hype cycle? In other (less convoluted) words, would you say that current expectations of DVCSs are a) starting, b) inflated, c) decreasing (disillusionment), d) increasing (enlightenment), or e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • Computer vision algorithms (how is this possible?)

    - by Maxim Gershkovich
    I recently stumbled across a company that has created what appears to be computer vision technology capable of detecting shoplifting automatically and alerting its users. LINK Watching some of the videos and examples provided by the company has left me completely baffled and amazed as to how on earth they may have achieved this functionality. I understand that no one here will be able to tell me exactly how this may have been achieved, but is anyone aware of - and could point me to - research in this field, or alternatively perhaps provide details as to how something like this could be implemented, or guidance on where one might start? My understanding was that computer vision algorithms were many years away from being this sophisticated. Is this sort of application really possible? Anyone willing to hazard a guess at how they achieved this?

    Read the article

  • Is creating a full application in Silverlight advisable?

    - by Anthony
    Is creating a huge public site fully in Silverlight really advisable - e.g., an e-commerce site? I don't want to start any debate, but actually I feel Silverlight shouldn't be used for a full website, because the biggest loss you incur is SEO. No search engine today can parse the XAP file and index it based on its content. You can get around it with ifs and thens - if Silverlight is not supported, then serve an ASP.NET equivalent page - but that only doubles the effort of building the application, more than anything else. Why write the same code twice in two applications meant for the same purpose? If that is the only option, why not create the ASP.NET application only? What are your views? Thanks in advance :)

    Read the article

  • Online CV examples

    - by Reza M.
    I'm a soon-to-be software engineer, hopefully... I wanted to start my online CV... As I looked around, I found the old-school types that are just plain text, while the new ones are all colorful but seem overpowering. I was thinking of a more section-wise CV, one that would link to categories. But here is the thing: I'm a noob at this... Any hints, help, or examples would be much appreciated. In short, I would want a CV plus a portfolio that works across all different browsers. So my question: any examples, guidelines, or templates for creating a perfect ONLINE CV, preferably with a portfolio?

    Read the article

  • How to increase ad revenue

    - by Brian515
    I have an Android game in which I've included ads from Millennial Media and AdMob. Between these two networks, my fill rate is consistently above 95%. Right now, I'm getting over 45,000 requests for ads per day, and I'm starting to see exponential growth. But my eCPM for both Millennial and AdMob is ridiculously low: Millennial's is $.12 and AdMob's is $.01. I've been reading some other posts, and it seems like people are getting between $1 and $4 per 1,000 views. How can I increase mine? Thank you!

    Read the article

  • Recommendations for adjustable sit-stand workstations?

    - by Chris Phillips
    Recently, I've been feeling the discomfort of sitting at my desk all day long. I'm fairly active, stretch, and take regular breaks, but some days it's still pretty uncomfortable to sit all day long, whether in a nice chair or on an exercise ball. I would really like to be able to stand at my computer for part of the day. My current setup is a large desk with two 26" LCDs and a 17" laptop. I don't mind if the laptop isn't adjustable, as I don't use it as regularly as the monitors. I would like to be able to fairly easily switch from a sitting position to a standing position and back again as necessary. I've been looking into adjustable-height desks and stands and found that they tend to be either really expensive or don't quite meet my needs. (For example, the Ergotron WorkFit-S Dual LCD workstation looks like the ideal feature set at a reasonable price, but won't fit my monitors.) Any suggestions or thoughts? Update: fixed a typo. Thanks @RDL!

    Read the article

  • Should I start learning WPF?

    - by questron
    Hi, I've been studying C# for about 2 months so far. I have a few years' experience programming, though in VBA and VB6, and nothing too in-depth (I've written simple tools, but nothing too advanced). I am very interested in WPF and I was wondering whether it would be too early for me to start learning how to use it? For reference, I am about 400 pages into 'Head First C#' and I've written a program that can quickly tag image files for moving into a predefined folder or for deletion (it allows the user to pull images off a camera memory card and sort them VERY quickly). That's the most advanced app I've written.

    Read the article

  • Python as a first language?

    - by user64085
    I have just started working in the information security world. I want to learn the Python language for creating my own automated tools for fuzzing, SQL injection, etc. My question is: I don't know much about the C language (only basic knowledge), but I want to learn Python directly - is that a good idea? I have seen there are lots of differences between Python and C (obviously), and in the information security field Python = GOD, so I want to know: does learning Python require any experience with C? If not, can I start learning Python directly?

    Read the article

  • Build one to throw away vs Second-system effect

    - by m3th0dman
    On one hand there is the advice that says "Build one to throw away". Only after finishing a software system and seeing the end product do we realize what went wrong in the design phase and understand how we should really have done it. On the other hand there is the "second-system effect", which says that the second system of the same kind that is designed is usually worse than the first one; many features that did not fit in the first project get pushed into the second version, usually leading to an overly complex and over-engineered design. Isn't there some contradiction between these principles? What is the correct view of these problems, and where is the border between the two? I believe that these "good practices" were first promoted in the seminal book The Mythical Man-Month by Fred Brooks. I know that some of these issues are solved by Agile methodologies, but deep down the principles still stand; for example, we would not make important design changes 3 sprints before going live.

    Read the article

  • Advice for a getting a job in algorithmic trading - writing faster code

    - by Alex
    I am currently an intermediate Java developer working in the financial industry. I am considering trying to get into an algorithmic trading developer position. I am looking for any advice/resources that may help me obtain such a job. My naive initial thought is to concentrate on learning how to write faster, more memory-efficient code whilst maintaining readability. Can anyone point me in the right direction of some useful resources for what I am aiming to achieve?

    Read the article

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately is how to effectively test (end-to-end) features in a distributed system. Particularly, how to effectively manage (through time) test data for feature testing. The system in question is a typical SOA setup. The composition is done in JavaScript with calls to several REST APIs. Each service is built as an independent block. Each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, and it is therefore necessary for test data to be set up before the test run (this could be manual or automated testing). As is typical in a distributed system, identifiers from one service are used as a link in another service. So, some level of synchronization needs to be present in the data to test effectively. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service? Or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance about how to store and manage this test data as the application features evolve.

    Read the article

  • Immutable Method in Java

    - by Chris Okyen
    In Java, there is the final keyword in lieu of the const keyword in C and C++. In the latter languages there are mutable and immutable methods, as stated in the answer by Johannes Schaub - litb to the question How many and which are the uses of "const" in C++? Use const to tell others that methods won't change the logical state of this object. struct SmartPtr { int getCopies() const { return mCopiesMade; } } ptr1; ... int var = ptr1.getCopies(); // returns mCopiesMade and is specified not to modify the object's state. How is this performed in Java?
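
    Java has no direct counterpart to a const member function; the closest common idiom, shown in the minimal sketch below (an assumption-level illustration mirroring the C++ snippet above, not code from the question), is to make the class immutable so that accessors cannot change logical state at all.

        // Minimal sketch: the field is final and there are no mutators, so
        // getCopies() cannot change the object's logical state. Java enforces
        // immutability of the fields, though not "const-ness" of the method itself.
        public final class SmartPtr {
            private final int copiesMade;

            public SmartPtr(int copiesMade) {
                this.copiesMade = copiesMade;
            }

            public int getCopies() {
                return copiesMade;
            }
        }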

    Read the article

  • How do you explain refactoring to a non-technical person?

    - by Benjol
    (This question was inspired by the most-voted answer here) How do you go about explaining refactoring (and technical debt) to a non-technical person (typically a PHB or customer)? ("What, it's going to cost me a month of your work with no visible difference?!") UPDATE Thanks for all the answers so far, I think this list will provide several useful analogies to which we can point the appropriate people (though editing out references to PHBs may be wise!)

    Read the article

  • Is it possible to execute keyboard input programmatically in Linux?

    - by Taylor Hawkes
    For example, is there a Linux command, or a way from a program (C++, Python, or other), to enter a series of inputs that are interpreted as though they were keyboard inputs? I have a bad case of Repetitive Stress Injury (RSI) from typing. To ease my pain I developed a voice-controlled interface using PocketSphinx and a custom grammar to run a number of very common commands, e.g. "open chrome", "open vim". Basically what is shown here, but with slightly different tools: http://bloc.eurion.net/archives/2008/writing-a-command-and-control-application-with-voice-recognition/ I have run into some limitations, as I can only execute command-line commands given a voice command. Rather than having a "voice command" - "command line command" mapping, I would like to have a "voice command" - "keyboard input" mapping. So when my active window is a browser and I type + n, a new tab opens; if I'm in vim, a new vim tab opens. Any suggestions, ideas, tools or approaches to this problem would be much appreciated. I understand the answer may not be simple, but I would like to develop it nonetheless.
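
    One approach that is known to work under X11 is synthesizing key events with java.awt.Robot; command-line tools such as xdotool offer the same capability from a shell script. The sketch below is a minimal illustration that sends a Ctrl+N chord purely as an example key combination, not a drop-in replacement for the voice-command mapping described above.

        import java.awt.AWTException;
        import java.awt.Robot;
        import java.awt.event.KeyEvent;

        // Minimal sketch: send Ctrl+N to whichever window currently has focus.
        // Assumes an X11 session; Wayland compositors may reject synthetic input.
        public class SendKeyChord {
            public static void main(String[] args) throws AWTException {
                Robot robot = new Robot();
                robot.keyPress(KeyEvent.VK_CONTROL);
                robot.keyPress(KeyEvent.VK_N);
                robot.keyRelease(KeyEvent.VK_N);
                robot.keyRelease(KeyEvent.VK_CONTROL);
            }
        }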

    Read the article

  • Data structures in functional programming

    - by pwny
    I'm currently playing with LISP (particularly Scheme and Clojure) and I'm wondering how typical data structures are dealt with in functional programming languages. For example, let's say I would like to solve a problem using a graph pathfinding algorithm. How would one typically go about representing that graph in a functional programming language (primarily interested in pure functional style that can be applied to LISP)? Would I just forget about graphs altogether and solve the problem some other way?

    Read the article

  • Is there a design pattern for chained observers?

    - by sharakan
    Several times, I've found myself in a situation where I want to add functionality to an existing Observer-Observable relationship. For example, let's say I have an Observable class called PriceFeed, instances of which are created by a variety of PriceSources. Observers on this are notified whenever the underlying PriceSource updates the PriceFeed with a new price. Now I want to add a feature that allows a (temporary) override to be set on the PriceFeed. The PriceSource should still update prices on the PriceFeed, but for as long as the override is set, whenever a consumer asks the PriceFeed for its current value, it should get the override. The way I did this was to introduce a new OverrideablePriceFeed that is itself both an Observer and an Observable, and that decorates the actual PriceFeed. Its implementation of .getPrice() is straight from Chain of Responsibility, but what about the handling of Observable events? When an override is set or cleared, it should issue its own event to Observers, as well as forwarding events from the underlying PriceFeed. I think of this as some kind of chained observer, and was curious if there's a more definitive description of a similar pattern.
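
    For illustration, here is a minimal Java sketch of the arrangement described above: a decorator that observes the wrapped feed, forwards its events, and raises its own events when the override changes. The listener interface and method names are assumptions made for the sketch, not the question's actual API.

        import java.util.ArrayList;
        import java.util.List;

        interface PriceListener {
            void priceChanged(double newPrice);
        }

        class PriceFeed {
            private final List<PriceListener> listeners = new ArrayList<>();
            private double price;

            void addListener(PriceListener l) { listeners.add(l); }

            // Called by the PriceSource; notifies all registered observers.
            void update(double newPrice) {
                price = newPrice;
                for (PriceListener l : listeners) l.priceChanged(newPrice);
            }

            double getPrice() { return price; }
        }

        // Decorates a PriceFeed: observes it, re-publishes its events, and issues
        // its own events when an override is set or cleared.
        class OverrideablePriceFeed extends PriceFeed implements PriceListener {
            private final PriceFeed underlying;
            private Double override;                 // null means "no override"

            OverrideablePriceFeed(PriceFeed underlying) {
                this.underlying = underlying;
                underlying.addListener(this);        // chain: observe the wrapped feed
            }

            @Override
            public void priceChanged(double newPrice) {
                update(newPrice);                    // forward to our own listeners
            }

            void setOverride(double value) { override = value; update(value); }

            void clearOverride() { override = null; update(underlying.getPrice()); }

            @Override
            double getPrice() {                      // Chain of Responsibility for reads
                return override != null ? override : underlying.getPrice();
            }
        }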

    Read the article

  • Empirical Evidence of Popularity of Git and Mercurial

    - by ana
    It's 2012! Mercurial and Git are both still strong. I understand the trade-offs of both. I also understand everyone has some sort of preference for one or the other. That's fine. I'm looking for some information on level of usage of both. For example, on stackoverflow.com, searching for Git gets you 12000 hits, Mercurial gets you 3000. Google Trends says it's 1.9:1.0 for Git. What other empirical information is available to estimate the relative usage of both tools?

    Read the article

  • How to design database having multiple interrelated entities

    - by Sharath Chandra
    I am designing a new system which is more of a help system for core applications in banks or the healthcare sector. Given the nature of the system, this is not a heavily transaction-oriented system but more of a read-intensive one. Now within this application I have multiple entities which are related to each other. For example, assume the following entities in the system: User, Training, Regulations. Now each of these entities has an M:N relationship with the others. Assuming the usage of a standard RDBMS, the design may involve many relationship tables, each containing the relationships between two entities ("User_Training", "User_Regulations", "Training_Regulations"). This design is limiting, since I have more than 3 entities in the system and maintaining the relationship graph is difficult this way. The most frequently used operation is "given an entity, get me all the related entities". I need to design the database so that this operation is relatively inexpensive. What are the different recommendations for modelling this kind of database?

    Read the article

  • When is an object oriented program truly object oriented?

    - by Syed Aslam
    Let me try to explain what I mean: say I present a list of objects and I need to get back the object selected by a user. The following are the classes I can think of right now: ListViewer, Item, and App [the calling class]. In a GUI application, clicking on a particular item usually selects it; in a command-line application, selection is some input, say an integer representing that item. Let us go with a command-line application here. A function lists all the items and waits for the choice of object, an integer. So here I get the choice - is the choice to be conceived as an object? And based on the choice, the corresponding object in the list is returned. Does writing this program the way explained above make it truly object-oriented? If yes, how? If not, why? Or is the question itself wrong, and I shouldn't be thinking along those lines?
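
    One way to read that scenario, sketched minimally in Java below (the class and method names mirror the roles in the question but are otherwise assumptions), is that the integer is only an index: the ListViewer translates it back into an Item, so the calling code never deals with anything but objects.

        import java.util.List;
        import java.util.Scanner;

        class Item {
            private final String name;
            Item(String name) { this.name = name; }
            String getName() { return name; }
        }

        class ListViewer {
            private final List<Item> items;
            ListViewer(List<Item> items) { this.items = items; }

            // Lists every item, reads a 1-based choice, and returns the chosen object.
            Item chooseItem(Scanner in) {
                for (int i = 0; i < items.size(); i++) {
                    System.out.println((i + 1) + ") " + items.get(i).getName());
                }
                int choice = in.nextInt();
                return items.get(choice - 1);
            }
        }

        class App {
            public static void main(String[] args) {
                ListViewer viewer = new ListViewer(List.of(new Item("alpha"), new Item("beta")));
                Item selected = viewer.chooseItem(new Scanner(System.in));
                System.out.println("You picked: " + selected.getName());
            }
        }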

    Read the article

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC. Then I need to decrypt and access it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from the file. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help. [Update] I am currently using an Atmel UC3, although I am not sure if that will be the final choice. It has 512 kB flash and 128 kB RAM. For an INI file, I am talking of max 8 sections, with a total of max 256 entries, each max 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on the PC), but suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need to get more than one line into RAM at a time ...

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is at the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes: working with DBA / Unix / Network / Loadbalancer teams on various tasks; placing and managing orders for hardware or infrastructure in different regions; running tests that have not yet been migrated to CI; analysis; and support / investigation. It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and can somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article
