Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 230 of 605

  • LINQ Style preference

    - by Erin
    I have come to use LINQ in my everyday programming a lot. In fact, I rarely, if ever, use an explicit loop. I have, however, found that I don't use the SQL-like query syntax anymore; I just use the extension methods. So rather than writing:

        from x in y where filter select datatransform

    I use:

        y.Where(c => filter).Select(c => datatransform)

    Which style of LINQ do you prefer, and what are others on your team comfortable with?
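    For illustration, a minimal, self-contained sketch (the data and transforms are invented) showing that the two styles are interchangeable; the compiler translates query syntax into the extension-method calls:

        using System;
        using System.Linq;

        class LinqStyles
        {
            static void Main()
            {
                int[] y = { 1, 2, 3, 4, 5, 6 };

                // Query syntax: filter first, then project.
                var querySyntax = from x in y where x % 2 == 0 select x * 10;

                // Method syntax: what the query above compiles down to.
                var methodSyntax = y.Where(c => c % 2 == 0).Select(c => c * 10);

                Console.WriteLine(string.Join(", ", querySyntax));  // 20, 40, 60
                Console.WriteLine(string.Join(", ", methodSyntax)); // 20, 40, 60
            }
        }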

    Read the article

  • The perfect crossfade

    - by epologee
    I find it hard to describe this problem in words, which is why I made a video (45 seconds) to illustrate it. Here's a preview of the questions; please have a look at it on Vimeo: http://vimeo.com/epologee/perfect-crossfade The issue of creating a flawless crossfade or dissolve of two images or shapes has come up for me repeatedly in a number of fields over the last decade: first in video editing, then in Flash animation and now in iOS programming. When you start googling it, there are many workarounds to be found, but I really want to solve this without a hack this time. The summary: What is the name of the technique or curve to apply when crossfading two semi-transparent, same-colored bitmaps, if you want the resulting transparency to match the original of either one? Is there a (mathematical) function to calculate the necessary partial transparency/alpha values during the fade? Are there programming languages that have these functions as presets, similar to the ease-in, ease-out or ease-in-out functions found in ActionScript or Cocoa?
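    A hedged sketch of one standard derivation (an assumption about what the video shows, not a confirmed answer): under the Porter-Duff "over" operator, the combined alpha of a top layer over a bottom layer is top + bottom * (1 - top). If both bitmaps have native opacity A and the incoming layer ramps up linearly as t * A, solving for the outgoing layer's alpha so the composite stays at A throughout the fade gives A * (1 - t) / (1 - t * A):

        using System;

        class Crossfade
        {
            // Alphas for a constant-opacity crossfade of two layers that each
            // have native opacity A, at fade position t in [0, 1].
            static (double top, double bottom) Alphas(double t, double A)
            {
                double top = t * A;                 // incoming layer, linear ramp
                double denom = 1.0 - t * A;
                double bottom = denom <= 0.0
                    ? 0.0                           // t = A = 1: fade complete
                    : A * (1.0 - t) / denom;        // outgoing layer, compensated
                return (top, bottom);
            }

            static void Main()
            {
                for (double t = 0.0; t <= 1.0; t += 0.25)
                {
                    var (top, bottom) = Alphas(t, 0.5);
                    double composite = top + bottom * (1.0 - top);
                    Console.WriteLine($"t={t:0.00}  top={top:0.000}  " +
                                      $"bottom={bottom:0.000}  composite={composite:0.000}");
                }
            }
        }

    The non-linear compensation term is why a naive linear crossfade of both layers dips in opacity at the midpoint.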

    Read the article

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got is that, since in Agile methodology and Extreme Programming both the design and the programming are evolutionary, there are always points where things need to be refactored. It may be possible that when the programmers are good, understand design implications and don't make critical mistakes, the code continues to evolve. But what is the ground reality in a normal context? Once significant development has gone into a product and a critical change arrives in the requirements, isn't it a real constraint that, however much we wish otherwise, fundamental design aspects cannot be modified without throwing away a major part of the code? Isn't it quite likely that one reaches a dead-end on any further possible improvement of the design and requirements? I am not advocating any non-Agile practice here, but I want to know the real experiences of people who practice agile, iterative or evolutionary development methods. Have you ever reached such dead-ends? How have you managed to avoid them or escape them? Or are there measures to ensure that the design remains clean and flexible as it evolves?

    Read the article

  • What should a PHP programmer know?

    - by emchinee
    I've dug through the database here and didn't find any answer to my question. What is it standard for a PHP programmer to know? I mean, literally, which groups of language functions, mechanisms and variables should a person know to consider oneself a (good) PHP programmer? (I know 'being good' is beyond language syntax; still, I'm considering the syntax of plain PHP only.) To give an example of what I mean: functions to control HTTP sessions and cookies; functions to control connections with databases; functions to control file handling; functions to control XML; etc. I omit phrases like 'security' or 'patterns' or 'framework' intentionally, as they apply to every programming language. Hope I made myself clear; any input appreciated :) Note: Michael J.V. is right to claim that databases are independent of the language, so to put my question more precisely and emphasise the difference: practices for security are ideas to implement (there is no 'Pattern' object with a 'Decorator()' method, is there?), while using databases means knowing mysqli and the set of its methods.

    Read the article

  • How did craigspro license Craigslist content? [closed]

    - by Joshua Frank
    There's an app called craigspro that provides a much better interface to Craigslist on mobile devices. They claim that the app is Officially Licensed by Craigslist, but I thought Craigslist never licensed their content, and the only thing I can find on the subject in the terms of use is this: Any copying, aggregation, display, distribution, performance or derivative use of craigslist or any content posted on craigslist whether done directly or through intermediaries (including but not limited to by means of spiders, robots, crawlers, scrapers, framing, iframes or RSS feeds) is prohibited. As a limited exception, general purpose Internet search engines and noncommercial public archives will be entitled to access craigslist without individual written agreements executed with CL that specifically authorize an exception to this prohibition if ... Does anyone know how to get a "written agreement" with Craigslist, and roughly what their terms would be? Do they charge a fee, or just check that you're not evil? I'll try Craigslist directly next, but I'd like to get a sense of the landscape before stumbling in.

    Read the article

  • Statistical Software Quality Control References

    - by Xodarap
    I'm looking for references about hypothesis testing in software management. For example, we might wonder whether "crunch time" leads to an increase in defect rate - this is a surprisingly difficult thing to test. There are many questions on how to measure quality - this isn't what I'm asking. And there are books like Kan's which discuss various quality metrics and their utility. I'm not asking about that either. I want to know how one applies these metrics to make decisions. E.g. suppose we decide to go with critical errors / KLOC. One of the problems we'll have to deal with is that this is not a normally distributed data set (almost all patches have zero critical errors). And further, it's not clear that we really want to examine the difference in means. So what should our alternative hypothesis be? (Note: based on previous questions, my guess is that I'll get a lot of answers telling me that this is a bad idea. That's fine, but I'd request that they be based on published data, instead of your own experience.)
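    As a purely illustrative sketch of one distribution-free option (the data is invented, and this is only one of many reasonable test choices): a permutation test compares the observed difference in mean defect density against its distribution under random relabeling, so no normality assumption is needed:

        using System;
        using System.Linq;

        class PermutationTest
        {
            // Invented illustration data: critical errors per KLOC, mostly zero.
            static readonly double[] Crunch = { 0, 0, 0, 1.2, 0, 2.5, 0, 0, 3.1, 0.8 };
            static readonly double[] Normal = { 0, 0, 0, 0, 0.4, 0, 0, 1.1, 0, 0 };

            static double MeanDiff(double[] a, double[] b) => a.Average() - b.Average();

            static void Main()
            {
                double observed = MeanDiff(Crunch, Normal);
                double[] pooled = Crunch.Concat(Normal).ToArray();
                var rng = new Random(42);
                int extreme = 0, trials = 100_000;

                for (int i = 0; i < trials; i++)
                {
                    // Null hypothesis: the "crunch" label has no effect, so any
                    // relabeling of the pooled sample is equally likely.
                    double[] shuffled = pooled.OrderBy(_ => rng.Next()).ToArray();
                    double diff = MeanDiff(shuffled.Take(Crunch.Length).ToArray(),
                                           shuffled.Skip(Crunch.Length).ToArray());
                    if (Math.Abs(diff) >= Math.Abs(observed)) extreme++;
                }

                Console.WriteLine($"observed diff = {observed:0.000}, " +
                                  $"two-sided p ~ {(double)extreme / trials:0.0000}");
            }
        }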

    Read the article

  • Which platform to choose, Java or .NET?

    - by salman
    I am working in a private bank, a leading mid-size bank in the local market. We are going to create our core banking solution. The existing solution was developed in Java using IBM VisualAge 4.0. It is important to discuss the architecture first: we currently have more than 350 branches working in standalone mode, which means they work in self-contained environments. They have their own database servers (IBM DB2 9.7) and they communicate with other branches via sockets to send and receive data. Having more than 5 years of .NET experience, I am trying to convince my superiors to choose the .NET platform, but they are reluctant and unwilling. It is my job to encourage them to choose the best available platform for creating a large-scale enterprise application. In simple words, we are going to create a very large-scale enterprise financial application, centralized and integrated, which connects the whole branch network, with a scalable, solid architecture that can easily evolve over time. I want professional people to comment on the above scenario. Which platform should we choose, .NET or Java? All our staff currently work in Java, and we have a homogeneous environment (no Linux, no Mac and no UNIX). Any ideas, thoughts or points, technical or non-technical (i.e. from an administrative or management point of view), will be really appreciated.

    Read the article

  • How to keep AST for feature access?

    - by greenoldman
    Consider this code (let's say it is C++): Foo::Bar.get().X How should one store the AST for this - as a "tree" with the root at the left, Foo(Bar(get(X))), or with the root at the right, (((Foo)Bar)get)X? Or maybe as a flat structure (a list)? The first seems more convenient when resolving names, the second when working with it as an expression. I set the tag 'parsing', but I am really asking from a semantic-analysis point of view (there is no such tag).
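    For illustration, a hedged C# sketch of the right-rooted shape (all type names invented): each access node points at the sub-expression it applies to, so evaluation unwinds from the outermost node, while name resolution walks the chain inward from Foo:

        using System;

        abstract class Expr { }

        class Name : Expr                    // Foo
        {
            public string Id;
            public Name(string id) { Id = id; }
        }

        class ScopeAccess : Expr             // <target>::Bar
        {
            public Expr Target; public string Member;
            public ScopeAccess(Expr target, string member) { Target = target; Member = member; }
        }

        class MemberAccess : Expr            // <target>.get, <target>.X
        {
            public Expr Target; public string Member;
            public MemberAccess(Expr target, string member) { Target = target; Member = member; }
        }

        class Call : Expr                    // <callee>()
        {
            public Expr Callee;
            public Call(Expr callee) { Callee = callee; }
        }

        class Demo
        {
            static void Main()
            {
                // Foo::Bar.get().X -- the root is the final member access.
                Expr ast = new MemberAccess(
                               new Call(
                                   new MemberAccess(
                                       new ScopeAccess(new Name("Foo"), "Bar"),
                                       "get")),
                               "X");
                Console.WriteLine(ast.GetType().Name); // MemberAccess
            }
        }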

    Read the article

  • Breadcrumbs in a modern web application: do they make sense? [on hold]

    - by Xtreme Biker
    I'm currently beginning the development of a new web application. The whole web application is going to be bookmarkable, with all pages accessible via GET requests and URL parameters. Having said that, let's suppose I've got three entities in my application: Customer, Team and City. Each Customer and Team belongs to a City, and I've got a city-detail page which displays the detail of a concrete city. So the following navigation cases are possible:
    - Customers - Customer detail (id=2) - City detail (id=3)
    - Football teams - Team detail (id=5) - City detail (id=3)
    - Cities - City detail (id=3)
    There are three possible ways of ending up in the city detail view. My question is: does it make sense to implement a breadcrumb to show such a history, given it is available in the browser itself? Would it be more appropriate to show the breadcrumb for the last case, no matter where we're coming from (a hierarchical breadcrumb)? That's what Jakob Nielsen points out here: Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web's second-most-used feature. A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn't offer much help. Finally, a history trail is useless for users who arrive directly at a page deep within the site. Also, even if the history trail seems the most natural way to implement it, it requires extra effort to keep track of the whole trail, HTTP being a stateless medium.

    Read the article

  • What is the difference between Static code analysis and code review?

    - by Xander
    I just wanted to know: what is the difference between static code analysis and code review? How is each of these done? What are the tools available today for code review / static analysis of PHP? I would also like to know about good code review tools for any language. Thanks in advance. Xander Cage Note: I am asking this because I was not able to understand the difference. Please, I expect answers other than "I am Mr. Geek and you asked an irrelevant bla bla... this is closed". I know this sounds mean. But I am sorry.

    Read the article

  • New vs Clone Git in Eclipse with EGit

    - by Matty F
    I'm not sure I have my new-repo workflow, my clone-repo workflow, or both, set up correctly. When I create a new project, I create a repo on GitHub. I can't clone from it as it's empty, so I create a new project, which goes into my workspace, and then git init runs on the workspace copy. So I end up with everything in workspace\project-name. However, when I clone from GitHub, first I need to clone the repo, and this goes into my default Git directory (C:\git) as git\cloned-project-name. I then need to import this Git repo as a project into my workspace, and I end up with workspace\cloned-project-name, effectively duplicating the project folder in the Git area. I've tried to clone to workspace\cloned-project-name, but then it asks to import the Git project, and if I try to use workspace\cloned-project-name again, it errors. What am I doing wrong? Thanks, Matt.

    Read the article

  • Does semi-normalization exist as a concept? Is it "normalized"?

    - by Gracchus
    If you don't mind, a tl;dr on my experience first.
    My experience, tl;dr: I have an application that's heavily dependent upon uncertainty, a bane of database design. I tried to normalize it as best I could according to the capabilities of my database of choice, but a "simple" query took 50ms to read. NoSQL appeals to me, but I can't trust myself with it, and besides, normalization has cut down my debugging time immensely, over and over. Instead of 100% normalization, I made semi-redundant 1:1 tables with very wide primary keys and equivalent foreign keys. Read times dropped to a few ms, and write times barely degraded.
    The semi-normalized point: Given this reality, which anyone who's tried to rely upon views of fully normalized data is aware of, is this concept codified? Is it as simple as having wide unique and foreign keys, or are there hidden secrets to this technique? Or is uncertainty merely a special case that has extremely limited application and can be left on the ash heap?

    Read the article

  • How to manage focus for a small set of simple widgets

    - by Christoph
    I'm developing a set of simple widgets for a small (128x128) display. For example, I'd like to have a main screen with an overlay menu which I can use to toggle the visibility of main screen elements. Each option would be an icon with a box around it while it is selected. Button (left, up, right, down, enter) events should be given to the widget that has "focus". Focus is a simple thing to understand when using a GUI, but I'm having trouble implementing it. Can you suggest a simple concept for managing focus and input events? I have these simple ideas:
    - Only one widget can have focus, so I need a single pointer to that widget. When this widget gets some kind of "cycle" input (as in "highlight the next item in this list"), the focus is given to a different widget.
    - A widget must have a way of telling the application which widget the focus is given to next.
    - If a widget cannot give a "next focus" hint, the application must be able to figure out where the focus should go.
    Widgets are currently structured like this:
    - A widget can have a parent, which is passed to the constructor.
    - Widgets are created statically, as I want to avoid dynamic memory allocation (I only have 16kB of RAM and I'd like to have control over that).
    - Widgets have siblings, implemented as an intrusive linked list (they have a next member). A parent has a pointer to the head of its list of children.
    - Input events are arguments to the widget's buttonEvent method, which can accept or ignore an event. If it ignores the event, it can pass the event to its parent.
    My questions: How can I manage focus for these widgets? Am I making this too complicated?
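    One minimal shape for this, sketched in C# purely for illustration (the original is statically allocated C++, and all names here are hypothetical): a single focused pointer, events that bubble to the parent when unhandled, and a focus hand-off hook:

        using System;

        enum Button { Left, Up, Right, Down, Enter }

        abstract class Widget
        {
            public readonly Widget Parent;
            protected Widget(Widget parent) { Parent = parent; }

            // Return true if handled; otherwise the event bubbles to the parent.
            public virtual bool OnButton(Button b) => Parent?.OnButton(b) ?? false;

            // A widget losing focus may nominate a successor; null means
            // "no hint", and the manager falls back to its own policy
            // (e.g. sibling order).
            public virtual Widget NextFocus(Button b) => null;
        }

        class FocusManager
        {
            Widget focused;                  // the single owner of input
            public FocusManager(Widget initial) { focused = initial; }

            public void Dispatch(Button b)
            {
                if (focused == null || focused.OnButton(b)) return;
                // Unhandled input is treated as a "cycle" request.
                Widget next = focused.NextFocus(b);
                if (next != null) focused = next;
            }
        }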

    Read the article

  • Parsing stdout with custom format or standard format?

    - by linquize
    To integrate with other executables, an executable may launch another executable and capture its output from stdout. But most programs write their output to stdout in a custom format, usually one meant to be human readable. So the system integrator has to write a function to parse the output, which is troublesome, and the parser code may be buggy. Do you think this is old-fashioned? Most Unix-style programs do it that way. Very few programs write to stdout in a standard format such as XML or JSON, which is more modern. Example: Veracity (a DVCS) writes JSON to stdout. Should we switch to using modern formats? For a console program, human-readable or easily parsable: which is more important?
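    For illustration, a minimal sketch of the JSON-on-stdout approach using .NET's System.Text.Json (the output fields are invented):

        using System;
        using System.Text.Json;

        class JsonOutput
        {
            // Hypothetical result of some tool run.
            record Result(string Status, int FilesChanged, string[] Warnings);

            static void Main()
            {
                var result = new Result("ok", 3, new[] { "main.c: unused variable 'tmp'" });

                // One JSON document on stdout: trivially machine-parsable,
                // and still fairly readable with indentation turned on.
                Console.WriteLine(JsonSerializer.Serialize(result,
                    new JsonSerializerOptions { WriteIndented = true }));
            }
        }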

    Read the article

  • Why do people think SOAP is deprecated?

    - by user98q37479
    While browsing SO today I found this question, which starts with this: "Sure, you're gonna tell me that SOAP is depracated and all, well i'm forced to use it" I've found lots of statements like this one on SO up till now; this one just triggered me to ask. REST has its uses and SOAP has its uses; in some places they overlap in functionality, but they are not replacements for one another. So I wonder, why do people think SOAP is "deprecated"? Is it ignorance? The complexity of SOAP and the WS-* specs? REST hype? What? If you think SOAP is deprecated, please tell me why. I'm curious!

    Read the article

  • Listing Unix experience on a resume

    - by beacon
    I have been using Linux for quite a long time, and I am familiar with many Unix commands and tools. However, my only experience with Unix is through various Linux distributions. How should I communicate on my resume that I am familiar with the Unix command line even though I have never used a UNIX(R) system? It seems strange to me to list Unix when I've never used UNIX(R). Some people refer to Unix clones as *nix, but I'm afraid that might fly over the heads of some HR people.

    Read the article

  • If your unit test code "smells", does it really matter?

    - by Buttons840
    Usually I just throw my unit tests together using copy and paste and all kinds of other bad practices. The unit tests usually end up looking quite ugly - they're full of "code smell" - but does this really matter? I always tell myself that as long as the "real" code is "good", that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?
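    For what it's worth, one cheap way to remove the copy-and-paste smell without much ceremony is a parameterized test; a sketch using xUnit's [Theory] (the function under test is invented):

        using Xunit;

        public class DiscountTests
        {
            // Hypothetical function under test.
            static double Discounted(double price, int percent)
                => price - price * percent / 100.0;

            // One test body, many cases: the copy-pasted variants collapse
            // into data rows.
            [Theory]
            [InlineData(100.0, 0, 100.0)]
            [InlineData(100.0, 25, 75.0)]
            [InlineData(80.0, 50, 40.0)]
            public void AppliesPercentageDiscount(double price, int percent, double expected)
            {
                Assert.Equal(expected, Discounted(price, percent), precision: 10);
            }
        }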

    Read the article

  • Analysing Group & Individual Member Performance - RUP

    - by user23871
    I am writing a report which requires an analysis of the performance of each individual team member. This is for a software development project developed using the Unified Process (UP). I was just wondering if there are any existing group and individual appraisal metrics, so I don't have to reinvent the wheel... EDIT This is by no means correct, but something like: Individual Contribution (IC) = time spent (individual) / time spent (total); Performance = ? (this should combine Individual Contribution (IC) with something else to gain a measure of overall performance). Maybe I am talking complete hash, and I know it's generally really difficult to analyse performance with numbers, but are there any mathematicians out there who can lend a hand, or who know a somewhat more accurate method of analysing performance than arbitrary marking (e.g. 8 out of 10)?

    Read the article

  • Company wants to write a custom project management tool rather than use a third-party product

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For the development time it would cost us to build our own project management module, we could buy numerous licences for a third-party product. One concern is that, whilst we are writing code for a client and using our custom Intranet project management module, we may find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third-party tools are so feature-rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about.
    EDIT: Thanks for the great responses, much appreciated. To summarize some of them:
    Money: Naturally my boss does want to save money by not forking out a few hundred £'s for licences. However, for us to write a custom tool, it will take x number of days multiplied by approx £500, which is our cost. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so customized to our needs (and downright basic) that in order to give it to another client, I can see us having to fork the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse.
    Features: Having our own custom module means no feature bloat - only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again, it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser - why? What's the point? Although management are asking for this tool, just because they asked for it does not mean I'm going to please them and do it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, and it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :) Hmm, I've not been able to make this question a wiki for some reason, thus I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • Why do Windows Forms / Swing frameworks favour inheritance instead of Composition?

    - by devoured elysium
    Today a professor of mine commented that he found it odd that while SWT's philosophy is one of making your own controls by composition, Swing seems to favour inheritance. I have almost no contact with either framework, but from what I remember, in C#'s Windows Forms one usually extends controls, just like in Swing. Given that people generally tend to prefer composition over inheritance, why didn't the Swing/Windows Forms folks favour composition instead of inheritance?
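    For concreteness, a hedged Windows Forms sketch of the two styles (the control names are invented; the composition case uses UserControl, the framework's usual aggregation seam):

        using System;
        using System.Windows.Forms;

        // Inheritance: customize behaviour by subclassing an existing control.
        class NumericTextBox : TextBox
        {
            protected override void OnKeyPress(KeyPressEventArgs e)
            {
                // Swallow anything that isn't a digit or a control key.
                if (!char.IsControl(e.KeyChar) && !char.IsDigit(e.KeyChar))
                    e.Handled = true;
                base.OnKeyPress(e);
            }
        }

        // Composition: build a new control by aggregating existing ones and
        // exposing only what callers need.
        class LabeledInput : UserControl
        {
            readonly Label label = new Label();
            readonly TextBox box = new TextBox();

            public LabeledInput(string caption)
            {
                label.Text = caption;
                label.Dock = DockStyle.Left;
                box.Dock = DockStyle.Fill;
                Controls.Add(box);      // added first => fills the remaining space
                Controls.Add(label);
                Height = box.Height;
            }

            public string Value => box.Text;
        }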

    Read the article

  • Is there really an object-relational impedance mismatch?

    - by user52763
    It is always stated that it is hard to store application objects in relational databases - the object-relational impedance mismatch - and that this is why document databases are better. However, is there really an impedance mismatch? An object has a key (albeit possibly hidden away by the runtime as a pointer to memory), a set of values, and foreign keys to other objects. Objects are as much made up of tables as they are of documents - neither really fits. I can see a use for databases that model the data into specific shapes for scenarios in the application - e.g. to speed up lookups and avoid joins, etc. - but wouldn't it be better to keep the data as normalized as possible at the core, and transform it as required?

    Read the article

  • What's the best structure for a repository?

    - by jpmelos
    I've looked into many open-source software repositories, and I've found some common elements and some things people do differently from one another. For example, every repository has a README file, an INSTALL file, a COPYING file and the like. Other things differ: some projects, like git, have their source code at the root level, while others have the source code in a src/ folder, and others still, like the Linux kernel, have the source code spread across different folders at the root level that divide the code by area; some have their tests in a t/ folder, others in a tests/ folder, or named otherwise; some have files about submitting patches and who the maintainers are, and those might be inside some Documentation/ folder or at the root level. Are there recommendations? A best practice? For example: personally, I don't like the code at the root level, git-fashion. It looks messy and confuses anyone trying to start as a contributor (especially because they have some code inside folders, and scripts at the root level as well; it's really messy). If I were to start a project of my own and wanted to get it right from the start, are there recommendations? Best practices? How can I make a clean and clear structure? Thank you!
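    For illustration only - there is no single standard - one layout that many projects converge on looks roughly like this:

        project/
        ├── README            (what it is, where to start)
        ├── INSTALL           (build and install instructions)
        ├── COPYING           (license)
        ├── CONTRIBUTING      (how to submit patches)
        ├── MAINTAINERS       (who reviews what)
        ├── src/              (production code, subdivided by area)
        ├── tests/            (test suite, mirroring src/ where practical)
        ├── docs/             (user and developer documentation)
        └── scripts/          (build and maintenance helpers)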

    Read the article

  • Avoiding Object Oriented Pitfalls, Migrating from C, What Worked for You?

    - by Stephen
    I've been programming in procedural languages for quite some time now, and my first reaction to a problem is to start breaking it down into tasks to perform, rather than to consider the different entities (objects) that exist and their relationships. I have had a university course in OOP, and understand the fundamentals of encapsulation, data abstraction, polymorphism, modularity and inheritance. I read Learning to think in the Object Oriented Way and Learning object oriented thinking, and will be looking at some of the books pointed to in those answers. I think that several of my medium-to-large-sized projects will benefit from effective use of OOP, but as a novice I would like to avoid time-consuming, common errors. Based on your experiences, what are these pitfalls, and what are reasonable ways around them? If you could explain why they are pitfalls, and how your suggestion is effective in addressing the issue, it'd be appreciated. I'm thinking along the lines of something like: "Is it common to have a fair number of observer and modifier methods and use private variables, or are there techniques for consolidating/reducing them?" I'm not wedded to using C++ as a pure OO language if there are good reasons to mix approaches. (Reminiscent of the reasons to use GOTOs, albeit sparingly.) Thank you!

    Read the article

  • Great Programmer Productivity - Accounting for a 10,000-fold Difference?

    - by TheImpact
    "A great lathe operator commands several times the wage of an average lathe operator, but a great writer of software code is worth 10,000 times the price of an average software writer." - Bill Gates Say there's a "great" software engineer and an "average" software engineer on the same team. How can you account for one engineer being 10,000 times more productive? I can't quite fathom this, given they're both taking on their share of features, bugs and investigations, and consistently deliver with quality. Would my description possibly justify them to be above "average"? "great"? In a corporation like Microsoft, what % of software engineers are "average"? What % "great"?

    Read the article

  • Implementing the transport layer for a SIP UAC

    - by Jonathan Henson
    I have a somewhat simple, but specific, question about implementing the transport layer for a SIP UAC. Do I expect the response to a request on the same socket that I sent the request on, or do I let the UDP or TCP listener pick up the response and then route it to the correct transaction from there? The RFC does not seem to say anything on the matter. It seems, especially with UDP, which is connectionless, that I should just let the listeners pick up the response, but that feels counter-intuitive. In particular, I have seen plenty of UAC implementations which do not depend on having a listener in the transport layer. Also, most implementations I have looked at do not have the UAS receiving loop responding on the socket at all. This would tend to indicate that the client should not expect a reply on the socket that it sent the request on. For clarification, suppose my transport layer consists of the following elements:
    - TCPClient (sends requests for a UAC via TCP)
    - UDPClient (sends requests for a UAC via UDP)
    - TCPServer (loop receiving requests and dispatching to the transaction layer via TCP)
    - UDPServer (loop receiving requests and dispatching to the transaction layer via UDP)
    Obviously, the *Client sends my requests. The question is: what receives the response? The *Client waiting on a recv or recvfrom call on the socket it used to send the request, or the *Server? Conversely, the *Server receives my requests. What sends the response? The *Client? Doesn't this break the roles of each member a bit?
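    For what it's worth, a hedged C# sketch of the same-socket UDP pattern (the SIP message is a bare placeholder, and this ignores retransmission timers and the Via/rport machinery entirely): since UDP responses travel back toward the source address and port of the request, a UAC that waits on the socket it sent from will see the reply:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class UdpUacSketch
        {
            static void Main()
            {
                // One socket for both directions: the OS assigns a local port
                // on first use, and the response comes back to that same port.
                using var socket = new UdpClient();
                var server = new IPEndPoint(IPAddress.Loopback, 5060);

                byte[] request = Encoding.ASCII.GetBytes(
                    "OPTIONS sip:example.invalid SIP/2.0\r\n\r\n"); // placeholder only
                socket.Send(request, request.Length, server);

                // Block on the same socket for the reply; a real transport
                // layer would hand this back to the transaction layer.
                var remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] response = socket.Receive(ref remote);
                Console.WriteLine(Encoding.ASCII.GetString(response));
            }
        }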

    Read the article
