Search Results


  • Developer momentum on open source projects

    - by sashang
    Hi. I've been struggling to develop momentum contributing to open source projects. In the past I tried with gcc and contributed a fix to libstdc++, but it was a one-off, and even though I spent months of my spare time on the dev mailing list and reading through things, I just never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back and uncluttered my mailbox. Like a lot of people I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project and want to know how people got started, because I find it difficult while working full time to develop any momentum with the code base. Other, more regular contributors, who are on the project full-time, are able to make changes at will and as a result enter that positive feedback cycle where they understand the code and also know where it's heading. It makes the barrier to entry higher for those who come along later. My questions are for people who actively contribute to large open source projects, like the Linux kernel, or gcc, or clang/llvm, or anything else with, say, a developer head count of more than 10. How did you get started? Was there a large chunk of time in your life that you could just dedicate to working on the project? (I know in Linus's case he had a chunk of time - 6 months - to get it started.) What barriers to entry did you encounter? Can you describe the initial stages of your time with the project, from when you had little understanding of the code to when you understood enough to commit regularly? Thanks


  • Developing Salesforce apps in Ruby on Rails

    - by Robert S.
    I want to build a web app that uses Salesforce.com data, and I want to build it fast. I'm a .NET developer (WPF, C#, ASP.NET MVC). I understand Ruby and RoR fairly well, but I haven't delivered any Rails apps. I'm wondering, is Ruby on Rails a suitable tool for rapidly building Salesforce apps, or is it better for the "traditional" web2.0 stuff like Groupon and Twitter? In other words, would using RoR help me achieve my fast (e.g., three months) goal over using .NET, which I already know?


  • Which language is best for OpenGL?

    - by Neurofluxation
    Well, this is silly - I've had to post a question about programming on a site other than stackoverflow... Worst thing they have ever done. I have the following list of languages: C++, C#, VB.Net, D, Delphi, Euphoria, GLUT, Java, Power Basic, Python, REALbasic, Visual Basic. I would like to know what people think is: a) the best (most efficient) language to work with OpenGL, and b) the easiest language to work with OpenGL. Thanks in advance guys n gals!


  • What are the essential things one needs to know about UML?

    - by Hanno Fietz
    I want my scribbles of a program's design and behaviour to become more streamlined and have a common language with other developers. I looked at UML and in principle it seems to be what I'm looking for, but it also seems to be overkill. The information I found online also seems very bloated and academic. How can I understand UML in a plain-English way, enough to be able to explain it to my colleagues? What are the canonical resources for understanding UML at a ground level?


  • What should come first: testing or code review?

    - by Silver Light
    Hello! I'm quite new to programming design patterns and life cycles, and I was wondering: what should come first, code review or testing, given that they are done by separate people? On the one hand, why bother reviewing code if nobody has checked whether it even works? On the other, some errors can be found early if you do the review before testing. Which approach is recommended, and why? Thank you!


  • Method chaining vs encapsulation

    - by Oak
    There is the classic OOP problem of method chaining vs "single-access-point" methods: main.getA().getB().getC().transmogrify(x, y) vs main.getA().transmogrifyMyC(x, y). The first seems to have the advantage that each class is only responsible for a smaller set of operations, and it makes everything a lot more modular - adding a method to C doesn't require any effort in A or B to expose it. The downside, of course, is weaker encapsulation, which the second approach solves: now A has control of every method that passes through it, and can delegate it to its fields if it wants to. I realize there's no single solution and it of course depends on context, but I would really like to hear some input about other important differences between the two styles, and under what circumstances I should prefer either of them - because right now, when I try to design some code, I feel like I'm just not using any real arguments to decide one way or the other.
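    For concreteness, a minimal C# sketch of the two styles, reusing the A/B/C/transmogrify names from the question (the bodies are stubs and the field layout is assumed):

        // Style 1: method chaining - callers navigate the object graph themselves:
        //   main.GetA().GetB().GetC().Transmogrify(x, y);
        // Style 2: single access point - A exposes the operation and delegates internally:
        //   main.GetA().TransmogrifyMyC(x, y);
        class C
        {
            public void Transmogrify(int x, int y) { /* ... */ }
        }

        class B
        {
            public C C { get; } = new C();
        }

        class A
        {
            private readonly B _b = new B();

            // Chaining style: expose the internals and let callers reach through them.
            public B GetB() => _b;

            // Encapsulated style: callers never learn that C lives inside B,
            // so A can reorganise its internals without breaking them.
            public void TransmogrifyMyC(int x, int y) => _b.C.Transmogrify(x, y);
        }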


  • Why does a proportional controller have a steady state error?

    - by Qantas 94 Heavy
    I've read about feedback loops, how large this steady-state error is for a given gain, and what to do to remove it (add integral and/or derivative terms to the controller), but I don't understand at all why this steady-state error occurs in the first place. If I understand correctly how a proportional controller works, the output is equal to the current output plus the error multiplied by the proportional gain (Kp). However, wouldn't the error then slowly diminish over time as it keeps being added (reaching 0 at infinite time), rather than leaving a steady-state error? From my confusion, it seems I'm completely misunderstanding how it works - a proper explanation of how this steady-state error comes about would be fantastic.
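    To see where the error comes from: with a pure P controller the control signal is only Kp times the current error, so the plant can only hold a non-zero output while some error remains. A quick simulation sketch (the first-order plant below is an assumption, not from the question) settles at an error of r/(1 + Kp) rather than 0:

        using System;

        class PControllerDemo
        {
            static void Main()
            {
                // Assumed plant: dx/dt = (-x + u) / tau.  Pure P control: u = Kp * (r - x).
                double kp = 4.0, tau = 1.0, r = 1.0, x = 0.0, dt = 0.001;

                for (int i = 0; i < 20000; i++)      // simulate 20 s, long enough to settle
                {
                    double u = kp * (r - x);         // proportional controller
                    x += dt * (-x + u) / tau;        // Euler step of the plant
                }

                // Settles at x = Kp/(1+Kp) = 0.8, so the error stays at r/(1+Kp) = 0.2.
                Console.WriteLine($"output = {x:F3}, steady-state error = {r - x:F3}");
            }
        }

    Raising Kp shrinks the residual error but never removes it; that is what the integral term is for.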


  • Self-documenting code vs Javadocs?

    - by Andiaz
    Recently I've been working on refactoring parts of the code base I'm currently dealing with - not only to understand it better myself, but also to make it easier for others who are working on the code. I tend to lean towards thinking that self-documenting code is nice. I just think it's cleaner, and if the code speaks for itself, well... that's great. On the other hand we have documentation such as Javadocs. I like this as well, but there's a certain risk that comments there get outdated (as do comments in general, of course). However, if they are up to date they can be extremely useful for, say, understanding a complex algorithm. What are the best practices for this? Where do you draw the line between self-documenting code and Javadocs?


  • Custom graph comparison?

    - by user57828
    I'm trying to compare two graphs using hash values (i.e., at comparison time I want to avoid traversing the graphs). Is there a way to build a hash function such that comparing the hash values can also determine at which height the graphs differ? The comparison between two graphs is to be made by comparing children at a certain level. One way to compare the graphs is to compute a final hash value for the root node and compare those, but that wouldn't directly reflect at which level the graphs differ, since their immediate children might be the same (or any other case).
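    A sketch of one approach, assuming the graphs are rooted and acyclic so that "level" is well defined (the Node type and the hashing scheme below are made up for illustration): hash each level separately during a breadth-first walk, keep the list of per-level hashes, and compare the two lists to find the first level that differs.

        using System;
        using System.Collections.Generic;

        class Node
        {
            public string Label = "";
            public List<Node> Children = new List<Node>();
        }

        static class GraphHash
        {
            // One hash per level, computed breadth-first. Equal hashes don't prove the
            // levels are equal (collisions are possible), but unequal hashes prove they differ.
            public static List<int> LevelHashes(Node root)
            {
                var hashes = new List<int>();
                var level = new List<Node> { root };
                while (level.Count > 0)
                {
                    int h = 17;
                    var next = new List<Node>();
                    foreach (var node in level)
                    {
                        // string.GetHashCode() is per-process on modern .NET;
                        // use a stable hash if these values are stored or exchanged.
                        h = h * 31 + node.Label.GetHashCode();
                        next.AddRange(node.Children);
                    }
                    hashes.Add(h);
                    level = next;
                }
                return hashes;
            }

            // Index of the first level whose hashes differ, or -1 if none do.
            public static int FirstDifferingLevel(List<int> a, List<int> b)
            {
                for (int i = 0; i < Math.Max(a.Count, b.Count); i++)
                    if (i >= a.Count || i >= b.Count || a[i] != b[i]) return i;
                return -1;
            }
        }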


  • Better way to track defect sources in TFS

    - by deostroll
    What is the best way to track defect sources in TFS? We have various teams for a project, like the vulnerability team, the customer, pre-sales, etc. We give them a build and these teams independently test it. They do not have access to our TFS system, so they usually send in their defects via email, usually in an Excel format. Our testing team takes these up and logs them into TFS. Sometimes they modify the original defect description (in the Excel file) and add the expected/actual results; sometimes they forget to cite the source. I am talking about managing the various sources as such. Is there a way we can add these sources into TFS, and actually link each source with its defects, with individual comments associated with them (saying where in the source we can find the actual material for the defect)?


  • Lack of PHP experience = reduced compensation?

    - by Jake Tacholsavacky
    I interviewed for a C++ position at this company, and while they say that I knocked my interview out of the park, they also say that I am too senior for the position. Instead they would like to offer me a job programming in PHP. Unfortunately my PHP experience is limited. Because of this, they are expecting me to take a significant drop in salary with the hope that I will master PHP within a year and be promoted at my first annual performance review. I don't believe that this is reasonable. I agree with Joel when he writes, "Don’t look for experience with particular technologies." If they thought I was such a superstar, then they should have realized that I am capable of being productive in PHP within weeks, not a year! It seems like just an excuse to underpay me. Notwithstanding the fact that I was insulted by this offer, I think that I would be taking on much more risk than they would be; they won't guarantee what my post-PHP salary would be. What does the Stack Exchange community think about this offer? Would you take it?


  • Separation of project responsibilities in new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap and so help make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to stay on and some might filter off towards the end). Our team is going to be small, so this will help out a bit, I believe.

    The team will essentially consist of:
      - 3 x developers (1 slightly more experienced, who will be the lead)
      - 1 x project manager / product owner / tester
      - An external company responsible for doing our design work

    General project/development decisions so far have included:
      - Develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company)
      - Use MVVM architecture
      - Use Ninject and DI where possible
      - Attempt to use TDD as much as possible to drive development
      - Keep our controllers as skinny as possible
      - Keep our views as simple as possible

    During our discussions two approaches have been broached as to how to separate the workload, given the objectives outlined above.

    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

      View prototypes (Graphic designer)
        |
        - Mockups
        |
      Views (Razor and view helpers etc.) & JavaScript (Developer 1)
        |
        - View models (integration point)
        |
      Controllers and application logic (Developer 2)
        |
        - Models (integration point)
        |
      Domain model and persistence (Developer 3)

    PROS:
      - Integration points are quite clear, so developers can work without dependencies on others fairly easily
      - Code practices such as naming conventions and style are more easily kept consistent, as primarily only one developer will be handling an area

    CONS:
      - Completion of an entire feature becomes a bit grey, as no single person is responsible for an entire feature (story?)
      - A person might not have a full appreciation for all areas of the project, so code overlap might be lacking if that person suddenly left

    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task, from view to controller to model.

    PROS:
      - A person is responsible for one entire feature, so its "complete" state can be clearly defined
      - Code overlap into different areas will occur, so each individual has good coverage over the entire application

    CONS:
      - Overlap of development will occur in all the modules, and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat?
      - Following a convention might be harder, as developers are adding to all areas of the project
      - If a developer sets up a way of doing things, would it be harder to get the other developers to follow that convention, or even build on it (or even discuss it)? Dunno..
      - Bugs could more easily be introduced into areas not thought about by the developer
      - It possibly becomes easier to carry a team member, in so far as one member just hacks code together to complete a task whilst another takes the time to build a foundation that could be used by others and so make future tasks easier, i.e. starts building a framework?

    QUESTION: As it might appear, I'm more in favor of option 1; however, I'm interested to see how others might have approached this, or what the standard or best or preferred way of undertaking a project is. Or indeed any different approach to handling this?


  • Is there an excuse for short variable names?

    - by KChaloux
    This has become a large frustration with the codebase I'm currently working in; many of our variable names are short and undescriptive. I'm the only developer left on the project, and there isn't documentation as to what most of them do, so I have to spend extra time tracking down what they represent. For example, I was reading over some code that updates the definition of an optical surface. The variables set at the start were as follows:

        double dR, dCV, dK, dDin, dDout, dRin, dRout;
        dR  = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        dCV = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        ... and so on

    Maybe it's just me, but it told me essentially nothing about what they represented, which made understanding the code further down difficult. All I knew was that each was a value parsed out of a specific row of a specific table, somewhere. After some searching, I found out what they meant:

        dR    = radius
        dCV   = curvature
        dK    = conic constant
        dDin  = inner aperture
        dDout = outer aperture
        dRin  = inner radius
        dRout = outer radius

    I renamed them to essentially what I have up there. It lengthens some lines, but I feel like that's a fair trade-off. This kind of naming scheme is used throughout a lot of the code, however. I'm not sure if it's an artifact from developers who learned by working with older systems, or if there's a deeper reason behind it. Is there a good reason to name variables this way, or am I justified in updating them to more descriptive names as I come across them?


  • Graphics card for parallel programming vs traditional methods

    - by Sambatyon
    With a simple search on Amazon one can see that the modern approach to parallel programming is to use your graphics card. However, I am still a little bit skeptical about it. My latest computer has an 8-core CPU, which is enough for all my basic parallel needs; if I need more I will probably use MPI over a network using my old machines. All in all, why and/or when should I use CUDA or another method that uses my graphics card, instead of traditional methods like pthreads, Java threads, Boost threads or the new C++11 threads? What about using processes?
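    For the CPU side of that comparison, a minimal data-parallel sketch is shown below (written in C# to match the other code excerpts on this page; the pthreads/C++11 equivalents have the same shape). GPU approaches such as CUDA generally only pay off when the per-element work is heavy enough to amortise copying the data to and from the device.

        using System;
        using System.Threading.Tasks;

        class CpuBaseline
        {
            static void Main()
            {
                var input = new double[1000000];
                var output = new double[input.Length];
                var rng = new Random(42);
                for (int i = 0; i < input.Length; i++) input[i] = rng.NextDouble();

                // Data-parallel loop across all available cores; each iteration is independent.
                Parallel.For(0, input.Length, i =>
                {
                    output[i] = Math.Sqrt(input[i]) * Math.Sin(input[i]);
                });

                Console.WriteLine(output[0]);
            }
        }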


  • Backtrack My "Education"

    - by perl.j
    About a year or so ago, I decided to start programming. I really just jumped into a language (Perl) and went from there. What I regret is that I just jumped in: I didn't learn the basics (if you would call them basics), and I didn't learn about computer science. This issue, I believe, is holding me back. Where should I "restart"? Are there any books, articles, etc. that I should read? Are there any topics an experienced programmer should know? What's your advice? Thanks! Please don't advise me to take a college/high-school class.


  • Is there a quasi-standard set of attributes to annotate thread safety, immutability etc.?

    - by Eugene Beresovksy
    Except for a blog post here and there describing custom attributes someone created, which do not seem to get any traction - like one describing how to enforce immutability, or another on Documenting Thread Safety, modeling the attributes after the JCIP annotations - is there any standard emerging? Anything MS might be planning for the future? This is something that should be standard if there's to be any chance of concurrency-wise interoperability between libraries, both for documentation purposes and also to feed static/dynamic test tools. If MS isn't doing anything in that direction, it could be done on CodePlex - but I couldn't find anything there, either. <opinion>Concurrency and thread safety are really hard in imperative and object languages like C# and Java; we should try to tame them until we hopefully switch to more appropriate languages.</opinion>
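    As far as I know there is no emerging standard; a minimal sketch of the kind of documentation-only attributes those blog posts describe might look like the following (the names are hypothetical, not from any shipping library - nothing enforces them, but analysis tools or reviews could key off them):

        using System;

        [AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct)]
        sealed class ImmutableAttribute : Attribute { }

        [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
        sealed class ThreadSafeAttribute : Attribute { }

        [AttributeUsage(AttributeTargets.Field)]
        sealed class GuardedByAttribute : Attribute
        {
            public string LockName { get; }
            public GuardedByAttribute(string lockName) { LockName = lockName; }
        }

        // Usage: purely declarative documentation of the intended contract.
        [Immutable, ThreadSafe]
        sealed class Money
        {
            public decimal Amount { get; }
            public string Currency { get; }
            public Money(decimal amount, string currency) { Amount = amount; Currency = currency; }
        }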


  • How to make a non-English clone of CoffeeScript?

    - by Ans
    I want to make a non-English programming language that is identical to what CoffeeScript is to JavaScript. What I mean is that I don't want to build my own design or syntax; I just want to have a non-English programming language that compiles to JavaScript. I want to follow everything CoffeeScript follows, so I don't really want to make any design decisions. For example, this is CoffeeScript:

        number = 42
        opposite = true
        number = -42 if opposite

    I want my language to be something like:

        ??? = 42
        ??? = ????
        ??? = -42 ??? ???

    which gets compiled to:

        var number, opposite;
        number = 42;
        opposite = true;
        if (opposite) {
          number = -42;
        }
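    One low-effort way to get that without making any design decisions is a keyword-aliasing pass: rewrite the new language's keywords to CoffeeScript's keywords, then hand the result to the stock CoffeeScript compiler. The sketch below (in C#, like the other snippets on this page; the keyword map is a made-up placeholder) only illustrates the idea - a real pass would have to skip strings, comments and identifiers properly.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class KeywordAliaser
        {
            // Hypothetical mapping from the new language's keywords to CoffeeScript's.
            static readonly Dictionary<string, string> Aliases = new Dictionary<string, string>
            {
                { "si", "if" },
                { "verdadero", "true" },
                { "falso", "false" },
            };

            static string Translate(string source)
            {
                foreach (var pair in Aliases)
                {
                    // \b word boundaries work for Latin scripts; other scripts need a real tokenizer.
                    source = Regex.Replace(source, $@"\b{Regex.Escape(pair.Key)}\b", pair.Value);
                }
                return source; // feed this to the standard CoffeeScript compiler
            }

            static void Main()
            {
                Console.WriteLine(Translate("opuesto = verdadero\nnumero = -42 si opuesto"));
                // prints: opuesto = true
                //         numero = -42 if opuesto
            }
        }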


  • How useful are Lisp macros?

    - by compman
    Common Lisp allows you to write macros that do whatever source transformation you want. Scheme gives you a hygienic pattern-matching system that lets you perform transformations as well. How useful are macros in practice? Paul Graham said in Beating the Averages that: The source code of the Viaweb editor was probably about 20-25% macros. What sorts of things do people actually end up doing with macros?


  • Why were annotations introduced in Spring and Hibernate?

    - by Chandrashekhar
    I would like to know why annotations were introduced in Spring and Hibernate. For earlier versions of both frameworks, book authors were saying that if we keep configuration in XML files it will be easier to maintain (due to decoupling), and that just by changing the XML file we can re-configure the application. If we use annotations in our project and in the future we want to re-configure the application, we have to compile and build the project again. So why were these annotations introduced in these frameworks? From my point of view annotations make the application dependent on a certain framework. Isn't that true?


  • How can I ensure our project gets developed successfully, without having any project management experience? [migrated]

    - by Raven13
    I'm a web developer who is part of a three-man team that has been tasked with a rather large and complex development project. Other than some direction and impetus from management, we're pretty much on our own to develop the new website. None of us have any project management experience nor do my two coworkers seem like they would be interested in taking on that role, so I feel like it's up to me to implement some kind of structure to the development process in order to avoid issues down the road. What can I do as a developer without project management experience to ensure that our project gets developed successfully and avoid the pitfalls of developing a project without a plan?


  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I had only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to build a replacement for their HR software. I used SQL Server, Entity Framework and WPF, along with MVVM and Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (People, Salary, Vehicles/Assets, Statistics, etc.). I use integrated authentication, and thanks to the initial HR build I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard based on their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm part way through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and Miscellaneous Utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.
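    One common answer is to organize by feature/domain rather than by technical layer, so each data domain owns its views, view models and data access, and the cross-cutting plumbing lives in a core namespace. A rough sketch of what that might look like here (the names are illustrative, not prescriptive):

        // Shared plumbing
        namespace Dashboard.Core.Data     { /* DbContext, UnitOfWork, generic repository */ }
        namespace Dashboard.Core.Mvvm     { /* ViewModelBase, RelayCommand, messaging    */ }

        // One sub-namespace (and folder) per data domain
        namespace Dashboard.People.Views        { /* XAML views for the People domain */ }
        namespace Dashboard.People.ViewModels   { /* PersonListViewModel, etc.        */ }
        namespace Dashboard.People.Data         { /* PersonRepository, entity configs */ }

        namespace Dashboard.Vehicles.Views      { }
        namespace Dashboard.Vehicles.ViewModels { }
        namespace Dashboard.Vehicles.Data       { }

    Whether to split the EF model per domain is the same trade-off as bounded contexts: separate models keep each domain small, at the cost of mapping the handful of entities (such as Person) that several domains share.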


  • Converting a 3-dimensional byte array to a single byte array [on hold]

    - by Andrew Simpson
    I have a 3-dimensional byte array. The 3-d array represents a JPEG image: each channel/array represents part of the RGB spectrum. I am not interested in retaining black pixels. A black pixel is represented by the typical arrangement:

        myarray[0,0,0] = 0;
        myarray[0,0,1] = 0;
        myarray[0,0,2] = 0;

    So, I have flattened this 3-d array out to a 1-d array by doing

        byte[] AFlatArray = new byte[width * height * 3];

    and then assigning values respective to the coordinate. But like I said, I do not want black pixels, so this array has to contain only color pixels along with their x,y coordinates. The result I want is to re-represent the image from a 1-dimensional byte array that only contains non-black pixels. How do I do that? It looks like I have to store black pixels as well because of the x,y coordinate system. I have tried writing to a binary file, but the size of that file is greater than the JPEG file, as the JPEG file is compressed. I am using C#.
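    A sketch of one way to do it: store only the non-black pixels together with their coordinates, so the image can be rebuilt without keeping the black ones (the ColorPixel layout below is just an assumption - 2 bytes each for x and y plus 3 color bytes):

        using System.Collections.Generic;

        struct ColorPixel
        {
            public ushort X, Y;
            public byte R, G, B;
        }

        static class SparseImage
        {
            public static List<ColorPixel> ToSparse(byte[,,] pixels, int width, int height)
            {
                var result = new List<ColorPixel>();
                for (int y = 0; y < height; y++)
                    for (int x = 0; x < width; x++)
                    {
                        byte r = pixels[x, y, 0], g = pixels[x, y, 1], b = pixels[x, y, 2];
                        if (r == 0 && g == 0 && b == 0) continue;   // skip black pixels
                        result.Add(new ColorPixel { X = (ushort)x, Y = (ushort)y, R = r, G = g, B = b });
                    }
                return result;
            }

            public static byte[,,] FromSparse(List<ColorPixel> sparse, int width, int height)
            {
                var pixels = new byte[width, height, 3];            // missing entries stay black
                foreach (var p in sparse)
                {
                    pixels[p.X, p.Y, 0] = p.R;
                    pixels[p.X, p.Y, 1] = p.G;
                    pixels[p.X, p.Y, 2] = p.B;
                }
                return pixels;
            }
        }

    Note that this only beats the flat width * height * 3 array when a large share of the pixels are black, and either form written raw will usually still be bigger than the JPEG, because JPEG compresses the data.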


  • Data Transformation Pipeline

    - by davenewza
    I have created a kind of data pipeline to transform coordinate data into more useful information. Here is the shell of the pipeline:

        public class PositionPipeline
        {
            protected List<IPipelineComponent> components;

            public PositionPipeline()
            {
                components = new List<IPipelineComponent>();
            }

            public PositionPipelineEntity Process(PositionPipelineEntity position)
            {
                foreach (var component in components)
                {
                    position = component.Execute(position);
                }
                return position;
            }

            public PositionPipeline RegisterComponent(IPipelineComponent component)
            {
                components.Add(component);
                return this;
            }
        }

    Every IPipelineComponent accepts and returns the same type - a PositionPipelineEntity:

        public interface IPipelineComponent
        {
            PositionPipelineEntity Execute(PositionPipelineEntity position);
        }

    The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant by the end of the pipeline. For example, these components could be executed:

      - TransformCoordinatesComponent: parse the raw coordinate data into a Coordinate type.
      - DetermineCountryComponent: determine and store the country code.
      - DetermineOnRoadComponent: determine and store whether the coordinate is on a road.

        pipeline
            .RegisterComponent(new TransformCoordinatesComponent())
            .RegisterComponent(new DetermineCountryComponent())
            .RegisterComponent(new DetermineOnRoadComponent());

        pipeline.Process(positionPipelineEntity);

    The PositionPipelineEntity type:

        public class PositionPipelineEntity
        {
            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLatitude { get; set; }

            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLongitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLatitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLongitude { get; set; }

            // Set in DetermineCountryComponent, not required anywhere.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public string CountryCode { get; set; }

            // Set in DetermineOnRoadComponent, not required anywhere.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public bool OnRoad { get; set; }
        }

    Problems: I'm very concerned about the dependency that a component has on particular properties. One way to solve this would be to create specific types for each component, but then I cannot chain them together like this. The other problem is that the order of components in the pipeline matters - there is some dependency - and the current structure does not provide any static or runtime checking for such a thing. Any feedback would be appreciated.
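    One way to get compile-time checking of both the property dependencies and the ordering is to give each component explicit input and output types and let the pipeline compose them - sketched below as an alternative shape, not a drop-in replacement (the stage and entity type names in the usage comment are hypothetical):

        using System;

        // Each stage declares what it needs and what it produces, so registering
        // the stages in an impossible order simply does not compile.
        public interface IPipelineComponent<TIn, TOut>
        {
            TOut Execute(TIn input);
        }

        public sealed class Pipeline<TIn, TOut>
        {
            private readonly Func<TIn, TOut> _run;
            internal Pipeline(Func<TIn, TOut> run) { _run = run; }

            public Pipeline<TIn, TNext> Then<TNext>(IPipelineComponent<TOut, TNext> next) =>
                new Pipeline<TIn, TNext>(input => next.Execute(_run(input)));

            public TOut Process(TIn input) => _run(input);
        }

        public static class Pipeline
        {
            public static Pipeline<T, T> Start<T>() => new Pipeline<T, T>(x => x);
        }

        // Usage sketch:
        //   var pipeline = Pipeline.Start<RawPosition>()
        //       .Then(new TransformCoordinatesComponent())   // RawPosition -> Coordinates
        //       .Then(new DetermineCountryComponent())        // Coordinates -> CountryInfo
        //       .Then(new DetermineOnRoadComponent());        // CountryInfo -> OnRoadInfo
        //   var result = pipeline.Process(rawPosition);

    The downside is that intermediate results must be carried forward explicitly in each stage's output type instead of accumulating on one entity.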


  • Algorithms or patterns for a linked question and answer cost calculator

    - by kmc
    I've been asked to build an online calculator in PHP (and the Laravel framework). It will take the answers to a series of questions and estimate the cost of a home extension. For example, a couple of questions might be: What is the lie of your property - flat, slightly inclined, or heavily inclined? (These suggestive labels could map to specific values in the underlying calculator, like 0 degrees, 5 degrees, 10 degrees.) What is your current flooring system - wooden, or concrete? These answers would then impact the results of other questions. Once the size of the extension has been input, the lie of the land will affect how much the site works will cost, and how much rubbish collection will cost. The second question will impact the cost of the extension's flooring, as stumping and laying floorboards is a different cost to laying foundations and a concrete slab. It will also influence what heating and cooling systems are available in the calculator. So it's VERY interlinked: the answer to any question can influence the options of other questions, and the end result. I'm having trouble figuring out an approach to this that will allow new options and questions to be plugged in at a later stage without having things too coupled. The Observer pattern, or Laravel's events, may be handy, but currently the sheer breadth of the calculator has me struggling to gather my thoughts and see a sensible implementation. Are there any patterns or OO approaches that may help? Thanks!
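    One pattern that can fit: treat each answer as a fact and express every influence (cost items, and equally which options are shown) as a small rule keyed off those facts, so new questions and rules can be plugged in without touching existing ones. A rough sketch of the cost side (in C# to match the other snippets on this page - the Laravel version would have the same shape, and every name and figure below is made up):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Answers collected so far, keyed by question id.
        class Answers : Dictionary<string, string> { }

        // A rule contributes cost only when its conditions on earlier answers hold.
        class CostRule
        {
            public string Name = "";
            public Func<Answers, bool> AppliesTo = _ => true;
            public Func<Answers, decimal> Cost = _ => 0m;
        }

        class Calculator
        {
            readonly List<CostRule> _rules = new List<CostRule>();
            public Calculator Add(CostRule rule) { _rules.Add(rule); return this; }

            public decimal Estimate(Answers answers) =>
                _rules.Where(r => r.AppliesTo(answers)).Sum(r => r.Cost(answers));
        }

        static class Demo
        {
            static void Main()
            {
                var calc = new Calculator()
                    .Add(new CostRule
                    {
                        Name = "Site works on a slope",
                        AppliesTo = a => a["lie"] != "flat",
                        Cost = a => decimal.Parse(a["area_m2"]) * (a["lie"] == "heavy" ? 90m : 45m)
                    })
                    .Add(new CostRule
                    {
                        Name = "Concrete slab",
                        AppliesTo = a => a["floor"] == "concrete",
                        Cost = a => decimal.Parse(a["area_m2"]) * 120m
                    });

                var answers = new Answers { ["lie"] = "heavy", ["floor"] = "concrete", ["area_m2"] = "40" };
                Console.WriteLine(calc.Estimate(answers));   // 40*90 + 40*120 = 8400
            }
        }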


  • I don't understand the definition of side effects

    - by Chris Okyen
    I don't understand the Wikipedia article on side effects:

        In computer science, a function or expression is said to have a side effect if, in addition to returning a value, it also 1) modifies some state or 2) has an observable interaction with calling functions or the outside world.

    I know an example of the first thing that causes a function or expression to have side effects - modifying a state:

        int foo(int x) { return x = x % x; }
        a = a + 1;

    What does 2) - "has an observable interaction with calling functions or the outside world" - mean? Please give an example. The article continues on to say: "For example, a function might modify a global or static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, or call other side-effecting functions..." Are all these examples examples of 1) - modifying some state - or are they also part of 2) - having an observable interaction with calling functions or the outside world?
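    A quick sketch of the distinction in C# (the functions are made up, but the classification is the standard one):

        using System;
        using System.IO;

        class SideEffects
        {
            static int _counter;

            // 1) Modifies state outside its own scope: the static field changes.
            static int NextId()
            {
                _counter++;
                return _counter;
            }

            // 2) Observable interaction with the outside world: writing to the console
            //    and appending to a file are effects anything outside the call can see.
            static int AddAndLog(int a, int b)
            {
                int sum = a + b;
                Console.WriteLine($"sum = {sum}");
                File.AppendAllText("log.txt", sum + Environment.NewLine);
                return sum;
            }

            // No side effects: the result depends only on the arguments, and
            // nothing outside the function can tell that it was called.
            static int Add(int a, int b) => a + b;
        }

    Roughly, modifying a global, a static variable or an argument falls under 1), while writing to a display or file, reading input and raising an exception are usually filed under 2); calling another side-effecting function inherits whichever kind that function has.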

