Search Results

Search found 25198 results on 1008 pages for 'non programmers'.


  • Web Application Tasks Estimation

    - by Ali
    I know the answer depends on the exact project and its requirements, but I am asking, statistically and from your own experience, what average percentage of the total time goes to each task of web application development. While developing a database-driven web application, how much of the time does each of the following activities usually take:
    -- database creation and all related stored procedures
    -- server-side development
    -- client-side development
    -- layout and visual design
    I know there are lots of great web application developers around here, and each of you has done a fair amount of web development, so there could be an almost fixed percentage of time going to each of the above activities for a standard project. Update: I am not expecting someone to tell me a number of hours; I am asking about the average percentage of time spent on each activity, as per your experience, e.g. server-side development 50%, client-side development 20%, and so on. I repeat: there will be many cases that differ from the standard, depending on the exact requirements of each web application project, but here I am asking about the average for a standard web project with no special requirements.

    Read the article

  • Can I use CodeSynthesis XSD (C++/Tree mapping) together with a GPLv3-licensed library?

    - by Erik Sjölund
    Is it possible to write an open source project that uses code generated by CodeSynthesis XSD (C++/Tree) and then link it against a third-party library licensed under the GPL version 3? Some background information: CodeSynthesis XSD is licensed under the GPL version 2, but with an extra FLOSS exception (http://www.codesynthesis.com/projects/xsd/FLOSSE). C++ source code generated by CodeSynthesis XSD (C++/Tree) needs to be linked against Xerces (http://xerces.apache.org/xerces-c/), which is licensed under the Apache License 2.0. Update: I posted a similar question on the xsd-users mailing list two years ago, but I didn't fully understand the answers. In that email thread, I wrote: I think it is the GPL version 3 software that doesn't allow itself to be linked to software that can't be "relicensed" to GPL version 3 (for instance, GPL version 2 software). That would also include XSD, as the FLOSS exception doesn't give permission to "relicense" XSD to GPL version 3.

    Read the article

  • Do private static methods in C# hurt anything?

    - by fish
    I created a private validation method for a certain validation that happens multiple times in my class (I can't store the validated data, for various reasons). Now ReSharper suggests that the method could be made static. I'm a little reluctant to do so, due to the known problems with static methods. It would be a private static method. My question is: can private static methods cause coupling and testing problems similar to those caused by public static methods? Is it a bad practice? I would guess not, but I'm not sure whether there is a pitfall here.
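
    A minimal C# sketch of the situation described (names hypothetical). The helper is pure: it reads no instance or global state, which is exactly the case where the usual coupling and test-seam objections to statics do not apply:

        using System;

        class OrderProcessor
        {
            public void Process(string sin)
            {
                ValidateSin(sin);
                // ... work with the validated value ...
            }

            // Private, static, and side-effect free: callers outside the class
            // cannot even see it, so it creates no external coupling and needs
            // no seam for testing.
            private static void ValidateSin(string sin)
            {
                if (string.IsNullOrWhiteSpace(sin))
                    throw new ArgumentException("SIN must be specified");
            }
        }

    The statics that hurt are the ones hiding dependencies on shared mutable state; a private pure function has none.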

    Read the article

  • How should I manage "reverting" a branch done with bookmarks in mercurial?

    - by Earlz
    I have an open source project on Bitbucket. Recently I've been working on an experimental branch which (for whatever reason) I didn't make an actual branch for. Instead, I used bookmarks. So I made two bookmarks at the same revision:
    -- test: the new code I worked on, which should now be abandoned (the experiment failed)
    -- main: the stable old code that works
    I worked in test. I also pushed from test to my server, which ended up moving the tip tag to the new unstable code, when I really would rather have had it stay at main. I "switched" back to the main bookmark by doing hg update main and then committing an insignificant change. Then I pushed this with hg push -f, and now my source control is "correct" on the server. I know there must be a cleaner way to "switch" branches. What should I do in the future for this kind of operation?
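
    For reference, a sketch of the cleaner bookmark flow (bookmark names as above): updating to a bookmark activates it, so no dummy commit is needed, and the push can be limited so the server never sees the abandoned experiment.

        # create and activate the experimental bookmark; commits move it along
        hg bookmark test
        # ... hack, commit ...

        # switch back to the stable line; updating to a bookmark activates it,
        # so no "insignificant change" commit is required
        hg update main

        # push only the changesets reachable from main, plus the bookmark
        # itself, leaving the abandoned experiment local
        hg push -B main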

    Read the article

  • What principles does your software engineering or development organization follow?

    - by user11347
    What principles does your software engineering or development organization follow? I am very interested in seeing a list of principles from someone who works at a company where these principles are discussed, published, followed, and so on. The closest thing I have seen to a principles-based engineering organization is companies that are agile and follow the agile principles. Here is a list of Marick's values/challenges: http://www.agilejourneyman.com/2010/02/4-challenges-and-5-guiding-values-of.html I am looking for pointers to more material like this. Ideally, I'd like to hear from people who have actually implemented a principles-based approach in their organization.

    Read the article

  • Why do some consider static analysis testing and others do not?

    - by user970696
    While preparing for the ISTQB certification, I found that they actually call static analysis "static testing", while some engineering books distinguish between static analysis and testing, the latter being a dynamic activity. I tend to think that static analysis is not testing in the true sense, as it does not test; it checks/verifies. But I would love to hear the opinion of the true experts here. Thank you.

    Read the article

  • Is eval the defmacro of JavaScript?

    - by Florian Margaine
    In Common Lisp, defmacro basically allows us to build our own DSL. I read this page today, and it explains something cleverly done: "But I wasn't about to write out all these boring predicates myself, so I defined a function that, given a list of words, builds up the text for such a predicate automatically, and then evals it to produce a function." That just looks like defmacro to me. Is eval the defmacro of JavaScript? Could it be used as such?
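
    A small JavaScript sketch of the quoted technique (the helper name is made up): build the source text of a predicate from a word list, then eval it into a function.

        // Hypothetical helper in the spirit of the quoted article.
        function makePredicate(words) {
            var clauses = [];
            for (var i = 0; i < words.length; i++) {
                clauses.push("s === '" + words[i] + "'");
            }
            // eval operates on a string of source text at runtime; a Lisp macro
            // operates on structured code at compile time -- the key difference.
            return eval("(function (s) { return " + clauses.join(" || ") + "; })");
        }

        var isPrimary = makePredicate(["red", "green", "blue"]);
        console.log(isPrimary("green")); // true
        console.log(isPrimary("dog"));   // false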

    Read the article

  • Is running "milli"-benchmarks a good idea?

    - by Konstantin Weitz
    I just came across the Caliper project, and it looks very nice. Reading the introduction to microbenchmarks, one gets the feeling that the developers would not suggest using the framework if a benchmark takes longer than a second or so. I looked at the code, and it seems a RuntimeOutOfRangeException is actually thrown if a scenario takes longer than 10 s to execute. Could you explain to me what the problems are with running larger benchmarks? My motivation for using Caliper is to compare two join-algorithm implementations. These will definitely run for quite some time and do some disk I/O, yet running the entire database would make the comparison hard, because configuring the algorithms and visualizing the results would be a pain.
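
    If Caliper rejects multi-second scenarios, one fallback is plain wall-clock timing with a few warm-up runs; the JIT and timer-resolution noise Caliper controls for matters far less once a single run takes seconds. A hedged Java sketch (doJoin is a hypothetical stand-in for one join implementation):

        import java.util.concurrent.TimeUnit;

        public class MacroBench {
            public static void main(String[] args) {
                for (int run = 0; run < 5; run++) {   // first runs double as warm-up
                    long t0 = System.nanoTime();
                    doJoin();
                    long ms = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t0);
                    System.out.println("run " + run + ": " + ms + " ms");
                }
            }

            // Placeholder for one of the two join implementations under test.
            private static void doJoin() {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            }
        }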

    Read the article

  • Amazon SOA: database as a Service

    - by Martin Lee
    There is an interesting interview with Werner Vogels which is partly about how Amazon does Service-Oriented Architecture: "For us service orientation means encapsulating the data with the business logic that operates on the data, with the only access through a published service interface. No direct database access is allowed from outside the service, and there's no data sharing among the services." I do not understand that. Why do they need to "wrap" a database in some layer if it can already be consumed as a service by other services through database adaptors? Does Amazon do this just because they need to expose the database to third parties, or for some other reason? Why is "no direct database access allowed"? What are the advantages of such an architectural decision?

    Read the article

  • ExpandoObject and the dynamic property pattern

    - by Al.Net
    I have read about the "dynamic property pattern" Martin Fowler describes on his site under the tag 1997, in which he uses a dictionary-like structure to achieve the pattern. And I came across ExpandoObject in C# very recently. Looking at its implementation, I can see that it implements IDictionary. So ExpandoObject uses a dictionary to store dynamic properties; is this what Martin Fowler already defined 15 years ago?
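
    The dictionary is visible directly in code; a short C# demonstration:

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        class Demo
        {
            static void Main()
            {
                dynamic person = new ExpandoObject();
                person.Name = "Ada";   // a dynamic property assignment...

                // ...is really an entry in the underlying dictionary,
                // much like the dictionary in Fowler's 1997 description.
                var dict = (IDictionary<string, object>)person;
                dict["Age"] = 36;

                Console.WriteLine("{0}, {1}", person.Name, person.Age);   // Ada, 36
            }
        }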

    Read the article

  • Compiler Dependencies [closed]

    - by asghar ashgari
    I'm a newbie researcher whose passion is programming languages (in the Web era). I'm wondering why all the Web frameworks and Web-oriented general-purpose languages have a huge number of dependencies when you want to install and then use (e.g., extend or alter) their compilers. For example, Ruby on Rails or Scala: if I want to download the source code and try to build it again, it feels, to me at least, like a can of worms. I have a Mac, so I need to install MacPorts, then update Xcode, then get the compiler source code, which has a bunch of other dependencies; then it's hard to set everything up, just to see that the installed open-source compiler works fine.

    Read the article

  • Is a code review which uses only code comments a good idea?

    - by gaRex
    Preconditions:
    -- The team uses a DVCS.
    -- The IDE supports comment parsing (like TODO, etc.).
    -- Tools like Code Collaborator are too expensive for the budget.
    -- Tools like Gerrit are too complex to install, or not usable.
    Workflow:
    1. The author publishes a feature branch somewhere on the central repo.
    2. The reviewer fetches it and starts the review.
    3. For each question or issue, the reviewer creates a comment with a special label, like "REV". Such a label MUST NOT appear in production code; it exists only at the review stage:

            $somevar = 123; // REV Why echo this here?
            echo $somevar;

    4. When the reviewer finishes posting comments, he simply commits with the dumb message "comments" and pushes back.
    5. The author pulls the feature branch back and answers the comments in the same way, or improves the code, and pushes it back.
    6. When all the "REV" comments are gone, we can consider the review successfully finished. The author interactively rebases the feature branch, squashing it to remove those "comments" commits, and is then ready to merge the feature into develop, or to take whatever action usually follows a successful internal review.
    IDE support: I know that custom comment tags are possible in Eclipse and NetBeans; surely the blablaStorm family should have them too.
    Questions: Do you think this methodology is viable? Do you know of something similar? What could be improved?

    Read the article

  • Structuring multi-threaded programs

    - by davidk01
    Are there any canonical sources for learning how to structure multi-threaded programs? Even with all the concurrency utility classes that Java provides, I'm having a hard time properly structuring multi-threaded programs. Whenever threads are involved, my code becomes very brittle: any little change can break the program, because the code that jumps back and forth between the threads tends to be very convoluted.
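
    (The standard reference here is Goetz's Java Concurrency in Practice.) One structure that tends to keep such code from becoming brittle is confining each concern to its own task and communicating only through a queue, never through shared mutable state; a minimal Java sketch:

        import java.util.concurrent.*;

        public class PipelineDemo {
            public static void main(String[] args) throws Exception {
                // Tasks never touch each other's data; all hand-offs go
                // through the queue, so changes stay local to one task.
                BlockingQueue<String> queue = new LinkedBlockingQueue<>();
                ExecutorService pool = Executors.newFixedThreadPool(2);

                pool.submit(() -> {
                    for (int i = 0; i < 3; i++) queue.add("item-" + i);
                    queue.add("DONE");   // poison pill ends the consumer
                });
                pool.submit(() -> {
                    try {
                        for (String s; !(s = queue.take()).equals("DONE"); )
                            System.out.println("processed " + s);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.SECONDS);
            }
        }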

    Read the article

  • Obtaining Embedded Linux Experience

    - by Thomas Matthews
    As an embedded firmware developer, I have used operating systems such as WinCE, Nucleus, ThreadX, and VRTX, plus some background loops. There would be more opportunities for me if I had Linux OS experience, or perhaps some certification. From my research, the only way to get Linux experience is to have your company move to a Linux OS. Recruiters and HR folks won't let you in the door unless you have Linux experience. I haven't found any universities that teach Linux, and recruiters and HR want tangible proof (setting up your own Ubuntu box or playing with it doesn't count). So, how does one get into the area of embedded Linux without Linux experience? (I have Unix and Cygwin experience, but not Linux.)

    Read the article

  • Data structure: sort and search effectively

    - by Jiten Shah
    I need a data structure with, say, 4 keys, and I must be able to sort on any of these keys. Which data structure should I opt for? Sorting time should be very low. I thought of a tree, but it will only help with searching on one key; for the other keys I would have to rebuild the tree on that particular key and then search. Is there any data structure that can take care of all 4 keys at the same time? The 4 key fields total 12 bytes, and each record is 40 bytes in total. I have memory constraints too. The operations are: insertion, deletion, and sorting on the different keys.
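
    Given the memory constraints, the usual answer is a single array of records plus one comparator per key, sorting in place with qsort; a C sketch (field sizes illustrative, only the 12/40-byte totals come from the question):

        #include <stdlib.h>
        #include <stdint.h>

        typedef struct {
            int32_t k1, k2;        /* four keys, 12 bytes in total */
            int16_t k3, k4;
            char    payload[28];   /* remainder of the 40-byte record */
        } Record;

        /* One comparator per key; this form avoids subtraction overflow. */
        static int by_k1(const void *a, const void *b) {
            const Record *x = a, *y = b;
            return (x->k1 > y->k1) - (x->k1 < y->k1);
        }
        static int by_k3(const void *a, const void *b) {
            const Record *x = a, *y = b;
            return (x->k3 > y->k3) - (x->k3 < y->k3);
        }

        /* Sort in place on whichever key the caller picks: no extra memory. */
        static void sort_on(Record *recs, size_t n,
                            int (*cmp)(const void *, const void *)) {
            qsort(recs, n, sizeof recs[0], cmp);
        }

    After sorting on a key, bsearch with the same comparator gives lookup on that key; if all four orders must stay available at once, four small index arrays (one sorted order per key) are cheaper in memory than four trees.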

    Read the article

  • What are must-have tools for web development?

    - by Amir Rezaei
    Which are the must-have tools for web development under Windows? They can include tools for design, coding, and so on. Update: Please post only one tool, the best in your opinion, for each category; for instance, post only the name of the best design tool, not a list of them. Update: By tools I don't necessarily mean small programs. I didn't find this question already asked; I hope no one has asked it before.

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a senior developer/architect/product manager for embedded systems. The systems I have had experience with have typically been small to medium codebases, usually close to 25-30K LOC in C, using 8-, 16-, and 32-bit low-end microcontrollers. The systems have been entirely bootstrapped by our team, meaning that everything from the start-up code to the end application code has either been written by the team or is, at the very least, thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG et al. (think low-end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to calling the right APIs and being able to use those peripherals. From a habit point of view, this does not give me and the team a comfortable feeling: we would not comprehend the entire code tree, and there would be virtual black boxes at the lower layers. Is it reasonable to devote, and reserve, time to getting into the details of how the APIs are implemented, assuming that this would also entail getting into the details of the relevant standards (again, USB as an example)? Or, alternatively, should a thorough understanding of the top-level usage of the APIs be sufficient? This of course assumes that the source code of all the libraries is available, which it is in almost all cases. Edit: In partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, as far as I know and have been able to judge; I have not had long experience with it.

    Read the article

  • How to deal with the "programming blowhard"?

    - by Peter G.
    (Repost; I posted this in the wrong section before, sorry.) I'm sure everyone has run into this person at one point or another: someone catches wind of your project or idea and initially shows some interest. You get to talking about some of your methods, and usually around this time they interject, stating how you should use method X instead, or just use library Y -- not as a friendly suggestion, but as something bordering on a commandment, often repeated over and over like an overzealous parrot. Personally, I like to reinvent the wheel when I'm learning, or even just for fun, even if it turns out worse than what's been done before. But this person apparently cannot fathom recreating ANY utility for such purposes, or trying something that doesn't strictly follow traditional OOP practices, and will settle for nothing except their sense of perfection, so they naturally heave their criticism sludge down my ears at full force. To top it off, they eventually start justifying their advice by listing all the incredibly complex things they've coded single-handedly (usually along the lines of "trust me, I've made/used program X for a long time, blah blah blah"). Now, I'm far from being a programming master; I'm probably not even that good, and as such I value advice and critique, but I think advice and critique have a time and a place. There is also a big difference between being helpful and being narcissistic. In the past I probably would have used a somewhat stronger George Carlin style dismissal, but I don't think burning bridges is the best approach anymore. Maybe I'm just an asshole, but do you have any advice on how to deal with this kind of verbal flogging?

    Read the article

  • Estimating time for planning and technical design using Evidence Based Scheduling

    - by Turgs
    I'm at the beginning of a development project in a large organization. The functional requirements are currently being worked out and documented with our business stakeholders by our enterprise design department. I'm required to produce the technical design documents and manage the team that will actually build the solution. I want to try Evidence Based Scheduling, but as I understand it, part of that is breaking the job down into small tasks that are less than 14 hours in duration, which requires the technical design to already be done. Does that mean Evidence Based Scheduling can only be used after the technical design is finished? And how do you then plan and estimate the time it may take to come up with the technical design itself?

    Read the article

  • What real life bad habits has programming given you?

    - by Jacob T. Nielsen
    Programming has given me a lot of bad habits, and it continues to give me more every day. I have also picked up some bad habits from the mindset I have put myself in. There simply are some things deeply rooted in my nature, though I wish I could get rid of some of them. A few: Looking for polymorphism, inheritance, and patterns in all of God's creations. Explaining the size of something in pixels, and colors in hex code. Using code-related abstract terms in everyday conversations. How have you been damaged?

    Read the article

  • Licensing a JavaScript library

    - by Kendall Frey
    I am developing a free, open-source (duh) JavaScript library, and I am wondering how to license it. I was considering the GNU GPL, but I have heard that I must distribute the license with the software, and now I'm not sure. I would like the library to be available much like jQuery: as a free, downloadable script, preferably in either original or minified form. Am I mistaken about the GNU GPL's terms? jQuery is dual-licensed under the GNU GPL and MIT licenses; how does the GPL apply to single script files like that? Can I license my library with nothing more than a few sentences in the script file? Is there another license that better suits my needs? What would be nice is a license that lets me put a URL in the source for people to read if they want; I suspect few would, unless I am mistaken. I generally want to release the library as free software in the way the GPL specifies, but without forcing licensees to download the full license text unless they wish to read it.

    Read the article

  • Dapper and object validation/business rules enforcement

    - by Eugene
    This isn't really Dapper-specific, actually, as it relates to any XML-serializable object, but it came up when I was storing an object using Dapper. Anyway, say I have a user class. Normally, I'd do something like this:

        class User
        {
            public string SIN { get; private set; }
            public string DisplayName { get; set; }

            public User(string sin)
            {
                if (string.IsNullOrWhiteSpace(sin))
                    throw new ArgumentException("SIN must be specified");
                this.SIN = sin;
            }
        }

    Since a SIN is required, I'd just create a constructor with a sin parameter and make the property read-only. However, with Dapper (and probably any other ORM), I need to provide a parameterless constructor and make all properties writable. So now I have this:

        class User : IValidatableObject
        {
            public int Id { get; set; }
            public string SIN { get; set; }
            public string DisplayName { get; set; }

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                // implementation
            }
        }

    This seems... I can't quite pick the word; a bad smell? (a) I'm allowing changes to properties that should never change after the object has been created (SIN, user id). (b) Now I have to implement IValidatableObject or something like it to test those properties before updating them to the db. So how do you go about it?
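
    One common way out, sketched below (a general pattern, not a Dapper feature): let the ORM materialize a dumb row type, then convert it to the invariant-enforcing domain type at the boundary.

        using System;

        // What the ORM materializes: all-writable, parameterless constructor.
        class UserRow
        {
            public int Id { get; set; }
            public string SIN { get; set; }
            public string DisplayName { get; set; }
        }

        // The domain object keeps its guarded constructor and read-only keys.
        class User
        {
            public int Id { get; private set; }
            public string SIN { get; private set; }
            public string DisplayName { get; set; }

            public User(int id, string sin, string displayName)
            {
                if (string.IsNullOrWhiteSpace(sin))
                    throw new ArgumentException("SIN must be specified");
                Id = id;
                SIN = sin;
                DisplayName = displayName;
            }

            public static User FromRow(UserRow row)
            {
                return new User(row.Id, row.SIN, row.DisplayName);
            }
        }

    Validation then lives in exactly one place (the constructor), and nothing already constructed can have its SIN or Id changed.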

    Read the article

  • Considerations for accepting contributed code to an Open Source project

    - by Jason Holland
    I'm working on a "for fun" project in my spare time. I may end up making it open source, and this makes me wonder what I need to think about if someone cares enough to contribute to it. Do I need some sort of legal mumbo jumbo along the lines of "if you give me yer code, it accepts the same license as the project, bla bla bla" (what is the norm here)? Is there a way to check contributed code to make sure it is not plagiarized, or would that liability fall on the contributor? Are there any other gotchas, standard/common practices I should follow, recommendations, or things I need to think about?

    Read the article

  • What's the term for re-implementing an old API in terms of a newer API?

    - by dodgy_coder
    The reason for doing this is to handle the case where a newer API is no longer backwards compatible with an older one. To explain: say there is an old API v1.0. The maker of this API decides it is broken and works on a new API v1.1 that intentionally breaks compatibility with the old API v1.0. Now any program written against the old API cannot be recompiled as-is with the new API. Then let's say there is a large app written against the old API, and its developer cannot make major changes to the source code. A solution would be to re-implement a "custom" old API v1.0 in terms of the new v1.1 calls. The "custom" v1.0 API keeps the same interface/methods as the original v1.0 API, but inside its implementation it actually calls the new v1.1 methods. The large app can then be compiled and linked against the "custom" v1.0 API and the new v1.1 API without any major source code changes. Is there a term for this practice? There's a recent example of this in Jamie Zawinski's port of XScreenSaver to the iPhone: he re-implemented the OpenGL 1.3 API in terms of the OpenGL ES 1.1 API. In this case, OpenGL 1.3 is the "old" API and OpenGL ES 1.1 is the "new" API.
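
    (This is usually called a shim, compatibility layer, or wrapper library.) A tiny C sketch of the idea, with made-up v1.0/v1.1 names:

        #include <stdio.h>

        /* --- the new v1.1 API (hypothetical names) --- */
        static void new_begin(void)        { printf("begin\n"); }
        static void new_plot(int x, int y) { printf("plot %d,%d\n", x, y); }
        static void new_end(void)          { printf("end\n"); }

        /* --- the "custom" v1.0 API: old signature, implemented on v1.1 --- */
        void old_draw_point(int x, int y)
        {
            new_begin();            /* the legacy app never sees these calls */
            new_plot(x, y);
            new_end();
        }

        int main(void)
        {
            old_draw_point(3, 4);   /* legacy code compiles and links unchanged */
            return 0;
        }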

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort in C (GCC 4.4.3 on Ubuntu 10.04, running on a laptop with 4 GB of RAM and an Intel Core Duo CPU at 2 GHz), and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i, start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort is O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:
    -- Up to 10000 strings, both algorithms perform equally well; at 10000 strings, both require about 0.007 seconds.
    -- For 100000 strings, merge sort is slightly faster: 0.095 s against 0.121 s.
    -- For 1000000 strings, merge sort takes 1.287 s against quick sort's 5.233 s.
    -- For 5000000 strings, merge sort takes 7.582 s against quick sort's 118.240 s.
    -- For 10000000 strings, merge sort takes 16.305 s against quick sort's 1202.918 s.
    So my question is: are these results expected, meaning that quick sort is comparable in speed to merge sort for small inputs, but, as the size of the input data grows, its quadratic worst-case complexity becomes evident? Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively:

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the range to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the form

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and requires no additional memory, so it is more memory-efficient than the merge sort implementation. So my second question is: is there a better way to implement quick sort that is worthwhile trying? If I improve the quick sort implementation and run more tests on different data sets (computing the average of the running times over the data sets), can I expect better performance from quick sort relative to merge sort? EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I found in the Wikipedia section "In-place version", function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again, and it seems correct to me. To rule out the possibility that I am using the wrong implementation, I have uploaded the source code to GitHub (in case you would like to take a look at it). Your answers seem to suggest that I am using the wrong test data. I will look into it and try different test data sets. I will report back as soon as I have some results.
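
    For what it's worth, those timings look consistent with hitting the quadratic case rather than bad luck alone: a last-element pivot degrades on ordered runs, and at 10000000 lines of ~4.5-character strings there are almost certainly many duplicate keys, which a two-way partition also handles quadratically. A hedged C sketch of the first standard remedy, to be called at the top of the existing partition function:

        #include <stdlib.h>

        /* Swap a randomly chosen element into the pivot slot, then run the
           existing last-element partition unchanged. This guards against
           ordered input; for heavy key duplication, a three-way ("fat pivot")
           partition that groups elements equal to the pivot is the usual
           further fix. */
        static void randomize_pivot(const char **lines, int start, int end)
        {
            int i = start + rand() % (end - start + 1);
            const char *tmp = lines[i];
            lines[i] = lines[end];
            lines[end] = tmp;
        }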

    Read the article
