Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Dual Inspection / Four Eyes Principle

    - by Ralf
    I have the requirement to implement some kind of dual inspection or four-eyes principle as a feature of my software, meaning that every change of an object done by user A has to be checked by user B. A trivial example would be a publishing system where an author writes an article and another person has to proofread it before it is published. I am a little bit surprised that you find nearly nothing about it on the net: no patterns, no libraries (besides cibet), no workflow solutions, etc. Is this requirement really so uncommon? Or am I searching for the wrong terms? I am not looking for a specific solution, more for a pattern or best-practice approach.

    Update: the above example is really trivial, so let's add some more complexity to it. The article has been published, but it now needs an update. Putting the article offline for the update is not an option, but the update has to be proofread, too.
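
    To make the requirement a bit more concrete, this is roughly the shape of solution I have in mind: every edit becomes a pending revision that only a different user can promote to the live article. A minimal C# sketch (all type and member names are placeholders of my own, not from any existing library):

        using System;

        public enum RevisionState { Draft, PendingReview, Published, Rejected }

        public class ArticleRevision
        {
            public int ArticleId { get; set; }           // the live article stays online untouched
            public string ProposedContent { get; set; }  // the update awaiting review
            public string Author { get; set; }
            public string Reviewer { get; set; }
            public RevisionState State { get; set; } = RevisionState.Draft;

            public void SubmitForReview() => State = RevisionState.PendingReview;

            public void Approve(string reviewer)
            {
                // the four-eyes rule: the approver must not be the author
                if (reviewer == Author)
                    throw new InvalidOperationException("Reviewer must differ from author.");
                Reviewer = reviewer;
                State = RevisionState.Published; // only now does the proposed content replace the live one
            }
        }

    The open question is whether there is an established pattern or library for exactly this, rather than everyone hand-rolling something like the above.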

    Read the article

  • As a programmer, how do I plan to learn new things in my spare time?

    - by priyank patel
    I am an ASP.NET/C# developer, and I develop a few projects in my spare time. I try to utilize my time as much as I can. I have been working for the past 7 months. Suddenly these days I am a bit worried about learning the new stuff that is out there for me as a programmer. I develop in my spare time, so I don't get enough time to read books or blogs. So my question to some of you guys is: how should I plan to learn new things? Should I at least dedicate two or three evenings to new stuff? Maybe ebooks while travelling are a good option too. How do you people plan to learn? Should I also start to develop with whatever I have learnt? As far as learning is concerned, should I just pick up the basics and then implement them, or should I seek a deeper understanding of the subject?

    Read the article

  • What programming language would you learn to reverse-engineer USB devices? [closed]

    - by user70113
    I currently work in IT support and am retraining in electrical engineering/electronics. I am also interested in reverse engineering; which language would be best for hardware RE? I have seen a few sources say C, C++ and Python. I am not familiar with Linux, but I installed Ubuntu to learn with. I am not a programmer, far from it, but I can understand enough basic VB, Java and PHP to edit them for simple things. One of my immediate projects would be to learn to reverse engineer USB devices and write my own low-level drivers. I know there are porting kits, but I really want to know it from the ground up. Thanks for any advice, folks. Most appreciated.

    Read the article

  • Open Source Web-based CMS for writing and managing API documentation

    - by netcoder
    This is a question that has somewhat been asked before (i.e.: How to manage an open source project's documentation). However, my question is a little different because: we're not developing open source software, but proprietary software; the documentation has to be hand-written, because we do not want to publish the actual software API documentation, but only the public API documentation; and I do want developers and project managers to write the documentation collaboratively. Obviously, wikis are a solution, but they're very generic, and I'm looking for a more specialized tool for this job. I've looked around and found a few tools like Adobe RoboHelp, SaaS solutions and such, but I'd like to know if any open source software exists for that purpose. Do you know of any open source, web-based CMS for writing and managing API and software documentation?

    Read the article

  • Best practice for managing dynamic HTML modules?

    - by jt0dd
    I've been building web apps that add and remove lots of dynamic content, and even structure, within the page, and I'm not impressed by the method I'm using to do it. When I want to add a section or module into a position in the interface, I'm storing the HTML in the code, and I don't like that:

        if (rank == "moderator") {
            $("#header").append('<div class="mod_controls">' +
                // content, using + to implement line breaks
                '</div>');
        }

    This seems like such a bad programming practice. There must be a better way. I thought of building a function to convert a JSON structure to HTML, but it seems like overkill. For the functionality of the apps I'm using Node.js, JavaScript, jQuery and AJAX. Is there some common way to store HTML modules externally for AJAX importation?

    Read the article

  • What's the point of method overloading?

    - by David
    I am following a textbook in which I have just come across method overloading. It briefly described method overloading as: when the same method name is used with different parameters, it's called method overloading. From what I've learned so far in OOP, if I want different behaviors from an object via methods, I should use different method names that best indicate the behavior. So why should I bother with method overloading in the first place?
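
    For example, this is the kind of thing I understand the textbook to mean (a toy sketch of my own, not taken from the book):

        using System;

        public class Printer
        {
            // One conceptual operation ("print this value") with several parameter types,
            // so callers don't have to remember PrintInt, PrintDouble, PrintString, ...
            public void Print(int value)    => Console.WriteLine($"int: {value}");
            public void Print(double value) => Console.WriteLine($"double: {value}");
            public void Print(string value) => Console.WriteLine($"string: {value}");
        }

        public static class Demo
        {
            public static void Main()
            {
                var p = new Printer();
                p.Print(42);      // compiler picks Print(int)
                p.Print(3.14);    // compiler picks Print(double)
                p.Print("hello"); // compiler picks Print(string)
            }
        }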

    Read the article

  • Why use link classes in OQL instead of classes that contain links?

    - by Isaac
    itop abstracts its very complex database design with an object query language (OQL). For this there are classes defined, like 'Ticket' and 'Server'. Now a Ticket is usually linked to a Server. In my naive way I would give the Ticket class an attribute 'affected_server_list', where I could reference the affected servers. itop does it differently: neither Servers nor Tickets know of each other. Instead there is a class 'linkTicketToServer', which provides the link between the two. The first thing I noticed is that it makes OQL queries more complex, so I wondered why they designed it this way. One thing that occurred to me is that it allows for more flexibility, in that I can add links without modifying the original classes. Is that alone the reason one would implement it this way, or are there other reasons for this kind of design?
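
    To illustrate the two designs I am comparing, here is a rough C# stand-in for the OQL classes (the attribute names are invented by me, not itop's actual schema):

        using System.Collections.Generic;

        // Design 1 (my naive way): the ticket knows its servers directly.
        public class Ticket
        {
            public int Id { get; set; }
            public List<Server> AffectedServers { get; } = new List<Server>();
        }

        public class Server
        {
            public int Id { get; set; }
        }

        // Design 2 (the itop way): neither class knows the other; a separate link
        // class carries the relationship, so new kinds of links (and link-specific
        // data) can be added without touching Ticket or Server.
        public class LinkTicketToServer
        {
            public int TicketId { get; set; }
            public int ServerId { get; set; }
            public string ImpactDescription { get; set; } // an example of data that belongs to the link itself
        }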

    Read the article

  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company that mainly uses PHP to develop its front-end apps. I had no prior PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features, without much thought for the overall design. The apps themselves output XML to render on small mobile devices.

    I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide, for example interfacing with databases in such a way as to make SQL injection impossible, would be very useful too.

    The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP itself). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB and generate about 5 different XML screens, probably around 1 or 2 thousand lines of code. The time it takes just to configure the framework may not be worth it. The final problem I can see is that developers in the company, who have to go over my code and who do not know the PHP framework I may use, will have a much harder time understanding it.

    Given those pros and cons, I'm still not sure what the best course of action will be, so any advice will be greatly appreciated.

    Read the article

  • From Java to SAS

    - by Giovanni Rossi
    I am a seasoned Python, Java (and other languages) programmer with a (fairly advanced) mathematical education (so I do understand statistics and data mining, for example). For various reasons I am thinking of switching to the SAS/BI area (I am naming SAS because it might be, for me, a possible way to enter BI). My question, for whoever might have experience of both: is it, in BI's current state, worth it? I mean, the days of big ideas in BI for business seem to be over (there are the APIs, and managers think they know what you can do with them), and my mathematical background might turn out to be superfluous. Also, the big companies now have their data organized and their BI procedures well established, and trying to analyze it from a different standpoint might not be what they want. Another difference is: while in Java etc. development one codes and codes and codes, I don't know if this is the case for BI; in fact, from what I read on the net, a BI (or OLAP, etc.) developer in a big organization is usually in a state of standby, and does in fact little coding. Any opinions, and in particular strong opinions, will be appreciated.

    Read the article

  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration, e.g., set-up, re-training, awareness activities, stoppage to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, etc. At what point does continuous integration pay for itself?

    EDIT: These were my findings. The set-up was CruiseControl.NET with NAnt, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup:

    Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, data quality, or another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, a third-party server being down, etc.).

    Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way and there was no server replacement.

    Cost of fixing unit tests: a red light due to a data quality issue could be an indicator of a badly written unit test, so data-dependent unit tests were re-written to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment so that the unit tests could run accurately. It makes sense to say that by making the data more robust, a test that depends on that data becomes more robust too. Of course, this worked well!

    Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit test coverage. There were thousands of methods that had no unit tests, so a sizeable number of man-days would have been needed to create them. As this would be too difficult to justify in a business case, it was decided that unit tests would be required for any new public method going forward. Those that did not have a unit test were termed 'potentially infra red'. An interesting point here is that static methods were a moot point: it was unclear how one could uniquely determine how a specific static method had failed.

    Cost of bespoke releases: NAnt scripts only go so far. They are not that useful for, say, CMS-dependent builds for EPiServer CMS, or any UI-oriented database deployment.

    These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I contend that these are unnecessary, as a build master can perform these tasks manually at the time of release, especially with a one-man band and a small build. So, single-step builds have not justified the use of CI in my experience. What about the more complex, multi-step builds? These can be a pain to build, especially without a NAnt script. So, even having created one, they were no more successful. The cost of fixing the red-light issues outweighed the benefits. Eventually, developers lost interest and questioned the validity of the red light.

    Having given it a fair try, I believe that CI is expensive and there is a lot of working around the edges instead of just getting the job done. It's more cost-effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves, because the processes he follows would ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable. All this is reviewed. If this is not happening then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizable systems.

    The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid for fixing as part of a client service agreement once the warranty period expires anyway.

    Read the article

  • Algorithm to use for shop floor layout?

    - by jkohlhepp
    I ran into a classroom problem yesterday (business-oriented class, not computer science) and I found it interesting from an algorithmic perspective. The problem goes something like this: assume there is a shop floor with N different rooms, and you have N different departments that need to go in those rooms. The departments and the rooms are all the same size, so any department could go in any room. There is a known travel distance from each room to each other room. There is also a known number of trips necessary from one department to another (trips are counted the same regardless of which room they originate from, so a trip from A to B is equivalent to a trip from B to A). Given those inputs, determine a layout of departments into rooms which minimizes travel time.

    What is the best way to approach this problem algorithmically? Is there already a particular algorithm or class of algorithms designed to solve this type of problem? Does this type of problem have a name in computer science? I am not looking for you to design an algorithm to solve this, although feel free to do so if you would like. I'm wondering if this is a problem space that has already been well defined and studied algorithmically, and if so, I'd like some links to research further. I can see a lot of different data structures and algorithms that might apply to this and I'm curious which approach would be "best". And don't worry, you are not doing my homework for me. This is not a homework problem per se, as this is a business course and we were simply discussing the concepts, not trying to solve the problem algorithmically.
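
    To make the objective concrete, this is how I would compute the total travel for one candidate assignment (a small C# sketch of my own, not part of the coursework); the problem is then to find the assignment that minimizes this value:

        public static class FloorLayout
        {
            // trips[a, b]    = number of trips between departments a and b (symmetric)
            // distance[r, s] = travel distance between rooms r and s
            // roomOf[a]      = room assigned to department a
            public static double TotalTravel(int[,] trips, double[,] distance, int[] roomOf)
            {
                int n = roomOf.Length;
                double total = 0;
                for (int a = 0; a < n; a++)
                    for (int b = a + 1; b < n; b++)      // each unordered pair counted once
                        total += trips[a, b] * distance[roomOf[a], roomOf[b]];
                return total;
            }
        }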

    Read the article

  • Career Advice: finding challenging work in software and web development

    - by dianovich
    Having left my physics degree early, I started out in the realm of web design / front-end web development and was able to get work quite quickly. I moved on to spend a chunk of my time on servers and gained experience with frameworks like WordPress and Drupal, then the likes of CodeIgniter and CakePHP, and became comfortable in Debian-based and RHEL/CentOS environments. I ventured into iOS development and published a couple of native apps to the App Store too! I have started to spend a good deal of my time writing Python and have invested a little time in Django.

    The problem is, in my job I still spend a fair chunk of my time doing more front-end web development (writing markup and CSS for site themes, design-led JavaScript, small applications for which application architecture and software engineering are relatively unimportant or too time-consuming to invest in). What I want to do is really exercise the systematic/logical portion of my brain and tackle challenging problems on a daily basis. I want to have to care about big-O running times, modularity in software, DRY, performance tuning and development methodologies. I want to work for a firm whose clients say: "Yes, these things are important to us and we'll pay you to get them right." But it is difficult: I have no formal training and am potentially becoming a jack of all trades. Not that being a jack of many trades is necessarily a bad thing, but the scope of work I find myself involved in is far too broad. And there are only so many hours in a day outside of work!

    My question is: where do I go from here? I am starting to work on a few open source projects and have started to publish content to my blog, but this isn't likely to make it past the recruitment consultants and HR departments of many a firm. And I do not, for example, work in a team that practices agile methodologies, so how do I get work in such a team to gain experience? While I have been responsible for implementing version control and some solid working practices in our current environment, there is only so far I can go in this context. What would convince you that I'm worth taking a risk on? What would convince you that I'll have caught up with the other guys in your employ in next to no time?

    Read the article

  • What techniques would you use for a next generation Java web application?

    - by jakob
    I'm working at a site similar to Foursquare and Yelp, with approximately 100,000 unique requests each week that generate content, growing steadily. We are currently using Seam as the Java web framework, MySQL as the database, Hibernate as the ORM, Hibernate Search as the index, and EhCache for caching. Since our site is slowly growing out of the current setup and has a lot of legacy code, it is time for us to start thinking about a major refactoring/change of setup.

    Web framework: we are not ready to change the language, but we are leaning towards the Spring Web Framework, since Seam is no more and almost all of us have worked with Spring and liked it. DB and ORM: we have done a little research and we are thinking about MongoDB. Index: do we need a separate index if we use MongoDB? Cache: ?

    So my question is basically: if you take the Spring Web Framework and MongoDB into consideration, what would a good setup be for a web application that is growing and handles a lot of logged-in users generating input and performing searches?

    Read the article

  • Why is an anemic domain model considered bad in C#/OOP, but very important in F#/FP?

    - by Danny Tuppeny
    In a blog post on F# for fun and profit, it says: In a functional design, it is very important to separate behavior from data. The data types are simple and "dumb". And then separately, you have a number of functions that act on those data types. This is the exact opposite of an object-oriented design, where behavior and data are meant to be combined. After all, that's exactly what a class is. In a truly object-oriented design in fact, you should have nothing but behavior -- the data is private and can only be accessed via methods. In fact, in OOD, not having enough behavior around a data type is considered a Bad Thing, and even has a name: the "anemic domain model".

    Given that in C# we seem to keep borrowing from F# and trying to write more functional-style code, how come we're not borrowing the idea of separating data and behavior, and even consider it bad? Is it simply that the definition doesn't fit with OOP, or is there a concrete reason that it's bad in C# that for some reason doesn't apply in F# (and in fact, is reversed)? (Note: I'm specifically interested in the differences in C#/F# that could change the opinion of what is good/bad, rather than in individuals who may disagree with either opinion in the blog post.)
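
    To pin down what I mean by the two styles, here is a toy C# example of my own (not from the blog post):

        // "Anemic"/functional-leaning style: a dumb data type plus separate functions.
        public class OrderData
        {
            public decimal Subtotal { get; set; }
            public decimal DiscountRate { get; set; }
        }

        public static class OrderCalculations
        {
            public static decimal Total(OrderData order) => order.Subtotal * (1 - order.DiscountRate);
        }

        // Classic OO style: data and behavior combined, state kept private.
        public class Order
        {
            private readonly decimal _subtotal;
            private readonly decimal _discountRate;

            public Order(decimal subtotal, decimal discountRate)
            {
                _subtotal = subtotal;
                _discountRate = discountRate;
            }

            public decimal Total() => _subtotal * (1 - _discountRate);
        }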

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical devices software background myself, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden with some big clients coming on board and more smaller ones.

    I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches.

    With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev.

    I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need/want to, so eventually all changes get absorbed into everyone else's. This fits very much with the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes being made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • How do I write a functional spec quickly and efficiently?

    - by giddy
    So I just read some fabulous articles by Joel on specs here (written in 2000!!). I read all 4 parts, but I'm looking for some methodical approaches to writing my specs. I'm the only lonely dev working on this fairly complicated app (or family of apps) for a very well known finance company. I've never made something this serious. I started out writing something like a bad spec, an overview of some sorts, and it has wasted a LOT of my time. I've also made 3 mockup-kinda-thingies for my client, so I have a good understanding of what they want. I've also released a preview (a throw-away working app with the most basic workflow), and I've only written and tested some of the very core/base systems. I think the mistake I've been making so far is not writing a detailed spec, so I'm getting to it now.

    So the whole thing comprises an MVC website (for admins and data viewing), 2 Silverlight modules (for 2 specific tasks), and 1 desktop application. I'm totally short on time and resources and need to get this done quickly, and I also need to make sure these guys can read it equally quickly and painlessly.

    So how do I go about it? I'm looking for any tips, any real-world stuff; how do you guys usually do it? Do you make a mock screenie of every dialog/form/page? I'm thinking of making a dummy ASP.NET Web Forms project, then filling in HTML files in folders to make it look like my MVC URL structure, then having a section in the spec for the website and writing up a page for every URL I've got, with a screenie. For my WinForms app, I've made somewhat of a demo WinForms project; would I then put in a dialog or structure everything as I would in the real app and then screenshot it?

    Read the article

  • Pointer initialization doubt

    - by Jestin Joy
    We can initialize a character pointer like this in C:

        char *c = "test";

    where c points to the first character ('t'). But when I write code like the one below, it gives a segmentation fault:

        #include <stdio.h>
        #include <stdlib.h>

        main()
        {
            int *i;
            *i = 0;
            printf("%d", *i);
        }

    But when I write:

        #include <stdio.h>
        #include <stdlib.h>

        main()
        {
            int *i;
            i = (int *)malloc(2);
            *i = 0;
            printf("%d", *i);
        }

    it works (gives output 0). Also when I use malloc(0), it also works (gives output 0). Please tell me what is happening.

    Read the article

  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as: a controlled vocabulary arranged in a known order in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all words related to that word. e.g. Thesaurus(database) - happy(table) - excited(row)|cheerful(row)|lively(row). Is there a more efficient way to store words and their relationships to other words in a relational SQL database?

    Read the article

  • Research idea in simulation

    - by Nilani Algiriyage
    Hi, I'm an undergraduate at the University of Keleniya, Sri Lanka. I'm interested in doing research on BPM and BPMN, but there are very few knowledgeable people and very few resources in my country, and my supervisor also doesn't have enough knowledge in this area. So could you please help me find a research topic in BPM or BPMN, or at least give me an idea of what areas I could work in? Thank you very much. Regards, Nilani.

    Read the article

  • Should all foreign table references use foreign key constraints?

    - by TecBrat
    Closely related to: Foreign key restrictions -> yes or no? I asked a question on SO and it led me to ask this here. If I'm faced with a choice between having a circular reference and just not enforcing the constraint, which is the better choice? In my particular case I have customers and addresses. I want an address to have a reference to a customer, and I want each customer to have a default billing address id and a default shipping address id. I might query for all addresses that have a certain customer ID, or I might query for the address with the ID that matches the default shipping or billing address ids. I'm not sure yet how the constraints (or lack thereof) will affect the system as my application and its data age.
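
    To show the circularity I am talking about, here are illustrative C# stand-ins for the two tables (the names are placeholders, not my real schema):

        // Address rows point at a customer...
        public class Address
        {
            public int Id { get; set; }
            public int CustomerId { get; set; } // FK -> Customer
            public string Street { get; set; }
        }

        // ...and each customer points back at two of its own addresses,
        // which is where the circular foreign-key reference comes from.
        public class Customer
        {
            public int Id { get; set; }
            public int? DefaultBillingAddressId { get; set; }  // FK -> Address (nullable, to avoid a chicken-and-egg problem on insert)
            public int? DefaultShippingAddressId { get; set; } // FK -> Address
        }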

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database-backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited for dependency injection.

    Some specific questions: Do you create one access object per table (given that a table represents an entity collection)? One interface per table? All of these would need the low-level data access object to be injected, right? What if there are dozens of tables? Wouldn't that make the composition root a nightmare? Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), etc.? If I took the example of Entity Framework, then I would have one container that exposes an object for each table, but that container doesn't conform to any interface itself, so it doesn't seem compatible with DI.

    What we do now, in case it helps: the way we normally manage data access is through a generic data layer which exposes CRUD/transaction capabilities and has provider-specific subclasses which handle the creation of IDbConnection, IDbCommand, etc. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return domain objects that the rest of the system deals with. These table classes expose only static methods, and utilise a static DataAccess singleton instantiated from a config file.
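
    To make the question concrete, this is roughly the shape I am weighing up (a C# sketch with placeholder names, not our real code):

        using System.Collections.Generic;

        public class Customer { public int Id { get; set; } }
        public class Order { public int Id { get; set; } }

        // The low-level layer that wraps IDbConnection/IDbCommand creation.
        public interface IDataAccess { /* CreateConnection(), ExecuteReader(), ... */ }

        // Option A: one narrow interface per table/entity...
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IEnumerable<Customer> GetAll();
            void Save(Customer customer);
        }

        // ...with the low-level data access injected into each implementation.
        public class CustomerRepository : ICustomerRepository
        {
            private readonly IDataAccess _dataAccess;

            public CustomerRepository(IDataAccess dataAccess)
            {
                _dataAccess = dataAccess;
            }

            public Customer GetById(int id) { /* map a row to a Customer via _dataAccess */ return null; }
            public IEnumerable<Customer> GetAll() { /* ... */ return new List<Customer>(); }
            public void Save(Customer customer) { /* ... */ }
        }

        // Option B: one broad interface exposing GetCustomer(), GetOrder(), etc.,
        // which keeps the composition root small but grows without bound.
        public interface IDataGateway
        {
            Customer GetCustomer(int id);
            Order GetOrder(int id);
        }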

    Read the article

  • How to use Mercurial's LargeFiles extension? [migrated]

    - by DuncanBoehle
    I use Mercurial for game development, and I'm trying to use the LargeFiles extension included in Mercurial 2.0 to keep track of large binary assets. Unfortunately there isn't a whole lot of documentation on the extension, so I'm not sure how people are expected to use it. For example, is there any way to safely clean out the .hg/largefiles directory? If I'm on the tip revision, and expect to always have internet access, then I don't need the old versions of largefiles cluttering up the repository, since that's the whole point of using the LargeFiles extension. Also, how do I have more fine-grained control over where the largefile store is? I can only assume that it's created somewhere on the computer that ran hg init, but I have no idea about the details. Thanks!

    Read the article

  • What version control system can manage all aspects?

    - by Andy Canfield
    A few months ago I dug into Subversion and Git and was disappointed. They handle SOURCE CODE fine, but not other aspects. For example, a web site under version control needs to manage file/directory ownership, file/directory read and write access, access control lists, timestamps, database contents, and external links. Is there a version control system that can do as perfect a reversion as reloading from a month-old backup?

    Read the article

  • C# string.Format extension method

    - by Paul Roe
    With the addition of extension methods to C# we've seen a lot of them crop up in our group. One debate revolves around extension methods like this one:

        public static class StringExt
        {
            /// <summary>
            /// Shortcut for string.Format.
            /// </summary>
            /// <param name="str"></param>
            /// <param name="args"></param>
            /// <returns></returns>
            public static string Format(this string str, params object[] args)
            {
                if (str == null)
                    return null;
                return string.Format(str, args);
            }
        }

    Does this extension method break any programming best practices that you can name? Would you use it anyway, and if not, why? If I renamed the function to "F" but left the XML comments, would that be an epic fail or just a wonderful saving of keystrokes?
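
    For reference, a call site would then look like this (my own example, assuming the StringExt class above is compiled alongside it):

        using System;

        public static class Demo
        {
            public static void Main()
            {
                // With the extension method in scope, the format string itself is the receiver:
                string a = "Hello {0}, you have {1} new messages.".Format("Paul", 3);

                // ...equivalent to the built-in static call:
                string b = string.Format("Hello {0}, you have {1} new messages.", "Paul", 3);

                Console.WriteLine(a);
                Console.WriteLine(b);
            }
        }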

    Read the article

  • What are the prerequisites for writing .NET web services?

    - by wackytacky99
    I am very new to web development. I have been a C/C++ programmer for 5 years and I'm starting to get into web development, writing web services, etc. I understand the basic concepts of web services, and I know .NET web services can be written in VB or C#. Working with C and C++ should help me get used to writing code in C#, but I do not have experience with the .NET Framework. I'd like to quickly get into writing .NET web services and learn on the go, without spending a lot of time extensively learning the .NET Framework (if possible). Any suggestions? Update: I know my way around databases and SQL Express in Visual Studio.
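
    For what it's worth, this is roughly the smallest classic ASMX-style web service I have seen in beginner examples (I am not sure it is the recommended approach today, and a real project would also need an .asmx file and a reference to System.Web.Services):

        using System.Web.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        public class HelloService : WebService
        {
            // Each [WebMethod] becomes an operation exposed over HTTP/SOAP.
            [WebMethod]
            public string SayHello(string name)
            {
                return "Hello, " + name;
            }
        }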

    Read the article
