Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project
            has_many Nodes
            has_many GlobalConditions
        Node
            has_one Parent
            has_many Nodes
            has_many WeightingFactors through NodeFactors
            has_many Tags through NodeTags
        GlobalCondition
            has_many Nodes (referenced by id, rather than replicating the tree)
        WeightingFactor
            has_many Nodes through NodeFactors
        Tag
            has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later (a sketch of that approach follows below). Given the ubiquity of complex trees in day-to-day programming life, I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
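
    The strategy in that last paragraph (pull every record for the project in one query, then rebuild the relationships in memory) can be sketched independently of Rails. A minimal sketch in Python, assuming hypothetical id/parent_id columns on each row:

        # Rebuild a tree from a flat list of rows in one O(n) pass.
        # Fetching is elided; any ORM or raw SQL that returns dicts works.
        def build_tree(rows):
            nodes = {row["id"]: {**row, "children": []} for row in rows}
            roots = []
            for node in nodes.values():
                parent_id = node.get("parent_id")
                if parent_id is None:
                    roots.append(node)  # top-level node of the project
                else:
                    nodes[parent_id]["children"].append(node)
            return roots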

    Read the article

  • Best design for a memory resident tool

    - by Andrew S.
    I apologize if this tends more toward design than programming, but here goes. What design would you recommend for a database that:

    - Is memory resident
    - Must run on Windows, Linux and (at a stretch) the Mac
    - Accepts multiple queries simultaneously
    - Has minimum overhead, since a search is expected to take <0.25s

    This program implements a domain-specific search. Think of it as a database, but one that takes advantage of domain-specific information to outperform a conventional database search (for example, with custom oracle indexing). We have a custom data structure for our data. Our prototype is a simple exe that constructs the database in memory each time it is run. We were thinking that perhaps this program would suffice, but augmented with sockets so it can listen for queries. This database will be static; its contents will change infrequently. We expect queries, and the solution, to be delivered via a web service.
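
    As a rough illustration of the "prototype exe plus sockets" idea only, a threaded TCP server can answer concurrent queries against a shared in-memory structure. A minimal sketch in Python (the port and lookup are hypothetical); a read-only structure needs no locking:

        import socketserver

        DATABASE = {}  # built once at startup; read-only afterwards

        class QueryHandler(socketserver.StreamRequestHandler):
            def handle(self):
                query = self.rfile.readline().decode().strip()
                # Look the query up in the in-memory structure.
                result = DATABASE.get(query, "NOT FOUND")
                self.wfile.write((str(result) + "\n").encode())

        if __name__ == "__main__":
            # Each connection gets its own thread, which covers "accept
            # multiple queries simultaneously" for a static dataset.
            with socketserver.ThreadingTCPServer(("0.0.0.0", 9999), QueryHandler) as srv:
                srv.serve_forever()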

    Read the article

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got is that, given that in Agile methodology and Extreme Programming the design as well as the programming is evolutionary, there are always points where things need to get refactored. It may be possible that when a programmer's level is good, and they understand design implications and don't make critical mistakes, the code continues to evolve. But what is the ground reality in a normal context? In a normal project, once significant development has gone into the product and a critical change occurs in the requirements, isn't it a real constraint that, however much we wish otherwise, fundamental design aspects cannot be modified (without throwing away a major part of the code)? Isn't it quite likely that one reaches a dead-end on any further possible improvement of the design and requirements? I am not advocating any non-Agile practice here, but I want to hear from people who practice agile or iterative or evolutionary development methods about their real experiences. Have you ever reached such dead-ends? How have you managed to avoid or escape them? Or are there measures to ensure that the design remains clean and flexible as it evolves?

    Read the article

  • Why fork a library for your own application?

    - by Mr. Shickadance
    Why should a programmer ever fork a library for inclusion in a widely used application? I ask because I was reading an article about why Chromium isn't packaged for many Linux distros like Fedora. Apparently it's largely due to the fact that Google has forked a number of libraries, modified them, and included them in Chromium. This has driven up the complexity of packaging releases. There are a number of reasons why this can be a bad thing, but how strong a case can you actually make for doing so in a large, widely used application such as Chromium? The original article: http://ostatic.com/blog/making-projects-easier-to-package-why-chromium-isnt-in-fedora Isn't it usually worth the effort to make slight modifications to your own program in order to use a popular and well-developed library?

    Read the article

  • Searching for a key in a multi-dimensional array and adding it to another array [migrated]

    - by Moha
    Let's say I have two multi-dimensional arrays:

        array1 (
            stuff1 = array (
                data = 'abc'
            )
            stuff2 = array (
                something = '123'
                data = 'def'
            )
            stuff3 = array (
                stuff4 = array (
                    data = 'ghi'
                )
            )
        )

        array2 (
            stuff1 = array ( )
            stuff3 = array (
                anything = '456'
            )
        )

    What I want is to search for the key 'data' in array1 and then insert that key and value into array2, regardless of the depth. So wherever the key 'data' exists in array1, it gets added to array2 at the exact depth (and under the same key names) as in array1, AND without modifying any other keys. How can I do this recursively?
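
    The shape of the recursion is language-agnostic; here is a sketch in Python, with dicts standing in for the arrays:

        def copy_key(source, target, key="data"):
            # Walk source; wherever `key` appears, mirror it (and the path
            # leading to it) into target without touching other entries.
            for k, v in source.items():
                if k == key:
                    target[k] = v
                elif isinstance(v, dict):
                    subtarget = target.setdefault(k, {})  # create the level if missing
                    copy_key(v, subtarget, key)

        array1 = {"stuff1": {"data": "abc"},
                  "stuff2": {"something": "123", "data": "def"},
                  "stuff3": {"stuff4": {"data": "ghi"}}}
        array2 = {"stuff1": {}, "stuff3": {"anything": "456"}}
        copy_key(array1, array2)
        # array2 now also contains every 'data' entry, at the same
        # depth and under the same key names as in array1.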

    Read the article

  • I need recommendations on free, open source, PHP-based business intelligence widget frameworks [on hold]

    - by Volomike
    I'm a PHP developer on Linux, and my manager wants a business intelligence dashboard. He wants to see our profit/loss figures in real time, in fancy charts, based on our software sales. I could code it all from scratch and use the Google Charts API or some other charts API to help me. However, I wanted to know if there is a free, open source, PHP-based business intelligence package out there, or some sort of widget framework that I could start with. That way, I can build the BI widgets inside that framework and not have to do everything from scratch. I apologize ahead of time if this is the wrong Stack Exchange site for this query; I wasn't sure where to place it, and I do want to follow the rules.

    Read the article

  • SQLite with two python processes accessing it: one reading, one writing

    - by BBnyc
    I'm developing a small system with two components: one polls data from an internet resource and translates it into SQL data to persist it locally; the second one reads that SQL data from the local instance and serves it via JSON and a RESTful API. I was originally planning to persist the data with PostgreSQL, but because the application will have a very low volume of data to store and traffic to serve, I thought that was overkill. Is SQLite up to the job? I love the idea of the small footprint and no need to maintain yet another SQL server for this one task, but am concerned about concurrency. It seems that with write-ahead logging enabled, concurrently reading and writing a SQLite database can happen without locking either process out of the database. Can a single SQLite instance sustain two concurrent processes accessing it, if only one reads and the other writes? I started writing the code, but was wondering if this is a misapplication of SQLite.
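
    A minimal sketch of the setup in question, using the standard-library sqlite3 module (file and table names are hypothetical): with journal_mode=WAL, the reading process is not blocked while the writing process commits:

        import sqlite3

        # Writer process: poll the internet resource and persist rows.
        def write_batch(rows):  # rows: iterable of 1-tuples, e.g. [("abc",)]
            conn = sqlite3.connect("data.db")
            conn.execute("PRAGMA journal_mode=WAL")  # enable write-ahead logging
            conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, payload TEXT)")
            with conn:  # one transaction per batch; readers proceed meanwhile
                conn.executemany("INSERT INTO items (payload) VALUES (?)", rows)

        # Reader process (a separate interpreter): serve rows via the REST API.
        def read_all():
            conn = sqlite3.connect("data.db")
            return conn.execute("SELECT payload FROM items").fetchall()

    One writer plus any number of readers is exactly the concurrency model WAL is designed for; only multiple simultaneous writers would contend.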

    Read the article

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and averages of those metrics) within a CPU task schedule. The basic table of arrival times and bursts was posted as an image, as was my actual work version after ticking off the finished processes. There are two queues, with time quanta TQ1=4 and TQ2=3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here; I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.
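
    Since the arrival/burst table was an image, here is only a generic sketch of the mechanics in Python (process names and burst times below are hypothetical, not the ones from the post): two fixed-priority queues, each scheduled round-robin with its own quantum, where queue 1 always drains before queue 2 runs:

        from collections import deque

        def mqs_round_robin(q1, q2, tq1=4, tq2=3):
            # q1/q2: deques of [name, remaining_burst]; q1 has strict priority.
            gantt, clock = [], 0
            while q1 or q2:
                queue, tq = (q1, tq1) if q1 else (q2, tq2)
                name, remaining = queue.popleft()
                run = min(tq, remaining)
                gantt.append((name, clock, clock + run))  # Gantt chart segment
                clock += run
                if remaining - run > 0:
                    queue.append([name, remaining - run])  # back of its own queue
            return gantt

        # Hypothetical example: P1-P3 in queue 1, P4-P5 in queue 2.
        # This simplified sketch ignores arrival times (strict-priority MQS,
        # no preemption between queues).
        print(mqs_round_robin(deque([["P1", 5], ["P2", 3], ["P3", 6]]),
                              deque([["P4", 4], ["P5", 2]])))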

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrumy (related to scrum), but rather related to a bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't pass.

    1. Should the PO interfere in UI design when there are designers at work on the scrum team? (An example that has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or stuff like that.) If yes, to what extent? Colors? Layout?
    2. Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.?
    3. Should the PO interfere in validation mechanisms? For example: this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.
    4. How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a wizard of five steps, or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?

    I ask these questions to judge whether our implementation is right or wrong.

    Read the article

  • What's the best platform for blogging about coding?

    - by timday
    I'm toying with starting an occasional blog for posting odd bits of coding-related stuff (mainly C++, probably). Are there any platforms which can be recommended as providing exceptionally good support (e.g. syntax highlighting) for posting snippets of code? (Or any to avoid, because posting mono-spaced-font blocks of text is a pain.)

    Outcome: I accepted Josh K's answer because what I actually ended up doing was realizing I was more interested in articles than a blog style, getting back into LaTeX (after almost 20 years away from it), using the "listings" package for code, and pushing the HTML/PDF results to my ISP's static-hosting pages (HTML generated using tex4ht). Kudos to the answers mentioning Wordpress, Tumblr and Jekyll; I spent some time looking into all of them.

    Read the article

  • The perfect crossfade

    - by epologee
    I find it hard to describe this problem in words, which is why I made a video (45 seconds) to illustrate it. Here's a preview of the questions; please have a look at it on Vimeo: http://vimeo.com/epologee/perfect-crossfade

    The issue of creating a flawless crossfade or dissolve of two images or shapes has been recurring for me in a number of fields over the last decade: first in video editing, then in Flash animation and now in iOS programming. When you start googling it, there are many workarounds to be found, but I really want to solve this without a hack this time. The summary:

    1. What is the name of the technique or curve to apply when crossfading two semi-transparent, same-colored bitmaps, if you want the resulting transparency to match the original of either one?
    2. Is there a (mathematical) function to calculate the necessary partial transparency/alpha values during the fade?
    3. Are there programming languages that have these functions as a preset, similar to the ease-in, ease-out or ease-in-out functions found in ActionScript or Cocoa?
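
    For question 2, a formula can be derived from the standard "over" compositing rule: with layer alphas a and b, the composite alpha is a + b(1 - a). If the incoming layer fades in linearly as b(t) = t·A, requiring the composite to stay at the original alpha A for all t gives a(t) = A(1 - t)/(1 - tA) for the outgoing layer. A sketch of that derivation in Python (this is the math itself, not any platform's built-in easing):

        def crossfade_alphas(A, t):
            """Alphas for the outgoing (a) and incoming (b) layers at t in [0, 1]."""
            b = t * A                      # incoming layer fades in linearly
            if t * A == 1.0:               # endpoint of a fully opaque fade (A == 1):
                return 1.0, b              # keep the bottom layer opaque (limit case)
            a = A * (1 - t) / (1 - t * A)  # compensated fade-out
            return a, b

        # Check: a + b*(1 - a) == A for every t, so the composite never dips.
        # For A == 1 this reduces to a == 1: keep the bottom layer fully
        # opaque and only fade the top layer in.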

    Read the article

  • How many questions is it appropriate to ask as an intern?

    - by Casey Patton
    So, I just started an internship, and I'm worried that I'm asking too many questions. My mentor assigns me projects and helps me learn all the company's technologies and methodologies. However, there's so much new material for me to learn while doing this project that I have a lot of questions. I generally ask questions over instant messages or E-mail (those are the primary modes of communication for my company). I'm trying to be careful not to ask too many questions: I don't want to come off as annoying or dumb. How many questions are appropriate to ask? Once an hour? More? Less? Keep in mind, my mentor is also a fellow programmer who has his own responsibilities.

    Read the article

  • Best practice while marking a bug as resolved with Bugzilla (versioning of product and components)

    - by Vincent B.
    I am wondering what is the best way to handle marking a bug as resolved while providing the version of the component/product in which the fix can be found.

    Context: For a project I am working on, we are using Bugzilla for issue tracking, and we have the following:

    - A product "A" with a version number like vA.B.C.D,
    - Product "A" has the following components:
      - Component "C1" with a version number like vA.B.C.D,
      - Component "C2" with a version number like vA.B.C.D,
      - Component "C3" with a version number like vA.B.C.D.

    Internally we keep track of which component versions have been used to generate each version of product A. Example: product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5; and product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating product "A" has access only to the components' tags folders in SVN, not to the trunk of each component repository.

    Problem: When a bug is found in product "A" that is related to component "C1", the version of product "A" is known (e.g. v1.0.0.0), and this version lets the developer know which version of component "C1" has the bug (here it would be v1.0.0.3). A bug report is created. Now let's say the developer responsible for component "C1" corrects the bug; when the bug seems to be fixed, and after some test and validation, the developer generates a new tag for component "C1" with the version v1.0.0.4. At this point the developer of component "C1" needs to update the bug report, but what is the best thing to do:

    1. Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component"?
    2. Keep the bug as assigned, and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component; update this bug's status to resolved for the next version of the product that is generated with the newest version (v1.0.0.4 of C1)"?
    3. Some other way of dealing with this problem?

    Right now the problem is that when a product component CX is fixed, it is not certain in which future version of product A it will be included, so I cannot say in which version of the product the bug will be solved, but I can say in which version of component CX it has been solved. So when should we mark a bug as solved: when a product A version includes the fixed version of CX, or as soon as the CX component has been fixed? Thanks for your personal feedback and ideas about this!

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);

            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    // Upload using some wrapper for an ORM.
                    someInterface.Upload(meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }

            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • On what basis would you split donation money among your open source team members without any strife?

    - by Vigneshwaran
    I am a developer of an open source project which is hosted on SourceForge. It started out as a little app, then after some releases it got more and more popular and started consuming more time and responsibility from me. So I have enabled the donation option on SourceForge. I'm passionate about continuing to develop it for free, but if (ever) any money comes in, how should I split it with my team?

    - Should I split the amount equally among the team members? (50-50, as it is a two-member team now)
    - By number of classes, commits or other valuable submissions from team members?
    - Any other idea?

    What would you do in such a situation? Please give your opinions. I hope this question will be useful for others.

    Read the article

  • When connecting to a server using the DRDA protocol, is it true that the first Client-To-Server command MUST be EXCSAT chained with ACCSEC?

    - by Alon Rew
    When connecting to a server using the DRDA protocol, is it true that the first Client-To-Server command MUST be EXCSAT chained with ACCSEC? I found 2 different answers when I googled it. If you look at The Open Group web site (https://collaboration.opengroup.org/dbiop/) it can be understood that the answer is NO. However, if you look at the IBM website (http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.ims11.doc.apr%2Fims_ddm_excsat.htm) you can understand the answer is YES. So which is it?

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is great not just for capital-A Agile, but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no real good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the way to do it?

    Read the article

  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:

        Pete      Welder, Carpenter
        Melissa   Carpenter

    Assume they both work 40 hours/week and have not yet been assigned work. We need to report the availability of these competencies, expressed in hours. As far as I can see, we can report this in two ways:

    Method A. When someone has multiple competencies, count them all:
    - Welder 40 hours
    - Carpenter 80 hours

    Method B. When someone has multiple competencies, count an equal division of hours for each:
    - Welder 20 hours
    - Carpenter 60 hours

    Method A has our preference:
    - A good planner will know to plan the least available competency first. If 30 hours of welding is planned, we will be left with 10 hours of welder and 50 hours of carpenter.
    - Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required.

    However, if we report this way, we would like to give an estimate of the reliability of the numbers for each competency, i.e. how much are they over-reported? In my example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? How would I calculate this for large numbers of competencies? I'm sure we are not the first ones dealing with this; is there a 'usual' way of doing this in planning? Additionally:
    - Would there be an even better method than A or B?
    - Optionally, we also have a preference order of competencies (like: use him/her in this order), so Pete could be 1. welder, 2. carpenter. Does this introduce new options?
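
    To make the two methods concrete, a small sketch in Python computing both from the same roster (the names and hours from the example above):

        from collections import defaultdict

        employees = {"Pete": ["Welder", "Carpenter"], "Melissa": ["Carpenter"]}
        HOURS = 40

        method_a = defaultdict(float)  # full hours counted once per competency
        method_b = defaultdict(float)  # hours split evenly across competencies
        for person, skills in employees.items():
            for skill in skills:
                method_a[skill] += HOURS
                method_b[skill] += HOURS / len(skills)

        print(dict(method_a))  # {'Welder': 40.0, 'Carpenter': 80.0}
        print(dict(method_b))  # {'Welder': 20.0, 'Carpenter': 60.0}
        # One candidate over-reporting figure per competency: (A - B) / B.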

    Read the article

  • Strategies for Indexing Custom Fields in RavenDB

    - by Adrian Thompson Phillips
    In the relational database world, if I were developing a CRM system and wanted to let the user add their own custom fields that are searchable, I could have tables that store the name of the new column, the data type, the value, etc. (which would be inefficient to index), or I could use the less elegant (but more searchable) solution that software like Dynamics and SharePoint uses, where I create a load of columns on my aggregate root called CustomInt1, CustomInt2, etc. (which looks dirty and limits how many custom fields a user can have, but has indexing advantages). But my question is this: in NoSQL databases, what would be the best way of achieving the same thing? My priority would be searchability. So what would be the best way to store this data? If I used a predefined set of properties (i.e. CustomData1, CustomData2, etc.), because these are all stored as JSON (i.e. strings) in the database, does this make it simpler because I don't have to worry about data types?
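
    For illustration, the two document shapes under discussion look roughly like this (a sketch only; field names are hypothetical, with Python dicts standing in for the stored JSON):

        # Predefined slots, SharePoint/Dynamics style: bounded and uniform to
        # index, but dirty, and every value degrades to a string.
        contact_slots = {
            "Id": "contacts/1",
            "Name": "Acme",
            "CustomData1": "42",
            "CustomData2": "2024-01-01",
        }

        # Dynamic bag: unbounded and self-describing, but the index must
        # handle arbitrary keys, and types live with the values, not a schema.
        contact_bag = {
            "Id": "contacts/1",
            "Name": "Acme",
            "CustomFields": {"Rating": 42, "RenewalDate": "2024-01-01"},
        }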

    Read the article

  • Design patterns for unit testing?

    - by Maddy.Shik
    I am a beginner in developing test cases, and want to follow good patterns for developing them, rather than following some person's or company's specific ideas. Some people don't write test cases and just develop the way their seniors did in their projects. I am facing a lot of problems, like object dependencies (when I want to test a method which persists object A, I have to first persist object B, since A is a child of B). Please suggest some good books or sites, preferably for learning design patterns for unit test cases. A reference to some good source code, or some discussion of dos and don'ts, would also do wonders, so that I can avoid mistakes by learning from the experience of others.

    Read the article

  • Interviewing someone for general unix skills

    - by Christophe Vanfleteren
    How would you test a developer that claims to have *nix shell experience (just to be clear, we don't want to test if someone can develop on *nix, only that they know their way around the command line). I was thinking about making them solve a problem of getting information out of log files, which would involve some basics like cat, grep, cut, ... combined with piping. What other basic knowledge would you ask for? Once again, this isn't for interviewing someone who will develop for *nix systems, and also not for *nix system admins, but just for regular developers that sometimes need to do some work on a *nix system.

    Read the article

  • Consuming JSON stream into AWS Database on the cheap

    - by wjl
    I'm working on a project that needs to consume a JSON stream (approximately 1MB / minute), and parse and insert objects into a database. Amazon's DynamoDB or SimpleDB seem like attractive options for this. Is there a web service that can run a very simple script to eat the data and put it in a database? I could use a worker on Heroku or Elastic Beanstalk, or even pure EC2, but I'd like to find a service that's much cheaper, due to the very low amount of bandwidth and CPU required. (Sorry for the crappy tags. I'm not even sure where to categorize this question.)
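
    For scale, 1 MB/minute is tiny. A minimal sketch of the consuming side in Python with boto3 against DynamoDB (the table name, stream URL and item shape are all hypothetical):

        import json
        import boto3
        import requests

        table = boto3.resource("dynamodb").Table("events")  # hypothetical table

        # Read the JSON stream line by line and write each object through.
        with requests.get("https://example.com/stream", stream=True) as resp:
            for line in resp.iter_lines():
                if line:
                    table.put_item(Item=json.loads(line))

    At this volume, the deciding cost is the always-on worker rather than bandwidth or CPU, which is why a scheduled job or the smallest available instance tends to be the cheap option.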

    Read the article

  • Analysing Group & Individual Member Performance - RUP

    - by user23871
    I am writing a report which requires an analysis of the performance of each individual team member. This is for a software development project developed using the Unified Process (UP). I was just wondering if there are any existing group and individual appraisal metrics, so I don't have to reinvent the wheel...

    EDIT: This is by no means correct, but something like:

        Individual Contribution (IC) = time spent (individual) / time spent (total)
        Performance = ? (should combine individual contribution (IC) with something
                         to gain a measure of overall performance)

    Maybe I am talking complete hash, and I know that generally it's really difficult to analyse performance with numbers, but are there any mathematicians out there who can lend a hand, or who know a somewhat more accurate method of analysing performance than arbitrary marking (e.g. 8 out of 10)?
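
    The IC part of the edit is simple arithmetic; a sketch in Python (the hours are hypothetical):

        hours = {"Alice": 120, "Bob": 80, "Carol": 40}
        total = sum(hours.values())
        ic = {name: h / total for name, h in hours.items()}
        print(ic)  # {'Alice': 0.5, 'Bob': 0.333..., 'Carol': 0.166...}
        # The open question in the post is what to combine IC with to get
        # "performance"; time share alone measures effort, not output.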

    Read the article

  • Why do Windows Forms / Swing frameworks favour inheritance instead of Composition?

    - by devoured elysium
    Today a professor of mine commented that he found it odd that while SWT's philosophy is one of making your own controls by composition, Swing seems to favour inheritance. I have almost no contact with either framework, but from what I remember, in C#'s Windows Forms one usually extends controls, just like in Swing. Given that people generally tend to prefer composition over inheritance, why didn't the Swing and Windows Forms folks favour composition instead of inheritance?

    Read the article

  • What resources are there for facial recognition

    - by Zintinio
    I'm interested in learning the theory behind facial recognition software so that I can hopefully implement it in the future. Not just face tracking, but being able to recognize individuals. What papers, books, libraries, or source code are available so that I can learn more about the subject? I have found libface, which seems to use eigenfaces for recognition. If there are any practitioners out there, please share any information that you can.

    Read the article
