Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

  • What is an effective git process for managing our central code library?

    - by Mathew Byrne
    Quick background: we're a small web agency (3-6 developers at any one time) developing small to medium sized Symfony 1.4 sites. We've used git for a year now, but most of our developers have preferred Subversion and aren't used to a distributed model. For the past 6 months we've put a lot of development time into a central Symfony plugin that powers our custom CMS. This plugin includes a number of features, helpers, base classes etc. that we use to build custom functionality. The plugin is stored in git, but it branches wildly as it is used in various products and is pulled from/pushed to constantly. The repository is usually used as a submodule within a major project. The problems we're starting to see now are a large number of merge conflicts and backwards-incompatible changes brought into the repository by developers adding custom functionality in the context of their own project. I've read Vincent Driessen's excellent git branching model and have used it successfully for projects in the past, but it doesn't seem to apply well to our particular situation: we have a number of projects concurrently using the same core plugin while developing new features for it. What we need is a strategy that provides the following:

    - A methodology for developing major features within the code repository.
    - A way of migrating those features into other projects.
    - A way of versioning the core repository, and of tracking which version each major project uses.
    - A plan for migrating bug fixes back to older versions.
    - A cleaner history that makes it easier to see where changes have come from.

    Any suggestions or discussion would be greatly appreciated.

    Read the article

  • Objective-C and iOS development training courses feedback

    - by Mutuelinvestor
    I'm thinking about taking an Objective-C / iPhone/iPad development training course. Pragmatic Studio and Big Nerd Ranch appear to be the key players. I'd love to hear feedback from anyone who has completed training with Pragmatic, Big Nerd or others. I'd be interested in the quality of instruction, how much you learned, strengths, weaknesses and any other valuable insights. It's a pretty big financial commitment for me and I want to get the most bang for my buck. Thanks in advance for your input.

    Read the article

  • How to store generated eigenfaces for future face recognition?

    - by user3237134
    My code works in the following manner:

    1. First, it obtains several images from the training set.
    2. After loading these images, we find the normalized faces and mean face and perform several calculations.
    3. Next, we ask for the name of an image we want to recognize.
    4. We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision.
    5. Depending on the eigen weight vector for each input image, we make clusters using the kmeans command.

    Source code I tried:

        clear all
        close all
        clc

        % Number of images in the training set.
        M=1200;

        % Chosen std and mean.
        % It can be any number close to the std and mean of most of the images.
        um=60;
        ustd=32;

        % Read the images.
        S=[]; % img matrix
        % NOTE: growing S inside the loop reallocates it on every iteration;
        % preallocating (S=zeros(irow*icol,M)) scales much better for large M.
        for i=1:M
            str=strcat(int2str(i),'.jpg'); % the file name of the i-th image
            img=imread(str);
            [irow icol d]=size(img); % number of rows (N1) and columns (N2)
            temp=reshape(permute(img,[2,1,3]),[irow*icol,d]); % a (N1*N2)x1 column
            S=[S temp]; % S is a (N1*N2)xM matrix after finishing the sequence
        end

        % Change the mean and std of all images (normalization).
        % This is done to reduce the error due to lighting conditions.
        for i=1:size(S,2)
            temp=double(S(:,i));
            m=mean(temp);
            st=std(temp);
            S(:,i)=(temp-m)*ustd/st+um;
        end

        % Show normalized images.
        for i=1:M
            str=strcat(int2str(i),'.jpg');
            img=reshape(S(:,i),icol,irow);
            img=img';
        end

        % Mean image.
        m=mean(S,2);    % mean of each row instead of each column
        tmimg=uint8(m); % convert to unsigned 8-bit integer; values range from 0 to 255
        img=reshape(tmimg,icol,irow); % turns the (N1*N2)x1 vector into a N2xN1 matrix
        img=img';       % N1xN2 matrix: the mean image

        % Change image for manipulation.
        dbx=[]; % A matrix
        for i=1:M
            temp=double(S(:,i));
            dbx=[dbx temp];
        end

        % Covariance matrix C=A'A, L=AA'
        A=dbx';
        L=A*A';
        % vv are the eigenvectors for L
        % dd are the eigenvalues for both L=dbx'*dbx and C=dbx*dbx'
        [vv dd]=eig(L);

        % Sort and eliminate those whose eigenvalue is zero.
        v=[];
        d=[];
        for i=1:size(vv,2)
            if(dd(i,i)>1e-4)
                v=[v vv(:,i)];
                d=[d dd(i,i)];
            end
        end

        % Sort; sort returns an ascending sequence.
        [B index]=sort(d);
        ind=zeros(size(index));
        dtemp=zeros(size(index));
        vtemp=zeros(size(v));
        len=length(index);
        for i=1:len
            dtemp(i)=B(len+1-i);
            ind(i)=len+1-index(i);
            vtemp(:,ind(i))=v(:,i);
        end
        d=dtemp;
        v=vtemp;

        % Normalization of eigenvectors.
        for i=1:size(v,2) % access each column
            kk=v(:,i);
            temp=sqrt(sum(kk.^2));
            v(:,i)=v(:,i)./temp;
        end

        % Eigenvectors of the C matrix.
        u=[];
        for i=1:size(v,2)
            temp=sqrt(d(i));
            u=[u (dbx*v(:,i))./temp];
        end

        % Normalization of eigenvectors.
        for i=1:size(u,2)
            kk=u(:,i);
            temp=sqrt(sum(kk.^2));
            u(:,i)=u(:,i)./temp;
        end

        % Show eigenfaces.
        for i=1:size(u,2)
            img=reshape(u(:,i),icol,irow);
            img=img';
            img=histeq(img,255);
        end

        % Find the weight of each face in the training set.
        omega = [];
        for h=1:size(dbx,2)
            WW=[];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfImage = dot(t,dbx(:,h)');
                WW = [WW; WeightOfImage];
            end
            omega = [omega WW];
        end

        % Acquire new images.
        % Note: the input image must have a bmp or jpg extension
        % and the same size as the ones in your training set.
        ed_min=[];
        srcFiles = dir('G:\newdatabase\*.jpg'); % the folder in which your images exist
        for b = 1 : length(srcFiles)
            filename = strcat('G:\newdatabase\',srcFiles(b).name);
            Imgdata = imread(filename);
            InputImage=Imgdata;
            InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]);
            temp=InImage;
            me=mean(temp);
            st=std(temp);
            temp=(temp-me)*ustd/st+um;
            NormImage = temp;
            Difference = temp-m;

            p = [];
            aa=size(u,2);
            for i = 1:aa
                pare = dot(NormImage,u(:,i));
                p = [p; pare];
            end

            InImWeight = [];
            for i=1:size(u,2)
                t = u(:,i)';
                WeightOfInputImage = dot(t,Difference');
                InImWeight = [InImWeight; WeightOfInputImage];
            end
            noe=numel(InImWeight);

            % Find the Euclidean distance to each training face.
            e=[];
            for i=1:size(omega,2)
                q = omega(:,i);
                DiffWeight = InImWeight-q;
                mag = norm(DiffWeight);
                e = [e mag];
            end
            MinimumValue=min(e); % smallest distance (was undefined in the original code)
            ed_min=[ed_min MinimumValue];
            theta=6.0e+03;
            %disp(e)
            z(b,:)=InImWeight;
        end

        IDX = kmeans(z,5);
        clustercount=accumarray(IDX, ones(size(IDX)));
        disp(clustercount);

    Questions:

    1. It works fine for M=50 (a training set of 50 images) but not for M=1200 (a training set of 1200 images). It shows no error and produces no output; I waited for 10 minutes and there was still nothing. I think it has gone into an infinite loop. What is the problem? Where did I go wrong?
    2. Instead of running over the training set every time, how can the generated eigenfaces be stored so that the stored eigenfaces are reused for future face recognition of a new input image? That would avoid wasting time.

    Read the article

  • Why should you document code?

    - by Edwin Tripp
    I am a graduate software developer for a financial company that uses an old COBOL-like language/flat-file record storage system. The code is completely undocumented, both at the level of code comments and of overall system design, and there is no help on the web (the language is unused outside the industry). The current developers have been working on the system for between 10 and 30 years and are adamant that documentation is unnecessary, as you can just read the code to work out what's going on, and that you can't trust comments. Why should such a system be documented?
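
    For illustration, a hedged C# sketch (the domain and all names are hypothetical) of the kind of intent that cannot be recovered by "just reading the code": the loop below says what it does, but only the comment records why the rule exists.

        using System;

        public static class Settlement
        {
            /// <summary>
            /// Returns the next settlement date after <paramref name="from"/>.
            /// Settlements roll past weekends because the (fictional) clearing
            /// house runs no batches then -- a business rule that no amount of
            /// code-reading alone would reveal.
            /// </summary>
            public static DateTime NextSettlementDate(DateTime from)
            {
                DateTime d = from.AddDays(1);
                while (d.DayOfWeek == DayOfWeek.Saturday || d.DayOfWeek == DayOfWeek.Sunday)
                    d = d.AddDays(1); // skip non-business days
                return d;
            }
        }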

    Read the article

  • Should I organize my folders by business domain or by technical domain?

    - by Florian Margaine
    For example, if I'm using some MVC-like architecture, which folder structure should I use:

        domain1/
            controller
            model
            view
        domain2/
            controller
            model
            view

    Or:

        controllers/
            domain1
            domain2
        models/
            domain1
            domain2
        views/
            domain1
            domain2

    I deliberately left out file extensions to keep this question language-agnostic. Personally, I'd prefer to separate by business domain (gut feeling), but I see that most/many frameworks separate by technical domain. Why would I choose one over the other?

    Read the article

  • Optimizing hash lookup & memory performance in Go

    - by Moishe
    As an exercise, I'm implementing HashLife in Go. In brief, HashLife works by memoizing nodes in a quadtree so that once a given node's value in the future has been calculated, it can just be looked up instead of being re-calculated. So e.g. if you have a node at the 8x8 level, you remember it by its four children (each at the 4x4 level). The next time you see an 8x8 node, when you calculate the next generation, you first check whether you've already seen a node with those same four children. This is extended up through all levels of the quadtree, which gives you some pretty amazing optimizations if e.g. you're 10 levels above the leaves. Unsurprisingly, it looks like the performance crux of this is the lookup of nodes by child-node values. Currently I have a hashmap of {&upper_left_node, &upper_right_node, &lower_left_node, &lower_right_node} -> node, so my lookup function is this:

        func FindNode(ul, ur, ll, lr *Node) *Node {
            var node *Node
            var ok bool
            nc := NodeChildren{ul, ur, ll, lr}
            node, ok = NodeMap[nc]
            if ok {
                return node
            }
            node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil}
            NodeMap[nc] = node
            return node
        }

    What I'm trying to figure out is whether the "nc := NodeChildren..." line causes a memory allocation each time the function is called. If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this? Any advice/feedback would be welcome (even coding style nits; this is literally the first thing I've written in Go, so I'd love any feedback).

    Read the article

  • Rails Book Suggestions [closed]

    - by Solomon081
    Possible Duplicate: Is there a canonical book on Ruby on Rails?

    I'm looking to learn Ruby on Rails. I already have a small background in Ruby and don't really need a book that covers both, as I ordered the Pickaxe book a couple of days ago. I recently read some of Beginning Ruby on Rails by Steven Holzner, but abandoned it after seeing multiple statements on SO about how terrible its code was, and also because it used Rails 1. Just wondering: what is the best book for an ABSOLUTE BEGINNER in Rails?

    Read the article

  • SQL: moving millions of records from one database to another [migrated]

    - by Ryoma
    I am a C# developer; I am not really good with SQL. I have a simple question here. I need to move more than 50 million records from one database to another. I tried to use the import function in MS SQL, but it got stuck because the log was full (I got the error message "The transaction log for database 'mydatabase' is full due to 'LOG_BACKUP'"). The database recovery model was set to simple. My friend said that importing millions of records using task-import data will cause the log to be massive, and told me to use a loop instead to transfer the data in batches. Does anyone know how and why? Thanks in advance.
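
    For illustration, a minimal C# sketch of the batched approach (not from the original post; the table name and connection strings are hypothetical). SqlBulkCopy commits every BatchSize rows, so under the simple recovery model the log space can be reused between batches instead of growing to hold all 50 million rows in one transaction:

        using System.Data.SqlClient;

        static class BulkMover
        {
            static void Move(string srcConnStr, string dstConnStr)
            {
                using (var source = new SqlConnection(srcConnStr))
                using (var dest = new SqlConnection(dstConnStr))
                {
                    source.Open();
                    dest.Open();
                    var cmd = new SqlCommand("SELECT * FROM dbo.BigTable", source);
                    using (var reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(dest))
                    {
                        bulk.DestinationTableName = "dbo.BigTable";
                        bulk.BatchSize = 10000;   // commit every 10,000 rows
                        bulk.BulkCopyTimeout = 0; // no timeout on a long-running copy
                        bulk.WriteToServer(reader);
                    }
                }
            }
        }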

    Read the article

  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use, and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, what I am doing is extracting information from the CMS and transforming this information into objects that I can use programmatically for other purposes. This consists of two steps:

    1. Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want -- no connection logic or anything like that)
    2. Map the parsed data to my own [C#] objects

    There are basically two ways I can approach this.

    One call from multiple methods:

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<MyObject> myObjects = GetMyObjects();
            // Do other stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            List<MyObject> mappedObjects = new List<MyObject>();
            // do stuff to map the CmsDataItems to MyObjects
            return mappedObjects;
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            List<CmsDataItem> cmsDataItems = new List<CmsDataItem>();
            // do stuff to get the CmsDataItems I want
            return cmsDataItems;
        }

    Multiple calls from one method:

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems);
            // do stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap)
        {
            // ...
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            // ...
        }

    I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and the steps executed to retrieve the objects are explicit in the calling method (I'm concerned that the first approach is a kind of object-oriented spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure whether it really matters that one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved in one line -- most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve the objects, and the "daisy-chained" helper methods hide the exact steps needed to retrieve them (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference, or context-dependent?

    Read the article

  • What is the difference between a WCF service and a simple web service developed using the .NET Framework?

    - by Steve Johnson
    My questions are:

    1. What is the difference between a WCF service and a simple web service in the .NET Framework?
    2. What can a WCF service do that a .NET web service can't? In other words, what are the limitations of .NET web services that were overcome in WCF services?

    I understand that WCF services are REST-based and .NET web services are SOAP-based, but I need to know more than that. How should a developer make the design decision on whether to develop a web service or a WCF service?
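
    For illustration, a minimal sketch of the two programming models (service names are hypothetical). The classic .NET (ASMX) web service is tied to HTTP and SOAP, while a WCF contract is transport-agnostic: the same contract can be exposed over HTTP, TCP or named pipes through configuration, with additional control over security, sessions and transactions:

        using System.ServiceModel;
        using System.Web.Services;

        // Classic .NET web service (ASMX): HTTP + SOAP only.
        public class PriceService : WebService
        {
            [WebMethod]
            public decimal GetPrice(string symbol) { return 42m; }
        }

        // WCF: the contract is separate from transport and hosting.
        [ServiceContract]
        public interface IPriceService
        {
            [OperationContract]
            decimal GetPrice(string symbol);
        }

        public class WcfPriceService : IPriceService
        {
            public decimal GetPrice(string symbol) { return 42m; }
        }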

    Read the article

  • Developing a Web Portal

    - by Ya Basha
    I'm a PHP, Ruby on Rails and HTML5 developer. I need some advice and suggestions for a web portal project that I will build from scratch. This is my first time building a web portal. Which scripting language would you prefer for the development, and why? And how should I start planning my project, as it will contain many modules? I'm excited to start building this project and I want to build it in the right way, with planning. If you know some web resources that would help me decide and plan my project, please share them. Best Regards

    Read the article

  • Best C# database communication technique

    - by user65439
    A few days ago I read a reply to a question where people said that the days of writing queries within your C# code are long gone. I'm not sure what the specific person meant by the comment, but it got me thinking. At the company I'm currently working at, we maintain an assembly containing all the queries to the database (let's call it Queries). This assembly is referenced by a QueryService assembly (which retrieves the correct queries), which in turn is referenced by a UnitOfWork assembly (the database connector classes; we have different connector classes for SQL, MySQL, etc.). We use these three assemblies to perform operations on our database, and all queries/commands are written in our C# code. Is there a better way to communicate with the database, and is there a better way to communicate with different database types?
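
    For illustration, a provider-agnostic sketch using ADO.NET's DbProviderFactory (the provider name, connection string and query are hypothetical, and the parameter prefix varies by provider). This keeps queries in C# but removes the need for per-database connector classes; ORMs such as Entity Framework or NHibernate take the same idea further by generating the SQL as well:

        using System;
        using System.Data.Common;

        static class ProviderAgnosticQuery
        {
            static void Run(string providerName, string connectionString)
            {
                // Resolve the concrete ADO.NET provider (SQL Server, MySQL, ...) at runtime.
                DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);

                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = connectionString;
                    conn.Open();

                    using (DbCommand cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = "SELECT Name FROM Customers WHERE Id = @id";
                        DbParameter p = cmd.CreateParameter();
                        p.ParameterName = "@id";
                        p.Value = 42;
                        cmd.Parameters.Add(p);

                        using (DbDataReader reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                                Console.WriteLine(reader.GetString(0));
                        }
                    }
                }
            }
        }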

    Read the article

  • Qt Certification Exams

    - by karlphillip
    I'm wondering about taking a Qt certification exam this year, but I'm not 100% sure the investment is worth it. I'm considering it because I think it could be a nice plus on my resume, and as you know, I'm all for improving my software engineer persona. As I already hold BSc and MSc degrees in computer stuff, I guess I see the certification process as some kind of adventure. Anyway, I know I'll spend a lot of time preparing myself for the exam, and I just want to know whether a Qt certification is worth the effort. Apparently there are 2 certificates that you can get in the Qt world: Nokia Certified Qt Developer (basic) and Nokia Certified Qt Specialist (advanced). Nowadays I build cross-platform software in C++, and this exam would fit beautifully on my resume. My main concern is that, given the obscure future of Qt, I might be throwing time and money out the window. I'm looking for some advice regarding the usefulness of such certifications.

    Read the article

  • Can setter validation affect performance?

    - by TiagoBrenck
    Within a scenario where you use an ORM to map your entities to the DB, and you have setter validations (nullable checks, date-lower-than-today validation, etc.), every time the ORM gets a result it passes through the setters to instantiate the object. If I have a grid that usually returns 500 records, I assume that each record passes through all the validations. If my entity has 5 setter validations, then I have run 2,500 validations. Will those 2,500 validations affect performance? Would 15,000 validations be any different? In my opinion, and according to this answer (http://stackoverflow.com/questions/4893558/calling-setters-from-a-constructor/4893604#4893604), setter validation is more useful than constructor validation. Is there a way to avoid unnecessary validation, since I am certain that the values I send to the DB when saving the entity won't change until I edit them in my system?
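
    For illustration, one common way around this, sketched under the assumption that your ORM can hydrate backing fields directly (NHibernate's field access strategies and Entity Framework's backing-field support both allow this; the entity below is hypothetical). Loading 500 records then bypasses the setters entirely, and validation runs only when application code assigns a value:

        using System;

        public class Appointment
        {
            // The ORM is mapped to this field, so hydration skips validation.
            private DateTime _date;

            public DateTime Date
            {
                get { return _date; }
                set
                {
                    // Runs only for assignments made by application code.
                    if (value.Date > DateTime.Today)
                        throw new ArgumentException("Date must not be later than today.");
                    _date = value;
                }
            }
        }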

    Read the article

  • RMS Java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects; in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but after some time it has proven not to be flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase. Not only is it ugly in design, it also makes certain low-level JS things very complicated, if not completely impossible. So what I'm looking for is:

    - As close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary. This happens more often than we thought.
    - Datagrid capable -- Ext.js & PrimeFaces look pretty good, and Vaadin does too.
    - DB-schema generators (optional, in whichever direction)

    If it were only up to me, I'd probably stick to Ext.js + a custom REST-based Java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it's kind of hard for me to judge or even propose other frameworks. What would be your proposition? What problems have you faced, what was your solution, and how hard do you think it was to deal with them in the "big picture"?

    Read the article

  • Does the direction of a Huffman tree's child node matter?

    - by Omega
    So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since Why create a Huffman tree per character instead of a Node?) for a school assignment. I now have a better understanding of how this thing is supposed to work. Wikipedia has a great-looking algorithm that seemed to make my life way easier. Taken from http://en.wikipedia.org/wiki/Huffman_coding:

    1. Create a leaf node for each symbol and add it to the priority queue.
    2. While there is more than one node in the queue:
       - Remove the two nodes of highest priority (lowest probability) from the queue.
       - Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
       - Add the new node to the queue.
    3. The remaining node is the root node and the tree is complete.

    It looks simple and great. However, it left me wondering: when I "merge" two nodes (make them children of a new internal node), does it even matter which direction (left or right) each node ends up in? I still don't fully understand Huffman coding, and I'm not very sure whether there is a criterion for deciding whether a node should go to the right or to the left. I assumed that perhaps the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion. For instance, Wikipedia's example image http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png seems to put the highest ones to the right, but other images like this one http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif have them all to the left. However, they're never mixed up in the same image (some to the right and others to the left). So, does it matter? Why?
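
    For illustration, a minimal C# sketch of the merge loop (hypothetical Node type; a linear scan stands in for a real priority queue). Which of the two lowest-frequency nodes becomes the left child is an arbitrary convention: swapping them flips the 0/1 label at that branch, so the bit patterns change but every symbol's code length -- and therefore the compressed size -- stays the same, which is why published trees disagree on the direction:

        using System.Collections.Generic;
        using System.Linq;

        class Node
        {
            public int Freq;
            public char? Symbol;   // null for internal nodes
            public Node Left, Right;
        }

        static class Huffman
        {
            // Assumes at least one leaf; returns the root of the tree.
            public static Node Build(List<Node> nodes)
            {
                while (nodes.Count > 1)
                {
                    // Take the two nodes with the lowest frequencies.
                    var ordered = nodes.OrderBy(n => n.Freq).ToList();
                    Node a = ordered[0], b = ordered[1];
                    nodes.Remove(a);
                    nodes.Remove(b);

                    // Left vs. right here is arbitrary: it changes the
                    // codes assigned, not their lengths.
                    nodes.Add(new Node { Freq = a.Freq + b.Freq, Left = a, Right = b });
                }
                return nodes[0];
            }
        }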

    Read the article

  • Is it normal to need time to understand code I wrote recently?

    - by user1478167
    By recently I mean some weeks ago. I am trying to continue a project I left 2 weeks ago, and I need time to understand some functions I wrote (not copied from somewhere), and it takes me time. Normally I don't need to, because my functions, methods etc. are black boxes, but when I need to change something it's really hard. Does this mean I write bad code? I am still in school and I am the only one who writes/uses the code, so I don't have feedback, but I am afraid that if it is difficult for me to understand it, it would be 10 times more difficult for someone else. What should I do? I write a lot of comments, but most of the time they are useless when reviewing. Do you have any suggestions?

    Read the article

  • Is it ok to replace optimized code with readable code?

    - by Coder
    Sometimes you run into a situation where you have to extend/improve some existing code. You see that the old code is very lean, but it's also difficult to extend and takes time to read. Is it a good idea to replace it with modern code? Some time ago I liked the lean approach, but now it seems to me that it's better to sacrifice a lot of optimizations in favor of higher abstractions, better interfaces and more readable, extendable code. The compilers seem to be getting better as well, so things like struct abc = {} are silently turned into memsets, shared_ptrs produce pretty much the same code as raw pointer twiddling, and templates work super well because they produce super lean code, and so on. But still, sometimes you see stack-based arrays and old C functions with some obscure logic, and usually they are not on the critical path. Is it a good idea to change such code if you have to touch a small piece of it either way?

    Read the article

  • MBO in software development

    - by Euphoric
    I just found out our medium-sized software development company is going to implement MBO (management by objectives). As a big fan of Agile, I see this as counter-intuitive, because I believe it is impossible to create an objective that is measurable, especially where creativity, experience, knowledge and professionalism are important traits and expanding those are the objectives and goals of many. So my question is: what is your opinion on using MBO in software development? And are there development companies using MBO, either successfully or unsuccessfully?

    Read the article

  • C++ Library API Design

    - by johannes
    I'm looking for a good resource for learning about good API design for C++ libraries, looking at shared objects/DLLs etc. There are many resources on writing nice APIs, nice classes, templates and so on at the source level, but barely anything about putting things together in shared libs and executables. Books like Large-Scale C++ Software Design by John Lakos are interesting but massively outdated. What I'm looking for is advice on, for example, handling templates. With templates in my API I often end up with library code in my executable (or another library), so if I fix a bug in there I can't simply roll out the new library but have to recompile and redistribute all clients of that code. (And yes, I know some solutions, like trying to instantiate at least the most common versions inside the library.) I'm also looking for other caveats and things to keep in mind for maintaining binary compatibility while working on C++ libraries. Is there a good website or book on such things?

    Read the article

  • Learning Erlang vs learning node.js

    - by Noli
    I see a lot of crap online about how Erlang kicks node.js' ass in just about every conceivable category. So I'd like to learn Erlang and give it a shot, but here's the problem: I'm finding that I have a much harder time picking up Erlang than I did picking up node.js. With node.js, I could pick a relatively complex project and in a day have something working. With Erlang, I'm running into barriers and not going nearly as quickly. So, for those with more experience: is Erlang complicated to learn, or am I just missing something? Node.js might not be perfect, but I seem to be able to get things done with it.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team members, functional specs and whatnot.

    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code in English completely, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision to either:

    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that defines how the Ubiquitous Language translates to English.

    Here are some of my thoughts based on these options:

    1. I have a strong aversion against mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.
    2. This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and will therefore hinder communication significantly.
    3. In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL.

    I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience:

    - Translating the UL makes you pay more attention to defining the UL and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times.
    - It helps you make your knowledge of the English language more profound.
    - Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.

    Read the article

  • Is there a rule of thumb for what a Bing map's zoom setting should be, based on how many miles you want to display?

    - by Clay Shannon
    If a map contains pushpins/waypoints that span only a couple of miles, the map should be zoomed way in and show a lot of detail. If the pushpins/waypoints instead cover vast areas, such as hundreds or even thousands of miles, it should be zoomed way out. That's clear. My question is: is there a general guideline for mapping (no pun intended) Bing Maps zoom levels to a particular number of miles separating the furthest-apart points? E.g., is there some chart that has something like:

    - Zoom level N shows 2 square miles
    - Zoom level N shows 5 square miles

    etc.?
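
    For what it's worth, a hedged C# sketch based on the ground-resolution formula from the Bing Maps Tile System documentation (meters per pixel = cos(latitude) * 2 * pi * 6378137 / (256 * 2^level)); the function name, the 1-20 level range and the pixel width are illustrative. It picks the deepest zoom level whose visible width still covers the span between your furthest-apart pushpins:

        using System;

        static class BingZoom
        {
            public static int ZoomForSpan(double spanMiles, double latitudeDeg, int mapWidthPx)
            {
                double spanMeters = spanMiles * 1609.344;
                for (int level = 20; level >= 1; level--)
                {
                    // Web Mercator ground resolution at this latitude and level.
                    double metersPerPixel = Math.Cos(latitudeDeg * Math.PI / 180.0)
                                          * 2.0 * Math.PI * 6378137.0
                                          / (256.0 * Math.Pow(2, level));
                    if (spanMeters <= metersPerPixel * mapWidthPx)
                        return level; // deepest level that still fits the span
                }
                return 1; // span is wider than the whole map at level 1
            }
        }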

    Read the article

  • Architecture for Social Graph data that has a Time Frame Associated?

    - by Jay Stevens
    I am adding some "social" type features to an existing application. There are a limited number of node and edge types. Overall the data itself is relatively small (50,000-70,000 for each type of node), and there will be a number of edges (relationships) between them (almost all directional). This, I know, is relatively easy to represent with an RDF store (such as BrightstarDB) or something like Microsoft's Trinity (or really many of the NoSQL options). The thing that, I think, makes this a unique use case is that each relationship will have a timeframe associated with it (start and end dates). Right now, I'm thinking of just storing this in a relational structure and dealing with the headaches of "traversing the graph", but I'm looking for suggestions on a better approach (both in terms of data structure and server):

        Column
        ================
        From_Node_ID
        Relationship
        To_Node_ID
        StartDate
        EndDate

    Any suggestions or thoughts are welcomed.

    Read the article

  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC for handling REST requests, and MyBatis for data access. The application should be designed in such a way that it is easy to change the data access implementation (for example to Hibernate or something else), and it has to be fast (so I am trying to avoid unnecessary overcomplication of the design). Now my question is about the general design of layers. I would normally use a DAO interface and then different implementations for different data access strategies, but MyBatis uses interfaces to access the data. So I can think of 2 possible models, but I am not sure which one is better or whether there is any other nice way:

    1. The controller layer uses service layer interfaces, and the services are then implemented for each data access strategy. For example, for MyBatis: the service implementation uses mapper classes to access data, does whatever it needs to do with them, and sends them to the controller layer.
    2. The controller layer uses the service layer, and the service layer uses DAO interfaces; the DAOs are then implemented for each data access strategy. For example, for MyBatis: the DAO class uses a mapper interface to access data and sends it to the service layer, and the service layer then does whatever it needs to do with it and sends it to the controller layer.

    I prefer the first strategy as it seems to be less complicated, but then I would have to write all of the service code again for another data access strategy. What do you think? Thank you.

    Read the article
