Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • TDD: Write a separate test for object initialization, or rely on other tests to exercise it?

    - by DXM
    This seems to be a common pattern emerging in some of the tests I've worked on lately. We have a class (quite often legacy code whose design can't easily be altered) which has a bunch of member variables, and some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised.

    So when we start writing tests, the first test is "TestLoad", and all we put in there is an exercise of the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad".

    So my question: is TestLoad even necessary? Do you take it out and let the other tests simply exercise the loading? Or do you leave it in so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but in the case of loading this seems unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here?

    Thank you for the responses. It definitely makes sense that you want to see "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
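
    One way to keep the explicit initialization test without repeating the load in every test body is to move the load into the fixture's setUp(), which CppUnit runs before each test. A minimal sketch; the Widget class and its load()/isLoaded()/doWork() members are hypothetical stand-ins, not from the question:

        #include <cppunit/extensions/HelperMacros.h>

        class WidgetTest : public CppUnit::TestFixture
        {
            CPPUNIT_TEST_SUITE(WidgetTest);
            CPPUNIT_TEST(testLoad);
            CPPUNIT_TEST(testDoWork);
            CPPUNIT_TEST_SUITE_END();

            Widget w;   // hypothetical class under test

        public:
            // Runs before every test; a failing load aborts only that test.
            void setUp() { CPPUNIT_ASSERT(w.load("fixture.dat")); }

            // Dedicated initialization test: if this fails, you know where
            // to start looking.
            void testLoad() { CPPUNIT_ASSERT(w.isLoaded()); }

            // Every other test can simply assume a loaded object.
            void testDoWork() { CPPUNIT_ASSERT_EQUAL(42, w.doWork()); }
        };
        CPPUNIT_TEST_SUITE_REGISTRATION(WidgetTest);

    A loading bug still fails the whole suite, but the failures are reported from setUp() rather than from each test's own logic, which makes the root cause obvious.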

  • How to apply verification and validation to the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably because of the language barrier in technical English. An example:

    Requirement specification: the user wants to control the lights in 4 rooms by remote command, sent from the UI for each room separately.

    Functional specification: the UI will contain 4 checkboxes, labelled according to the rooms they control. When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.

    Let me start with what I learned here. Verification, according to many great answers here, ensures that the product reflects the specified requirements: as the functional spec is written by the producer based on the customer's requirements, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design 4 checkboxes?), and the source code against the design (is there code for 4 checkboxes, functions to send the signals, etc.; is it traceable to the requirements?).

    Okay, the product is built and we need to test it: validate it. Here comes our understanding trouble. Validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?). But testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product. Isn't that verification? (It should not be; let's stick to ISO 12207, where only validation is the actual testing.)

  • Why is Node.js considered so "unique"?

    - by Adrian Shum
    In recent years there has been a lot of praise for Node.js. I am not a developer with much exposure to network applications. From my bare understanding of Node.js, its strength is: we have only one thread handling multiple connections, providing an event-based architecture.

    However, what if in Java I have only one thread, use NIO/AIO (which are non-blocking APIs, from my bare understanding), handle multiple connections using that thread, and provide an event-based architecture to implement the data-handling logic (shouldn't be that difficult by providing some callbacks, etc.)? Given that the JVM is an even more mature VM than V8 (I expect it to run faster, too), and that an event-based handling architecture doesn't seem difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?

  • Best Ruby Git library?

    - by Jeff Welling
    Which is the best Git library to use in Ruby: Git, Grit, Rugged, something else? Background: I'm the current maintainer of TicGit-ng, which is a distributed offline ticket system built on git. I've read and heard over and over again that Grit is the one I should use because it supersedes the Git gem, but there seems to be either a lack of documentation or a lack of features, because I and others have failed in trying to switch from the deprecated-but-functional Git gem to the newer Grit.

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium-sized programming projects and have no experience working in a team environment. Currently, there will be three of us in an in-house software development team tasked with developing a number of applications for an academic institution. We have decided to target the web for the majority of the projects and are planning to choose Ruby on Rails, and I would like to ask for your input, advice, and approaches with regard to developing software as a team using the RoR web framework.

    One thing that has really confounded me is how you divide the programming tasks of a project when there are three of you actually doing the coding. It's obvious that we as developers approach a problem in a modular way and finish modules one after another. If the project consists of three modules, should each of us focus on one of those modules? Would it be faster that way? What if the three of us focused on one module first (that's what I'd really prefer)? Is using a distributed version control system such as Git the answer to this type of problem? Please don't forget to share your tips and experiences with regard to team software development. Cheers!

  • How should I practice web server administration?

    - by Astyanax
    Security students can practice their skills with software like OWASP's WebGoat or something similar to "hackthissite". Students interested in operating systems can study MINIX and PintOS, write shell scripts, or study POSIX system calls. What would be the best course of action to practice server administration? Is there any such software/resource available that teaches these skills in small lessons, or is it totally up to you? I've practiced live FreeBSD server administration and management of VMs (CentOS, Gentoo, Debian) under VirtualBox, but I always feel that this isn't enough and I must push myself harder. So, what would you recommend? What has worked for you?

  • ORM and component-based architecture

    - by EagleBeek
    I have joined an ongoing project where the team calls its architecture "component-based". The lowest level is one big database. The data access (via ORM) and business layers are combined into various components, which are separated according to business logic. E.g., there's a component for handling bank accounts, one for generating invoices, etc. The higher levels of service contracts and presentation are irrelevant to the question, so I'll omit them here.

    From my point of view, the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer, I have to identify the customer with the "customers" component and then make another call to the "invoices" component to get the invoices for this customer. My impression is that it would be better to leave the data access in one component and separate it from the business logic, which may well be cut into various components. Does anybody have some advice? Have I overlooked something?

  • How do programmers balance upper- and lower-case styles for naming files and folders between work and life?

    - by sojyq
    I am a programmer from China, and I like to use English words to name my files and folders, whether for work or life: for example, Movie, Work, QtProjects, Music, and so on. On Windows I keep the habit of capitalizing the first letter of file and folder names.

    But now I work on Ubuntu, and I found that all my file and folder names are lowercase, apart from the default folders such as Music, Movie, and so on. I have since realized that in the Linux world most people like to use all lowercase to name their files and folders, for two reasons: 1. Linux is case-sensitive. 2. Lowercase is faster to type in shell commands.

    After work, when I switch from Linux to Windows, I am torn between the all-lowercase and first-letter-uppercase styles for naming my files in Windows. I think all lowercase is more efficient, but first-letter uppercase is more readable. I thought for a long time and wanted to come up with a good way to balance the two naming conventions, but I failed. How do you balance the uppercase and lowercase habit across Windows, Mac, and Linux, between work and personal life? Thank you very much! (My current solution is that when I am in Linux I use all lowercase for files and folders, but in Windows and Mac OS X I can't find a good reason to convince myself to use all lowercase; there, the first-letter-uppercase style is more readable and beautiful to me.)

  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by the vendor of something with which one's code must interact is deficient in some way, in what cases is it better to:

    1. Work around the header's deficiencies in the main code
    2. Copy the header file to the local project and fix it
    3. Fix the header file in the spot where it's stored as a vendor-supplied tool
    4. Fix the header file in the central spot, but also make a local copy and try to always keep the two in sync
    5. Do something else

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            __IO uint16_t BSRRL;    /* BSRR register is split to 2 * 16-bit fields BSRRL */
            __IO uint16_t BSRRH;    /* BSRR register is split to 2 * 16-bit fields BSRRH */
            ....
        } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one will only be interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRL and BSRRH as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            union   // Allow BSRR access as 32-bit register or two 16-bit registers
            {
                __IO uint32_t BSRR;     // 32-bit BSRR register as a whole
                struct
                {
                    __IO uint16_t BSRRL, BSRRH;     // Two 16-bit parts
                };
            };
            ....
        } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is, but apply an icky typecast in the main code, writing what would idiomatically be thePort->BSRR = 0x12345678; as *((uint32_t *)&(thePort->BSRRL)) = 0x12345678;? Or would it be better to use a patched header file? If the latter, where should the patched file be stored and how should it be managed?
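
    If the vendor header must stay untouched, one middle ground is to confine the icky cast to a single local wrapper header, so that the main code never repeats it. A minimal sketch; the vendor include name is a hypothetical placeholder, and the helper assumes (as the vendor header implies) that the two halves are laid out contiguously:

        /* gpio_bsrr.h -- local wrapper around the vendor-supplied header */
        #include "stm32_vendor.h"   /* hypothetical name for the ST header */

        /* BSRRL/BSRRH are the adjacent 16-bit halves of the 32-bit BSRR
           register, so a single volatile 32-bit store through the address
           of the low half writes the whole register in one operation. */
        static inline void GPIO_WriteBSRR(GPIO_TypeDef *port, uint32_t value)
        {
            *(__IO uint32_t *)&port->BSRRL = value;
        }

    Callers then write GPIO_WriteBSRR(thePort, 0x12345678); and only this one file needs revisiting if the vendor ever fixes the struct.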

  • How do you organize your projects?

    - by Sergio
    Do you have any particular style of organizing projects? For example, currently I'm creating a project for a couple of schools here in Bolivia, and this is how I organized it:

        TutoMentor (Solution)
            TutoMentor.UI (Winforms project)
            TutoMentor.Data (Class library project)

    How exactly do you organize your projects? Do you have an example of something you organized and are proud of? Can you share a screenshot of the Solution pane? In the UI area of my application, I'm having trouble deciding on a good schema for organizing different forms and where they belong.

    Edit: What about organizing different forms in the .UI project? Where and how should I group the different forms? Putting them all at the root level of the project is a bad idea.

  • How would you design an application with many target platforms and devices?

    - by Pierre 303
    I'm at the very beginning of the design phase of an application that will have to run on the following platforms/devices:

    Desktop: Windows, Linux & Mac
    Mobile: Android, iPhone/iPad & Windows Phone 7
    Web: Silverlight

    I will use C# on Mono, and I want to maximize code reusability. Except for the desktop (where I'll use WinForms/GTK#), my concern is the many different GUIs I will have to face. What would be your approach? Obviously the views will be different, but what about the controllers, data access, utility classes, etc.? Is it really acceptable to share everything but the views?

  • Where to find algorithms work?

    - by Misha
    The most fun parts of my projects have been the back-end algorithms work. I have worked on projects where I implemented Gaussian mixture models, a Remez algorithm, and a few Monte Carlo schemes. I loved figuring out how these processes worked and tuning them when they didn't. I recently graduated, and my problem lies in the work I have been able to find. The only jobs I have found, with my Electrical Engineering degree, are for writing user applications: tasks such as fashioning web interfaces or front-ends for hardware devices. When I speak with potential employers about my interests, they say they have no work of that sort. Where does one find work that involves implementing these kinds of schemes?

  • Skills to Focus on to land Big 5 Software Engineer Position

    - by Megadeth.Metallica
    Guys, I'm in my penultimate quarter of grad school and have a software engineering internship lined up at a Big 5 tech company. I have dabbled a lot recently in Python and am average at Java. I want to prepare myself for coding interviews when I apply for new-grad positions at the Big 5 tech companies when I graduate at the end of this year. Since I want to have a good shot at all five companies (Amazon, Google, Yahoo, Microsoft, and Apple), should I focus my time and effort on mastering and improving my Java? Or is my time better spent checking out other languages and tools (I'm attracted to RoR, Clojure, Git, C#)? I am planning to spend my spring break implementing all the common algorithms and data structures from my algorithms textbook in Java.

  • Defining a function that is both a generator and recursive

    - by user96454
    I am new to Python, so this code might not necessarily be clean; it's just for learning purposes. I had the idea of writing this function that would display the tree down the specified path. Then I added the global variable number_of_py to count how many Python files were in that tree. That worked as well. Finally, I decided to turn the whole thing into a generator, but the recursion breaks. My understanding of generators is that once next() is called, Python just executes the body of the function and "yields" a value until we hit the end of the body. Can someone explain why this doesn't work? Thanks.

        import os
        from sys import argv

        script, path = argv

        number_of_py = 0
        lines_of_code = 0

        def list_files(directory, key=''):
            global number_of_py
            files = os.listdir(directory)
            for f in files:
                real_path = os.path.join(directory, f)
                if os.path.isdir(real_path):
                    list_files(real_path, key=key + ' ')
                else:
                    if real_path.split('.')[-1] == 'py':
                        number_of_py += 1
                        with open(real_path) as g:
                            yield len(g.read())
                        print key + real_path

        for i in list_files(argv[1]):
            lines_of_code += i

        print 'total number of lines of code: %d' % lines_of_code
        print 'total number of py files: %d' % number_of_py

  • In MVC, should the DAO be called from the Controller or the Model?

    - by tito
    I have seen various arguments against the DAO being called from the Controller class directly, and also against the DAO being called from the Model class. In fact, I personally feel that if we are following the MVC pattern, the controller should not be coupled with the DAO; rather, the Model class should invoke the DAO from within, and the controller should invoke the Model class. That is because we can then decouple the Model class from the web application and expose its functionality in other ways, for example to a REST service. If we write the DAO invocation in the controller, it would not be possible for a REST service to reuse the functionality, right? I have summarized both approaches below.

    Approach #1

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                new CustomerDAO().save(customer);
            }
        }

    Approach #2

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                customer.save(customer);
            }
        }

        public class Customer {
            ...........
            private void save(Customer customer) {
                new CustomerDAO().save(customer);
            }
        }

    Note: here is a definition of the Model. The Model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In event-driven systems, the model notifies observers (usually views) when the information changes so that they can react. I would need an expert opinion on this, because I see many using #1 or #2. So which one is it?

  • Are R&D mini-projects a good activity for interns?

    - by dukeofgaming
    I'm going to be in charge of hiring some interns for our software department soon (automotive infotainment systems), and I'm designing an internship program. The main productive-activity "menu" I'm planning for them consists of:

    - Verification testing
    - Writing unit tests (automated, with an xUnit-compliant framework [several languages in our projects])
    - Documenting code
    - Updating the wiki
    - Updating diagrams & design docs
    - Helping with low-priority tickets (supervised/mentored)
    - Hunting down & cleaning compiler/run-time warnings
    - Refactoring/cleaning code against our coding standards

    But I also have this idea that having them do small R&D projects would be good to test their talent and get them to have fun. These mini-projects would be:

    - Experimental implementations & optimizations
    - Proof-of-concept implementations for new technologies
    - Small papers (~2-5 pages) doing formal research on the previous two points
    - Apps (from a mini-project pool)

    These kinds of projects would be pre-defined and very concrete, although new ideas from the interns themselves would be very welcome. Even if a project is too big or is abandoned, the idea would also be to lay the groundwork so it can be retaken by another intern or intern team. While I think this is good in concept, I don't know if it would be good in practice, as it would obviously diminish their productivity on "real work" (work with immediate value to the company); but I think it could help bring aboard very bright people and make them want to stay in the future (which, I think, is the end goal of any internship program).

    My question here is whether these activities are too open-ended or too difficult for the average intern to accomplish, and whether R&D is an efficient use of an intern's time or whether it makes more sense to assign project work to interns instead.

  • Extract all related class type aliases and enums into one file, or not?

    - by Chen OT
    I have many models in my project, and some other classes need only the class declarations and pointer type aliases, not the class definitions, so I don't want them to include the model header files. I extracted all of the models' declarations into one file, so that every class can reference a single file.

    model_forward.h:

        #include <memory>

        class Cat;
        typedef std::shared_ptr<Cat> CatPointerType;
        typedef std::shared_ptr<const Cat> CatConstPointerType;

        class Dog;
        typedef std::shared_ptr<Dog> DogPointerType;
        typedef std::shared_ptr<const Dog> DogConstPointerType;

        class Fish;
        typedef std::shared_ptr<Fish> FishPointerType;
        typedef std::shared_ptr<const Fish> FishConstPointerType;

        enum CatType {RED_CAT, YELLOW_CAT, GREEN_CAT, PURPLE_CAT};
        enum DogType {HATE_CAT_DOG, HUSKY, GOLDEN_RETRIEVER};
        enum FishType {SHARK, OCTOPUS, SALMON};

    Is this acceptable practice? Should I make every unit that needs a class declaration depend on one file? Does it cause high coupling? Or should I move these pointer type aliases and enum definitions back inside the classes?

    cat.h:

        class Cat
        {
            typedef std::shared_ptr<Cat> PointerType;
            typedef std::shared_ptr<const Cat> ConstPointerType;
            enum Type {RED_CAT, YELLOW_CAT, GREEN_CAT, PURPLE_CAT};
            ...
        };

    dog.h:

        class Dog
        {
            typedef std::shared_ptr<Dog> PointerType;
            typedef std::shared_ptr<const Dog> ConstPointerType;
            enum Type {HATE_CAT_DOG, HUSKY, GOLDEN_RETRIEVER};
            ...
        };

    fish.h:

        class Fish
        {
            ...
        };

    Any suggestion will be helpful.

  • Have you used a framework/library under the LGPL? If so, what were your customers' impressions?

    - by Smarty Twiti
    I am trying to make my first app for sale, and I would like to ask some questions of those who have already sold their software. Have you used a framework or library under the LGPL? If so, what were your customers' impressions? For example, did your customers or competitors learn the technology/secrets you used in your solution, given that the LGPL requires you to link such libraries dynamically (e.g. as a .DLL) and to state clearly that you use the library/framework?

    Full story: for my project I used a dual-licensed (LGPL/commercial) framework. The commercial license was too expensive (about 3000 USD), which pushed me to use the LGPL one; however, I am still concerned. That is why I ask for advice, and especially for your motivations.

  • Field symbol and Data reference in SAP-ABAP

    - by PDP-21
    If we compare field symbols and data references to pointers in C, I conclude the following. In C, say we declare a variable "var" of type integer with the default value "5". The variable "var" will be stored somewhere in memory, and say the memory address which holds this variable is "1000". Now we define a pointer "ptr" and assign it to our variable. So "ptr" will be "1000" and "*ptr" will be 5.

    Let's compare the above situation with SAP ABAP. Here we declare a field symbol "fs" and assign it to the variable "var". Now my question is: what does "fs" hold? I have searched the internet rigorously and found that many ABAP consultants are of the opinion that fs holds the address of the variable, i.e. 1000. But that is wrong. While debugging, I found that fs holds only 5. So fs (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong.

    Now let's declare a data reference "dref" and another field symbol "fsym"; after creating the data reference, we assign it to the field symbol, and then we can do operations on this field symbol. So the difference between a data reference and a field symbol is:

    - In the case of a field symbol, we first declare a variable and assign it to the field symbol.
    - In the case of a data reference, we first create the data reference and then assign it to a field symbol.

    Then what is the use of a data reference? The same functionality can be achieved through a field symbol alone.
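
    To pin down the C side of the analogy, a minimal snippet (the address 1000 is illustrative, of course):

        int var = 5;        /* stored somewhere in memory, say at address 1000 */
        int *ptr = &var;    /* ptr itself holds 1000; *ptr evaluates to 5      */

        /* *ptr behaves like an ABAP field symbol: a name that directly
           designates the variable's value.  ptr behaves like an ABAP data
           reference: it holds an address and must be dereferenced (assigned
           to a field symbol) before the value can be used.                   */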

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or nearly so benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that, because they are immutable, they are thread-safe.

    However, the reason that persistent data structures are thread-safe is that if one thread "adds" an element to a persistent collection, the operation returns a new collection like the original but with the element added; other threads therefore still see the original collection. The two collections share a lot of internal state, of course; that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that must be visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms.

    Why, then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or in solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
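
    A minimal sketch of the structural sharing described above: an immutable cons list where "adding" returns a new list that shares its tail with the original (illustrative C++, not taken from any particular PDS library):

        #include <memory>

        template <typename T>
        struct PList {
            struct Node {
                T head;
                std::shared_ptr<const Node> tail;
            };
            std::shared_ptr<const Node> root;

            // "Adding" never mutates: it returns a new list whose tail is
            // the old list, so existing readers keep their own version.
            PList push(T value) const {
                return PList{ std::make_shared<const Node>(
                    Node{ std::move(value), root }) };
            }
        };

        int main() {
            PList<int> a;               // []
            PList<int> b = a.push(1);   // [1]     a is still []
            PList<int> c = b.push(2);   // [2, 1]  shares b's single node
        }

    This is exactly the situation in the question: the thread holding a still sees an empty list after another thread builds c. Each snapshot is safe to read without locks, but publishing the newest snapshot between threads still requires an atom, a reference, or a similar device.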

  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL, is obviously the better way. Sometimes use of a simple C function pointer is much, much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "if you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++"; I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs. C++; this question is all about C++11.

    C++11 introduces what I think are significant changes to how C++ works, but it has also introduced many special cases that change how different features behave in different circumstances: placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11, much as when you say C nowadays you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features, simply because my colleagues have.

    Take C, for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What "good" means is that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I am not so sure about. I am not sure that the changes introduced in C++11 make the language better. So the question is: why would I learn C++11?

    Update: my original question, in short, was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate someone pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument.

    P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.
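
    For concreteness, here is the flavor of change under discussion: the C-style function-pointer callback the question praises, next to its C++11 lambda counterpart (a toy example):

        #include <algorithm>
        #include <cstdio>
        #include <cstdlib>
        #include <vector>

        // C style: a named comparison function, passed to qsort by pointer.
        static int compare_ints(const void *a, const void *b)
        {
            return *(const int *)a - *(const int *)b;
        }

        int main()
        {
            int arr[] = {3, 1, 2};
            std::qsort(arr, 3, sizeof(int), compare_ints);

            // C++11 style: the comparison is an inline, anonymous lambda.
            std::vector<int> v = {3, 1, 2};
            std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });

            for (int x : v) std::printf("%d ", x);   // prints: 1 2 3
        }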

  • How to do thread management in C++?

    - by Dipan Mehta
    We use pthreads for thread management in C-based systems, and pthreads code is in general compilable by a C++ compiler (like g++). However, what are better abstractions for threads in C++?

    Also, to make any system work correctly when multi-threaded, it is important to make it thread-safe. Which standard library facilities require alternatives (or special builds) to be thread-safe, or are simply unsafe in multi-threaded environments? Do smart pointers or templates require special measures to make them safe? What are the best practices for thread management in C++?
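
    Since C++11, the standard library itself provides the usual abstraction over pthreads: std::thread, std::mutex, and friends. A minimal sketch:

        #include <iostream>
        #include <mutex>
        #include <thread>
        #include <vector>

        std::mutex cout_mutex;   // serializes access to std::cout

        void worker(int id)
        {
            // RAII locking: the mutex is released when 'lock' goes out of
            // scope, even if an exception is thrown.
            std::lock_guard<std::mutex> lock(cout_mutex);
            std::cout << "worker " << id << " running\n";
        }

        int main()
        {
            std::vector<std::thread> pool;
            for (int i = 0; i < 4; ++i)
                pool.emplace_back(worker, i);
            for (std::thread &t : pool)
                t.join();        // every thread must be joined (or detached)
        }

    On the smart-pointer question: std::shared_ptr's reference count is updated atomically, but the object it points to still needs its own synchronization; templates as such need no special measures, since they are a purely compile-time mechanism.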

  • Why do so few large websites run a Microsoft stack?

    - by realworldcoder
    Off the top of my head, I can think of a handful of large sites which utilize the Microsoft stack: Microsoft.com, Dell, MySpace, PlentyOfFish, Stack Overflow, and Hotmail/Bing/Windows Live. However, based on observation, nearly all of the top 500 sites seem to be running other platforms. What are the main reasons there's so little market penetration? Cost? Technology limitations? Does Microsoft cater to corporate/intranet environments more than to public websites? I'm not looking for market-share numbers, but rather for why there is so little large-scale adoption of the MS stack.

  • Is an event loop just a for/while loop with optimized polling?

    - by Alan
    I'm trying to understand what an event loop is. Often the explanation is that in the event loop, you do something until you're notified that an event has occurred; you then handle the event and continue doing what you were doing before.

    To map the definition to an example: I have a server which "listens" in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server goes back to listening as before. However, this event happening and us getting notified "just like that" is too much for me to handle. You can say: "It's not 'just like that'; you have to register an event listener." But what is an event listener if not a function which, for some reason, isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end?

    Events are a nice abstraction to work with, but just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the programming language implementation or the OS) are doing it for us. It basically comes down to the following pseudocode, running somewhere low enough that it doesn't result in busy waiting:

        while True:
            do stuff
            check if event has happened (poll)
            do other stuff

    This is my understanding of the whole idea, and I would like to hear whether it is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards.
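
    For comparison, here is roughly what the lowest visible layer looks like on a POSIX system: a single-threaded loop in which select() is the "check if event has happened" step, except that the kernel blocks the process until a descriptor is actually ready, so nothing busy-waits. A bare sketch with all error handling omitted:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main()
        {
            int listener = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr = {};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(8080);
            bind(listener, (sockaddr *)&addr, sizeof addr);
            listen(listener, 16);

            for (;;) {                                  // the event loop
                fd_set readable;
                FD_ZERO(&readable);
                FD_SET(listener, &readable);
                // Blocks until the kernel reports a ready descriptor: a
                // notification, not a spin over "has it happened yet?".
                select(listener + 1, &readable, 0, 0, 0);
                if (FD_ISSET(listener, &readable)) {    // the "event"
                    int conn = accept(listener, 0, 0);
                    char buf[512];
                    ssize_t n = read(conn, buf, sizeof buf);  // handle it
                    if (n > 0) write(STDOUT_FILENO, buf, n);
                    close(conn);
                }
            }
        }

    Registered "listeners" in higher-level frameworks are just callbacks stored in a table; the single loop above dispatches to them, so no listener needs a loop (or a thread) of its own.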

  • Canonical representation of a class object containing a list element in XML

    - by dendini
    I see that most implementations of JAX-RS represent a class object containing a list of elements as follows (assume a class House containing a list of People):

        <houses>
            <house>
                <person>
                    <name>Adam</name>
                </person>
                <person>
                    <name>Blake</name>
                </person>
            </house>
            <house>
            </house>
        </houses>

    The result above is obtained, for instance, from the Jersey 2 JAX-RS implementation. Notice that Jersey creates a wrapper element "houses" around the house elements; strangely, however, it doesn't create a wrapper element around the person elements! I don't feel this is a correct mapping of a list; in other words, I'd feel more comfortable with something like this:

        <houses>
            <house>
                <persons>
                    <person>
                        <name>Adam</name>
                    </person>
                    <person>
                        <name>Blake</name>
                    </person>
                </persons>
            </house>
            <house>
            </house>
        </houses>

    Is there any document explaining how an object should be correctly mapped, apart from opinions?
