Search Results

Search found 21023 results on 841 pages for 'computer architecture'.


  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split the different phases into standalone modules which would then be used as needed (since sometimes I will only be interested in the output of module A, and sometimes I would need the output of five other modules, etc.). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover, I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for modules B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments and run the required modules (and perhaps give some output, or should that be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly for code readability and maintainability, and to let more people work on different modules without needing some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation; the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, PDF books...).
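
    One way to picture the "compute results once, share them between modules" requirement is a dependency-driven pipeline with result caching. The sketch below is purely illustrative (written in Python for brevity, with hypothetical module names); the same shape maps directly onto C++ classes or function objects.

    ```python
    # Illustrative sketch: modules declare dependencies; the runner caches outputs
    # so no module ever runs twice for the same input.
    class Pipeline:
        def __init__(self):
            self.modules = {}   # name -> (dependencies, function)
            self.results = {}   # name -> cached output

        def register(self, name, deps, func):
            self.modules[name] = (deps, func)

        def run(self, name, data):
            if name in self.results:                      # computed already? reuse it
                return self.results[name]
            deps, func = self.modules[name]
            dep_results = {d: self.run(d, data) for d in deps}
            self.results[name] = func(data, dep_results)
            return self.results[name]

    # Hypothetical modules: "offsets" feeds "blocks"; "offsets" is computed only once.
    pipeline = Pipeline()
    pipeline.register("offsets", [], lambda data, deps: [0, len(data) // 2])
    pipeline.register("blocks", ["offsets"],
                      lambda data, deps: [data[o:o + 4] for o in deps["offsets"]])
    print(pipeline.run("blocks", b"example binary data"))
    ```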

    Read the article

  • C# multithreaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being populated into the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I would like to have reader and writer thread(s) as follows: while the reader thread(s) are reading the files, I'd like to have writer thread(s) process them. Once the reader has started reading a file, I'd like to mark it as being processed, for example by renaming it; once it is read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use that would avoid locks? Would you have a better approach to this scheme that you would like to share?
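
    A common shape for this is a claim-by-rename producer/consumer pipeline. The following is only a sketch of that pattern (shown in Python; in C# a ConcurrentQueue or BlockingCollection would typically play the queue's role), with a hypothetical folder name and a placeholder process() step.

    ```python
    # Illustrative sketch: the reader claims files by renaming them, the writer
    # processes whatever has been claimed. Folder name and processing are made up.
    import os, queue, threading

    work = queue.Queue()

    def process(text):
        pass  # placeholder for the real parsing/processing logic

    def reader(folder):
        os.makedirs(folder, exist_ok=True)
        for name in os.listdir(folder):
            if not name.endswith(".txt"):
                continue
            claimed = os.path.join(folder, name + ".processing")
            try:
                os.rename(os.path.join(folder, name), claimed)   # claim the file
            except OSError:
                continue                                         # already claimed elsewhere
            work.put(claimed)
        work.put(None)                                           # sentinel: no more work

    def writer():
        while True:
            path = work.get()
            if path is None:
                break
            with open(path) as f:
                process(f.read())
            os.rename(path, path.replace(".processing", ".completed"))

    threading.Thread(target=reader, args=("incoming",), daemon=True).start()
    writer()
    ```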

    Read the article

  • Allow for modular development while still running in same JVM?

    - by Marcus
    Our current app runs in a single JVM. We are now splitting up the app into separate logical services where each service runs in its own JVM. The split is being done to allow a single service to be modified and deployed without impacting the entire system. This reduces the need to QA the entire system - we just need to QA the interaction with the service being changed. For inter-service communication we use a combination of REST, an MQ system bus, and database views. What I don't like about this:
    - REST means we have to marshal data to/from XML
    - DB views couple the systems together, which defeats the whole concept of separate services
    - MQ / system bus is added complexity
    - There is inevitably some code duplication between services
    - We have to set up n JBoss server configurations, do n deployments, n setup scripts, etc., etc.
    Is there a better way to structure an internal application to allow modular development and deployment while allowing the app to run in a single JVM (and achieving the associated benefits)?

    Read the article

  • List of fundamental data structures - what am I missing?

    - by jboxer
    I've been studying my fundamental data structures a bunch recently, trying to make sure I've got them down cold. By "fundamental", I mean the real basic ones. Fancy ones like Red-Black Trees and Bloom Filters are clearly worth knowing, but they're usually either enhancements of fundamental ones (Red-Black Trees are binary search trees with special properties to keep them balanced) or they're only useful in very specific situations (Bloom Filters). So far, I'm "fluent" in the following data structures: Arrays Linked Lists Stacks/Queues Binary Search Trees Heaps/Priority Queues Hash Tables However, I feel like I'm missing something. Are there any fundamental ones that I'm forgetting about? EDIT: Added these after posting the question Strings (suggested by catchmeifyoutry) Sets (suggested by Peter) Graphs (suggested by Nick D and aJ) B-Trees (Suggested by tloach) I'm a little on-the-fence about whether these are too fancy or not, but I think they're different enough from the fundamental structures (and important enough) to be worth studying as fundamental.

    Read the article

  • I wrote my own web framework: now, how do I keep it in sync with applications? Must I use versions?

    - by Daniel Koch
    ... and I built the first web application using it; now I'm going to create the second. In this first web application I enhanced the framework's core library with new things and promptly updated the framework branch. I'm using bazaar to keep the framework and the web application committed. The application was, in the beginning, a full branch of the framework source tree; now I'm updating the framework manually at every change to core files (copying changed files from the web app to the framework's branch). With this second web application that I'm going to create, I need to know which version (or revision) the application is based on. If I find a bug in this version I can fix it and then sync files with the first web application without worrying: the functions will be the same for this application. If I'm going to make changes to the core (new behavior, new functions in the library, or something new in the source tree) it must be named as a "new version". What's the best way to do this? Because I'm using a Distributed Version Control System (bazaar), I'm not dealing with VERSIONS, but with revision numbers that change every time. Please refresh my mind with new ideas.

    Read the article

  • Spring 2.0.0/2.0.6 to 3.0.5 migration stories

    - by Pangea
    We are in the process of migrating from Spring 2.0.x to 3.0.5. We mainly use Spring in the scenarios below:
    - custom scope: thread-local scope
    - persistence: JDBC + Hibernate 3.6 (but moving to a mix of EJB 3.0 + JPA 2.0 + Hibernate; not sure if all 3 can co-exist in 1 app)
    - transactions: local (but planning to use JTA due to the necessity of using multiple persistence units, and having to use EJB + JPA + Hibernate in 1 single transaction), declarative transaction management
    - parent-child contexts
    - CXF annotations + XML
    - OracleLobHandler
    - Resource/ResourceBundleMessageResource
    - JSF/Facelets with FacesSpringVariableResolver
    - ActiveMQ integration
    - Quartz integration
    - TaskExecutor
    - JMX exporter
    - HttpExporter/Invoker
    I would appreciate it if someone could share their experiences, such as:
    - what to watch out for (headaches/pain points)
    - which ones to drop for better alternate choices in the new 3.0.5 release
    - is it better to switch from the commons/iscreen validator to Hibernate Validator (the spec impl) or Spring Validator
    - is there a bean mapping framework in Spring that I can use instead of Dozer
    - XSLT transformation helper: currently we have a small homegrown framework to cache XSLTs during load; if Spring can do that for me then I would like to drop this
    - encryption/decryption support, password generation support, authentication with SALT
    - any SAML (or claims-based) security
    - new ideas, suggestions
    - switching to the latest version of AspectJ
    - an upgrade guide from 2.5 to 3.0.5

    Read the article

  • Mutual Information / Entropy Calculation Help

    - by Fillip
    Hi, I'm hoping someone can give me some pointers with this entropy problem. Say X is chosen randomly from the uniform integer distribution 0-32 (inclusive). I calculate the entropy as H(X) = log2(33) ≈ 5.04 bits, as each Xi has equal probability of occurring. Now, say the following pseudocode executes:
    int r = rand(0,1); // a random integer 0 or 1
    r = r * 33 + X;
    How would I work out the mutual information between the two variables r and X? Mutual information is defined as I(X; Y) = H(X) - H(X|Y), but I don't really understand how to apply the conditional entropy H(X|Y) to this problem. Thanks
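
    As a sanity check, and purely as an added illustration (not part of the original question), one can enumerate the joint distribution of X and r and use I(X; r) = H(X) + H(r) - H(X, r). Because X = r mod 33 can be recovered exactly from r, H(X|r) = 0 and the mutual information equals H(X) = log2(33) ≈ 5.04 bits.

    ```python
    # Brute-force sketch: enumerate all (X, r) pairs for r = 33*b + X,
    # with X uniform on 0..32 and b uniform on {0, 1}.
    from collections import Counter
    from math import log2

    pairs = [(x, 33 * b + x) for x in range(33) for b in (0, 1)]
    n = len(pairs)

    def entropy(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())

    H_X  = entropy(Counter(x for x, _ in pairs))
    H_R  = entropy(Counter(r for _, r in pairs))
    H_XR = entropy(Counter(pairs))

    I = H_X + H_R - H_XR      # I(X; r) = H(X) + H(r) - H(X, r)
    print(H_X, I)             # both ~5.044 bits: r determines X via X = r mod 33
    ```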

    Read the article

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.
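
    To make the decoupling concrete, here is a rough sketch (in Python, with hypothetical names; it is not a recommendation of a specific Java library) of the shape being described: the business layer only enqueues rendered messages, and a separate sender drains the queue at a throttled rate. Bounce handling and persistence would hang off the queue side rather than off the business objects.

    ```python
    # Illustrative sketch only: business layer enqueues, a sender loop drains.
    # 'transport' stands in for whatever actually talks to the MTA (e.g. JavaMail).
    import queue, time

    outbox = queue.Queue()

    def enqueue_announcements(users, render):
        """Business layer: render and enqueue, then return immediately."""
        for user in users:
            outbox.put((user["email"], render(user)))

    def send_loop(transport, per_second=10.0):
        """Mailing layer: throttled drain; its state could be persisted between runs."""
        while True:
            address, body = outbox.get()
            if address is None:              # sentinel to stop the loop
                break
            transport.send(address, body)
            time.sleep(1.0 / per_second)

    class PrintTransport:                    # stand-in transport for the example
        def send(self, address, body):
            print("->", address, body)

    enqueue_announcements([{"email": "a@example.org"}], lambda u: "hello")
    outbox.put((None, None))
    send_loop(PrintTransport())
    ```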

    Read the article

  • What makes people think that NNs have more computational power than existing models?

    - by Bubba88
    I've read on Wikipedia that neural-network functions defined on a field of arbitrary real/rational numbers (along with algorithmic schemas and the speculative `transrecursive' models) have more computational power than the computers we use today. Of course it was a page of the Russian Wikipedia (ru.wikipedia.org) and that may not be properly proven, but that's not the only source of such... rumors. Now, the thing that I really do not understand is: how can a string-rewriting machine (NNs are exactly string-rewriting machines just as Turing machines are; only the programming language is different) be more powerful than a universally capable U-machine? Yes, the descriptive instrument is really different, but the fact is that any function of such a class can be (easily or not) turned into a legal Turing machine. Am I wrong? Am I missing something important? What is the cause of people saying that? I do know that the phenomenon of undecidability is widely accepted today (though not consistently proven, according to what I've read), but I do not really see the smallest chance of NNs being able to solve that particular problem. Add-in: "not consistently proven according to what I've read" - I meant that you might want to take a look at A. Zenkin's (Russian mathematician) papers from after the mid-90s, where he persuasively postulates the wrongness of G. Cantor's concepts, including transfinite sets, uncountable sets, the diagonalization method (the method used in the proof of undecidability by Turing) and maybe others. Even Goedel's incompleteness theorems were only proven the right way in the 21st century... That's all just to plug Zenkin's work into the post, because I don't know how widespread that knowledge is in the CS community, so forgive me if that looked stupid. Thank you!

    Read the article

  • How many layers is too many?

    - by Nathan
    As I have been learning about software development over the last 2 years, it seems the more I learn, the more gray areas I run into. One gray area I have issues with right now is trying to decide how many layers an application should have. For example, in a WPF MVVM application, what fashion of layering is OK? Is the following too separated? (When I mention layering I mean creating a new class library for each layer.)
    - Presentation (View)
    - View Model
    - Business Layer
    - Data Access
    - Model Layer
    - Utility Layer
    Or for a non-MVVM application, is this too separated?
    - Presentation
    - Business
    - Data Access
    - Model Layer
    - Utility Layer
    Is it acceptable to run layers together and just create folders for each layer? Any coloring of this gray area would be appreciated.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems
    There are key characteristics of memory…
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative
    From a user's perspective the two most important characteristics of memory are…
    - Capacity
    - Performance – access time, memory cycle time, transfer rate
    The trade-off for memory happens along three axes…
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time
    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs…
    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor
    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways…
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles
    Cache memory is substantially faster than main memory. A caching system works as follows… When a processor attempts to read a word of memory, a check is made to see if this is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design
    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures…
    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches
    Cache Addresses
    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size
    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function
    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used…
    - Direct – simplest technique, maps each block of main memory into only one possible cache line
    - Associative – each main memory block can be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the textbook (pages 148 – 154).

    Replacement Algorithm
    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…
    - LRU (Least recently used)
    - FIFO (First in first out)
    - LFU (Least frequently used)
    - Random selection

    Write Policy
    When a block resident in the cache is to be replaced, there are two cases to consider:
    - If no writes to that block have happened in the cache – discard it
    - If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to main memory
    There are several approaches to achieve this, including…
    - Write Through – all writes to the cache are done to main memory as well at the point of the change
    - Write Back – when a block is replaced, it is written back to main memory if its dirty bit is set
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size
    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no single approach is judged to be the best in all circumstances.
    Pentium 4 and ARM cache organizations
    The Pentium 4 processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
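
    To make the mapping-function discussion concrete, here is a small added sketch (not part of the original notes; the cache parameters are hypothetical) of how a direct-mapped cache splits an address into tag, line, and offset fields.

    ```python
    # Hypothetical direct-mapped cache: 64-byte lines, 1024 lines (64 KiB total).
    LINE_SIZE = 64
    NUM_LINES = 1024

    def decompose(address):
        offset = address % LINE_SIZE                  # byte within the line
        line   = (address // LINE_SIZE) % NUM_LINES   # cache line the block maps to
        tag    = address // (LINE_SIZE * NUM_LINES)   # identifies which block occupies the line
        return tag, line, offset

    # Two addresses exactly 64 KiB apart map to the same line with different tags,
    # so in a direct-mapped cache they evict each other.
    print(decompose(0x12345))
    print(decompose(0x12345 + LINE_SIZE * NUM_LINES))
    ```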

    Read the article

  • MyController class must produce a class according to the enum type

    - by programmerist
    GenoTipController must produce a class according to the enum type. I have 3 classes: _Company, _Muayene, _Radyoloji. I also have a CompanyView class with a GetPersonel method. If you look at GenoTipController, my code needs refactoring: I need a class to be produced according to the enum type. For example, case DataModelType.Radyoloji must return radyoloji = new Radyoloji(). Must everything be one switch case?

    public class GenoTipController
    {
        public _Company GenerateCompany(DataModelType modeltype)
        {
            _Company company = null;
            switch (modeltype)
            {
                case DataModelType.Radyoloji: break;
                case DataModelType.Satis: break;
                case DataModelType.Muayene: break;
                case DataModelType.Company: company = new Company(); break;
                default: break;
            }
            return company;
        }
        public _Muayene GenerateMuayene(DataModelType modeltype)
        {
            _Muayene muayene = null;
            switch (modeltype)
            {
                case DataModelType.Radyoloji: break;
                case DataModelType.Satis: break;
                case DataModelType.Muayene: muayene = new Muayene(); break;
                case DataModelType.Company: break;
                default: break;
            }
            return muayene;
        }
        public _Radyoloji GenerateRadyoloji(DataModelType modeltype)
        {
            _Radyoloji radyoloji = null;
            switch (modeltype)
            {
                case DataModelType.Radyoloji: radyoloji = new Radyoloji(); break;
                case DataModelType.Satis: break;
                case DataModelType.Muayene: break;
                case DataModelType.Company: break;
                default: break;
            }
            return radyoloji;
        }
    }

    public class CompanyView
    {
        public static List GetPersonel()
        {
            GenoTipController controller = new GenoTipController();
            _Company company = controller.GenerateCompany(DataModelType.Company);
            return company.GetPersonel();
        }
    }

    public enum DataModelType { Radyoloji, Satis, Muayene, Company }
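
    One common refactoring, shown here only as a rough language-agnostic sketch (in Python, not as the finished C# answer), is to collapse the per-type switch statements into a single lookup table keyed by the enum value.

    ```python
    # Sketch: one factory table instead of one switch per product type.
    from enum import Enum

    class DataModelType(Enum):
        RADYOLOJI = 1
        SATIS = 2
        MUAYENE = 3
        COMPANY = 4

    class Radyoloji: pass
    class Muayene: pass
    class Company: pass

    FACTORIES = {
        DataModelType.RADYOLOJI: Radyoloji,
        DataModelType.MUAYENE: Muayene,
        DataModelType.COMPANY: Company,
        # DataModelType.SATIS has no product here, mirroring the original code
    }

    def generate(model_type):
        factory = FACTORIES.get(model_type)
        return factory() if factory else None

    company = generate(DataModelType.COMPANY)   # one lookup replaces three switches
    print(type(company).__name__)               # Company
    ```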

    Read the article

  • Is using private shared objects/variables at class level harmful?

    - by haansi
    Hello, thanks for your attention and time. I need your opinion on a basic architectural issue, please. In page code-behind classes I am using private, shared objects and variables (a list, or just a client, or simply an int id) to temporarily hold data coming from the database or a class library. Such an object is used temporarily to hold data and then to return it, pass it to some function, or bind it to a control. 1st: Can this approach do any harm? I couldn't analyze it, but one thought was that using such shared variables may overwrite the data when multiple users send requests at the same time. 2nd: Please also comment on using such variables in the BLL (to hold data coming from the DAL/database). In this example a new object of the BLL class will be made every time. Here is sample code:

    public class ClientManager
    {
        Client objclient = new Client(); // Used in 1st and 2nd method
        List<Client> clientlist = new List<Client>(); // used in 3rd and 4th method
        ClientRepository objclientRep = new ClientRepository();

        public List<Client> GetClients()
        {
            return clientlist = objclientRep.GetClients();
        }
        public List<Client> SearchClients(string Keyword)
        {
            return clientlist = objclientRep.SearchClients(Keyword);
        }
        public Client GetaClient(int ClientId)
        {
            return objclient = objclientRep.GetaClient(ClientId);
        }
        public Client GetClientDetailForConfirmOrder(int UserId)
        {
            return objclientRep.GetClientDetailForConfirmOrder(UserId);
        }
    }

    I am really thankful to you for sparing the time and paying kind attention.

    Read the article

  • How to extend a large website to an iPhone app?

    - by xoail
    I am trying to create an iPhone app for a large website (as big as amazon.com). It involves using cookies and whatnot to get authenticated via the Apache interceptor and to access the web services exposed by the main website. For that I am looking for strategies on how to go about developing it. I am new to iPhone development and I am mostly looking for some architectural guidance. Does anyone know how services like eBay and Amazon work seamlessly across the website and the iPhone app?

    Read the article

  • How to organize and manage multiple database credentials in application?

    - by Polaris878
    Okay, so I'm designing a stand-alone web service (using RestLET as my framework). My application is divided into 3 layers:
    - Data Layer (just above the database; provides APIs for connecting to/querying the database, and a database object)
    - Object Layer (responsible for serialization from the data layer; provides objects which the client layer can use without worrying about the database)
    - Client Layer (this layer is the RestLET web service; basically it just creates objects from the object layer and fulfills web service requests)
    Now, for each object I create in the object layer, I want to use different credentials (so I can sandbox each object...). The object layer should not know the exact credentials (i.e. the login/pw/DB URL etc). What would be the best way to manage this? I'm thinking that I should have a superclass Database object in my data layer... and each subclass will contain the required log-in information... this way my object layer can just go Database db = new SubDatabase(); and then continue using that database. At the client level, they would just be able to go ItemCollection items = new ItemCollection(); and have no idea of, or control over, the database that gets connected. I'm asking this because I am trying to make my platform extensible, so that others can easily create services off of my platform. If anyone has any experience with these architectural problems or how to manage this sort of thing, I'd appreciate any insight or advice... Feel free to ask questions if this is confusing. Thanks! My platform is Java, the REST framework I'm using is RestLET, and my database is MySQL.
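
    A bare-bones sketch of that idea follows (illustrative only, written in Python with made-up connection details even though the real platform is Java): the credentials live inside data-layer subclasses, so the object layer only ever picks a subclass.

    ```python
    # Sketch: credentials are confined to the data layer; the object layer
    # instantiates a subclass without knowing the URL, user, or password.
    class Database:
        def __init__(self, url, user, password):
            self._url, self._user, self._password = url, user, password

        def query(self, sql, params=()):
            ...  # connect lazily and run the query (omitted in this sketch)

    class ItemDatabase(Database):
        def __init__(self):
            # Hypothetical sandboxed credentials for Item objects only
            super().__init__("jdbc:mysql://localhost/items", "item_user", "secret")

    class ItemCollection:
        def __init__(self):
            self._db = ItemDatabase()   # object layer picks the subclass, not the credentials

    items = ItemCollection()            # client layer stays completely unaware of the DB
    ```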

    Read the article

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void pointers to these generic modules, together with an identification of the type, like this:
    Car myCar;
    ShowNiceDialog((void *)&myCar, DATATYPE_CAR);
    The 'ShowNiceDialog' method then uses meta-information (functions that map DATATYPE_CAR to interfaces to get the actual data out of Car) to get information about the car, based on the given data type. That way, the generic logic only has to be written once, and not all over again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this:
    class RootClass
    {
    public:
        virtual string getName() const = 0;
    };
    class Car : public RootClass { ... };
    void ShowNiceDialog(RootClass *root);
    The problem is that in some cases we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with 2 characteristics:
    - A quantity (double, 8 bytes)
    - A boolean (1 byte)
    Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of the padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application and in some cases it needs 6 GB of memory). The void-pointer approach has the advantage that we can encode almost anything in a void pointer and decide how to use it if we want information from it (use it as a real pointer, as an index, ...), but at the cost of type safety. Templated solutions don't help since the generic logic forms quite a big part of the application, and we don't want to templatize all of it. Additionally, the data model can be extended at run time, which also means that templates won't help. Are there better (and type-safer) ways to handle this than a void pointer? Any references to frameworks, whitepapers, or research material regarding this?
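
    To illustrate just the memory arithmetic in the question above (this is an added sketch using Python's struct notation, not a proposed C++ design): packing the double and the bool contiguously takes 9 bytes per record, versus the 16-24 bytes of a padded class with a vtable pointer.

    ```python
    # Sketch of the space argument: a packed (double, bool) record is 9 bytes.
    import struct

    RECORD = struct.Struct("<d?")        # little-endian double (8) + bool (1) = 9 bytes
    print(RECORD.size)                   # 9

    buf = bytearray(RECORD.size * 3)     # three records stored contiguously
    RECORD.pack_into(buf, 0 * RECORD.size, 2.5, True)
    RECORD.pack_into(buf, 1 * RECORD.size, 7.0, False)
    print(RECORD.unpack_from(buf, RECORD.size))   # (7.0, False)
    ```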

    Read the article

  • How to merge file lines having the same first word in Python?

    - by user1377135
    I have written a program in Python to merge lines in a file that have the same first word. However, I am unable to get the desired output. Can anyone please point out the mistake in my program?
    Input "file.txt":
    line1: a b c
    line2: a b1 c1
    line3: d e f
    line4: i j k
    line5: i s t
    line6: i m n
    Desired output:
    a b c a b1 c1
    d e f
    i j k i s t i m n
    My code:
    a = [line.split() for line in open('file.txt')]
    L = []
    for i in range(0, len(a)):
        j = i
        while True:
            if a[j][0] == a[j+1][0]:
                L.append(a[j])
                L.append(a[j+1])
                j = j + 2
            else:
                print a[i]
                print L
                break
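
    For comparison, here is one possible working approach (an added sketch, not necessarily the fix the poster was after): group consecutive lines by their first word with itertools.groupby and join each group.

    ```python
    # Sketch: merge consecutive lines that share the same first word.
    from itertools import groupby

    with open('file.txt') as f:
        rows = [line.split() for line in f if line.strip()]

    for key, group in groupby(rows, key=lambda row: row[0]):
        merged = [word for row in group for word in row]
        print(' '.join(merged))
    # For the sample input this prints:
    # a b c a b1 c1
    # d e f
    # i j k i s t i m n
    ```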

    Read the article

  • Balancing a Binary Tree (AVL)

    - by Gustavo Carreno
    OK, this is another one in the theory realm for the CS guys around. In the 90's I did fairly well at implementing BSTs. The only thing I could never get my head around was the intricacy of the algorithm to balance a binary tree (AVL). Can you guys help me with this?
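
    For reference, here is a compact illustrative sketch (in Python; the variable names and structure are mine, not a canonical formulation) of the core AVL idea: track each node's height and, after an insert, apply a single or double rotation whenever the balance factor leaves the range [-1, 1].

    ```python
    # Minimal AVL insertion sketch with height tracking and rotations.
    class Node:
        def __init__(self, key):
            self.key, self.left, self.right, self.height = key, None, None, 1

    def height(n):
        return n.height if n else 0

    def update(n):
        n.height = 1 + max(height(n.left), height(n.right))
        return n

    def balance(n):
        return height(n.left) - height(n.right)

    def rotate_right(y):
        x = y.left
        y.left, x.right = x.right, y
        update(y)
        return update(x)

    def rotate_left(x):
        y = x.right
        x.right, y.left = y.left, x
        update(x)
        return update(y)

    def insert(n, key):
        if n is None:
            return Node(key)
        if key < n.key:
            n.left = insert(n.left, key)
        else:
            n.right = insert(n.right, key)
        update(n)
        b = balance(n)
        if b > 1:                        # left-heavy
            if balance(n.left) < 0:      # left-right case needs an extra rotation
                n.left = rotate_left(n.left)
            return rotate_right(n)
        if b < -1:                       # right-heavy
            if balance(n.right) > 0:     # right-left case
                n.right = rotate_right(n.right)
            return rotate_left(n)
        return n

    root = None
    for k in [1, 2, 3, 4, 5, 6]:         # ascending inserts would skew a plain BST
        root = insert(root, k)
    print(root.key)                      # 4: rotations kept the tree balanced
    ```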

    Read the article

  • Fitch Format Proofs - any resources around?

    - by devoured elysium
    I am currently studying Fitch-format first-order logic proofs. My lecturer closely follows Language, Proof and Logic by Jon Barwise. I am trying to do some proofs but I am having some trouble understanding how to do them. As I have already read what Language, Proof and Logic has to offer, I'd like to know if there are any other books or resources around that use the Fitch format for their formal proofs. Also, having solved exercises available would be of great(!) help. Thanks

    Read the article

  • What does it mean to do/determine something "programmatically"?

    - by Chris Lutz
    Programmatically. (alt. programmically) I've never used it, but I see it in questions a lot, i.e. "How to programmatically determine [insert task here]". Firefox immediately tells me that neither of these two words is real (at least, it doesn't recognize them). I've also never seen them used anywhere but here. 1) What does it mean to do/determine something "programmatically"? 2) Why do so many people ask how to do/determine something "programmatically"? Isn't it assumed that, if you're asking how to do something on a programming help board, you're asking how to do it "programmatically"? 3) Why is it that I've never seen the word "programmatically" anywhere else?

    Read the article

  • ASP.NET MVC & Web Services

    - by ANaimi
    Hello, Does adding a Web Service to my ASP.NET MVC project break the whole concept of MVC? That Web Service (WCF) depends on the Model layer from my MVC project to communicate with the back-end (so it looks to me like it needs to be part of the MVC solution). Should I add this to the Controller or Model layer?

    Read the article

  • What is better for a student programming in C++ to learn for writing GUIs: C# or Qt?

    - by flashnik
    I'm a teacher (instructor) of CS at the university. The course is based on Cormen and Knuth, and students program algorithms in C++. But sometimes it is good to show how an algorithm works, or just the result of a task, through a GUI. Also, in my opinion it's very important to be able to write full programs. They will have courses concerning GUIs, but three years later - in fact, just before graduation. I think that they should be able to write simple GUI applications earlier, so I want to teach them this. What do you think is more useful for them to learn: programming a GUI with Qt, or writing the GUI in C# and calling an unmanaged C++ library?

    Read the article

  • Decoupling into DAL and BLL - my concerns.

    - by novice_man
    Hi, in many posts concerning this topic I come across very simple examples that do not answer my question. Let's say I have a document table and a user table. In the DAL, written in ADO.NET, I have a method to retrieve all documents for some criteria. Now in the UI I have a case where I need to show this list along with the names of the creators. Up to now I have done it with one method in the DAL containing a JOIN statement. However, every time I have such a complex method I have to do custom mapping to some object that doesn't map 1:1 to the DB. Should it be put into another layer? If so, then I would have to give up the JOIN query in favor of iterating through the results and querying each document's author... which doesn't make sense (performance). What is the best approach for such scenarios?
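
    One widely used compromise is sketched below (for illustration only, in Python-style code with hypothetical names): keep the single JOIN in the DAL, but map its result to a small read-only view object (a DTO) rather than to the full Document/User entities.

    ```python
    # Sketch: the JOIN result is mapped once to a flat view object for the UI.
    from dataclasses import dataclass

    @dataclass
    class DocumentListItem:            # hypothetical DTO, not a 1:1 table mapping
        document_id: int
        title: str
        author_name: str

    def get_documents_with_authors(rows):
        # 'rows' stands in for the result set of the single JOIN in the DAL
        return [DocumentListItem(r["id"], r["title"], r["author"]) for r in rows]

    items = get_documents_with_authors(
        [{"id": 1, "title": "Spec", "author": "Alice"}])
    print(items[0].author_name)        # Alice
    ```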

    Read the article

  • Specifying Language for a grammar

    - by darkie15
    Hi all, is there any specific methodology followed to specify a language for a given grammar? I.e., is it necessary to run all the production rules given in a grammar to determine the language it represents? I don't have an example as such, since the one I am working on is a homework question. Regards, darkie15

    Read the article
