Search Results

Search found 9254 results on 371 pages for 'approach'.

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. A couple of other tables define the question text for each QuestionId and the demographics for each RespondentId. Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working.

    The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking: why am I "paying" to de-aggregate the data for each query? Why don't I cache that? The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported, then select their combined, cached, de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is that we have multiple clients, so that doesn't scale, and I don't want to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the result of SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I could then get my export with filtering plus XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], and so on for each question). Is there a better technology/approach for this? Thanks!
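
    A minimal sketch of the caching idea, with hypothetical table and column names (Respondent, Data, CachedAnswers, CompletedDate); FOR XML PATH is used instead of AUTO so the shape of the cached XML is explicit:

        -- Cache each respondent's answers as XML once, e.g. when they complete:
        UPDATE r
        SET    CachedAnswers = (SELECT d.QuestionId, d.Answer
                                FROM   Data d
                                WHERE  d.RespondentId = r.RespondentId
                                FOR XML PATH('row'), ROOT('data'), TYPE)
        FROM   Respondent r;

        -- Export: filter respondents first, then shred only their cached XML.
        SELECT r.RespondentId,
               r.CachedAnswers.value('(/data/row[QuestionId=1]/Answer)[1]',
                                     'nvarchar(50)') AS [Question 1]
        FROM   Respondent r
        WHERE  r.CompletedDate >= '20100101';

    The shredding cost then scales with the rows you actually export rather than with re-pivoting the whole answer table each time.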

  • How do you prove a function works?

    - by glenn I.
    I've recently gotten the testing religion and have started primarily with unit testing. I write unit tests which show that a function works under certain cases, specifically using the exact inputs I'm using; I may write a number of unit tests to exercise the function. Still, I haven't actually proved anything other than that the function does what I expect it to do under the scenarios I've tested. There may be other inputs and scenarios I haven't thought of, and thinking of edge cases is expensive, particularly on the margins. This is all not very satisfying to me. When I start to think of having to come up with tests to satisfy branch and path coverage, and then integration testing, the prospective permutations can become a little maddening. So, my question is: how can one prove (in the same vein as proving a theorem in mathematics) that a function works (and, in a perfect world, compose these "proofs" into a proof that a system works)? Is there a certain area of testing that covers an approach where you seek to prove a system works by proving that all of its functions work? Does anybody outside of academia bother with an approach like this? Are there tools and techniques to help? I realize that my use of the word "work" is not precise. I guess I mean that a function works when it does what some spec (written or implied) states it should do, and does nothing other than that. Note, I'm not a mathematician, just a programmer.
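
    For flavor, this is what the theorem-proving route looks like on a toy function, sketched in Lean 4 (whose built-in omega tactic decides linear arithmetic); in practice such proofs get hard fast, which is why industry mostly reserves them for small, critical kernels:

        -- A toy function and a proof that it meets its spec for *all* inputs,
        -- not just the inputs a unit test happens to try.
        def double (n : Nat) : Nat := n + n

        theorem double_spec (n : Nat) : double n = 2 * n := by
          unfold double   -- goal becomes: n + n = 2 * n
          omega           -- arithmetic decision procedure closes it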

  • URL development and mod_rewrite

    - by iRector
    My site is made up of the main page and multiple sub-directories, all under the same domain. My URLs are currently like the left column; the ideal clean versions are on the right (the root, mysite.com, stays as-is):

        mysite.com/?content=content1         | mysite.com/content1/
        mysite.com/?content=content2&page=4  | mysite.com/content2/4/
        mysite.com/?content=content3         | mysite.com/content3/
        mysite.com/?content=content4         | mysite.com/content4/
        mysite.com/?content=article&id=34    | mysite.com/article/34/

    Then the sub-directories (mysite.com/subdir, mysite.com/subdir2, mysite.com/subdir3, etc.) are essentially the same:

        mysite.com/subdir/?content=content1         | mysite.com/subdir/content1/
        mysite.com/subdir/?content=content2&page=4  | mysite.com/subdir/content2/4/
        mysite.com/subdir/?content=content3         | mysite.com/subdir/content3/
        mysite.com/subdir/?content=content4         | mysite.com/subdir/content4/
        mysite.com/subdir/?content=article&id=34    | mysite.com/subdir/article/34/

    I've used mod_rewrite briefly, but I'm not sure how to approach these multiple variables. Also, how would I differentiate between the actual subfolders and the content variable, so as to prevent 'subdir' or 'subdir2' from being plugged in as the content variable for the root site? I've played around with plenty of code snippets, but I've wiped my .htaccess slate clean and come to you all for help repopulating it. Your input would be thoroughly appreciated. Note: the only time the page query string is needed is when content == content2 (?content=content2&page=4). The same rule is shared by the article/id relationship; all other 'content' values are expected to be dynamic.
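
    A rough, untested .htaccess sketch of one way to structure this; the sub-directory names are whitelisted explicitly in the patterns, so they can never be captured as a content value:

        RewriteEngine On

        # Leave requests for real files and directories alone
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        # Sub-directory URLs first (the whitelist keeps them out of 'content')
        RewriteRule ^(subdir|subdir2|subdir3)/article/([0-9]+)/?$ /$1/?content=article&id=$2 [L]
        RewriteRule ^(subdir|subdir2|subdir3)/content2/([0-9]+)/?$ /$1/?content=content2&page=$2 [L]
        RewriteRule ^(subdir|subdir2|subdir3)/([^/]+)/?$ /$1/?content=$2 [L]

        # Root URLs
        RewriteRule ^article/([0-9]+)/?$ /?content=article&id=$1 [L]
        RewriteRule ^content2/([0-9]+)/?$ /?content=content2&page=$1 [L]
        RewriteRule ^([^/]+)/?$ /?content=$1 [L]

    Add [QSA] to the flags if incoming query parameters need to survive the rewrite.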

  • Approaches for cross server content sharing?

    - by Anonymity
    I've currently been tasked with finding the best solution for serving up content on our new site from another one of our sites. Several approaches have been suggested to me. One I've looked into is using SharePoint's Lists web service to grab the list through JavaScript, which runs into cross-site scripting restrictions and is not an option. Another suggestion was to build a custom server-side web service and use SharePoint request forms to get the information; this is something I've only very briefly looked at. It's also been suggested that I try permitting the requesting site in the HTTP headers of the serving site, since I have access to both. This ultimately resulted in a semi-working solution with major security holes: I had to include a username/password in the request to appease AD authentication, and it was done by allowing Access-Control-Allow-Origin: *. The most direct approach I could think of was simply to build the web part in our new environment and have the authors manually update this content the same as they would on the other site. Is any one of these suggestions more valid than another? Which would be the best approach? Are there other suggestions I may be overlooking? I'm also not sure whether web crawling or content scraping really holds water here...

  • Efficient way to copy a collection of Nodes, treat them, and then serialize?

    - by Danjah
    Hi all, I initially thought a regex to remove YUI3 classNames (or whole class attributes) and id attributes from a serialized DOM string was a sound enough approach, but now I'm not sure, given the various warnings about using regexes on HTML. I'm toying with the idea of making a copy of the DOM structure in question and performing:

        var nodeStructure = Y.one('#wrap').all('*'); // A YUI3 NodeList
        // Remove unwanted classNames... I'd need to maintain a list of them to remove :/
        nodeStructure.removeClass('unwantedClassName');

    and then:

        // I believe this can be done on a NodeList collection...
        nodeStructure.removeAttribute('id');

    I'm not quite sure what I'd need to do to 'copy' a collection of Nodes anyway, as I don't actually want to do the above to my living markup: it's only being saved, not 'closed' or 'exited', so a user could continue to change the markup and then save again. The above doesn't make a copy, I know. Is this efficient? Is there a better way to 'sanitize' my live markup of framework additions to the DOM (and maybe other things too at a later point) before saving it as a string? If it is a good approach, what's a safe way to go about copying my collection of Nodes for safe cleaning? Thanks! d
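
    One hedged sketch (untested): clone the subtree, clean the clone, and serialize that, so the live markup is never touched:

        var copy = Y.one('#wrap').cloneNode(true);      // deep copy, detached from the page
        copy.all('*').removeClass('unwantedClassName'); // repeat per framework className
        copy.all('*').removeAttribute('id');
        var html = copy.get('outerHTML');               // serialized, cleaned markup

    This assumes YUI3's Node.cloneNode and get('outerHTML') behave like their DOM counterparts; the clone is garbage once serialized, so the user can keep editing and re-saving the real nodes.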

  • Progressively stream the output of an ASP.NET page - or render a page outside of an HTTP request

    - by Evgeny
    I have an ASP.NET 2.0 page with many repeating blocks, including a third-party server-side control (so it's not just plain HTML). Each block is quite expensive to generate, in terms of both CPU and RAM. I'm currently using a standard Repeater control for this. There are two problems with this simple approach:

    1. The entire page must be rendered before any of it is returned to the client, so the user must wait a long time before they see any data. (I write progress messages using Response.Write, so there is feedback, but no actual results.)
    2. The ASP.NET worker process must hold everything in memory at the same time. There is no inherent need for this: once one block is processed it won't be changed, so it could be returned to the client and the memory could be freed.

    I would like to somehow return these blocks to the client one at a time, as each is generated. I'm thinking of extracting the stuff inside the Repeater into a separate page and fetching it repeatedly using AJAX, but there are some complications involved in that and I wonder if there is some simpler approach. Ideally I'd like to keep it as one page (from the client's point of view) but return it incrementally. Another way would be to do something similar, but on the server: still create a separate page, but have the server access it and then Response.Write() the HTML it gets to the response stream for the real client request. Is there a way to avoid an HTTP request here, though? Is there some ASP.NET method that would render a UserControl or a Page outside of an HTTP request and simply return the HTML to me as a string? I'm open to other ideas on how to do this as well.
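
    One possibility, sketched under the assumption that a block can be moved to its own page (BlockPage.aspx is a made-up name): HttpServerUtility.Execute renders another page into a TextWriter inside the current request, with no extra HTTP round-trip, and Response.Flush() pushes each finished block to the client immediately:

        using System.IO;
        using System.Web;

        // Render one block's page to a string (same request, no HTTP call).
        string RenderBlock(int blockId)
        {
            StringWriter writer = new StringWriter();
            HttpContext.Current.Server.Execute(
                "BlockPage.aspx?id=" + blockId, writer, false);
            return writer.ToString();
        }

        // In the main page:
        // for (int i = 0; i < blockCount; i++)
        // {
        //     Response.Write(RenderBlock(i));
        //     Response.Flush();  // stream this block now; its memory can then be freed
        // }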

  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to determine whether other users fit into the countries a certain user chose. The question is what would be the most efficient approach to this problem. One option I have is to save the user's selection as a delimited list, like "Canada,USA,France...", in a single varchar(max) field. The problem with that shows up once, say, a user from Germany enters a page I perform this check on: to search for Germany I would need to fetch all rows and un-delimit each field to check against the value, or use SQL LIKE, which again is pretty damn slow. If you have a better solution or some tips, I would be glad to hear them. Just to make sure: many users will have their own selections of countries from which (and only from which) they want users to land on their page, while millions of users will reach those pages, so the faster the approach the better. Technology: MSSQL and ASP.NET. Thanks
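
    The usual answer here is a junction table rather than a delimited column; a minimal sketch with hypothetical names:

        CREATE TABLE UserCountry (
            UserId    INT NOT NULL,
            CountryId INT NOT NULL,
            PRIMARY KEY (UserId, CountryId)
        );

        -- Reads will vastly outnumber writes, so index the lookup direction too:
        CREATE INDEX IX_UserCountry_Country ON UserCountry (CountryId, UserId);

        -- "Which users accept visitors from Germany?" becomes an index seek,
        -- not a LIKE scan over delimited strings:
        SELECT UserId
        FROM   UserCountry
        WHERE  CountryId = @germanyId;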

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, with its own tables. So in other words, the databases our MySQL server contains look like:

        main-web-app-db (our web app, containing tables for user account info, billing, etc.)
        user_1_db (baseball_cards_table)
        user_2_db (recipes_table)
        ...

    And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data in and then decide they want to change a column, we'd do an "alter table ...". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling this.

    1. My first concern is that switching databases on every request, first to our main app's database for authentication etc., and then to the user's personal database, is going to be inefficient.
    2. The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more?
    3. Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how this affects things like replication and other methods of scaling.

    To me, it seems like MySQL wasn't built to be used in this way, but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives, because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?
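
    For contrast, a sketch of the other common relational answer: one fixed set of tables describing user-defined schemas (an entity-attribute-value layout), so the number of users stops mattering to the server. Names are illustrative:

        CREATE TABLE user_tables (id INT PRIMARY KEY AUTO_INCREMENT,
                                  user_id INT NOT NULL,
                                  name    VARCHAR(64) NOT NULL);
        CREATE TABLE user_fields (id INT PRIMARY KEY AUTO_INCREMENT,
                                  table_id INT NOT NULL,
                                  name VARCHAR(64) NOT NULL,
                                  type VARCHAR(16) NOT NULL);
        CREATE TABLE user_rows   (id INT PRIMARY KEY AUTO_INCREMENT,
                                  table_id INT NOT NULL);
        CREATE TABLE user_values (row_id   INT NOT NULL,
                                  field_id INT NOT NULL,
                                  value    TEXT,
                                  PRIMARY KEY (row_id, field_id));

    "Changing a column" then becomes an UPDATE on user_fields instead of an ALTER TABLE, at the cost of pivoting values back into rows when displaying records.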

  • Dependent Dropdowns with single class element

    - by AJ
    I am trying to create multiple dependent dropdowns (selects) in a unique way. I want to restrict the use of selectors and achieve this with a single class selector on all SELECTs, figuring out which SELECT was changed by its index. Hence, the SELECT[i] that changed will change SELECT[i+1] only (and not the previous ones, like SELECT[i-1]). For HTML like this:

        <select class="someclass">
          <option value="volvo">Volvo</option>
          <option value="saab">Saab</option>
          <option value="mercedes">Mercedes</option>
          <option value="audi">Audi</option>
        </select>
        <select class="someclass">
        </select>
        <select class="someclass">
        </select>

    where the SELECTs other than the first one will get something via AJAX. I see that the following JavaScript gives me the correct value of the correct SELECT, and the value of i also corresponds to the correct SELECT:

        $(function() {
            $(".someclass").each(function(i) {
                $(this).change(function(x) {
                    alert($(this).val() + i);
                });
            });
        });

    Please note that I really want the minimum-selector approach. I just cannot wrap my head around how to approach this. Should I use arrays? Like store all the SELECTs in an array first and then select those? I believe that since the above code already passes the index i, there should be a way without arrays also. Thanks a bunch. AJ
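
    A hedged sketch of one way to finish the idea without extra selectors or arrays; "options.php" and its parameter are placeholders for whatever the AJAX endpoint is:

        $(function () {
            var $selects = $(".someclass");     // the only selector used
            $selects.change(function () {
                var i = $selects.index(this);   // which SELECT changed
                var $next = $selects.eq(i + 1); // affect only the next one
                if ($next.length) {
                    $.get("options.php",
                          { parent: $(this).val() },
                          function (html) { $next.html(html); });
                }
            });
        });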

  • Looking for a RESTful or SOAP pipeline between WordPress and InterWoven TeamSite

    - by deanpeters
    I've been Googling my brains out trying to see if there's a simple way to bridge content to and from WordPress and TeamSite. I'm coming at this from the perspective of a WordPress developer. I see in the book "The Definitive Guide to Interwoven TeamSite" (http://bit.ly/d3z4wI) mention of objects for the Interwoven LiveSite product: com.interwoven.livesite.external.impl.RSS and com.interwoven.livesite.external.impl.SOAP. If I understand the above correctly, these allow me to instantiate objects of these types which, after being populated via various method calls, let me render content using com.interwoven.livesite.external.ExternalCall... but I'm not sure. Nor do I think this approach provides me the two-way street I seek. As it stands now, from my limited understanding, it appears that the path of least resistance is deploying Interwoven's LiveSite product with the existing TeamSite implementation, so content can be both consumed and rendered via RSS: a channel which WordPress can produce and consume, the latter with plugins such as wp-o-matic and/or feedpress. So the question is: does anyone out there have experience with a SOAP or RESTful API approach to Interwoven's TeamSite? If so, can I get some direction on documentation? Or is the addition of LiveSite + RSS the most feasible two-way channel?

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I've been working on a project where I need to populate dimension tables from EDW tables. The EDW tables are Type 2 (they maintain historical data). When it comes to loading a dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting on attributes. That is: there could be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the dim. Out of those 10 records, there may be some attributes with the same domain_code but different sub_domain_codes, which need further pivoting on the sub-domain code. For example: domain codes 01, 02, 03 are a straight pivot on domain code, but I could also have domain code 10 with sub-domain codes/versions 2006, 2007, 2008, 2009. That means I need to split my source table into two: one keyed on domain code, the other on domain code + version. So far so good. When it comes to loading the dim table, the design specs for dimensions (originally written by a third party) say: for every single attribute change in the EDW, the load should assemble all the related records for that natural key (the new value plus the current values of the other attributes), process them to create a new dim record, and insert it. That means if a single extract contains 100 updated records (one for each natural key), it should assemble 100 + (100*9) records to insert into or update the dim table. How good is this approach? The other way I tried is to just do a lookup into the dim table for that natural key, take the most recent record's values for the attributes that didn't change, insert the new record, and update the current one. Which would be the better approach: assembling records on the source side for a one-attribute change, or looking at the dim table's most recent record and processing from that? If this doesn't make sense, I'd be glad to elaborate. Thanks

  • Implementing a bitfield using java enums

    - by soappatrol
    Hello, I maintain a large document archive and I often use bit fields to record the status of my documents during processing or when validating them. My legacy code simply uses static int constants such as:

        static int DOCUMENT_STATUS_NO_STATE = 0;
        static int DOCUMENT_STATUS_OK = 1;
        static int DOCUMENT_STATUS_NO_TIF_FILE = 2;
        static int DOCUMENT_STATUS_NO_PDF_FILE = 4;

    This makes it pretty easy to indicate the state a document is in, by setting the appropriate flags. For example:

        status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE;

    Since the approach of using static constants is bad practice, and because I would like to improve the code, I was looking to use enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type, so there is a need to translate the enumeration constants to a numeric value. Below is my first approach and I wonder if this is the correct way to go about it:

        class DocumentStatus {

            public enum StatusFlag {
                DOCUMENT_STATUS_NOT_DEFINED(1<<0),
                DOCUMENT_STATUS_OK(1<<1),
                DOCUMENT_STATUS_MISSING_TID_DIR(1<<2),
                DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3),
                DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4),
                DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5),
                DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6),
                DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7),
                DOCUMENT_STATUS_UNAVAILABLE(1<<8);

                private final long statusFlagValue;

                StatusFlag(long statusFlagValue) {
                    this.statusFlagValue = statusFlagValue;
                }

                public long getStatusFlagValue() {
                    return statusFlagValue;
                }
            }

            /**
             * Translates a numeric status code into a Set of StatusFlag enums.
             * @param statusValue numeric status
             * @return EnumSet representing a document's status
             */
            public EnumSet<StatusFlag> getStatusFlags(long statusValue) {
                EnumSet<StatusFlag> statusFlags = EnumSet.noneOf(StatusFlag.class);
                StatusFlag.values().each { statusFlag ->
                    long flagValue = statusFlag.statusFlagValue;
                    if ((flagValue & statusValue) == flagValue) {
                        statusFlags.add(statusFlag);
                    }
                }
                return statusFlags;
            }

            /**
             * Translates a set of StatusFlag enums into a numeric status code.
             * @param flags set of status flags
             * @return numeric representation of the document status
             */
            public long getStatusValue(Set<StatusFlag> flags) {
                long value = 0;
                flags.each { statusFlag ->
                    value |= statusFlag.getStatusFlagValue();
                }
                return value;
            }

            public static void main(String[] args) {
                DocumentStatus ds = new DocumentStatus();

                Set<StatusFlag> statusFlags = EnumSet.of(
                    StatusFlag.DOCUMENT_STATUS_OK,
                    StatusFlag.DOCUMENT_STATUS_UNAVAILABLE);
                assert ds.getStatusValue(statusFlags) == 258; // 0000.0001|0000.0010

                long numericStatusCode = 56;
                statusFlags = ds.getStatusFlags(numericStatusCode);
                assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK);
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE);
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE);
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE);
            }
        }

  • Which web framework or technologies would suit me?

    - by Suraj Chandran
    Hi, I had been working on desktop apps and the server side (non-web) for some time, and now I am diving into the web for the first time. I plan to write a scalable, enterprise-level app. I have worked with Java, JavaScript, jQuery, etc., but I absolutely hate JSP. So is there any framework that focuses on developing enterprise-level web apps without JSP? I liked Wicket's approach, but I think it is a little lacking in support for dynamic HTML and jQuery (yes, I looked at WiQuery). I also feel that making Wicket apps scalable would take some sweat. Can Spring MVC, Struts 2, etc. help me do this using just Java, JavaScript, and jQuery? Or are there any other options for me, like Wicket? Please forgive me if anything above looks insane; I am still working on my understanding of enterprise web apps. NOTE: If you think that I should take a different direction or approach, please do suggest it!

  • BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about this approach is that I have to manually write code that reads the file, even though the defined structure already represents how the file is stored; I also have to keep the read order correct. After looking around a bit, I came across BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I could read and also write the file without creating additional code. However, I wonder if this approach is good for files created by other programs, as opposed to serialized .NET objects. Take for example a graphics format file like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there any other approaches which suit this better? I'm not looking for concrete examples, just advice on the best way to implement this.
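
    For what it's worth, BinaryFormatter can only round-trip .NET object graphs it serialized itself; it cannot parse an externally defined layout like BMP. A minimal BinaryReader sketch for the start of a BMP header:

        using System.IO;

        using (BinaryReader reader = new BinaryReader(File.OpenRead("image.bmp")))
        {
            ushort magic      = reader.ReadUInt16(); // 0x4D42, the "BM" signature
            uint   fileSize   = reader.ReadUInt32(); // whole file size, in bytes
            reader.ReadUInt32();                     // two reserved 16-bit fields
            uint   dataOffset = reader.ReadUInt32(); // where the pixel data starts
        }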

  • Avoid an "out of memory error" in Java(eclipse), when using large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory" error during initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide; this works fine. Then it uses this index to initialize a large 2D array. This array will have n x n entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000 x 30,000 entries, and this will probably be bigger once I run it on my full intended corpus too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamping my code to use a primitive int[] instead of a TreeMap, eliminating redundant structures, etc. I have also run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be a simple line-of-code solution, but is most likely going to require a very new approach. I am looking for what that approach is. Any ideas? Thanks, B.
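
    Back of the envelope: 30,000 x 30,000 ints is 9x10^8 entries at 4 bytes each, roughly 3.4 GB, so it can never fit in a 2 GB heap. (Note also that -vmargs sizes Eclipse's own JVM; a program launched from Eclipse takes its -Xmx from the run configuration's VM arguments.) Word co-occurrence matrices are typically sparse, so storing only the non-zero cells is the usual way out; a rough Java sketch:

        import java.util.HashMap;
        import java.util.Map;

        // Stores only cells that are actually set; missing cells count as zero.
        class SparseMatrix {
            private final Map<Long, Integer> cells = new HashMap<Long, Integer>();

            private long key(int row, int col) {
                return ((long) row << 32) | (col & 0xFFFFFFFFL); // pack the pair
            }

            void increment(int row, int col) {
                Long k = key(row, col);
                Integer c = cells.get(k);
                cells.put(k, c == null ? 1 : c + 1);
            }

            int get(int row, int col) {
                Integer c = cells.get(key(row, col));
                return c == null ? 0 : c;
            }
        }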

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation. I realize there is VBA, but I always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA. For this specific example, knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how best to approach external batch processing of Excel files would be appreciated. Thanks! PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.
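
    For reference, a bare-bones sketch of the external-script pattern described above; VBScript drives the same COM object model VBA uses, so anything VBA can do on a workbook is reachable from outside it (file path illustrative):

        Dim xl, wb
        Set xl = CreateObject("Excel.Application")
        xl.Visible = False
        xl.DisplayAlerts = False                             ' no save prompts

        Set wb = xl.Workbooks.Open("C:\reports\data.xls")
        wb.Worksheets(1).Columns(1).Interior.ColorIndex = 5  ' paint column A blue
        wb.Save
        wb.Close

        xl.Quit
        Set xl = Nothing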

  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer currently working on adding unit tests to an existing C++ project that started years ago. For a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. I.e., for a method Ping() which checks whether another module is active, I want to have it return true or false based on what kind of test I'm running. I've been looking into Google Test and Google Mock, and Google Mock does support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take in either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods. Basically, the methods I want to mock are in some base class, the modules I want to unit test and create mocks of are derived classes of that base class, and there are intermediate modules in between my base module class and the modules I want to test. I would appreciate any advice! Thanks, JW. EDIT: A more concrete example. My base class is, let's say, rootModule, and the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module. In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?
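
    One seam that needs no production-code changes is swapping at link time: the unit-test binary links a stub translation unit that defines the base class's method with a canned value, in place of the production file that normally defines it. A sketch, with signatures assumed to match the example above:

        // test_stubs.cpp -- compiled into the test binary INSTEAD OF the
        // production file that defines rootModule::GetStatus().
        #include <string>
        #include "rootmodule.h"   // hypothetical header declaring rootModule

        static bool g_cannedStatus = true;

        // Tests call this to choose the canned value for the next check.
        void SetCannedStatus(bool value) { g_cannedStatus = value; }

        // Same signature as production; no real inter-module talk happens.
        bool rootModule::GetStatus(const std::string& /*moduleName*/)
        {
            return g_cannedStatus;
        }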

  • MVC2 Modelbinder for List of derived objects

    - by user250773
    I want a list of different (derived) object types working with the default model binder in ASP.NET MVC 2. I have the following view model:

        public class ItemFormModel
        {
            [Required(ErrorMessage = "Required Field")]
            public string Name { get; set; }
            public string Description { get; set; }
            [ScaffoldColumn(true)]
            //public List<Core.Object> Objects { get; set; }
            public ArrayList Objects { get; set; }
        }

    And the list contains objects of different derived types, e.g.:

        public class TextObject : Core.Object
        {
            public string Text { get; set; }
        }

        public class BoolObject : Core.Object
        {
            public bool Value { get; set; }
        }

    It doesn't matter whether I use the List or the ArrayList implementation: everything gets nicely scaffolded in the form, but the model binder doesn't resolve the derived objects' properties for me when posting back to the ActionResult. What could be a good solution for structuring the view model to get a list of different object types handled? Having an extra list for every object type (e.g. List<TextObject>, List<BoolObject>, etc.) seems like a poor solution to me, since it is a lot of overhead both in building the view model and in mapping it back to the domain model. Thinking about the other approach of binding all properties in a custom model binder: how can I keep using the data annotations approach there (validating Required attributes, etc.) without a lot of overhead?
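
    A common workaround, sketched and untested: render a hidden type-discriminator field for each list item (the ".ModelType" naming below is a made-up convention), then subclass DefaultModelBinder so the concrete type gets instantiated while validation and data annotations keep running through the base binder as usual:

        using System;
        using System.Web.Mvc;

        public class DerivedTypeModelBinder : DefaultModelBinder
        {
            protected override object CreateModel(ControllerContext controllerContext,
                ModelBindingContext bindingContext, Type modelType)
            {
                // e.g. a hidden input named "Objects[0].ModelType" holding an
                // assembly-qualified type name
                var typeValue = bindingContext.ValueProvider
                    .GetValue(bindingContext.ModelName + ".ModelType");
                if (typeValue != null)
                {
                    Type concreteType = Type.GetType(typeValue.AttemptedValue, true);
                    bindingContext.ModelMetadata = ModelMetadataProviders.Current
                        .GetMetadataForType(null, concreteType);
                    return Activator.CreateInstance(concreteType);
                }
                return base.CreateModel(controllerContext, bindingContext, modelType);
            }
        }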

  • DDD: Enum like entities

    - by Chris
    Hi all, I have the following DB model:

        Person table
        ID | Name  | StateId
        ---------------------
        1  | Joe   | 1
        2  | Peter | 1
        3  | John  | 2

        State table
        ID | Desc
        -------------
        1  | Working
        2  | Vacation

    and the domain model would be (simplified):

        public class Person
        {
            public int Id { get; }
            public string Name { get; set; }
            public State State { get; set; }
        }

        public class State
        {
            private int id;
            public string Name { get; set; }
        }

    The state might be used in the domain logic, e.g.:

        if (person.State == State.Working)
            // some logic

    So from my understanding, the State acts like a value object which is used for domain logic checks. But it also needs to be present in the DB model to represent a clean ERM. So State might be extended to:

        public class State
        {
            private int id;
            public string Name { get; set; }
            public static State New
            {
                get { return new State([hardCodedIdHere?], [hardCodedNameHere?]); }
            }
        }

    But using this approach, the name of the state would be hardcoded into the domain. Do you know what I mean? Is there a standard approach for such a thing? From my point of view, what I am trying to do is use an object (which is persisted, from the ERM design perspective) as a sort of value object within my domain. What do you think? Question update: probably my question wasn't clear enough. What I need to know is how I would use an entity (like the State example) that is stored in a database within my domain logic, to avoid things like:

        if (person.State.Id == State.Working.Id)
            // some logic

    or

        if (person.State.Id == WORKING_ID)
            // some logic
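
    One widely used answer is the "enumeration class" pattern: the well-known instances live as static readonly fields whose IDs mirror the lookup table's rows (that duplication is the accepted trade-off). A sketch:

        public class State
        {
            public static readonly State Working  = new State(1, "Working");
            public static readonly State Vacation = new State(2, "Vacation");

            public int Id { get; private set; }
            public string Name { get; private set; }

            private State(int id, string name)
            {
                Id = id;
                Name = name;
            }

            // Identity is the database ID, so a State hydrated from the DB
            // compares equal to the static well-known instance.
            public override bool Equals(object obj)
            {
                State other = obj as State;
                return other != null && other.Id == Id;
            }

            public override int GetHashCode() { return Id; }
        }

        // usage: if (person.State.Equals(State.Working)) { /* some logic */ }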

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (Perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan: it's generally a bad idea not to catch your unit test dying prematurely. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # same as above
            run_test($test);
        }

        sub run_test { }   # same as above

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicated our @test_set declaration, once in BEGIN {} and once after it.
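
    For what it's worth, Test::More also accepts a runtime plan: as long as no test has run yet, plan() can be called as an ordinary function after the test table is built, which removes the BEGIN-block gymnastics entirely. A sketch reusing the question's count_tests()/run_test() helpers:

        use strict;
        use warnings;
        use Test::More;    # no plan declared yet

        my @test_set = (
            [ "Test #1", 1, 2 ],
            [ "Test #2", 3, 4 ],
        );

        # Count first, then declare the plan; an early death is still caught.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;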

  • Need an advice for unit testing using mock object

    - by Andree
    Hi there, I just recently read about "mocking objects" for unit testing, and currently I'm having difficulty implementing this approach in my application. Please let me explain my problem. I have a User model class which depends on two data sources (a database and the Facebook web service). The controller class simply uses this User model as an interface to access data, and it doesn't care where the data came from. Currently I have never unit tested this User model, because it depends on an external web service. But a while ago I read about object mocking, and now I know that it is a common approach to unit testing a class that depends on external resources (like in my case). Now I want to create a unit test for the User model, but I have encountered a design issue: in order for the User model to use a mocked Facebook SDK, I have to inject the mocked SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject it. The real client of my User model is the application's controller, so I would have to construct the Facebook SDK inside the controller and inject it into the User object. Well, this is a problem, because I want my controller to be as clean as possible. I want my controller to be ignorant about the application's data sources. I'm not good at explaining things systematically, and you're probably asleep before reading this last paragraph, but anyway: I want to ask if anyone here has encountered the same problem as mine. How do you solve it? Regards, Andree

  • How do you tell that your unit tests are correct?

    - by Jacob Adams
    I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests are correct. How can I tell that there isn't a bug in my unit test itself? Usually I end up running the app, proving it works, and then using the unit tests as a sort of regression suite. What is the recommended approach, and/or what approach do you take to this problem? Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing. Edit 2: To the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code": what if the test overfits? It's possible to pass both good and bad code with a bad test. My main question is: what good is unit testing, given that if your tests can be flawed you can't really improve your confidence in your code, can't really prove that your refactoring worked, and can't really prove that you met the specification?

  • Reference a GNU C DLL built in GCC against Cygwin, from C#/NET

    - by Dale Halliwell
    Here is what I want: I have a huge legacy C/C++ codebase written for POSIX, including some very POSIX-specific stuff like pthreads. It can be compiled on Cygwin/GCC and run as an executable under Windows with the Cygwin DLL. What I would like to do is build the codebase itself into a Windows DLL that I can then reference from C#, writing a wrapper around it to access some parts of it programmatically. I have tried this approach with the very simple "hello world" example at http://www.cygwin.com/cygwin-ug-net/dll.html and it doesn't seem to work.

        #include <stdio.h>

        extern "C" __declspec(dllexport) int hello();

        int hello()
        {
            printf("Hello World!\n");
            return 42;
        }

    I believe I should be able to reference a DLL built with the above code in C# using something like:

        [DllImport("kernel32.dll")]
        public static extern IntPtr LoadLibrary(string dllToLoad);

        [DllImport("kernel32.dll")]
        public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

        [DllImport("kernel32.dll")]
        public static extern bool FreeLibrary(IntPtr hModule);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        private delegate int hello();

        static void Main(string[] args)
        {
            var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "helloworld.dll");
            IntPtr pDll = LoadLibrary(path);
            IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll, "hello");
            hello hello = (hello)Marshal.GetDelegateForFunctionPointer(
                pAddressOfFunctionToCall, typeof(hello));
            int theResult = hello();
            Console.WriteLine(theResult.ToString());
            bool result = FreeLibrary(pDll);
            Console.ReadKey();
        }

    But this approach doesn't seem to work: LoadLibrary returns null. It can find the DLL (helloworld.dll); it is just as if it can't load it or find the exported function. I am sure that if I get this basic case working, I can reference the rest of my codebase this way. Any suggestions or pointers? Does anyone know if what I want is even possible? Thanks.
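
    Two things worth checking, sketched below: ask Windows why the load failed instead of just observing null, and initialize the Cygwin runtime first, since a Cygwin-linked DLL depends on it (the Cygwin docs describe loading cygwin1.dll and calling its cygwin_dll_init entry point before anything else; the path below is a guess, and SetLastError replaces the declaration above):

        [DllImport("kernel32.dll", SetLastError = true)]
        public static extern IntPtr LoadLibrary(string dllToLoad);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        delegate void CygwinInit();

        // Load and initialize the Cygwin runtime before the legacy DLL:
        IntPtr cygwin = LoadLibrary(@"C:\cygwin\bin\cygwin1.dll");
        CygwinInit init = (CygwinInit)Marshal.GetDelegateForFunctionPointer(
            GetProcAddress(cygwin, "cygwin_dll_init"), typeof(CygwinInit));
        init();

        IntPtr pDll = LoadLibrary(path);
        if (pDll == IntPtr.Zero)
            Console.WriteLine("Win32 error: " + Marshal.GetLastWin32Error());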

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store a lot (1000s) of items. A pair of properties on each item is used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of the existing items before iterating over the list of (possibly) new items. My theory is that on the first iteration all the items will be faulted in and, assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly:

    1. Use string comparison for uniquing, iterating over all "new" items and comparing against all existing items (my current approach).
    2. Use a predicate to filter the set of existing items against the properties of the "new" items.
    3. Use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items).

    Is option 3 likely to be faster than my current approach? Do you know of a better way?
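
    For comparison, a sketch of the batch find-or-create pattern from Apple's Core Data import guidance: one IN-predicate fetch per batch instead of a comparison pass per item. Entity and property names are made up, and it assumes the uniquing pair can be combined into a single derived key:

        // Collect the incoming keys, sorted so results can be walked in step.
        NSArray *incomingKeys = [[newItems valueForKey:@"uniqueKey"]
                                    sortedArrayUsingSelector:@selector(compare:)];

        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"Item"
                                       inManagedObjectContext:context]];
        [request setPredicate:
            [NSPredicate predicateWithFormat:@"uniqueKey IN %@", incomingKeys]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"uniqueKey"
                                         ascending:YES] autorelease]]];

        NSArray *existing = [context executeFetchRequest:request error:NULL];
        // Walk incomingKeys and existing together; insert only keys with no
        // match. Cost: one fetch per batch rather than N^2 comparisons.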

    Read the article
