Search Results

Search found 21658 results on 867 pages for 'small business'.


  • Entity Framework 4 + POCO with custom classes and WCF contracts (serialization problem)

    - by eman
    Yesterday I worked on a project where I upgraded to Entity Framework 4 with the Repository pattern. In one post I read that it is necessary to turn off the custom tool that generates the classes and then write the classes (matching the entities) by hand. To do that, I used the POCO Entity Generator, deleted the generated .tt files and all subordinate .cs classes, and then wrote the "entity classes" myself. I added the repository pattern, implemented it in the business layer, and then implemented a WCF layer that calls the methods of the business layer. Calling an Insert (Add) method from the presentation layer works fine. But if I call any method that should return a class, I get an error like "the connection was interrupted by the server". I suppose there is a problem with the serialization, or am I wrong? How can this problem be solved? I'm using Visual Studio 2010, Entity Framework 4, and C#. UPDATE: I have uploaded the project and hope somebody can help me! link text UPDATE 2: My questions: Why is POCO good (pros/cons)? When should POCO be used? Is POCO + the repository pattern a good choice? Should POCO classes be written by myself, or could I use auto-generated POCO classes?
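
    One frequent cause of exactly this symptom is EF4 handing the WCF layer lazy-loading proxy instances, which the DataContractSerializer cannot serialize; marking the hand-written POCOs as data contracts and switching proxies off before returning entities usually isolates it. A minimal sketch, with illustrative type names (not from the project above):

        using System.Runtime.Serialization;

        [DataContract]
        public class Order   // illustrative entity
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string Description { get; set; }
        }

        // In the service method that returns entities (EF4 ObjectContext):
        // context.ContextOptions.ProxyCreationEnabled = false;
        // context.ContextOptions.LazyLoadingEnabled = false;

    Setting IncludeExceptionDetailInFaults = true on the service behavior (for debugging only) also replaces the generic "connection was interrupted" message with the underlying serialization exception.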

    Read the article

  • Object equality in context of hibernate / webapp

    - by bert
    How do you handle object equality for Java objects managed by Hibernate? In the 'Hibernate in Action' book they say that one should favor business keys over surrogate keys. Most of the time, I do not have a business key. Think of addresses mapped to a person. The addresses are kept in a Set and displayed in a Wicket RefreshingView (with a ReuseIfEquals strategy). I could either use the surrogate id or use all fields in the equals() and hashCode() functions. The problem is that those fields change during the lifetime of the object: either because the user entered some data, or because the id changes due to JPA merge() being called inside the OSIV (Open Session in View) filter. My understanding of the equals() and hashCode() contract is that those should not change during the lifetime of an object. What I have tried so far:

    1. equals() based on hashCode(), which uses the database id (or super.hashCode() if the id is null). Problem: new addresses start with a null id but get an id when attached to a person, and that person gets merge()d (re-attached) in the OSIV filter.
    2. Lazily computing the hashcode when hashCode() is first called and making that hashcode @Transient. Does not work, as merge() returns a new object and the hashcode does not get copied over.

    What I would need is an ID that gets assigned during object creation, I think. What would be my options here? I don't want to introduce some additional persistent property. Is there a way to explicitly tell JPA to assign an ID to an object? Regards
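
    For the "ID that gets assigned during object creation" idea, one well-known pattern is a client-generated UUID set in the field initializer, with equals() and hashCode() based only on it. A minimal sketch:

        import java.util.UUID;

        public abstract class BaseEntity {
            // Assigned once, the moment the object is instantiated.
            private final String uuid = UUID.randomUUID().toString();

            @Override
            public boolean equals(Object other) {
                if (this == other) return true;
                if (!(other instanceof BaseEntity)) return false;
                return uuid.equals(((BaseEntity) other).uuid);
            }

            @Override
            public int hashCode() {
                return uuid.hashCode();
            }
        }

    The caveat: since merge() returns a new instance, the UUID only survives re-attachment if it is mapped to a column, which reintroduces the persistent property the question wants to avoid. Unmapped, it still gives stable hash keys within a single session or request, which is often enough for the RefreshingView case.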

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given: two images of the same subject matter; the images have the same resolution, colour depth, and file format; the images differ in size and rotation; and two lists of (x, y) co-ordinates that correlate the images. I would like to know: How do you transform the larger image so that it visually aligns with the second image? (Optional.) What is the minimum number of points needed to get an accurate transformation? (Optional.) How far apart do the points need to be to get an accurate transformation? The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following: input two images (e.g., TIFFs); click several anchor points on the small image; click the corresponding anchor points on the large image; transform the large image such that it maps to the small image by aligning the anchor points. This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
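
    On the optional questions: an affine transform (rotation, scale, shear and translation) is fully determined by three non-collinear point pairs; extra pairs allow a least-squares fit that averages out click error, and the farther apart the anchors are, the less a pixel of click error matters. A small Java sketch of the exact three-point solve (Cramer's rule, standard library only):

        import java.awt.geom.AffineTransform;

        public final class AlignTransform {

            // Solves a*x + b*y + c = r for three (x, y, r) triples.
            private static double[] solveRow(double[] x, double[] y, double[] r) {
                double det = x[0]*(y[1]-y[2]) - y[0]*(x[1]-x[2]) + (x[1]*y[2] - x[2]*y[1]);
                double a = (r[0]*(y[1]-y[2]) - y[0]*(r[1]-r[2]) + (r[1]*y[2] - r[2]*y[1])) / det;
                double b = (x[0]*(r[1]-r[2]) - r[0]*(x[1]-x[2]) + (x[1]*r[2] - x[2]*r[1])) / det;
                double c = (r[0]*(x[1]*y[2]-x[2]*y[1]) - r[1]*(x[0]*y[2]-x[2]*y[0])
                            + r[2]*(x[0]*y[1]-x[1]*y[0])) / det;
                return new double[] { a, b, c };
            }

            // Three anchor pairs: (sx[i], sy[i]) in the large image maps to
            // (dx[i], dy[i]) in the small one.
            public static AffineTransform fromAnchors(double[] sx, double[] sy,
                                                      double[] dx, double[] dy) {
                double[] r1 = solveRow(sx, sy, dx); // x' = a*x + b*y + c
                double[] r2 = solveRow(sx, sy, dy); // y' = d*x + e*y + f
                return new AffineTransform(r1[0], r2[0], r1[1], r2[1], r1[2], r2[2]);
            }
        }

    The result can then be applied with Graphics2D.drawImage(image, transform, null).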

    Read the article

  • WCF: Exposed Object Model - stuck in a loop

    - by Mark
    Hi, I'm working on a pretty big WSSF project. I have a normal object model in the business layer, e.g. a customer has an Orders collection property; when this is accessed, it loads from the data layer (lazy loading). An order has a ProductCollection property, and so on. Now, the bit I'm finding tricky is exposing this via WCF. I want to expose a collection of orders, and the client app will also need information about the customers. Using the WSSF data contract designer, I have set it up so that customers have a property called "order collection". This is fine if you have a customer object and would like to look at its orders, but if you have an order object there is no customer property, so it doesn't work going up the hierarchy. I've tried adding a customer property to the orders object, but then the code gets stuck in a loop when it loads up the data contracts. This is because it doesn't load on demand like in the business layer; I need to load all properties before the objects can be sent out via WCF. It ends up loading an order, then the customer for that order, then the orders for that customer, then the customer for each of those orders, and so on... I'm sure I've got all this wrong. Help!!
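
    One concrete way out of the cycle, assuming the DataContractSerializer is in use: mark the contracts with IsReference = true, so the serializer encodes object references instead of recursively inlining copies. A sketch with illustrative names:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class CustomerContract
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public List<OrderContract> Orders { get; set; }
        }

        [DataContract(IsReference = true)]
        public class OrderContract
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public CustomerContract Customer { get; set; }  // back-reference is now safe
        }

    That stops the infinite recursion at serialization time; the eager-load cascade is still worth cutting off separately, e.g. by shaping smaller per-operation DTOs instead of exposing the whole lazy object graph.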

    Read the article

  • Handling multiple media queries in Sass with Twitter Bootstrap

    - by Keith
    I have a Sass mixin for my media queries based on Twitter Bootstrap's responsive media queries:

        @mixin respond-to($media) {
          @if $media == handhelds {
            /* Landscape phones and down */
            @media (max-width: 480px) { @content; }
          } @else if $media == small {
            /* Landscape phone to portrait tablet */
            @media (max-width: 767px) { @content; }
          } @else if $media == medium {
            /* Portrait tablet to landscape and desktop */
            @media (min-width: 768px) and (max-width: 979px) { @content; }
          } @else if $media == large {
            /* Large desktop */
            @media (min-width: 1200px) { @content; }
          } @else {
            @media only screen and (max-width: #{$media}px) { @content; }
          }
        }

    And I call them throughout my SCSS file like so:

        .link {
          color: blue;
          @include respond-to(medium) { color: red; }
        }

    However, sometimes I want to style multiple queries with the same styles. Right now I'm doing them like this:

        .link {
          color: blue;
          /* this is fine for handheld and small sizes */
          /* now I want to change the styles that are cascading to medium and large */
          @include respond-to(medium) { color: red; }
          @include respond-to(large) { color: red; }
        }

    but I'm repeating code, so I'm wondering if there is a more concise way to write it so I can target multiple queries. Something like this, so I don't need to repeat my code (I know this doesn't work):

        @include respond-to(medium, large) { color: red; }

    Any suggestions on the best way to handle this?
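
    One possible approach, assuming a Sass version with variadic arguments (3.2+): wrap the existing mixin in a second one that loops over the requested breakpoints and re-emits the content block for each. A sketch:

        @mixin respond-to-any($medias...) {
          @each $media in $medias {
            @include respond-to($media) { @content; }
          }
        }

        .link {
          color: blue;
          @include respond-to-any(medium, large) { color: red; }
        }

    The compiled CSS still contains one block per query, but the rules are written once in the source.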

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • Why does mobile first responsive design tend to not use max-width queries alongside the min-width queries?

    - by Sam
    First off, I understand the basic principles behind mobile first responsive web design, and totally agree with them. But one thing I don't understand: In my experience, not all styles for small screens can be used for the larger version of a website. For example, usually smaller versions tend to have larger clickable areas, hamburger navigation, etc. So I sometimes have to override these specific styles, aside from just progressively enhancing the base styles. So I was wondering: why is max-width rarely mentioned (or used) in the context of mobile-first responsive web design? Because it looks like it could be used to isolate styles for smaller screens that are not useful for larger screens, and would thus prevent unnecessary duplication of code. A quote which mentions min-width as typically mobile-first, but not max-width: Mobile first, from a coding perspective, means that your base style is typically a single-column, fully-fluid layout. You use @media (min-width: whatever) to add a grid-based layout on top of that. from: http://gomakethings.com/mobile-first-and-internet-explorer/ EDIT: So to be more specific: I was wondering if there is a reason to exclude max-width from a mobile-first responsive design (as it seems like it can be useful for writing your css as DRY as possible, as some styles for small screens will not be used for bigger screens).
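
    Nothing technically rules out mixing the two; a max-width block is a reasonable home for styles that only make sense on small screens (hamburger toggles, enlarged tap targets) and would otherwise have to be undone inside every min-width block. An illustrative pattern (class names made up):

        /* Base styles shared by every width. */
        .nav-link { color: #333; }

        /* Small-screen-only styles, scoped so they never need undoing later. */
        @media (max-width: 767px) {
          .nav-link { display: block; padding: 1em; } /* large tap target */
        }

        /* Mobile-first enhancement for wider screens. */
        @media (min-width: 768px) {
          .nav-link { display: inline; }
        }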

    Read the article

  • context.scale() with non-aspect-ratio-preserving parameters screws effective lineWidth

    - by rrenaud
    I am trying to apply some natural transformations whereby the x axis is remapped to some very small domain, like from 0 to 1, whereas y is remapped to some small but substantially larger domain, like 0 to 30. This way, drawing code can be nice and clean and only care about the model space. However, if I apply a scale, then lines are also scaled, which means that horizontal lines become extremely fat relative to vertical ones. Here is some sample code. When natural_width is much less than natural_height, the picture doesn't look as intended. I want the picture to look like this, which is what happens with a scale that preserves aspect ratio: rftgstats.com/canvas_good.png However, with a non-aspect-ratio-preserving scale, the results look like this: rftgstats.com/canvas_bad.png

        <html><head><title>Busted example</title></head>
        <body>
        <canvas id="example" height="300" width="300"></canvas>
        <script>
        var canvas = document.getElementById('example');
        var ctx = canvas.getContext('2d');
        var natural_width = 10;
        var natural_height = 50;
        ctx.scale(canvas.width / natural_width, canvas.height / natural_height);
        var numLines = 20;
        ctx.beginPath();
        for (var i = 0; i < numLines; ++i) {
          ctx.moveTo(natural_width / 2, natural_height / 2);
          var angle = 2 * Math.PI * i / numLines;
          // yay for screen size independent draw calls.
          ctx.lineTo(natural_width / 2 + natural_width * Math.cos(angle),
                     natural_height / 2 + natural_height * Math.sin(angle));
        }
        ctx.stroke();
        ctx.closePath();
        </script>
        </body>
        </html>
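
    A standard trick here: path coordinates are transformed as the path is built, but the line width is computed with the transform in force when stroke() is called. So the path can be constructed under the non-uniform scale and stroked after restoring the identity transform:

        // Build the path in model space...
        ctx.save();
        ctx.scale(canvas.width / natural_width, canvas.height / natural_height);
        ctx.beginPath();
        for (var i = 0; i < numLines; ++i) {
          ctx.moveTo(natural_width / 2, natural_height / 2);
          var angle = 2 * Math.PI * i / numLines;
          ctx.lineTo(natural_width / 2 + natural_width * Math.cos(angle),
                     natural_height / 2 + natural_height * Math.sin(angle));
        }
        ctx.restore();  // ...but stroke it with the identity transform,
        ctx.stroke();   // so every line keeps a uniform screen-pixel width.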

    Read the article

  • asp.net, wcf authentication and caching

    - by andrew
    I need to place my app's business logic into a WCF service. The service shouldn't be dependent on ASP.NET, and there is a lot of data regarding the authenticated user that is frequently used in the business logic, hence it's supposed to be cached (probably using a distributed cache). As for authentication, I'm going to use two-level authentication: forms authentication on the front end and message username authentication on the back end (the WCF service). The same custom membership provider is supposed to be used for both. To cache the authenticated user data, I'm going to implement two service methods:

    1) Authenticate - will retrieve the needed data and place it into the cache (with the username used as the key)
    2) SignOut - will remove the data from the cache

    Question 1: Is it correct to perform authentication this way (in two places)? Question 2: Is this caching strategy worth using, or should I look at using an ASP.NET-compatible service and the ASP.NET session? Maybe these questions are too general, but I'd like to get any suggestions or recommendations.
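
    For question 2, a minimal sketch of the two methods against an in-process cache (System.Runtime.Caching, .NET 4; a distributed cache such as AppFabric or memcached could sit behind the same two calls). LoadUserData is a placeholder:

        using System;
        using System.Runtime.Caching;

        public class AuthService
        {
            private static readonly ObjectCache Cache = MemoryCache.Default;

            public void Authenticate(string username)
            {
                // Absolute expiration guards against a SignOut that never arrives.
                Cache.Set(username, LoadUserData(username),
                          DateTimeOffset.Now.AddMinutes(30));
            }

            public void SignOut(string username)
            {
                Cache.Remove(username);
            }

            private object LoadUserData(string username)
            {
                // placeholder: load profile/permission data for the user
                return new { Name = username };
            }
        }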

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, relative to how SQL Server performs joins. Before executing a query, SQL Server generates an Execution Plan. The Execution Plan is basically an expression tree of what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each Join operation based on the expected number of rows in the inner and outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need data ordered, and probably many other factors. Join algorithms:

    - Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
    - Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    - Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use the same one?
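
    To the closing question: LINQ to Objects does not plan per query the way SQL Server does. Enumerable.Join always performs a hash join, buffering the inner sequence into a lookup and then streaming the outer one. A simplified sketch of roughly what it does internally:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class JoinSketch
        {
            // Roughly Enumerable.Join: always a hash join.
            public static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
                IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
                Func<TOuter, TKey> outerKey, Func<TInner, TKey> innerKey,
                Func<TOuter, TInner, TResult> result)
            {
                var lookup = inner.ToLookup(innerKey);   // one pass: build the hash table
                foreach (var o in outer)                 // one pass: probe it
                    foreach (var i in lookup[outerKey(o)])
                        yield return result(o, i);
            }
        }

    So sorting the inputs or indexing the DataTables makes no difference to LINQ to Objects; merge or nested-loops behavior would have to be written by hand.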

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation: when we install this service, setting up port forwarding so the outside world can gain access to Tomcat always seems to become a problem. It seems most of the time the owner doesn't know the router password, etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic:

    1. Set up an SSH tunnel from each client office to a central server. Basically, the remote devices would connect to that central server on a port and the traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I am not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up uPnP to try to configure the hole for me. This would likely work most of the time, but uPnP isn't guaranteed to be turned on. It may be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and am uncertain of how exactly they work.

    We have access to a centralized server, which is required for authentication, if that makes anything easier. What else should I be looking at to get this accomplished?
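
    For option 1, the tunnel itself is a one-liner on the office server; hostnames and ports below are illustrative:

        # Expose the office Tomcat (localhost:8443) as port 10443 on the
        # central server; -N opens no shell, the keepalive preserves NAT state.
        ssh -N -R 10443:localhost:8443 tunnel@central.example.com \
            -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes

    Two caveats: sshd binds remote forwards to loopback unless GatewayPorts is enabled on the central server (or a proxy sits in front of it), and something like autossh or a service wrapper is needed to cover the reconnect concern.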

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which on this customer's massive network (it's a huge ISP) are a heavily disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT index with set() and deallocate a suffix with clear(). The method nextClearBit() can find the next available index. I handle synchronization/concurrency issues manually. This works pretty well for a small range... the entire index is small (around 10k), it is blazing fast, and it can be easily serialized into a Blob field. The problem is, some new devices can handle RTs of 32 bits (range 1 / 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, and be limited to the int range anyway). Even with this massive range available, the client still wants to free available Route Targets for the end user, mainly because the lowest ones (up to 65535) - which are compatible with old routers - are being heavily disputed. Before I tell the customer that this is impossible and he will have to live with my reusable index for lower RTs (up to 65550) and use a database sequence for the other ones (which means that when the user frees a Route Target, it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6 if it matters), or I am missing a killer feature of the Oracle database (11gR2 if it matters)... wishful thinking. Thank you very much in advance.
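
    One way to keep the BitSet semantics without the ~500 MB allocation: split the 32-bit space into 64K pages of 64K bits and allocate each page's BitSet lazily, so memory tracks the ranges actually in use (the contested low range costs a single 8 KB page). A Java sketch:

        import java.util.BitSet;
        import java.util.HashMap;
        import java.util.Map;

        public class PagedIndexPool {
            private static final int PAGE_BITS = 1 << 16; // 65536 indexes per page
            private final Map<Integer, BitSet> pages = new HashMap<Integer, BitSet>();

            public synchronized void allocate(long index) { page(index).set(offset(index)); }

            public synchronized void free(long index)     { page(index).clear(offset(index)); }

            public synchronized long nextFree(long from) {
                long startPage = from / PAGE_BITS;
                for (long p = startPage; p < 65536; p++) {
                    BitSet bits = page(p * PAGE_BITS);
                    int clear = bits.nextClearBit(p == startPage ? offset(from) : 0);
                    if (clear < PAGE_BITS) return p * PAGE_BITS + clear;
                }
                return -1; // range exhausted
            }

            private BitSet page(long index) {
                Integer key = (int) (index / PAGE_BITS);
                BitSet bits = pages.get(key);
                if (bits == null) { bits = new BitSet(PAGE_BITS); pages.put(key, bits); }
                return bits;
            }

            private int offset(long index) { return (int) (index % PAGE_BITS); }
        }

    Persisting then becomes one small blob per dirty page rather than one giant bitmap.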

    Read the article

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

    Read the article

  • How to implement Administrator rights in Java Application?

    - by Yatendra Goel
    I am developing data modeling software that is implemented in Java. This application converts textual data (stored in a database) to a graphical form so that users can interpret the data more efficiently. The application will be accessed by 3 kinds of users:

    1. Managers (who can fill the database with data and can also view the visual form of the data after entering it)
    2. Viewers (who can only view the visual form of the data that has been filled in by managers)
    3. Administrators (who can create and manage other administrators, managers and viewers)

    Now, how do I implement 3 different views of the same application? Note: managers, viewers and administrators can be located in any part of the world and should access the application through the internet. One idea that came to my mind is as follows:

    Step 1: Code all the business logic in EJBs so that it can be used in a distributed environment (i.e. accessed by several users through the internet).
    Step 2: Code 3 Swing GUI clients: one for administrators, one for managers and one for viewers. These 3 GUI clients can access the business logic written in the EJBs.
    Step 3: Distribute the clients to their corresponding users. For instance, the manager client to managers.

    Questions:

    Q1. Is the above approach correct?
    Q2. This is very common functionality that various software products have. Do they implement it this way, or in some other way?
    Q3. If another approach would be better, what is that approach?
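
    On Q1 and Q2: the split is workable, and the common refinement is to enforce the roles in the EJB layer itself rather than trusting each client, using the standard security annotations. A sketch with illustrative method signatures:

        import javax.annotation.security.RolesAllowed;
        import javax.ejb.Stateless;

        @Stateless
        public class DataModelBean {

            @RolesAllowed({"ADMIN", "MANAGER"})
            public void fillData(String entityName, String payload) {
                // managers and administrators may write
            }

            @RolesAllowed({"ADMIN", "MANAGER", "VIEWER"})
            public String viewModel(long modelId) {
                // everyone may read the graphical form
                return null; // placeholder
            }

            @RolesAllowed("ADMIN")
            public void createUser(String login, String role) {
                // administrators only
            }
        }

    With the checks server-side, a single Swing client that shows or hides views based on the logged-in role is often easier to ship and patch than three separate clients, which partly answers Q3 as well.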

    Read the article

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. The test below took 11.2 seconds (debug mode); I am seeing this large startup time in all my tests (basically, creating the first session takes a ton of time). Setup: Windows 2003 SP2 / Oracle 10gR2 latest CPU / ODP.net 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using NUnit.Framework;
        using NHibernate;
        using FluentNHibernate;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase<T>:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time is very small, and the resulting entity is really small. Subsequent queries are fine; it seems to be getting the first session started. Has anyone else seen something similar?
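
    That first-hit cost is almost always Configuration plus BuildSessionFactory, which parses and compiles every mapping; if each repository instance builds its own factory, every test pays the full price. The usual shape of the fix, sketched (the Configure() call stands in for however the Fluent NHibernate setup is actually wired):

        using NHibernate;
        using NHibernate.Cfg;

        // Build the ISessionFactory once per process: it is the expensive,
        // thread-safe object, while ISession stays cheap and per-unit-of-work.
        public static class SessionFactoryHolder
        {
            private static readonly object Sync = new object();
            private static ISessionFactory factory;

            public static ISessionFactory Factory
            {
                get
                {
                    if (factory == null)
                        lock (Sync)
                            if (factory == null)
                                factory = new Configuration().Configure().BuildSessionFactory();
                    return factory;
                }
            }
        }

    The first session still pays the startup once, but every later OpenSession() is cheap.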

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET Web Portal (that's very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently, we use a single-solution model. Right now, we have:

    - 3 core projects
    - 60 application projects
    - 80 module projects

    To reduce copy and pasting between projects, we're going to factor out common functionality (data access, business logic) into separate projects. I'd also like to introduce unit tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects. We generally only load the 3 core projects and then whatever app's/module's project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module only references the 3 core projects. Soon, apps/modules may start referencing the Data Access/Business Logic projects, but in general apps and modules do not reference one another. So to recap: what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • C++/Win32 : XP Visual Styles - no controls are showing up?

    - by mrl33t
    Okay, so I'm pretty new to C++ and the Windows API, and I'm just writing a small application. I wanted my application to make use of visual styles in XP, Vista and Windows 7, so I added this line to the top of my code:

        #pragma comment(linker,"\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"")

    It seemed to work perfectly on my Windows 7 machine and also on a Vista machine. But when I tried the application on XP, it wouldn't load any controls (e.g. buttons, labels, etc.) - not even message boxes would display. This image shows a small test application which I've just put together to demonstrate what I'm trying to explain: http://img704.imageshack.us/img704/2250/myapp.png In this test application I'm not using any particularly fancy or complicated code. I've effectively just taken the most basic sample code from the MSDN Library (http://msdn.microsoft.com/en-us/library/ff381409.aspx) and added a section to the WM_CREATE message handler to create a button:

        MyBtn = CreateWindow(L"Button", L"My Button",
                             BS_PUSHBUTTON | WS_CHILD | WS_VISIBLE,
                             25, 25, 100, 30, hWnd, NULL, hInst, 0);

    But I just can't figure out what's going on and why it's not working. Any ideas? Thank you in advance. (By the way, the application works on XP if I remove the manifest section from the top - obviously without visual styles, though. I should also probably mention that the app was built using Visual C++ 2010 Express on a Windows 7 machine - if that makes a difference?)
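
    On XP specifically, requesting ComCtl32 v6 through the manifest also requires the common controls library to be initialized before any window is created, otherwise CreateWindow for the standard classes fails silently, which matches this symptom exactly. A sketch to call early in wWinMain (the wrapper function name is made up):

        #include <windows.h>
        #include <commctrl.h>
        #pragma comment(lib, "comctl32.lib")

        BOOL InitCommonControlsForXp(void)
        {
            INITCOMMONCONTROLSEX icc;
            icc.dwSize = sizeof(icc);
            icc.dwICC  = ICC_STANDARD_CLASSES;  // buttons, edits, statics, ...
            return InitCommonControlsEx(&icc);
        }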

    Read the article

  • Mutually beneficial IP/copyright clauses for contract-based freelance work

    - by Nathan de Vries
    I have a copyright section in the contract I give to my clients stating that I retain copyright on any works produced during my work for them as an independent contractor. This is most definitely not intended to place arbitrary restrictions on my clients, but rather to maintain my ability to decide on how the software I create is licensed and distributed. Almost every project I work on results in at least one part of it being released as open source. Every project I work on makes use of third-party software released in the same fashion, so returning the favour is something I would like to continue doing. Unfortunately, the contract is not so clear when it comes to defining the rights of the client in the use of said software. I mention that the code will be licensed to them, but do not mention specifics about exclusivity, ability to produce derivatives etc. As such, a client has raised concerns about the copyright section of my contract, and has suggested that I reword it such that all copyrights are transferred entirely to the client on final payment for the project. This will almost certainly reduce my ability to distribute the software I have created; I would much prefer to find a more mutually beneficial agreement where both our concerns are appeased. Are there any tried and true approaches to licensing software in this kind of situation? To summarise: I want to maintain the ability to license (parts of) the software under my own terms, independently of my relationship with the client; with some guarantee to the client that no trade-secrets or critical business logic will be shared; giving them the ability to re-use my code in their future projects; but not necessarily letting them sell it (I'm not sure about this, though...what happens if they sell their business and the software along with it?) I realise that everyone's feedback is going to be prefixed with "IANAL", however I appreciate any thoughts you might have on the matter.

    Read the article

  • calling same function on different buttons not loaded yet

    - by Jordan Faust
    I cannot get this to work for every button, and I cannot find anything explaining why. I'm guessing it is something small that I am missing:

        $(document).ready(function() {
            // delete the selected row from the database
            $(document).on('click', '#business-area-delete-button', { model: "BusinessArea" }, deleteRow);
            $(document).on('click', '#business-type-delete-button', { model: "BusinessType" }, deleteRow);
            $(document).on('click', '#client-delete-button', { model: "Client" }, deleteRow);
            $(document).on('click', '#client-type-delete-button', { model: "ClientType" }, deleteRow);
            $(document).on('click', '#communication-channel-type', { model: "CommunicationChannelType" }, deleteRow);
            $(document).on('click', '#parameter-type-delete-button', { model: "ParameterType" }, deleteRow);
            $(document).on('click', '#validation-method-delete-button', { model: "ValidationMethod" }, deleteRow);
        });

    The event function:

        function deleteRow(event) {
            $.ajax({
                type: 'POST',
                data: { id: $(".delete-row").attr("id") },
                url: "/mysite/admin/delete" + event.data.model,
                success: function(data, textStatus) {
                    $('#main-content').html(data);
                },
                error: function(XMLHttpRequest, textStatus, errorThrown) {
                    jQuery('#alerts').html(XMLHttpRequest.responseText);
                },
                complete: function(XMLHttpRequest, textStatus) {
                    placeAlerts();
                }
            });
            return false;
        }

    This works only for the button with id validation-method-delete-button. I use document and not the button itself because the button is contained in a template that is loaded later via AJAX. I have this working for a similar function that is selecting a row in a table; however, I am not attempting to pass data in that scenario.
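
    Two things worth checking first: '#communication-channel-type' doesn't follow the '-delete-button' naming of the others, and a duplicated id anywhere in the loaded templates will also break matching. A sketch of a single class-based binding that replaces the seven near-duplicates (assumed markup: <button class="delete-button" data-model="BusinessArea">):

        $(document).ready(function() {
            $(document).on('click', '.delete-button', function() {
                var model = $(this).data('model');
                $.ajax({
                    type: 'POST',
                    data: { id: $('.delete-row').attr('id') },
                    url: '/mysite/admin/delete' + model,
                    success: function(data) { $('#main-content').html(data); }
                });
                return false;
            });
        });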

    Read the article

  • How to handle too many files in Qt

    - by mree
    I'm not sure how to ask this, but here goes the question: I'm migrating from J2SE to Qt. After creating some small applications in Qt, I noticed that I've created way too many files compared to what I would've created if I was developing in Java (I use NetBeans). For example, for a GUI for Orders, I'd have to create: a main Order search window, an Edit Order dialog, a Manage Order dialog, and maybe some other dialogs... In Java, I don't have to create a new file for every new dialog; the dialog can be created in the JFrame class itself, so I will only be seeing 1 file for Orders which has the other dialogs in it. However, in Qt, I'd have to create 1 .ui file, 1 header file, and 1 cpp file for each dialog (I know I can just put the cpp in the header, but it's easier to view code in separate files). So, in the end, I might end up with 3 (if there are 3 dialogs) x 3 files = 9 files for the GUI in Qt, compared to Java which is only 1 file. I do know that I can create a GUI by coding it manually, but that seems easy for small GUIs and not so much for complicated GUIs with lots of inputs, tabs, etc. So, is there any suggestion on how to minimize the number of files created in Qt?

    Read the article

  • Building a structure/object in a place other than the constructor

    - by Vishal Naidu
    I have different types of objects representing the same business entity. UIObject, PowershellObject, DevCodeModelObject and WMIObject are all different representations of the same entity. So, say the entity is Animal; then I have AnimalUIObject, AnimalPSObject, AnimalModelObject, AnimalWMIObject, etc. The implementations of AnimalUIObject, AnimalPSObject and AnimalModelObject are all in separate assemblies. My scenario is: I want to verify the contents of the business entity Animal irrespective of the assembly it came from. So I created a GenericAnimal class to represent the Animal entity, and added the following constructors:

    GenericAnimal(AnimalUIObject)
    GenericAnimal(AnimalPSObject)
    GenericAnimal(AnimalModelObject)

    Basically, I made GenericAnimal depend on all the underlying assemblies, so that verification deals with this one abstraction. The other approach is to give GenericAnimal an empty constructor and have each underlying assembly provide a Transform() method which builds the GenericAnimal. Both approaches have pros and cons:

    1st approach. Pros: all construction logic is in one place, the GenericAnimal class. Cons: GenericAnimal must be touched every time there is a new representation form.
    2nd approach. Pros: construction responsibility is delegated to the underlying assembly. Cons: as the construction logic is spread across assemblies, if tomorrow I need to add a property X to GenericAnimal, I have to touch all the assemblies to change their Transform() methods.

    Which approach looks better, or which would you consider the lesser evil? Is there any alternative better than these two?

    Read the article

  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP type system written in .NET 2.0 to manage all of its day-to-day things (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out and uses the following architectural designs:

    - WebForms
    - A data layer that is a thin wrapper around SqlCommand calling stored procedures
    - Rudimentary DTO-style business objects that are populated via the sprocs
    - A "business logic" layer that acts as a gateway between the webform and database (i.e. the code-behind calls that layer)

    Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age, and changes are increasingly difficult to make. How would you go about introducing refactoring steps to A) modernize the app (i.e. proper separation of concerns) and B) make sure that the app can readily adapt to change in the organization? IMO the changes would involve:

    - Introduce an ORM like LINQ to SQL and get rid of the sprocs for CRUD
    - Assuming that you can't just throw out WebForms, introduce the M-V-P pattern to the forms (see the sketch below)
    - Make sure the gateway classes conform to SRP and the other SOLID principles
    - Turn the logic that is re-used into web service methods instead of copying code

    What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
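
    For the M-V-P step, the shape that usually works in WebForms is a view interface implemented by the code-behind plus a presenter with no System.Web dependency, which is also what makes the new unit tests possible. A minimal sketch with illustrative names:

        using System.Collections.Generic;

        public class Order { public int Id; public string Customer; }

        public interface IOrderRepository
        {
            IList<Order> FindByCustomer(string term);
        }

        public interface IOrderSearchView
        {
            string SearchTerm { get; }
            void ShowResults(IList<Order> orders);
        }

        // Plain, testable class; the .aspx code-behind implements the view
        // interface and forwards button clicks to Search().
        public class OrderSearchPresenter
        {
            private readonly IOrderSearchView view;
            private readonly IOrderRepository repository;

            public OrderSearchPresenter(IOrderSearchView view, IOrderRepository repository)
            {
                this.view = view;
                this.repository = repository;
            }

            public void Search()
            {
                view.ShowResults(repository.FindByCustomer(view.SearchTerm));
            }
        }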

    Read the article

  • Where do you put your dependencies?

    - by The All Foo
    If I use the dependency injection pattern to remove dependencies, they end up somewhere else. For example, see Snippet 1, or what I call the Object Maker. I mean, you have to instantiate your objects somewhere... so when you move a dependency out of one object, you end up putting it in another one. I see that this consolidates all my dependencies into one object. Is that the point: to reduce your dependencies so that they all reside in a single (or as close to a single as possible) location?

    Snippet 1 - Object Maker

        <?php
        class ObjectMaker
        {
            public function makeSignUp()
            {
                $DatabaseObject = new Database();
                $TextObject = new Text();
                $MessageObject = new Message();
                $SignUpObject = new ControlSignUp();
                $SignUpObject->setObjects($DatabaseObject, $TextObject, $MessageObject);
                return $SignUpObject;
            }

            public function makeSignIn()
            {
                $DatabaseObject = new Database();
                $TextObject = new Text();
                $MessageObject = new Message();
                $SignInObject = new ControlSignIn();
                $SignInObject->setObjects($DatabaseObject, $TextObject, $MessageObject);
                return $SignInObject;
            }

            public function makeTweet($DatabaseObject = NULL, $TextObject = NULL, $MessageObject = NULL)
            {
                if ($DatabaseObject == 'small') {
                    $DatabaseObject = new Database();
                } else if ($DatabaseObject == NULL) {
                    $DatabaseObject = new Database();
                    $TextObject = new Text();
                    $MessageObject = new Message();
                }
                $TweetObject = new ControlTweet();
                $TweetObject->setObjects($DatabaseObject, $TextObject, $MessageObject);
                return $TweetObject;
            }

            public function makeBookmark($DatabaseObject = NULL, $TextObject = NULL, $MessageObject = NULL)
            {
                if ($DatabaseObject == 'small') {
                    $DatabaseObject = new Database();
                } else if ($DatabaseObject == NULL) {
                    $DatabaseObject = new Database();
                    $TextObject = new Text();
                    $MessageObject = new Message();
                }
                $BookmarkObject = new ControlBookmark();
                $BookmarkObject->setObjects($DatabaseObject, $TextObject, $MessageObject);
                return $BookmarkObject;
            }
        }
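
    To the question: yes, that concentration is largely the point; the maker is acting as a hand-rolled composition root, the one place where the object graph is assembled, so the rest of the code can stay ignorant of construction. The duplication inside it can still be factored out, sketched below (ignoring the 'small' variants for brevity):

        <?php
        class ObjectMaker
        {
            // The three shared dependencies, built in one place.
            private function wire($object)
            {
                $object->setObjects(new Database(), new Text(), new Message());
                return $object;
            }

            public function makeSignUp()   { return $this->wire(new ControlSignUp()); }
            public function makeSignIn()   { return $this->wire(new ControlSignIn()); }
            public function makeTweet()    { return $this->wire(new ControlTweet()); }
            public function makeBookmark() { return $this->wire(new ControlBookmark()); }
        }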

    Read the article

  • Unit Testing the Use of TransactionScope

    - by Randolpho
    The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction.

    The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows:

        using (var scope = new TransactionScope())
        {
            // transactional methods
            datalayer.InsertFoo();
            datalayer.InsertBar();
            scope.Complete();
        }

    While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... unpossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must.

    The question: How can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern?

    Final thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope, which would be a mockable facade over TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?
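
    There is a middle path that keeps the standard pattern: TransactionScope publishes an ambient transaction through the static Transaction.Current, so a mocked data layer can simply record whether one existed when it was called. A sketch assuming Moq and NUnit, with illustrative names:

        using System.Transactions;
        using Moq;
        using NUnit.Framework;

        public interface IDataLayer { void InsertFoo(); void InsertBar(); }

        public class BusinessLayer
        {
            private readonly IDataLayer data;
            public BusinessLayer(IDataLayer data) { this.data = data; }

            public void SaveFooAndBar()
            {
                using (var scope = new TransactionScope())
                {
                    data.InsertFoo();
                    data.InsertBar();
                    scope.Complete();
                }
            }
        }

        [TestFixture]
        public class BusinessLayerTests
        {
            [Test]
            public void SaveFooAndBar_RunsInsideAnAmbientTransaction()
            {
                Transaction seen = null;
                var dataLayer = new Mock<IDataLayer>();
                dataLayer.Setup(d => d.InsertFoo())
                         .Callback(() => seen = Transaction.Current);

                new BusinessLayer(dataLayer.Object).SaveFooAndBar();

                Assert.IsNotNull(seen, "InsertFoo ran outside a TransactionScope");
            }
        }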

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on the fly to see the specific details that they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc... They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article
