Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.

Page 338 of 667

  • Ubuntu 11.10 boot: xhost: unable to open display

    - by paulus_almighty
    I've had this papercut for a while now, it's time it was fixed. When I boot up Ubuntu, choosing "Ubuntu...generic" from the grub screen, Ubuntu fails to load. It just sits at the driver/module loading screen. What seems to be the most significant line in this output is "xhost: unable to open display". If I choose "Ubuntu...(recovery mode)" from grub then it loads OK. I don't get why this is. Out of interest I tried enabling boot error logging with BOOTLOGD_ENABLE=Yes in /etc/default/bootlogd, but I'm not seeing anything in that file. ETA: I've had this problem since a fresh install of 11.10. Here's lshw:

        $ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: GF104 [GeForce GTX 460]
               vendor: nVidia Corporation
               physical id: 0
               bus info: pci@0000:03:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nvidia latency=0
               resources: irq:16 memory:f6000000-f7ffffff memory:e0000000-e7ffffff memory:ec000000-efffffff ioport:bf00(size=128) memory:e8000000-e807ffff

    Read the article

  • Display settings problems with Ubuntu 14.04 LTS

    - by DontUSASme
    For whatever reason (this was working correctly on Ubuntu 12.04 LTS), when I try to set my displays on top of each other in the settings, it won't let me place them that way. When I click "apply" I get the error message "Could Not Set Configuration Mode for CRTC 63." Can anybody help me with this? It won't let me set the two displays on top of each other, but it will let me place them side by side. Also, I get a random "Unknown display" as well. This didn't exist in 12.04 LTS and I have no idea what it is, seeing as the only screens that are in use, or even plugged in, are my laptop's display (1366x768) and my Samsung TV display (1360x768) through HDMI.

    Read the article

  • Ubuntu 12.04 Nvidia GTX 460 video card installation

    - by aczietlow
    Currently testing Ubuntu 12.04 x64 for our development team. After upgrading from 11.10 I've been having video card issues. I'm using an Nvidia GeForce GTX 460. Whenever I try to launch Nvidia X server I get the following error message: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server." I've tried running sudo nvidia-xconfig multiple times and rebooting, with no success. I've also tried getting the nvidia-current driver from the x-swat repo:

        sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    Followed again by a reboot, this did nothing for me but knock my resolution down to 800x600. Finally I've tried:

        sudo apt-get purge xserver-xorg
        sudo apt-get update
        sudo apt-get install xserver-xorg xserver-xorg-video-all
        sudo reboot

    Does anyone have any thoughts or directions they could point me in? To the best of my understanding my video card is supposed to be supported.

    Read the article

  • Wine doesn't work (Problem with the mount manager)

    - by audrianore
    I don't know exactly when the problem started. My Wine worked well a couple of days ago. Then, a few hours ago, I was about to install some Windows programs but got nothing: no installer window showed up, and no error report. It just won't work. Now I've found where the problem is (screenshot below), but I don't know how to fix it. Any idea? Any help would be appreciated. Thanks in advance. Details: Wine 1.4, Ubuntu 12.04 LTS. I tried: autoremove in Terminal; deleting the software configuration.

    Read the article

  • Oracle VM Server for SPARC seminar (Japanese-language listing)

    - by Yusuke.Yamamoto
    Seminar date: 2010/10/26. Topic: server virtualization on SPARC CMT servers running Oracle Solaris with Oracle VM Server for SPARC (formerly Logical Domains / LDoms). Agenda includes: Oracle Virtualization Strategy ("Only From Oracle"); Oracle VM for SPARC release history and key components (SPARC Enterprise T / SPARC T3, System Firmware, Oracle Solaris, Logical Domains Manager); getting started with Oracle VM for SPARC (the ldm command, Configuration Assistant, Logical Domains P2V). Slides: http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/1026_OVMforSPARC_Rev02.pdf

    Read the article

  • ????——???????????

    - by hamsun
    user12619775, 2014-08-07. The following Oracle University courses are listed: Managing Oracle Database on Oracle Solaris 11; Oracle Solaris 11 System Administration for Experienced UNIX/Linux Administrators; Oracle Database 12c: RAC Administration; Oracle Database 12c: Implement Partitioning; Oracle WebCenter Portal 11g for Developers; Oracle Hyperion PSPB 11.1.2: Create & Manage Applications (11.1.2.3); R12.x Oracle Financials Accounting Hub Fundamentals; Oracle Demantra 7.3.1 Demand Management Fundamentals; RPAS Administration and Configuration Fundamentals 14.0; PeopleSoft PeopleTools I Rel 8.53 (Training On Demand). See education.oracle.com for details.

    Read the article

  • Most useful free .NET libraries?

    - by Binoj Antony
    I have used a lot of free .NET libraries, some from Microsoft itself! Which ones have you found the most useful?
    Dependency Injection/Inversion of Control: Unity Framework (Microsoft), StructureMap (Jeremy Miller), Castle Windsor, NInject, Spring Framework, Autofac, Managed Extensibility Framework.
    Logging: Logging Application Block (Microsoft), Log4Net (Apache), Error Logging Modules and Handlers (ELMAH), NLog.
    Compression: SharpZipLib, DotNetZip, YUI Compressor (CSS and JS compression/minification), AjaxMinifier (in other downloads; JS compression, also includes an MSBuild task).
    Ajax: Ajax Control Toolkit (Microsoft), AJAXNet Pro.
    Data Mapper: XmlDataMapper, AutoMapper.
    ORM: NHibernate, Castle ActiveRecord, Subsonic, XmlDataMapper.
    Charting/Graphics: Microsoft Chart Controls for ASP.NET 3.5 SP1, Microsoft Chart Controls for Winforms, ZedGraph Charting, NPlot (charting for ASP.NET and WinForms).
    PDF Creators/Generators: PDFsharp, iTextSharp.
    Unit Testing/Mocking: NUnit, Rhino Mocks, Moq, TypeMock.Net, xUnit.net, mbUnit, Machine.Specifications.
    Automated Web Testing: Selenium, Watin.
    URL Rewriting: url rewriter, UrlRewriting.Net, Url Rewriter and Reverse Proxy (Managed Fusion).
    Controls: Krypton (free winform controls), Source Grid (a grid control), Devexpress (free controls).
    Unclassified: CSLA Framework (business objects framework), AForge.net (AI, computer vision, genetic algorithms, machine learning), Enterprise Library 4.1 (logging, exception management, validation, policy injection), File helpers library, C5 Collections (collections for .NET), Quartz.NET (enterprise job scheduler for the .NET platform), MiscUtil (utilities by Jon Skeet), Lucene.net (text indexing and searching), Json.NET (Linq over JSON), Flee (expression evaluator), PostSharp (AOP), IKVM (brings the extensive world of Java libraries to .NET).
    Title of the question taken from here. [EDIT] Please provide links to these free libraries as well. Once we have a huge list, it can be arranged in categories! Please do not mention .NET applications/EXEs here.

    Read the article

  • How to populate a generic list of objects in C# from SQL database

    - by developr
    I am just learning ASP.NET c# and trying to incorporate best practices into my applications. Everything that I read says to layer my applications into DAL, BLL, UI, etc based on separation of concerns. Instead of passing datatables around, I am thinking about using custom objects so that I am loosely coupled to my data layer and can take advantage of intellisense in VS. I assume these objects would be considered DTOs? First, where do these objects reside in my layers? BLL, DAL, other? Second, when populating from SQL, should I loop through a data reader to populate the list or first fill a data table, then loop through the table to populate the list? I know you should close the database connection as soon as possible, but it seems like even more overhead to populate the data table and then loop through that for the list. Third, everything I see these days says use Linq2SQL. I am planning to learn Linq2SQL, but at this time I am working with a legacy database that doesn't have foreign keys setup and I do not have the ability to fix it atm. Also, I want to learn more about c# before I start getting into ORM solutions like nHibernate. At the same time I don't want to type out all the connection and SQL plumbing for every query. Is it ok to use the Enterprise DAAB for now?
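
    A minimal sketch of the data-reader approach, assuming a hypothetical Product table and DTO (names and columns are placeholders, not from the question): the list is materialized while the reader is open, so the connection is closed as soon as the rows have been read and no intermediate DataTable is needed.

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Product            // hypothetical DTO; lives where DAL and BLL can both see it
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductRepository
        {
            // DAL method: materializes the list straight from the reader, so the
            // connection is only open while rows are actually being read.
            public static List<Product> GetProducts(string connectionString)
            {
                var products = new List<Product>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT Id, Name FROM Product", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            products.Add(new Product
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1)
                            });
                        }
                    }
                }
                return products;
            }
        }

    A common arrangement is to keep such DTOs in a shared entities assembly visible to both the DAL and the BLL, with the query method itself living in the DAL.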

    Read the article

  • Why is my WPF splash screen progress bar out of sync with the execution of the startup steps?

    - by denny_ch
    Hello, I've implemented a simple WPF splash screen window which informs the user about the progress of the application startup. The startup steps are defined this way:

        var bootSequence = new[]
        {
            new {Do = (Action) InitLogging,         Message = "Init logging..."},
            new {Do = (Action) InitNHibernate,      Message = "Init NHibernate..."},
            new {Do = (Action) SetupUnityContainer, Message = "Init Unity..."},
            new {Do = (Action) UserLogOn,           Message = "Logon..."},
            new {Do = (Action) PrefetchData,        Message = "Caching..."},
        };

    InitLogging etc. are methods defined elsewhere, which perform some time-consuming tasks. The boot sequence gets executed this way:

        foreach (var step in bootSequence)
        {
            _status.Update(step.Message);
            step.Do();
        }

    _status denotes an instance of my XAML splash screen window containing a progress bar and a label for status information. Its Update() method is defined as follows:

        public void Update(string status)
        {
            int value = ++_updateSteps;
            Update(status, value);
        }

        private void Update(string status, int value)
        {
            var dispatcherOperation = Dispatcher.BeginInvoke(
                DispatcherPriority.Background,
                (ThreadStart) delegate
                {
                    lblStatus.Content = status;
                    progressBar.Value = value;
                });
            dispatcherOperation.Wait();
        }

    In the main this works: the steps get executed and the splash screen shows the progress. But I've observed that the splash screen for some reason doesn't update its content for all steps. This is the reason I call the Dispatcher asynchronously and wait for its completion, but even this didn't help. Has anyone else experienced this or some similar behaviour, and does anyone have advice on how to keep the splash screen's updates in sync with the execution of the boot sequence steps? I know that users will be unlikely to notice this behaviour, since the splash screen is doing something and the application starts after booting is completed. But I'm not sleeping well, because I don't know why it is not working as expected... Thx for your help, Denny
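
    One likely explanation is that the boot sequence itself runs on the UI thread, so the Background-priority dispatcher operations only get a chance to render between steps and can be starved by the next long-running step. A hedged sketch of the usual alternative follows: run the steps on a worker thread and marshal only the status updates back to the splash screen's dispatcher. The Bootstrapper class and its parameter names are illustrative, not taken from the original code.

        using System;
        using System.Threading;
        using System.Windows.Threading;

        public class BootStep
        {
            public Action Do;
            public string Message;
        }

        public static class Bootstrapper
        {
            // Runs the boot steps on a worker thread; only the status updates are
            // marshalled back to the UI via the splash screen's dispatcher.
            public static void Run(BootStep[] steps, Dispatcher uiDispatcher,
                                   Action<string> updateStatus, Action onCompleted)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    foreach (var step in steps)
                    {
                        // Synchronous Invoke keeps the status text in step with the work.
                        uiDispatcher.Invoke(updateStatus, step.Message);
                        step.Do();   // the heavy lifting stays off the UI thread
                    }
                    uiDispatcher.Invoke(onCompleted);
                });
            }
        }

    With the original members, the call might look like Bootstrapper.Run(steps, _status.Dispatcher, _status.Update, _status.Close), where the steps array uses a small named type instead of the anonymous type so it can be passed around.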

    Read the article

  • SQL Server database with clustered GUID PKs - switch clustered index or switch to sequential (comb)

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index for the table. We know that this is bad (due to the random nature of GUIDs). So, it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do, at least not at this time). We could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or we could, for the tables that are under the heaviest use, change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs. Is it possible to give any general recommendations in such a scenario? The application in question has 500+ tables, the largest one presently at about 1.5 million rows, a few tables around 500,000 rows, and the rest significantly lower (most of them well below 10K). Furthermore, the application is installed at several customer sites already, so we have to take any possible negative effects for existing customers into consideration. Thanks!
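
    For reference, a sketch of a sequential ("comb") GUID generator in the spirit of the algorithm NHibernate's guid.comb strategy uses (this is an illustration, not NHibernate's exact code): the last six bytes of a random GUID are overwritten with a timestamp, so values generated close together sort close together under SQL Server's uniqueidentifier ordering and cause far fewer page splits on a clustered index.

        using System;

        public static class CombGuid
        {
            // "Comb" GUID: random bytes up front, the current timestamp packed into
            // the last 6 bytes, so successive values are nearly sequential under
            // SQL Server's uniqueidentifier ordering.
            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();

                DateTime baseDate = new DateTime(1900, 1, 1);
                DateTime now = DateTime.UtcNow;

                // Days since the base date, plus time of day in 1/300-second ticks
                // (the resolution of SQL Server's datetime type).
                TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
                TimeSpan msecs = now.TimeOfDay;

                byte[] daysBytes = BitConverter.GetBytes(days.Days);
                byte[] msecsBytes = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

                // Reverse to big-endian so the byte groups sort the way SQL Server compares them.
                Array.Reverse(daysBytes);
                Array.Reverse(msecsBytes);

                Array.Copy(daysBytes, daysBytes.Length - 2, guidBytes, guidBytes.Length - 6, 2);
                Array.Copy(msecsBytes, msecsBytes.Length - 4, guidBytes, guidBytes.Length - 4, 4);

                return new Guid(guidBytes);
            }
        }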

    Read the article

  • log4Net EventlogAppender does not work for Asp.Net 2.0 WebSite?

    - by Amitabh
    I have configured the log4net EventLogAppender for ASP.NET 2.0, however it does not log anything. I have the following in my Web.config:

        <log4net>
          <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
            <param name="LogName" value="Test Log" />
            <param name="ApplicationName" value="Test-Web" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
            </layout>
          </appender>
          <root>
            <priority value="ERROR"/>
            <appender-ref ref="EventLogAppender"/>
          </root>
          <logger name="NHibernate">
            <level value="ERROR" />
            <appender-ref ref="EventLogAppender" />
          </logger>
        </log4net>

    I already have the Test-Log event log created and the AspNet user has permission on the Event Log registry entry. I also have log4net configured in Global.asax Application_Start:

        log4net.Config.XmlConfigurator.Configure();
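
    A quick way to check whether the appender is wired up at all is a smoke test in Application_Start along these lines (a sketch; the logger name is arbitrary). If the Error call below never reaches the event log, the cause is often that the worker process account cannot create the event source, which log4net's internal debugging (the log4net.Internal.Debug appSetting) will report to the trace output.

        using System;
        using log4net;
        using log4net.Config;

        public class Global : System.Web.HttpApplication
        {
            private static readonly ILog Log = LogManager.GetLogger(typeof(Global));

            protected void Application_Start(object sender, EventArgs e)
            {
                XmlConfigurator.Configure();   // reads the <log4net> section from Web.config

                // Root logger is at ERROR, so this should reach the EventLogAppender
                // if the appender could be created at all.
                Log.Error("log4net EventLogAppender smoke test");
            }
        }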

    Read the article

  • New to MVVM - Best practices for separating data processing thread and UI thread?

    - by OffApps Cory
    Good day. I have started messing around with the MVVM pattern, and I am having some problems with UI responsiveness versus data processing. I have a program that tracks packages. Shipment and package entities are persisted in a SQL database and are displayed in a WPF view. Upon initial retrieval of the records there is a noticeable pause before displaying the new shipments view, and I have not even implemented the code that counts shipments that are overdue/active yet (which will necessitate a tracking check via web service, and a lot of time). I have built this with the Ocean framework, and all appeared to be going well until I first started my foray into multi-threading. It broke, and it appeared to break something in Ocean... Here is what I did:

        Private QueryThread As New System.Threading.Thread(AddressOf GetShipments)

        Public Sub New()
            ' Insert code required on object creation below this point.
            Me.New(ViewManagerService.CreateInstance, ViewModelUIService.CreateInstance)
            'Perform initial query of shipments
            'QueryThread.Start()
            GetShipments()
            Console.WriteLine(Me.Shipments.Count)
        End Sub

        Public Sub New(ByVal objIViewManagerService As IViewManagerService, ByVal objIViewModelUIService As IViewModelUIService)
            MyBase.New(objIViewModelUIService)
        End Sub

        Public Sub GetShipments()
            Dim InitialResults = From shipment In db.Shipment.Include("Packages") _
                                 Select shipment
            Me.Shipments = New ShipmentsCollection(InitialResults, db)
        End Sub

    So I declared a new Thread, assigned it the GetShipments method and instantiated it in the default constructor. Ocean freaks out at this, so there must be a better way of doing it. I have not had the chance to figure out the usage of the SQL ORM thing in Ocean, so I am using Entity Framework (perhaps one of these days I will look at NHibernate or something too). Any information would be greatly appreciated. I have looked at a number of articles and they all have examples of simple uses. Some have mentioned the Dispatcher, but none really go very far into how it is used. Anyone know any good tutorials? Cory
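
    A common pattern from this era for keeping the query off the UI thread is BackgroundWorker; a hedged C# sketch is below (type and method names such as ShipmentsCollection and GetShipmentsFromDatabase are stand-ins for the code above). RunWorkerCompleted is raised on the synchronization context that called RunWorkerAsync, so when the worker is started from the UI thread the completion handler can update bound properties without touching the Dispatcher directly.

        using System.ComponentModel;

        public class ShipmentsViewModel
        {
            public ShipmentsCollection Shipments { get; private set; }

            public void LoadShipmentsAsync()
            {
                var worker = new BackgroundWorker();

                // Runs on a thread-pool thread: do the slow database / Entity Framework work here.
                worker.DoWork += (s, e) => e.Result = GetShipmentsFromDatabase();

                // Raised back on the thread that called RunWorkerAsync (the UI thread),
                // so it is safe to update data-bound properties here.
                worker.RunWorkerCompleted += (s, e) =>
                {
                    Shipments = (ShipmentsCollection)e.Result;
                    // raise INotifyPropertyChanged for Shipments here
                };

                worker.RunWorkerAsync();
            }

            private ShipmentsCollection GetShipmentsFromDatabase()
            {
                // placeholder for the Entity Framework query that builds the collection
                return new ShipmentsCollection();
            }
        }

        public class ShipmentsCollection { }   // stand-in for the real collection type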

    Read the article

  • Castle MonoRail ARDataBind trying to bind to non-existent row

    - by dave thieben
    I have a shopping cart application running on MonoRail and using Castle ActiveRecord/NHibernate, and there is a ShoppingCart table and a ShoppingCartItems table, which are mapped to entities. Here's the scenario: a user adds things to the shopping cart, say 5 items, and goes to view the cart. The cart shows all 5 items. the user duplicates the tab/window and gets another tab of the same cart (call it tab B). the user removes an item from the cart, so now there are 4 items in tab B, but in the original tab A, there are still 5 items. the user goes back to tab A, and updates something in the cart and clicks the "update" button which submits the changes. my MonoRail action tries to do an ARDataBind on ShoppingCartItems using the data from the view, which includes all 5 items. when it gets to the item that the user deleted from tab B, it throws a "No row with the given identifier exists" for that item. I can't figure out if there is a way to have it not bind that row, return null, return new instance, etc.? there is an AutoLoadBehavior parameter on the ARDataBind attribute, but that appears to only affect loading of child entities, and not the root entity. regardless of which option I choose, I get the exception before control even enters the action method (except AutoLoadBehavior.Never, but that doesn't really help me). instead, I have code that calls Request.ObtainParamsNode() to pull the form nodes and parse them manually into objects, and ignores the ones that no longer exist. is there a better way? thanks.

    Read the article

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), persistence (NHibernate mapping file) and DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15 developer, 3 year project. I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code generate the edmx files based on the class model? That's the inverse of the common path to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties so I tend to view it as a better choice than the entity model.

    Read the article

  • Service Oriented Architecture & Domain-Driven Design

    - by Michael
    I've always developed code in a SOA type of way. This year I've been trying to do more DDD, but I keep getting the feeling that I'm not getting it. At work our systems are load balanced and designed not to have state. The architecture is: Website ==Physical Layer== Main Service ==Physical Layer== Server 1 / Service 2 / Service 3 / Service 4. Only Server 1, Service 2, Service 3 and Service 4 can talk to the database, and the Main Service calls the correct service based on the products ordered. Every physical layer is load balanced too. Now when I develop a new service, I try to think DDD in that service even though it doesn't really feel like it fits. I use good DDD principles like entities, value types, repositories, aggregates, factories and so on. I've even tried using ORMs, but they just don't seem to fit in a stateless architecture. I know there are ways around it, for example using IStatelessSession instead of ISession with NHibernate. However, ORMs just feel like they don't fit in a stateless architecture. I've noticed I really only use some of the concepts and patterns DDD has taught me, but the overall architecture is still SOA. I am starting to think DDD doesn't fit in large systems, though I do think some of its patterns and concepts do. Like I said, maybe I'm just not grasping DDD, or maybe I'm over-analyzing my designs? Maybe by using the patterns and concepts DDD has taught me I am using DDD? Not sure if there is really a question in this post; it's more the thoughts I've had when trying to figure out where DDD fits in overall systems and how scalable it truly is. The truth is, I don't think I really even know what DDD is.
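
    On the IStatelessSession point, a minimal sketch of what swapping it in can look like (the OrderService and Order types here are placeholders): a stateless session has no first-level cache, no dirty tracking and no lazy loading, which is why it tends to sit more comfortably in a stateless service layer than a full ISession.

        using NHibernate;

        public class OrderService
        {
            private readonly ISessionFactory _sessionFactory;

            public OrderService(ISessionFactory sessionFactory)
            {
                _sessionFactory = sessionFactory;
            }

            // A stateless session skips the first-level cache, dirty checking and
            // lazy loading, so each call is a self-contained round trip.
            public void InsertOrder(Order order)
            {
                using (IStatelessSession session = _sessionFactory.OpenStatelessSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    session.Insert(order);
                    tx.Commit();
                }
            }
        }

        public class Order { }   // placeholder entity for the sketch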

    Read the article

  • Merge replication server side foreign key violation from unpublished table

    - by Reiste
    We are using SQL Server 2005 Merge Replication with SQL CE 3.5 clients. We are using partitions with filtering for the separate subscriptions, and nHibernate for the ORM mapping. There is automatic ID range management from SQL Server for the subscriptions. We have a table, Item, and a table with a foreign key to Item - ItemHistory. Both of these are replicated down, filtered according to the subscription. Item has a column called UserId, and is filtered per subscription with this filter: WHERE UserId IN (SELECT... [complicated subselect]...) ItemHistory hangs off Item in the publication filter articles. On the server, we have a table ItemHistoryExport, which has a foreign key to ItemHistory. ItemHistoryExport is not published. Entries in the Item and ItemHistory tables are never deleted, on the server or the client. However, the "ownership" of items (and hence their ItemHistories) MAY change, which causes them to be moved from one client subscription/partition to another from time to time. When we sync, we occasionally get the following error: A row delete at '48269404 - 4108383dbb11' could not be propagated to 'MyServer\MyInstance.MyDatabase'. This failure can be caused by a constraint violation. The DELETE statement conflicted with the REFERENCE constraint "FK_ItemHistoryExport_ItemHistory". The conflict occurred in database "MyDatabase", table "dbo.ItemHistoryExport", column 'ItemHistoryId'. Can anyone help us understand why this happens? There shouldn't ever be a delete happening on the server side.

    Read the article

  • Correct escaping of delimited identifiers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else that the code must do to sanitize identifiers (table, view, column) other than wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated. I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application, but the ORM must still eventually generate the SQL to send to SQL Server. All identifiers are quoted with double quotation marks:

        string QuoteName(string identifier)
        {
            return "\"" + identifier.Replace("\"", "\"\"") + "\"";
        }

    If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function:

        declare @identifier nvarchar(128);
        set @identifier = N'Client"; DROP TABLE [dbo].Client; --';

        declare @delimitedIdentifier nvarchar(258);
        set @delimitedIdentifier = QUOTENAME(@identifier, '"');

        print @delimitedIdentifier;
        -- "Client""; DROP TABLE [dbo].Client; --"

    I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine) and I also saw this stackoverflow question about sanitizing. If it had to call the QUOTENAME function just to quote the identifiers, that would be a lot of traffic to SQL Server that should not be needed. The ORM seems to be pretty well thought out with regard to SQL injection. It is in C# and predates the NHibernate port and Entity Framework etc. All user input is sent using ADO.NET SqlParameter objects; it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.
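
    A quick sanity check of the QuoteName method above against the same hostile identifier used in the T-SQL example (a minimal console sketch):

        using System;

        class Program
        {
            static string QuoteName(string identifier)
            {
                // Same rule QUOTENAME(..., '"') applies: wrap in double quotes and
                // double any embedded double quote.
                return "\"" + identifier.Replace("\"", "\"\"") + "\"";
            }

            static void Main()
            {
                string identifier = "Client\"; DROP TABLE [dbo].Client; --";
                Console.WriteLine(QuoteName(identifier));
                // prints: "Client""; DROP TABLE [dbo].Client; --"
            }
        }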

    Read the article

  • C# To VB.Net Conversion - array of class objects with initialisation

    - by mattryan
    Can someone help me please? I'm new to VB.NET and I'm trying to work through the NHibernate FirstSolution sample (written in C#) and I'm struggling to convert this one bit. I've tried numerous converters (Telerik, DeveloperFusion and several others) but none of the code produced will compile and I can't see why...

        private readonly Product[] _products = new[]
        {
            new Product {Name = "Melon", Category = "Fruits"},
            new Product {Name = "Pear", Category = "Fruits"},
            new Product {Name = "Milk", Category = "Beverages"},
            new Product {Name = "Coca Cola", Category = "Beverages"},
            new Product {Name = "Pepsi Cola", Category = "Beverages"},
        };

    DeveloperFusion gives:

        Private ReadOnly _products As Product() = New () {New Product(), New Product(), New Product(), New Product(), New Product()}

    Telerik gives:

        Private ReadOnly _products As Product() = New () {New Product() With { _
            .Name = "Melon", _
            .Category = "Fruits" _
        }, New Product() With { _
            .Name = "Pear", _
            .Category = "Fruits" _
        }, New Product() With { _
            .Name = "Milk", _
            .Category = "Beverages" _
        }, New Product() With { _
            .Name = "Coca Cola", _
            .Category = "Beverages" _
        }, New Product() With { _
            .Name = "Pepsi Cola", _
            .Category = "Beverages" _
        }}

    which seems the most useful, except it complains about "type expected" here: "New () {...". I've tried various things but just can't figure it out... What am I missing? Am I just being dumb? Or isn't there an equivalent? Cheers all

    Read the article

  • Synchronizing ASP.NET MVC action methods with ReaderWriterLockSlim

    - by James D
    Any obvious issues/problems/gotchas with synchronizing access (in an ASP.NET MVC blogging engine) to a shared object model (NHibernate, but it could be anything) at the Controller/Action level via ReaderWriterLockSlim? (Assume the object model is very large and expensive to build per request, so we need to share it among requests.) Here's how a typical "Read Post" action would look: enter the read lock, do some work, exit the read lock.

        public ActionResult ReadPost(int id)
        {
            // ReaderWriterLockSlim allows multiple concurrent reads; this method
            // only blocks in the unlikely event that some other client is currently
            // writing to the model, which would only happen if a comment were being
            // submitted or a new post were being saved.
            _lock.EnterReadLock();
            try
            {
                // Access the model, fetch the post with the specified id
                // Pseudocode, etc.
                Post p = TheObjectModel.GetPostByID(id);
                ActionResult ar = View(p);
                return ar;
            }
            finally
            {
                // Under all code paths, we must release the read lock
                _lock.ExitReadLock();
            }
        }

    Meanwhile, if a user submits a comment or an author authors a new post, they're going to need write access to the model, which is done roughly like so:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SaveComment(/* some posted data */)
        {
            // try/finally omitted for brevity
            _lock.EnterWriteLock();
            // Save the comment to the DB, update the model to include the comment, etc.
            _lock.ExitWriteLock();
        }

    Of course, this could also be done by tagging those action methods with some sort of "synchronized" attribute... but however you do it, my question is: is this a bad idea? P.S. ReaderWriterLockSlim is optimized for multiple concurrent reads and only blocks if the write lock is held. Since writes are so infrequent (thousands, tens of thousands or hundreds of thousands of reads for every one write), and since they're of such a short duration, the effect is that the model is synchronized, almost nobody ever blocks, and if they do, it's not for very long.
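
    The "synchronized attribute" idea mentioned above could be sketched as an MVC action filter (hypothetical, not part of ASP.NET MVC itself). The filter takes the shared lock before the action runs and releases it when the action has returned, mirroring the try/finally in ReadPost; it only holds for synchronous actions, because ReaderWriterLockSlim is owned per thread.

        using System.Threading;
        using System.Web.Mvc;

        // Hypothetical filter: takes the shared lock before the action runs and
        // releases it after the action method returns (even if it throws).
        public class SynchronizedModelAttribute : ActionFilterAttribute
        {
            private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();

            public bool Write { get; set; }   // set to true on actions that mutate the model

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (Write) Lock.EnterWriteLock();
                else Lock.EnterReadLock();
            }

            public override void OnActionExecuted(ActionExecutedContext filterContext)
            {
                if (Write) Lock.ExitWriteLock();
                else Lock.ExitReadLock();
            }
        }

    Read actions would then be marked [SynchronizedModel] and write actions [SynchronizedModel(Write = true)]; with asynchronous controllers this approach breaks down, since the lock is thread-affine.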

    Read the article

  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container; and to provide NHibernate ISessions (Units of work) over each HTTP Request, we're using Unity's ChildContainer feature to create a child container for each request, and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy. Right now, the application-level container itself is living in the HttpApplication, and the Request container lives in the HttpContext.Current. Obviously, neither exist during testing. The pain increases when we decided to use Service Location from our Domain layer, to "lazily" resolve dependencies from the container. So now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well. So we're wondering, what do the bright folks out there in the SO community do to tackle this predicament? How does one setup an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during test has the flexibility to build-up and tear-down the containers consistently, and have the ServiceLocation bits get to those precise containers. I hope the question is clear, thanks!

    Read the article

  • Proper way to validate model in ASP.NET MVC 2 and ViewModel approach

    - by adrin
    I am writing an ASP.NET MVC 2 application using NHibernate and repository pattern. I have an assembly that contains my model (business entities), moreover in my web project I want to use flattened objects (possibly with additional properties/logic) as ViewModels. These VMs contain UI-specific metadata (eg. DisplayAttribute used by Html.LabelFor() method). The problem is that I don't know how to implement validation so that I don't repeat myself throughout various tiers (specifically validation rules are written once in Model and propagated to ViewModel). I am using DataAnnotations on my ViewModel but this means no validation rules are imposed on the Model itself. One approach I am considering is deriving ViewModel objects from business entities adding new properties/overriding old ones, thus preserving validation metadata between the two however this is an ugly workaround. I have seen Automapper project which helps to map properties, but I am not sure if it can handle ASP.NET MVC 2 validation metadata properly. Is it difficult to use custom validation framework in asp.net mvc 2? Do you have any patterns that help to preserve DRY in regard to validation?

    Read the article

  • video calling (center)

    - by rrejc
    We are starting to develop a new application and I'm searching for information/tips/guides on application architecture. The application should:
    1. read the data from an external (USB) device
    2. send the data to the remote server (through the internet)
    3. receive the data from the remote server
    4. perform a video call to the calling (support) center
    5. receive a video call from the calling (support) center
    6. support touch screens
    7. In addition, some of the data should also be visible through a web page.
    So I was thinking about the following. On the server side:
    - use a database (probably MS SQL)
    - use an ORM (NHibernate) to map the data from the DB to the domain objects
    - create a layer with business logic in C#
    - create web (WCF) services (for the client application)
    - create an ASP.NET MVC application (for item 7) to enable data view through the browser
    On the client side I would use a WPF 4 application which will communicate with the external device and the WCF services on the server. So far so good. Now the problem begins. I have no idea how to create the video call (outgoing or incoming) part of the application. I believe there is no problem communicating with the microphone, speaker and camera from WPF/C#. But how to communicate with the call center? What protocol and encoding should be used? I think that I will need to create some kind of server which will:
    - have a list of operators in the calling center and track which operator is occupied and which operator is free
    - have a list of connected end users
    - receive incoming calls from end users and delegate the call to a free operator
    - delegate calls from the calling center to the end user
    Any info, link, anything on where to start would be much appreciated. Many thanks!

    Read the article

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi all, I want to achieve the following in the software I am building: 1. a customizable entity model, 2. a customizable UI, and 3. a customizable workflow. I have thought about an approach to achieve this and I want you to review it and make suggestions:
    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough that it can be invoked as part of the workflow.
    - Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on the business object(s) to invoke.
    - The workflow should obtain the results of the processing and then pass the business objects on to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).
    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time coding these frameworks. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)? I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar (not necessarily an ERP system) application developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.

    Read the article

  • yet another question about migrating to Java

    - by aloneguid
    Hi, there are plenty of similar questions, but maybe responses to this one will save a developer's life :) I want to migrate to Java. The reasons are very clear: all the .NET vacancies are client and Windows oriented (Silverlight developer, ASP.NET developer, WPF developer etc.) and none of them are of any interest to me. I have worked with .NET since its beginning, as our company decided to invest in .NET coming from a C++ stack with all its natural problems, so I just blindly followed and actually enjoyed it, as the products were mostly server oriented with mixed C++/C# code. Today I have the aforementioned problem: I can't find an inspiring job. I'd rather kill myself than start working on a Silverlight or WPF project. Searching Java vacancies shows promising results; however, they all require a huge Java-related technology stack and experience. The question is: is there any chance to find a job quickly and without a dramatic salary drop (I know that Java guys are usually better paid, so there must be some credit), and if not, how much time and effort does it take to migrate? (My .NET knowledge mostly includes server-oriented technologies like NHibernate, WCF, threading, sockets, ASP.NET web services, Enterprise Library, NInject, etc., and (still) some C++ leftovers.) Thanks!

    Read the article
