Search Results

Search found 5369 results on 215 pages for 'entity razer'.

Page 8/215

  • Entity Framework 4.0: Creating objects of correct type when using lazy loading

    - by DigiMortal
    In my posting about Entity Framework 4.0 and POCOs I introduced lazy loading in EF applications. EF uses proxy classes for lazy loading, which means new types come and go dynamically at runtime. These types are not available when we write our code, but we cannot forget that EF may expect us to use them. In this posting I give you a simple hint on how to use the correct types in your code.

    The background of lazy loading and proxy classes
    First, let me explain in short what a proxy class is. Correctly designed business classes have no knowledge of their birth and death – they don't know how they are created or how their data is persisted. That is the responsibility of the object runtime. When we use lazy loading we need slightly different classes that know how to load data for a property the first time code accesses it. As we cannot add this functionality to our business classes (they may be stored through more than one data access technology or by more than one Data Access Layer (DAL)), we create proxy classes that extend them. If we have a class called Product with a lazy-loaded property called Customer, then we need a proxy class – let's say ProductProxy – that has the same public signature as Product, so we can use it INSTEAD OF Product in our code. ProductProxy overrides the Customer property: if Customer is never asked for, it stays null, but when we do ask for it, the overridden property loads it from the database. This is how lazy loading works.

    Problem – two types for the same thing
    As lazy loading may introduce dynamically generated proxy types, we don't know in application code which type is returned; we cannot be sure we get Product and not ProductProxy. This leads us to the following question: how can we create a Product of the correct type if we don't know the correct type? In EF the solution is simple.

    Solution – use factory methods
    If you are using repositories and you are not using factories (IMHO they are pretty pointless when you have a mapper), you can add factory methods to your EF-based repositories. Take a look at this class.

    public class Event
    {
        public int ID { get; set; }
        public string Title { get; set; }
        public string Location { get; set; }
        public virtual Party Organizer { get; set; }
        public DateTime Date { get; set; }
    }

    We have a virtual member called Organizer. This property is virtual because we want to use lazy loading on this class, so Organizer is loaded only when we ask for it. EF provides a method called CreateObject<T>(). CreateObject<T>() is a member of the ObjectContext class and creates an object of the given type. At runtime a proxy type for Event is created for us automatically, and when we call CreateObject<T>() for Event it returns an object of the Event proxy type. The factory method for the events repository is as follows.

    public Event CreateEvent()
    {
        var evt = _context.CreateObject<Event>();
        return evt;
    }

    And we are done. Instead of creating factory classes we created factory methods that guarantee that created objects are of the correct type.

    Conclusion
    Although lazy loading introduces some new types we cannot use at design time, because they exist only at runtime, we can write code without worrying about the exact implementation type of an object. This holds true as long as we keep the code clean and make no decisions based on object type. EF 4.0 provides us with a very simple factory method that creates and returns objects of the correct type. All we had to do was add factory methods to our repositories.

    Read the article

  • Is it possible to auto update only selected properties on an existent entity object without touching the others

    - by LaserBeak
    Say I have a number of boolean properties on my entity class (public bool isActive, etc.) whose values will be manipulated by setting check boxes in a web application. I will ONLY be posting back the one changed name/value pair and the primary key at a time, say { isActive : true, NewsPageID: 34 }, and the default model binder will create a NewsPage object with only those two properties set. Now if I run the code below, it will not only update the values for the properties that have been set on the NewsPage object created by the model binder, but of course also attempt to null all the other values on the existing entity, because they are not set on the NewsPage object created by the model binder. Is it possible to somehow tell Entity Framework not to look at the properties that are null and not attempt to persist those changes back to the retrieved entity object and hence the database? Perhaps there is some code I can write that will use only the non-null values and their property names on the NewsPage object created by the model binder and update only those particular properties?

    [HttpPost]
    public PartialViewResult SaveNews(NewsPage Np)
    {
        Np.ModifyDate = DateTime.Now;
        _db.NewsPages.Attach(Np);
        _db.ObjectStateManager.ChangeObjectState(Np, System.Data.EntityState.Modified);
        _db.SaveChanges();
        _db.Dispose();
        return PartialView("MonthNewsData");
    }

    I can of course do something like the following, but I have a feeling it is not the optimal solution, especially considering that I have about six boolean properties to set.

    [HttpPost]
    public PartialViewResult SaveNews(int NewsPageID, bool? isActive, bool? isOnFrontPage)
    {
        if (isActive != null)
        {
            // Get the entity and update this property
        }
        if (isOnFrontPage != null)
        {
            // Get the entity and update this property
        }
    }
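
    One possible approach – a minimal sketch, assuming the EF4 ObjectContext API and the same hypothetical _db context and property names as above – is to attach the stub and then explicitly mark only the posted properties as modified, so the generated UPDATE leaves everything else alone:

    [HttpPost]
    public PartialViewResult SaveNews(NewsPage Np)
    {
        Np.ModifyDate = DateTime.Now;
        _db.NewsPages.Attach(Np);

        // Mark only the fields that were actually posted back as modified;
        // properties that are not marked are left out of the UPDATE entirely.
        var entry = _db.ObjectStateManager.GetObjectStateEntry(Np);
        entry.SetModifiedProperty("isActive");
        entry.SetModifiedProperty("ModifyDate");

        _db.SaveChanges();
        return PartialView("MonthNewsData");
    }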

    Read the article

  • How to use batch rendering with an entity component system?

    - by Kiril
    I have an entity component system and a 2D rendering engine. Because I have a lot of repeating sprites (the entities are non-animated and the background is tile based) I would really like to use batch rendering to reduce calls to the drawing routine. What would be the best way to integrate this with an entity system? I thought about creating and populating the sprite batch on every frame update, but that will probably be very slow. A better way would be to add a reference to an entity's quad to the sprite batch at initialization, but that would mean that either the entity factory has to be aware of the rendering system, or the sprite batch has to be a component of some cache entity. One case violates encapsulation pretty heavily, while the other forces a non-game-object entity into the entity system, which I am not sure I like a lot. As for the engine, I am using Love2D (Love2D website) and FEZ (FEZ website) as the entity system, so everything is in Lua. I am more interested in a generic pattern for how to implement this properly than in a language/library specific solution. Thanks in advance!
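
    One generic pattern – sketched here in C# with hypothetical SpriteComponent and IRenderer types, since the idea itself is language-agnostic and not tied to Love2D or FEZ – is to let the render system own the batches and rebuild them each frame by grouping its registered sprites by texture. Regrouping a list of references per frame is normally far cheaper than the draw calls it saves, and neither the factory nor a special cache entity needs to know about batching:

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical minimal types for the sketch.
    class SpriteComponent { public int TextureId; public float[] Quad; }

    interface IRenderer
    {
        void BeginBatch(int textureId);
        void AddQuad(float[] quad);
        void EndBatch(); // issues one draw call for the whole batch
    }

    class RenderSystem
    {
        private readonly List<SpriteComponent> sprites = new List<SpriteComponent>();

        // Called once when an entity with a sprite component enters the system.
        public void Register(SpriteComponent sprite) { sprites.Add(sprite); }

        public void Draw(IRenderer renderer)
        {
            // One batch (and therefore one draw call) per texture/atlas.
            foreach (var group in sprites.GroupBy(s => s.TextureId))
            {
                renderer.BeginBatch(group.Key);
                foreach (var sprite in group)
                    renderer.AddQuad(sprite.Quad);
                renderer.EndBatch();
            }
        }
    }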

    Read the article

  • Executing logic before save or validation with EF Code-First Models

    - by Ryan Norbauer
    I'm still getting accustomed to EF Code First, having spent years working with the Ruby ORM, ActiveRecord. ActiveRecord has all sorts of callbacks like before_validation and before_save, where it is possible to modify the object before it is sent off to the data layer. I am wondering if there is an equivalent technique in EF Code First object modeling. I know how to set object members at the time of instantiation, of course (to set default values and so forth), but sometimes you need to intervene at different moments in the object lifecycle. To use a slightly contrived example, say I have a join table linking Authors and Plays, represented with a corresponding Authoring object:

    public class Authoring
    {
        public int ID { get; set; }
        [Required]
        public int Position { get; set; }
        [Required]
        public virtual Play Play { get; set; }
        [Required]
        public virtual Author Author { get; set; }
    }

    where Position represents a zero-indexed ordering of the Authors associated with a given Play. (You might have a single "South Pacific" Play with two authors: a "Rodgers" author with Position 0 and a "Hammerstein" author with Position 1.) Let's say I wanted to create a method that, before saving an Authoring record, checks whether there are any existing authors for the Play it is associated with. If not, it sets the Position to 0. If there are, it finds the highest Position value associated with that Play and increments it by one. Where would I implement such logic within an EF Code First model layer? And, in other cases, what if I wanted to massage data in code before it is checked for validation errors? Basically, I'm looking for an equivalent to the Rails lifecycle hooks mentioned above, or some way to fake it at least. :)
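
    One common workaround – a minimal sketch, assuming EF 4.1 Code First, a hypothetical PlaysContext with an Authorings set, and that Play exposes an ID property – is to override SaveChanges on the DbContext and inspect the change tracker there, which gives you a single "before save" hook for the whole context:

    using System.Data;           // EntityState
    using System.Data.Entity;    // DbContext, DbSet
    using System.Linq;

    public class PlaysContext : DbContext
    {
        public DbSet<Authoring> Authorings { get; set; }

        public override int SaveChanges()
        {
            // Runs before anything is written to the database: a poor man's before_save.
            foreach (var entry in ChangeTracker.Entries<Authoring>()
                                               .Where(e => e.State == EntityState.Added))
            {
                var authoring = entry.Entity;

                // Query the database for the current highest Position for this Play.
                var highest = Authorings.Where(a => a.Play.ID == authoring.Play.ID)
                                        .Select(a => (int?)a.Position)
                                        .Max();

                authoring.Position = highest.HasValue ? highest.Value + 1 : 0;
            }

            return base.SaveChanges();
        }
    }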

    Read the article

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect whether you have entered a special region. Let's assume there is a map so big that simply checking every region would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think that larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through a number of trigger zones to see if the player is inside your region - for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I can imagine is to split the world up into multiple parts, register the event not on the movement itself but on all the parts that are covered by your area, and then only check the areas registered in the player's current part. And another thing I would like to know: how could you detect that someone must have entered a trigger even though you never saw him directly inside it, because his client only sent a move packet shortly before entering and another after leaving the trigger area? Drawing a line and calculating all colliding parts seems rather CPU-intensive if you have to do it on every move.
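
    The "split the world into parts" idea is essentially a uniform grid (spatial hash). A minimal sketch, with hypothetical Trigger and grid types: each trigger registers itself in the cells its bounding box overlaps, and a move only tests the triggers registered in the player's current cell:

    using System;
    using System.Collections.Generic;

    class Trigger { public string Name; public float MinX, MinY, MaxX, MaxY; }

    class TriggerGrid
    {
        private const float CellSize = 64f;
        private readonly Dictionary<(int, int), List<Trigger>> cells =
            new Dictionary<(int, int), List<Trigger>>();

        private static (int, int) CellOf(float x, float y) =>
            ((int)Math.Floor(x / CellSize), (int)Math.Floor(y / CellSize));

        public void Add(Trigger t)
        {
            // Register the trigger in every cell its bounding box touches.
            var (minCx, minCy) = CellOf(t.MinX, t.MinY);
            var (maxCx, maxCy) = CellOf(t.MaxX, t.MaxY);
            for (int cx = minCx; cx <= maxCx; cx++)
                for (int cy = minCy; cy <= maxCy; cy++)
                {
                    if (!cells.TryGetValue((cx, cy), out var list))
                        cells[(cx, cy)] = list = new List<Trigger>();
                    list.Add(t);
                }
        }

        // Called per move packet: only the triggers in the player's cell are tested.
        public IEnumerable<Trigger> TriggersAt(float x, float y)
        {
            if (cells.TryGetValue(CellOf(x, y), out var list))
                foreach (var t in list)
                    if (x >= t.MinX && x <= t.MaxX && y >= t.MinY && y <= t.MaxY)
                        yield return t;
        }
    }

    For the second question (a player who crosses a trigger between two packets), the same grid helps: walk the cells along the segment from the previous position to the new one and do a segment-vs-box test only against the triggers registered in those cells, rather than against every trigger in the world.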

    Read the article

  • Loading Entities Dynamically with Entity Framework

    - by Ricardo Peres
    Sometimes we may be faced with the need to load entities dynamically, that is, knowing their Type and the value(s) for the property(ies) representing the primary key. One way to achieve this is by using the following extension methods for ObjectContext (which can be obtained from a DbContext, of course):

    public static class ObjectContextExtensions
    {
        public static Object Load(this ObjectContext ctx, Type type, params Object[] ids)
        {
            Object p = null;

            EntityType ospaceType = ctx.MetadataWorkspace.GetItems<EntityType>(DataSpace.OSpace).SingleOrDefault(x => x.FullName == type.FullName);

            List<String> idProperties = ospaceType.KeyMembers.Select(k => k.Name).ToList();

            List<EntityKeyMember> members = new List<EntityKeyMember>();

            EntitySetBase collection = ctx.MetadataWorkspace.GetEntityContainer(ctx.DefaultContainerName, DataSpace.CSpace).BaseEntitySets.Where(x => x.ElementType.FullName == type.FullName).Single();

            for (Int32 i = 0; i < ids.Length; ++i)
            {
                members.Add(new EntityKeyMember(idProperties[i], ids[i]));
            }

            EntityKey key = new EntityKey(String.Concat(ctx.DefaultContainerName, ".", collection.Name), members);

            if (ctx.TryGetObjectByKey(key, out p) == true)
            {
                return (p);
            }

            return (p);
        }

        public static T Load<T>(this ObjectContext ctx, params Object[] ids)
        {
            return ((T)Load(ctx, typeof(T), ids));
        }
    }

    This will work with both single-property and multiple-property primary keys, but you will have to supply each of the corresponding values in the appropriate order. Hope you find this useful!
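
    For example, usage would look like this (Product and Order are hypothetical entities invented for the illustration, the first with a single int key and the second with a composite key):

    // Single-column key
    Product product = ctx.Load<Product>(42);

    // Composite key: values must be supplied in key-member order
    Order order = ctx.Load<Order>(2024, "ABC-1");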

    Read the article

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components. I can add and retrieve components.

    class GameObject : public Object
    {
    public:
        GameObject(std::string objectName);
        ~GameObject(void);

        Component * AddComponent(std::string name);
        Component * AddComponent(Component componentType);
        Component * GetComponent(std::string TypeName);
        Component * GetComponent(<Component Type Here>);

    private:
        std::map<std::string, Component*> m_components;
    };

    I will have a collection of components that inherit from the base Component class. So if I have a MeshRenderer component, I would like to do the following:

    GameObject * warship = new GameObject("myLovelyWarship");
    MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer);

    or possibly

    MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer");

    I could make a Component Factory like this:

    class ComponentFactory
    {
    public:
        static Component * CreateComponent(const std::string &compTyp)
        {
            if (compTyp == "MeshRenderer") return new MeshRenderer;
            if (compTyp == "Collider") return new Collider;
            return NULL;
        }
    };

    However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component, although it is an option. Is there a more proper way to add and retrieve these components? Are standard templates another solution?

    Read the article

  • A Simple Entity Tagger

    - by Elton Stoneman
    In the REST world, ETags are your gateway to performance boosts by letting clients cache responses. In the non-REST world, you may also want to add an ETag to an entity definition inside a traditional service contract – think of a scenario where a consumer persists its own representation of your entity and wants to keep it in sync. Rather than load every entity by ID and check for changes, the consumer can send in a set of linked IDs and ETags, and you can return only the entities where the current ETag is different from the consumer's version. If your entity is a projection from various sources, you may not have a persistent ETag, so you need an efficient way to generate an ETag which is deterministic, so that an entity with the same state always generates the same ETag. I have an implementation for a generic ETag generator on GitHub here: EntityTagger code sample. The essence is simple: we get the entity, serialize it and build a hash from the serialized value. Any changes to either the state or the structure of the entity will result in a different hash. To use it, just call SetETag, passing your populated object and a Func<> which acts as an accessor to the ETag property:

    EntityTagger.SetETag(user, x => x.ETag);

    The implementation comes in at about 80 lines of code, which is all pretty straightforward:

    var eTagProperty = AsPropertyInfo(eTagPropertyAccessor);
    var originalETag = eTagProperty.GetValue(entity, null);
    try
    {
        ResetETag(entity, eTagPropertyAccessor);
        string json;
        var serializer = new DataContractJsonSerializer(entity.GetType());
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, entity);
            json = Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int)stream.Length);
        }
        var guid = GetDeterministicGuid(json);
        eTagProperty.SetValue(entity, guid.ToString(), null);
        //...

    There are a couple of helper methods to check if the object has changed since the ETag value was last set, and to reset the ETag. This implementation uses JSON to do the serializing rather than XML. Benefit: it should be marginally more efficient, as you're hashing a much smaller serialized string. Downside: JSON doesn't include namespaces or class names at the root level, so if you have two classes with the exact same structure but different names, then instances which have the same content will have the same ETag. You may want that behaviour, but change to the XML DataContractSerializer if you think it will be an issue. If you can persist the ETag somewhere, it will save you the server processing to load up the entity, but that only applies to scenarios where you can reliably invalidate your ETag (e.g. if you control all the entry points where entity contents can be updated, then you can calculate and persist the new ETag with each update).

    Read the article

  • How to extend abstract Entity class in RIA Services

    - by Calanus
    I want to add a bool variable and property to the base Entity class in my RIA Services project so that it is available throughout all the entity objects, but I seem unable to work out how to do this. I know that adding properties to the actual entities themselves is easy using .shared.cs and partial classes, but adding such properties to the Entity class using similar methods doesn't work. For example, the following code doesn't work:

    namespace System.ServiceModel.DomainServices.Client
    {
        public abstract partial class Entity
        {
            private bool auditRequired;

            public bool AuditRequired
            {
                get { return auditRequired; }
                set { auditRequired = value; }
            }
        }
    }

    All that happens is that the existing Entity class gets completely overridden rather than extended. How do I extend the base Entity class so that the functionality is available throughout all derived entity classes?

    Read the article

  • How to update correctly an Entity Model after changes of database structure?

    - by Slauma
    I've made some changes in the table structure and especially the relationships between tables in my SQL Server database. Now I want to update my Entity model based on this new database structure. Right clicking on the edmx file I find the option "Update model from database". But when I do this I get kind of a 50% update: The new columns appear in the Entity classes but I am confused about a lot of navigation properties which are still there in the model although the corresponding foreign key relationships do not exist anymore in the database. Am I doing something wrong? Or is there another option to update the model including deletion of navigation properties? Or do I have to delete those navigation properties manually in the model files? I am using Entity Framework Version 1 (VS 2008 SP1). Thanks for help in advance!

    Read the article

  • How can I hide a database column in the entity model?

    - by Nick Butler
    Hi. I'm using the Entity Framework 4 and have a question: I have a password column in my database that I want to manage using custom SQL, so I don't want the model to know anything about it. I've tried deleting the property in the Mapping Details window, but then I got a compilation error:

    Error 3023: Problem in mapping fragments starting at line 1660: Column User.Password in table User must be mapped: It has no default value and is not nullable.

    So I made the column nullable in the database and updated the model. Now I get this error:

    Error 3004: Problem in mapping fragments starting at line 1660: No mapping specified for properties User.Password, User.Salt in Set Users. An Entity with Key (PK) will not round-trip when: Entity is type [UserDirectoryModel.User]

    Any ideas please? Thanks, Nick

    Read the article

  • Getting macro keys from a razer blackwidow to work on linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate that has additional keys meant for macros, which are set using a tool that's installed on Windows. I'm assuming these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.

    Things I have tried so far:
    - showkey from the built-in kbd package (in a separate VT) - macro keys not detected
    - xev - macro keys not detected
    - lsusb and evdev output
    - this AHK script's output suggests the M keys are not outputting standard scancodes

    Things I need to try:
    - SnoopyPro + reverse engineering (oh dear)
    - Wireshark - preliminary futzing around seems to indicate no scancodes are emitted when what I think is the keyboard is monitored and keys are pressed. This might indicate the additional keys are a separate device or need to be initialised somehow. I need to cross-reference that with lsusb output from Linux in three scenarios: standalone, passed through to a Windows VM without the drivers installed, and the same with them. lsusb only detects one device on a standalone Linux install.
    - It might be useful to check whether the mice use the same Razer Synapse driver, since that would mean some variation of razercfg might work (not detected - it only seems to work for mice).

    Things I have worked out:
    - On a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device, and said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse.
    - The mouse driver is seen in Linux under evdev and lsusb as well.
    - It is apparently a single device under OS X, though I have yet to try the lsusb equivalent there.
    - The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver. This probably indicates there is some initialisation sequence sent to the keyboard on activation.
    - They are, in fact, fancypants joojoo keys.

    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, that's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware, so I would be willing to test anything that makes sense on a Linux system if I have detailed instructions - this should open up the question to people who have Linux skills but no access to this keyboard.

    The minimum end result I require: I need these keys detected and usable in any fashion on any of the current graphical mainstream Ubuntu variants.

    Read the article

  • Entity Framework 4.0: Why Would One Use the Code Generated EntityObjects Over POCO Objects?

    - by senfo
    Aside from faster development time (Visual Studio 2010 beta 2 has no T4 templates for building POCO entity objects that I'm aware of), are there any advantages to using the traditional EntityObject entities that Entity Framework creates by default? If Microsoft delivers a T4 template for building POCO objects, I'm trying to figure out why anybody would want to use the traditional method.

    Read the article

  • Where I can find SQL Generated by Entity framework?

    - by Jalpesh P. Vadgama
    A few days back I was optimizing performance with Entity Framework and LINQ queries, using LINQPad to look at the SQL generated by the LINQ or Entity Framework queries. At some point I asked myself the question: how can I find the SQL statement generated by Entity Framework? After some struggle I managed to find a way, so I thought it would be a great idea to write a post about it and share my knowledge. So in this post I will explain how to find the SQL statements generated by Entity Framework queries. Read More on dotnetjalps.com
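
    For reference (this is not from the post itself, which is only excerpted here): with the EF4 ObjectContext API, one common way to see the generated SQL is to cast the LINQ query to ObjectQuery and call ToTraceString(). A minimal sketch, assuming a hypothetical MyEntities context with a Customers set:

    using (var context = new MyEntities())
    {
        IQueryable<Customer> query = context.Customers.Where(c => c.City == "London");

        // ToTraceString() returns the store command EF would execute, without running it.
        var objectQuery = (System.Data.Objects.ObjectQuery)query;
        Console.WriteLine(objectQuery.ToTraceString());
    }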

    Read the article

  • SceneManagers as systems in entity system or as a core class used by a system?

    - by Hatoru Hansou
    It seems entity systems are really popular here. Links posted by other users convinced me of the power of such a system and I decided to try it. (Well, that and my original code getting messy.) In my project, I originally had a SceneManager class that maintained the logic and structures needed to organize the scene (a QuadTree; it is a 2D game). Before rendering I call selectRect(), pass the x, y of the camera and the width and height of the screen, and obtain a minimized list containing only visible entities ordered from back to front. Now with systems, in my first attempt my Render system required all the entities it should handle to be added to it. This may sound like the correct approach, but I realized it was not efficient. Trying to optimize it, I reused the SceneManager class internally in the Renderer system, but then I realized I needed methods such as selectRect() in other systems too (AI principally) and had to make the SceneManager accessible globally again. Currently I have converted SceneManager to a system and ended up with the following interface (only relevant methods):

    /// Base system interface
    class System
    {
    public:
        virtual void tick(double delta_time) = 0;
        // (methods to add and remove entities)
    };

    typedef std::vector<Entity*> EntitiesVector;

    /// Specialized system interface to allow querying the scene
    class SceneManager : public System
    {
    public:
        virtual EntitiesVector& cull() = 0;
        /// Sets the entity to be used as the camera and replaces previous ones.
        virtual void setCamera(Entity* entity) = 0;
    };

    class SceneRenderer // Not a system
    {
        virtual void render(EntitiesVector& entities) = 0;
    };

    Also, I could not work out how to convert renderers to systems. My game separates logic updates from screen updates; my main class has a tick() method and a render() method that may not be called the same number of times. In my first attempt renderers were systems, but they were kept in a separate manager, updated only in render() and not in tick() like all other systems. I realized that was silly and simply created a SceneRenderer interface and gave up on converting them to systems, but that may be for another question. Then... something does not feel right, does it? If I understood correctly, a system should not depend on another or even count on another system exposing a specific interface. Each system should care only about its entities, or nodes (as an optimization, so they have direct references to the relevant components without having to constantly call the component() or getComponent() method of the entity).

    Read the article

  • What is the most effective order to learn SQL Server, LINQ, and Entity Framework?

    - by user1525474
    I am trying to get some advice on what order I should learn about SQL Server, LINQ, and Entity Framework to be able to better work with ASP.NET Webforms and MVC. From what I've been able to learn so far, many recommend learning LINQ or Entity Framework before learning SQL Server. It also appears that many companies are looking for people with knowledge in LINQ-to-SQL and Entity Framework without mentioning SQL Server. However, my understanding is that LINQ-to-SQL and Entity Framework translate code into SQL Server queries, making this a poor approach. Is there a correct or best order in which to learn these technologies?

    Read the article

  • Do entity collections and object sets implement IQueryable<T>?

    - by Chevex
    I am using Entity Framework for the first time and noticed that the context returns entity collections.

    DBEntities db = new DBEntities();
    db.Users; // Users is an ObjectSet<User>
    User user = db.Users.Where(x => x.Username == "test").First(); // Is this executed in SQL or in memory?
    user.Posts; // Posts is an EntityCollection<Post>
    Post post = user.Posts.Where(x => x.PostID == "123").First(); // Is this executed in SQL or in memory?

    Do both ObjectSet and EntityCollection implement IQueryable? I am hoping they do, so that I know the queries are executed at the data source and not in memory.

    EDIT: So apparently EntityCollection does not, while ObjectSet does. Does that mean I would be better off using this code?

    DBEntities db = new DBEntities();
    User user = db.Users.Where(x => x.Username == "test").First(); // Is this executed in SQL or in memory?
    Post post = db.Posts.Where(x => (x.PostID == "123") && (x.Username == user.Username)).First(); // Querying the object set instead of the entity collection.

    Also, what is the difference between ObjectSet and EntityCollection? Shouldn't they be the same? Thanks in advance!
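
    One option worth knowing about here (a sketch reusing the user variable from the question; whether it fits the exact model is an assumption on my part): EF4's EntityCollection<T> exposes CreateSourceQuery(), which turns the navigation collection back into an ObjectQuery<T>, so the filter runs in the database instead of loading the whole collection into memory first:

    // Builds a query against the data source rather than filtering the loaded collection.
    Post post = user.Posts
                    .CreateSourceQuery()
                    .Where(x => x.PostID == "123")
                    .First();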

    Read the article

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong that we can look for? Thanks!

    We have an EF4 model which is exposed over WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display/edit, including a number of referenced child objects. For one particular entity we have to .Include() 31 tables/sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this?

    We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions, but our customer is nervous that the first usage shouldn't be slow either. Similarly, if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also, I can't see a way to precompile this ahead of time and/or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache, so it's the cache we really care about. (Writing this out, it occurs to me that I could kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?)

    However, even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on. I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer, so I don't think we can pre-compile our search queries. This is less serious because the search results use fewer tables, so it's only a 3-4 second compile rather than 12-15, but the customer thinks that still won't really be acceptable to end users. So we really need to reduce the query compilation time somehow. Any ideas?

    Profiling points to ELinqQueryState.GetExecutionPlan as the place to start, and I have attempted to step into that, but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5, so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it, but that didn't help. I have tried the EFProf utility advertised here but it doesn't look like it would help with this; my large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads, rather than all 32 tables at once, likely to help?

    Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables, so I'm optimistic there's some scope for improvement! Thanks for any suggestions!
    We're running against SQL Server 2008, in case that matters, and XP / 7 / Server 2008 R2 using RTM VS2010.
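
    For reference, the shape of a pre-compiled query looks roughly like this - a sketch assuming a hypothetical MyEntities context and Order entity rather than the poster's actual model. The delegate is built once (for example in the background from Application_Start, as mooted above) and reused, so only the first invocation pays the compilation cost:

    using System;
    using System.Data.Objects;
    using System.Linq;

    static class OrderQueries
    {
        // Compiled once; subsequent invocations reuse the cached execution plan.
        public static readonly Func<MyEntities, int, IQueryable<Order>> ById =
            CompiledQuery.Compile((MyEntities ctx, int id) =>
                ctx.Orders.Include("Customer").Where(o => o.Id == id));
    }

    // Usage:
    // using (var ctx = new MyEntities())
    // {
    //     var order = OrderQueries.ById(ctx, 42).FirstOrDefault();
    // }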

    Read the article

  • EntityFramework .net 4 Update entity with a simple method

    - by Dennis Larsen
    I was looking at this SO question: http://stackoverflow.com/questions/1168215/ado-net-entity-framework-update-only-certian-properties-on-a-detached-entity. This was a big help for me. I know now that I need to attach an entity before making my changes to it. But how do I do this? I have an MVC website with a Customer update page with fields: ID, Name, Address, etc. MVC binds the posted values into a Customer entity. How do I do the following: update my entity and save the changes to it, and catch the exception if changes have been made to the entity since I loaded it?
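
    A minimal sketch of one way to do both, assuming an EF4 ObjectContext called MyEntities with a Customers set (the names are illustrative, not from the question): load the current row so it is attached, copy the posted scalar values onto it with ApplyCurrentValues, and catch OptimisticConcurrencyException on save. The exception part relies on the entity having a concurrency token (e.g. a timestamp column) configured:

    public void UpdateCustomer(Customer posted)
    {
        using (var db = new MyEntities())
        {
            // Load the tracked copy, then overwrite its scalar properties
            // with the values posted from the edit page (matched by key).
            db.Customers.First(c => c.ID == posted.ID);
            db.ApplyCurrentValues("Customers", posted);

            try
            {
                db.SaveChanges();
            }
            catch (System.Data.OptimisticConcurrencyException)
            {
                // Someone else changed the row since it was loaded; decide whether
                // to refresh and retry or report the conflict to the user.
                throw;
            }
        }
    }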

    Read the article

  • Entity update from ASP.NET MVC page with post to Edit action

    - by mare
    I have a form that posts back to /Edit/, and my action in the controller looks like this:

    // POST: /Client/Edit
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(Client client)

    My form includes a subset of Client entity properties, and I also include a hidden field that holds the ID of the client. The client entity itself is provided via the GET Edit action. Now I want to update the entity, but so far I've only been trying to do so without first loading it from the DB, because the client object that comes into POST Edit has everything it needs. I want to update just those properties on the entity in the data store. So far I've tried Context.Refresh() and AttachTo(), but with no success - I'm getting all sorts of exceptions. I inspected the entity object as it comes from the web form and it looks fine (it has an ID and the entered data), mapped just fine. If I just call Context.SaveChanges() then it inserts a new row.
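
    One approach that avoids attaching a detached object altogether - a sketch, assuming a hypothetical context exposing Clients and that the posted field names match the entity's property names - is to load the entity in the POST action and let MVC copy only the posted fields onto it with TryUpdateModel:

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Edit(int id, FormCollection form)
    {
        // Load the tracked entity, then bind only the whitelisted posted fields onto it.
        var client = context.Clients.First(c => c.ID == id);
        TryUpdateModel(client, new[] { "Name", "Address" });

        context.SaveChanges();
        return RedirectToAction("Index");
    }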

    Read the article

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a Model First approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 does not appear in the list of available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer. It is only in the Generate Database Wizard that the 4.0 option doesn't appear. BTW, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

    Read the article

  • In an Entity-Component-System Engine, How do I deal with groups of dependent entities?

    - by John Daniels
    After going over a few game design patterns, I have settled on Entity-Component-System (ES System) for my game engine. I've been reading articles (mainly T=Machine) and reviewing some source code, and I think I have enough to get started. There is just one basic idea I am struggling with: how do I deal with groups of entities that are dependent on each other?

    Let me use an example. Assume I am making a standard overhead shooter (think Jamestown) and I want to construct a "boss entity" with multiple distinct but connected parts. The breakdown might look something like this:

    - Ship body: Movement, Rendering
    - Cannon: Position (locked relative to the Ship body), Tracking/Fire at hero, Taking damage until disabled
    - Core: Position (locked relative to the Ship body), Tracking/Fire at hero, Taking damage until disabled, Disabling (er... destroying) all other entities in the ship group

    My goal is something that can be identified (and manipulated) as a distinct game element without having to rewrite subsystems from the ground up every time I want to build a new aggregate element. How do I implement this kind of design in an ES System?

    - Do I implement some kind of parent-child entity relationship (entities can have children)? This seems to contradict the methodology that entities are just empty containers, and makes it feel more like OOP.
    - Do I implement them as separate entities, with some kind of connecting component (BossComponent) and a related system (BossSubSystem)? I can't help but think this will be hard to implement, since how components communicate seems to be a big bear trap.
    - Do I implement them as one entity, with a collection of components (ShipComponent, CannonComponents, CoreComponent)? This one seems to veer away from the ES System intent (components here seem too much like heavyweight entities), but I'm new to this so I figured I would put it out there.
    - Do I implement them as something else I haven't mentioned?

    I know that this can be implemented very easily in OOP, but my choice of ES over OOP is one that I will stick with. If I need to break with pure ES theory to implement this design I will (it's not as if I haven't had to compromise pure design before), but I would prefer to do that for performance reasons rather than start with a bad design. For extra credit, think of the same design, but where each of the "boss entities" is actually connected to a larger "BigBoss entity" made of a main body, a main core and three "boss entities". That would let me see a solution for at least three levels (grandparent-parent-child)... which should be more than enough for me. Links to articles or example code would be appreciated. Thanks for your time.
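
    One common way to express the "locked relative to the ship body" relationship without turning entities into containers - a sketch with hypothetical component and storage types, not a canonical answer - is a small attachment component that stores the parent's entity id plus an offset, and a system that copies the parent's transform each tick. Destroying the parent can then be handled by a similar system that removes or disables orphaned children, which also extends naturally to a grandparent-parent-child chain:

    using System.Collections.Generic;

    // Hypothetical plain-data components.
    class Position   { public float X, Y; }
    class AttachedTo { public int ParentId; public float OffsetX, OffsetY; }

    class AttachmentSystem
    {
        // entity id -> component (stand-in for whatever storage the engine actually uses)
        private readonly Dictionary<int, Position> positions;
        private readonly Dictionary<int, AttachedTo> attachments;

        public AttachmentSystem(Dictionary<int, Position> positions,
                                Dictionary<int, AttachedTo> attachments)
        {
            this.positions = positions;
            this.attachments = attachments;
        }

        public void Tick()
        {
            // Child parts (cannon, core) follow the ship body every update.
            foreach (var pair in attachments)
            {
                if (positions.TryGetValue(pair.Value.ParentId, out var parentPos) &&
                    positions.TryGetValue(pair.Key, out var childPos))
                {
                    childPos.X = parentPos.X + pair.Value.OffsetX;
                    childPos.Y = parentPos.Y + pair.Value.OffsetY;
                }
            }
        }
    }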

    Read the article

  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to two thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives such an object, it performs some business logic on the data and then turns around and sticks it in its own database, which mirrors the database on the client machines. The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. Now, we are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this is reducing the data footprint. However, the amount of data the clients are using seems ridiculously large, given the amount of information that is actually being transmitted. It seems to me that Entity Framework itself may be to blame. I suspect that the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the 'new' inserts. Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?

    Read the article

  • Dump Linq-To-Sql now that Entity Framework 4.0 has been released?

    - by DanM
    The relative simplicity of Linq-To-Sql as well as all the criticism leveled at version 1 of Entity Framework (especially, the vote of no confidence) convinced me to go with Linq-To-Sql "for the time being". Now that EF 4.0 is out, I wonder if it's time to start migrating over to it. Questions: What are the pros and cons of EF 4.0 relative to Linq-To-Sql? Is EF 4.0 finally ready for prime time? Is now the time to switch over?

    Read the article
