Search Results

  • should I use Entity Framework instead of raw ADO.NET

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go, since it is the latest and greatest. After making a small EF model and only two CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables), I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because I have 100% control of the generated SQL. But it seems that in EF so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as a criteria object, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that) where I can write client-side LINQ which moves to the server before translation into actual SQL; in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?
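    For illustration (my sketch, not code from the question), this is the kind of hand-tuned ADO.NET access the poster wants to keep control over; the table, columns, and hint are hypothetical:

        // using System.Data.SqlClient;
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id, Name FROM Customers WITH (NOLOCK) WHERE Region = @region", con))
        {
            cmd.Parameters.AddWithValue("@region", region);
            con.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // materialize business objects by hand;
                    // every byte of the SQL stays under your control
                }
            }
        }

    With EF 1.0, by contrast, the SQL is generated for you, which is exactly the trade-off the question is about.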



  • C#.NET Forms programming - Modal and Non-Modal forms problem

    - by Povilas
    Hello, I have a problem with form modality under C#.NET. Let's say I have main form #0 (see the image below). This form is the main application form, where the user can perform various operations. However, from time to time there is a need to open an additional non-modal form to perform supporting tasks. Let's say this is form #1 in the image. On this form #1, a few additional modal forms might be opened on top of each other (form #2 in the image), and at the end there is a progress dialog showing a long operation's progress and status, which might take from a few minutes up to a few hours. The problem is that the main form #0 is not responsive until you close all the modal forms (#2 in the image). I need the main form #0 to remain operational in this situation. However, if you open a non-modal form from form #2, you can operate both the modal form #2 and the newly created non-modal form. I need the same behavior between the main form #0 and form #1 with all its child forms. Is it possible? Or am I doing something wrong? Maybe there is some kind of workaround; I really would not like to change all the ShowDialog calls to Show... Thanks, Povilas
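    One workaround worth knowing about (a sketch of a common technique, not from the original post): run form #1 and its modal children on their own UI thread with a second message loop, so their modality never blocks the main form. Form1 here stands in for the non-modal form #1:

        var uiThread = new System.Threading.Thread(() =>
        {
            // Application.Run starts a second message loop; modal dialogs
            // shown from this form block only this loop, not main form #0's.
            System.Windows.Forms.Application.Run(new Form1());
        });
        uiThread.SetApartmentState(System.Threading.ApartmentState.STA);
        uiThread.Start();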


  • What are good design practices when working with Entity Framework

    - by AD
    This applies mostly to an ASP.NET application where the data is not accessed via SOA, meaning that you get access to the objects loaded from the framework rather than transfer objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0, shipped with Visual Studio 2008 SP1.

    Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or the other solutions out there. Whatever the reasons, there are people out there (including me) working with EF, and life is not bad.

    EF and inheritance. The first big subject is inheritance. EF does support mapping for inherited classes, persisted in two ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I have no experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you try to run queries that include one or more objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to be parsed by EF, and takes a long time to execute as well. This is a real show-stopper; enough that EF should probably not be used with inheritance, or with as little of it as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then, when you try to return some associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. OK, this is bad. The EF concept is to let you create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables; it completely fails at this. So, recommendations? Avoid inheritance if you can; the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it, and there are ways to implement mechanisms similar to inheritance.

    Bypassing inheritance with interfaces. The first thing to know when trying to get some kind of inheritance going with EF is that you cannot give an EF-modeled class a base class of your own. Don't even try it; it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define associations between EF entities where you don't know at design time what the type of the entity will be:

        public enum EntityTypes { Unknown = -1, Dog = 0, Cat }

        public interface IEntity
        {
            int EntityID { get; }
            string Name { get; }
            EntityTypes EntityType { get; }
        }

        public partial class Dog : IEntity
        {
            // implement EntityID and Name, which could actually be fields from your EF model
            public EntityTypes EntityType { get { return EntityTypes.Dog; } }
        }

    Using this IEntity, you can then work with undefined associations in other classes:

        // let's take a class that you defined in your model;
        // that class has a mapping to the columns: PetID, PetType
        public partial class Person
        {
            public IEntity GetPet()
            {
                return IEntityController.Get(PetID, PetType);
            }
        }

    which makes use of a small helper class:

        public class IEntityController
        {
            static public IEntity Get(int id, EntityTypes type)
            {
                switch (type)
                {
                    case EntityTypes.Dog: return Dog.Get(id);
                    case EntityTypes.Cat: return Cat.Get(id);
                    default: throw new Exception("Invalid EntityType");
                }
            }
        }

    Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative use of 'Union' it could be made to work. Finally, it has the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.

    Compiled queries. Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2Sql. There are ways to improve it, however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries. There are two ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.) An aside here, in case you don't know what EntitySQL is: it is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL. You can also do pretty complex object manipulation, which is well explained [here][1]. OK, so here is how to do it using ObjectQuery<>:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance);
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    The first time you run this query, the framework generates the expression tree and keeps it in memory. So the next time it gets executed, you save on that costly step. In this example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                ctx.DogSet.FirstOrDefault(it => it.ID == id));

    or, using Linq:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault());

    To call the query: query_GetDog.Invoke(YourContext, id); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...

    Includes. Let's say you want the data for the dog's owner to be returned by the query, to avoid making two calls to the database. Easy to do, right? EntitySQL:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery =
            new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner");
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    CompiledQuery:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault());

    Now, what if you want to have the Include parametrized? What I mean is that you want a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy, and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL:

        public Dog Get(int id, string include)
        {
            string query = "select value dog " +
                           "from Entities.DogSet as dog " +
                           "where dog.ID = @ID";
            ObjectQuery<Dog> oQuery =
                new ObjectQuery<Dog>(query, EntityContext.Instance).IncludeMany(include);
            oQuery.Parameters.Add(new ObjectParameter("ID", id));
            oQuery.EnablePlanCaching = true;
            return oQuery.FirstOrDefault();
        }

    The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that lets you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery, however, we run into numerous problems. The obvious:

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault());

    will choke when called with query_GetDog.Invoke(YourContext, id, "Owner,FavoriteFood"); because, as mentioned above, Include() only wants to see a single path in the string, and here we are giving it two: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then let's use IncludeMany(), which is an extension function:

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault());

    Wrong again; this time it is because EF cannot parse IncludeMany, since it is not among the functions it recognizes: it is an extension. OK, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never, ever need more than, say, 20 includes, and pass each in a separate string in a struct to CompiledQuery. But now the query looks like this:

        from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3)
                              .Include(include4).Include(include5).Include(include6)
                              .[...].Include(include19).Include(include20)
        where dog.ID == id
        select dog

    which is awful as well. OK then, but wait a minute: can't we return an ObjectQuery<> with CompiledQuery, then set the includes on that? Well, that is what I would have thought as well:

        static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) =>
                (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog));

        public Dog GetDog(int id, string include)
        {
            ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id);
            oQuery = oQuery.IncludeMany(include);
            return oQuery.FirstOrDefault();
        }

    That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query, because it is an entirely new one now! So the expression tree needs to be re-parsed and you take that performance hit again. So what is the solution? You simply cannot use CompiledQuery with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQuery. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation that's not possible. An example of use would be: you may want a page that queries which two dogs have the same favorite food, which is a bit narrow for a business-layer function, so you put it in your page and know exactly which includes are required.

    Passing more than 3 parameters to a CompiledQuery. Func is limited to five type parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with three parameters. A pittance, but it can be improved on very easily:

        public struct MyParams
        {
            public string param1;
            public int param2;
            public DateTime param3;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where dog.Age == myParams.param2
                      && dog.Name == myParams.param1
                      && dog.BirthDate > myParams.param3
                select dog);

        public List<Dog> GetSomeDogs(int age, string name, DateTime birthDate)
        {
            MyParams myParams = new MyParams();
            myParams.param1 = name;
            myParams.param2 = age;
            myParams.param3 = birthDate;
            return query_GetDog(YourContext, myParams).ToList();
        }

    Return types (this does not apply to EntitySQL queries, as they are not compiled ahead of time the way CompiledQuery ones are). Working with Linq, you usually don't force the execution of the query until the very last moment, in case some function downstream wants to change the query in some way:

        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog);

        public IEnumerable<Dog> GetSomeDogs(int age, string name)
        {
            return query_GetDog(YourContext, age, name);
        }

        public void DataBindStuff()
        {
            IEnumerable<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy(it => it.BirthDate);
        }

    What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), you invalidate the compiled query and force a re-parse. So the rule of thumb is to return a List<> of objects instead:

        public List<Dog> GetSomeDogs(int age, string name)
        {
            return query_GetDog(YourContext, age, name).ToList(); // <== change here
        }

        public void DataBindStuff()
        {
            List<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy(it => it.BirthDate);
        }

    When you call ToList(), the query gets executed per the compiled plan; the OrderBy is executed later against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mishandling the ObjectQuery and invalidating the compiled query plan. Once again, this is not a blanket statement. ToList() is a defensive-programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

    Performance. What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen up to 10 seconds. So the first time a pre-compiled query gets called, you take a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query; practically the same as Linq2Sql. When you load a page with pre-compiled queries for the first time, you will take a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. A dramatic difference, and it is up to you to decide whether it is OK for your first user to take a hit, or whether you want a script to call your pages to force compilation of the queries. Can this query be cached?

        Dog dog = (from d in YourContext.DogSet where d.ID == id select d).FirstOrDefault();

    No; ad-hoc Linq queries are not cached, and you will incur the cost of generating the tree every single time you call it.

    Parametrized queries. Most search capabilities involve heavily parametrized queries. There are even libraries that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

        public struct MyParams
        {
            public string name;
            public bool checkName;
            public int age;
            public bool checkAge;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where (myParams.checkAge == false || dog.Age == myParams.age)
                      && (myParams.checkName == false || dog.Name == myParams.name)
                select dog);

        protected List<Dog> GetSomeDogs()
        {
            MyParams myParams = new MyParams();
            myParams.name = "Bud";
            myParams.checkName = true;
            myParams.age = 0;
            myParams.checkAge = false;
            return query_GetDog(YourContext, myParams).ToList();
        }

    The advantage here is that you get all the benefits of a pre-compiled query. The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL:

        protected List<Dog> GetSomeDogs(string name, int age)
        {
            string query = "select value dog from Entities.DogSet as dog where 1 = 1 ";
            if (!String.IsNullOrEmpty(name))
                query = query + " and dog.Name = @Name ";
            if (age > 0)
                query = query + " and dog.Age = @Age ";

            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext);
            if (!String.IsNullOrEmpty(name))
                oQuery.Parameters.Add(new ObjectParameter("Name", name));
            if (age > 0)
                oQuery.Parameters.Add(new ObjectParameter("Age", age));
            return oQuery.ToList();
        }

    Here the problems are:
    - there is no syntax checking during compilation;
    - each different combination of parameters generates a different query, which will need to be pre-compiled when it is first run. In this case there are only four possible queries (no params, age-only, name-only, and both params), but you can see that there can be way more with a normal search;
    - no one likes to concatenate strings!

    Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory with a single pre-compiled query, and then refine the results:

        protected List<Dog> GetSomeDogs(string name, int age, string city)
        {
            string query = "select value dog from Entities.DogSet as dog " +
                           "where dog.Owner.Address.City = @City ";
            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext);
            oQuery.Parameters.Add(new ObjectParameter("City", city));

            List<Dog> dogs = oQuery.ToList();
            if (!String.IsNullOrEmpty(name))
                dogs = dogs.Where(it => it.Name == name).ToList();
            if (age > 0)
                dogs = dogs.Where(it => it.Age == age).ToList();
            return dogs;
        }

    It is particularly useful when you start by displaying all the data, then allow filtering. Problems:
    - it could lead to serious data transfer if you are not careful about your subset;
    - you can only filter on the data that you returned. This means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name.

    So what is the best solution? There isn't one. You need to pick the solution that works best for you and your problem:
    - use lambda-based query building when you don't care about pre-compiling your queries;
    - use a fully defined pre-compiled Linq query when your object structure is not too complex;
    - use EntitySQL/string concatenation when the structure could be complex and the number of possible resulting queries is small (which means fewer pre-compilation hits);
    - use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any extra time to be spent in the db).

    Singleton access. The best way to deal with your context and entities across all your pages is to use the singleton pattern:

        public sealed class YourContext
        {
            private const string instanceKey = "On3GoModelKey";

            YourContext() { }

            public static YourEntities Instance
            {
                get
                {
                    HttpContext context = HttpContext.Current;
                    if (context == null)
                        return Nested.instance;
                    if (context.Items[instanceKey] == null)
                    {
                        YourEntities entity = new YourEntities();
                        context.Items[instanceKey] = entity;
                    }
                    return (YourEntities)context.Items[instanceKey];
                }
            }

            class Nested
            {
                // Explicit static constructor to tell the C# compiler
                // not to mark the type as beforefieldinit
                static Nested() { }
                internal static readonly YourEntities instance = new YourEntities();
            }
        }

    NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? created? deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here. For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10:

        Dog dog = (from d in YourContext.DogSet where d.ID == 2 select d).FirstOrDefault();
        // dog.OwnerReference.IsLoaded == false;

        Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault();
        // dog.OwnerReference.IsLoaded == true;

    If we were to do the same with no tracking, the result would be different:

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
            (from d in YourContext.DogSet where d.ID == 2 select d);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        Dog dog = oDogQuery.FirstOrDefault();
        // dog.OwnerReference.IsLoaded == false;

        ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
            (from o in YourContext.PersonSet where o.ID == 10 select o);
        oPersonQuery.MergeOption = MergeOption.NoTracking;
        Person owner = oPersonQuery.FirstOrDefault();
        // dog.OwnerReference.IsLoaded == false;

    Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world there is a price for it in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking could be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and exceptions will be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix, you risk having the framework try to Attach() a NoTracking object to the context when another copy of the same object exists with tracking on. Basically, what I am saying is that

        Dog dog1 = (from d in YourContext.DogSet where d.ID == 2 select d).FirstOrDefault();

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
            (from d in YourContext.DogSet where d.ID == 2 select d);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        Dog dog2 = oDogQuery.FirstOrDefault();

    dog1 and dog2 are two different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail." And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.

    How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.

    So should I use NoTracking everywhere then? Not exactly. There are some advantages to tracked objects. The first one is that the object is cached, so a subsequent call for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have links between them:

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from d in YourContext.DogSet select d);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        List<Dog> dogs = oDogQuery.ToList();

        ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
        oPersonQuery.MergeOption = MergeOption.NoTracking;
        List<Person> owners = oPersonQuery.ToList();

    In this case, no dog will have its .Owner property set. Something to keep in mind when you are trying to optimize performance.

    No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if (!ObjectReference.IsLoaded) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized includes begins to make sense. Let's say you have your Dog object:

        public partial class Dog
        {
            static public Dog Get(int id)
            {
                return YourContext.DogSet.FirstOrDefault(it => it.ID == id);
            }
        }

    This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each page will want access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need, any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

        static public Dog Get(int id) { return Get(id, ""); }

        static public Dog Get(int id, string includePath)
        {
            string query = "select value o " +
                           " from YourEntities.DogSet as o " +
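    The IncludeMany(string) extension referred to throughout the post is cut off above. As a rough reconstruction (my sketch, not the author's code), it only needs to split the comma-separated paths and apply Include() once per path:

        public static class ObjectQueryExtensions
        {
            // Sketch of the IncludeMany helper described in the post:
            // splits a comma-separated list of association paths and
            // applies ObjectQuery<T>.Include once per path.
            public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string includes)
            {
                if (String.IsNullOrEmpty(includes))
                    return query;
                foreach (string path in includes.Split(','))
                    query = query.Include(path.Trim());
                return query;
            }
        }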


  • Converting PDF to images using ImageMagick.NET - how to set the DPI

    - by Avi Pinto
    Hi, I'm trying to convert PDF files to images. ImageMagick is a great tool, and using the command-line tool gets me the desired result, but I need to do this in my code. So I added a reference to http://imagemagick.codeplex.com/ and the following code sample renders each page of the PDF as an image:

        MagickNet.InitializeMagick();
        using (ImageList im = new ImageList())
        {
            im.ReadImages(@"E:\Test\" + fileName + ".pdf");
            int count = 0;
            foreach (Image image in im)
            {
                image.Quality = 100;
                image.CompressType = ImageMagickNET.CompressionType.LosslessJPEGCompression;
                image.Write(@"E:\Test\" + fileName + "-" + count.ToString() + ".jpg");
                ++count;
            }
        }

    The problem: IT LOOKS LIKE CRAP. The rendered image is hardly readable. The problem, I realized, is that it uses ImageMagick's default of 72 DPI, and I can't find a way to set the DPI (96 or 120 DPI gives good results) via the .NET wrapper. Am I missing something, or is there really no way to set it via this wrapper? Thanks
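    If switching wrappers is an option, the newer Magick.NET wrapper exposes the render density directly. A sketch (this assumes Magick.NET, not the CodePlex wrapper from the question):

        // using ImageMagick;
        var settings = new MagickReadSettings
        {
            // Rasterize the PDF at 120 DPI instead of the 72 DPI default.
            Density = new Density(120)
        };
        using (var images = new MagickImageCollection())
        {
            images.Read(@"E:\Test\" + fileName + ".pdf", settings);
            int count = 0;
            foreach (var image in images)
            {
                image.Format = MagickFormat.Jpeg;
                image.Write(@"E:\Test\" + fileName + "-" + count + ".jpg");
                ++count;
            }
        }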


  • Fast string suffix checking in C# (.NET 4.0)?

    - by ilitirit
    What is the fastest method of checking string suffixes in C#? I need to check each string in a large list (anywhere from 5000 to 100000 items) for a particular term. The term is guaranteed never to be embedded within the string; in other words, if the string contains the term, it will be at the end of the string. The string is also guaranteed to be longer than the suffix. Cultural information is not important. This is how the different methods performed against 100000 strings (half of them have the suffix):

        1. Substring comparison  - 13.60ms
        2. String.Contains       - 22.33ms
        3. CompareInfo.IsSuffix  - 24.60ms
        4. String.EndsWith       - 29.08ms
        5. String.LastIndexOf    - 30.68ms

    These are average times. [Edit] Forgot to mention that the strings also get put into separate lists, but this is not important. It does add to the running time, though. On my system, substring comparison (extracting the end of the string using the String.Substring method and comparing it to the suffix term) is consistently the fastest when tested against 100000 strings. The problem with using substring comparison, though, is that garbage collection can slow it down considerably (more than the other methods), because String.Substring creates new strings. The effect is not as bad in .NET 4.0 as it was in 3.5 and below, but it is still noticeable. In my tests, String.Substring performed consistently slower on sets of 12000-13000 strings. This will obviously differ between systems and implementations. [EDIT] Benchmark code: http://pastebin.com/smEtYNYN
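    For reference, a suffix check that allocates nothing (my sketch, not from the question); it relies on the stated guarantees that the string is longer than the suffix and that culture does not matter:

        static bool EndsWithOrdinal(string s, string suffix)
        {
            int offset = s.Length - suffix.Length;
            // Compare from the last character backwards; mismatches near the
            // end fail fast, and no temporary string is ever created.
            for (int i = suffix.Length - 1; i >= 0; i--)
            {
                if (s[offset + i] != suffix[i])
                    return false;
            }
            return true;
        }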


  • Microsoft Reporting 2005 and Report Viewer Report ASP.Net Session Has Expired on Load

    - by ThaKidd
    At my job, I have been tasked with fixing an error with our reporting server: "ASP.NET Session Has Expired". This error occurs when the Visual Studio ReportViewer 2005 control attempts to load a report. We are trying to serve this report to users hitting our Internet-exposed Windows 2003 server running IIS 6.0. The ReportViewer control is attempting to load this report from a second server running Microsoft SQL 2005 with Reporting Services. The SQL server is not exposed to the Internet. Here is the weird thing: this error never occurs on the development box. When the code is transferred to the production IIS server, the error starts to occur. It happens every time the report is first loaded. If the browser's refresh button is clicked 5-10 times, the report will finally load correctly. I have reproduced this same error on the latest version of Mozilla Firefox, IE 7, and IE 8. The report only takes 10-20 seconds to load. I have tried timeouts in the 300+ second range on the reporting server / IIS production server. I have tried a few options like async rendering (which causes images not to load properly) and setting the session mode to InProc with a high timeout value in the reporting server's web.config. I have also tried using the reporting server's IP address in the ReportViewer's code instead of the server name. I also read about a picture-loading issue, which I plan to verify tomorrow when I get into work. I am unsure what service packs Visual Studio 2005 and the MSSQL server are running. Was an update released to fix this problem that I could not find? Does anyone have a fix for this?
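    For what it's worth, the usual starting point for this error is the session configuration on the IIS server hosting the ReportViewer page; a sketch (the values are guesses on my part, not a confirmed fix):

        <!-- web.config on the server hosting the ReportViewer control -->
        <system.web>
          <!-- keep the session alive longer than the slowest report load -->
          <sessionState mode="InProc" timeout="60" />
        </system.web>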


  • .NET Remoting memory leak?

    - by PrimeTSS
    I have a remoting class exposed as a Singleton:

        <configuration>
          <system.runtime.remoting>
            <application>
              <service>
                <wellknown mode="Singleton"
                           type="PTSSLinkClasses.PTSSLinkClientDesktopRemotable, PTSSLinkClasses"
                           objectUri="PTSSLinkDesktop" />
              </service>
              <channels>
                <channel ref="http" port="8901"/>
              </channels>
            </application>
          </system.runtime.remoting>
        </configuration>

    It is created within a "server" service. Another client service consumes this remote object, calling it every 0.5 seconds using a timer (polling, for testing). If the server service is stopped, so that the remote object is not available, memory usage for the client service keeps increasing. I have overridden InitializeLifetimeService to return null:

        public override Object InitializeLifetimeService()
        {
            return null;
        }

    If a remote object is not available, does .NET queue all the call requests to this object until all the memory is consumed? How can I detect that the remote object is not available and stop trying to call the remote method?
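    As far as I know, a failed call over the HTTP channel surfaces as an exception rather than being queued, but each failed call still allocates. One way to stop the per-tick cost (my sketch, not a confirmed diagnosis) is to catch the channel failure and back the timer off; Poll() and pollTimer are placeholders for your remote method and timer:

        try
        {
            remoteObject.Poll();
        }
        catch (System.Net.WebException)
        {
            // The HTTP channel could not reach the server:
            // stop polling and retry on a slower schedule.
            pollTimer.Stop();
        }
        catch (System.Runtime.Remoting.RemotingException)
        {
            pollTimer.Stop();
        }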


  • MySQL Connector for .NET - Is it REALLY mature?

    - by effkay
    After spending a miserable month with MySQL / .NET / Entity Framework, my findings: support for Entity Framework is VERY primitive. Please use it only for a students-and-subjects type of database. Kindly do not consider it for serious development, as they ARE STILL unable to sort out VERY BASIC things like:
    - it DOES NOT support unsigned stuff;
    - it DOES NOT support unsigned columns as FKs; if you try, it gives you a beautiful exception: "The specified value is not an instance of a valid constant type\r\nParameter name: value" [http://bugs.mysql.com/bug.php?id=44801];
    - a blob cannot store more than a few KB;
    - it cannot compare a null object with a column with a LEGAL null value [http://bugs.mysql.com/bug.php?id=49936];
    - they are unable to write a VERY PRIMITIVE check to return a date as null if the value in the column is 0000-00-00 00:00:00;
    - if you use Visual Studio: sorry, the MySQL/Sun guys hate Microsoft. They will NOT LET you import more than two or three tables (for Mickey Mouse types of tables, they allow five, but that's it). If you try, it will throw a TIMEOUT error in your face... unless you are smart enough to change the connection timeout in the connection string.
    Anyone who would like to add to the above list? WISH I had seen a list like this before I selected MySQL :(


  • English Error Messages in German Visual Studio 2008 / ASP.NET

    - by BlaM
    This might be a bit of a weird question, but I'll give it a shot: HELP, my Visual Studio 2008 / ASP.NET is giving me GERMAN error messages. Besides the fact that translations tend not to be as good as the original text, I can't search for them and find relevant answers to my problems on the internet. So: how do I switch my German Visual Studio 2008 Standard Edition to English messages? Update - just to make it clear: I am a German developer, working with a German Windows Vista... I also have a German version of Visual Studio, so it is not surprising that everything is German. I just don't want it that way... There must be a way to install an English language pack into my Visual Studio, though? Or uninstall the German one, so that the English default is used?!? (BTW: same thing for SQL Server Management Studio, too. F**k "Sichten". I want "Views". That's what they're really called. No one says "Sichten", not even here in Germany, and not even though it is translated correctly.)


  • OleDbException was unhandled in VB.NET

    - by ritch
    Syntax error (missing operator) in query expression '((ProductID = ?) AND ((? = 1 AND Product Name IS NULL) OR (Product Name = ?)) AND ((? = 1 AND Price IS NULL) OR (Price = ?)) AND ((? = 1 AND Quantity IS NULL) OR (Quantity = ?)))'.

    I need some help sorting this error out in Visual Basic .NET 2008. I am trying to update records in a Microsoft Access database. It can update one table, but the other table is just not having it.

        Private Sub Admin_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            'Reads users into the program from the text file (located at Module.vb)
            ReadUsers()
            'Connect to Access 2007 database file
            con.ConnectionString = ("Provider=Microsoft.ACE.OLEDB.12.0;" & _
                "Data Source=E:\Computing\Projects\Login\Login\bds.accdb;")
            con.Open()
            'SQL connect 1
            sql = "Select * From Clients"
            da = New OleDb.OleDbDataAdapter(sql, con)
            da.Fill(ds, "Clients")
            MaxRows = ds.Tables("Clients").Rows.Count
            intCounter = -1
            'SQL connect 2
            sql2 = "Select * From Products"
            da2 = New OleDb.OleDbDataAdapter(sql2, con)
            da2.Fill(ds, "Products")
            MaxRows2 = ds.Tables("Products").Rows.Count
            intCounter2 = -1
            'Show clients from the database in a ComboBox
            ComboBoxClients.DisplayMember = "ClientName"
            ComboBoxClients.ValueMember = "ClientID"
            ComboBoxClients.DataSource = ds.Tables("Clients")
        End Sub

    This is the button the error appears on, at da2.Update(ds, "Products"):

        Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button4.Click
            Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
            ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
            da2.Update(ds, "Products")
            'Alerts the user that the database has been updated
            MsgBox("Database Updated")
        End Sub

    However, the code works when updating another table:

        Private Sub UpdateButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles UpdateButton.Click
            'Allows users to update records in the database
            Dim cb As New OleDb.OleDbCommandBuilder(da)
            'Changes the database contents to the content in the text fields
            ds.Tables("Clients").Rows(intCounter).Item("ClientName") = ClientNameBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientID") = ClientIDBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientAddress") = ClientAddressBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientTelephoneNumber") = ClientNumberBox.Text
            'Updates the table within the database
            da.Update(ds, "Clients")
            'Alerts the user that the database has been updated
            MsgBox("Database Updated")
        End Sub
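    The giveaway in the error text (my reading, not a confirmed diagnosis) is the space in Product Name: the command builder emits the column name unquoted into the generated UPDATE, producing invalid SQL. DbCommandBuilder can be told to bracket identifiers; the idea, sketched in C#:

        var cb2 = new OleDbCommandBuilder(da2);
        // Quote identifiers so columns like [Product Name] survive
        // in the SQL that the builder generates.
        cb2.QuotePrefix = "[";
        cb2.QuoteSuffix = "]";

    The Clients table updates fine because none of its column names contain spaces.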


  • ASP.NET MVC Using Castle Windsor IoC

    - by Mad Halfling
    I have an app, modelled on the one from Apress Pro ASP.NET MVC, that uses Castle Windsor's IoC to instantiate the controllers with their respective repositories, and this is working fine, e.g.:

        public class ItemController : Controller
        {
            private IItemsRepository itemsRepository;

            public ItemController(IItemsRepository windsorItemsRepository)
            {
                this.itemsRepository = windsorItemsRepository;
            }
        }

    with the following factory controlling the controller creation:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Castle.Windsor;
        using Castle.Windsor.Configuration.Interpreters;
        using Castle.Core.Resource;
        using System.Reflection;
        using Castle.Core;

        namespace WebUI
        {
            public class WindsorControllerFactory : DefaultControllerFactory
            {
                WindsorContainer container;

                // The constructor:
                // 1. Sets up a new IoC container
                // 2. Registers all components specified in web.config
                // 3. Registers all controller types as components
                public WindsorControllerFactory()
                {
                    // Instantiate a container, taking configuration from web.config
                    container = new WindsorContainer(new XmlInterpreter(new ConfigResource("castle")));

                    // Also register all the controller types as transient
                    var controllerTypes = from t in Assembly.GetExecutingAssembly().GetTypes()
                                          where typeof(IController).IsAssignableFrom(t)
                                          select t;
                    foreach (Type t in controllerTypes)
                        container.AddComponentWithLifestyle(t.FullName, t, LifestyleType.Transient);
                }

                // Constructs the controller instance needed to service each request
                protected override IController GetControllerInstance(Type controllerType)
                {
                    return (IController)container.Resolve(controllerType);
                }
            }
        }

    I sometimes need to create other repository instances within controllers, to pick up data from other places; can I do this using the CW IoC, and if so, how? I have been playing around with the creation of new controller classes, as they should auto-register with my existing code (if I can get this working, I can register them properly later), but when I try to instantiate them there is an obvious objection, as I can't supply a repository class for the constructor (I was pretty sure that was the wrong way to go about it anyway). Any help (especially examples) would be much appreciated. Cheers, MH
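    For the repository side, a sketch of Castle Windsor's fluent registration (this assumes the fluent API is available in the Windsor version in use; ItemsRepository is a hypothetical concrete type, the interface name comes from the question):

        // using Castle.MicroKernel.Registration;
        container.Register(
            Component.For<IItemsRepository>()
                     .ImplementedBy<ItemsRepository>()   // hypothetical implementation
                     .LifeStyle.Transient);

        // Anywhere a controller (or anything else) would otherwise new up
        // a repository, resolve it from the container instead:
        var repository = container.Resolve<IItemsRepository>();

    With the repositories registered, Windsor can also satisfy them automatically as constructor parameters of any other registered component, which is usually preferable to calling Resolve by hand.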


  • Slow Databinding setup time in C# .NET 4.0

    - by Svisstack
    Hello, I have got a problem. I have a Windows Forms application with a dynamically generated layout, but I have a performance problem. In this form I use data binding from .NET 4.0; the binding works fine after setup, but the binding setup time for ONE control blocks my application for approx 0.7 seconds. I have some controls, and the total binding setup time is around 2 minutes. I have tried all the solutions I can think of and am out of ideas short of writing my own binding class. What is wrong with my code?

        case "Boolean":
        {
            Binding b = new Binding("Checked", __bindingsource, __ep.Name);
            CheckBox cb = new CheckBox();
            /*
             * HERE is the problem
             */
            cb.DataBindings.Add(b);
            /*
             * HERE is the end of the problem
             */
            __flp.Controls.Add(cb);
            __bindingcontrol.AddBinding(b);
            break;
        }

    Without the problem code lines everything works fast, but without binding ;-( and I want binding turned on at a normal speed. PS1: I have suspended layout during generation. PS2: I have the same problem when binding TextBoxes and PictureBoxes; CheckBox is only an example. How can I fix this?
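    One thing worth trying (a guess on my part, not a verified fix): suspend the binding source while the controls are wired up, so each DataBindings.Add does not force a full data push; the names come from the question's code:

        __bindingsource.RaiseListChangedEvents = false;
        __bindingsource.SuspendBinding();
        try
        {
            // ... generate the controls and call cb.DataBindings.Add(b) for each ...
        }
        finally
        {
            __bindingsource.ResumeBinding();
            __bindingsource.RaiseListChangedEvents = true;
        }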


  • Attempted to read or write protected memory - SQL Compact and .NET

    - by Jankhana
    I'm using SQL Compact 3.5 as my DB with C# .NET. I have a strange problem with my application. I'm running the code on two PCs with the same configuration, except for SQL Compact. On the PC where SQL Compact 3.5 is not installed, I'm getting this strange error:

        Exception: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
        Inner Exception:
        Stack Trace:
           at System.Data.SqlServerCe.NativeMethods.CloseStore(IntPtr pSeStore)
           at System.Data.SqlServerCe.SqlCeConnection.ReleaseNativeInterfaces()
           at System.Data.SqlServerCe.SqlCeConnection.Dispose(Boolean disposing)
           at System.Data.SqlServerCe.SqlCeConnection.Finalize()
        Source: System.Data.SqlServerCe

    I don't know where I have gone wrong. I checked my code and included try/catch everywhere, and I'm handling unhandled exceptions as well. I am getting this error from a console application which I start from a Windows Forms application. In both applications I've added the unhandled-exception code; it gets executed, and the error is written to the text file. But the Microsoft error-report ("Don't Send") dialog still gets generated! Why is that dialog box still being generated when I'm handling the exception and my catch handler executes? Is there any way to suppress that dialog box? On the other PC, where SQL Compact is installed, I get no error! Any idea why that is?
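    The stack trace ends in SqlCeConnection.Finalize(), which suggests (my inference, not a confirmed diagnosis) a connection being torn down on the GC's finalizer thread rather than disposed deterministically; wrapping every connection in a using block keeps the native teardown on your own thread:

        using (var conn = new SqlCeConnection(connectionString))
        {
            conn.Open();
            // ... queries ...
        }   // Dispose runs here, so Finalize never has to touch the native store.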


  • Subscription website architecture questions + SQL Server & .NET

    - by chopps
    Hey guys, I have a few questions about the architecture of a subscription service I am about to embark on, and I am looking for some feedback on how best to set it up. I won't have a large number of customers, as Basecamp does; maybe a few hundred. I was wondering what would be a solid architecture for setting up the customer sites. I'm running SQL Server and .NET on a dedicated machine. Should I create a new database for each customer, to have control and isolation of data, or keep them all in one database? I am also thinking of creating a sub-domain for each customer as well, so modifications can be made to each site as needed. The customer URLs would look like this: https://customer1.foobar.com, https://customer2.foobar.com. I am going to have the ability to 'plug in' reports that will be uploaded to the site, so each customer can customize as needed. Off the top of my head, this necessitates having each sub-domain on its own code base for the uploading of these reports. So, on the main site, the customer would sign up for their new subscription, and I would programmatically create a new directory for the customer from the main code base, then create a sub-domain pointing to the new directory, and finally create their database. Does this sound about right? Am I on the right track? How do other such sites accomplish the same thing? Thanks for letting me bend your ear for a bit on this.


  • .net framework execution aborted while executing CLR sproc?

    - by Sean Ochoa
    I constructed a sproc that does the equivalent of FOR XML AUTO in SQL 2008. Now that I'm testing it, it gives me a really unhelpful error message. Any idea what this error means?

        Msg 10329, Level 16, State 49, Procedure ForXML, Line 0
        .Net Framework execution was aborted. System.Threading.ThreadAbortException: Thread was being aborted.
        System.Threading.ThreadAbortException:
           at System.Runtime.InteropServices.Marshal.PtrToStringUni(IntPtr ptr, Int32 len)
           at System.Data.SqlServer.Internal.CXVariantBase.WSTRToString()
           at System.Data.SqlServer.Internal.SqlWSTRLimitedBuffer.GetString(SmiEventSink sink)
           at System.Data.SqlServer.Internal.RowData.GetString(SmiEventSink sink, Int32 i)
           at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue(SmiEventSink_Default sink, ITypedGettersV3 getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
           at Microsoft.SqlServer.Server.ValueUtilsSmi.GetValue200(SmiEventSink_Default sink, SmiTypedGetterSetter getters, Int32 ordinal, SmiMetaData metaData, SmiContext context)
           at System.Data.SqlClient.SqlDataReaderSmi.GetValue(Int32 ordinal)
           at System.Data.SqlClient.SqlDataReaderSmi.GetValues(Object[] values)
           at System.Data.ProviderBase.DataReaderContainer.CommonLanguageSubsetDataReader.GetValues(Object[] values)
           at System.Data.ProviderBase.SchemaMapping.LoadDataRow()
           at System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping)
           at System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
           at System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
           at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataTable[] dataTables, Int32 startRecord, Int32 maxRecords, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataTable dataTable)
           at ForXML.GetXML...


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists, else it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity of detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable, and retrieving it from Session on subsequent page requests, therefore maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable, or, more importantly, the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
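    For comparison (a common pattern, not from the question), a per-request context kept in HttpContext.Items sidesteps both the session-size worry and the cross-request context, at the cost of re-querying per request; MyEntities is a placeholder for the generated context type:

        public static class ContextProvider
        {
            private const string Key = "ObjectContextPerRequest"; // arbitrary key

            public static MyEntities Current
            {
                get
                {
                    var items = System.Web.HttpContext.Current.Items;
                    if (items[Key] == null)
                        items[Key] = new MyEntities();
                    return (MyEntities)items[Key];
                }
            }
        }

    All repositories created during a request then share one context automatically, and nothing EF-related ever lands in Session.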


  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance-critical because the server can receive several thousands of requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card, so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw that the performance was not too good. To understand whether the problem was loading from the HD or decoding the PNG, we modified that by loading the file into a memory stream, then sending that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we roughly get the same levels of performance. Our benchmarks show the following performance results:

        Load images from disk:                        37.76ms    (1%)
        Decode PNGs:                                2816.97ms   (77%)
        Load images on video hardware:               196.67ms    (5%)
        Composition:                                  87.80ms    (2%)
        Get composition result from video hardware:  166.21ms    (5%)
        Encode to PNG:                               318.13ms    (9%)
        Store to disk:                                 3.96ms    (0%)
        Clean up:                                     53.00ms    (1%)
        Total:                                      3680.50ms  (100%)

    From these results, we see that the slowest part is decoding the PNG. So we are wondering if there is a PNG decoder we could use that would allow us to reduce the decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.


  • Calling webservice via server causes java.net.MalformedURLException: no protocol

    - by Thomas
    I am writing a web service which parses an XML file. In the client, I read the whole content of the XML into a String and then give it to the web service. If I run my web service with main as a Java application (for tests), there is no problem and no error messages. However, when I try to call it via the server, I get the following error: java.net.MalformedURLException: no protocol. I use the same XML file and the same code (without main), and I just cannot figure out what the cause of the error can be. Here is my code:

        DOMParser parser = new DOMParser();
        try {
            parser.setFeature("http://xml.org/sax/features/validation", true);
            parser.setFeature("http://apache.org/xml/features/validation/schema", true);
            parser.setFeature("http://apache.org/xml/features/validation/dynamic", true);
            parser.setErrorHandler(new myErrorHandler());
            parser.parse(new InputSource(new StringReader(xmlFile)));
            document = parser.getDocument();

    xmlFile is constructed in the client like so:

        String myFile = "C:/test.xml";
        File file = new File(myFile);
        String myString = "";
        FileInputStream fis = new FileInputStream(file);
        BufferedInputStream bis = new BufferedInputStream(fis);
        DataInputStream dis = new DataInputStream(bis);
        while (dis.available() != 0) {
            myString = myString + dis.readLine();
        }
        fis.close();
        bis.close();
        dis.close();

    Any suggestions will be appreciated!


  • Using MSBuild 4 command line to publish ASP.NET web application

    - by meandmycode
    In previous MSBuild versions we used the target '_CopyWebApplication' to build and convert the source of a project into a published site. This worked OK, but wasn't ideal. In .NET 4, the publishing process is somewhat more sophisticated, and additionally seems a bit of a black box to understand. While packages look great, I cannot fully understand how they can be harnessed by a build server; the build server would not get any manifest information, and equally, something (MSBuild?) is CREATING this manifest information FROM the project file. On our build server, I ideally want to say: here is my csproj file, deploy it with package configuration 'x'. I'm trying to understand the workflow I need to make this happen. Right now, when I use _CopyWebApplication, the result is different from doing a publish from Visual Studio 2010; primarily, web.config transforms aren't processed, and obviously MSDeploy isn't involved at all. Can somebody point me in the right direction? I believe I need to get MSBuild to do the equivalent of 'Build Deployment Package', and then use MSDeploy to deploy this from our build server to our CI testing environments. I know this is a very vague post, but I hope somebody can give me some hints. I'll be continuing my research as well, so if I make any progress, I'll post my findings here. Thanks in advance, Stephen.
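    For what it's worth, the flow I believe is involved, sketched as commands (the project and server names are placeholders):

        rem The command-line equivalent of "Build Deployment Package" in the IDE:
        msbuild MySite.csproj /t:Package /p:Configuration=Release

        rem The Package target emits MySite.zip plus a MySite.deploy.cmd wrapper
        rem that drives msdeploy; /Y executes (instead of a what-if run):
        MySite.deploy.cmd /Y /M:ci-server

    The Package target also runs the web.config transform for the chosen configuration, which is the piece _CopyWebApplication skips.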

    Read the article

  • Visual Studio 2008 / ASP.NET 3.5 / C# -- issues with intellisense, references, and builds

    - by goober
    Hey all, hoping you can help me: the strangest thing seems to have happened to my Visual Studio install. System configuration: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5. I have two web site projects in a solution. I reference NUnit and NHibernate (added by right-clicking the project and selecting "Add Reference"; I've done this for several projects in the past). Things were working fine but recently stopped, and I can't figure out why. IntelliSense completely disappears for any files in my App_Code directory, and none of the references are recognized there (they are recognized in any file in the root directory of the web site project). Additionally, pretty simple code like the following fails in Page_Load (assume TextBox1 is definitely an element on the page):

        if (Page.IsPostBack)
        {
            string test1;        // 'str' in the original; corrected to 'string'
            test1 = TextBox1.Text;
        }

    It says that all the page elements are null or that it can't access them. At first I thought it was me, but given the combination of issues, it seems to be Visual Studio itself. I've tried clearing the temp directories and rebuilding the solution. I've also checked Tools -> Options -> Text Editor to ensure IntelliSense is turned on. I'd appreciate any help you can give! Thanks, Sean

    Read the article

  • Java and .net interoperability

    - by dineshrekula
    I have a C# program that opens a cmd window as a process. In that command window I run a batch file, and I redirect the output of the batch file's commands to a text file. When I run my application, everything usually works. But occasionally the application gives an error like "Can't access the file. It's being used by another application", and at the same time the cmd window does not close. If I kill the cmd process through Task Manager, the content is written to the file and the window closes. Even after closing the cmd process, the file handle is not released, so I cannot run the application again; it always says it can't access the file. Only after restarting the system does it work. Here is my code:

        Process objProcess = new Process();
        ProcessStartInfo objProInfo = new ProcessStartInfo();
        objProInfo.WindowStyle = ProcessWindowStyle.Maximized;
        objProInfo.UseShellExecute = true;
        objProInfo.FileName = "Batch file path";
        objProInfo.Arguments = "Some Arguments";
        if (Directory.Exists(strOutputPath) == false)
        {
            Directory.CreateDirectory(strOutputPath);
        }
        objProInfo.CreateNoWindow = false;
        objProcess.StartInfo = objProInfo;
        objProcess.Start();
        objProcess.WaitForExit();

    test.bat:

        java classname argument > output.txt

    Here is my question: I am not able to trace where the problem is. How can we see which process is holding a handle on a file? Are there any suggestions for Java and .NET interoperability?
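
    One possible alternative, sketched here as an editor's assumption rather than the original design, is to drop the batch file's shell redirection and let the .NET side capture the Java output directly, so only one process ever owns the output file handle:

        using System.Diagnostics;
        using System.IO;

        public static class JavaRunner
        {
            public static void RunJava(string strOutputPath)
            {
                // "classname argument" stands in for the real Java invocation.
                ProcessStartInfo psi = new ProcessStartInfo("java", "classname argument");
                psi.UseShellExecute = false;          // required for output redirection
                psi.RedirectStandardOutput = true;
                psi.CreateNoWindow = true;

                using (Process p = Process.Start(psi))
                {
                    // Read the stream before WaitForExit to avoid a pipe-buffer deadlock.
                    string output = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    File.WriteAllText(Path.Combine(strOutputPath, "output.txt"), output);
                }
            }
        }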

    Read the article

  • ASP.Net Response Filter Causing SharePoint 2010 "Unexpected Error"

    - by Jason Weber
    Hello everyone, I'm debugging an HttpModule with an ASP.NET response filter that dynamically rewrites portions of rendered SharePoint WCM pages. The publishing pages render fine in SharePoint 2007 on both Server 2003 and Server 2008. However, the equivalent pages fail to render in SharePoint 2010 Beta 2 on Server 2008 R2: the generic "An unexpected error has occurred" page is displayed. This error only happens when the response filter is applied to an .aspx page; other content types, such as .css, render fine on this platform. The error also occurs when the response filter does not modify the page at all (a pure pass-through). This KB article seems very closely related: http://support.microsoft.com/kb/2014472. However, the same error occurs with caching disabled. I see no related entries in the SharePoint ULS logs, the Event Log, or IIS 7 Failed Request Tracing. Running under the debugger suggests that the custom code is not raising any exceptions. Any help or insight would be greatly appreciated.
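
    For reference, a minimal sketch of the pattern under discussion (names are illustrative; this is not the original module). Even a pure pass-through filter like this one reportedly triggers the SP2010 error:

        using System;
        using System.IO;
        using System.Web;

        public class RewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // Wrap the response stream so the rendered HTML could be rewritten.
                app.PreRequestHandlerExecute += delegate(object sender, EventArgs e)
                {
                    HttpResponse response = ((HttpApplication)sender).Response;
                    response.Filter = new PassThroughFilter(response.Filter);
                };
            }

            public void Dispose() { }
        }

        // Write-only wrapper that forwards everything to the original filter chain.
        public class PassThroughFilter : Stream
        {
            private readonly Stream inner;
            public PassThroughFilter(Stream inner) { this.inner = inner; }

            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { inner.Flush(); }
            public override int Read(byte[] buffer, int offset, int count)
            { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin)
            { throw new NotSupportedException(); }
            public override void SetLength(long value)
            { throw new NotSupportedException(); }
            public override void Write(byte[] buffer, int offset, int count)
            {
                inner.Write(buffer, offset, count); // pure pass-through
            }
        }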

    Read the article

  • .Net xsd.exe tool doesn't generate all types

    - by Mrchief
    For some reason, the MS .NET (v3.5) tool xsd.exe doesn't generate types when they are not used inside any element. For example, take the XSD file below (I threw in the complex element to avoid the warning "Warning: cannot generate classes because no top-level elements with complex type were found."):

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema targetNamespace="http://tempuri.org/XMLSchema.xsd"
                   elementFormDefault="qualified"
                   xmlns="http://tempuri.org/XMLSchema.xsd"
                   xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:simpleType name="EnumTest">
            <xs:restriction base="xs:string">
              <xs:enumeration value="item1" />
              <xs:enumeration value="item2" />
              <xs:enumeration value="item3" />
            </xs:restriction>
          </xs:simpleType>
          <xs:complexType name="myComplexType">
            <xs:attribute name="Name" use="required" type="xs:string" />
          </xs:complexType>
          <xs:element name="myElem" type="myComplexType"></xs:element>
        </xs:schema>

    When I run this through xsd.exe with

        xsd /c xsdfile.xsd

    I don't see EnumTest in the generated .cs file. Note: even though I don't use the enum here, in my actual project I have cases like this where we send the enum's string value as output. How can I force the xsd tool to include these types? Or should I switch to some other tool? I work in Visual Studio 2008.
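
    One possible direction, offered here as an editor's guess rather than a confirmed fix: xsd.exe only emits types reachable from a global element, so referencing EnumTest from the complex type (via a hypothetical attribute added only for that purpose) should pull it into the generated classes:

        <xs:complexType name="myComplexType">
          <xs:attribute name="Name" use="required" type="xs:string" />
          <!-- hypothetical attribute, added only to make EnumTest reachable -->
          <xs:attribute name="Kind" type="EnumTest" />
        </xs:complexType>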

    Read the article

  • Asp.Net tree view in SharePoint webpart- Input string error

    - by Faiz
    Hi all, I am facing a very strange issue. I have a SharePoint web part that displays an ASP.NET tree view, and it takes the tree depth from a drop-down. To improve the tree view's performance, I set the PopulateOnDemand property to true on the last level of the selected tree depth. For example, if the data has a total of 10 levels and the user selects a tree depth of 3, then on the third level I set PopulateOnDemand to true. Now comes the strange part: when I click the + image on a third-level node that has children, the callback happens and the node expands. But if that particular node has no children, clicking + throws an "Input string was not in the correct format" error. I have made sure there is no server-side error; something looks fishy when Internet Explorer tries to construct the expanded node. Has anyone faced a similar issue, or does anyone know the resolution? Thanks in advance.
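
    A minimal sketch of the on-demand pattern being described, for context (GetChildren and HasChildren are hypothetical data-access helpers; this is not the poster's code):

        // In the page's or web part's code-behind, wired to the TreeView's
        // TreeNodePopulate event, which fires via callback when a node marked
        // PopulateOnDemand is expanded.
        protected void TreeView1_TreeNodePopulate(object sender, TreeNodeEventArgs e)
        {
            foreach (DataRow row in GetChildren(e.Node.Value).Rows)
            {
                TreeNode child = new TreeNode((string)row["Title"], (string)row["Id"]);
                // Marking only nodes that actually have children avoids showing
                // a '+' on empty nodes in the first place.
                child.PopulateOnDemand = HasChildren((string)row["Id"]);
                e.Node.ChildNodes.Add(child);
            }
        }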

    Read the article
