
  • FluentNHibernate Unit Of Work / Repository Design Pattern Questions

    - by Echiban
    Hi all, I think I am at a impasse here. I have an application I built from scratch using FluentNHibernate (ORM) / SQLite (file db). I have decided to implement the Unit of Work and Repository Design pattern. I am at a point where I need to think about the end game, which will start as a WPF windows app (using MVVM) and eventually implement web services / ASP.Net as UI. Now I already created domain objects (entities) for ORM. And now I don't know how should I use it outside of ORM. Questions about it include: Should I use ORM entity objects directly as models in MVVM? If yes, do I put business logic (such as certain values must be positive and be greater than another Property) in those entity objects? It is certainly the simpler approach, and one I am leaning right now. However, will there be gotchas that would trash this plan? If the answer above is no, do I then create a new set of classes to implement business logic and use those as Models in MVVM? How would I deal with the transition between model objects and entity objects? I guess a type converter implementation would work well here. Now I followed this well written article to implement the Unit Of Work pattern. However, due to the fact that I am using FluentNHibernate instead of NHibernate, I had to bastardize the implementation of UnitOfWorkFactory. Here's my implementation: using System; using FluentNHibernate.Cfg; using FluentNHibernate.Cfg.Db; using NHibernate; using NHibernate.Cfg; using NHibernate.Tool.hbm2ddl; namespace ELau.BlindsManagement.Business { public class UnitOfWorkFactory : IUnitOfWorkFactory { private static readonly string DbFilename; private static Configuration _configuration; private static ISession _currentSession; private ISessionFactory _sessionFactory; static UnitOfWorkFactory() { // arbitrary default filename DbFilename = "defaultBlindsDb.db3"; } internal UnitOfWorkFactory() { } #region IUnitOfWorkFactory Members public ISession CurrentSession { get { if (_currentSession == null) { throw new InvalidOperationException(ExceptionStringTable.Generic_NotInUnitOfWork); } return _currentSession; } set { _currentSession = value; } } public ISessionFactory SessionFactory { get { if (_sessionFactory == null) { _sessionFactory = BuildSessionFactory(); } return _sessionFactory; } } public Configuration Configuration { get { if (_configuration == null) { Fluently.Configure().ExposeConfiguration(c => _configuration = c); } return _configuration; } } public IUnitOfWork Create() { ISession session = CreateSession(); session.FlushMode = FlushMode.Commit; _currentSession = session; return new UnitOfWorkImplementor(this, session); } public void DisposeUnitOfWork(UnitOfWorkImplementor adapter) { CurrentSession = null; UnitOfWork.DisposeUnitOfWork(adapter); } #endregion public ISession CreateSession() { return SessionFactory.OpenSession(); } public IStatelessSession CreateStatelessSession() { return SessionFactory.OpenStatelessSession(); } private static ISessionFactory BuildSessionFactory() { ISessionFactory result = Fluently.Configure() .Database( SQLiteConfiguration.Standard .UsingFile(DbFilename) ) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UnitOfWorkFactory>()) .ExposeConfiguration(BuildSchema) .BuildSessionFactory(); return result; } private static void BuildSchema(Configuration config) { // this NHibernate tool takes a configuration (with mapping info in) // and exports a database schema from it _configuration = config; new SchemaExport(_configuration).Create(false, true); } } } I know that this 
implementation is flawed, because some tests pass when run individually but fail when the whole test suite is run, for reasons I haven't been able to identify. If anyone wants to help me out with this one, given its complexity, please contact me by private message. I am willing to send some $$$ via PayPal to someone who can address the issue and provide a solid explanation. I am new to ORM, so any assistance is appreciated.
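
On the question of moving between entity objects and MVVM models, a minimal mapping sketch might look like the following. Note that BlindEntity, BlindViewModel and their properties are hypothetical stand-ins used only for illustration, not types from the actual project:

```csharp
// Hypothetical types used only to illustrate the entity <-> model transition
// discussed above; the real domain classes are not shown in the post.
public class BlindEntity              // persisted via FluentNHibernate
{
    public virtual int Id { get; set; }
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
}

public class BlindViewModel           // consumed by the WPF/MVVM layer
{
    public int Id { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

public static class BlindMapper
{
    // Copies persistent state into a UI-facing model object.
    public static BlindViewModel ToModel(BlindEntity entity)
    {
        return new BlindViewModel { Id = entity.Id, Width = entity.Width, Height = entity.Height };
    }

    // Copies edited values back onto the tracked entity before the unit of work commits.
    public static void ToEntity(BlindViewModel model, BlindEntity entity)
    {
        entity.Width = model.Width;
        entity.Height = model.Height;
    }
}
```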


  • Django manager for _set in model

    - by Daniel Johansson
    Hello, I'm in the process of learning Django at the moment, but I can't figure out how to solve this problem on my own. I'm reading the book Developer's Library - Python Web Development with Django, and in one chapter you build a simple CMS system with two models (Story and Category), some generic and custom views, together with templates for the views. The book only contains code for listing stories, story details and search. I wanted to expand on that and build a page with nested lists of categories and stories:

    - Category1
    -- Story1
    -- Story2
    - Category2
    -- Story3
    etc.

    I managed to figure out how to add my own generic object_list view for the category listing. My problem is that the Story model has STATUS_CHOICES indicating whether the Story is public or not, and a custom manager that only fetches the public Stories by default. I can't figure out how to tell my generic Category list view to also use a custom manager and only fetch the public Stories. Everything works except that small problem. I'm able to create a list of all categories, with a sub-list of all stories in each category, on a single page; the only problem is that the list contains non-public Stories. I don't know if I'm on the right track here. My urls.py contains a generic view that fetches all Category objects, and in my template I'm using category.story_set.all to get all Story objects for that category, which I then loop over. I think it would be possible to add an if statement in the template and use the VIEWABLE_STATUS from my model file to check whether each story should be listed or not. The problem with that solution is that it's not very DRY. Is it possible to add some kind of manager for the Category model too, so that it only fetches public Story objects when using story_set on a category? Or is this the wrong way to attack my problem?
Related code urls.py (only category list view): urlpatterns += patterns('django.views.generic.list_detail', url(r'^categories/$', 'object_list', {'queryset': Category.objects.all(), 'template_object_name': 'category' }, name='cms-categories'), models.py: from markdown import markdown import datetime from django.db import models from django.db.models import permalink from django.contrib.auth.models import User VIEWABLE_STATUS = [3, 4] class ViewableManager(models.Manager): def get_query_set(self): default_queryset = super(ViewableManager, self).get_query_set() return default_queryset.filter(status__in=VIEWABLE_STATUS) class Category(models.Model): """A content category""" label = models.CharField(blank=True, max_length=50) slug = models.SlugField() class Meta: verbose_name_plural = "categories" def __unicode__(self): return self.label @permalink def get_absolute_url(self): return ('cms-category', (), {'slug': self.slug}) class Story(models.Model): """A hunk of content for our site, generally corresponding to a page""" STATUS_CHOICES = ( (1, "Needs Edit"), (2, "Needs Approval"), (3, "Published"), (4, "Archived"), ) title = models.CharField(max_length=100) slug = models.SlugField() category = models.ForeignKey(Category) markdown_content = models.TextField() html_content = models.TextField(editable=False) owner = models.ForeignKey(User) status = models.IntegerField(choices=STATUS_CHOICES, default=1) created = models.DateTimeField(default=datetime.datetime.now) modified = models.DateTimeField(default=datetime.datetime.now) class Meta: ordering = ['modified'] verbose_name_plural = "stories" def __unicode__(self): return self.title @permalink def get_absolute_url(self): return ("cms-story", (), {'slug': self.slug}) def save(self): self.html_content = markdown(self.markdown_content) self.modified = datetime.datetime.now() super(Story, self).save() admin_objects = models.Manager() objects = ViewableManager() category_list.html (related template): {% extends "cms/base.html" %} {% block content %} <h1>Categories</h1> {% if category_list %} <ul id="category-list"> {% for category in category_list %} <li><a href="{{ category.get_absolute_url }}">{{ category.label }}</a></li> {% if category.story_set %} <ul> {% for story in category.story_set.all %} <li><a href="{{ story.get_absolute_url }}">{{ story.title }}</a></li> {% endfor %} </ul> {% endif %} {% endfor %} </ul> {% else %} <p> Sorry, no categories at the moment. </p> {% endif %} {% endblock %}


  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM byte-code format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses for a specific C++ class. So the C++ testing engine would only see C++ objects while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions, calls from the scripting language into the testing engine objects to retrieve pricing information and test state information and calls from the testing engine of methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary: 1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process? 2) Can I link in code using that the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?


  • Overriding Object.Equals() instance method in C#; now Code Analysis / FxCop warning CA2218: "should also redefine GetHashCode"

    - by Chris W. Rea
    I've got a complex class in my C# project on which I want to be able to do equality tests. It is not a trivial class; it contains a variety of scalar properties as well as references to other objects and collections (e.g. IDictionary). For what it's worth, my class is sealed. To enable a performance optimization elsewhere in my system (an optimization that avoids a costly network round-trip), I need to be able to compare instances of these objects to each other for equality – other than the built-in reference equality – and so I'm overriding the Object.Equals() instance method. However, now that I've done that, Visual Studio 2008's Code Analysis a.k.a. FxCop, which I keep enabled by default, is raising the following warning: warning : CA2218 : Microsoft.Usage : Since 'MySuperDuperClass' redefines Equals, it should also redefine GetHashCode. I think I understand the rationale for this warning: If I am going to be using such objects as the key in a collection, the hash code is important. i.e. see this question. However, I am not going to be using these objects as the key in a collection. Ever. Feeling justified to suppress the warning, I looked up code CA2218 in the MSDN documentation to get the full name of the warning so I could apply a SuppressMessage attribute to my class as follows: [SuppressMessage("Microsoft.Naming", "CA2218:OverrideGetHashCodeOnOverridingEquals", Justification="This class is not to be used as key in a hashtable.")] However, while reading further, I noticed the following: How to Fix Violations To fix a violation of this rule, provide an implementation of GetHashCode. For a pair of objects of the same type, you must ensure that the implementation returns the same value if your implementation of Equals returns true for the pair. When to Suppress Warnings ----- Do not suppress a warning from this rule. [arrow & emphasis mine] So, I'd like to know: Why shouldn't I suppress this warning as I was planning to? Doesn't my case warrant suppression? I don't want to code up an implementation of GetHashCode() for this object that will never get called, since my object will never be the key in a collection. If I wanted to be pedantic, instead of suppressing, would it be more reasonable for me to override GetHashCode() with an implementation that throws a NotImplementedException? Update: I just looked this subject up again in Bill Wagner's good book Effective C#, and he states in "Item 10: Understand the Pitfalls of GetHashCode()": If you're defining a type that won't ever be used as the key in a container, this won't matter. Types that represent window controls, web page controls, or database connections are unlikely to be used as keys in a collection. In those cases, do nothing. All reference types will have a hash code that is correct, even if it is very inefficient. [...] In most types that you create, the best approach is to avoid the existence of GetHashCode() entirely. ... that's where I originally got this idea that I need not be concerned about GetHashCode() always.
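
If one did want to satisfy the rule rather than suppress it, a GetHashCode() override consistent with an overridden Equals() generally just combines the same members that Equals() compares, so that equal objects return equal hash codes. A minimal sketch, where Name and Size are hypothetical stand-ins for the real members of MySuperDuperClass:

```csharp
// Sketch only: Name and Size are hypothetical members standing in for the
// question's real properties; the point is that GetHashCode() combines the
// same fields that Equals() compares, so equal objects hash identically.
public sealed class MySuperDuperClass
{
    public string Name { get; set; }
    public int Size { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as MySuperDuperClass;
        if (other == null)
            return false;
        return Name == other.Name && Size == other.Size;
    }

    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 23 + (Name != null ? Name.GetHashCode() : 0);
            hash = hash * 23 + Size;
            return hash;
        }
    }
}
```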


  • What's my best approach on this simple hierarchy Java Problem?

    - by Nazgulled
    First, I'm sorry for the question title but I can't think of a better one to describe my problem. Feel free to change it :) Let's say I have this abstract class Box which implements a couple of constructors, methods and whatever on some private variables. Then I have a couple of sub classes like BoxA and BoxB. Both of these implement extra things. Now I have another abstract class Shape and a few sub classes like Square and Circle. For both BoxA and BoxB I need to have a list of Shape objects but I need to make sure that only Square objects go into BoxA's list and only Circle objects go into BoxB's list. For that list (on each box), I need to have a get() and set() method and also a addShape() and removeShape() methods. Another important thing to know is that for each box created, either BoxA or BoxB, each respectively Shape list is exactly the same. Let's say I create a list of Square's named ls and two BoxA objects named boxA1 and boxA2. No matter what, both boxA1 and boxA2 must have the same ls list. This is my idea: public abstract class Box { // private instance variables public Box() { // constructor stuff } // public instance methods } public class BoxA extends Box { // private instance variables private static List<Shape> list; public BoxA() { // constructor stuff } // public instance methods public static List<Square> getList() { List<Square> aux = new ArrayList<Square>(); for(Square s : list.values()) { aux.add(s.clone()); // I know what I'm doing with this clone, don't worry about it } return aux; } public static void setList(List<Square> newList) { list = new ArrayList<Square>(newList); } public static void addShape(Square s) { list.add(s); } public static void removeShape(Square s) { list.remove(list.indexOf(s)); } } As the list needs to be the same for that type of object, I declared as static and all methods that work with that list are also static. Now, for BoxB the class would be almost the same regarding the list stuff. I would only replace Square by Triangle and the problem was solved. So, for each BoxA object created, the list would be only one and the same for each BoxB object created, but a different type of list of course. So, what's my problem you ask? Well, I don't like the code... The getList(), setList(), addShape() and removeShape() methods are basically repeated for BoxA and BoxB, only the type of the objects that the list will hold is different. I can't think of way to do it in the super class Box instead. Doing it statically too, using Shape instead of Square and Triangle, wouldn't work because the list would be only one and I need it to be only one but for each sub class of Box. How could I do this differently and better? P.S: I could not describe my real example because I don't know the correct words in English for the stuff I'm doing, so I just used a box and shapes example, but it's basically the same.


  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inherirante), I have seen upwards to 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled queries will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lamba expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which one you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == true && dog.Age == myParams.age) && (myParams.checkName == true && dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benifits of a pre-compiled quert. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dod> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generate a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal world search. - Noone likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query and then refine the results protected List<Dod> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ); return dogs; } It is particularly useful when you start displaying all the data then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on the Dog.Owner.Name So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq query when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries are small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data on the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db). 
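
As an aside, the IncludeMany(string) extension used in several of the examples above is not shown in this excerpt. Based on the earlier description (a comma-separated string of association paths), a plausible minimal sketch, assuming it simply applies the built-in ObjectQuery<T>.Include for each path, would be:

```csharp
// Plausible sketch of the IncludeMany extension referred to above; the original
// definition is not included in this excerpt, so treat this as an assumption.
using System;
using System.Data.Objects;

public static class ObjectQueryExtensions
{
    // Applies Include() once per comma-separated association path,
    // e.g. query.IncludeMany("Owner,FavoriteFood").
    public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string includes)
    {
        if (String.IsNullOrEmpty(includes))
            return query;

        foreach (string path in includes.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
        {
            query = query.Include(path.Trim());
        }

        return query;
    }
}
```
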
Singleton access The best way to deal with your context and entities accross all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { On3GoEntities entity = new On3GoEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner which ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Owner owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issue, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data your query with NoTracking can be used to make update/insert/delete in the database? If so, don't use NoTracking because associations are not tracked and will causes exceptions to be thrown. In a page where there are absolutly no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix then you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exist with tracking on. Basicly, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking It depends on the queries. Some are much more succeptible to tracking than other. I don't have a fast an easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking object. The first one is that the object is cached, so subsequent call for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntity object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanism could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happens. Also remember the piece above about having the associations connected automatically for your? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link to between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Lets say you have you Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call that very often. Second, each different pages will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first request for the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +


  • URL Rewrite – Multiple domains under one site. Part II

    - by OWScott
    I believe I have it … I’ve been meaning to put together the ultimate outgoing rule for hosting multiple domains under one site.  I finally sat down this week and setup a few test cases, and created one rule to rule them all.  In Part I of this two part series, I covered the incoming rule necessary to host a site in a subfolder of a website, while making it appear as if it’s in the root of the site.  Part II won’t work without applying Part I first, so if you haven’t read it, I encourage you to read it now. However, the incoming rule by itself doesn’t address everything.  Here’s the problem … Let’s say that we host www.site2.com in a subfolder called site2, off of masterdomain.com.  This is the same example I used in Part I.   Using an incoming rewrite rule, we are able to make a request to www.site2.com even though the site is really in the /site2 folder.  The gotcha comes with any type of path that ASP.NET generates (I’m sure other scripting technologies could do the same too).  ASP.NET thinks that the path to the root of the site is /site2, but the URL is /.  See the issue?  If ASP.NET generates a path or a redirect for us, it will always add /site2 to the URL.  That results in a path that looks something like www.site2.com/site2.  In Part I, I mentioned that you should add a condition where “{PATH_INFO} ‘does not match’ /site2”.  That allows www.site2.com/site2 and www.site2.com to both function the same.  This allows the site to always work, but if you want to hide /site2 in the URL, you need to take it one step further. One way to address this is in your code.  Ultimately this is the best bet.  Ruslan Yakushev has a great article on a few considerations that you can address in code.  I recommend giving that serious consideration.  Additionally, if you have upgraded to ASP.NET 3.5 SP1 or greater, it takes care of some of the references automatically for you. However, what if you inherit an existing application?  Or you can’t easily go through your existing site and make the code changes?  If this applies to you, read on. That’s where URL Rewrite 2.0 comes in.  With URL Rewrite 2.0, you can create an outgoing rule that will remove the /site2 before the page is sent back to the user.  This means that you can take an existing application, host it in a subfolder of your site, and ensure that the URL never reveals that it’s in a subfolder. Performance Considerations Performance overhead is something to be mindful of.  These outbound rules aren’t simply changing the server variables.  The first rule I’ll cover below needs to parse the HTML body and pull out the path (i.e. /site2) on the way through.  This will add overhead, possibly significant if you have large pages and a busy site.  In other words, your mileage may vary and you may need to test to see the impact that these rules have.  Don’t worry too much though.  For many sites, the performance impact is negligible. So, how do we do it? Creating the Outgoing Rule There are really two things to keep in mind.  First, ASP.NET applications frequently generate a URL that adds the /site2 back into the URL.  In addition to URLs, they can be in form elements, img elements and the like.  The goal is to find all of those situations and rewrite it on the way out.  Let’s call this the ‘URL problem’. Second, and similarly, ASP.NET can send a LOCATION redirect that causes a redirect back to another page.  Again, ASP.NET isn’t aware of the different URL and it will add the /site2 to the redirect.  
Form Authentication is a good example on when this occurs.  Try to password protect a site running from a subfolder using forms auth and you’ll quickly find that the URL becomes www.site2.com/site2 again.  Let’s term this the ‘redirect problem’. Solving the URL Problem – Outgoing Rule #1 Let’s create a rule that removes the /site2 from any URL.  We want to remove it from relative URLs like /site2/something, or absolute URLs like http://www.site2.com/site2/something.  Most URLs that ASP.NET creates will be relative URLs, but I figure that there may be some applications that piece together a full URL, so we might as well expect that situation. Let’s get started.  First, create a new outbound rule.  You can create the rule within the /site2 folder which will reduce the performance impact of the rule.  Just a reminder that incoming rules for this situation won’t work in a subfolder … but outgoing rules will. Give it a name that makes sense to you, for example “Outgoing – URL paths”. Precondition.  If you place the rule in the subfolder, it will only run for that site and folder, so there isn’t need for a precondition.  Run it for all requests.  If you place it in the root of the site, you may want to create a precondition for HTTP_HOST = ^(www\.)?site2\.com$. For the Match section, there are a few things to consider.  For performance reasons, it’s best to match the least amount of elements that you need to accomplish the task.  For my test cases, I just needed to rewrite the <a /> tag, but you may need to rewrite any number of HTML elements.  Note that as long as you have the exclude /site2 rule in your incoming rule as I described in Part I, some elements that don’t show their URL—like your images—will work without removing the /site2 from them.  That reduces the processing needed for this rule. Leave the “matching scope” at “Response” and choose the elements that you want to change. Set the pattern to “^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)”.  Make sure to replace ‘site2’ with your subfolder name in both places.  Yes, I realize this is a pretty messy looking rule, but it handles a few situations.  This rule will handle the following situations correctly: Original Rewritten using {R:1}{R:2} http://www.site2.com/site2/default.aspx http://www.site2.com/default.aspx http://www.site2.com/folder1/site2/default.aspx Won’t rewrite since it’s a sub-sub folder /site2/default.aspx /default.aspx site2/default.aspx /default.aspx /folder1/site2/default.aspx Won’t rewrite since it’s a sub-sub folder. For the conditions section, you can leave that be. Finally, for the rule, set the Action Type to “Rewrite” and set the Value to “{R:1}{R:2}”.  The {R:1} and {R:2} are back references to the sections within parentheses.  In other words, in http://domain.com/site2/something, {R:1} will be http://domain.com and {R:2} will be /something. If you view your rule from your web.config file (or applicationHost.config if it’s a global rule), it should look like this: <rule name="Outgoing - URL paths" enabled="true"> <match filterByTags="A" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" /> <action type="Rewrite" value="{R:1}{R:2}" /> </rule> Solving the Redirect Problem Outgoing Rule #2 The second issue that we can run into is with a client-side redirect.  This is triggered by a LOCATION response header that is sent to the client.  Forms authentication is a common example.  To reproduce this, password protect your subfolder and watch how it redirects and adds the subfolder path back in. 
Notice in my test case the extra paths: http://site2.com/site2/login.aspx?ReturnUrl=%2fsite2%2fdefault.aspx I want to remove /site2 from both the URL and the ReturnUrl querystring value.  For semi-readability, let’s do this in 2 separate rules, one for the URL and one for the querystring. Create a second rule.  As with the previous rule, it can be created in the /site2 subfolder.  In the URL Rewrite wizard, select Outbound rules –> “Blank Rule”. Fill in the following information: Name response_location URL Precondition Don’t set Match: Matching Scope Server Variable Match: Variable Name RESPONSE_LOCATION Match: Pattern ^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*) Conditions Don’t set Action Type Rewrite Action Properties {R:1}{R:2} It should end up like so: <rule name="response_location URL"> <match serverVariable="RESPONSE_LOCATION" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" /> <action type="Rewrite" value="{R:1}{R:2}" /> </rule> Outgoing Rule #3 Outgoing Rule #2 only takes care of the URL path, and not the querystring path.  Let’s create one final rule to take care of the path in the querystring to ensure that ReturnUrl=%2fsite2%2fdefault.aspx gets rewritten to ReturnUrl=%2fdefault.aspx. The %2f is the HTML encoding for forward slash (/). Create a rule like the previous one, but with the following settings: Name response_location querystring Precondition Don’t set Match: Matching Scope Server Variable Match: Variable Name RESPONSE_LOCATION Match: Pattern (.*)%2fsite2(.*) Conditions Don’t set Action Type Rewrite Action Properties {R:1}{R:2} The config should look like this: <rule name="response_location querystring"> <match serverVariable="RESPONSE_LOCATION" pattern="(.*)%2fsite2(.*)" /> <action type="Rewrite" value="{R:1}{R:2}" /> </rule> It’s possible to squeeze the last two rules into one, but it gets kind of confusing so I felt that it’s better to show it as two separate rules. Summary With the rules covered in these two parts, we’re able to have a site in a subfolder and make it appear as if it’s in the root of the site.  Not only that, we can overcome automatic redirecting that is caused by ASP.NET, other scripting technologies, and especially existing applications. Following is an example of the incoming and outgoing rules necessary for a site called www.site2.com hosted in a subfolder called /site2.  Remember that the outgoing rules can be placed in the /site2 folder instead of the in the root of the site. 
<rewrite> <rules> <rule name="site2.com in a subfolder" enabled="true" stopProcessing="true"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^(www\.)?site2\.com$" /> <add input="{PATH_INFO}" pattern="^/site2($|/)" negate="true" /> </conditions> <action type="Rewrite" url="/site2/{R:0}" /> </rule> </rules> <outboundRules> <rule name="Outgoing - URL paths" enabled="true"> <match filterByTags="A" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" /> <action type="Rewrite" value="{R:1}{R:2}" /> </rule> <rule name="response_location URL"> <match serverVariable="RESPONSE_LOCATION" pattern="^(?:site2|(.*//[_a-zA-Z0-9-\.]*)?/site2)(.*)" /> <action type="Rewrite" value="{R:1}{R:2}" /> </rule> <rule name="response_location querystring"> <match serverVariable="RESPONSE_LOCATION" pattern="(.*)%2fsite2(.*)" /> <action type="Rewrite" value="{R:1}{R:2}" /> </rule> </outboundRules> </rewrite> If you run into any situations that aren’t caught by these rules, please let me know so I can update this to be as complete as possible. Happy URL Rewriting!


  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the .NET 3 Framework, Microsoft introduced the concept of anonymous types, which provide a way to create a quick, compiler-generated types at the point of instantiation.  These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope. Creating an Anonymous Type In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties base on their names, number, types, and order given at initialization.  In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing.  Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner. To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type.  So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization.  The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, and inferring their types from the expressions they are assigned to. It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type.  This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new {MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count Limitations on Property Initialization Expressions The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function.  For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3:  4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine(“Can’t.”) }; Note that the restriction on null is just that you can’t use it directly as the expression, because otherwise how would it be able to determine the type?  You can, however, use it indirectly assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.  
In these cases, when a field, property, or variable is used alone, and you don't specify a property name assigned to it, the new property will have the same name.  For example: 1: int variable = 42; 2:  3: // creates two properties named variable and Now 4: var implicitProperties = new { variable, DateTime.Now }; Is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression.  If you use a more complex expression then the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly. ToString() on Anonymous Types One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except actual values instead of expressions as appropriate of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn't necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format. Comparing Anonymous Type Instances Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes.  For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2 we'll see that Equals() returns true because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same.   In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate to the same as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3:  4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false.  Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false).  And, since they are not equal objects (even though they have the same value) there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3:  4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode()); Using Anonymous Types Now that we've created instances of anonymous types, let's actually use them.  The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.
The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you'd expect: 1: Console.WriteLine("The point is: ({0},{1})", point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99; Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope.  The only real choices are to pass them as object or dynamic.  But really that is not the intention of using anonymous types.  If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Object – i.e. a class that contains just properties to hold data with little/no business logic) instead. Given that, why use them at all?  Couldn't you always just create a POCO to represent every anonymous type you needed?  Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead. So what do we mean by intermediate results in a local context?  Well, a classic example would be filtering down results from a LINQ expression.  For example, let's say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // … 7: } And let's say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let's say we wanted to get the transactions for each day for each user.  That is, for each day we'd want to see the transactions each user performed.  We could do this very simply with a nice LINQ expression, without the need of creating any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: var byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.)
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode() on the type you are grouping by.  Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based respectively). As we said before, it turns out that anonymous types already do these critical overrides for you.  This makes them even more convenient to use!  Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group's Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5:  6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4:  5: John on 06/19/2012 did: 6: 300.00 7:  8: Jim on 06/20/2012 did: 9: 900.00 10:  11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14:  15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18:  19: John on 06/21/2012 did: 20: -10.00 Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.  Summary Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result.  While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()) which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ
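
    To see these pieces working together end to end, here is a minimal, self-contained sketch of the same technique (a condensed variation on the article's examples, with a trimmed Transaction class and a smaller data set, not code lifted verbatim from the post):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Transaction
    {
        public string UserId { get; set; }
        public DateTime At { get; set; }
        public decimal Amount { get; set; }
    }

    public static class AnonymousTypeGroupingDemo
    {
        public static void Main()
        {
            var transactions = new List<Transaction>
            {
                new Transaction { UserId = "Jim",  At = DateTime.Today,             Amount = 2200.00m },
                new Transaction { UserId = "Jim",  At = DateTime.Today,             Amount = -1100.00m },
                new Transaction { UserId = "Jane", At = DateTime.Today.AddDays(-1), Amount = 200.00m },
            };

            // The anonymous type { UserId, Date } is the grouping key; its compiler-generated
            // Equals()/GetHashCode() compare both properties by value, so LINQ groups correctly.
            var byUserAndDay = transactions
                .GroupBy(tx => new { tx.UserId, tx.At.Date })
                .OrderBy(grp => grp.Key.Date)
                .ThenBy(grp => grp.Key.UserId);

            foreach (var group in byUserAndDay)
            {
                // group.Key.ToString() uses the generated override, e.g. { UserId = Jim, Date = ... }
                Console.WriteLine(group.Key);
                foreach (var tx in group)
                {
                    Console.WriteLine("\t{0}", tx.Amount);
                }
            }
        }
    }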

    Read the article

  • Curious about IObservable? Here’s a quick example to get you started!

    - by Roman Schindlauer
    Have you heard about IObservable/IObserver support in Microsoft StreamInsight 1.1? Then you probably want to try it out. If this is your first incursion into the IObservable/IObserver pattern, this blog post is for you! StreamInsight 1.1 introduced the ability to use IEnumerable and IObservable objects as event sources and sinks. The IEnumerable case is pretty straightforward, since many data collections are already surfacing as this type. This was already covered by Colin in his blog. Creating your own IObservable event source is a little more involved but no less exciting – here is a primer: First, let’s look at a very simple Observable data source. All it does is publish an integer in regular time periods to its registered observers. (For more information on IObservable, see http://msdn.microsoft.com/en-us/library/dd990377.aspx ). sealed class RandomSubject : IObservable<int>, IDisposable {     private bool _done;     private readonly List<IObserver<int>> _observers;     private readonly Random _random;     private readonly object _sync;     private readonly Timer _timer;     private readonly int _timerPeriod;       /// <summary>     /// Random observable subject. It produces an integer in regular time periods.     /// </summary>     /// <param name="timerPeriod">Timer period (in milliseconds)</param>     public RandomSubject(int timerPeriod)     {         _done = false;         _observers = new List<IObserver<int>>();         _random = new Random();         _sync = new object();         _timer = new Timer(EmitRandomValue);         _timerPeriod = timerPeriod;         Schedule();     }       public IDisposable Subscribe(IObserver<int> observer)     {         lock (_sync)         {             _observers.Add(observer);         }         return new Subscription(this, observer);     }       public void OnNext(int value)     {         lock (_sync)         {             if (!_done)             {                 foreach (var observer in _observers)                 {                     observer.OnNext(value);                 }             }         }     }       public void OnError(Exception e)     {         lock (_sync)         {             foreach (var observer in _observers)             {                 observer.OnError(e);             }             _done = true;         }     }       public void OnCompleted()     {         lock (_sync)         {             foreach (var observer in _observers)             {                 observer.OnCompleted();             }             _done = true;         }     }       void IDisposable.Dispose()     {         _timer.Dispose();     }       private void Schedule()     {         lock (_sync)         {             if (!_done)             {                 _timer.Change(_timerPeriod, Timeout.Infinite);             }         }     }       private void EmitRandomValue(object _)     {         var value = (int)(_random.NextDouble() * 100);         Console.WriteLine("[Observable]\t" + value);         OnNext(value);         Schedule();     }       private sealed class Subscription : IDisposable     {         private readonly RandomSubject _subject;         private IObserver<int> _observer;           public Subscription(RandomSubject subject, IObserver<int> observer)         {             _subject = subject;             _observer = observer;         }           public void Dispose()         {             IObserver<int> observer = _observer;             if (null != observer)             {                 lock (_subject._sync)                 {                
     _subject._observers.Remove(observer);                 }                 _observer = null;             }         }     } }   So far, so good. Now let’s write a program that consumes data emitted by the observable as a stream of point events in a Streaminsight query. First, let’s define our payload type: class Payload {     public int Value { get; set; }       public override string ToString()     {         return "[StreamInsight]\tValue: " + Value.ToString();     } }   Now, let’s write the program. First, we will instantiate the observable subject. Then we’ll use the ToPointStream() method to consume it as a stream. We can now write any query over the source - here, a simple pass-through query. class Program {     static void Main(string[] args)     {         Console.WriteLine("Starting observable source...");         using (var source = new RandomSubject(500))         {             Console.WriteLine("Started observable source.");             using (var server = Server.Create("Default"))             {                 var application = server.CreateApplication("My Application");                   var stream = source.ToPointStream(application,                     e => PointEvent.CreateInsert(DateTime.Now, new Payload { Value = e }),                     AdvanceTimeSettings.StrictlyIncreasingStartTime,                     "Observable Stream");                   var query = from e in stream                             select e;                   [...]   We’re done with consuming input and querying it! But you probably want to see the output of the query. Did you know you can turn a query into an observable subject as well? Let’s do precisely that, and exploit the Reactive Extensions for .NET (http://msdn.microsoft.com/en-us/devlabs/ee794896.aspx) to quickly visualize the output. Notice we’re subscribing “Console.WriteLine()” to the query, a pattern you may find useful for quick debugging of your queries. Reminder: you’ll need to install the Reactive Extensions for .NET (Rx for .NET Framework 4.0), and reference System.CoreEx and System.Reactive in your project.                 [...]                   Console.ReadLine();                 Console.WriteLine("Starting query...");                 using (query.ToObservable().Subscribe(Console.WriteLine))                 {                     Console.WriteLine("Started query.");                     Console.ReadLine();                     Console.WriteLine("Stopping query...");                 }                 Console.WriteLine("Stopped query.");             }             Console.ReadLine();             Console.WriteLine("Stopping observable source...");             source.OnCompleted();         }         Console.WriteLine("Stopped observable source.");     } }   We hope this blog post gets you started. And for bonus points, you can go ahead and rewrite the observable source (the RandomSubject class) using the Reactive Extensions for .NET! The entire sample project is attached to this article. Happy querying! Regards, The StreamInsight Team
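
    As a small addendum (not part of the original sample): if you want to watch the source without pulling in the Reactive Extensions, a hand-rolled observer is enough. The class below is a hypothetical helper that could be subscribed directly to the RandomSubject shown above:

    using System;

    // Minimal observer that simply logs whatever the observable source produces.
    sealed class ConsoleObserver : IObserver<int>
    {
        public void OnNext(int value)
        {
            Console.WriteLine("[Observer]\tReceived: " + value);
        }

        public void OnError(Exception error)
        {
            Console.WriteLine("[Observer]\tError: " + error.Message);
        }

        public void OnCompleted()
        {
            Console.WriteLine("[Observer]\tDone.");
        }
    }

    // Example usage, assuming the RandomSubject class from the post is in scope:
    // using (var source = new RandomSubject(500))
    // using (var subscription = source.Subscribe(new ConsoleObserver()))
    // {
    //     Console.ReadLine();      // let a few values flow
    //     source.OnCompleted();    // observers receive OnCompleted()
    // }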

    Read the article

  • Parsing Concerns

    - by Jesse
    If you've ever written an application that accepts date and/or time inputs from an external source (a person, an uploaded file, posted XML, etc.) then you've no doubt had to deal with parsing some text representing a date into a data structure that a computer can understand. Similarly, you've probably also had to take values from those same data structures and turn them back into their original formats. Most (all?) suitably modern development platforms expose some kind of parsing and formatting functionality for turning text into dates and vice versa. In .NET, the DateTime data structure exposes 'Parse' and 'ToString' methods for this purpose. This post will focus mostly on parsing, though most of the examples and suggestions below can also be applied to the ToString method. The DateTime.Parse method is pretty permissive in the values that it will accept (though apparently not as permissive as some other languages) which makes it pretty easy to take some text provided by a user and turn it into a proper DateTime instance. Here are some examples (note that the resulting DateTime values are shown using the RFC1123 format): DateTime.Parse("3/12/2010"); //Fri, 12 Mar 2010 00:00:00 GMT DateTime.Parse("2:00 AM"); //Sat, 01 Jan 2011 02:00:00 GMT (took today's date as date portion) DateTime.Parse("5-15/2010"); //Sat, 15 May 2010 00:00:00 GMT DateTime.Parse("7/8"); //Fri, 08 Jul 2011 00:00:00 GMT DateTime.Parse("Thursday, July 1, 2010"); //Thu, 01 Jul 2010 00:00:00 GMT Dealing With Inaccuracy While the DateTime struct has the ability to store a date and time value accurate down to the millisecond, most date strings provided by a user are not going to specify values with that much precision. In each of the above examples, the Parse method was provided a partial value from which to construct a proper DateTime. This means it had to go ahead and assume what you meant and fill in the missing parts of the date and time for you. This is a good thing, especially when we're talking about taking input from a user. We can't expect every person using our software to provide a year, day, month, hour, minute, second, and millisecond every time they need to express a date. That said, it's important for developers to understand what assumptions the software might be making and plan accordingly. I think the assumptions that were made in each of the above examples were pretty reasonable, though if we dig into this method a little bit deeper we'll find that there are a lot more assumptions being made under the covers than you might have previously known. One of the biggest assumptions that the DateTime.Parse method has to make relates to the format of the date represented by the provided string. Let's consider this example input string: '10-02-15'. To some people, that might look like '15-Feb-2010'. To others, it might be '02-Oct-2015'. Like many things, it depends on where you're from. This Is America! Most cultures around the world have adopted "little-endian" or "big-endian" formats. (Source: Date And Time Notation By Country) In this context, a "little-endian" date format would list the date parts with the least significant first while the "big-endian" date format would list them with the most significant first. For example, a "little-endian" date would be "day-month-year" and "big-endian" would be "year-month-day". It's worth noting here that ISO 8601 defines a "big-endian" format as the international standard.
While I personally prefer "big-endian" style date formats, I think both styles make sense in that they follow some logical standard with respect to ordering the date parts by their significance. Here in the United States, however, we buck that trend by using what is, in comparison, a completely nonsensical format of "month/day/year". Almost no other country in the world uses this format. I've been fortunate in my life to have done some international travel, so I've been aware of this difference for many years, but never really thought much about it. Until recently, I had been developing software for exclusively US-based audiences and remained blissfully ignorant of the different date formats employed by other countries around the world. The web application I work on is being rolled out to users in different countries, so I was recently tasked with updating it to support different date formats. As it turns out, .NET has a great mechanism for dealing with different date formats right out of the box. Supporting date formats for different cultures is actually pretty easy once you understand this mechanism. Pulling the Curtain Back On the Parse Method Have you ever taken a look at the different flavors (read: overloads) that the DateTime.Parse method comes in? In its simplest form, it takes a single string parameter and returns the corresponding DateTime value (if it can divine what the date value should be). You can optionally provide two additional parameters to this method: a 'System.IFormatProvider' and a 'System.Globalization.DateTimeStyles'. Both of these optional parameters have some bearing on the assumptions that get made while parsing a date, but for the purposes of this article I'm going to focus on the 'System.IFormatProvider' parameter. The IFormatProvider exposes a single method called 'GetFormat' that returns an object to be used for determining the proper format for displaying and parsing things like numbers and dates. This interface plays a big role in the globalization capabilities that are built into the .NET Framework. The cornerstone of these globalization capabilities can be found in the 'System.Globalization.CultureInfo' class. To put it simply, the CultureInfo class is used to encapsulate information related to things like language, writing system, and date formats for a certain culture. Support for many cultures is "baked in" to the .NET Framework and there is capacity for defining custom cultures if needed (though I've never delved into that). The details of the CultureInfo class are beyond the scope of this post, so for now let me just point out that the CultureInfo class implements the IFormatProvider interface. This means that a CultureInfo instance created for a given culture can be provided to the DateTime.Parse method in order to tell it what date formats it should expect. So what happens when you don't provide this value? Let's crack this method open in Reflector: When no IFormatProvider parameter is provided (i.e. we use the simple DateTime.Parse(string) overload), the 'DateTimeFormatInfo.CurrentInfo' is used instead. Drilling down a bit further we can see the implementation of the DateTimeFormatInfo.CurrentInfo property: From this property we can determine that, in the absence of an IFormatProvider being specified, the DateTime.Parse method will assume that the provided date should be treated as if it were in the format defined by the CultureInfo object that is attached to the current thread.
The culture specified by the CultureInfo instance on the current thread can vary depending on several factors, but if you're writing an application where a single instance might be used by people from different cultures (i.e. a web application with an international user base), it's important to know what this value is. Having a solid strategy for setting the current thread's culture for each incoming request in an internationally used ASP .NET application is obviously important, and might make a good topic for a future post. For now, let's think about the implications of not having the correct culture set on the current thread. Let's say you're running an ASP .NET application on a server in the United States. The server was set up by English speakers in the United States, so it's configured for US English. It exposes a web page where users can enter order data, one piece of which is an anticipated order delivery date. Most users are in the US, and therefore enter dates in a 'month/day/year' format. The application is using the DateTime.Parse(string) method to turn the values provided by the user into actual DateTime instances that can be stored in the database. This all works fine, because your users and your server both think of dates in the same way. Now you need to support some users in South America, where a 'day/month/year' format is used. The best case scenario at this point is a user will enter March 13, 2011 as '13/03/2011'. This would cause the call to DateTime.Parse to blow up since that value doesn't look like a valid date in the US English culture (Note: In all likelihood you might be using the DateTime.TryParse(string) method here instead, but that method behaves the same way with regard to date formats). "But wait a minute", you might be saying to yourself, "I thought you said that this was the best case scenario?" This scenario would prevent users from entering orders in the system, which is bad, but it could be worse! What if the order needs to be delivered a day earlier than that, on March 12, 2011? Now the user enters '12/03/2011'. Now the call to DateTime.Parse sees what it thinks is a valid date, but there's just one problem: it's not the right date. Now this order won't get delivered until December 3, 2011. In my opinion, that kind of data corruption is a much bigger problem than having the Parse call fail. What To Do? My order entry example is a bit contrived, but I think it serves to illustrate the potential issues with accepting date input from users. There are some approaches you can take to make this easier on you and your users: Eliminate ambiguity by using a graphical date input control. I'm personally a fan of the jQuery UI Datepicker widget. It's pretty easy to set up, can be themed to match the look and feel of your site, and has support for multiple languages and cultures. Be sure you have a way to track the culture preference of each user in your system. For a web application this could be done using something like a cookie or session state variable. Ensure that the current user's culture is being applied correctly to DateTime formatting and parsing code. This can be accomplished by ensuring that each request has the handling thread's CultureInfo set properly, or by using the Format and Parse method overloads that accept an IFormatProvider instance where the provided value is a CultureInfo object constructed using the current user's culture preference. When in doubt, favor formats that are internationally recognizable.
Using the string '2010-03-05' is likely to be recognized as March 5, 2010 by users from most (if not all) cultures. Favor standard date format strings over custom ones. So far we've only talked about turning a string into a DateTime, but most of the same "gotchas" apply when doing the opposite. Consider this code: someDateValue.ToString("MM/dd/yyyy"); This will output the same string regardless of what the current thread's culture is set to (with the exception of some cultures that don't use the Gregorian calendar system, but that's another issue altogether). For displaying dates to users, it would be better to do this: someDateValue.ToString("d"); This standard format string of "d" will use the "short date format" as defined by the culture attached to the current thread (or provided in the IFormatProvider instance in the proper method overload). This means that it will honor the proper month/day/year, year/month/day, or day/month/year format for the culture. Knowing Your Audience The examples and suggestions shown above can go a long way toward getting an application in shape for dealing with date inputs from users in multiple cultures. There are some instances, however, where taking approaches like these would not be appropriate. In some cases, the provider or consumer of date values that pass through your application are not people, but other applications (or other portions of your own application). For example, if your site has a page that accepts a date as a query string parameter, you'll probably want to format that date using an invariant date format. Otherwise, the same URL could end up evaluating to a different page depending on the user that is viewing it. In addition, if your application exports data for consumption by other systems, it's best to have an agreed upon format that all systems can use and that will not vary depending upon whether or not the users of the systems on either side prefer a month/day/year or day/month/year format. I'll look more at some approaches for dealing with these situations in a future post. If you take away one thing from this post, make it an understanding of the importance of knowing where the dates that pass through your system come from and are going to. You will likely want to vary your parsing and formatting approach depending on your audience.
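
    To make the culture pitfall above concrete, here is a short, self-contained sketch (an illustrative example added here, not code from the original post) showing how the same input string parses to two different dates under two cultures, and how the "d" standard format string adapts its output:

    using System;
    using System.Globalization;

    public static class CultureParsingDemo
    {
        public static void Main()
        {
            const string input = "12/03/2011";

            // Passing an explicit IFormatProvider removes any dependence on whatever
            // culture happens to be attached to the current thread.
            DateTime usDate = DateTime.Parse(input, new CultureInfo("en-US")); // December 3, 2011
            DateTime gbDate = DateTime.Parse(input, new CultureInfo("en-GB")); // March 12, 2011

            // The "d" standard format string honors each culture's short date pattern.
            Console.WriteLine(usDate.ToString("d", new CultureInfo("en-US")));
            Console.WriteLine(gbDate.ToString("d", new CultureInfo("en-GB")));

            // For URLs, exports, and other machine-to-machine exchanges, prefer an
            // invariant, unambiguous representation.
            Console.WriteLine(usDate.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture));
        }
    }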

    Read the article

  • Cannot delete a SharePoint web application

    - by Vijay
    What I have? I have normal web application and it has 3 site collections with name, "PDirectory". Other than this I have only Central administration web application in the farm. What I want? I want to delete that web application, "PDirectory". What problem am I facing? I am not able to delete the web application. I get below error when I try to delete it but, the site collections got deleted! Error: An object in the SharePoint administrative framework, "SPWebApplication Name=XXX Parent=SPWebService", could not be deleted because other objects depend on it. Update all of these dependants to point to null or different objects and retry this operation. The dependant objects are as follows: SPFarm Name=SharePoint_Config SPFarm Name=SharePoint_Config at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(Guid id) at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(SPPersistedObject obj) at Microsoft.SharePoint.Administration.SPPersistedObject.Delete() at Microsoft.SharePoint.Administration.SPWebApplication.Delete() at Microsoft.SharePoint.ApplicationPages.DeleteWebApplicationPage.BtnSubmit_Click(Object sender, EventArgs e) at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) Can somebody tell me how I can delete this web application? Thanks in advance!

    Read the article

  • unlinked libraries in a makefile?

    - by wyatt
    I'm trying to install a libspopc, but when I run the make I get the following output: cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c session.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c queries.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c parsing.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c format.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c objects.c cc -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL -c libspopc.c rm -f libspopc*.a ar r libspopc-0.9n.a session.o queries.o parsing.o format.o objects.o libspopc.o ar: creating libspopc-0.9n.a ranlib libspopc-0.9n.a ln -s libspopc-0.9n.a libspopc.a rm -f libspopc*.so cc -o libspopc-0.9n.so -shared session.o queries.o parsing.o format.o objects.o libspopc.o ln -s libspopc-0.9n.so libspopc.so cc -o poptest1 -Wall -Wextra -pedantic -pipe -fPIC -Os -DUSE_SSL examples/poptest1.c -L. -lspopc -lssl -lcrypto /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_globallookup': dso_dlfcn.c:(.text+0x2d): undefined reference to `dlopen' dso_dlfcn.c:(.text+0x43): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x4d): undefined reference to `dlclose' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_pathbyaddr': dso_dlfcn.c:(.text+0x8f): undefined reference to `dladdr' dso_dlfcn.c:(.text+0xe9): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_func': dso_dlfcn.c:(.text+0x491): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x570): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_bind_var': dso_dlfcn.c:(.text+0x5f1): undefined reference to `dlsym' dso_dlfcn.c:(.text+0x6d0): undefined reference to `dlerror' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_unload': dso_dlfcn.c:(.text+0x735): undefined reference to `dlclose' /usr/local/lib/libcrypto.a(dso_dlfcn.o): In function `dlfcn_load': dso_dlfcn.c:(.text+0x817): undefined reference to `dlopen' dso_dlfcn.c:(.text+0x88e): undefined reference to `dlclose' dso_dlfcn.c:(.text+0x8d5): undefined reference to `dlerror' collect2: ld returned 1 exit status make: *** [poptest1] Error 1 A quick search suggested that this was due to libdl being unlinked, though this seems unlikely in a distributed library, particularly a seemingly relatively popular one. Could anything else be causing this? And if it is due to an unlinked library, how would I go about fixing it? Thanks

    Read the article

  • Files built with a makefile are disappearing (including the binary)

    - by Reid
    I am building a program on a TS-7800(SBC), and when I run make (shown below), it appears to go through all of the steps normally, but in the end I do not get a binary file. Why is this, and how can I get my file? makefile CC= /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc # compiler options #CFLAGS= -O2 CFLAGS= -mcpu=arm9 #CFLAGS= -pg -Wall # linker LN= $(CC) # linker options LNFLAGS= #LNFLAGS= -pg # extra libraries used in linking (use -l command) LDLIBS= -lpthread # source files SOURCES= HMITelem.c Cpacket.c GPS.c ADC.c Wireless.c Receivers.c CSVReader.c RPM.c RS485.c # include files INCLUDES= Cpacket.h HMITelem.h CSVReader.h RS485.h # object files OBJECTS= HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o HMITelem: $(OBJECTS) $(LN) $(LNFLAGS) -o $@ $(OBJECTS) $(LDLIBS) .c.o: $*.c $(CC) $(CFLAGS) -c $*.c RUN : ./HMITelem #clean: # rm -f *.o # rm -f *~ Output root@ts7800:ReidTest# make /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c HMITelem.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Cpacket.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c GPS.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c ADC.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Wireless.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c Receivers.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c CSVReader.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RPM.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -mcpu=arm9 -c RS485.c /home/eclipse/ReidTest/cc/cross-toolchains/arm-none-linux-gnueabi/bin/arm-none-linux-gnueabi-gcc -o HMITelem HMITelem.o Cpacket.o GPS.o ADC.o Wireless.o Receivers.o CSVReader.o RPM.o RS485.o -lpthread Thank you.

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've setup a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like they are some base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not, git-show says it's an unknown revision. I set-up the repo as follows: git svn init http://svn.ip.addr/repo git svn fetch svn-remote My script looks like this: cd /gitsvn/dir git svn fetch svn-remote git svn push pub The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this: fatal: unable to run 'git-svn' Counting objects: 21, done. Delta compression using up to 2 threads. Compressing objects: 100% (10/10), done. Writing objects: 100% (11/11), 59.08 KiB, done. Total 11 (delta 8), reused 0 (delta 0) To /gitpub/repo.git 360faf5..a153b0d trunk -> trunk The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how to stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (happens to be a backup from an android phone), I get: me@corellia:~/Configs/$ git push origin master Counting objects: 18, done. Delta compression using up to 8 threads. Compressing objects: 100% (14/14), done. fatal: Out of memory, malloc failed MiB | 685 KiB/s error: pack-objects died of signal 13 error: failed to push some refs to 'git@dagobah:Configs' I've been searching the web, and notably found: http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get: 24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje Also, during the push if I run 'head /proc/meminfo', I get: MemTotal: 524288 kB MemFree: 289408 kB Buffers: 0 kB Cached: 0 kB SwapCached: 0 kB Active: 0 kB Inactive: 0 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 524288 kB So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command: scottj@dagobah:~$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 204800 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 204800 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited

    Read the article

  • Looking for "bitmap-vector" image editor

    - by Borek
    I used to use PhotoImpact which is no longer developed so I'm looking for a replacement. What made PhotoImpact great to me was the ability to work in both bitmap and vector modes. What I mean by that: I could have an image or screenshot and easily add arrows, text captions or shapes to it. These shapes were vector objects so I could come back to them later and amend their properties easily. Software I know of: Paint.NET is purely bitmap so please don't recommend it - layers are not enough for my needs Drawing tools in MS Office work pretty much the way I'd like - you can paste an image and then add vector objects on top of it. It just doesn't feel right to have the full-fidelity original images stored as .docx or .pptx (I don't fully trust Word/Powerpoint that they don't compress the image) I'm not sure about GIMP but if it's just "better Paint.NET" (i.e., layers but no vector objects) I'm not interested Photoshop is out of question purely because of its price tag Corel killed PhotoImpact because they already had a competing product (Paint Shop Pro) but AFAIK it lacks vector features. Any tips for PhotoImpact alternatives would be very welcome.

    Read the article

  • gitolite mac don't add new user to authorized_keys

    - by crashbus
    I installed gitolite and everything works fine for me as admin. But when I add a new user, the new user can't connect to the server. After I looked into the authorized_keys file I saw that the new user wasn't added to it. During the commit of the new public key I get some warnings: WARNING: split conf not set, gl-conf present for 'gitolite-admin' Counting objects: 6, done. Delta compression using up to 8 threads. Compressing objects: 100% (4/4), done. Writing objects: 100% (4/4), 882 bytes, done. Total 4 (delta 1), reused 0 (delta 0) remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin' remote: WARNING: ?? @staff christianwaldmann markwelch remote: sh: find: command not found remote: sh: find: command not found remote: sh: sort: command not found remote: sh: find: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found remote: sh: find: command not found remote: sh: find: command not found How can I fix this so that gitolite automatically adds the new user to authorized_keys?

    Read the article

  • GitHub - commit local changes in local branch to remote branch

    - by user62046
    I use Git Shell in Windows 7, working in a branch named Save-Rotation. Then I used git push origin Save-Rotation to commit the changes to remote. The result is posted at the end. It seems good. But when I went to my repository in GitHub site, which is https://github.com/chiapas/sumatrapdf/tree/Save-Rotation I can't see any change in the repository tree or commit tree. How can I know if the commit (to remote) is successful, and why the repository page is not updated? Here is the result in command-line C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation]> git push origin Save-R otation Counting objects: 167, done. Delta compression using up to 8 threads. Compressing objects: 100% (18/18), done. Writing objects: 100% (119/119), 27.43 KiB, done. Total 119 (delta 101), reused 119 (delta 101) To https://github.com/chiapas/sumatrapdf * [new branch] Save-Rotation -> Save-Rotation C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]> git push o rigin Save-Rotation Everything up-to-date C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]>

    Read the article

  • OAM OVD integration - Error Encounterd while performance test "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on OAM OVD integration for one of my client, I have been involved in the performance test of the products wherein I encountered OAM authentication failures while talking to OVD during heavy load. OAM logs revealed the following: oracle.security.am.common.policy.common.response.ResponseException: oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException: OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.; remaining name 'ou=people,dc=oracle,dc=com' at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271) ... During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes.The default value is 2 seconds as can be seen from the exception, "2000ms". While troubleshooting the issue, it was found that we can increase the ldap read timeout in oam-config.xml.  For reference, the attribute to add in the oam-config.xml file is: <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting> However it is not recommended to increase the time out unless it is absolutely necessary and ensure that back-end directory servers are working fine. Rather I took the path of tuning OVD in the following manner: 1) Navigate to ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ………> and edit the contents of the file with the highlighted items: <category id="start-options"><data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/><data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0 -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log -XX:+UseConcMarkSweepGC -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/><data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/></category></module-data><stop timeout="120"/><ping interval="60"/></process-type> When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more. 2) Navigate to ORACLE_INSTANCE/config/OVD/ovd1 folder.Open listeners.os_xml file and perform the following changes: · Search for <ldap id=”Ldap Endpoint”…….> and point the cursor to that line. · Change threads count to 200. · Change anonymous bind to Deny. · Change workQueueCapacity to 8096. Add a new parameter <useNIO> and set its value to false viz: <useNIO>false</useNio> Snippet: <ldap version="8" id="LDAP Endpoint"> ....... .......  <socketOptions><backlog>128</backlog>         <reuseAddress>false</reuseAddress>         <keepAlive>false</keepAlive>         <tcpNoDelay>true</tcpNoDelay>         <readTimeout>0</readTimeout>      </socketOptions> <useNIO>false</useNIO></ldap> Restart OVD server. For more information on OVD tuneup refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm. Please Note: There were few patches released from OAM side for performance tune-up as well. Will provide the updates shortly !!!

    Read the article

  • Knockout.js - Filtering, Sorting, and Paging

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/knockout.js---filtering-sorting-and-paging.aspxKnockout.js is fantastic! Maybe I missed it but it appears to be missing flexible filtering, sorting, and pagination of its grids. This is a summary of my attempt at creating this functionality which has been working out amazingly well for my purposes. Before you continue, this post is not intended to teach you the basics of Knockout. They have already created a fantastic tutorial for this purpose. You'd be wise to review this before you continue. http://learn.knockoutjs.com/ Please view the full source code and functional example on jsFiddle. Below you will find a brief explanation of some of the components. http://jsfiddle.net/JTimperley/pyCTN/13/ First we need to create a model to represent our records. This model is a simple container with defined and guaranteed members. function CustomerModel(data) { if (!data) { data = {}; } var self = this; self.id = data.id; self.name = data.name; self.status = data.status; } Next we need a model to represent the page as a whole with an array of the previously defined records. I have intentionally overlooked the filtering and sorting options for now. Note how the filtering, sorting, and pagination are chained together to accomplish all three goals. This strategy allows each of these pieces to be used selectively based on the page's needs. If you only need sorting, just sort, etc. function CustomerPageModel(data) { if (!data) { data = {}; } var self = this; self.customers = ExtractModels(self, data.customers, CustomerModel); var filters = […]; var sortOptions = […]; self.filter = new FilterModel(filters, self.customers); self.sorter = new SorterModel(sortOptions, self.filter.filteredRecords); self.pager = new PagerModel(self.sorter.orderedRecords); } The code currently supports text box and drop down filters. Text box filters require defining the current 'Value' and the 'RecordValue' function to retrieve the filterable value from the provided record. Drop downs allow defining all possible values, the current option, and the 'RecordValue' as before. Once defining these filters, they are automatically added to the screen and any changes to their values will automatically update the results, causing their sort and pagination to be re-evaluated. var filters = [ { Type: "text", Name: "Name", Value: ko.observable(""), RecordValue: function(record) { return record.name; } }, { Type: "select", Name: "Status", Options: [ GetOption("All", "All", null), GetOption("New", "New", true), GetOption("Recently Modified", "Recently Modified", false) ], CurrentOption: ko.observable(), RecordValue: function(record) { return record.status; } } ]; Sort options are more simplistic and are also automatically added to the screen. Simply provide each option's name and value for the sort drop down as well as function to allow defining how the records are compared. This mechanism can easily be adapted for using table headers as the sort triggers. That strategy hasn't crossed my functionality needs at this point. var sortOptions = [ { Name: "Name", Value: "Name", Sort: function(left, right) { return CompareCaseInsensitive(left.name, right.name); } } ]; Paging options are completely contained by the pager model. Because we will be chaining arrays between our filtering, sorting, and pagination models, the following utility method is used to prevent errors when handing an observable array to another observable array. 
function GetObservableArray(array) { if (typeof(array) == 'function') { return array; }   return ko.observableArray(array); }

    Read the article

  • A Closer Look at the HiddenInput Attribute in MVC 2

    - by Steve Michelotti
    MVC 2 includes an attribute for model metadata called the HiddenInput attribute. The typical usage of the attribute looks like this (line #3 below): 1: public class PersonViewModel 2: { 3: [HiddenInput(DisplayValue = false)] 4: public int? Id { get; set; } 5: public string FirstName { get; set; } 6: public string LastName { get; set; } 7: } So if you displayed your PersonViewModel with Html.EditorForModel() or Html.EditorFor(m => m.Id), the framework would detect the [HiddenInput] attribute metadata and produce HTML like this: 1: <input id="Id" name="Id" type="hidden" value="21" /> This is pretty straightforward and allows an elegant way to keep the technical key for your model (e.g., a Primary Key from the database) in the HTML so that everything will be wired up correctly when the form is posted to the server, while of course not displaying this value visually to the end user. However, when I was giving a recent presentation, a member of the audience asked me (quite reasonably), "When would you ever set DisplayValue equal to true when using a HiddenInput?" To which I responded, "Well, it's an edge case. There are sometimes when…er…um…people might want to…um…display this value to the user." It was quickly apparent to me (and I'm sure everyone else in the room) what a terrible answer this was. I realized I needed to have a much better answer here. First off, let's look at what is produced if we change our view model to use "true" (which is equivalent to specifying [HiddenInput] since "true" is the default) on line #3: 1: public class PersonViewModel 2: { 3: [HiddenInput(DisplayValue = true)] 4: public int? Id { get; set; } 5: public string FirstName { get; set; } 6: public string LastName { get; set; } 7: } This will produce the following HTML if rendered from Html.EditorForModel() in your view: 1: <div class="editor-label"> 2: <label for="Id">Id</label> 3: </div> 4: <div class="editor-field"> 5: 21<input id="Id" name="Id" type="hidden" value="21" /> 6: <span class="field-validation-valid" id="Id_validationMessage"></span> 7: </div> The key is line #5. We get the text of "21" (which happened to be my DB Id in this instance) and also a hidden input element (again with "21"). So the question is, why would one want to use this? The best answer I've found is contained in this MVC 2 whitepaper: When a view lets users edit the ID of an object and it is necessary to display the value as well as to provide a hidden input element that contains the old ID so that it can be passed back to the controller. Well, that actually makes sense. Yes, it seems like something that would happen *rarely* but, for those instances, it would enable them easily. It's effectively equivalent to doing this in your view: 1: <%: Html.LabelFor(m => m.Id) %> 2: <%: Model.Id %> 3: <%: Html.HiddenFor(m => m.Id) %> But it's allowing you to specify it in metadata on your view model (and thereby take advantage of templated helpers like Html.EditorForModel() and Html.EditorFor()) rather than having to explicitly specify everything in your view.
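
    For a fuller picture of how the attribute plays out in a round trip, here is a hypothetical sketch (the controller, action, and property names are illustrative, not taken from the article) showing one property that stays completely hidden and one that is displayed while still being posted back:

    using System.Web.Mvc;

    public class ProductViewModel
    {
        // Rendered only as <input type="hidden">; the user never sees the key.
        [HiddenInput(DisplayValue = false)]
        public int? Id { get; set; }

        // Rendered as the current value plus a hidden input, so the old value
        // is displayed and still posted back with the form.
        [HiddenInput(DisplayValue = true)]
        public string LegacyCode { get; set; }

        public string Name { get; set; }
    }

    public class ProductController : Controller
    {
        [HttpGet]
        public ActionResult Edit()
        {
            // Html.EditorForModel() in the view honors the metadata above.
            return View(new ProductViewModel { Id = 21, LegacyCode = "A-100", Name = "Widget" });
        }

        [HttpPost]
        public ActionResult Edit(ProductViewModel model)
        {
            // Both Id and LegacyCode round-trip via the hidden inputs.
            return View(model);
        }
    }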

    Read the article

  • Stretch in multiple components using af:popup, af:region, af:panelTabbed

    - by Arvinder Singh
    Case study: I have a pop-up(dialogue) that contains a region(separate taskflow) showing a tab. The contents of this tab is in a region having a separate taskflow. The jsff page of this taskflow contains a panelSplitter which in turn contains a table. In short the components are : pop-up(dialogue) --> region(separate taskflow) --> tab --> region(separate taskflow) --> panelSplitter --> table At times the tab is not displayed with 100% width or the table in panelSplitter is not 100% visible or the splitter is not visible. Maintaining the stretch for all the components is difficult......not any more!!! Below is the solution that you can make use of in many similar scenarios. I am mentioning the major code snippets affecting the stretch and alignment. pop-up: <af:popup> <af:dialog id="d2" type="none" title="" inlineStyle="width:1200px"> <af:region value="#{bindings.PriceChangePopupFlow1.regionModel}" id="r1"/> </af:dialog> The above region is a jsff containing multiple tabs. I am showing code for a single tab. I kept the tab in a panelStretchLayout. <af:panelStretchLayout id="psl1" topHeight="300px" styleClass="AFStretchWidth"> <af:panelTabbed id="pt1"> <af:showDetailItem text="PO Details" id="sdi1" stretchChildren="first" > <af:region value="#{bindings.PriceChangePurchaseOrderFlow1.regionModel}" id="r1" binding="# {pageFlowScope.priceChangePopupBean.poDetailsRegion}" /> This "region" displays a .jsff containing a table in a panelSplitter. <af:panelSplitter id="ps1"  orientation="horizontal" splitterPosition="700"> <f:facet name="first"> <af:panelHeader text="PurchaseOrder" id="ph1"> <af:table id="md1" rows="#{bindings.PurchaseOrderVO.rangeSize}" That's it!!! We're done... Note the stretchChildren="first" attribute in the af:showDetailItem. That does the trick for us. Oracle docs say the following about stretchChildren :  Valid Values: none, first The stretching behavior for children. Acceptable values include: "none": does not attempt to stretch any children (the default value and the value you need to use if you have more than a single child; also the value you need to use if the child does not support being stretched) "first": stretches the first child (not to be used if you have multiple children as such usage will produce unreliable results; also not to be used if the child does not support being stretched)

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes. Facebook is getting ready to do a purge of “Likes” on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it’s easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don’t sweat it, that’s a good thing.

    The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who’s actually engaged with the brand. Those fake Likes aren’t doing you any good. Huge numbers may once have impressed, but it’s not fooling anybody anymore. Facebook’s selling point to marketers is the ability to use a brand’s fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count.

    Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What’s the value of a click on something a user never even saw? What’s the value of a seen but accidentally clicked ad if there’s no resulting transaction?

    Back to fake Likes, followers and views; they’re definitely for sale on numerous sites, none of which I’ll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they’re buying.

    But now, in addition to having paid good money for non-ROI numbers, there’s the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain’s Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see. Fake isn’t good, period. “Inactive” could be real followers, but if they’re real, they’re just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you’re suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama’s (70% fake according to StatusPeople) and Lady Gaga’s (71% fake) don’t mean these people knew about all those fakes or initiated them.

    Regardless, brands should realize they’re now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the “buying popularity” route of social marketing strategy. It doesn’t work and it gets you busted…a losing combination.

    Read the article

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a Windows desktop application with a tree view. Due to the lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: populate a tree view with nodes that resemble fields in a data structure; when a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure.

    The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field in the control, and (4) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each custom node has an associated custom control which extends a base abstract control class.

    Example TreeNode implementation:

        //The Tree Node Base Class
        public abstract class TreeViewNodeBase : TreeNode
        {
            public abstract CustomControlBase Control { get; }

            public TreeViewNodeBase(ExtractionField field)
            {
                UpdateControl(field);
            }

            public virtual void UpdateControl(ExtractionField field)
            {
                Control.UpdateControl(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual void SaveChanges(ExtractionField field)
            {
                Control.SaveChanges(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual string FormatValueForCaption()
            {
                return Control.FormatValueForCaption();
            }

            public virtual void UpdateCaption(string newValue)
            {
                this.Text = Caption;
                this.LongText = newValue;
            }
        }

        //The tree node implementation class
        public class ExtractionTypeNode : TreeViewNodeBase
        {
            private CustomDropDownControl control;

            public override CustomControlBase Control
            {
                get
                {
                    if (control == null)
                    {
                        control = new CustomDropDownControl();
                        control.label1.Text = Caption;
                        control.comboBox1.Items.Clear();
                        control.comboBox1.Items.AddRange(
                            Enum.GetNames(typeof(ExtractionField.ExtractionType)));
                    }
                    return control;
                }
            }

            public ExtractionTypeNode(ExtractionField field) : base(field) { }
        }

        //The custom control base class
        public abstract class CustomControlBase : UserControl
        {
            public abstract void UpdateControl(ExtractionField field);
            public abstract void SaveChanges(ExtractionField field);
            public abstract string FormatValueForCaption();
        }

        //The custom control generic implementation (view)
        public partial class CustomDropDownControl : CustomControlBase
        {
            public CustomDropDownControl()
            {
                InitializeComponent();
            }

            public override void UpdateControl(ExtractionField field)
            {
                //Nothing to do here
            }

            public override void SaveChanges(ExtractionField field)
            {
                //Nothing to do here
            }

            public override string FormatValueForCaption()
            {
                //Nothing to do here
                return string.Empty;
            }
        }

        //The custom control specific implementation
        public class FieldExtractionTypeControl : CustomDropDownControl
        {
            public override void UpdateControl(ExtractionField field)
            {
                comboBox1.SelectedIndex = comboBox1.FindStringExact(field.Extraction.ToString());
            }

            public override void SaveChanges(ExtractionField field)
            {
                field.Extraction = (ExtractionField.ExtractionType)Enum.Parse(
                    typeof(ExtractionField.ExtractionType),
                    comboBox1.SelectedItem.ToString());
            }

            public override string FormatValueForCaption()
            {
                return string.Empty;
            }
        }

    The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?
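
    A hedged usage sketch (not part of the original question; treeView1, editorPanel, saveButton, and currentField are assumed names) showing how these classes are meant to interact per the requirements above: the selected node supplies its editor control, and changes are pushed back into the data structure on save:

        // Assumes a WinForms form hosting a TreeView (treeView1), a Panel (editorPanel),
        // a Button (saveButton), and the ExtractionField currently being edited (currentField).
        private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
        {
            var node = e.Node as TreeViewNodeBase;
            if (node == null) return;

            // Show the control that edits the field represented by this node.
            editorPanel.Controls.Clear();
            editorPanel.Controls.Add(node.Control);
        }

        private void saveButton_Click(object sender, EventArgs e)
        {
            var node = treeView1.SelectedNode as TreeViewNodeBase;
            if (node != null)
            {
                // Push the control's value back into the data structure and refresh the caption.
                node.SaveChanges(currentField);
            }
        }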

    Read the article
