Search Results

Search found 73044 results on 2922 pages for 'custom data attribute'.


  • How can data templates in generic.xaml get applied automatically?

    - by Thiado de Arruda
    I have a custom control that has a ContentPresenter that will have an arbitrary object set as its content. This object does not have any constraint on its type, so I want this control to display its content based on any data templates defined by the application or by data templates defined in Generic.xaml. If in an application I define some data template (without a key, because I want it to be applied automatically to objects of that type) and I use the custom control bound to an object of that type, the data template gets applied automatically. But I have some data templates defined for some types in the Generic.xaml where I define the custom control style, and these templates are not getting applied automatically. Here is the generic.xaml: If I set an object of type 'PredefinedType' as the content in the ContentPresenter, the data template does not get applied. However, it works if I define the data template in the App.xaml of the application that's using the custom control. Does anyone have a clue? I really can't assume that the user of the control will define this data template, so I need some way to tie it up with the custom control.
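
    One workaround, offered here only as a minimal sketch and not taken from the question, is to merge the theme dictionary into the custom control's own Resources so that the keyless DataTemplates defined in Generic.xaml take part in normal resource lookup. The control name and the pack URI below are assumptions:

        using System;
        using System.Windows;
        using System.Windows.Controls;

        // Hypothetical custom control; "MyControls" and the Themes/Generic.xaml URI are assumptions.
        public class AnyContentControl : ContentControl
        {
            static AnyContentControl()
            {
                DefaultStyleKeyProperty.OverrideMetadata(typeof(AnyContentControl),
                    new FrameworkPropertyMetadata(typeof(AnyContentControl)));
            }

            public AnyContentControl()
            {
                // Merging Generic.xaml into the control's Resources makes the implicit
                // (keyless) DataTemplates defined there visible to the ContentPresenter,
                // without requiring the consuming application to redefine them.
                Resources.MergedDictionaries.Add(new ResourceDictionary
                {
                    Source = new Uri("/MyControls;component/Themes/Generic.xaml", UriKind.Relative)
                });
            }
        }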

    Read the article

  • Core Data strategy using in-memory cache, or no Core Data at all?

    - by randombits
    I have a user interface where the user can check off a bunch of items from a tableview, almost like a to-do list. The items are populated from a Core Data stack. I need to be able to take all of the items they're clicking through and put them into a "temporary" shopping cart. Once they're in the shopping cart, users can go through the list and remove the items, or just submit them to a server. The thing is, the selected items are temporary, just like an internet-based shopping cart. It's not something that gets persisted once the application closes. Once the view is no longer on display, I can assume that the shopping cart is safe to discard. What's the best way to approach this? Since the user is essentially clicking on instances that map back to a Core Data entity, should I set up a different persistent store, such as an in-memory store, and add that store to my managed object context?

    Read the article

  • Help choosing the right data structure

    - by devoured elysium
    I need a data structure with the following requirements: it needs to be able to get elements by index (like a List), and I will always just add/remove elements at the end of the structure. I am inclined to use an ArrayList. In this situation, it seems to be O(1) both to read elements (they always are?), to remove elements (I only need to remove them at the end of the list) and to add (I only add to the end of the list). The only problem is that from time to time the ArrayList will incur a performance penalty when it's completely full and I need to add more elements to it. Is there any better idea? I can't think of a data structure that'd beat the ArrayList here. Thanks
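
    For what it's worth, the occasional resize cost is amortized over all appends, and it can be avoided almost entirely by reserving capacity up front when the final size is roughly known; Java's ArrayList(int initialCapacity) constructor does the same job as the C# sketch below. This example is an illustration of that idea, not code from the question:

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                const int expectedCount = 1000000;

                // Reserving capacity up front means no re-allocation/copy while appending.
                var items = new List<int>(expectedCount);

                for (int i = 0; i < expectedCount; i++)
                    items.Add(i);                     // O(1) append at the end

                int sample = items[123456];           // O(1) read by index
                items.RemoveAt(items.Count - 1);      // O(1) removal from the end

                Console.WriteLine($"{items.Count} items, sample value {sample}");
            }
        }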

    Read the article

  • Store data in Ruby on Rails without Database

    - by snowmaninthesun
    I have a few data values that I need to store in my Rails app, and I wanted to know if there are any alternatives to creating a database table just to do this simple task. Background: I'm writing some analytics and dashboard tools for my Ruby on Rails app and I'm hoping to speed up the dashboard by caching results that will never change. Right now I pull all users for the last 30 days and rearrange them so I can see the number of new users per day. It works great but takes quite a long time; in reality I should only need to calculate the most recent day and just store the rest of the array somewhere else. Where is the best place to store this array? Creating a database table seems a bit overkill, and I'm not sure that global variables are the correct answer. Is there a best practice for persisting data like this? If anyone has done anything like this before, let me know what you did and how it turned out.

    Read the article

  • Designing DAOs for data sources other than a database

    - by James P.
    Hi, Until now I've been used to using DAOs to retrieve information from databases. Other sources of data are possible though and I'm wondering if and how the pattern could be applied in general. For example, I'm now working on an application that fetches XML on the web. The XML file could be considered as a data source and the actual fetching is similar in principle to a database request. I'm not quite sure how the DAO could be structured though. Any views on the subject are welcome.
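
    One way to picture it, sketched here rather than taken from the question: keep the DAO contract identical to the database-backed version and hide the fetching and parsing inside an XML-backed implementation. The C# types and XML element names below are invented for illustration:

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public class Article
        {
            public string Id { get; set; }
            public string Title { get; set; }
        }

        // The same contract a database-backed DAO would implement.
        public interface IArticleDao
        {
            IList<Article> FindAll();
            Article FindById(string id);
        }

        // Here the "data source" is a remote XML document rather than a table.
        public class XmlArticleDao : IArticleDao
        {
            private readonly string _feedUri;

            public XmlArticleDao(string feedUri)
            {
                _feedUri = feedUri;
            }

            public IList<Article> FindAll()
            {
                // Fetch + parse plays the role that a query would play against a database.
                return XDocument.Load(_feedUri)
                    .Descendants("article")
                    .Select(x => new Article
                    {
                        Id = (string)x.Attribute("id"),
                        Title = (string)x.Element("title")
                    })
                    .ToList();
            }

            public Article FindById(string id)
            {
                return FindAll().FirstOrDefault(a => a.Id == id);
            }
        }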

    Read the article

  • Twin edges - Half edge data structure

    - by Pradeep Kumar
    I have implemented a half-edge data structure for loading 3D objects. I find that the part that assigns twin/pair edges takes the longest computation time (especially for objects which have hundreds of thousands of half edges). The reason is that I use nested loops to accomplish this. Is there a simpler and more efficient way of doing this? Below is the code which I've written. HE is the half-edge data structure, hearr is a vector containing all the half edges, vert is the starting vertex and end is the ending vertex. Thanks!!
        HE *e1, *e2;
        for (size_t i = 0; i < hearr.size(); i++) {
            e1 = hearr[i];
            for (size_t j = 1; j < hearr.size(); j++) {
                e2 = hearr[j];
                if ((e1->vert == e2->end) && (e2->vert == e1->end)) {
                    e1->twin = e2;
                    e2->twin = e1;
                }
            }
        }
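
    A common alternative, shown here only as a sketch (in C# rather than the original C++): index every half-edge by its (start, end) vertex pair in a hash table, then look up the reversed pair once per edge. That is O(n) on average instead of the nested-loop O(n²). The type and member names below are assumptions:

        using System.Collections.Generic;

        public class HalfEdge
        {
            public int Vert;        // index of the starting vertex
            public int End;         // index of the ending vertex
            public HalfEdge Twin;   // opposite half-edge, filled in below
        }

        public static class TwinLinker
        {
            public static void AssignTwins(IList<HalfEdge> edges)
            {
                // Key each half-edge by its (start, end) vertex pair.
                var byEndpoints = new Dictionary<(int Start, int End), HalfEdge>(edges.Count);
                foreach (var e in edges)
                    byEndpoints[(e.Vert, e.End)] = e;

                // A twin is simply the half-edge running the opposite way.
                foreach (var e in edges)
                {
                    if (byEndpoints.TryGetValue((e.End, e.Vert), out var twin))
                    {
                        e.Twin = twin;
                        twin.Twin = e;
                    }
                }
            }
        }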

    Read the article

  • Programming DataEntry&Forms: Population of Official Common Data Lists

    - by rlb.usa
    As a programmer of data-entry forms of all kinds, I often find myself making fields for things like Country and State. Consider: perhaps a list of the 50 United States' names is an easy thing to find (does one include DC?), but the countries are not. Nearly every site you find has a differing list, with all of the political goings-on over the years, and they become outdated quickly. What's the best practice regarding population of these kinds of lists? Is there an official list somewhere that one uses to populate these kinds of formal/official fields? Where do you get this data from, when it's not exactly specified in the specs?

    Read the article

  • Store image in Core Data and Retina Display?

    - by shani
    Hi, I have an app that has hundreds of words with 3-4 images for each word. I have two versions of each image, one for iOS 3 and one for Retina display. I wish to save the images as data and connect them to the appropriate word so it will be easy to pull them later. My question is: how do I get the suitable size? It works great with the @2x suffix when you get the image from the app file system, but how is it supposed to work when I get it from data? Thanks, shani

    Read the article

  • Entity Framework 4 & WCF Data Service: N:M mapping

    - by JJO
    I have three tables in my database: An A table, a B table, and a many-to-many ABMapping table. For simplicity, A and B are keyed with identity columns; ABMapping has just two columns: AId and BId. I built an Entity Framework 4 model from this, and it did correctly identify the N:M mapping between A and B. I then built a WCF Data Service based on this EF model. I'm trying to consume this WCF Data Service. Unfortunately, I can't figure out how to get a mapping between As and Bs to map back to the database. I've tried something like this: A a = new A(); B b = new B(); a.Bs.Add(b); connection.SaveChanges(); But this doesn't seem to have worked. Any clues? What am I missing?
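
    For reference, and as an assumption about the cause rather than something stated in the question: with the WCF Data Services client library, adding the object to the local navigation collection is not enough on its own; the link also has to be reported to the DataServiceContext (for example via AddLink) before SaveChanges will send it. A hedged sketch, where A, B, the Bs collection and the helper names stand in for the generated service-reference code:

        using System.Data.Services.Client;   // WCF Data Services client library

        // Sketch only: A, B, the Bs collection and the AddTo*/ServiceContext names stand in
        // for the code generated from the service reference in the question.
        static class ManyToManyExample
        {
            public static void LinkAtoB(ServiceContext connection)
            {
                var a = new A();
                var b = new B();

                connection.AddToAs(a);              // generated AddTo* helpers; names are assumptions
                connection.AddToBs(b);

                a.Bs.Add(b);                        // keeps the local object graph consistent
                connection.AddLink(a, "Bs", b);     // reports the many-to-many link to the context

                connection.SaveChanges();
            }
        }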

    Read the article

  • The data structure of libev watchers

    - by changchang
    Libev uses three data structures to store different watchers. Heap: for watchers that are sorted by time, such as ev_timer and ev_periodic. Linked list: such as ev_io, ev_signal, ev_child, etc. Array: such as ev_prepare, ev_check, ev_async, etc. There is no doubt about using a heap to store the timer watchers, but what is the criterion for choosing between a linked list and an array? The data structure that stores ev_io watchers seems a little complex: it is first an array with fd as its index, and each element of the array is a linked list of ev_io watchers. Is the reason that it is more convenient to allocate space for the array when its elements are linked lists? Or is it just because insert and remove operations are more frequent for ev_io, while ev_prepare seems more stable? Or are there other reasons?

    Read the article

  • Where does one get data like Country:(list) State:(list)

    - by rlb.usa
    As a programmer of data-entry forms of all kinds, I often find myself making fields for things like Country: <choose from list>, State: <choose from list>, Race/Ethnicity: <choose from list>. Consider: perhaps a list of the 50 United States' names is an easy thing to find (does one include DC?), but the countries are not. Nearly every site you find has a differing list, with all of the political goings-on over the years, and they become outdated quickly. What's the best/common practice regarding population of these kinds of lists? Where does this data come from if it's not given in the specs?
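
    Not an authoritative answer, but the usual official source for countries is ISO 3166, and one pragmatic option on .NET is to derive the list from the framework's own culture data rather than hard-coding it, so it stays reasonably current with OS updates. A small C# sketch of that idea:

        using System;
        using System.Globalization;
        using System.Linq;

        class CountryList
        {
            static void Main()
            {
                // Build a distinct country list from every specific culture known to the runtime.
                var countries = CultureInfo.GetCultures(CultureTypes.SpecificCultures)
                    .Select(ci => new RegionInfo(ci.Name))
                    .GroupBy(ri => ri.TwoLetterISORegionName)   // ISO 3166-1 alpha-2 code
                    .Select(g => g.First())
                    .OrderBy(ri => ri.EnglishName)
                    .ToList();

                foreach (var region in countries)
                    Console.WriteLine($"{region.TwoLetterISORegionName}  {region.EnglishName}");
            }
        }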

    Read the article

  • How to copy a structure with pointers to data inside (so as to copy both the pointers and the data they point to)?

    - by Kabumbus
    So I have a structure like
        struct GetResultStructure {
            int length;
            char* ptr;
        };
    I need a way to make a full copy of it, meaning the copy should be a structure whose new ptr points to a copy of the data from the original structure. Is it possible at all? I mean, any structure I have which contains pointers will have some fields with their lengths; I need a function that would copy my structure, copying all pointers and the data they point to, given an array of lengths... Any cool Boost function for it? Or any way to create such a function?

    Read the article

  • Manipulate data for scaling

    - by user1487000
    I have this data:
        Game 1: 7.0/10.0, Reviewed: 1000 times
        Game 2: 7.5/10.0, Reviewed: 3000 times
        Game 3: 8.9/10.0, Reviewed: 140,000 times
        Game 4: 10.0/10.0, Reviewed: 5 times
        ...
    I want to manipulate this data in a way that makes each rating reflective of how many times it has been reviewed. For example, Game 3 should have a little heavier weight than Game 4, since it has been reviewed far more, and Game 2's 7.5 should be weighted more than Game 1's 7.0. Is there a proper function to do this scaling? In such a way that ScaledGameRating = OldGameRating * (some exponential function?)
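
    One standard approach, offered as a suggestion rather than anything from the question, is a Bayesian (weighted) average in the style of IMDB's Top 250: each rating is pulled toward the overall mean, and the pull fades as the review count grows, so a 10.0 based on 5 reviews ends up well below an 8.9 based on 140,000. A small C# sketch, where the overall mean and the threshold are assumed tuning values:

        using System;

        class RatingScaler
        {
            // Weighted rating: WR = (v / (v + m)) * R + (m / (v + m)) * C
            //   R = the game's raw rating, v = its review count,
            //   C = mean rating across all games, m = review count at which R starts to dominate.
            static double WeightedRating(double r, long v, double c, long m)
            {
                return (v / (double)(v + m)) * r + (m / (double)(v + m)) * c;
            }

            static void Main()
            {
                const double overallMean = 7.2;   // assumed mean across the whole catalogue
                const long threshold = 1000;      // assumed tuning knob

                Console.WriteLine(WeightedRating(7.0, 1000, overallMean, threshold));    // Game 1
                Console.WriteLine(WeightedRating(7.5, 3000, overallMean, threshold));    // Game 2
                Console.WriteLine(WeightedRating(8.9, 140000, overallMean, threshold));  // Game 3
                Console.WriteLine(WeightedRating(10.0, 5, overallMean, threshold));      // Game 4: pulled most of the way back toward the mean
            }
        }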

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs monitor their indexes for fragmentation and deal with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let's see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB.

    Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:
    - If the file growth is lower than 64MB then 4 VLFs are created
    - If the growth is between 64MB and 1GB then 8 VLFs are created
    - If the growth is greater than 1GB then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let's add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:
    - Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    - Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    - Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *
    * – If you don't know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.
    This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created
    By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as it will catch any sneaky auto grows that take place and let you know about them right away.

    Hassle-free monitoring of VLFs
    If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure
    I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • How to fetch distinct values in Core Data?

    - by Andy
    So in looking through Core Data Snippets, I found the following code:
        ...
        [request setEntity:entity];
        [request setResultType:NSDictionaryResultType];
        [request setReturnsDistinctValues:YES];
        [request setPropertiesToFetch:[NSArray arrayWithObject:@"<#Attribute name#>"]];
        // Execute the fetch
        NSError *error;
        id requestedValue = nil; // WTF? This isn't defined or used anywhere
        NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error];
        if (objects == nil) {
            // handle the error
        }
    This is great and seems perfect for what I need...but how does one actually use it? I assume since it's returning dictionaries, I need a key to get at the values - but where's the key defined? Is that the "id requestedValue = nil" line? If so, how does "requestedValue" become the key? Xcode gives me a compiler warning about an unused variable at the "requestedValue" declaration. I feel like I'm missing something here. Thanks in advance for any assistance you can offer.

    Read the article

  • Where to put data management rules for complex data validation in ASP.NET MVC?

    - by TheRHCP
    Hello, I am currently working on an ASP.NET MVC 2 project. This is the first time I am working on a real MVC web application. The ASP.NET MVC website really helped me to get started quickly, but I still have some gaps in my knowledge concerning data model validation. My problem is that I do not really know where to put the data management rules for my populated model when it comes to complex validation. For example, validating a string field with a regex is quite easy; I know that I just have to decorate the field with a specific attribute, so that data management rule is implemented in the model. But if I have multiple fields that I need to validate against each other, for example multiple datetime fields that need to be correctly set following a specific time rule, where do I need to validate them? I know that I could create my own validation attributes, but sometimes validation requires a specific validation path that is too complex to be validated using attributes. This first question also leads me to a related question: is it right to validate a model in the controller? Because for the moment that is the only way I have found for complex validation. But I find this a bit dirty, and I feel it does not really fit the controller's role and is much harder to test (multiple code paths). Thanks.
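
    One common way to keep cross-field rules in the model, offered here as a hedged sketch rather than the definitive answer, is a class-level validation attribute: it receives the whole model instance, so it can compare several fields, and the controller still only needs to check ModelState.IsValid. The model and rule below are invented for illustration:

        using System;
        using System.ComponentModel.DataAnnotations;

        // Class-level attribute: IsValid receives the whole model, so it can compare fields.
        [AttributeUsage(AttributeTargets.Class)]
        public class DatesInOrderAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                var booking = value as Booking;
                if (booking == null) return true;       // nothing to validate
                return booking.Start <= booking.End;    // the cross-field rule
            }
        }

        [DatesInOrder(ErrorMessage = "The start date must be on or before the end date.")]
        public class Booking
        {
            public DateTime Start { get; set; }
            public DateTime End { get; set; }
        }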

    Read the article

  • Windows 8 Data Binding Bug - OnPropertyChanged Updates Wrong Object

    - by Andrew
    I'm experiencing some really weird behavior with data binding in Windows 8. I have a ComboBox set up like this:
        <ComboBox VerticalAlignment="Center" Margin="0,18,0,0" HorizontalAlignment="Right"
                  Height="Auto" Width="138" Background="{StaticResource DarkBackgroundBrush}"
                  BorderThickness="0"
                  ItemsSource="{Binding CurrentForum.SortValues}"
                  SelectedItem="{Binding CurrentForum.CurrentSort, Mode=TwoWay}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock HorizontalAlignment="Right" Text="{Binding Converter={StaticResource SortValueConverter}}"/>
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>
    inside a page with the DataContext set to a statically located ViewModel. When I change that ViewModel's CurrentForm property, whose setter is implemented like this...
        public FormViewModel CurrentForm
        {
            get { return _currentForm; }
            set
            {
                _currentForm = value;
                if (!_currentForm.IsLoaded)
                {
                    _currentSubreddit.Refresh.Execute(null);
                }
                RaisePropertyChanged("CurrentForm");
            }
        }
    ... something really strange happens. The previous FormViewModel's CurrentSort property is changed to the new FormViewModel's current sort property. This happens as the RaisePropertyChanged event is called, through a managed-to-native transition, with native code invoking the setter of CurrentSort on the previous FormViewModel. Does that sound like a bug in Win8's data binding? Am I doing something wrong?

    Read the article

  • Core Data Predicates with Subclassed NSManagedObjects

    - by coneybeare
    I have an AUDIO class. This audio has a SOUND_A subclass and a SOUND_B subclass. This is all done correctly and is working fine. I have another model, let's call it PLAYLIST_JOIN, and this can contain (in the real world) SOUND_A's and SOUND_B's, so we give it a relationship of AUDIO and PLAYLIST. This all works in the app. The problem I am having now is querying the PLAYLIST_JOIN table with an NSPredicate. What I want to do is find an exact PLAYLIST_JOIN item by giving it 2 keys in the predicate:
        sound_a._sound_a_id = %@ && playlist.playlist_id = %@
        sound_b.sound_b_id = %@ && playlist.playlist_id = %@
    The main problem is that because the table does not store sound_a and sound_b, but stores audio, I cannot use this syntax. I do not have the option of reorganizing sound_a and sound_b to use the same _id attribute name, so how do I do this? Can I pass a method to the predicate? Something like this:
        [audio getID] = %@ && playlist_id = %@

    Read the article

  • Missing tables on Scaffold Dynamic Data page

    - by Ben Amada
    I created a LINQ to SQL DBML for the first time. I dragged and dropped all my tables over to the designer. The tables all appear in the designer.cs file. In Global.asax, I also have model.RegisterContext() with the ScaffoldAllTables = true option. The routes are also set up. I can pull up the scaffolding page, but there's at least one table that is missing that I'm trying to get to show up. This missing table has a relationship with a child table that references it. The child table appears. When viewing data for the child table, the column that references the missing/parent table shows the numeric PK int value, rather than showing the "name". So instead of showing "Cars" it shows 1, and instead of showing "Planes" it shows 2, etc. There's another table in the DB that has the same type of structure as the missing table, and it is correctly appearing in the scaffolded tables. For this missing table, I've tried explicitly adding the ScaffoldTable attribute, to no avail. Does anyone know what would cause a table like this to not appear in the list of scaffolded tables? Thanks very much.

    Read the article

  • A Custom View Engine with Dynamic View Location

    - by imran_ku07
    Introduction:
    One of the nice features of the ASP.NET MVC framework is its pluggability. This means you can completely replace the default view engine(s) with a custom one. One of the reasons for using a custom view engine is to change the default view location, and sometimes you need to change the view location at run-time. For doing this, you can extend the default view engine(s) and then change the default view location variables at run-time. But you cannot directly change the default view location variables at run-time, because they are static and shared among all requests. In this article, I will show you how you can dynamically change the view location without changing the default view location variables at run-time.

    Description:
    Let's say you need to synchronize the view location with the controller name and controller namespace. So, instead of searching the default view location (Views/ControllerName/ViewName) to locate views, these custom view engines will search in the Views/ControllerNameSpace/ControllerName/ViewName folder. First of all, create a sample ASP.NET MVC 3 application and then add these custom view engines to your application:

        public class MyRazorViewEngine : RazorViewEngine
        {
            public MyRazorViewEngine() : base()
            {
                AreaViewLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.cshtml", "~/Areas/{2}/Views/%1/{1}/{0}.vbhtml",
                    "~/Areas/{2}/Views/%1/Shared/{0}.cshtml", "~/Areas/{2}/Views/%1/Shared/{0}.vbhtml" };
                AreaMasterLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.cshtml", "~/Areas/{2}/Views/%1/{1}/{0}.vbhtml",
                    "~/Areas/{2}/Views/%1/Shared/{0}.cshtml", "~/Areas/{2}/Views/%1/Shared/{0}.vbhtml" };
                AreaPartialViewLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.cshtml", "~/Areas/{2}/Views/%1/{1}/{0}.vbhtml",
                    "~/Areas/{2}/Views/%1/Shared/{0}.cshtml", "~/Areas/{2}/Views/%1/Shared/{0}.vbhtml" };
                ViewLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.cshtml", "~/Views/%1/{1}/{0}.vbhtml",
                    "~/Views/%1/Shared/{0}.cshtml", "~/Views/%1/Shared/{0}.vbhtml" };
                MasterLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.cshtml", "~/Views/%1/{1}/{0}.vbhtml",
                    "~/Views/%1/Shared/{0}.cshtml", "~/Views/%1/Shared/{0}.vbhtml" };
                PartialViewLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.cshtml", "~/Views/%1/{1}/{0}.vbhtml",
                    "~/Views/%1/Shared/{0}.cshtml", "~/Views/%1/Shared/{0}.vbhtml" };
            }

            protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreatePartialView(controllerContext, partialPath.Replace("%1", nameSpace));
            }

            protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreateView(controllerContext, viewPath.Replace("%1", nameSpace), masterPath.Replace("%1", nameSpace));
            }

            protected override bool FileExists(ControllerContext controllerContext, string virtualPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.FileExists(controllerContext, virtualPath.Replace("%1", nameSpace));
            }
        }

        public class MyWebFormViewEngine : WebFormViewEngine
        {
            public MyWebFormViewEngine() : base()
            {
                MasterLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.master", "~/Views/%1/Shared/{0}.master" };
                AreaMasterLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.master", "~/Areas/{2}/Views/%1/Shared/{0}.master" };
                ViewLocationFormats = new[] {
                    "~/Views/%1/{1}/{0}.aspx", "~/Views/%1/{1}/{0}.ascx",
                    "~/Views/%1/Shared/{0}.aspx", "~/Views/%1/Shared/{0}.ascx" };
                AreaViewLocationFormats = new[] {
                    "~/Areas/{2}/Views/%1/{1}/{0}.aspx", "~/Areas/{2}/Views/%1/{1}/{0}.ascx",
                    "~/Areas/{2}/Views/%1/Shared/{0}.aspx", "~/Areas/{2}/Views/%1/Shared/{0}.ascx" };
                PartialViewLocationFormats = ViewLocationFormats;
                AreaPartialViewLocationFormats = AreaViewLocationFormats;
            }

            protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreatePartialView(controllerContext, partialPath.Replace("%1", nameSpace));
            }

            protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.CreateView(controllerContext, viewPath.Replace("%1", nameSpace), masterPath.Replace("%1", nameSpace));
            }

            protected override bool FileExists(ControllerContext controllerContext, string virtualPath)
            {
                var nameSpace = controllerContext.Controller.GetType().Namespace;
                return base.FileExists(controllerContext, virtualPath.Replace("%1", nameSpace));
            }
        }

    Here, I am extending the RazorViewEngine and WebFormViewEngine classes and appending /%1 to each view location variable, so that we can replace it at run-time. I am also overriding the FileExists, CreateView and CreatePartialView methods. In each of these method implementations, I am replacing %1 with the controller namespace. Now, just register these view engines in the Application_Start method in the Global.asax.cs file:

        protected void Application_Start()
        {
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new MyRazorViewEngine());
            ViewEngines.Engines.Add(new MyWebFormViewEngine());
            ................................................
            ................................................
        }

    Now just create a controller and put this controller's view inside the Views/ControllerNameSpace/ControllerName folder and then run the application. You will find that everything works just fine.

    Summary:
    ASP.NET MVC uses convention over configuration to locate views. For many applications this convention is acceptable, but sometimes you may need to locate views at run-time. In this article, I showed you how you can dynamically locate your views by using a custom view engine. I am also attaching a sample application. Hopefully you will enjoy this article too.

    Read the article
