Search Results

Search found 43654 results on 1747 pages for 'custom method'.


  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over.

The problem with locks

There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series):

Locks block threads. The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. A blocked thread simply sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency.

Locks are slow. Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance.

Deadlocks. When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see.

So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection.

Implementation

As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method:

private void GetBucketAndLockNo(
    int hashcode, out int bucketNo, out int lockNo, int bucketCount)
{
    // the bucket number is the hashcode (without the initial sign bit)
    // modulo the number of buckets
    bucketNo = (hashcode & 0x7fffffff) % bucketCount;
    // and the lock number is the bucket number modulo the number of locks
    lockNo = bucketNo % m_locks.Length;
}
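To see how this hashcode-to-bucket-to-lock mapping plays out, here's a tiny standalone sketch. The 31-bucket table and four locks are numbers I've made up for illustration; they are not ConcurrentDictionary's defaults:

using System;

class StripedLockDemo
{
    // Fixed number of locks, as set by a concurrencyLevel-style parameter.
    static readonly object[] locks = new object[4];

    static void Main()
    {
        for (int i = 0; i < locks.Length; i++)
            locks[i] = new object();

        foreach (string key in new[] { "alpha", "beta", "gamma" })
        {
            // Same mapping as GetBucketAndLockNo above, with 31 buckets.
            int bucketNo = (key.GetHashCode() & 0x7fffffff) % 31;
            int lockNo = bucketNo % locks.Length;
            Console.WriteLine($"{key}: bucket {bucketNo}, lock {lockNo}");
        }
    }
}

Several buckets share each lock, so two threads touching different buckets usually proceed in parallel, and only collide when their buckets happen to map to the same lock.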
However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. Instead, ConcurrentDictionary uses a strict linked list on each bucket. This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets.

Modifying the dictionary

All the operations on the dictionary follow the same basic pattern:

void AlterBucket(TKey key, ...)
{
    int bucketNo, lockNo;
 1: GetBucketAndLockNo(
        key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length);
 2: lock (m_locks[lockNo])
    {
 3:     Node headNode = m_buckets[bucketNo];
 4:     Mutate the node linked list as appropriate
    }
}

For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern.

Performance issues

There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.)

Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock: two threads, each holding a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block.

At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. That would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads holding them), breaking thread-safety.
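Here's a minimal sketch of that 'take in order, release in a finally block' discipline. The method and field names are my own for illustration, not the decompiled source:

using System;
using System.Threading;

static class OrderedLocking
{
    public static void WithAllLocks(object[] locks, Action body)
    {
        int acquired = 0;
        try
        {
            // Always acquire in ascending index order, starting at index 0,
            // so no two threads can each hold a lock the other needs.
            for (int i = 0; i < locks.Length; i++)
            {
                Monitor.Enter(locks[i]);
                acquired++;   // track how many we actually hold
            }
            body();
        }
        finally
        {
            // Release exactly the locks we managed to take.
            for (int i = 0; i < acquired; i++)
                Monitor.Exit(locks[i]);
        }
    }
}

Because every thread that wants multiple locks starts from index 0, a thread that grabs lock 0 effectively queues all other whole-table operations behind it - which is exactly what the resize logic below relies on.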
Consequences of growing the array

Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method. The operation of this method can be boiled down to:

private void GrowTable(Node[] buckets)
{
    try
    {
 1:     Acquire first lock in the locks array

        // this causes any other thread trying to take out
        // all the locks to block, because the first lock in the array
        // is always the one taken out first

        // check if another thread has already resized the buckets array
        // while we were waiting to acquire the first lock
 2:     if (buckets != m_buckets)
            return;

 3:     Calculate the new size of the backing array
 4:     Node[] array = new array[size];
 5:     Acquire all the remaining locks
 6:     Re-hash the contents of the existing buckets into array
 7:     m_buckets = array;
    }
    finally
    {
 8:     Release all locks
    }
}

As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized, and exit without doing anything.

There is another case we need to consider. Looking back at the AlterBucket method above, consider the following situation:

Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers.
Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2.
Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8).
Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid.

Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here:

void AlterBucket(TKey key, ...)
{
    while (true)
    {
        Node[] buckets = m_buckets;
        int bucketNo, lockNo;
        GetBucketAndLockNo(
            key.GetHashCode(), out bucketNo, out lockNo, buckets.Length);
        lock (m_locks[lockNo])
        {
            // if the state has changed, go back to the start
            if (buckets != m_buckets)
                continue;

            Node headNode = m_buckets[bucketNo];
            Mutate the node linked list as appropriate
        }
        break;
    }
}

TryGetValue and GetEnumerator

And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. The lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything.

However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this).
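For instance, a node insertion can be published with one reference write. This is a sketch with invented names, not the real source:

class Node
{
    public readonly string Key;
    public readonly object Value;
    public readonly Node Next;

    public Node(string key, object value, Node next)
    {
        Key = key; Value = value; Next = next;
    }
}

class Bucket
{
    volatile Node head;

    // Called while holding the bucket's lock. The new node is fully
    // built first, then made visible by a single assignment, so a
    // lockless reader walking the list sees either the old list or
    // the complete new one - never a half-built node.
    public void AddFirst(string key, object value)
    {
        head = new Node(key, value, head);
    }
}

A reader traversing the bucket either sees the old head or the fully-built new one, but nothing in between.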
This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key and value removed before the node itself has been removed from the linked list).

Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary.

It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary:

Lockless: TryGetValue, GetEnumerator, the indexer getter, ContainsKey.
Takes out every lock (lockfull?): Count, IsEmpty, Keys, Values, CopyTo, ToArray.

Concurrent principles

That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming:

Partitioning: when using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions.
Ordered lock-taking: when a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks.
Lockless reads: read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection.

That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!

    Read the article

  • Adding SQL Cache Dependencies to the Loosely coupled .NET Cache Provider

    - by Rhames
This post adds SQL Cache Dependency support to the loosely coupled .NET Cache Provider that I described in the previous post (http://geekswithblogs.net/Rhames/archive/2012/09/11/loosely-coupled-.net-cache-provider-using-dependency-injection.aspx). The sample code is available on github at https://github.com/RobinHames/CacheProvider.git.

Each time we want to apply a cache dependency to a call to fetch or cache a data item, we need to supply an instance of the relevant dependency implementation. This suggests an Abstract Factory will be useful to create cache dependencies as needed. We can then use Dependency Injection to inject the factory into the relevant consumer. Castle Windsor provides a typed factory facility that will be utilised to implement the cache dependency abstract factory (see http://docs.castleproject.org/Windsor.Typed-Factory-Facility-interface-based-factories.ashx).

Cache Dependency Interfaces

First I created a set of cache dependency interfaces in the domain layer, which can be used to pass a cache dependency into the cache provider.

ICacheDependency

The ICacheDependency interface is simply an empty interface that is used as a parent for the specific cache dependency interfaces. This will allow us to place a generic constraint on the Cache Dependency Factory, and will give us a type that can be passed into the relevant Cache Provider methods.

namespace CacheDiSample.Domain.CacheInterfaces
{
    public interface ICacheDependency { }
}

ISqlCacheDependency.cs

The ISqlCacheDependency interface provides specific SQL caching details, such as a SQL command or a database connection and table. It is the concrete implementation of this interface that will be created by the factory and passed into the Cache Provider.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CacheDiSample.Domain.CacheInterfaces
{
    public interface ISqlCacheDependency : ICacheDependency
    {
        ISqlCacheDependency Initialise(string databaseConnectionName, string tableName);
        ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand);
    }
}

If we want other types of cache dependencies, such as by key or file, interfaces may be created to support these (the sample code includes an IKeyCacheDependency interface).

Modifying ICacheProvider to accept Cache Dependencies

Next I modified the existing ICacheProvider<T> interface so that cache dependencies may be passed into a Fetch method call. I did this by adding two overloads to the existing Fetch methods, which take an IEnumerable<ICacheDependency> parameter (the IEnumerable allows more than one cache dependency to be included). I also added a method to create cache dependencies. This means that the implementation of the Cache Provider will require a dependency on the Cache Dependency Factory. It is pretty much down to personal choice as to whether this approach is taken, or whether the Cache Dependency Factory is injected directly into the repository or other consumer of the Cache Provider. I think, because the cache dependency cannot be used without the Cache Provider, placing the dependency on the factory into the Cache Provider implementation is cleaner.

ICacheProvider.cs

using System;
using System.Collections.Generic;

namespace CacheDiSample.Domain.CacheInterfaces
{
    public interface ICacheProvider<T>
    {
        T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
        T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);

        IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
        IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies);

        U CreateCacheDependency<U>() where U : ICacheDependency;
    }
}

Cache Dependency Factory

Next I created the interface for the Cache Dependency Factory in the domain layer.

ICacheDependencyFactory.cs

namespace CacheDiSample.Domain.CacheInterfaces
{
    public interface ICacheDependencyFactory
    {
        T Create<T>() where T : ICacheDependency;

        void Release<T>(T cacheDependency) where T : ICacheDependency;
    }
}

I used the ICacheDependency parent interface as a generic constraint on the create and release methods in the factory interface. Now the interfaces are in place, I moved on to the concrete implementations.

ISqlCacheDependency Concrete Implementation

The concrete implementation of ISqlCacheDependency will need to provide an instance of System.Web.Caching.SqlCacheDependency to the Cache Provider implementation. Unfortunately this class is sealed, so I cannot simply inherit from it. Instead, I created an interface called IAspNetCacheDependency that will provide a Create method to create an instance of the relevant System.Web.Caching cache dependency type. This interface is specific to the ASP.NET implementation of the Cache Provider, so it should be defined in the same layer as the concrete implementation of the Cache Provider (the MVC UI layer in the sample code).

IAspNetCacheDependency.cs

using System.Web.Caching;

namespace CacheDiSample.CacheProviders
{
    public interface IAspNetCacheDependency
    {
        CacheDependency CreateAspNetCacheDependency();
    }
}

Next, I created the concrete implementation of the ISqlCacheDependency interface. This class also implements the IAspNetCacheDependency interface. This concrete implementation is also defined in the same layer as the Cache Provider implementation.

AspNetSqlCacheDependency.cs

using System.Web.Caching;
using CacheDiSample.Domain.CacheInterfaces;

namespace CacheDiSample.CacheProviders
{
    public class AspNetSqlCacheDependency : ISqlCacheDependency, IAspNetCacheDependency
    {
        private string databaseConnectionName;
        private string tableName;
        private System.Data.SqlClient.SqlCommand sqlCommand;

        #region ISqlCacheDependency Members

        public ISqlCacheDependency Initialise(string databaseConnectionName, string tableName)
        {
            this.databaseConnectionName = databaseConnectionName;
            this.tableName = tableName;
            return this;
        }

        public ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand)
        {
            this.sqlCommand = sqlCommand;
            return this;
        }

        #endregion

        #region IAspNetCacheDependency Members

        public System.Web.Caching.CacheDependency CreateAspNetCacheDependency()
        {
            if (sqlCommand != null)
                return new SqlCacheDependency(sqlCommand);
            else
                return new SqlCacheDependency(databaseConnectionName, tableName);
        }

        #endregion
    }
}

ICacheProvider Concrete Implementation

The ICacheProvider interface is implemented by the CacheProvider class. This implementation is modified to include the changes to the ICacheProvider interface.
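Before looking at the provider itself, it's worth noting what the typed factory will give us: roughly the equivalent of this hand-rolled factory. This is a sketch only - Windsor's generated implementation resolves components from the container and honours their lifestyles and Release semantics, rather than hard-coding a mapping:

using CacheDiSample.Domain.CacheInterfaces;

public class NaiveCacheDependencyFactory : ICacheDependencyFactory
{
    public T Create<T>() where T : ICacheDependency
    {
        // Assumption: a single known mapping, hard-coded for illustration.
        if (typeof(T) == typeof(ISqlCacheDependency))
            return (T)(ICacheDependency)
                new CacheDiSample.CacheProviders.AspNetSqlCacheDependency();
        throw new System.NotSupportedException(typeof(T).Name);
    }

    public void Release<T>(T cacheDependency) where T : ICacheDependency
    {
        // Nothing to do for transient, non-disposable components.
    }
}

With the typed factory facility, none of this code needs to be written or maintained by hand; registering the interface .AsFactory() is enough.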
First I needed to inject the Cache Dependency Factory into the Cache Provider:

private ICacheDependencyFactory cacheDependencyFactory;

public CacheProvider(ICacheDependencyFactory cacheDependencyFactory)
{
    if (cacheDependencyFactory == null)
        throw new ArgumentNullException("cacheDependencyFactory");

    this.cacheDependencyFactory = cacheDependencyFactory;
}

Next I implemented the CreateCacheDependency method, which simply passes on the create request to the factory:

public U CreateCacheDependency<U>() where U : ICacheDependency
{
    return this.cacheDependencyFactory.Create<U>();
}

The signature of the FetchAndCache helper method was modified to take an additional IEnumerable<ICacheDependency> parameter:

private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)

and the following code added to create the relevant System.Web.Caching.CacheDependency object for any dependencies and pass them to the HttpContext Cache:

CacheDependency aspNetCacheDependencies = null;

if (cacheDependencies != null)
{
    if (cacheDependencies.Count() == 1)
        // We know that the implementations of ICacheDependency will also implement
        // IAspNetCacheDependency, so we can use a cast here and call the
        // CreateAspNetCacheDependency() method
        aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency();
    else if (cacheDependencies.Count() > 1)
    {
        AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency();
        foreach (ICacheDependency cacheDependency in cacheDependencies)
        {
            aggregateCacheDependency.Add(((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency());
        }
        aspNetCacheDependencies = aggregateCacheDependency;
    }
}

HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);

The full code listing for the modified CacheProvider class is shown below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;
using CacheDiSample.Domain.CacheInterfaces;

namespace CacheDiSample.CacheProviders
{
    public class CacheProvider<T> : ICacheProvider<T>
    {
        private ICacheDependencyFactory cacheDependencyFactory;

        public CacheProvider(ICacheDependencyFactory cacheDependencyFactory)
        {
            if (cacheDependencyFactory == null)
                throw new ArgumentNullException("cacheDependencyFactory");

            this.cacheDependencyFactory = cacheDependencyFactory;
        }

        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, null);
        }

        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, null);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry, cacheDependencies);
        }

        public U CreateCacheDependency<U>() where U : ICacheDependency
        {
            return this.cacheDependencyFactory.Create<U>();
        }

        #region Helper Methods

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry, IEnumerable<ICacheDependency> cacheDependencies)
        {
            U value;
            if (!TryGetValue<U>(key, out value))
            {
                value = retrieveData();

                if (!absoluteExpiry.HasValue)
                    absoluteExpiry = Cache.NoAbsoluteExpiration;
                if (!relativeExpiry.HasValue)
                    relativeExpiry = Cache.NoSlidingExpiration;

                CacheDependency aspNetCacheDependencies = null;

                if (cacheDependencies != null)
                {
                    if (cacheDependencies.Count() == 1)
                    {
                        // We know that the implementations of ICacheDependency will also
                        // implement IAspNetCacheDependency, so we can use a cast here and
                        // call the CreateAspNetCacheDependency() method
                        aspNetCacheDependencies = ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency();
                    }
                    else if (cacheDependencies.Count() > 1)
                    {
                        AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency();
                        foreach (ICacheDependency cacheDependency in cacheDependencies)
                        {
                            aggregateCacheDependency.Add(
                                ((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency());
                        }
                        aspNetCacheDependencies = aggregateCacheDependency;
                    }
                }

                HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies, absoluteExpiry.Value, relativeExpiry.Value);
            }
            return value;
        }

        private bool TryGetValue<U>(string key, out U value)
        {
            object cachedValue = HttpContext.Current.Cache.Get(key);
            if (cachedValue == null)
            {
                value = default(U);
                return false;
            }
            else
            {
                try
                {
                    value = (U)cachedValue;
                    return true;
                }
                catch
                {
                    value = default(U);
                    return false;
                }
            }
        }

        #endregion
    }
}

Wiring up the DI Container

Now the implementations for the Cache Dependency are in place, I wired them up in the existing Windsor CacheInstaller. First I needed to register the implementation of the ISqlCacheDependency interface:

container.Register(
    Component.For<ISqlCacheDependency>()
        .ImplementedBy<AspNetSqlCacheDependency>()
        .LifestyleTransient());

Next I registered the Cache Dependency Factory. Notice that I have not implemented the ICacheDependencyFactory interface. Castle Windsor will do this for me by using the Typed Factory Facility.
I do need to bring the Castle.Facilities.TypedFactory namespace into scope:

using Castle.Facilities.TypedFactory;

Then I registered the factory:

container.AddFacility<TypedFactoryFacility>();

container.Register(
    Component.For<ICacheDependencyFactory>()
        .AsFactory());

The full code for the CacheInstaller class is:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using Castle.Facilities.TypedFactory;

using CacheDiSample.Domain.CacheInterfaces;
using CacheDiSample.CacheProviders;

namespace CacheDiSample.WindsorInstallers
{
    public class CacheInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(
                Component.For(typeof(ICacheProvider<>))
                    .ImplementedBy(typeof(CacheProvider<>))
                    .LifestyleTransient());

            container.Register(
                Component.For<ISqlCacheDependency>()
                    .ImplementedBy<AspNetSqlCacheDependency>()
                    .LifestyleTransient());

            container.AddFacility<TypedFactoryFacility>();

            container.Register(
                Component.For<ICacheDependencyFactory>()
                    .AsFactory());
        }
    }
}

Configuring the ASP.NET SQL Cache Dependency

There are a couple of configuration steps required to enable SQL Cache Dependency for the application and database. From the Visual Studio Command Prompt, the following commands should be used to enable cache polling of the relevant database tables:

aspnet_regsql -S <servername> -E -d <databasename> -ed
aspnet_regsql -S <servername> -E -d CacheSample -et -t <tablename>

(The -t option should be repeated for each table that is to be made available for cache dependencies.)

Finally, SQL cache polling needs to be enabled by adding the following configuration to the <system.web> section of web.config:

<caching>
  <sqlCacheDependency pollTime="10000" enabled="true">
    <databases>
      <add name="BloggingContext" connectionStringName="BloggingContext"/>
    </databases>
  </sqlCacheDependency>
</caching>

(Obviously the name and connection string name should be altered as required.)

Using a SQL Cache Dependency

Now all the coding is complete. To specify a SQL Cache Dependency, I can modify my BlogRepositoryWithCaching decorator class (see the earlier post) as follows:

public IList<Blog> GetAll()
{
    var sqlCacheDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>()
        .Initialise("BloggingContext", "Blogs");

    ICacheDependency[] cacheDependencies = new ICacheDependency[] { sqlCacheDependency };

    string key = string.Format("CacheDiSample.DataAccess.GetAll");

    return cacheProvider.Fetch(key, () =>
        {
            return parentBlogRepository.GetAll();
        },
        null, null, cacheDependencies)
        .ToList();
}

This will add a dependency on the "Blogs" table in the database. The data will remain in the cache until the contents of this table change; then the cache item will be invalidated, and the next call to the GetAll() repository method will be routed to the parent repository to refresh the data from the database.
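For completeness, bootstrapping the container and exercising the provider might look something like the sketch below. This is an assumption-laden illustration rather than code from the sample project: the Fetch call goes through HttpContext.Current.Cache, so it only works inside an ASP.NET request pipeline, and the ICacheProvider<string> usage stands in for a real domain type:

using Castle.Windsor;
using CacheDiSample.Domain.CacheInterfaces;
using CacheDiSample.WindsorInstallers;

class Bootstrap
{
    static void Run()
    {
        var container = new WindsorContainer();
        container.Install(new CacheInstaller());

        // Windsor injects the generated typed factory into CacheProvider<T>.
        var provider = container.Resolve<ICacheProvider<string>>();

        var dependency = provider.CreateCacheDependency<ISqlCacheDependency>()
            .Initialise("BloggingContext", "Blogs");

        // Must execute within an ASP.NET request (HttpContext.Current).
        string value = provider.Fetch(
            "demo.key",
            () => "expensively retrieved value",
            null, null,
            new ICacheDependency[] { dependency });
    }
}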

    Read the article

  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
Use case: This sample uses a RESTful service which contains a GET method that fetches employee details for an employee with a given employee ID, along with other methods. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile and the JSON data thus obtained is parsed and rendered in the mobile app in a table.

Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.

Steps:

Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id), that takes an employee ID as parameter and produces employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below:

http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1

Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application. Name the application JSON_SearchByEmpID and finish the wizard.

Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke the new gallery wizard on the ApplicationController project. Select the URL Connection option. In the Create URL Connection window, enter the connection name as 'conn'. For the URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK.

At this point, a connection to the REST service has been created. Since JSON data is not supported directly in the WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called 'EmployeeDC'. We will be creating a DC from this class. Add the following code to the newly created class to invoke the getEmpById method.

 1  public Employee fetchEmpDetails(){
 2      RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter();
 3      restServiceAdapter.clearRequestProperties();
 4      restServiceAdapter.setConnectionName("conn"); //URL connection created with this name
 5      restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET);
 6      restServiceAdapter.addRequestProperty("Content-Type", "application/json");
 7      restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8");
 8      restServiceAdapter.setRetryLimit(0);
 9      restServiceAdapter.setRequestURI("/getById/"+inputEmpID);
10      String response = "";
11      JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper();
12      try {
13          response = restServiceAdapter.send(""); //Invoke the GET operation
14          System.out.println("Response received!");
15          Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response);
16          return responseObject;
17      } catch (Exception e) {
18      }
19      return null;
20  }

Here, in lines 2 to 9, we create the RestServiceAdapter and set the various properties required to invoke the web service. At line 4, we are pointing to the connection 'conn' created previously. Since we want to invoke the getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id}, we update the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for the employee ID. This we will be creating in a while. As the method we are invoking is a GET operation and consumes JSON data, these properties are set in lines 5 through 7. Finally, we send the request in line 13.
In line 15, we use jsonHelper.fromJSON to convert the received JSON data to a Java object. The required Java object's structure is defined in the class Employee.java, whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee ID, name, designation etc., we will just return this parsed response (line 16) and use it to create the DC.

As mentioned previously, we would like the user to input the employee ID for which to perform the search. So, in the same class, define a variable inputEmpID which will hold the value input by the user. Generate accessors for this variable.

Lastly, we need to create the Employee class. The Employee class will define how we want to structure the JSON object received from the service. To design the Employee class, run the service's method in the browser or via the analyzer using path parameter 1. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, the Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code:

package application;

import oracle.adfmf.java.beans.PropertyChangeListener;
import oracle.adfmf.java.beans.PropertyChangeSupport;

public class Employee {
    private String dept;
    private String desig;
    private int id;
    private String name;
    private int salary;
    private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

    public void setDept(String dept) {
        String oldDept = this.dept;
        this.dept = dept;
        propertyChangeSupport.firePropertyChange("dept", oldDept, dept);
    }

    public String getDept() {
        return dept;
    }

    public void setDesig(String desig) {
        String oldDesig = this.desig;
        this.desig = desig;
        propertyChangeSupport.firePropertyChange("desig", oldDesig, desig);
    }

    public String getDesig() {
        return desig;
    }

    public void setId(int id) {
        int oldId = this.id;
        this.id = id;
        propertyChangeSupport.firePropertyChange("id", oldId, id);
    }

    public int getId() {
        return id;
    }

    public void setName(String name) {
        String oldName = this.name;
        this.name = name;
        propertyChangeSupport.firePropertyChange("name", oldName, name);
    }

    public String getName() {
        return name;
    }

    public void setSalary(int salary) {
        int oldSalary = this.salary;
        this.salary = salary;
        propertyChangeSupport.firePropertyChange("salary", oldSalary, salary);
    }

    public int getSalary() {
        return salary;
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        propertyChangeSupport.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        propertyChangeSupport.removePropertyChangeListener(l);
    }
}

Now, let us create a DC out of EmployeeDC.java. A DC as shown below is created. Now, you can design the mobile page as usual and invoke the operation of the service. To design the page, go to the ViewController project and locate adfmf-feature.xml. Create a new feature called 'SearchFeature' by clicking the plus icon. Go to the content tab and add an amx page. Call it SearchPage.amx. Remove the primary and secondary buttons as we don't need them, and rename the header. Drag and drop inputEmpID from the DC palette onto the Panel Page in the structure pane as an input text with label. Next, drop the fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form.
However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for an ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read-Only Form. This step is needed to get the required bindings. We will be deleting this form in a while. Now, from the Component palette, search for 'Table Layout'. Drag and drop this below the command button. Within the tableLayout, insert 'Row Layout' and 'Cell Format' components. The final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner.

<amx:tableLayout id="tl1" borderWidth="2" halign="center"
                 inlineStyle="vertical-align:middle;" width="100%" cellPadding="10">
  <amx:rowLayout id="rl1">
    <amx:cellFormat id="cf1" width="30%">
      <amx:outputText value="#{bindings.dept.hints.label}" id="ot7"
                      inlineStyle="color:rgb(0,148,231);"/>
    </amx:cellFormat>
    <amx:cellFormat id="cf2">
      <amx:outputText value="#{bindings.dept.inputValue}" id="ot8"/>
    </amx:cellFormat>
  </amx:rowLayout>
  <amx:rowLayout id="rl2">
    <amx:cellFormat id="cf3" width="30%">
      <amx:outputText value="#{bindings.desig.hints.label}" id="ot9"
                      inlineStyle="color:rgb(0,148,231);"/>
    </amx:cellFormat>
    <amx:cellFormat id="cf4">
      <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/>
    </amx:cellFormat>
  </amx:rowLayout>
  <amx:rowLayout id="rl3">
    <amx:cellFormat id="cf5" width="30%">
      <amx:outputText value="#{bindings.id.hints.label}" id="ot11"
                      inlineStyle="color:rgb(0,148,231);"/>
    </amx:cellFormat>
    <amx:cellFormat id="cf6">
      <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/>
    </amx:cellFormat>
  </amx:rowLayout>
  <amx:rowLayout id="rl4">
    <amx:cellFormat id="cf7" width="30%">
      <amx:outputText value="#{bindings.name.hints.label}" id="ot13"
                      inlineStyle="color:rgb(0,148,231);"/>
    </amx:cellFormat>
    <amx:cellFormat id="cf8">
      <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/>
    </amx:cellFormat>
  </amx:rowLayout>
  <amx:rowLayout id="rl5">
    <amx:cellFormat id="cf9" width="30%">
      <amx:outputText value="#{bindings.salary.hints.label}" id="ot15"
                      inlineStyle="color:rgb(0,148,231);"/>
    </amx:cellFormat>
    <amx:cellFormat id="cf10">
      <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/>
    </amx:cellFormat>
  </amx:rowLayout>
</amx:tableLayout>

The values used in the output text of the table come from the bindings obtained from the ADF form created earlier. As we have used the bindings and don't need the form anymore, let us delete the form.

One last thing before we deploy. When the user changes the employee ID, we want to clear the table contents. For this, we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean. For this, click on the New button associated with the method. Call the method 'empIDChange'. Open myClass.java and write the below code in empIDChange():

public void empIDChange(ValueChangeEvent valueChangeEvent) {
    // Add event code here...
    // Resetting the values to blank values when the employee id changes
    AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();
    ValueExpression ve =
        AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class);
    ve.setValue(adfELContext, "");
    ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class);
    ve.setValue(adfELContext, "");
    ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class);
    ve.setValue(adfELContext, "");
    ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class);
    ve.setValue(adfELContext, "");
    ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class);
    ve.setValue(adfELContext, "");
}

That's it. Deploy the application to an Android emulator or device. Some snippets from the app.

    Read the article

  • Tomcat 7 ClassNotFoundException: org.apache.tomcat.JarScanner

    - by CodesLikeA_Mokey
When I try to start my app I get this error. I have verified that JarScanner exists in the CATALINA_HOME directory, so I don't know why it can't find it. Is there anything that could lead to this issue starting my app? I noticed that earlier in the same log I find this:

[Loaded org.apache.tomcat.JarScanner from file:/usr/local/apache-tomcat-7.0.30/lib/tomcat-api.jar]

Here is the actual error further down:

Oct 8, 2012 1:24:01 PM org.apache.catalina.core.ContainerBase addChildInternal
SEVERE: ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/client]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:650)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1582)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NoClassDefFoundError: org/apache/tomcat/JarScanner
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
    at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:295)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    at org.apache.catalina.core.StandardContext.getJarScanner(StandardContext.java:1025)
    at org.apache.catalina.startup.ContextConfig.processJarsForWebFragments(ContextConfig.java:1911)
    at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1265)
    at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:878)
    at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:369)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5173)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 11 more
Caused by: java.lang.ClassNotFoundException: org.apache.tomcat.JarScanner
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    ... 33 more
Oct 8, 2012 1:24:01 PM org.apache.catalina.startup.HostConfig deployDescriptor
SEVERE: Error deploying configuration descriptor /software/sirsi/tomcat_sbox7/conf/Catalina/localhost/client.xml
java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/client]]
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:904)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:650)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1582)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Oct 8, 2012 1:24:01 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor /software/sirsi/tomcat_sbox7/conf/Catalina/localhost/custom.xml
Oct 8, 2012 1:24:01 PM org.apache.catalina.core.StandardContext setPath
WARNING: A context path must either be an empty string or start with a '/'. The path [custom] does not meet these criteria and has been changed to [/custom]
Oct 8, 2012 1:24:01 PM org.apache.catalina.startup.SetContextPropertiesRule begin
WARNING: [SetContextPropertiesRule]{Context} Setting property 'debug' to '0' did not find a matching property.
Oct 8, 2012 1:24:01 PM org.apache.catalina.core.ContainerBase addChildInternal
SEVERE: ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/custom]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:650)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1582)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NoClassDefFoundError: org/apache/tomcat/util/scan/StandardJarScanner
    at org.apache.catalina.core.StandardContext.getJarScanner(StandardContext.java:1025)
    at org.apache.catalina.startup.ContextConfig.processJarsForWebFragments(ContextConfig.java:1911)
    at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1265)
    at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:878)
    at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:369)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5173)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 11 more
Oct 8, 2012 1:24:01 PM org.apache.catalina.startup.HostConfig deployDescriptor
SEVERE: Error deploying configuration descriptor /software/sirsi/tomcat_sbox7/conf/Catalina/localhost/custom.xml
java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/custom]]
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:904)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:618)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:650)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1582)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)

    Read the article

  • Any Way to monitor JMX Servers with NetIq ?

    - by Carlos
I ask this question after a search on the site; there are many JMX questions, but I think none involving NetIQ. In the NetIQ documentation, there's no "module" to check JMX, but I have seen that you can make "custom" modules and "custom" scripts, in Perl and VBScript. The question is whether anyone with more experience than me on NetIQ has made any custom script or module to check JMX. I looked on TechNet too, for a JMX API for the Windows shell or PowerShell, with no luck. Of course, JMX is much more powerful, but what I really need is to "monitor" some values inside NetIQ, because we can't install Nagios, Cacti, Hyperic or any other monitoring software but NetIQ. Thanks.

    Read the article

  • AWS forwarding email to a gmail account

    - by user2433617
So I registered a domain name. I then set up a static webpage using AWS (S3 and Route 53). Now what I want to do is forward any email I get at that custom domain name to a personal email address I have set up. I can't seem to figure out how to do this. I have these record sets already: A, NS, SOA, CNAME. I believe I have to set up an MX record, but I'm not sure how. Say I have the custom domain [email protected] and I want to redirect all email to [email protected]. The personal email account is a Gmail (Google Accounts) email address. Thanks.

    Read the article

  • Apache mod_rewrite redirect with prefix

    - by Marc
I am a newbie with Apache's mod_rewrite and I'm having some difficulties getting it to do what I want. In my static directory, I have some JavaScript files (.js) with two kinds of filenames:

xxxx.js, which is the standard file name
AT_xxxx.js (with prefixed filename), which has been duplicated from the standard file but also contains my customizations

I would like to parse requests for each standard requested JavaScript file (xxxx.js) to check if a customized file exists (AT_xxxx.js), including in all sub-directories. In that case, use the custom file instead of the standard file (perhaps by internal redirect). I tried to figure this out for hours but something is still wrong. Note: also, I don't know how to find custom files in sub-directories.

DocumentRoot "/data/apps/dev0/custom/my_static"

<filesMatch "\\.(js)$">
    Options +FollowSymLinks
    RewriteEngine on
    RewriteCond %{DOCUMENT_ROOT}/AT_$1.js -f
    RewriteRule ^(.*[^/])/?$ %{DOCUMENT_ROOT}/AT_$1.js [QSA,L]
</filesMatch>

    Read the article

  • ubiquity automatic install stops at keyboard layout chooser

    - by badgerhill
I am trying to automate an Ubuntu (12.04.1 64-bit) installation using Ubiquity & preseed on a desktop live CD. It almost works fine. I edited the txt.cfg and added:

label unattended
  menu label ^Unattended installation
  kernel /casper/vmlinuz
  append file=/cdrom/preseed/custom.seed boot=casper initrd=/casper/initrd.lz quiet splash noprompt --

This is my custom.seed file. The problem is that the installer shows the keyboard layout chooser and I have to click next. The correct language & keyboard layout (German) are already preselected. What am I missing, or what's wrong in my custom.seed file, to automate the next click?

    Read the article


  • Word 2003 set styles won't convert over when opened in Word 2010

    - by Candy
If I have set styles in a Word 2003 document, how can I get the set styles to appear when the document is opened in Word 2010? When I open a document that was created using 2003 (and has set custom styles) in 2010, it converts everything to the 2010 styles. When I try selecting Change Styles > Style Set > Word 2003, it doesn't pick up my custom styles; it only picks up the default 2003 styles. I want to be able to keep my custom styles that were created in the template using 2003.

    Read the article

  • Notepad++ shortcuts not getting copied to the second computer where I want to replicate my settings

    - by Dragos Toader
In Notepad++, there's a way to assign your custom shortcuts by going to Run > Modify Shortcut/Delete Command... This brings up the Shortcut Mapper.

I set up my custom shortcuts on Computer 1. I then installed Notepad++ with the same install settings and plugins on Computer 2. I then created a zip archive of my Notepad++ folder in Program Files on Computer 1 and overwrote the Notepad++ folder in Program Files on Computer 2 with this archive. My custom shortcuts did not come across.

I thought that the shortcuts were saved in C:\Program Files\Notepad++\shortcuts.xml. I compared C:\Program Files\Notepad++\shortcuts.xml from Computer 1 with the same file on Computer 2, and the two files are identical. Why then are the shortcuts not coming across to Computer 2?

Computer 1 is Windows XP. Computer 2 is Windows Server 2008 R2.

    Read the article

  • How to set the relevance of emails in Outlook?

    - by Grastveit
Emails in Outlook have a relevance field that can be displayed as a column in the inbox. How does one set it?

Edit: More precisely, how are the values of custom e-mail fields changed through the Outlook 2007 GUI? Here, relevance and a custom field 'Score' are shown in my inbox, but all emails have blank values. The context menu of emails gives access to "Message Options...". I was hoping the custom fields would be available there.
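(For reference: I know a custom field like 'Score' can be set programmatically through the Outlook object model - a rough sketch is below, assuming Outlook 2007+ and a project reference to Microsoft.Office.Interop.Outlook - but I'm after a way to do it through the GUI.)

using Outlook = Microsoft.Office.Interop.Outlook;

class ScoreSetter
{
    static void Main()
    {
        var app = new Outlook.Application();
        Outlook.MAPIFolder inbox = app.Session.GetDefaultFolder(
            Outlook.OlDefaultFolders.olFolderInbox);

        foreach (object item in inbox.Items)
        {
            if (item is Outlook.MailItem mail)
            {
                // Find the custom field, adding it if it's missing.
                Outlook.UserProperty prop = mail.UserProperties.Find("Score");
                if (prop == null)
                    prop = mail.UserProperties.Add(
                        "Score", Outlook.OlUserPropertyType.olNumber);

                prop.Value = 42;   // illustrative value
                mail.Save();
            }
        }
    }
}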

    Read the article

  • ErrorDocument 404 not found in non-existent subdomain

    - by Question Overflow
I am trying to get the Apache server to issue a custom 404 error for invalid subdomains. The following is the relevant part of the httpd configuration:

Alias /err/ "/var/www/error/"
ErrorDocument 404 /err/HTTP_NOT_FOUND.html.var

<VirtualHost *:80>
    # the default virtual host
    ServerName site_not_found
    Redirect 404 /
</VirtualHost>

<VirtualHost *:80>
    ServerName example.com
    ServerAlias ??.example.com
</VirtualHost>

What I get instead is this:

Not Found
The requested URL / was not found on this server.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

I don't understand why a URL to non-existent-subdomain.example.com produces a 404 error without the custom error page as shown above, while a URL to eg.example.com/non-existent-file produces the full custom 404 error. Can someone advise on this? Thanks.

    Read the article

  • Customising error bars

    - by itid
Hello. I've been told I HAVE TO have custom error bars for an assignment I have to hand in. OK: I have a scatter graph with twelve points, and the error for each one is different. It's the same plus and minus, but different for each point. The twelve error values are sitting nicely in a column. I have been told I can reference that column in "custom error bars" simply by indicating the range of values, like F2-F14. However, I get an error message every time. When I open Custom Error Bars, it is set up like this: ={}, obviously waiting for a value. The error message says to remove the equals sign. How should I enter the value range, please?

    Read the article

  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules, and the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (eg, not possible to save/update the User object), but otherwise it is used by the application the same way. Another data storage type might be a web service, or an external database. There are two main ways we are looking at implementing this, and me and a co-worker disagree on a fundamental level which is correct. I'd like some advice on which one is the best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective view points here. Business objects are base classes, and data storage objects inherit business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance to that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated. Business objects encapsulate data storage objects. In this case, business objects are directly used by client application. Client application passes along base connection information to business layer. Decision about which data storage method a given object uses is made by business object code. Connection information would be a chunk of data taken from a config file (client app does not really know/care about details of it), which may be a single connection string for a database, or several pieces connection strings for various data storage types. Additional data storage connection types could also be read from another spot - eg, a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if data storage method for a given object changes. Business objects are base classes, data source objects inherit from business objects. Client code deals primarily with base classes. This is similar to the first method, but client code declares variables of the base business object types, and Load()/Create()/etc static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is the decision about which data storage object to use for a given business object is made by the business layer, not the client code. 
I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.
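For concreteness, here is a minimal C# sketch of the two competing shapes. All of the type names (User, DatabaseUser, IAccountStore, and so on) are hypothetical - the question is deliberately language-agnostic - and the bodies are stubs:

// Option 1: storage classes inherit the business object.
// The client picks the concrete storage type at compile time.
public abstract class User // business rules live here
{
    public string Name { get; set; }
    public bool IsValid() { return !string.IsNullOrEmpty(Name); }
}

public class DatabaseUser : User
{
    public DatabaseUser(string connectionString) { /* client must supply this */ }
    public void Save() { /* write to the database */ }
}

public class LdapUser : User
{
    public LdapUser(string ldapUrl) { /* client must supply this */ }
    // No Save(): the reduced capability is visible at compile time.
}

// Option 2: the business object encapsulates a swappable store.
// The client sees only Account; the backend comes from configuration.
public interface IAccountStore
{
    void Load(Account account, long id);
    void Save(Account account);
}

public class Account
{
    private readonly IAccountStore store; // chosen by the business layer at runtime
    public Account(IAccountStore store) { this.store = store; }
    public string Name { get; set; }
    public void Save() { store.Save(this); } // client never sees the backend
}

With the first shape, switching a User from database-backed to LDAP-backed means recompiling every client that says new DatabaseUser(...); with the second, it is a configuration change inside the business layer - which is essentially the trade-off the question describes.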

    Read the article

  • JPA thinks I'm deleting a detached object

    - by Steve
    So, I've got a DAO that I use to load and save my domain objects using JPA. I finally managed to get the transaction stuff working (with a bunch of help from the folks here...), and now I've got another issue. In my test case, I call my DAO to load a domain object with a given id, check that it got loaded, and then call the same DAO to delete the object I just loaded. When I do that I get the following:

java.lang.IllegalArgumentException: Removing a detached instance mil.navy.ndms.conops.common.model.impl.jpa.Group#10
    at org.hibernate.ejb.event.EJB3DeleteEventListener.performDetachedEntityDeletionCheck(EJB3DeleteEventListener.java:45)
    at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:108)
    at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:74)
    at org.hibernate.impl.SessionImpl.fireDelete(SessionImpl.java:794)
    at org.hibernate.impl.SessionImpl.delete(SessionImpl.java:772)
    at org.hibernate.ejb.AbstractEntityManagerImpl.remove(AbstractEntityManagerImpl.java:253)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:600)
    at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:180)
    at $Proxy27.remove(Unknown Source)
    at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao.delete(GroupDao.java:499)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:600)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy28.delete(Unknown Source)
    at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDaoTest.testGroupDaoSave(GroupDaoTest.java:89)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    at java.lang.reflect.Method.invoke(Method.java:600)
    at junit.framework.TestCase.runTest(TestCase.java:164)
    at junit.framework.TestCase.runBare(TestCase.java:130)
    at junit.framework.TestResult$1.protect(TestResult.java:106)
    at junit.framework.TestResult.runProtected(TestResult.java:124)
    at junit.framework.TestResult.run(TestResult.java:109)
    at junit.framework.TestCase.run(TestCase.java:120)
    at junit.framework.TestSuite.runTest(TestSuite.java:230)
    at junit.framework.TestSuite.run(TestSuite.java:225)
    at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)

Now given that I'm using the same DAO instance, and I've not changed EntityManagers (unless Spring does so without letting me know), how can this be a detached object? My DAO code looks like this:

public class GenericJPADao<INTFC, MODEL> implements IWebDao, IDao, IDaoUtil
{
    private static Logger logger = Logger.getLogger(GenericJPADao.class);

    protected Class voClass;

    @PersistenceContext(unitName = "CONOPS_PU")
    protected EntityManagerFactory emf;

    @PersistenceContext(unitName = "CONOPS_PU")
    protected EntityManager em;

    public GenericJPADao()
    {
        super();
        ParameterizedType genericSuperclass = (ParameterizedType) getClass().getGenericSuperclass();
        this.voClass = (Class) genericSuperclass.getActualTypeArguments()[1];
    }

    ...

    public void delete(INTFC modelObj, EntityManager em)
    {
        em.remove(modelObj);
    }

    @SuppressWarnings("unchecked")
    public INTFC findById(Long id)
    {
        return ((INTFC) em.find(voClass, id));
    }
}

The test case code looks like:

IGroup loadedGroup = dao.findById(group.getId());
assertNotNull(loadedGroup);
assertEquals(group.getId(), loadedGroup.getId());
dao.delete(loadedGroup); // - This generates the above exception
loadedGroup = dao.findById(group.getId());
assertNull(loadedGroup);

Can anyone tell me what I'm doing wrong here? Thanks

    Read the article

  • DexFile.class error in eclipse

    - by ninjasense
    I get this weird error every time I debug in Eclipse. It just seemed to appear one day, and I was wondering if anyone else was running into the same problem. It does not affect my app in any visible way and does not cause a crash, but it is an annoyance while debugging. Here is the full error:

// Compiled from DexFile.java (version 1.5 : 49.0, super bit)
public final class dalvik.system.DexFile {

// Method descriptor #8 (Ljava/io/File;)V
// Stack: 3, Locals: 2
public DexFile(java.io.File file) throws java.io.IOException;
  0 aload_0 [this]  1 invokespecial java.lang.Object() [1]  4 new java.lang.RuntimeException [2]  7 dup  8 ldc <String "Stub!"> [3]  10 invokespecial java.lang.RuntimeException(java.lang.String) [4]  13 athrow
  Line numbers: [pc: 0, line: 4]
  Local variable table: [pc: 0, pc: 14] local: this index: 0 type: dalvik.system.DexFile [pc: 0, pc: 14] local: file index: 1 type: java.io.File

// Method descriptor #18 (Ljava/lang/String;)V
// Stack: 3, Locals: 2
public DexFile(java.lang.String fileName) throws java.io.IOException;
  0 aload_0 [this]  1 invokespecial java.lang.Object() [1]  4 new java.lang.RuntimeException [2]  7 dup  8 ldc <String "Stub!"> [3]  10 invokespecial java.lang.RuntimeException(java.lang.String) [4]  13 athrow
  Line numbers: [pc: 0, line: 5]
  Local variable table: [pc: 0, pc: 14] local: this index: 0 type: dalvik.system.DexFile [pc: 0, pc: 14] local: fileName index: 1 type: java.lang.String

// Method descriptor #22 (Ljava/lang/String;Ljava/lang/String;I)Ldalvik/system/DexFile;
// Stack: 3, Locals: 3
public static dalvik.system.DexFile loadDex(java.lang.String sourcePathName, java.lang.String outputPathName, int flags) throws java.io.IOException;
  0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
  Line numbers: [pc: 0, line: 6]
  Local variable table: [pc: 0, pc: 10] local: sourcePathName index: 0 type: java.lang.String [pc: 0, pc: 10] local: outputPathName index: 1 type: java.lang.String [pc: 0, pc: 10] local: flags index: 2 type: int

// Method descriptor #28 ()Ljava/lang/String;
// Stack: 3, Locals: 1
public java.lang.String getName();
  0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
  Line numbers: [pc: 0, line: 7]
  Local variable table: [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

// Method descriptor #30 ()V
// Stack: 3, Locals: 1
public void close() throws java.io.IOException;
  0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
  Line numbers: [pc: 0, line: 8]
  Local variable table: [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

// Method descriptor #32 (Ljava/lang/String;Ljava/lang/ClassLoader;)Ljava/lang/Class;
// Stack: 3, Locals: 3
public java.lang.Class loadClass(java.lang.String name, java.lang.ClassLoader loader);
  0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
  Line numbers: [pc: 0, line: 9]
  Local variable table: [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile [pc: 0, pc: 10] local: name index: 1 type: java.lang.String [pc: 0, pc: 10] local: loader index: 2 type: java.lang.ClassLoader

// Method descriptor #37 ()Ljava/util/Enumeration;
// Signature: ()Ljava/util/Enumeration<Ljava/lang/String;>;
// Stack: 3, Locals: 1
public java.util.Enumeration entries();
  0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
  Line numbers: [pc: 0, line: 10]
  Local variable table: [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

// Method descriptor #30 ()V
// Stack: 3, Locals: 1
protected void finalize() throws java.io.IOException;
  0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
  Line numbers: [pc: 0, line: 11]
  Local variable table: [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

// Method descriptor #42 (Ljava/lang/String;)Z
public static native boolean isDexOptNeeded(java.lang.String arg0) throws java.io.FileNotFoundException, java.io.IOException;
}

Thanks

    Read the article

  • Does JEE 5 define the handling of commit errors using bean-managed transactions?

    - by marabol
    I'm using GlassFish 2.1 and 2.1.1. I have a bean method annotated with @TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW). After doing some JPA work, the commit fails in the afterCompletion phase of JTS. GlassFish only logs this failure, and the caller of the bean method has no chance to know that something went wrong. So I wonder whether there is any definition of how a JEE 5 server has to handle exceptions while committing. I would expect a runtime exception. I'm using stateless beans. With SessionSynchronization I could get the commit failure if I used stateful beans. Is it possible to intercept this, so that I can throw an exception that I've declared in my interface? This is the whole exception stack trace:

[#|2010-05-06T12:15:54.840+0000|WARNING|sun-appserver2.1|oracle.toplink.essentials.session.file:/C:/glassfish/domains/domain1/applications/j2ee-apps/my-ear-1.0.0-SNAPSHOT/my-jar-1.1.8_jar/-myPu.transaction|_ThreadID=25;_ThreadName=p: thread-pool-1; w: 15;_RequestID=67a475a1-25c3-4416-abea-0d159f715373;|
java.lang.RuntimeException: Got exception during XAResource.end: oracle.jdbc.xa.OracleXAException
    at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.delistResource(J2EETransactionManagerOpt.java:224)
    at com.sun.enterprise.resource.ResourceManagerImpl.unregisterResource(ResourceManagerImpl.java:265)
    at com.sun.enterprise.resource.ResourceManagerImpl.delistResource(ResourceManagerImpl.java:223)
    at com.sun.enterprise.resource.PoolManagerImpl.resourceClosed(PoolManagerImpl.java:400)
    at com.sun.enterprise.resource.ConnectorAllocator$ConnectionListenerImpl.connectionClosed(ConnectorAllocator.java:72)
    at com.sun.gjc.spi.ManagedConnection.connectionClosed(ManagedConnection.java:639)
    at com.sun.gjc.spi.base.ConnectionHolder.close(ConnectionHolder.java:201)
    at com.sun.gjc.spi.jdbc40.ConnectionHolder40.close(ConnectionHolder40.java:519)
    at oracle.toplink.essentials.internal.databaseaccess.DatabaseAccessor.closeDatasourceConnection(DatabaseAccessor.java:394)
    at oracle.toplink.essentials.internal.databaseaccess.DatasourceAccessor.closeConnection(DatasourceAccessor.java:382)
    at oracle.toplink.essentials.internal.databaseaccess.DatabaseAccessor.closeConnection(DatabaseAccessor.java:417)
    at oracle.toplink.essentials.internal.databaseaccess.DatasourceAccessor.afterJTSTransaction(DatasourceAccessor.java:115)
    at oracle.toplink.essentials.threetier.ClientSession.afterTransaction(ClientSession.java:119)
    at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.afterTransaction(UnitOfWorkImpl.java:1841)
    at oracle.toplink.essentials.transaction.AbstractSynchronizationListener.afterCompletion(AbstractSynchronizationListener.java:170)
    at oracle.toplink.essentials.transaction.JTASynchronizationListener.afterCompletion(JTASynchronizationListener.java:102)
    at com.sun.jts.jta.SynchronizationImpl.after_completion(SynchronizationImpl.java:154)
    at com.sun.jts.CosTransactions.RegisteredSyncs.distributeAfter(RegisteredSyncs.java:210)
    at com.sun.jts.CosTransactions.TopCoordinator.afterCompletion(TopCoordinator.java:2585)
    at com.sun.jts.CosTransactions.CoordinatorTerm.commit(CoordinatorTerm.java:433)
    at com.sun.jts.CosTransactions.TerminatorImpl.commit(TerminatorImpl.java:250)
    at com.sun.jts.CosTransactions.CurrentImpl.commit(CurrentImpl.java:623)
    at com.sun.jts.jta.TransactionManagerImpl.commit(TransactionManagerImpl.java:309)
    at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.commit(J2EETransactionManagerImpl.java:1029)
    at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.commit(J2EETransactionManagerOpt.java:398)
    at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3817)
    at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3610)
    at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1379)
    at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:1316)
    at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:205)
    at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:127)
    at $Proxy127.myNewTxMethod(Unknown Source)
    at mypackage.MyBean2.myMethod(MyBean2.java:197)
    at mypackage.MyBean2.myMethod2(MyBean2.java:166)
    at mypackage.MyBean2.myMethod3(MyBean2.java:105)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.enterprise.security.application.EJBSecurityManager.runMethod(EJBSecurityManager.java:1011)
    at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:175)
    at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920)
    at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011)
    at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:197)
    at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:127)
    at $Proxy158.myMethod3(Unknown Source)
    at mypackage.MyBean3.myMethod4(MyBean3.java:94)
    at mypackage.MyBean3.onMessage(MyBean3.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.enterprise.security.SecurityUtil$2.run(SecurityUtil.java:181)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.sun.enterprise.security.application.EJBSecurityManager.doAsPrivileged(EJBSecurityManager.java:985)
    at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:186)
    at com.sun.ejb.containers.BaseContainer.invokeTargetBeanMethod(BaseContainer.java:2920)
    at com.sun.ejb.containers.BaseContainer.intercept(BaseContainer.java:4011)
    at com.sun.ejb.containers.MessageBeanContainer.deliverMessage(MessageBeanContainer.java:1111)
    at com.sun.ejb.containers.MessageBeanListenerImpl.deliverMessage(MessageBeanListenerImpl.java:74)
    at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:179)
    at $Proxy192.onMessage(Unknown Source)
    at com.sun.messaging.jms.ra.OnMessageRunner.run(OnMessageRunner.java:258)
    at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:76)
    at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555)
|#]

    Read the article

  • Reuse a facelet in multiple beans

    - by Seitaridis
    How do I invoke/access a property of a managed bean when the bean name is known, but the bean is not yet constructed? For example:

<p:selectOneMenu value="#{eval.evaluateAsBean(bean).text}">
    <f:selectItems value="#{eval.evaluateAsBean(bean).values}" var="val" itemLabel="#{val}" itemValue="#{val}" />
</p:selectOneMenu>

If there is a managed bean called testBean, and in my view the bean parameter has the value "testBean", I want the text or values property of testBean to be called.

EDIT 1 - The context: An object consists of a list of properties (values). One property is modified with a custom JSF editor, depending on its type. The list of editors is determined from the object's type, and displayed in a form using custom:include tags. This custom tag is used to dynamically include the editors: <custom:include src="#{editor.component}">. The component property points to the location of the JSF editor. In my example some editors (rendered as select boxes) will use the same facelet (dynamicDropdown.xhtml). Every editor has a session-scoped managed bean. I want to reuse the same facelet with multiple beans and to pass the name of the bean to dynamicDropdown.xhtml using the bean param.

genericAccount.xhtml:

<p:dataTable value="#{group.editors}" var="editor">
    <p:column headerText="Key">
        <h:outputText value="#{editor.name}" />
    </p:column>
    <p:column headerText="Value">
        <h:panelGroup rendered="#{not editor.href}">
            <h:outputText value="#{editor.component}" escape="false" />
        </h:panelGroup>
        <h:panelGroup rendered="#{editor.href}">
            <custom:include src="#{editor.component}">
                <ui:param name="enabled" value="#{editor.enabled}"/>
                <ui:param name="bean" value="#{editor.bean}"/>
            </custom:include>
        </h:panelGroup>
    </p:column>
</p:dataTable>

#{editor.component} refers to a dynamicDropdown.xhtml file.

dynamicDropdown.xhtml:

<ui:composition xmlns="http://www.w3.org/1999/xhtml"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:p="http://primefaces.prime.com.tr/ui">
    <p:selectOneMenu value="#{eval.evaluateAsBean(bean).text}">
        <f:selectItems value="#{eval.evaluateAsBean(bean).values}" var="val" itemLabel="#{val}" itemValue="#{val}" />
    </p:selectOneMenu>
</ui:composition>

eval is a managed bean:

@ManagedBean(name = "eval")
@ApplicationScoped
public class ELEvaluator {
    ...
    public Object evaluateAsBean(String el) {
        FacesContext context = FacesContext.getCurrentInstance();
        Object bean = context.getELContext()
            .getELResolver().getValue(context.getELContext(), null, el);
        return bean;
    }
    ...
}

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other.

Choice/Method #1, semi-pseudocode:

public void Draw()
{
    // map tiles are drawn left-to-right, top-to-bottom
    for (int x = 0; x < mapWidth; x++)
    {
        for (int y = 0; y < mapHeight; y++)
        {
            spriteBatch.Draw(
                MyLargeTexture,              // one large 256 x 4096 texture
                new Rectangle(x, y, 32, 32), // destination rectangle - ignore this, it's OK
                new Rectangle(x, y, 32, 32), // notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop
                Color.White);                // no tint - ignore this, it's OK
        }
    }
}

So, effectively, the first method is referencing one large texture many, many times, each time using a small rectangle of this large texture to draw the appropriate tile image.

Choice/Method #2, semi-pseudocode:

public void Draw()
{
    // map tiles are drawn left-to-right, top-to-bottom
    for (int x = 0; x < mapWidth; x++)
    {
        for (int y = 0; y < mapHeight; y++)
        {
            Texture2D tileTexture = map.GetTileTexture(x, y); // getting a small 32 by 32 texture (different each iteration of the loop)
            spriteBatch.Draw(
                tileTexture,
                new Rectangle(x, y, 32, 32), // destination rectangle - ignore this, it's OK
                new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32
                Color.White);                // no tint - ignore this, it's OK
        }
    }
}

So, effectively, the second method is drawing many small textures many times.

The question: which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32-by-32 texture - an image - which has to allocate memory for the RGBA pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory load, then efficiency or whatever you experts believe.
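To make the memory argument behind method #1 concrete, here is a minimal C# sketch of how a source rectangle is typically computed from a single tile index into an atlas texture. The names (TileAtlas, tileIndex) are hypothetical, not from the question:

using Microsoft.Xna.Framework;

static class TileAtlas
{
    const int TileSize = 32;                // each tile is 32 x 32 pixels
    const int TilesPerRow = 256 / TileSize; // a 256-pixel-wide atlas holds 8 tiles per row

    // Maps a flat tile index to the 32 x 32 region of the atlas that holds it.
    public static Rectangle SourceRectFor(int tileIndex)
    {
        int col = tileIndex % TilesPerRow;
        int row = tileIndex / TilesPerRow;
        return new Rectangle(col * TileSize, row * TileSize, TileSize, TileSize);
    }
}

With this shape, each map cell stores a single int rather than a Texture2D, and the method #1 draw call becomes spriteBatch.Draw(atlas, destination, TileAtlas.SourceRectFor(map[x, y]), Color.White).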

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I've loved MSBuild and didn't quite understand the reasons for this change. In fact, given that I've been exclusively using CruiseControl for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I'm starting to become a believer. I'm also starting to see some of the benefits of Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild, such as parallel tasks, better control flow constructs, and a slightly better customization story.

The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I'd recommend reading Mike Fourie's well-known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don't use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync.

To do this in TFS 2010, the best post I've found has been Jim Lamb's post on building a custom TFS 2010 workflow activity. Overall, this post is excellent, but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: "2010.5.15.1". This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the "1.1 release" or "1.2 release" - which would have an assembly version number of "1.1.317.0", for example. In this post, I'll walk through the process of customizing the assembly version number based on this method - customizing the concepts in Lamb's post to suit my needs. I'll also be combining this with the concepts of Fourie's post - particularly with regards to the standards around how to version the assemblies.

The first thing I'll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this:

using System;
using System.Reflection;
[assembly: AssemblyVersion("1.1.0.0")]
[assembly: AssemblyFileVersion("1.1.0.0")]

I'll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, "Add - Existing Item…", then when I click the SolutionAssemblyVersionInfo.cs file, making sure I "Add As Link". Now the Solution Explorer will show our file. We can see that it's a "link" file because of the black arrow in the icon within all our projects. Of course you'll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit.

At this point, we're ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor, and then I want the CI process to add the third number with a unique incremental number. We'll leave the fourth position always "0" for now - it's held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.

Writing the Custom Workflow Activity

Similar to Lamb's post, I'm going to write two custom workflow activities. The "outer" activity (a XAML activity) will be pretty straightforward. It will check whether the solution version file exists in the solution root and, if so, delegate the replacement of the version to the AssemblyVersionInfo activity, which is a CodeActivity.

Notice that the arguments of this activity are the "solutionVersionFile" and "tfsBuildNumber", which will be passed in. The tfsBuildNumber passed in will look something like this: "CI_MyApplication.4", and we'll need to grab the "4" (i.e., the incremental revision number) and put that in the third position. Then we'll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had "1.1.0.0" for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want the resulting file to have "1.1.4.0".

Before we do anything, let's put together a unit test for all this so we can know if we get it right:

[TestMethod]
public void Assembly_version_should_be_parsed_correctly_from_build_name()
{
    // arrange
    const string versionFile = "SolutionAssemblyVersionInfo.cs";
    WriteTestVersionFile(versionFile);
    var activity = new VersionAssemblies();
    var arguments = new Dictionary<string, object> {
        { "tfsBuildNumber", "CI_MyApplication.4" },
        { "solutionVersionFile", versionFile }
    };

    // act
    var result = WorkflowInvoker.Invoke(activity, arguments);

    // assert
    Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]);
    var lines = File.ReadAllLines(versionFile);
    Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]"));
    Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]"));
}

private void WriteTestVersionFile(string versionFile)
{
    var fileContents = "using System.Reflection;\n" +
        "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" +
        "[assembly: AssemblyFileVersion(\"1.2.0.0\")]";
    File.WriteAllText(versionFile, fileContents);
}

At this point, the code for our AssemblyVersionInfo activity is pretty straightforward:

[BuildActivity(HostEnvironmentOption.Agent)]
public class AssemblyVersionInfo : CodeActivity
{
    [RequiredArgument]
    public InArgument<string> FileName { get; set; }

    [RequiredArgument]
    public InArgument<string> TfsBuildNumber { get; set; }

    public OutArgument<string> NewAssemblyFileVersion { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        var solutionVersionFile = this.FileName.Get(context);

        // Ensure that the file is writeable
        var fileAttributes = File.GetAttributes(solutionVersionFile);
        File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly);

        // Prepare assembly versions
        var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile);
        var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context));
        var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2);
        var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber);
        this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion);

        // Perform the actual replacement
        var contents = this.GetFileContents(newAssemblyVersion, newAssemblyFileVersion);
        File.WriteAllText(solutionVersionFile, contents);

        // Restore the file's original attributes
        File.SetAttributes(solutionVersionFile, fileAttributes);
    }

    #region Private Methods

    private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion)
    {
        var cs = new StringBuilder();
        cs.AppendLine("using System.Reflection;");
        cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion);
        cs.AppendLine();
        cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion);
        return cs.ToString();
    }

    private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath)
    {
        var lines = File.ReadAllLines(filePath);
        var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault();

        if (versionLine == null)
        {
            throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute");
        }

        return ExtractMajorMinor(versionLine);
    }

    private static Tuple<string, string> ExtractMajorMinor(string versionLine)
    {
        var firstQuote = versionLine.IndexOf('"') + 1;
        var secondQuote = versionLine.IndexOf('"', firstQuote);
        var version = versionLine.Substring(firstQuote, secondQuote - firstQuote);
        var versionParts = version.Split('.');
        return new Tuple<string, string>(versionParts[0], versionParts[1]);
    }

    private string GetNewBuildNumber(string buildName)
    {
        return buildName.Substring(buildName.LastIndexOf(".") + 1);
    }

    #endregion
}

At this point, the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTemplate.xaml - we'll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion, since the newAssemblyFileVersion was produced by our activity.

Configuring CI

Once you add your solution to source control, you can configure CI with the build definition window. The main difference is that we'll change the Process tab to reflect a different build number format and choose our custom build process file. When the build completes, we'll see the name of our project with the unique revision number. If we look at the detailed build log for the latest build, we'll see the label being created with our custom task. We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow). Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties.

Full Traceability

We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI - in fact, I've done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern.
The new features that TFS 2010 gives you to make these types of customizations in your build process are quite easy to use once you get over the initial learning curve.

    Read the article

  • Web Matrix released

    - by TATWORTH
    Microsoft have now released Web Matrix (and ASP.NET MVC 3, if you are so inclined!). One significant utility is IIS Express, which will replace Cassini. It is worth noting that SP1 for VS2010 should be out in Q1.

Links:
http://www.hanselman.com/blog/ASPNETMVC3WebMatrixNuGetIISExpressAndOrchardReleasedTheMicrosoftJanuaryWebReleaseInContext.aspx
http://www.hanselman.com/blog/LinkRollupNewDocumentationAndTutorialsFromWebPlatformAndTools.aspx
http://arstechnica.com/microsoft/news/2011/01/microsoft-releases-free-webmatrix-web-development-tool.ars

I am impressed by the copious tutorials on MVC, which I include below.

Intro to ASP.NET MVC 3 onboarding series. A Scott Hanselman and Rick Anderson collaboration, with Mike Pope (Editor). Both C# and VB versions:
- Intro to ASP.NET MVC 3
- Adding a Controller
- Adding a View
- Entity Framework Code-First Development
- Accessing your Model's Data from a Controller
- Adding a Create Method and Create View
- Adding Validation to the Model
- Adding a New Field to the Movie Model and Table
- Implementing Edit, Details and Delete
- Source code for this series

MVC 3 updated and new tutorials/API Reference on MSDN. Rick Anderson (Lead Programming Writer), Keith Newman and Mike Pope (Editor):
- ASP.NET MVC 3 Content Map
- ASP.NET MVC Overview
- MVC Framework and Application Structure
- Understanding MVC Application Execution
- Compatibility of ASP.NET Web Forms and MVC
- Walkthrough: Creating a Basic ASP.NET MVC Project
- Walkthrough: Using Forms Authentication in ASP.NET MVC
- Controllers and Action Methods in ASP.NET MVC Applications
- Using an Asynchronous Controller in ASP.NET MVC
- Views and UI Rendering in ASP.NET MVC Applications
- Rendering a Form Using HTML Helpers
- Passing Data in an ASP.NET MVC Application
- Walkthrough: Using Templated Helpers to Display Data in ASP.NET MVC
- Creating an ASP.NET MVC View by Calling Multiple Actions
- Models and Validation in ASP.NET MVC
- How to: Validate Model Data Using DataAnnotations Attributes
- Walkthrough: Using MVC View Templates
- How to: Implement Remote Validation in ASP.NET MVC
- Walkthrough: Adding AJAX Scripting
- Walkthrough: Organizing an Application using Areas
- Filtering in ASP.NET MVC
- Creating Custom Action Filters
- How to: Create a Custom Action Filter
- Unit Testing in ASP.NET MVC Applications
- Walkthrough: Using TDD with ASP.NET MVC
- How to: Add a Custom ASP.NET MVC Test Framework in Visual Studio
- ASP.NET MVC 3 Reference
- System.Web.Mvc
- System.Web.Mvc.Ajax
- System.Web.Mvc.Async
- System.Web.Mvc.Html
- System.Web.Mvc.Razor

    Read the article

  • Silverlight 4 Training Kit

    - by ScottGu
    We recently released a new free Silverlight 4 Training Kit that walks you through building business applications with Silverlight 4. You can browse the training kit online or alternatively download an entire offline version of the training kit. The training material is structured around teaching how to use the new Silverlight 4 features to build an end-to-end business application. The training kit includes 8 modules, 25 videos, and several hands-on labs. Below is a breakdown and links to all of the content. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

Module 1: Introduction
Click here to watch this module. In this video John Papa and Ian Griffiths discuss the key areas that the Building Business Applications with Silverlight 4 course focuses on. This module is the overview of the course and covers many key scenarios that are faced when building business applications, and how Silverlight can help address them.

Module 2: WCF RIA Services
Click here to explore this module. In this lab, you will create a web site for managing conferences that will be the basis for the other labs in this course. Don't worry if you don't complete a particular lab in the series - all lab manual instructions are accompanied by completed solutions, so you can either build your own solution from start to finish, or dive straight in at any point using the solutions provided as a starting point. In this lab you will learn how to set up WCF RIA Services, create bindings to the domain context, filter using the domain data source, and create domain service queries.
Online Link | Download Source | Download Lab Document
Videos:
- Module 2.1 - WCF RIA Services: Ian Griffiths sets up the Entity Framework and WCF RIA Services for the sample Event Manager application for the course. He covers how to set up the services, how the Domain Services work and the role that the DomainContext plays in the sample application. He also reviews the metadata classes and integrating the navigation framework.
- Module 2.2 - Using WCF RIA Services to Edit Entities: Ian Griffiths discusses how he adds the ability to edit and create individual entities with the features built into WCF RIA Services into the sample Event Manager application. He covers data binding fundamentals, IQueryable, LINQ, the DomainDataSource, navigation to a single entity using the navigation framework, and how to use the Visual Studio designer to do much of the work.
- Module 2.3 - Showing Master/Details Records Using WCF RIA Services: Ian Griffiths reviews how to display master/detail records for the sample Event Manager application using WCF RIA Services. He covers how to use the Include attribute to indicate which elements to serialize back to the client. Ian also demonstrates how to use the Data Sources window in the designer to add and bind controls to specific data elements. He wraps up by showing how to create custom services to the Domain Services.

Module 3 - Authentication, Validation, MVVM, Commands, Implicit Styles and RichTextBox
Click here to visit this module. This lab demonstrates how to build a login screen, integrate ASP.NET authentication, and perform validation on data elements. Model-View-ViewModel (MVVM) is introduced and used in this lab as a pattern to help separate the UI and business logic. You will also learn how to use implicit styling and the new RichTextBox control.
Online Link | Download Source | Download Lab Document
Videos:
- Module 3.1 - Authentication: Ian Griffiths covers how to integrate a login screen and authentication into the sample Event Manager application. Ian shows how to use the ASP.NET authentication and integrate it into WCF RIA Services and the Silverlight presentation layer.
- Module 3.2 - MVVM: Ian Griffiths covers how to integrate Model-View-ViewModel (MVVM) patterns into the sample Event Manager application. He discusses why MVVM exists, what separated presentation means, and why it is important. He shows how to connect the View to the ViewModel, why data binding is important in this symbiosis, and how everything fits together in the overall application.
- Module 3.3 - Validation: Ian Griffiths discusses how validation of user input can be integrated into the sample Event Manager application. He demonstrates how to use the DataAnnotations, the INotifyDataErrorInfo interface, binding markup extensions, and WCF RIA Services in concert to achieve great validation in the sample application. He discusses how this technique allows for property-level validation, entity-level validation, and asynchronous server-side validation.
- Module 3.4 - Implicit Styles: Ian Griffiths discusses why implicit styles are important and how they can be integrated into the sample Event Manager application. He shows how implicit styles defined in a resource dictionary can be applied to all elements of a particular kind throughout the application.
- Module 3.5 - RichTextBox: Ian Griffiths discusses the new RichTextBox control and how it can be integrated into the sample Event Manager application. He demonstrates how the RichTextBox can provide editing for the event information and how it can display the rich text for selection and copying.

Module 4 - User Profiles, Drop Targets, Webcam and Clipboard
Click here to visit this module. This lab builds new features into the sample application to take the user's photo. It teaches you how to use the webcam to capture an image, use Silverlight as a drop target, and take advantage of programmatic access to the clipboard.
Link | Download Source | Download Lab Document
Videos:
- Module 4.1 - Webcam: Ian Griffiths demonstrates how the webcam adds value to the sample Event Manager application by capturing an image of the attendee. He discusses the VideoCaptureDevice, the CaptureDeviceConfiguration, and the CaptureSource classes and how they allow audio and video to be captured so you can grab an image from the capture device and save it.
- Module 4.2 - Drag and Drop in Silverlight: Ian Griffiths demonstrates how to capture and handle the Drop in the sample Event Manager application so the user can drag a photo from a file and drop it into the application. Ian reviews the AllowDrop property, the Drop event, how to access the file that can be dropped, and the other drag-related events. He also reviews how to make this work across browsers and the challenges for this.

Module 5 - Schedule Planner and Right Mouse Click
Click here to visit this module. This lab builds on the application to allow grouping in the DataGrid and implement right mouse click features to add context menu support.
Link | Download Source | Download Lab Document
Videos:
- Module 5.1 - Grouping and Binding: Ian Griffiths demonstrates how to use the grouping features for data binding in the DataGrid and how it applies to the sample Event Manager application. He reviews the role of the CollectionViewSource in grouping, customizing the templates for headers, and how to work with grouping with ItemsControls.
- Module 5.2 - Layout Visual States: Ian Griffiths demonstrates how to use the Fluid UI animation support for visual states in the ListBox control and how it applies to the sample Event Manager application. He reviews the 3 visual states of BeforeLoaded, AfterLoaded, and BeforeUnloaded.
- Module 5.3 - Right Mouse Click: Ian Griffiths demonstrates how to add support for handling the right mouse button click event to display a context menu for the Event Manager application. He demonstrates how to handle the event, show a custom context menu control, and integrate it into the scheduling portion of the application.

Module 6 - Printing the Schedule
Click here to visit this module. This lab teaches how to use the new printing features in Silverlight 4. The lab walks through the PrintDocument class and the ViewBox control, while showing how to print multiple pages of content using them.
Link | Download Source | Download Lab Document
Videos:
- Module 6.1 - Printing and the Viewbox: Ian Griffiths demonstrates how to add the ability to print the schedule to the sample Event Manager application. He walks through the importance of the PrintDocument class and its members. He also shows how to handle printing the visual tree and how the ViewBox control can help.
- Module 6.2 - Multi-Page Printing: Ian Griffiths expands on his printing discussion by showing how to handle printing multiple pages of content for the sample Event Manager application. He shows how to paginate the content and points out various tips to keep in mind when determining the printable area.

Module 7 - Running the Event Dashboard Out of Browser
Click here to visit this module. This lab builds a dashboard for the sample application while explaining the fundamentals of the out-of-browser features, how to handle authentication, displaying notifications (toasts), and how to use native integration to use COM Interop with Silverlight.
Link | Download Source | Download Lab Document
Videos:
- Module 7.1 - Out of Browser: Ian Griffiths discusses the role of an out-of-browser application for administrators to manage the events and users in the sample Event Manager application. He discusses several reasons why out-of-browser applications may better suit your needs, including custom chrome, toasts, window placement, cross-domain access, and file access. He demonstrates the basic technique to take your application and make it work out of browser using the tools.
- Module 7.2 - NotificationWindow (Toasts) for Elevated Trust Out of Browser Applications: Ian Griffiths discusses how toasts can be used in the sample Event Manager application to show information that may require the user's attention. Ian covers how to create a toast using the NotificationWindow, security implications, and how to make the toast appear as needed.
- Module 7.3 - Out of Browser Window Placement: Ian Griffiths discusses how to manage the window positioning when building an out-of-browser application, handling the window state, and controlling and handling activation of the window.
- Module 7.4 - Out of Browser Elevated Trust Application Overview: Ian Griffiths discusses the implications of creating a trusted out-of-browser application for the Event Manager sample application. He reviews why you might want to use elevated trust, what features it opens up to you, and how to take advantage of them. Topics Ian covers include the dynamic keyword in C# 4, the AutomationFactory class, the API to check if you are in a trusted application, and communicating with Excel.

Module 8 - Advanced Out of Browser and MEF
Click here to visit this module. This hands-on lab walks through the creation of a trusted out-of-browser application and the new functionality that comes with that. You will learn to use COM Automation, handle the window closing event, set custom window chrome, digitally sign your Silverlight out-of-browser trusted application, create a silent install option, and take advantage of MEF.
Link | Download Source | Download Lab Document
Videos:
- Module 8.1 - Custom Window Chrome for Elevated Trust Out of Browser Applications: Ian Griffiths discusses how to replace the standard operating system window chrome with customized chrome for an elevated trust out-of-browser application. He covers why it is important to handle close, resize, minimize, and maximize events. Ian mentions that the tooling was not ready when he shot this video, but the good news is that the tooling now supports setting the custom chrome directly from the property page for the Silverlight application.
- Module 8.2 - Window Closing Event for Out of Browser Applications: Ian Griffiths discusses the WindowClosing event and how to handle and optionally cancel the event.
- Module 8.3 - Silent Install of Out of Browser Applications: Ian Griffiths discusses how to use the SLLauncher executable to install an out-of-browser application. He discusses the optional command line switches that can be set, including how the emulate switch can help you emulate the install process. Ian also shows how to set up a shortcut for the application and tell the application where it should look for future updates online.
- Module 8.4 - Digitally Signing Out of Browser Applications: Ian Griffiths discusses how and why to digitally sign an out-of-browser application using the signtool program. He covers what trusted certificates are, the implications of signing (or not signing), and the effect on the user experience.
- Module 8.5 - The Value of MEF with Silverlight: Ian Griffiths discusses what MEF is, how your application can benefit from it, and the fundamental features it puts at your disposal. He covers the 3-step import, export and compose process as well as how to dynamically import XAP files using MEF.

Summary
As you can probably tell from the long list above - this series contains a ton of great content, and hopefully provides a nice end-to-end walkthrough that helps explain how to take advantage of Silverlight 4 (and all its new features). Hope this helps, Scott

    Read the article

  • Adventures in MVVM &ndash; My ViewModel Base

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM First, I’d like to say: THIS IS NOT A NEW MVVM FRAMEWORK. I tend to believe that MVVM support code should be specific to the system you are building and the developers working on it.  I have yet to find an MVVM framework that does everything I want it to without doing too much.  Don’t get me wrong… there are some good frameworks out there.  I just like to pick and choose things that make sense for me.  I’d also like to add that some of these features only work in WPF.  As of Silveright 4, they don’t support binding to dynamic properties, so some of the capabilities are lost. That being said, I want to share my ViewModel base class with the world.  I have had several conversations with people about the problems I have solved using this ViewModel base.  A while back, I posted an article about some experiments with a “Rails Inspired ViewModel”.  What followed from those ideas was a ViewModel base class that I take with me and use in my projects.  It has a lot of features, all designed to reduce the friction in writing view models. I have put the code out on Codeplex under the project: ViewModelSupport. Finally, this article focuses on the ViewModel and only glosses over the View and the Model.  Without all three, you don’t have MVVM.  But this base class is for the ViewModel, so that is what I am focusing on. Features: Automatic Command Plumbing Property Change Notification Strongly Typed Property Getter/Setters Dynamic Properties Default Property values Derived Properties Automatic Method Execution Command CanExecute Change Notification Design-Time Detection What about Silverlight? Automatic Command Plumbing This feature takes the plumbing out of creating commands.  The common pattern for commands in a ViewModel is to have an Execute method as well as an optional CanExecute method.  To plumb that together, you create an ICommand Property, and set it in the constructor like so: Before public class AutomaticCommandViewModel { public AutomaticCommandViewModel() { MyCommand = new DelegateCommand(Execute_MyCommand, CanExecute_MyCommand); } public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } public DelegateCommand MyCommand { get; private set; } } With the base class, this plumbing is automatic and the property (MyCommand of type ICommand) is created for you.  The base class uses the convention that methods be prefixed with Execute_ and CanExecute_ in order to be plumbed into commands with the property name after the prefix.  You are left to be expressive with your behavior without the plumbing.  If you are wondering how CanExecuteChanged is raised, see the later section “Command CanExecute Change Notification”. After public class AutomaticCommandViewModel : ViewModelBase { public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } }   Property Change Notification One thing that always kills me when implementing ViewModels is how to make properties that notify when they change (via the INotifyPropertyChanged interface).  There have been many attempts to make this more automatic.  My base class includes one option.  There are others, but I feel like this works best for me. The common pattern (without my base class) is to create a private backing store for the variable and specify a getter that returns the private field.  
The setter will set the private field and fire an event that notifies the change, only if the value has changed. Before public class PropertyHelpersViewModel : INotifyPropertyChanged { private string text; public string Text { get { return text; } set { if(text != value) { text = value; RaisePropertyChanged("Text"); } } } protected void RaisePropertyChanged(string propertyName) { var handlers = PropertyChanged; if(handlers != null) handlers(this, new PropertyChangedEventArgs(propertyName)); } public event PropertyChangedEventHandler PropertyChanged; } This way of defining properties is error-prone and tedious.  Too much plumbing.  My base class eliminates much of that plumbing with the same functionality: After public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get<string>("Text"); } set { Set("Text", value);} } }   Strongly Typed Property Getters/Setters It turns out that we can do better than that.  We are using a strongly typed language where the use of “Magic Strings” is often frowned upon.  Lets make the names in the getters and setters strongly typed: A refinement public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get(() => Text); } set { Set(() => Text, value); } } }   Dynamic Properties In C# 4.0, we have the ability to program statically OR dynamically.  This base class lets us leverage the powerful dynamic capabilities in our ecosystem. (This is how the automatic commands are implemented, BTW)  By calling Set(“Foo”, 1), you have now created a dynamic property called Foo.  It can be bound against like any static property.  The opportunities are endless.  One great way to exploit this behavior is if you have a customizable view engine with templates that bind to properties defined by the user.  The base class just needs to create the dynamic properties at runtime from information in the model, and the custom template can bind even though the static properties do not exist. All dynamic properties still benefit from the notifiable capabilities that static properties do. For any nay-sayers out there that don’t like using the dynamic features of C#, just remember this: the act of binding the View to a ViewModel is dynamic already.  Why not exploit it?  Get over it :) Just declare the property dynamically public class DynamicPropertyViewModel : ViewModelBase { public DynamicPropertyViewModel() { Set("Foo", "Bar"); } } Then reference it normally <TextBlock Text="{Binding Foo}" />   Default Property Values The Get() method also allows for default properties to be set.  Don’t set them in the constructor.  Set them in the property and keep the related code together: public string Text { get { return Get(() => Text, "This is the default value"); } set { Set(() => Text, value);} }   Derived Properties This is something I blogged about a while back in more detail.  This feature came from the chaining of property notifications when one property affects the results of another, like this: Before public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); RaisePropertyChanged("Percentage"); RaisePropertyChanged("Output"); } } public int Percentage { get { return (int)(100 * Score); } } public string Output { get { return "You scored " + Percentage + "%."; } } } The problem is: The setter for Score has to be responsible for notifying the world that Percentage and Output have also changed.  This, to me, is backwards.    
It certainly violates the “Single Responsibility Principle.” I have been bitten in the rear more than once by problems created from code like this.  What we really want to do is invert the dependency.  Let the Percentage property declare that it changes when the Score Property changes. After public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public int Percentage { get { return (int)(100 * Score); } } [DependsUpon("Percentage")] public string Output { get { return "You scored " + Percentage + "%."; } } }   Automatic Method Execution This one is extremely similar to the previous, but it deals with method execution as opposed to property.  When you want to execute a method triggered by property changes, let the method declare the dependency instead of the other way around. Before public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); WhenScoreChanges(); } } public void WhenScoreChanges() { // Handle this case } } After public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public void WhenScoreChanges() { // Handle this case } }   Command CanExecute Change Notification Back to Commands.  One of the responsibilities of commands that implement ICommand – it must fire an event declaring that CanExecute() needs to be re-evaluated.  I wanted to wait until we got past a few concepts before explaining this behavior.  You can use the same mechanism here to fire off the change.  In the CanExecute_ method, declare the property that it depends upon.  When that property changes, the command will fire a CanExecuteChanged event, telling the View to re-evaluate the state of the command.  The View will make appropriate adjustments, like disabling the button. DependsUpon works on CanExecute methods as well public class CanExecuteViewModel : ViewModelBase { public void Execute_MakeLower() { Output = Input.ToLower(); } [DependsUpon("Input")] public bool CanExecute_MakeLower() { return !string.IsNullOrWhiteSpace(Input); } public string Input { get { return Get(() => Input); } set { Set(() => Input, value);} } public string Output { get { return Get(() => Output); } set { Set(() => Output, value); } } }   Design-Time Detection If you want to add design-time data to your ViewModel, the base class has a property that lets you ask if you are in the designer.  You can then set some default values that let your designer see what things might look like in runtime. Use the IsInDesignMode property public DependantPropertiesViewModel() { if(IsInDesignMode) { Score = .5; } }   What About Silverlight? Some of the features in this base class only work in WPF.  As of version 4, Silverlight does not support binding to dynamic properties.  This, in my opinion, is a HUGE limitation.  Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby.  Does this mean that the base class will not work in Silverlight?  No.  Many of the features outlined in this article WILL work.  All of the property abstractions are functional, as long as you refer to them statically in the View.  This, of course, means that the automatic command hook-up doesn’t work in Silverlight.  
Command CanExecute Change Notification

Back to commands. One of the responsibilities of a command that implements ICommand is to fire an event declaring that CanExecute() needs to be re-evaluated. I wanted to wait until we got past a few concepts before explaining this behavior. You can use the same mechanism here to fire off the change: in the CanExecute_ method, declare the property that it depends upon. When that property changes, the command fires a CanExecuteChanged event, telling the View to re-evaluate the state of the command. The View then makes the appropriate adjustments, like disabling the button.

DependsUpon works on CanExecute methods as well

public class CanExecuteViewModel : ViewModelBase
{
    public void Execute_MakeLower()
    {
        Output = Input.ToLower();
    }

    [DependsUpon("Input")]
    public bool CanExecute_MakeLower()
    {
        return !string.IsNullOrWhiteSpace(Input);
    }

    public string Input
    {
        get { return Get(() => Input); }
        set { Set(() => Input, value); }
    }

    public string Output
    {
        get { return Get(() => Output); }
        set { Set(() => Output, value); }
    }
}

Design-Time Detection

If you want to add design-time data to your ViewModel, the base class has a property that lets you ask whether you are in the designer. You can then set some default values that let your designer see what things might look like at runtime.

Use the IsInDesignMode property

public DependantPropertiesViewModel()
{
    if(IsInDesignMode)
    {
        Score = .5;
    }
}

What About Silverlight?

Some of the features in this base class only work in WPF. As of version 4, Silverlight does not support binding to dynamic properties. This, in my opinion, is a HUGE limitation. Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby. Does this mean the base class will not work in Silverlight? No. Many of the features outlined in this article WILL work. All of the property abstractions are functional, as long as you refer to them statically in the View. This, of course, means that the automatic command hook-up doesn't work in Silverlight: you need to plumb each command to a static property in order for the Silverlight View to bind to it. Can I has a dynamic property in SL5?

Good to go?

So, that concludes the feature explanation of my ViewModel base class. Feel free to take it, fork it, whatever. It is hosted on CodePlex. When I find other useful additions, I will add them to the public repository. I use this base class every day. It is mature and well tested. If, however, you find any problems with it, please let me know! Also, feel free to suggest patches to me via the CodePlex site. :)

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post the data to a Web API endpoint via AJAX:

<form>
    Name: <input type="text" name="name" value="Rick" />
    Value: <input type="text" name="value" value="12" />
    Entered: <input type="text" name="entered" value="12/01/2011" />
    <input type="button" id="btnSend" value="Send" />
</form>
<script type="text/javascript">
    $("#btnSend").click( function() {
        $.post("samples/PostMultipleSimpleValues?action=kazam",
               $("form").serialize(),
               function (result) { alert(result); });
    });
</script>

or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

$.post("samples/PostMultipleSimpleValues?action=kazam",
       { name: "Rick", value: 12, entered: "12/01/2011" },
       function (result) { alert(result); });

On the wire this generates a simple POST request with URL-encoded values in the content:

POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: application/json
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://localhost/AspNetWebApi/FormPostTest.html
Content-Length: 41
Pragma: no-cache
Cache-Control: no-cache

name=Rick&value=12&entered=12%2F01%2F2011

Seems simple enough, right? We are basically posting three form variables and one query string value to the server. Unfortunately, Web API can't handle this request out of the box. If I create a method like this:

[HttpPost]
public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         name, value, entered, action);
}

you'll find that you get an HTTP 404 error and:

{ "Message": "No HTTP resource was found that matches the request URI…" }

Yes, it's possible to pass multiple POST parameters, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to individual simple parameters. Alternately, you can accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, the dynamic JObject/JValue objects might also work.
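To make that concrete, here is roughly what the model binding alternative looks like. This is a sketch: the PostData class is an illustrative DTO I'm introducing for this example (not something from the original samples), and the method would replace the simple-parameter signature inside the same ApiController:

// Illustrative DTO - Web API's default binding maps the form body onto it
public class PostData
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

[HttpPost]
public string PostMultipleSimpleValues(PostData data, string action = null)
{
    // 'action' still binds from the query string; the form fields arrive on 'data'
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         data.Name, data.Value, data.Entered, action);
}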
Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the "parameter mapping type" per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more importantly - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one… So this is something I think is fairly important, but unfortunately missing in the base Web API installation.

Creating a Custom Parameter Binder

Luckily, Web API is greatly extensible, and there's a way to create a custom parameter binding to provide this functionality. Although this solution took me a long while to find - and then only with the help of some folks at Microsoft (thanks Hong Mei!) - it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept the processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values passed to the method. Here's the implementation of the SimplePostVariableParameterBinding class:

public class SimplePostVariableParameterBinding : HttpParameterBinding
{
    private const string MultipleBodyParameters = "MultipleBodyParameters";

    public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    { }

    /// <summary>
    /// Check for simple binding parameters in POST data. Bind POST
    /// data as well as query string data.
    /// </summary>
    public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                             HttpActionContext actionContext,
                                             CancellationToken cancellationToken)
    {
        // Body can only be read once, so read and cache it
        NameValueCollection col = TryReadBody(actionContext.Request);

        string stringValue = null;
        if (col != null)
            stringValue = col[Descriptor.ParameterName];

        // try reading query string if we have no POST/PUT match
        if (stringValue == null)
        {
            var query = actionContext.Request.GetQueryNameValuePairs();
            if (query != null)
            {
                var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                if (matches.Count() > 0)
                    stringValue = matches.First().Value;
            }
        }

        object value = StringToType(stringValue);

        // Set the binding result here
        SetValue(actionContext, value);

        // now we can return a completed task with no result
        TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
        tcs.SetResult(default(AsyncVoid));
        return tcs.Task;
    }

    private object StringToType(string stringValue)
    {
        object value = null;

        if (stringValue == null)
            value = null;
        else if (Descriptor.ParameterType == typeof(string))
            value = stringValue;
        else if (Descriptor.ParameterType == typeof(int))
            value = int.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int32))
            value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int64))
            value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(decimal))
            value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(double))
            value = double.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(DateTime))
            value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(bool))
        {
            value = false;
            if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                value = true;
        }
        else
            value = stringValue;

        return value;
    }

    /// <summary>
    /// Read and cache the request body
    /// </summary>
    private NameValueCollection TryReadBody(HttpRequestMessage request)
    {
        object result = null;

        // try to read out of cache first
        if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
        {
            // parses a string like firstname=Hongmei&lastname=Ge
            result = request.Content.ReadAsFormDataAsync().Result;
            request.Properties.Add(MultipleBodyParameters, result);
        }

        return result as NameValueCollection;
    }

    private struct AsyncVoid { }
}

The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding fires only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so it never fires on complex types. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches it in the request.Properties collection. The request body can only be read once, so the first parameter request reads and caches it; subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup, in global.asax's Application_Start:

GlobalConfiguration.Configuration.ParameterBindingRules
    .Insert(0, (HttpParameterDescriptor descriptor) =>
    {
        var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

        // Only apply this binder on POST and PUT operations
        if (supportedMethods.Contains(HttpMethod.Post) ||
            supportedMethods.Contains(HttpMethod.Put))
        {
            var supportedTypes = new Type[] { typeof(string),
                                              typeof(int),
                                              typeof(decimal),
                                              typeof(double),
                                              typeof(bool),
                                              typeof(DateTime) };

            if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                return new SimplePostVariableParameterBinding(descriptor);
        }

        // let the default bindings do their work
        return null;
    });

The ParameterBindingRules.Insert method takes a delegate that decides which parameters it should handle. The logic here checks whether the request is a POST or PUT and whether the parameter type is one of the supported simple types. Web API calls this delegate once for each parameter of each method signature it tries to map; the delegate returns null to indicate it's not handling the parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above now work in addition to the standard bindings.
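For a quick sanity check once the binding rule is registered, the endpoint can also be exercised from plain .NET code. This is an assumed snippet - the host URL and route come from the request trace above and may differ in your project:

using System;
using System.Collections.Generic;
using System.Net.Http;

class BindingSmokeTest
{
    static void Main()
    {
        // Form-encoded body, mirroring the jQuery $.post examples
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "name", "Rick" },
            { "value", "12" },
            { "entered", "12/01/2011" }
        });

        using (var client = new HttpClient())
        {
            // Assumed host/route based on the earlier samples
            var response = client.PostAsync(
                "http://localhost/AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam",
                form).Result;

            // Expected output along the lines of:
            // Name: Rick, Value: 12, Date: 12/1/2011, Action: kazam
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}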
Summary

Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own, without much luck. It took Hong Mei at Microsoft providing a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with an untyped JObject or the form collection, mostly), having proper POST-to-parameter mapping makes things much easier.

I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added - especially since this binding doesn't affect the existing bindings.

Resources

SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded POST Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api  AJAX

    Read the article

< Previous Page | 342 343 344 345 346 347 348 349 350 351 352 353  | Next Page >