Search Results


  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches! 1. Querying When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!     public static class ServiceExtensions     {         /// <summary>         /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>         /// and calling the base DomainService.Initialize(DomainServiceContext) method.         /// </summary>         /// <typeparam name="TService">The type of the service.</typeparam>         /// <param name="service">The service.</param>         /// <returns></returns>         public static TService Initialize<TService>(this TService service)             where TService : DomainService         {             var context = CreateDomainServiceContext();             service.Initialize(context);             return service;         }           private static DomainServiceContext CreateDomainServiceContext()         {             var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));             return new DomainServiceContext(provider, DomainOperationType.Query);         }           private static HttpContext GetHttpContext()         {             var context = HttpContext.Current;   #if DEBUG             // create a mock HttpContext to use during unit testing...             
if ( context == null )             {                 var writer = new StringWriter();                 var request = new SimpleWorkerRequest("/", "/",                     String.Empty, String.Empty, writer);                   context = new HttpContext(request)                 {                     User = new GenericPrincipal(new GenericIdentity("debug"), null)                 };             } #endif               return context;         }     }   With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():     public ActionResult Index()     {         var service = new NorthwindService().Initialize();         var customers = service.GetCustomers();           return View(customers);     } 2. Insert / Update / Delete Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with, was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.     [EnableClientAccess()]     public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>     {         public IQueryable<Customer> GetCustomers()         {             return this.DataContext.Customers;         }           public void InsertCustomer(Customer customer)         {             this.DataContext.Customers.InsertOnSubmit(customer);         }           public void UpdateCustomer(Customer currentCustomer)         {             this.DataContext.Customers.TryAttach(currentCustomer,                 this.ChangeSet.GetOriginal(currentCustomer));         }           public void DeleteCustomer(Customer customer)         {             this.DataContext.Customers.TryAttach(customer);             this.DataContext.Customers.DeleteOnSubmit(customer);         }     } Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that will check to see if the supplied entity is already attached before attempting to attach it, which would cause an error! 3. LinqToSqlRepository<T> Below is the code for the LinqToSqlRepository class.  The comments are pretty self explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensures that they will be ignored by the code generator and not available in the Silverlight client application.     /// <summary>     /// Provides generic repository methods on top of the standard     /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.     /// </summary>     /// <typeparam name="TContext">The type of the context.</typeparam>     public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>         where TContext : System.Data.Linq.DataContext, new()     {         /// <summary>         /// Retrieves an instance of an entity using it's unique identifier.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="keyValues">The key values.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class         {             var table = this.DataContext.GetTable<TEntity>();             var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));               var keys = mapping.RowType.IdentityMembers                 .Select((m, i) => m.Name + " = @" + i)                 .ToArray();               return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();         }           /// <summary>         /// Creates a new query that can be executed to retrieve a collection         /// of entities from the <see cref="DataContext"/>.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <returns></returns>         [IgnoreOperation]         public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class         {             return this.DataContext.GetTable<TEntity>();         }           /// <summary>         /// Inserts the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.InsertOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Insert);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity) where TEntity : class         {             return this.Update(entity, null);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <param name="original">The original.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity, TEntity original)             where TEntity : class         {             if ( original == null )             {                 original = GetOriginal(entity);             }               var table = this.DataContext.GetTable<TEntity>();             table.TryAttach(entity, original);               return this.Submit(entity, original, DomainOperation.Update);         }           /// <summary>         /// Deletes the specified entity.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.TryAttach(entity);             //table.DeleteOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Delete);         }           protected virtual bool Submit(Object entity, Object original, DomainOperation operation)         {             var entry = new ChangeSetEntry(0, entity, original, operation);             var changes = new ChangeSet(new ChangeSetEntry[] { entry });             return base.Submit(changes);         }           private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class         {             var context = CreateDataContext();             var table = context.GetTable<TEntity>();             return table.FirstOrDefault(e => e == entity);         }     } 4. Conclusion So there you have it, a fully functional Repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a Controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
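    As a side note, the TryAttach extension methods used by the Update and Delete operations above are not included in this excerpt. A rough sketch of what they might look like (the implementation in the downloadable source may differ) uses Table&lt;TEntity&gt;.GetOriginalEntityState() to detect whether the entity is already being tracked before calling Attach():

        using System.Data.Linq;

        public static class TableExtensions
        {
            /// <summary>
            /// Attaches the entity only if the table is not already tracking it.
            /// </summary>
            public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
                where TEntity : class
            {
                // GetOriginalEntityState() returns null for entities that are not attached.
                if (table.GetOriginalEntityState(entity) == null)
                {
                    table.Attach(entity);
                }
            }

            /// <summary>
            /// Attaches the entity together with its original version (for change tracking),
            /// again only if the table is not already tracking it.
            /// </summary>
            public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity, TEntity original)
                where TEntity : class
            {
                if (table.GetOriginalEntityState(entity) == null)
                {
                    if (original != null)
                    {
                        table.Attach(entity, original);
                    }
                    else
                    {
                        table.Attach(entity);
                    }
                }
            }
        }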

    Read the article

  • SQL SERVER – Effect of Collation on Resultset – SQL in Sixty Seconds #026 – Video

    - by pinaldave
    Collation is a very important concept, but it is often ignored. I have often seen developers either not understanding it or ignoring it – this is plain wrong. In simple words, collation is the set of rules SQL Server uses to sort and compare character data. Well, in today’s SQL in Sixty Seconds we are going to observe how collation affects the resultset. Today’s blog post is inspired by my earlier blog post SQL SERVER – Effect of Case Sensitive Collation on Resultset. I strongly encourage you to read that earlier post for sample code as well as additional explanation related to the concept shared in today’s SQL in Sixty Seconds. Here is the code used in the video.
        USE TempDB
        GO
        -- Sample Data Building
        CREATE TABLE ColTable (Col1 VARCHAR(15) COLLATE Latin1_General_CI_AS,
                               Col2 VARCHAR(14) COLLATE Latin1_General_CS_AS);
        INSERT ColTable(Col1, Col2)
        VALUES ('Apple','Apple'), ('apple','apple'), ('pineapple','pineapple'), ('Pineapple','Pineapple');
        GO
        -- Retrieve Data
        SELECT * FROM ColTable
        GO
        -- Retrieve Data
        SELECT * FROM ColTable ORDER BY Col1
        GO
        -- Retrieve Data
        SELECT * FROM ColTable ORDER BY Col2
        GO
        -- Clean up
        DROP TABLE ColTable
        GO
    Related Tips in SQL in Sixty Seconds: SQL SERVER – Effect of Case Sensitive Collation on Resultset; Example of Width Sensitive and Width Insensitive Collation; Collation and Collation Sensitivity – Quiz – Puzzle – 6 of 31; Change Collation of Database Column – T-SQL Script; Find Collation of Database and Table Column Using T-SQL; Default Collation of SQL Server 2008; Cannot resolve collation conflict for equal to operation. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Entity Framework v1 – Tips and Tricks Part 3

    - by Rohit Gupta
    General Tips on Entity Framework v1 & Linq to Entities: ToTraceString() If you need to know the underlying SQL that the EF generates for a Linq To Entities query, then use the ToTraceString() method of the ObjectQuery class. (or use LINQPAD) Note that you need to cast the LINQToEntities query to ObjectQuery before calling TotraceString() as follows: 1: string efSQL = ((ObjectQuery)from c in ctx.Contact 2: where c.Address.Any(a => a.CountryRegion == "US") 3: select c.ContactID).ToTraceString(); ================================================================================ MARS or MultipleActiveResultSet When you create a EDM Model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the name of the EntityContainer in CSDL. In the ConnectionString so generated it sets the MultipleActiveResultSet attribute to true by default. So if you are running the following query then it streams multiple readers over the same connection: 1: using (BAEntities context = new BAEntities()) 2: { 3: var cons = 4: from con in context.Contacts 5: where con.FirstName == "Jose" 6: select con; 7: foreach (var c in cons) 8: { 9: if (c.AddDate < new System.DateTime(2007, 1, 1)) 10: { 11: c.Addresses.Load(); 12: } 13: } 14: } ================================================================================= Explicitly opening and closing EntityConnection When you call ToList() or foreach on a LINQToEntities query the EF automatically closes the connection after all the records from the query have been consumed. Thus if you need to run many LINQToEntities queries over the same connection then explicitly open and close the connection as follows: 1: using (BAEntities context = new BAEntities()) 2: { 3: context.Connection.Open(); 4: var cons = from con in context.Contacts where con.FirstName == "Jose" 5: select con; 6: var conList = cons.ToList(); 7: var allCustomers = from con in context.Contacts.OfType<Customer>() 8: select con; 9: var allcustList = allCustomers.ToList(); 10: context.Connection.Close(); 11: } ====================================================================== Dispose ObjectContext only if required After you retrieve entities using the ObjectContext and you are not explicitly disposing the ObjectContext then insure that your code does consume all the records from the LinqToEntities query by calling .ToList() or foreach statement, otherwise the the database connection will remain open and will be closed by the garbage collector when it gets to dispose the ObjectContext. Secondly if you are making updates to the entities retrieved using LinqToEntities then insure that you dont inadverdently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges() since you need the SAME ObjectContext to keep track of changes made to the Entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext do so only after calling SaveChanges() and only if you dont need to change track the entities retrieved any further. ======================================================================= SQL InjectionAttacks under control with EFv1 LinqToEntities and LinqToSQL queries are parameterized before they are sent to the DB hence they are not vulnerable to SQL Injection attacks. EntitySQL may be slightly vulnerable to attacks since it does not use parameterized queries. However since the EntitySQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time. 
So the only way one can do a SQLInjection Attack is by knowing the SSDL of the EDM Model and be able to write the correct EntitySQL (note one cannot append regular SQL since then the query wont be a valid EntitySQL syntax) and append it to a parameter. ====================================================================== Improving Performance You can convert the EntitySets and AssociationSets in a EDM Model into precompiled Views using the edmgen utility. for e.g. the Customer Entity can be converted into a precompiled view using edmgen and all LinqToEntities query against the contaxt.Customer EntitySet will use the precompiled View instead of the EntitySet itself (the same being true for relationships (EntityReference & EntityCollections of a Entity)). The advantage being that when using precompiled views the performance will be much better. The syntax for generating precompiled views for a existing EF project is : edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj Note that this will only generate precompiled views for EntitySets and Associations and not for existing LinqToEntities queries in the project.(for that use CompiledQuery.Compile<>) Secondly if you have a LinqToEntities query that you need to run multiple times, then one should precompile the query using CompiledQuery.Compile method. The CompiledQuery.Compile<> method accepts a lamda expression as a parameter, which denotes the LinqToEntities query  that you need to precompile. The following is a example of a lamda that we can pass into the CompiledQuery.Compile() method 1: Expression<Func<BAEntities, string, IQueryable<Customer>>> expr = (BAEntities ctx1, string loc) => 2: from c in ctx1.Contacts.OfType<Customer>() 3: where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc) 4: select c; Then we call the Compile Query as follows: 1: var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr); 2:  3: using (BAEntities ctx = new BAEntities()) 4: { 5: var loc = "Malta"; 6: IQueryable<Customer> custs = query.Invoke(ctx, loc); 7: var custlist = custs.ToList(); 8: foreach (var item in custlist) 9: { 10: Console.WriteLine(item.FullName); 11: } 12: } Note that if you created a ObjectQuery or a Enitity SQL query instead of the LINQToEntities query, you dont need precompilation for e.g. 1: An Example of EntitySQL query : 2: string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'"; 3: ObjectQuery<Customer> custs = CreateQuery<Customer>(esql); 1: An Example of ObjectQuery built using ObjectBuilder methods: 2: from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'") 3: select c This is since the Query plan is cached and thus the performance improves a bit, however since the ObjectQuery or EntitySQL query still needs to materialize the results into Entities hence it will take the same amount of performance hit as with LinqToEntities. However note that not ALL EntitySQL based or QueryBuilder based ObjectQuery plans are cached. So if you are in doubt always create a LinqToEntities compiled query and use that instead ============================================================ GetObjectStateEntry Versus GetObjectByKey We can get to the Entity being referenced by the ObjectStateEntry via its Entity property and there are helper methods in the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for a entity (for which we know the EntityKey). 
Similarly The ObjectContext has helper methods to get an Entity i.e. TryGetObjectByKey(). TryGetObjectByKey() uses GetObjectStateEntry method under the covers to find the object, however One important difference between these 2 methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries. It will not make a trip to the database ============================================================= POCO objects with EFv1: To create POCO objects that can be used with EFv1. We need to implement 3 key interfaces: IEntityWithKey IEntityWithRelationships IEntityWithChangeTracker Implementing IEntityWithKey is not mandatory, but if you dont then we need to explicitly provide values for the EntityKey for various functions (for e.g. the functions needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementation of IEntityWithKey involves exposing a property named EntityKey which returns a EntityKey object. Implementation of IEntityWithChangeTracker involves implementing a method named SetChangeTracker since there can be multiple changetrackers (Object Contexts) existing in memory at the same time. 1: public void SetChangeTracker(IEntityChangeTracker changeTracker) 2: { 3: _changeTracker = changeTracker; 4: } Additionally each property in the POCO object needs to notify the changetracker (objContext) that it is updating itself by calling the EntityMemberChanged and EntityMemberChanging methods on the changeTracker. for e.g.: 1: public EntityKey EntityKey 2: { 3: get { return _entityKey; } 4: set 5: { 6: if (_changeTracker != null) 7: { 8: _changeTracker.EntityMemberChanging("EntityKey"); 9: _entityKey = value; 10: _changeTracker.EntityMemberChanged("EntityKey"); 11: } 12: else 13: _entityKey = value; 14: } 15: } 16: ===================== Custom Property ==================================== 17:  18: [EdmScalarPropertyAttribute(IsNullable = false)] 19: public System.DateTime OrderDate 20: { 21: get { return _orderDate; } 22: set 23: { 24: if (_changeTracker != null) 25: { 26: _changeTracker.EntityMemberChanging("OrderDate"); 27: _orderDate = value; 28: _changeTracker.EntityMemberChanged("OrderDate"); 29: } 30: else 31: _orderDate = value; 32: } 33: } Finally you also need to create the EntityState property as follows: 1: public EntityState EntityState 2: { 3: get { return _changeTracker.EntityState; } 4: } The IEntityWithRelationships involves creating a property that returns RelationshipManager object: 1: public RelationshipManager RelationshipManager 2: { 3: get 4: { 5: if (_relManager == null) 6: _relManager = RelationshipManager.Create(this); 7: return _relManager; 8: } 9: } ============================================================ Tip : ProviderManifestToken – change EDMX File to use SQL 2008 instead of SQL 2005 To use with SQL Server 2008, edit the EDMX file (the raw XML) changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008" ============================================================= With EFv1 we cannot use Structs to replace a anonymous Type while doing projections in a LINQ to Entities query. While the same is supported with LINQToSQL, it is not with LinqToEntities. For e.g. the following is not supported with LinqToEntities since only parameterless constructors and initializers are supported in LINQ to Entities. 
(the same works with LINQToSQL) 1: public struct CompanyInfo 2: { 3: public int ID { get; set; } 4: public string Name { get; set; } 5: } 6: var companies = (from c in dc.Companies 7: where c.CompanyIcon == null 8: select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();
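    Since LINQ to Entities only supports parameterless constructors and object initializers in projections, the usual workaround is to project into a class instead of a struct. A minimal sketch, reusing the same entity and property names as the query above:

        // A class with a parameterless constructor works where the struct does not.
        public class CompanyInfo
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

        // The object-initializer projection is now accepted by LINQ to Entities.
        var companies = (from c in dc.Companies
                         where c.CompanyIcon == null
                         select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();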

    Read the article

  • Unity – Part 5: Injecting Values

    - by Ricardo Peres
    Introduction This is the fifth post on Unity. You can find the introductory post here, the second post, on dependency injection here, a third one on Aspect Oriented Programming (AOP) here and the latest so far, on writing custom extensions, here. This time we will talk about injecting simple values. An Inversion of Control (IoC) / Dependency Injector (DI) container like Unity can be used for things other than injecting complex class dependencies. It can also be used for setting property values or method/constructor parameters whenever a class is built. The main difference is that these values do not have a lifetime manager associated with them and do not come from the regular IoC registration store. Unlike, for instance, MEF, Unity won’t let you register as a dependency a string or an integer, so you have to take a different approach, which I will describe in this post. Scenario Let’s imagine we have a base interface that describes a logger – the same as in previous examples: 1: public interface ILogger 2: { 3: void Log(String message); 4: } And a concrete implementation that writes to a file: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: #region ILogger Members 10:  11: public void Log(String message) 12: { 13: using (Stream file = File.OpenWrite(this.Filename)) 14: { 15: Byte[] data = Encoding.Default.GetBytes(message); 16: 17: file.Write(data, 0, data.Length); 18: } 19: } 20:  21: #endregion 22: } And let’s say we want the Filename property to come from the application settings (appSettings) section on the Web/App.config file. As usual with Unity, there is an extensibility point that allows us to automatically do this, both with code configuration or statically on the configuration file. 
Extending Injection We start by implementing a class that will retrieve a value from the appSettings by inheriting from ValueElement: 1: sealed class AppSettingsParameterValueElement : ValueElement, IDependencyResolverPolicy 2: { 3: #region Private methods 4: private Object CreateInstance(Type parameterType) 5: { 6: Object configurationValue = ConfigurationManager.AppSettings[this.AppSettingsKey]; 7:  8: if (parameterType != typeof(String)) 9: { 10: TypeConverter typeConverter = this.GetTypeConverter(parameterType); 11:  12: configurationValue = typeConverter.ConvertFromInvariantString(configurationValue as String); 13: } 14:  15: return (configurationValue); 16: } 17: #endregion 18:  19: #region Private methods 20: private TypeConverter GetTypeConverter(Type parameterType) 21: { 22: if (String.IsNullOrEmpty(this.TypeConverterTypeName) == false) 23: { 24: return (Activator.CreateInstance(TypeResolver.ResolveType(this.TypeConverterTypeName)) as TypeConverter); 25: } 26: else 27: { 28: return (TypeDescriptor.GetConverter(parameterType)); 29: } 30: } 31: #endregion 32:  33: #region Public override methods 34: public override InjectionParameterValue GetInjectionParameterValue(IUnityContainer container, Type parameterType) 35: { 36: Object value = this.CreateInstance(parameterType); 37: return (new InjectionParameter(parameterType, value)); 38: } 39: #endregion 40:  41: #region IDependencyResolverPolicy Members 42:  43: public Object Resolve(IBuilderContext context) 44: { 45: Type parameterType = null; 46:  47: if (context.CurrentOperation is ResolvingPropertyValueOperation) 48: { 49: ResolvingPropertyValueOperation op = (context.CurrentOperation as ResolvingPropertyValueOperation); 50: PropertyInfo prop = op.TypeBeingConstructed.GetProperty(op.PropertyName); 51: parameterType = prop.PropertyType; 52: } 53: else if (context.CurrentOperation is ConstructorArgumentResolveOperation) 54: { 55: ConstructorArgumentResolveOperation op = (context.CurrentOperation as ConstructorArgumentResolveOperation); 56: String args = op.ConstructorSignature.Split('(')[1].Split(')')[0]; 57: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 58: ConstructorInfo ctor = op.TypeBeingConstructed.GetConstructor(types); 59: parameterType = ctor.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 60: } 61: else if (context.CurrentOperation is MethodArgumentResolveOperation) 62: { 63: MethodArgumentResolveOperation op = (context.CurrentOperation as MethodArgumentResolveOperation); 64: String methodName = op.MethodSignature.Split('(')[0].Split(' ')[1]; 65: String args = op.MethodSignature.Split('(')[1].Split(')')[0]; 66: Type[] types = args.Split(',').Select(a => Type.GetType(a.Split(' ')[0])).ToArray(); 67: MethodInfo method = op.TypeBeingConstructed.GetMethod(methodName, types); 68: parameterType = method.GetParameters().Where(p => p.Name == op.ParameterName).Single().ParameterType; 69: } 70:  71: return (this.CreateInstance(parameterType)); 72: } 73:  74: #endregion 75:  76: #region Public properties 77: [ConfigurationProperty("appSettingsKey", IsRequired = true)] 78: public String AppSettingsKey 79: { 80: get 81: { 82: return ((String)base["appSettingsKey"]); 83: } 84:  85: set 86: { 87: base["appSettingsKey"] = value; 88: } 89: } 90: #endregion 91: } As you can see from the implementation of the IDependencyResolverPolicy.Resolve method, this will work in three different scenarios: When it is applied to a property; When it is applied to a constructor parameter; 
When it is applied to an initialization method. The implementation will even try to convert the value to its declared destination, for example, if the destination property is an Int32, it will try to convert the appSettings stored string to an Int32. Injection By Configuration If we want to configure injection by configuration, we need to implement a custom section extension by inheriting from SectionExtension, and registering our custom element with the name “appSettings”: 1: sealed class AppSettingsParameterInjectionElementExtension : SectionExtension 2: { 3: public override void AddExtensions(SectionExtensionContext context) 4: { 5: context.AddElement<AppSettingsParameterValueElement>("appSettings"); 6: } 7: } And on the configuration file, for setting a property, we use it like this: 1: <appSettings> 2: <add key="LoggerFilename" value="Log.txt"/> 3: </appSettings> 4: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 5: <container> 6: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 7: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 8: <lifetime type="singleton"/> 9: <property name="Filename"> 10: <appSettings appSettingsKey="LoggerFilename"/> 11: </property> 12: </register> 13: </container> 14: </unity> If we would like to inject the value as a constructor parameter, it would be instead: 1: <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> 2: <sectionExtension type="MyNamespace.AppSettingsParameterInjectionElementExtension, MyAssembly" /> 3: <container> 4: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.ConsoleLogger, MyAssembly"/> 5: <register type="MyNamespace.ILogger, MyAssembly" mapTo="MyNamespace.FileLogger, MyAssembly" name="File"> 6: <lifetime type="singleton"/> 7: <constructor> 8: <param name="filename" type="System.String"> 9: <appSettings appSettingsKey="LoggerFilename"/> 10: </param> 11: </constructor> 12: </register> 13: </container> 14: </unity> Notice the appSettings section, where we add a LoggerFilename entry, which is the same as the one referred by our AppSettingsParameterInjectionElementExtension extension. For more advanced behavior, you can add a TypeConverterName attribute to the appSettings declaration, where you can pass an assembly qualified name of a class that inherits from TypeConverter. This class will be responsible for converting the appSettings value to a destination type. 
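    As an illustration of the type converter hook just mentioned, here is a hypothetical converter (not part of the original post) that turns values such as "10KB" or "2MB" into an Int32 byte count; its assembly qualified name would be supplied through the type converter attribute of the appSettings element:

        using System;
        using System.ComponentModel;
        using System.Globalization;

        public sealed class ByteSizeConverter : TypeConverter
        {
            public override Boolean CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
            {
                return ((sourceType == typeof(String)) || base.CanConvertFrom(context, sourceType));
            }

            public override Object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, Object value)
            {
                String text = value as String;

                if (text == null)
                {
                    return (base.ConvertFrom(context, culture, value));
                }

                text = text.Trim().ToUpperInvariant();

                // Interpret an optional KB/MB suffix; anything else is parsed as a plain number.
                if (text.EndsWith("MB") == true)
                {
                    return (Int32.Parse(text.Substring(0, text.Length - 2), CultureInfo.InvariantCulture) * 1024 * 1024);
                }

                if (text.EndsWith("KB") == true)
                {
                    return (Int32.Parse(text.Substring(0, text.Length - 2), CultureInfo.InvariantCulture) * 1024);
                }

                return (Int32.Parse(text, CultureInfo.InvariantCulture));
            }
        }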
Injection By Attribute If we would like to use attributes instead, we need to create a custom attribute by inheriting from DependencyResolutionAttribute: 1: [Serializable] 2: [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Property, AllowMultiple = false, Inherited = true)] 3: public sealed class AppSettingsDependencyResolutionAttribute : DependencyResolutionAttribute 4: { 5: public AppSettingsDependencyResolutionAttribute(String appSettingsKey) 6: { 7: this.AppSettingsKey = appSettingsKey; 8: } 9:  10: public String TypeConverterTypeName 11: { 12: get; 13: set; 14: } 15:  16: public String AppSettingsKey 17: { 18: get; 19: private set; 20: } 21:  22: public override IDependencyResolverPolicy CreateResolver(Type typeToResolve) 23: { 24: return (new AppSettingsParameterValueElement() { AppSettingsKey = this.AppSettingsKey, TypeConverterTypeName = this.TypeConverterTypeName }); 25: } 26: } As for file configuration, there is a mandatory property for setting the appSettings key and an optional TypeConverterName  for setting the name of a TypeConverter. Both the custom attribute and the custom section return an instance of the injector AppSettingsParameterValueElement that we implemented in the first place. Now, the attribute needs to be placed before the injected class’ Filename property: 1: public class FileLogger : ILogger 2: { 3: [AppSettingsDependencyResolution("LoggerFilename")] 4: public String Filename 5: { 6: get; 7: set; 8: } 9:  10: #region ILogger Members 11:  12: public void Log(String message) 13: { 14: using (Stream file = File.OpenWrite(this.Filename)) 15: { 16: Byte[] data = Encoding.Default.GetBytes(message); 17: 18: file.Write(data, 0, data.Length); 19: } 20: } 21:  22: #endregion 23: } Or, if we wanted to use constructor injection: 1: public class FileLogger : ILogger 2: { 3: public String Filename 4: { 5: get; 6: set; 7: } 8:  9: public FileLogger([AppSettingsDependencyResolution("LoggerFilename")] String filename) 10: { 11: this.Filename = filename; 12: } 13:  14: #region ILogger Members 15:  16: public void Log(String message) 17: { 18: using (Stream file = File.OpenWrite(this.Filename)) 19: { 20: Byte[] data = Encoding.Default.GetBytes(message); 21: 22: file.Write(data, 0, data.Length); 23: } 24: } 25:  26: #endregion 27: } Usage Just do: 1: ILogger logger = ServiceLocator.Current.GetInstance<ILogger>("File"); And off you go! A simple way do avoid hardcoded values in component registrations. Of course, this same concept can be applied to registry keys, environment values, XML attributes, etc, etc, just change the implementation of the AppSettingsParameterValueElement class. Next stop: custom lifetime managers.
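    And, as a quick illustration of that closing remark, the same pattern can point at any other value store just by swapping the lookup. This hypothetical, deliberately simplified resolver policy (String members only, no type conversion) reads an environment variable instead of the appSettings section:

        using System;
        using Microsoft.Practices.ObjectBuilder2;

        public sealed class EnvironmentVariableResolverPolicy : IDependencyResolverPolicy
        {
            private readonly String variableName;

            public EnvironmentVariableResolverPolicy(String variableName)
            {
                this.variableName = variableName;
            }

            public Object Resolve(IBuilderContext context)
            {
                // A full implementation would inspect context.CurrentOperation to find the
                // member type and convert the value, as AppSettingsParameterValueElement does above.
                return (Environment.GetEnvironmentVariable(this.variableName));
            }
        }

    A matching attribute or section element would simply return this policy from its CreateResolver method, exactly like the appSettings versions above do.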

    Read the article

  • Custom Content Pipeline with Automatic Serialization Load Error

    - by Direweasel
    I'm running into this error: Error loading "desert". Cannot find type TiledLib.MapContent, TiledLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null. at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.InstantiateTypeReader(String readerTypeName, ContentReader contentReader, ContentTypeReader& reader) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(String readerTypeName, ContentReader contentReader, List1& newTypeReaders) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader() at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]() at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action1 recordDisposableObject) at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName) at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51 at Microsoft.Xna.Framework.Game.Initialize() at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39 at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun) at Microsoft.Xna.Framework.Game.Run() at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15 When trying to run the game. This is a basic demo to try and utilize a separate project library called TiledLib. I have four projects overall: TiledLib (C# Class Library) TiledTest (Windows Game) TiledTestContent (Content) TMX CP Ext (Content Pipeline Extension Library) TiledLib contains MapContent which is throwing the error, however I believe this may just be a generic error with a deeper root problem. EMX CP Ext contains one file: MapProcessor.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Content.Pipeline; using Microsoft.Xna.Framework.Content.Pipeline.Graphics; using Microsoft.Xna.Framework.Content.Pipeline.Processors; using Microsoft.Xna.Framework.Content; using TiledLib; namespace TMX_CP_Ext { // Each tile has a texture, source rect, and sprite effects. [ContentSerializerRuntimeType("TiledTest.Tile, TiledTest")] public class DemoMapTileContent { public ExternalReference<Texture2DContent> Texture; public Rectangle SourceRectangle; public SpriteEffects SpriteEffects; } // For each layer, we store the size of the layer and the tiles. [ContentSerializerRuntimeType("TiledTest.Layer, TiledTest")] public class DemoMapLayerContent { public int Width; public int Height; public DemoMapTileContent[] Tiles; } // For the map itself, we just store the size, tile size, and a list of layers. 
[ContentSerializerRuntimeType("TiledTest.Map, TiledTest")] public class DemoMapContent { public int TileWidth; public int TileHeight; public List<DemoMapLayerContent> Layers = new List<DemoMapLayerContent>(); } [ContentProcessor(DisplayName = "TMX Processor - TiledLib")] public class MapProcessor : ContentProcessor<MapContent, DemoMapContent> { public override DemoMapContent Process(MapContent input, ContentProcessorContext context) { // build the textures TiledHelpers.BuildTileSetTextures(input, context); // generate source rectangles TiledHelpers.GenerateTileSourceRectangles(input); // now build our output, first by just copying over some data DemoMapContent output = new DemoMapContent { TileWidth = input.TileWidth, TileHeight = input.TileHeight }; // iterate all the layers of the input foreach (LayerContent layer in input.Layers) { // we only care about tile layers in our demo TileLayerContent tlc = layer as TileLayerContent; if (tlc != null) { // create the new layer DemoMapLayerContent outLayer = new DemoMapLayerContent { Width = tlc.Width, Height = tlc.Height, }; // we need to build up our tile list now outLayer.Tiles = new DemoMapTileContent[tlc.Data.Length]; for (int i = 0; i < tlc.Data.Length; i++) { // get the ID of the tile uint tileID = tlc.Data[i]; // use that to get the actual index as well as the SpriteEffects int tileIndex; SpriteEffects spriteEffects; TiledHelpers.DecodeTileID(tileID, out tileIndex, out spriteEffects); // figure out which tile set has this tile index in it and grab // the texture reference and source rectangle. ExternalReference<Texture2DContent> textureContent = null; Rectangle sourceRect = new Rectangle(); // iterate all the tile sets foreach (var tileSet in input.TileSets) { // if our tile index is in this set if (tileIndex - tileSet.FirstId < tileSet.Tiles.Count) { // store the texture content and source rectangle textureContent = tileSet.Texture; sourceRect = tileSet.Tiles[(int)(tileIndex - tileSet.FirstId)].Source; // and break out of the foreach loop break; } } // now insert the tile into our output outLayer.Tiles[i] = new DemoMapTileContent { Texture = textureContent, SourceRectangle = sourceRect, SpriteEffects = spriteEffects }; } // add the layer to our output output.Layers.Add(outLayer); } } // return the output object. because we have ContentSerializerRuntimeType attributes on our // objects, we don't need a ContentTypeWriter and can just use the automatic serialization. 
return output; } } } TiledLib contains a large amount of files including MapContent.cs using System; using System.Collections.Generic; using System.Globalization; using System.Xml; using Microsoft.Xna.Framework.Content.Pipeline; namespace TiledLib { public enum Orientation : byte { Orthogonal, Isometric, } public class MapContent { public string Filename; public string Directory; public string Version = string.Empty; public Orientation Orientation; public int Width; public int Height; public int TileWidth; public int TileHeight; public PropertyCollection Properties = new PropertyCollection(); public List<TileSetContent> TileSets = new List<TileSetContent>(); public List<LayerContent> Layers = new List<LayerContent>(); public MapContent(XmlDocument document, ContentImporterContext context) { XmlNode mapNode = document["map"]; Version = mapNode.Attributes["version"].Value; Orientation = (Orientation)Enum.Parse(typeof(Orientation), mapNode.Attributes["orientation"].Value, true); Width = int.Parse(mapNode.Attributes["width"].Value, CultureInfo.InvariantCulture); Height = int.Parse(mapNode.Attributes["height"].Value, CultureInfo.InvariantCulture); TileWidth = int.Parse(mapNode.Attributes["tilewidth"].Value, CultureInfo.InvariantCulture); TileHeight = int.Parse(mapNode.Attributes["tileheight"].Value, CultureInfo.InvariantCulture); XmlNode propertiesNode = document.SelectSingleNode("map/properties"); if (propertiesNode != null) { Properties = new PropertyCollection(propertiesNode, context); } foreach (XmlNode tileSet in document.SelectNodes("map/tileset")) { if (tileSet.Attributes["source"] != null) { TileSets.Add(new ExternalTileSetContent(tileSet, context)); } else { TileSets.Add(new TileSetContent(tileSet, context)); } } foreach (XmlNode layerNode in document.SelectNodes("map/layer|map/objectgroup")) { LayerContent layerContent; if (layerNode.Name == "layer") { layerContent = new TileLayerContent(layerNode, context); } else if (layerNode.Name == "objectgroup") { layerContent = new MapObjectLayerContent(layerNode, context); } else { throw new Exception("Unknown layer name: " + layerNode.Name); } // Layer names need to be unique for our lookup system, but Tiled // doesn't require unique names. string layerName = layerContent.Name; int duplicateCount = 2; // if a layer already has the same name... if (Layers.Find(l => l.Name == layerName) != null) { // figure out a layer name that does work do { layerName = string.Format("{0}{1}", layerContent.Name, duplicateCount); duplicateCount++; } while (Layers.Find(l => l.Name == layerName) != null); // log a warning for the user to see context.Logger.LogWarning(string.Empty, new ContentIdentity(), "Renaming layer \"{1}\" to \"{2}\" to make a unique name.", layerContent.Type, layerContent.Name, layerName); // save that name layerContent.Name = layerName; } Layers.Add(layerContent); } } } } I'm lost as to why this is failing. Thoughts? -- EDIT -- After playing with it a bit, I would think it has something to do with referencing the projects. I'm already referencing the TiledLib within my main windows project (TiledTest). However, this doesn't seem to make a difference. I can place the dll generated from the TiledLib project into the debug folder of TiledTest, and this causes it to generate a different error: Error loading "desert". Cannot find ContentTypeReader for Microsoft.Xna.Framework.Content.Pipeline.ExternalReference`1[Microsoft.Xna.Framework.Content.Pipeline.Graphics.Texture2DContent]. 
at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType) at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper..ctor(ContentTypeReaderManager manager, FieldInfo fieldInfo, PropertyInfo propertyInfo, Type memberType, Boolean canWrite) at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper.TryCreate(ContentTypeReaderManager manager, Type declaringType, FieldInfo fieldInfo) at Microsoft.Xna.Framework.Content.ReflectiveReader1.Initialize(ContentTypeReaderManager manager) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader() at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]() at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action1 recordDisposableObject) at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName) at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51 at Microsoft.Xna.Framework.Game.Initialize() at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39 at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun) at Microsoft.Xna.Framework.Game.Run() at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15 This is all incredibly frustrating as the demo doesn't appear to have any special linking properties. The TiledLib I am utilizing is from Nick Gravelyn, and can be found here: https://bitbucket.org/nickgravelyn/tiledlib. The demo it comes with works fine, and yet in recreating I always run into this error.
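    For what it's worth, the [ContentSerializerRuntimeType] attributes above assume that matching runtime classes exist in the TiledTest game project, with members whose names mirror the content classes (the automatic serializer matches members by name, and ExternalReference&lt;Texture2DContent&gt; becomes a Texture2D at runtime). A sketch of what those classes would roughly look like, reconstructed from the content types rather than taken from the actual demo:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        namespace TiledTest
        {
            // Mirrors DemoMapTileContent.
            public class Tile
            {
                public Texture2D Texture;
                public Rectangle SourceRectangle;
                public SpriteEffects SpriteEffects;
            }

            // Mirrors DemoMapLayerContent.
            public class Layer
            {
                public int Width;
                public int Height;
                public Tile[] Tiles;
            }

            // Mirrors DemoMapContent.
            public class Map
            {
                public int TileWidth;
                public int TileHeight;
                public List<Layer> Layers = new List<Layer>();
            }
        }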

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction into the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present of that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); Will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. 
This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, What gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource)").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used. Worse yet, they might have used an anonymous type and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type. 
var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1              where i > threshold              select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is if the LinqPad driver can resolve the type or not. If it’s not possible then a new type will be generated that approximates the type that exists on the server. Control metadata In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us what’s the name of the entity used to build this property in the context. Runtime information So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you can have access to process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.). Regards, The StreamInsight Team
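    Putting the fragments above together, a minimal end-to-end sketch (the endpoint address, application and entity names are placeholders, and the exact overloads may differ slightly between StreamInsight builds) that deploys a source, defines an observer and runs them as a named process looks roughly like this:

        using System;
        using System.Reactive;
        using System.Reactive.Linq;
        using System.ServiceModel;
        using Microsoft.ComplexEventProcessing;
        using Microsoft.ComplexEventProcessing.Linq;

        class RemoteBindingSketch
        {
            static void Main()
            {
                // Connect to the remote StreamInsight 2.1 instance (placeholder address).
                using (Server server = Server.Connect(new EndpointAddress("http://localhost/StreamInsight/MyInstance")))
                {
                    Application a = server.Applications.ContainsKey("WhiteFish")
                        ? server.Applications["WhiteFish"]
                        : server.CreateApplication("WhiteFish");

                    // A cold source producing a small range of integers, deployed under a public name.
                    var source = a.DefineObservable<int>(() => Observable.Range(0, 3))
                                  .Deploy("ObservableSource");

                    // The observer runs in the server process, so its output shows up there,
                    // not in the client (or LinqPad) window.
                    var sink = a.DefineObserver(() => Observer.Create<int>(i => Console.WriteLine(i)));

                    // Bind the source to the sink and run the binding as a named process.
                    using (source.Bind(sink).Run("DemoProcess"))
                    {
                        Console.ReadLine();
                    }
                }
            }
        }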

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction into the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present of that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); Will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. 
This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, What gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource)").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used. Worse yet, they might have used an anonymous type and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type. 
var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1              where i > threshold              select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is if the LinqPad driver can resolve the type or not. If it’s not possible then a new type will be generated that approximates the type that exists on the server. Control metadata In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us what’s the name of the entity used to build this property in the context. Runtime information So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you can have access to process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.). Regards, The StreamInsight Team
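    To make the "Runtime information" step above concrete, here is a minimal sketch (not from the original post) of binding a source to a sink and running it, written as it might appear in the LinqPad query window. It assumes the generated context properties shown earlier (Application, ObservableSource), the StreamInsight 2.1 DefineObserver/Bind/Run pattern, and a made-up process name:

    // Define a simple sink on the remote application (Observer.Create comes from Rx).
    var sink = Application.DefineObserver(() => Observer.Create<int>(i => Console.WriteLine(i)));

    // Compose on top of the deployed source, bind it to the sink, and run it as a named process.
    var process = (from i in ObservableSource
                   where i % 2 == 0
                   select i)
                  .Bind(sink)
                  .Run("SampleProcess");

    // After refreshing the connection, the new process appears in the Processes collection.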

    Read the article

  • Ubuntu touch flashing failed/taking to long. Black screen! Redeploy help!

    - by Marius Rye
    So I'm brand new to the whole unlock, root and flashing thing. I just tried flashing Ubuntu Touch on my Galaxy Nexus (international GSM/HSPA+) and everything seemed fine until the very end, when I got this error message: ERROR:phablet-flash:Installation is taking too long or an error occured along the way. I didn't know what to do, so I disconnected my device (which was in Ubuntu recovery) and tried rebooting it from there. It worked "sort of": I got the Google logo with the unlocked icon, then it continued to a black screen. The installation guide at http://www.ubuntu.com/phone/install says I should "try wiping the /data partition on your device and redeploy". The problem is that I don't know how to do this, and I'm basically sitting here with a useless phone right now. I would be extremely grateful for some expert help.

    Read the article

  • Does it help to be core programmer of a product (product meant for social good ) for getting into Ph. D. in top university in USA say top 20?

    - by Maddy.Shik
    Hey, I am working as a core developer on a product which will be launched in the US market in a few months if successful. Can this factor improve my chance of getting into a Ph.D. program at a good university (say, top 20 in the US)? Normally good universities like CMU, Stanford, MIT, and Cornell are more interested in a student's profile: research work, undergraduate school, etc. I did not graduate from a very good university; it is only ranked in the top 20 in India. Nor have I done any research work so far. But being one of the founding members of the company and developing its product, I want to know if this factor can help and to what extent. For universities ranked lower than 20, what matters most is the GRE General score and GPA, but I guess a top university must appreciate a person's real efforts.

    Read the article

  • I need to hire a web developer but I'm not sure which qualifications/skills they should have to do what I see in my head. Any help?

    - by Beau
    I'm looking to develop a simple image editor wherein a user can cycle through multiple layers (6-10) of images to build a unique avatar. For example, the initial image layer would be a 2D character of their choice, mostly void of any accessories. The next image layer would be a shirt, another pants, another jewelry, etc. I like the idea of an HTML5 front end so it can be a web app compatible across most platforms, but I'm lost on the back-end requirements of such a project. #noob Any help would be greatly appreciated.

    Read the article

  • What is the value to checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive, so if the method were passed "cat" it could return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would fix this particular case, a more complex issue may take weeks to fix, while identifying the bug with a unit test is a much smaller task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g., whether a method is case sensitive or not), or involve code paths that do not trigger the bug based on how they are currently called. But unit tests can be created that execute specific scenarios, using valid inputs, that will cause the bug to surface. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be made to pass once someone fixes the code. On one hand, checking these tests in shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a code check-in would be difficult.
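    To make the scenario concrete, here is a minimal sketch (hypothetical names, MSTest syntax) of such a test: it documents the desired case-insensitive behavior, fails against the current case-sensitive implementation, and is flagged with [Ignore] so it does not break the build while the bug is still open.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical types matching the description above.
    public enum Animal { Null, Cat }

    public static class AnimalParser
    {
        // Current behavior: case sensitive, so "cat" falls through to Animal.Null.
        public static Animal Parse(string input)
        {
            return input == "Cat" ? Animal.Cat : Animal.Null;
        }
    }

    [TestClass]
    public class AnimalParserTests
    {
        // Documents the desired behavior; fails until the parser is made case insensitive.
        // Marking it [Ignore] (or a custom "KnownBug" category) keeps the build green
        // while still recording that the bug exists.
        [Ignore]
        [TestMethod]
        public void Parse_ShouldIgnoreCase()
        {
            Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
        }
    }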

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: Case Sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (ex. case sensitive or not), or code that does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen and are valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category etc, to determine whether a build was successful based on tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs and weeding through the ones that should fail vs. failures due to a code check-in would be difficult to find.

    Read the article

  • Securing a Cloud-Based Data Center

    - by Orgad Kimchi
    No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications. In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment. As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users. Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here. When we build a cloud, the following aspects related to the security of the data and applications in the cloud become a concern:
    • Sensitive data must be protected from unauthorized access while residing on storage devices, during transmission between servers and clients, and when it is used by applications.
    • When a project is completed, all copies of sensitive data must be securely deleted and the original data must be kept permanently secure.
    • Communications between users and the cloud must be protected to prevent exposure of sensitive information from “man-in-the-middle” attacks.
    • Limiting the operating system’s exposure protects against malicious attacks and penetration by unauthorized users or automated “bots” and “rootkits” designed to gain privileged access.
    • Strong authentication and authorization procedures further protect the operating system from tampering.
    • Denial-of-service attacks, whether they are started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.
    In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system. Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured. This article explains how.

    Read the article

  • Can't install Ubuntu 11.10 on a seperate disk partition. I have a log file. Please help!

    - by les02jen17
    Before I installed Ubuntu, I resized my C: drive (which has my Windows 7 OS). Prior to resizing, I even defragged the drive. I tried using Wubi to install Ubuntu on the created partition, but I received an error. I have the log file and would post it, but it's too long and the site won't let me. :( So anyway, I wasn't able to install it that way, so I uninstalled it. After uninstalling Wubi and Ubuntu, I tried installing from the CD in my CD-ROM drive. The installation was a success; however, upon rebooting I can boot to neither Ubuntu nor Windows 7. I get stuck in an infinite boot loop. I hope you can help me out! :(

    Read the article

  • HELP!! Rake aborted! Can't modify frozen hash

    - by pmneve
    Too bad even the trace doesn't say which hash is involved. Sorry this post is long: am trying to provide enough context to be meaningful. Occurs intermittently when rake jobs:work is pulling a command out of delayed_jobs while my status observer is in the process of parsing a log file for detailed results of the previous delayed_job denizen. I have an observer class (in RAILS_ROOT/lib ) which listens for the events, makes a copy of them and calls the owner class ( in apps/models ) which then calls on the log parser (also in /lib) to do the actual work. (Should both of those classes, the observer and the parser be in app/models?) Am due to deliver this application in a few days and this is killing it (and me). Am using DirectoryWatcher to look for flag files that indicate the start and finish of the delayed_jobs. That is started at the end of environment.rb like this: require 'directory_watcher' $scriptStatusObserver = ScriptStatusObserver.new dirToWatch ="#{RAILS_ROOT}/tmp/flags" $directoryWatcher = DirectoryWatcher.new( dirToWatch ) $directoryWatcher.glob= "*.flg" $directoryWatcher.interval=(15) $directoryWatcher.add_observer( $scriptStatusObserver ) $directoryWatcher.persist=("#{RAILS_ROOT}/tmp/flags/dw_state.yml") $directoryWatcher.start at_exit { $directoryWatcher.stop } This code is outside of the run method (btw is that the best place or is inside the run better?) Here is the observer: require 'script_run' class ScriptStatusObserver def initialize @rcvdEvents = [] end def update( *events ) begin puts "#{LINE.to_s}: ScriptStatusObserver events: \n"+events.to_yaml cnt = 0 events.each do |e| if e.to_s.match(/^\s*added/) cnt = cnt + 1 @rcvdEvents << e end end ScriptRun.new.catch_up( @rcvdEvents ) if cnt > 0 @rcvdEvents.clear rescue puts $! end end end Here is ScriptRun (it attaches to an associative table built with has_many:through) require 'observer' class ScriptRun < ActiveRecord::Base set_table_name "scripts_runs" belongs_to :script belongs_to :run def parse( result ) parser = LogParser.new parser.parse(result) end def catch_up( events ) events.each do |e| typ = e.type path = e.path thisMatch = path.match(/flags\/(\d+)_(\d+)_([\d\.]+)_(\w+)\.flg/) run_id = thisMatch[1] script_id = thisMatch[2] ts = thisMatch[3] status = thisMatch[4] if e.to_s.match(/^\s*added/) status_update( script_id, run_id, status, ts, path ) end end end def status_update( script_id, run_id, status, ts, path ) scriptrun = ScriptRun.find(:first, :conditions => [ "run_id = ? 
and script_id = ?", run_id.to_i, script_id.to_i ]) if scriptrun.kind_of?(ScriptRun) currStatus = scriptrun.status if not currStatus == 'completed' scriptrun.update_attribute(:status, status) if status == 'parse' flag = File.new(path) logSpec = flag.gets flag.close logName = File.basename(logSpec) logPath = logSpec.sub(logName, '') logName =~ /^(([\w_]+)_([\w]+)_(\d+))\.log$/ name = $1 basename = $2 runenv = $3 tsOrPid = $4 result = Result.new result.log_path = logPath result.basename = basename result.name = name result.script_id = script_id.to_i result.run_id = run_id.to_i if runenv == 'sit' runenv = 'SIT3348' end result.application_environment_id = ApplicationEnvironment.find(:first, :conditions => [ "nodename = ?", runenv]).id parse(result) if run_completed?( run_id ) myRun = Run.find(run_id.to_i) if myRun.kind_of?( Run ) myRun.update_attribute( :completed, Time.now.to_f ) end end end end else puts "#{__LINE__.to_s}: ScriptRun.status_update: ScriptRun not found for run #{run_id} script #{script_id} ts #{ts.to_s}" end File.delete(path) end def run_completed?( id ) scriptruns = ScriptRun.find(:all, :conditions = [ "run_id = ?", id.to_i] ) scriptruns.each do |sr| if not sr.status == 'completed' return false end end return true end end LogParser is too long even for this post but it reads the script log and pulls detailed information (counts and timings) out of the log and writes to a details table. It also tallies and calculates averages and rolls those up into summary tables for quicker access from the web pages. Here is the error trace: (don't ask why everything is under my Windows profile. It's a long story) Scanner running 1270239731.43 directory_watcher.notify_observers: #, #] update:[#, /pneve/workspace/waftt-0.29/tmp/flags/100039_18_1270239550.108_parse.flg"] rake aborted! 
can't modify frozen hash C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/l ib/active_record/attribute_methods.rb:313:in []=' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/l ib/active_record/attribute_methods.rb:313:inwrite_attribute_without_dirty' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/l ib/active_record/dirty.rb:139:in write_attribute' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activerecord/l ib/active_record/attribute_methods.rb:211:inlast_error=' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:141:in handle_failed_job' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:115:inrun' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:162:in reserve_and_run_one_job' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:92:inwork_off' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:91:in times' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:91:inwork_off' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:66:in start' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/rails/activesupport/ lib/active_support/core_ext/benchmark.rb:10:inrealtime' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:65:in start' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:62:inloop' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/worker.rb:62:in start' C:/Documents and Settings/pneve/workspace/waftt-0.29/vendor/plugins/delayed_job/ lib/delayed/tasks.rb:13 c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:636:incall' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:636:in execute' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:631:ineach' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:631:in execute' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:597:ininvoke_with_call_chain' c:/Documents and Settings/pneve/ruby/lib/ruby/1.8/monitor.rb:242:in synchronize ' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:590:ininvoke_with_call_chain' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:583:in invoke' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2051:ininvoke_task' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2029:in top_level' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2029:ineach' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2029:in top_level' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2068:instandard_exception_handling' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2023:in top_level' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. 
rb:2001:inrun' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:2068:in standard_exception_handling' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake. rb:1998:inrun' c:/Documents and Settings/pneve/ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake: 31 c:/Documents and Settings/pneve/ruby/bin/rake:16:in `load' c:/Documents and Settings/pneve/ruby/bin/rake:16

    Read the article

  • How do I fix the issue with tables in xsl-fo, please help...

    - by atrueguy
    <?xml version="1.0" encoding="ISO-8859-1"?> <?xml:stylesheet type="text/xsl" href="currency.xslt"?> <currencylist> <title>Currencies By Country</title> <countries> <country>Australia</country> <currency>Australian Dollar</currency> </countries> <countries> <country>Austria</country> <currency>Schilling</currency> </countries> <countries> <country>Belgium</country> <currency>Belgium Franc</currency> </countries> <countries> <country>Canada</country> <currency>Canadian Dollar</currency> </countries> <countries> <country>England</country> <currency>Pound</currency> </countries> <countries> <country>Fiji</country> <currency>Fijian Dollar</currency> </countries> <countries> <country>France</country> <currency>Franc</currency> </countries> <countries> <country>Germany</country> <currency>DMark</currency> </countries> <countries> <country>Hong Kong</country> <currency>Hong Kong Dollar</currency> </countries> <countries> <country>Italy</country> <currency>Lira</currency> </countries> <countries> <country>Japan</country> <currency>Yen</currency> </countries> <countries> <country>Netherlands</country> <currency>Guilder</currency> </countries> <countries> <country>Switzerland</country> <currency>SFranc</currency> </countries> <countries> <country>USA</country> <currency>Dollar</currency> </countries> </currencylist> This is my exact xml code. I have written a xsl-fo for this xml file and I am failing to produce the output in a table. please check and help me in this. ASAP. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"> <fo:layout-master-set> <fo:simple-page-master master-name="Letter" page-height="11in" page-width="8.5in"> <fo:region-body region-name="only_region" margin="1in" background-color="#CCCCCC"/> </fo:simple-page-master> </fo:layout-master-set> <fo:page-sequence master-reference="Letter"> <fo:flow flow-name="only_region"> <fo:block text-align="left"><xsl:call-template name="show_title"/></fo:block> <fo:table-and-caption> <fo:table> <fo:table-column column-width="25mm"/> <fo:table-column column-width="25mm"/> <fo:table-column column-width="25mm"/> <fo:table-header> <fo:table-row> <fo:table-cell> <fo:block font-weight="bold">SI No</fo:block> </fo:table-cell> <fo:table-cell> <fo:block font-weight="bold">Country</fo:block> </fo:table-cell> <fo:table-cell> <fo:block font-weight="bold">Currency</fo:block> </fo:table-cell> </fo:table-row> </fo:table-header> <fo:table-body> <fo:table-row> <fo:table-cell> <fo:block><xsl:call-template name="select_position"/></fo:block> </fo:table-cell> <fo:table-cell> <fo:block><xsl:call-template name="select_country"/></fo:block> </fo:table-cell> <fo:table-cell> <fo:block><xsl:call-template name="select_currency"/></fo:block> </fo:table-cell> </fo:table-row> </fo:table-body> </fo:table> </fo:table-and-caption> </fo:flow> </fo:page-sequence> </fo:root> </xsl:template> <xsl:template name="show_title" match="currencylist"> <h2><xsl:value-of select="currencylist/title"/></h2> </xsl:template> <xsl:template name="select_position" match="currencylist"> <xsl:for-each select="currencylist/countries"> <xsl:value-of select="position()"/> </xsl:for-each> </xsl:template> <xsl:template name="select_country" match="currencylist"> <xsl:for-each select="currencylist/countries"> <xsl:value-of select="country"/> </xsl:for-each> </xsl:template> <xsl:template name="select_currency" match="currencylist"> <xsl:for-each select="currencylist/countries"> 
<xsl:value-of select="currency"/> </xsl:for-each> </xsl:template> </xsl:stylesheet> Kindly help me out in this to produce a output in the table.

    Read the article

  • Grid overlayed on image using javascript, need help getting grid coordinates.

    - by Alos
    Hi I am fairly new to javascript and could use some help, I am trying to overlay a grid on top of an image and then be able to have the user click on the grid and get the grid coordinate from the box that the user clicked. I have been working with the code from the following stackoverflow question: Creating a grid overlay over image. link text Here is the code that I have so far: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> </script> <script type="text/javascript"> var SetGrid = function(el, sz, nr, nc){ //get number of rows/columns according to the 'grid' size //numcols = el.getSize().x/sz; //numrows = el.getSize().y/sz; numcols = 48; numrows = 32; //create table element for injecting cols/rows var gridTable = new Element('table', { 'id' : 'gridTable', 'styles' : { 'width' : el.getSize().x, 'height' : el.getSize().y, 'top' : el.getCoordinates().top, 'left' : el.getCoordinates().left } }); //inject rows/cols into gridTable for (row = 1; row<=numrows; row++){ thisRow = new Element('tr', { 'id' : row, 'class' : 'gridRow' }); for(col = 1; col<=numcols; col++){ thisCol = new Element('td', { 'id' : col, 'class' : 'gridCol0' }); //each cell gets down/up over event... down starts dragging|up stops|over draws area if down was fired thisCol.addEvents({ 'mousedown' : function(){ dragFlag = true; startRow = this.getParent().get('id'); startCol = this.get('id'); }, 'mouseup' : function(){ dragFlag = false; }, 'mouseover' : function(){ if (dragFlag==true){ this.set('class', 'gridCol'+$$('#lvlSelect .on').get('value')); } }, 'click' : function(){ //this.set('class', 'gridCol'+$$('#lvlSelect .on').get('id').substr(3, 1) ); str = $$('#lvlSelect .on').get('id'); alert(str.substr(2, 3)); } }); thisCol.inject(thisRow, 'bottom'); }; thisRow.inject(gridTable, 'bottom'); }; gridTable.inject(el.getParent()); } //sens level selector func var SetSensitivitySelector = function(el, sz, nr, nc){ $$('#lvlSelect ul li').each(function(el){ el.addEvents({ 'click' : function(){ $$('#lvlSelect ul li').set('class', ''); this.set('class', 'on'); }, 'mouseover' : function(){ el.setStyle('cursor','pointer'); }, 'mouseout' : function(){ el.setStyle('cursor',''); } }); }); } //execute window.addEvent('load', function(){ SetGrid($('imagetomap'), 32); SetSensitivitySelector(); }); var gridSize = { x: 48, y: 32 }; var img = document.getElementById('imagetomap'); img.onclick = function(e) { if (!e) e = window.event; alert(Math.floor(e.offsetX/ gridSize.x) + ', ' + Math.floor(e.offsetY / gridSize.y)); } </script> <style> #imagetomapdiv { float:left; display: block; } #gridTable { border:1px solid red; border-collapse:collapse; position:absolute; z-index:5; } #gridTable td { opacity:0.2; filter:alpha(opacity=20); } #gridTable .gridCol0 { border:1px solid gray; background-color: none; } #gridTable .gridCol1 { border:1px solid gray; background-color: green; } #gridTable .gridCol2 { border:1px solid gray; background-color: blue; } #gridTable .gridCol3 { border:1px solid gray; background-color: yellow; } #gridTable .gridCol4 { border:1px solid gray; background-color: orange; } #gridTable .gridCol5 { border:1px solid gray; background-color: red; } #lvlSelect ul {float: left; display:block; position:relative; margin-left: 20px; padding: 10px; } #lvlSelect ul li { width:40px; text-align:center; display:block; border:1px solid black; position:relative; padding: 10px; list-style:none; opacity:0.2; filter:alpha(opacity=20); } #lvlSelect ul li.on { opacity:1; 
filter:alpha(opacity=100); } #lvlSelect ul #li0 { background-color: none; } #lvlSelect ul #li1 { background-color: green; } #lvlSelect ul #li2 { background-color: blue; } #lvlSelect ul #li3 { background-color: yellow; } #lvlSelect ul #li4 { background-color: orange; } #lvlSelect ul #li5 { background-color: red; } </style> </div> <div id="lvlSelect"> <ul> <li value="0" id="li0">0</li> <li value="1" id="li1">1</li> <li value="2" id="li2">2</li> <li value="3" id="li3">3</li> <li value="4" id="li4">4</li> <li value="5" id="li5" class="on">5</li> </ul> </div> In this example a grid box changes color when it is clicked on the image, but I would also like to get the grid coordinates of the box that was clicked. Any help would be great. Thank you

    Read the article

  • Help Me Make My Pascal Program Display Multiple Names Which Were Entered Plz.

    - by Sketchie
    Hey,Guys I'm having trouble displaying names using arrays in my Pascal program. But first heres the question I'm writing the program to answer : Develop pseudocode that accepts as input the name of an unspecified number of masqueraders who each have paid the full cost of their costume and the amount each has paid. A masquerader may have paid for a costume in any of the five sections in the band. The algorithm should determine the section in which the masquerader plays, based on the amount he/she has paid for the costume. The algorithm should also determine the number of masqueraders who have paid for costumes in each section. The names of persons and the section for which they have paid should be printed. A listing of the sections and the total number of persons registered to play in each section should also be printed, along with the total amount paid in each section. And here's what I've done so far(It's a bit messy): program Maqueprgrm; uses wincrt; type names=array [1..30]of string; var name:string; s1_nms:names; s2_nms:names; s3_nms:names; s4_nms:names; s5_nms:names; amt_paid:integer; stop,x,full,ins,ttl1,ttl2,ttl3,ttl4,ttl5,sec_1,sec_2,sec_3,sec_4,sec_5,sec:integer; begin stop := 0; writeln ('To Begin Data Collecting Section Press 1, To End Press 0'); readln(stop); while stop <0 do begin for x:= 1 to 30 do x:= x+1; writeln ('Please Enter The Name of Masquerader.'); readln (name); writeln ('Please Enter The Amount Paid For Costume.'); readln (amt_paid); IF amt_paid = 144 THEN Begin full:=full+1; {people who paid in full + 1} sec_1:=sec_1+1; {people in this section + 1} ttl1:= ttl1+amt_paid; {total money earned in this section + amount person paid} s1_nms[x]:= name; {People's names in this section plus current entry's name} End else IF amt_paid = 184 then Begin ins:=ins+1; sec_1:=sec_1+1; ttl1:=ttl1+amt_paid; s1_nms[x]:= name; end else IF amt_paid = 198 then Begin full:=full+1; sec_2:=sec_2+1; ttl2:=ttl2+amt_paid; s2_nms[x]:= 'name'; end else IF amt_paid = 153 then Begin ins:=ins+1; sec_2:=sec_2+1; ttl2:=ttl2+amt_paid; s2_nms[x]:= 'name'; end else IF amt_paid = 264 then Begin full:=full+1; sec_3:=sec_3+1; ttl3:=ttl3+amt_paid; s3_nms[x]:= 'name'; end else IF amt_paid = 322 then Begin ins:=ins+1; sec_3:=sec_3+1; ttl3:=ttl3+amt_paid; s3_nms[x]:= 'name'; end else IF amt_paid = 315 then Begin full:=full+1; sec_4:=sec_4+1; ttl4:=ttl4+amt_paid; s4_nms[x]:= 'name'; end else IF amt_paid = 402 then Begin ins:=ins+1; sec_4:=sec_4+1; ttl4:=ttl4+amt_paid; s4_nms[x]:= 'name'; end else IF amt_paid = 382. 
then Begin ins:=ins+1; sec_4:=sec_4+1; ttl4:=ttl4+amt_paid; s4_nms[x]:= 'name'; end else IF amt_paid = 488 then Begin ins:=ins+1; sec_5:=sec_5+1; ttl5:=ttl5+amt_paid; s5_nms[x]:= 'name'; end else begin Writeln ('Invalid Entry'); For x:= 1 to 30 do end; writeln (' To Enter More Data Press 1, To Print Data and Calculations Press 0'); Readln (stop); End; writeln ('For Information on Sections Press 1, 2, 3, 4, or 5 Respectively'); Readln (sec); IF Sec= 1 THEN Begin for x:= 1 to 30 do; begin writeln ('The Number of Revelers in This Section is ',Sec_1,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Full Is ', Full,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Installments is ', Ins ,'.'); Writeln ('The Names of Revelers in This Section is ', S1_Nms[x], '.'); Writeln ('The Total Amount Collected By This Section Is ', Ttl1, '.'); End; end else IF Sec= 2 THEN Begin For x:= 1 to 30 do begin Writeln ('The Number of Revelers in This Section is ', Sec_2,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Full Is ', Full,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Installments is ', Ins ,'.'); Writeln ('The Names of Revelers in This Section is ', S2_Nms[x] ,'.'); Writeln ('The Total Amount Collected By This Section Is ', Ttl2, '.'); End; End else IF Sec= 3 THEN Begin For x:= 1 to 30 do begin writeln ('The Number of Revelers in This Section is ', Sec_3,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Full Is ', Full,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Installments is ', Ins ,'.'); Writeln ('The Names of Revelers in This Section is ', S3_Nms[x] ,'.'); Writeln ('The Total Amount Collected By This Section Is ', Ttl3, '.'); End; End else IF Sec= 4 THEN Begin For x:= 1 to 30 do begin writeln ('The Number of Revelers in This Section is ', Sec_4,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Full Is ', Full,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Installments is ', Ins, '.'); Writeln ('The Names of Revelers in This Section is ', S4_Nms [x],'.'); Writeln ('The Total Amount Collected By This Section Is ', Ttl4 ,'.'); End; End else IF Sec= 5 THEN Begin For x:= 1 to 30 do begin writeln ('The Number of Revelers in This Section is ', Sec_5,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Full Is ', Full,'.'); Writeln ('The Number of Revelers in the Band Who Paid in Installments is ', Ins, '.'); Writeln ('The Names of Revelers in This Section is ', S5_Nms[x], '.'); Writeln ('The Total Amount Collected By This Section Is ', Ttl5 ,'.'); End; End else BEGIN Writeln ('Invalid Entry. For Information on Sections Press 1, 2, 3, 4, or 5 Respectively. '); End; end. The result I keep getting is that for the names in the section it is only the name of the last person entered that is displayed. For example it would display: The number of revelers in this section is 2 The number of revelers in the band who paid in full is 2 {Band is the whole thing divided into 5 section} The number of revelers in the band who paid in installments is 0 The names of revelers in this section is Amy Total amount collected by this section is 288 So yea, the program only displays "Amy" even though I entered more than 1 name. Do you guys know how to fix it also if it's not asking too much could you help me to make a while loop for the last portion of the program so that after the user sees the info for a section he can see another section. 
It sucks that I've been working on it for a while, but if it doesn't run properly tomorrow I'll fail. Help would be greatly appreciated, and responses are welcome anytime because I'll be up anyway. P.S. Sorry for the long post.

    Read the article

  • Enterprise Library Logging / Exception handling and Postsharp

    - by subodhnpushpak
    One of my colleagues came-up with a unique situation where it was required to create log files based on the input file which is uploaded. For example if A.xml is uploaded, the corresponding log file should be A_log.txt. I am a strong believer that Logging / EH / caching are cross-cutting architecture aspects and should be least invasive to the business-logic written in enterprise application. I have been using Enterprise Library for logging / EH (i use to work with Avanade, so i have affection towards the library!! :D ). I have been also using excellent library called PostSharp for cross cutting aspect. Here i present a solution with and without PostSharp all in a unit test. Please see full source code at end of the this blog post. But first, we need to tweak the enterprise library so that the log files are created at runtime based on input given. Below is Custom trace listner which writes log into a given file extracted out of Logentry extendedProperties property. using Microsoft.Practices.EnterpriseLibrary.Common.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners; using Microsoft.Practices.EnterpriseLibrary.Logging; using System.IO; using System.Text; using System; using System.Diagnostics;   namespace Subodh.Framework.Logging { [ConfigurationElementType(typeof(CustomTraceListenerData))] public class LogToFileTraceListener : CustomTraceListener {   private static object syncRoot = new object();   public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data) {   if ((data is LogEntry) & this.Formatter != null) { WriteOutToLog(this.Formatter.Format((LogEntry)data), (LogEntry)data); } else { WriteOutToLog(data.ToString(), (LogEntry)data); } }   public override void Write(string message) { Debug.Print(message.ToString()); }   public override void WriteLine(string message) { Debug.Print(message.ToString()); }   private void WriteOutToLog(string BodyText, LogEntry logentry) { try { //Get the filelocation from the extended properties if (logentry.ExtendedProperties.ContainsKey("filelocation")) { string fullPath = Path.GetFullPath(logentry.ExtendedProperties["filelocation"].ToString());   //Create the directory where the log file is written to if it does not exist. DirectoryInfo directoryInfo = new DirectoryInfo(Path.GetDirectoryName(fullPath));   if (directoryInfo.Exists == false) { directoryInfo.Create(); }   //Lock the file to prevent another process from using this file //as data is being written to it.   
lock (syncRoot) { using (FileStream fs = new FileStream(fullPath, FileMode.Append, FileAccess.Write, FileShare.Write, 4096, true)) { using (StreamWriter sw = new StreamWriter(fs, Encoding.UTF8)) { Log(BodyText, sw); sw.Close(); } fs.Close(); } } } } catch (Exception ex) { throw new LoggingException(ex.Message, ex); } }   /// <summary> /// Write message to named file /// </summary> public static void Log(string logMessage, TextWriter w) { w.WriteLine("{0}", logMessage); } } }   The above can be “plugged into” the code using below configuration <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="Trace" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.CustomTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" traceOutputOptions="None" filter="All" type="Subodh.Framework.Logging.LogToFileTraceListener, Subodh.Framework.Logging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Subodh Custom Trace Listener" initializeData="" formatter="Text Formatter" /> </listeners> Similarly we can use PostSharp to expose the above as cross cutting aspects as below using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; using PostSharp.Laos; using System.Diagnostics; using GC.FrameworkServices.ExceptionHandler; using Subodh.Framework.Logging;   namespace Subodh.Framework.ExceptionHandling { [Serializable] public sealed class LogExceptionAttribute : OnExceptionAspect { private string prefix; private MethodFormatStrings formatStrings;   // This field is not serialized. It is used only at compile time. [NonSerialized] private readonly Type exceptionType; private string fileName;   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception flowing out of the methods to which /// the custom attribute is applied. /// </summary> public LogExceptionAttribute() { }   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception derived from a given <see cref="Type"/> /// flowing out of the methods to which /// the custom attribute is applied. /// </summary> /// <param name="exceptionType"></param> public LogExceptionAttribute( Type exceptionType ) { this.exceptionType = exceptionType; }   public LogExceptionAttribute(Type exceptionType, string fileName) { this.exceptionType = exceptionType; this.fileName = fileName; }   /// <summary> /// Gets or sets the prefix string, printed before every trace message. /// </summary> /// <value> /// For instance <c>[Exception]</c>. /// </value> public string Prefix { get { return this.prefix; } set { this.prefix = value; } }   /// <summary> /// Initializes the current object. Called at compile time by PostSharp. /// </summary> /// <param name="method">Method to which the current instance is /// associated.</param> public override void CompileTimeInitialize( MethodBase method ) { // We just initialize our fields. They will be serialized at compile-time // and deserialized at runtime. 
this.formatStrings = Formatter.GetMethodFormatStrings( method ); this.prefix = Formatter.NormalizePrefix( this.prefix ); }   public override Type GetExceptionType( MethodBase method ) { return this.exceptionType; }   /// <summary> /// Method executed when an exception occurs in the methods to which the current /// custom attribute has been applied. We just write a record to the tracing /// subsystem. /// </summary> /// <param name="context">Event arguments specifying which method /// is being called and with which parameters.</param> public override void OnException( MethodExecutionEventArgs context ) { string message = String.Format("{0}Exception {1} {{{2}}} in {{{3}}}. \r\n\r\nStack Trace {4}", this.prefix, context.Exception.GetType().Name, context.Exception.Message, this.formatStrings.Format(context.Instance, context.Method, context.GetReadOnlyArgumentArray()), context.Exception.StackTrace); if(!string.IsNullOrEmpty(fileName)) { ApplicationLogger.LogException(message, fileName); } else { ApplicationLogger.LogException(message, Source.UtilityService); } } } } To use the above below is the unit test [TestMethod] [ExpectedException(typeof(NotImplementedException))] public void TestMethod1() { MethodThrowingExceptionForLog(); try { MethodThrowingExceptionForLogWithPostSharp(); } catch (NotImplementedException ex) { throw ex; } }   private void MethodThrowingExceptionForLog() { try { throw new NotImplementedException(); } catch (NotImplementedException ex) { // create file and then write log ApplicationLogger.TraceMessage("this is a trace message which will be logged in Test1MyFile", @"D:\EL\Test1Myfile.txt"); ApplicationLogger.TraceMessage("this is a trace message which will be logged in YetAnotherTest1Myfile", @"D:\EL\YetAnotherTest1Myfile.txt"); } }   // Automatically log details using attributes // Log exception using attributes .... A La WCF [FaultContract(typeof(FaultMessage))] style] [Log(@"D:\EL\Test1MyfileLogPostsharp.txt")] [LogException(typeof(NotImplementedException), @"D:\EL\Test1MyfileExceptionPostsharp.txt")] private void MethodThrowingExceptionForLogWithPostSharp() { throw new NotImplementedException(); } The good thing about the approach is that all the logging and EH is done at centralized location controlled by PostSharp. Of Course, if some other library has to be used instead of EL, it can easily be plugged in. Also, the coder ARE ONLY involved in writing business code in methods, which makes code cleaner. Here is the full source code. The third party assemblies provided are from EL and PostSharp and i presume you will find these useful. Do let me know your thoughts / ideas on the same. Technorati Tags: PostSharp,Enterprize library,C#,Logging,Exception handling
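    As a usage sketch (my own addition, not from the original post), a caller could route a message to a specific file through the listener above by setting the "filelocation" extended property that LogToFileTraceListener reads; the ApplicationLogger facade used in the unit test presumably wraps something along these lines:

    using Microsoft.Practices.EnterpriseLibrary.Logging;

    public static class FileRoutedLogger
    {
        public static void TraceMessage(string message, string fileLocation)
        {
            var entry = new LogEntry
            {
                Message = message,
                Severity = System.Diagnostics.TraceEventType.Information
            };
            entry.Categories.Add("Trace");

            // LogToFileTraceListener looks up this key to decide which file to append to.
            entry.ExtendedProperties["filelocation"] = fileLocation;

            Logger.Write(entry);
        }
    }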

    Read the article

  • Launching a WPF Window in a Separate Thread, Part 1

    - by Reed
    Typically, I strongly recommend keeping the user interface within an application’s main thread, and using multiple threads to move the actual “work” into background threads. However, there are rare times when creating a separate, dedicated thread for a Window can be beneficial. This is even acknowledged in the MSDN samples, such as the Multiple Windows, Multiple Threads sample. However, doing this correctly is difficult. Even the referenced MSDN sample has major flaws, and will fail horribly in certain scenarios. To ease this, I wrote a small class that alleviates some of the difficulties involved. The MSDN Multiple Windows, Multiple Threads sample shows how to launch a new thread with a WPF Window, and will work in most cases. The sample code (commented and slightly modified) works out to the following: // Create a thread Thread newWindowThread = new Thread(new ThreadStart( () => { // Create and show the Window Window1 tempWindow = new Window1(); tempWindow.Show(); // Start the Dispatcher Processing System.Windows.Threading.Dispatcher.Run(); })); // Set the apartment state newWindowThread.SetApartmentState(ApartmentState.STA); // Make the thread a background thread newWindowThread.IsBackground = true; // Start the thread newWindowThread.Start(); This sample creates a thread, marks it as single-threaded apartment state, and starts the Dispatcher on that thread. That is the minimum required to get a Window displaying and handling messages correctly, but, unfortunately, it has some serious flaws. The first issue – the created thread will run continuously until the application shuts down, given the code in the sample. The problem is that the ThreadStart delegate used ends with running the Dispatcher. However, nothing ever stops the Dispatcher processing. The thread was created as a Background thread, which prevents it from keeping the application alive, but the Dispatcher will continue to pump dispatcher frames until the application shuts down. In order to fix this, we need to call Dispatcher.InvokeShutdown after the Window is closed. This would require modifying the above sample to subscribe to the Window’s Closed event, and, at that point, shut down the Dispatcher: // Create a thread Thread newWindowThread = new Thread(new ThreadStart( () => { Window1 tempWindow = new Window1(); // When the window closes, shut down the dispatcher tempWindow.Closed += (s,e) => Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background); tempWindow.Show(); // Start the Dispatcher Processing System.Windows.Threading.Dispatcher.Run(); })); // Setup and start thread as before This eliminates the first issue. Now, when the Window is closed, the new thread’s Dispatcher will shut itself down, which in turn will cause the thread to complete. The above code will work correctly for most situations.
However, there is still a potential problem which could arise depending on the content of the Window1 class.  This is particularly nasty, as the code could easily work for most windows, but fail on others. The problem is, at the point where the Window is constructed, there is no active SynchronizationContext.  This is unlikely to be a problem in most cases, but is an absolute requirement if there is code within the constructor of Window1 which relies on a context being in place. While this sounds like an edge case, it’s fairly common.  For example, if a BackgroundWorker is started within the constructor, or a TaskScheduler is built using TaskScheduler.FromCurrentSynchronizationContext() with the expectation of synchronizing work to the UI thread, an exception will be raised at some point.  Both of these classes rely on the existence of a proper context being installed to SynchronizationContext.Current, which happens automatically, but not until Dispatcher.Run is called.  In the above case, SynchronizationContext.Current will return null during the Window’s construction, which can cause exceptions to occur or unexpected behavior. Luckily, this is fairly easy to correct.  We need to do three things, in order, prior to creating our Window: Create and initialize the Dispatcher for the new thread manually Create a synchronization context for the thread which uses the Dispatcher Install the synchronization context Creating the Dispatcher is quite simple – The Dispatcher.CurrentDispatcher property gets the current thread’s Dispatcher and “creates a new Dispatcher if one is not already associated with the thread.”  Once we have the correct Dispatcher, we can create a SynchronizationContext which uses the dispatcher by creating a DispatcherSynchronizationContext.  Finally, this synchronization context can be installed as the current thread’s context via SynchronizationContext.SetSynchronizationContext.  These three steps can easily be added to the above via a single line of code: // Create a thread Thread newWindowThread = new Thread(new ThreadStart( () => { // Create our context, and install it: SynchronizationContext.SetSynchronizationContext( new DispatcherSynchronizationContext( Dispatcher.CurrentDispatcher)); Window1 tempWindow = new Window1(); // When the window closes, shut down the dispatcher tempWindow.Closed += (s,e) => Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background); tempWindow.Show(); // Start the Dispatcher Processing System.Windows.Threading.Dispatcher.Run(); })); // Setup and start thread as before This now forces the synchronization context to be in place before the Window is created and correctly shuts down the Dispatcher when the window closes. However, there are quite a few steps.  In my next post, I’ll show how to make this operation more reusable by creating a class with a far simpler API…
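    The excerpt stops just short of showing that reusable class.  As a rough illustration of where this is heading, here is a minimal, hypothetical sketch that simply wraps the steps above behind one method; the class name WindowThreadLauncher and its Func<Window> factory parameter are placeholders of mine, not the author’s actual API:

        // Hypothetical sketch only: wraps the pattern described above
        // (STA thread, DispatcherSynchronizationContext, Dispatcher shutdown on close).
        using System;
        using System.Threading;
        using System.Windows;
        using System.Windows.Threading;

        public static class WindowThreadLauncher
        {
            public static Thread LaunchInNewThread(Func<Window> windowFactory)
            {
                var thread = new Thread(() =>
                {
                    // Install a synchronization context tied to this thread's Dispatcher
                    // before the Window is constructed.
                    SynchronizationContext.SetSynchronizationContext(
                        new DispatcherSynchronizationContext(Dispatcher.CurrentDispatcher));

                    var window = windowFactory();

                    // Shut down this thread's Dispatcher once the Window closes,
                    // which lets the thread finish.
                    window.Closed += (s, e) =>
                        Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);

                    window.Show();
                    Dispatcher.Run();
                });

                thread.SetApartmentState(ApartmentState.STA); // WPF windows require STA
                thread.IsBackground = true;                   // don't keep the process alive
                thread.Start();
                return thread;
            }
        }

    Usage would then reduce to something like WindowThreadLauncher.LaunchInNewThread(() => new Window1());, though the class presented in the author’s next post may differ.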

    Read the article

  • OpenGL loading functions error [on hold]

    - by Ghilliedrone
    I'm new to OpenGL, and I bought a beginners' book on it. I finished writing the sample code for creating a context/window, but I get an error on the following line, at PFNWGLCREATECONTEXTATTRIBSARBPROC, saying "Error: expected a ')'":

        typedef HGLRC(APIENTRYP PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

    Replacing it or adding a ")" still produces an error, but the error disappears when I use the OpenGL headers included on the book's CD, which are OpenGL 3.0. I would like a way to make this work with the newest gl.h/wglext.h and without libraries. Here's the rest of the class if it's needed:

        #include <ctime>
        #include <windows.h>
        #include <iostream>
        #include <gl\GL.h>
        #include <gl\wglext.h>
        #include "Example.h"
        #include "GLWindow.h"

        typedef HGLRC(APIENTRYP PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);
        PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL;

        bool GLWindow::create(int width, int height, int bpp, bool fullscreen)
        {
            DWORD dwExStyle;                        //Window Extended Style
            DWORD dwStyle;                          //Window Style

            m_isFullscreen = fullscreen;            //Store the fullscreen flag

            m_windowRect.left = 0L;
            m_windowRect.right = (long)width;
            m_windowRect.top = 0L;
            m_windowRect.bottom = (long)height;     //Set bottom to height

            // fill out the window class structure
            m_windowClass.cbSize = sizeof(WNDCLASSEX);
            m_windowClass.style = CS_HREDRAW | CS_VREDRAW;
            m_windowClass.lpfnWndProc = GLWindow::StaticWndProc;    //We set our static method as the event handler
            m_windowClass.cbClsExtra = 0;
            m_windowClass.cbWndExtra = 0;
            m_windowClass.hInstance = m_hinstance;
            m_windowClass.hIcon = LoadIcon(NULL, IDI_APPLICATION);  // default icon
            m_windowClass.hCursor = LoadCursor(NULL, IDC_ARROW);    // default arrow
            m_windowClass.hbrBackground = NULL;                     // don't need background
            m_windowClass.lpszMenuName = NULL;                      // no menu
            m_windowClass.lpszClassName = (LPCWSTR)"GLClass";
            m_windowClass.hIconSm = LoadIcon(NULL, IDI_WINLOGO);    // windows logo small icon

            if (!RegisterClassEx(&m_windowClass))
            {
                MessageBox(NULL, (LPCWSTR)"Failed to register window class", NULL, MB_OK);
                return false;
            }

            if (m_isFullscreen)    //If we are fullscreen, we need to change the display
            {
                DEVMODE dmScreenSettings;    //Device mode
                memset(&dmScreenSettings, 0, sizeof(dmScreenSettings));
                dmScreenSettings.dmSize = sizeof(dmScreenSettings);
                dmScreenSettings.dmPelsWidth = width;       //Screen width
                dmScreenSettings.dmPelsHeight = height;     //Screen height
                dmScreenSettings.dmBitsPerPel = bpp;        //Bits per pixel
                dmScreenSettings.dmFields = DM_BITSPERPEL | DM_PELSWIDTH | DM_PELSHEIGHT;

                if (ChangeDisplaySettings(&dmScreenSettings, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL)
                {
                    MessageBox(NULL, (LPCWSTR)"Display mode failed", NULL, MB_OK);
                    m_isFullscreen = false;
                }
            }

            if (m_isFullscreen)    //Is it fullscreen?
            {
                dwExStyle = WS_EX_APPWINDOW;    //Window Extended Style
                dwStyle = WS_POPUP;             //Windows Style
                ShowCursor(false);              //Hide mouse pointer
            }
            else
            {
                dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE;    //Window Extended Style
                dwStyle = WS_OVERLAPPEDWINDOW;                     //Windows Style
            }

            AdjustWindowRectEx(&m_windowRect, dwStyle, false, dwExStyle);    //Adjust window to true requested size

            //Class registered, so now create window
            m_hwnd = CreateWindowEx(NULL,                       //Extended Style
                (LPCWSTR)"GLClass",                             //Class name
                (LPCWSTR)"Chapter 2",                           //App name
                dwStyle | WS_CLIPCHILDREN | WS_CLIPSIBLINGS,
                0, 0,                                           //x, y coordinates
                m_windowRect.right - m_windowRect.left,
                m_windowRect.bottom - m_windowRect.top,         //Width and height
                NULL,                                           //Handle to parent
                NULL,                                           //Handle to menu
                m_hinstance,                                    //Application instance
                this);                                          //Pass a pointer to the GLWindow here

            //Check if window creation failed, hwnd would equal NULL
            if (!m_hwnd)
            {
                return 0;
            }

            m_hdc = GetDC(m_hwnd);

            ShowWindow(m_hwnd, SW_SHOW);
            UpdateWindow(m_hwnd);

            m_lastTime = GetTickCount() / 1000.0f;
            return true;
        }

        LRESULT CALLBACK GLWindow::StaticWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
        {
            GLWindow* window = nullptr;

            //If this is the create message
            if (uMsg == WM_CREATE)
            {
                //Get the pointer we stored during create
                window = (GLWindow*)((LPCREATESTRUCT)lParam)->lpCreateParams;

                //Associate the window pointer with the hwnd for the other events to access
                SetWindowLongPtr(hWnd, GWL_USERDATA, (LONG_PTR)window);
            }
            else
            {
                //If this is not a creation event, then we should have stored a pointer to the window
                window = (GLWindow*)GetWindowLongPtr(hWnd, GWL_USERDATA);
                if (!window)
                {
                    //Do the default event handling
                    return DefWindowProc(hWnd, uMsg, wParam, lParam);
                }
            }

            //Call our window's member WndProc (allows us to access member variables)
            return window->WndProc(hWnd, uMsg, wParam, lParam);
        }

        LRESULT GLWindow::WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
        {
            switch (uMsg)
            {
            case WM_CREATE:
            {
                m_hdc = GetDC(hWnd);
                setupPixelFormat();

                //Set the version that we want, in this case 3.0
                int attribs[] = {
                    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
                    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
                    0 };

                //Create temporary context so we can get a pointer to the function
                HGLRC tmpContext = wglCreateContext(m_hdc);
                //Make the context current
                wglMakeCurrent(m_hdc, tmpContext);

                //Get the function pointer
                wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

                //If this is NULL then OpenGl 3.0 is not supported
                if (!wglCreateContextAttribsARB)
                {
                    MessageBox(NULL, (LPCWSTR)"OpenGL 3.0 is not supported", (LPCWSTR)"An error occured", MB_ICONERROR | MB_OK);
                    DestroyWindow(hWnd);
                    return 0;
                }

                //Create an OpenGL 3.0 context using the new function
                m_hglrc = wglCreateContextAttribsARB(m_hdc, 0, attribs);
                //Delete the temporary context
                wglDeleteContext(tmpContext);
                //Make the GL3 context current
                wglMakeCurrent(m_hdc, m_hglrc);

                m_isRunning = true;
            }
            break;
            case WM_DESTROY:    //Window destroy
            case WM_CLOSE:      //Windows is closing
                wglMakeCurrent(m_hdc, NULL);
                wglDeleteContext(m_hglrc);
                m_isRunning = false;    //Stop the main loop
                PostQuitMessage(0);
                break;
            case WM_SIZE:
            {
                int height = HIWORD(lParam);    //Get height and width
                int width = LOWORD(lParam);
                getAttachedExample()->onResize(width, height);    //Call the example's resize method
            }
            break;
            case WM_KEYDOWN:
                if (wParam == VK_ESCAPE)    //If the escape key was pressed
                {
                    DestroyWindow(m_hwnd);
                }
                break;
            default:
                break;
            }

            return DefWindowProc(hWnd, uMsg, wParam, lParam);
        }

        void GLWindow::processEvents()
        {
            MSG msg;

            //While there are messages in the queue, store them in msg
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                //Process the messages
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }

    Here is the header:

        #pragma once

        #include <ctime>
        #include <windows.h>

        class Example;    //Declare our example class

        class GLWindow
        {
        public:
            GLWindow(HINSTANCE hInstance);    //default constructor

            bool create(int width, int height, int bpp, bool fullscreen);
            void destroy();
            void processEvents();
            void attachExample(Example* example);

            bool isRunning();    //Is the window running?

            void swapBuffers() { SwapBuffers(m_hdc); }

            static LRESULT CALLBACK StaticWndProc(HWND wnd, UINT msg, WPARAM wParam, LPARAM lParam);
            LRESULT CALLBACK WndProc(HWND wnd, UINT msg, WPARAM wParam, LPARAM lParam);

            float getElapsedSeconds();

        private:
            Example* m_example;        //A link to the example program
            bool m_isRunning;          //Is the window still running?
            bool m_isFullscreen;

            HWND m_hwnd;               //Window handle
            HGLRC m_hglrc;             //Rendering context
            HDC m_hdc;                 //Device context
            RECT m_windowRect;         //Window bounds
            HINSTANCE m_hinstance;     //Application instance
            WNDCLASSEX m_windowClass;

            void setupPixelFormat(void);

            Example* getAttachedExample() { return m_example; }

            float m_lastTime;
        };

    Read the article

  • C# Code Help With Amazon (AWS) - The request must contain the parameter Signature.

    - by leen3o
    I'm struggling with the final part of getting my first bit of code working with AWS. I have got this far: I attached the web reference in Visual Studio, and I have this:

        amazon.AWSECommerceService service = new amazon.AWSECommerceService();

        // prepare an ItemSearch request
        amazon.ItemSearchRequest request = new amazon.ItemSearchRequest();
        request.SearchIndex = "DVD";
        request.Title = "scream";
        request.ResponseGroup = new string[] { "Small" };

        amazon.ItemSearch itemSearch = new amazon.ItemSearch();
        itemSearch.AssociateTag = "";
        itemSearch.Request = new ItemSearchRequest[] { request };
        itemSearch.AWSAccessKeyId = ConfigurationManager.AppSettings["AwsAccessKeyId"];
        itemSearch.Request = new ItemSearchRequest[] { request };

        ItemSearchResponse response = service.ItemSearch(itemSearch);

        // write out the results
        foreach (var item in response.Items[0].Item)
        {
            Response.Write(item.ItemAttributes.Title + "<br>");
        }

    I get the error "The request must contain the parameter Signature." I know you have to 'sign' requests now, but I can't figure out where or how I would do this. Any help greatly appreciated.
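    For context, requests to this API have to carry an HMAC-SHA256 signature computed from the request itself. The sketch below is not a drop-in fix for the generated SOAP proxy above; it illustrates the commonly used alternative of calling the REST endpoint and signing the sorted query string yourself. The access key, secret key, and parameter values are placeholders, and production code may need stricter RFC 3986 encoding than Uri.EscapeDataString provides:

        // Minimal sketch, assuming REST-style request signing with placeholder credentials.
        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static class AwsRequestSigner
        {
            const string Host = "webservices.amazon.com";
            const string Path = "/onca/xml";

            public static string BuildSignedUrl(string accessKey, string secretKey,
                IDictionary<string, string> parameters)
            {
                // Every request needs the access key and an ISO 8601 timestamp.
                parameters["AWSAccessKeyId"] = accessKey;
                parameters["Timestamp"] = DateTime.UtcNow.ToString(
                    "yyyy-MM-ddTHH:mm:ssZ", CultureInfo.InvariantCulture);

                // Sort parameters by byte order and build the canonical query string.
                var canonicalQuery = string.Join("&",
                    parameters.OrderBy(p => p.Key, StringComparer.Ordinal)
                              .Select(p => Uri.EscapeDataString(p.Key) + "=" +
                                           Uri.EscapeDataString(p.Value)));

                // String to sign: verb, host, path and canonical query, newline-separated.
                var stringToSign = "GET\n" + Host + "\n" + Path + "\n" + canonicalQuery;

                // HMAC-SHA256 with the secret key, Base64-encode, then URL-encode the result.
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
                {
                    var signature = Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

                    return "http://" + Host + Path + "?" + canonicalQuery +
                           "&Signature=" + Uri.EscapeDataString(signature);
                }
            }
        }

    A call for the search above might pass parameters such as Service=AWSECommerceService, Operation=ItemSearch, SearchIndex=DVD, Title=scream and ResponseGroup=Small, then fetch the returned URL and parse the XML response.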

    Read the article

  • Selenium RC selenium-testrunner.js Access denied error on IEProxy - Help??

    - by melaos
    Webpage error details:

        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)
        Timestamp: Wed, 28 Apr 2010 02:07:17 UTC
        Message: Access is denied.
        Line: 177
        Char: 9
        Code: 0
        URI: http://www.google.com/selenium-server/core/scripts/selenium-testrunner.js

    Hi guys, I'm just starting to learn Selenium. While testing, mostly with test cases and a test suite created with the Selenium IDE in Firefox, I'm having some problems getting it to work properly in Internet Explorer. This is the command line I'm using:

        java -jar "selenium-server.jar" -htmlSuite *iexploreproxy "http://www.google.com/" tests/OR_Discount_UAT_Suite.htm results.html -userExtensions user-extensions.js

    I tried using *iexplore but kept getting a "session id expired" error, so I tried the proxy version instead. I can now see the test runner, but I keep getting the access denied error above. I then tried the same command line using Firefox:

        java -jar "selenium-server.jar" -htmlSuite *firefox3 "http://www.google.com/" tests/OR_Discount_UAT_Suite.htm results.html -userExtensions user-extensions.js

    and I can get everything running perfectly. FYI, I've already unchecked the auto-detect proxy setting in IE8, so I'm not sure what the problem is right now :( Can anybody help? Thanks!

    Read the article
