Search Results

Search found 6917 results on 277 pages for 'runtime compilation'.

Page 54/277 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Macro access to members of object where macro is defined

    - by Marc Grue
    Say I have a trait Foo that I instantiate with an initial value val foo = new Foo(6) // class Foo(i: Int) and I later call a second method that in turn calls myMacro foo.secondMethod(7) // def secondMethod(j: Int) = macro myMacro then, how can myMacro find out what my initial value of i (6) is? I didn't succeed with normal compilation reflection using c.prefix, c.eval(...) etc but instead found a 2-project solution: Project B: object CompilationB { def resultB(x: Int, y: Int) = macro resultB_impl def resultB_impl(c: Context)(x: c.Expr[Int], y: c.Expr[Int]) = c.universe.reify(x.splice * y.splice) } Project A (depends on project B): trait Foo { val i: Int // Pass through `i` to compilation B: def apply(y: Int) = CompilationB.resultB(i, y) } object CompilationA { def makeFoo(x: Int): Foo = macro makeFoo_impl def makeFoo_impl(c: Context)(x: c.Expr[Int]): c.Expr[Foo] = c.universe.reify(new Foo {val i = x.splice}) } We can create a Foo and set the i value either with normal instantiation or with a macro like makeFoo. The second approach allows us to customize a Foo at compile time in the first compilation and then in the second compilation further customize its response to input (i in this case)! In some way we get "meta-meta" capabilities (or "pataphysic"-capabilities ;-) Normally we would need to have foo in scope to introspect i (with for instance c.eval(...)). But by saving the i value inside the Foo object we can access it anytime and we could instantiate Foo anywhere: object Test extends App { import CompilationA._ // Normal instantiation val foo1 = new Foo {val i = 7} val r1 = foo1(6) // Macro instantiation val foo2 = makeFoo(7) val r2 = foo2(6) // "Curried" invocation val r3 = makeFoo(6)(7) println(s"Result 1 2 3: $r1 $r2 $r3") assert((r1, r2, r3) ==(42, 42, 42)) } My question Can I find i inside my example macros without this double compilation hackery?

    Read the article

  • Strange error in ASP.NET

    - by rockinthesixstring
    Every once in a while I get about 10 - 20 identical errors on my asp.net app. It's always the same, and I'm wondering if it is someone trying to hack in (it happens about once a month). Source: System.Web Message: The file '/~/Default.aspx' does not exist. User IP: 89.122.29.80 User Browser: Unknown 0.0 User OS: Unknown Stack trace: at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert) at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert) at System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath) at System.Web.UI.PageHandlerFactory.System.Web.IHttpHandlerFactory2.GetHandler(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath) at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig) at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean completedSynchronously) why on earth would someone be trying to access "/~/Default.aspx" ?
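    For context, requests like this usually surface in Application_Error as an HttpException with status code 404. A minimal Global.asax sketch (my own, not the poster's code; the logging call is illustrative) that records the same client details the app already logs and keeps these hits out of the error log might look like:

```csharp
// A hedged sketch, not the poster's code: catch the missing-file HttpException in
// Global.asax, record the client details, and return a plain 404 instead of an error.
using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        var httpError = Server.GetLastError() as HttpException;
        if (httpError != null && httpError.GetHttpCode() == 404)
        {
            Trace.TraceWarning("404 for {0} from {1} ({2})",
                Request.RawUrl, Request.UserHostAddress, Request.UserAgent);

            Server.ClearError();
            Response.Clear();
            Response.StatusCode = 404;
        }
    }
}
```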

    Read the article

  • typeid , dynamic casting (upcast) and templates

    - by David
    Hello, I have a few questions regarding dynamic casting, typeid() and templates. 1) How come typeid does not require RTTI? 2) dynamic_cast on a polymorphic type: When I do a downcast (Base to Derived) with RTTI - compilation passes. When RTTI is off - I get a warning (warning C4541: 'dynamic_cast' used on polymorphic type 'CBase' with /GR-; unpredictable behavior may result). When I do an upcast (Derived to Base), with or without RTTI - compilation passes smoothly. What I don't understand is why, when I do an upcast and RTTI is off, I don't get any warning/error! 3) dynamic_cast on a NON-polymorphic type: When I do a downcast with or without RTTI - compilation fails (error C2683: 'dynamic_cast' : 'CBase' is not a polymorphic type). BUT when I do an upcast with or without RTTI - compilation passes smoothly. How come an upcast on a NON-polymorphic type passes w/o RTTI? 4) Does 'inline' in front of a template function have any effect, i.e. when the compiler compiles the function and sees it is 'inline', will it actually treat the function as inline or is it ignored? Thank you very much for the assistance David

    Read the article

  • Is there any way to evaluate (expand-file-name) in .dir-locals.el?

    - by vava
    I'm trying to move all my compilation config (compilation-command and compilation-directory, to be exact) to a .dir-locals.el file at the top of my project. It is working fine, except that I can't find a way to use expand-file-name there, and without it I have to use an absolute path, which is not really convenient. So, is there a way (or a dirty hack) to make directory-local variables evaluate their values before assigning?

    Read the article

  • How to force VS 2010 to skip "builds" of projects which haven't changed?

    - by Ladislav Mrnka
    Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects but we also have a few using C++/CLI to bridge communication with native code. Rebuilding the whole solution takes several minutes. That's fine. If I want to rebuild the solution I expect that it will really take some time. What is not fine is the time needed to build the solution after a full rebuild. Imagine I used full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s if there was no change? In the output I see that it started "building" each project - it doesn't perform a real build but it does something which consumes a significant amount of time. That 35s delay is a pain in the ass. Yes, I can improve the time by not using build solution but only build project (Shift+F6). If I run build project on the particular test project I'm currently working on, it will take "only" 8+s. It requires me to run the project build on the correct project (the test project, to ensure dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, and rerunning a test usually involves only 8+s of compilation. My current coding kata is: don't touch Ctrl+Shift+B. The test project build will take 8s even if I don't make any changes. The reason why it takes 8s is because it also "builds" dependencies = in my case it "builds" more than 20 projects, but I made changes only to the unit test or a single dependency! I don't want it to touch other projects. Is there a way to simply tell VS to build only projects where some changes were made and projects which depend on the changed ones (preferably this part as another build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way ... I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute). To make this even more frustrating, I'm working in a team where most of the developers used to work on Java projects prior to joining this one. So you can imagine how pissed off they are when they must use VS in contrast to the full incremental compilation in Java. I don't require incremental compilation of classes. I expect working incremental compilation of solutions. Especially in a product like VS 2010 Ultimate which costs several thousand dollars. I really don't want to get answers like: Make a separate solution; Unload projects you don't need; etc. I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.

    Read the article

  • C++: Template functor cannot deduce reference type

    - by maciekp
    I've got a functor f, which takes a function func and a parameter t of the same type as func's parameter. I cannot pass g to f because of a compilation error (no matching function for call to f(int&, void (&)(int&))). If g took a non-reference parameter, g(int s), compilation succeeds. Or if I manually specify the template parameter, f<int&>(i, g), compilation also succeeds. template<typename T> void f(T t, void (*func)(T)) {} void g(int& s) {} int main(int, char*[]) { int i = 7; f(i, g); // compilation error here return 0; } How can I get deduction to work?

    Read the article

  • How does a java compiler resolve a non-imported name

    - by gexicide
    Consider I use a type X in my java compilation unit from package foo.bar and X is not defined in the compilation unit itself nor is it directly imported. How does a java compiler resolve X now efficiently? There are a few possibilities where X could reside: X might be imported via a star import a.b.* X might reside in the same package as the compilation unit X might be a language type, i.e. reside in java.lang The problem I see is especially (2.). Since X might be a package-private type, it is not even required that X resides in a compilation unit that is named X.java. Thus, the compiler must look into all entries of the class path and search for any classes in a package foo.bar, it then must read every class that is in package foo.bar to check whether X is included. That sounds very expensive. Especially when I compile only a single file, the compiler has to read dozens of class files only to find a type X. If I use a lot of star imports, this procedure has to be repeated for a lot of types (although class files won't be read twice, of course). So is it advisable to import also types from the same package to speed up the compilation process? Or is there a faster method for resolving an unimported type X which I was not able to find?

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as its entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with using 'newer' improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs a dynamic query and the code ends up looping over the result set using the ugly DataRow array syntax: public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objects is twofold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute, this code can be changed to look like this: public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenarios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
    By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: direct property syntax, and automatic type casting so no explicit casts are required. Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - are going to be faster than the equivalent dynamic code. However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far, performance is pretty good for repeated operations (ie. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name: // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of dynamic types in .NET has been met with mixed emotions. Die-hard .NET developers decry dynamic types as an abomination to the language. After all, what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: it's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, .NET
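    To illustrate the 'method missing' idea mentioned at the end (the DynamicObject override for it is TryInvokeMember), here is a rough sketch of my own, not from the article, of a dynamic type whose methods are wired up at runtime:

```csharp
// A rough sketch (not from the article): overriding TryInvokeMember on a DynamicObject
// so that undefined method calls are resolved against handlers registered at runtime.
using System;
using System.Collections.Generic;
using System.Dynamic;

public class DynamicMethodBag : DynamicObject
{
    // Map of method names to handlers registered at runtime.
    readonly Dictionary<string, Func<object[], object>> methods =
        new Dictionary<string, Func<object[], object>>();

    public void Register(string name, Func<object[], object> handler)
    {
        methods[name] = handler;
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        Func<object[], object> handler;
        if (methods.TryGetValue(binder.Name, out handler))
        {
            result = handler(args);
            return true;
        }
        result = null;
        return false;   // falls through to a RuntimeBinderException, like a mistyped column name
    }
}

// Usage:
// dynamic bag = new DynamicMethodBag();
// ((DynamicMethodBag)bag).Register("Add", args => (int)args[0] + (int)args[1]);
// int sum = bag.Add(2, 3);   // routed through TryInvokeMember at runtime
```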

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.
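    One thing the article only hints at (the "fun stuff with lambdas" aside and the point about not building an expensive message when the level is off) could be folded into one more extension method. This is my own sketch building on the IsLogEnabled/Log extensions above, not part of the original post:

```csharp
// A possible companion extension (assumed, not from the article): defer building an
// expensive message unless the runtime-chosen level is actually enabled.
using System;
using log4net;

namespace Shared.Logging.Extensions
{
    public static class LazyLogExtensions
    {
        public static void Log(this ILog logger, LoggingLevel level, Func<object> messageBuilder)
        {
            // IsLogEnabled and Log are the extension methods defined in the post above.
            if (logger.IsLogEnabled(level))
            {
                logger.Log(level, messageBuilder());
            }
        }
    }
}

// Usage (SerializeParameters is a hypothetical helper): the message is only built
// when the configured threshold level is on.
// _log.Log(LoggingLevel.Debug, () => SerializeParameters(args));
```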

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types  are treated special when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo which is about as simple of a dynamic class that I can think of:public class DynamicFoo : DynamicObject { Dictionary<string, object> properties = new Dictionary<string, object>(); public string Bar { get; set; } public DateTime Entered { get; set; } public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; if (!properties.ContainsKey(binder.Name)) return false; result = properties[binder.Name]; return true; } public override bool TrySetMember(SetMemberBinder binder, object value) { properties[binder.Name] = value; return true; } } This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember() which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type. Strong Typing and Dynamic Casting I now can instantiate and use DynamicFoo in a couple of different ways: Strong TypingDynamicFoo fooExplicit = new DynamicFoo(); var fooVar = new DynamicFoo(); These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time and so the type of fooExplicit is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties:DynamicFoo fooExplicit = new DynamicFoo();// static typing assignmentsfooVar.Bar = "Barred!"; fooExplicit.Entered = DateTime.Now; // echo back static values Console.WriteLine(fooVar.Bar); Console.WriteLine(fooExplicit.Entered); but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic. Dynamicdynamic fooDynamic = new DynamicFoo(); fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:DynamicFoo fooExplicit = new DynamicFoo(); dynamic fooDynamic = fooExplicit; Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast so there's no need to use as dynamic. Dynamic functionality works at runtime and allows for the dynamic wrapper to look up and call members dynamically. 
A dynamic type will look for members to access or call in two places: Using the strongly typed members of the object Using theIDynamicMetaObjectProvider Interface methods to access members So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime  - where the value actually comes from. It's essentially late-binding which allows runtime determination what action to take when a member is accessed at runtime *if* the member you are accessing does not exist on the object. Class members are checked first before IDynamicMetaObjectProvider interface methods are kick in. All of the following works with the dynamic type:dynamic fooDynamic = new DynamicFoo(); // dynamic typing assignments fooDynamic.NewProperty = "Something new!"; fooDynamic.LastAccess = DateTime.Now; // dynamic assigning static properties fooDynamic.Bar = "dynamic barred"; fooDynamic.Entered = DateTime.Now; // echo back dynamic values Console.WriteLine(fooDynamic.NewProperty); Console.WriteLine(fooDynamic.LastAccess); Console.WriteLine(fooDynamic.Bar); Console.WriteLine(fooDynamic.Entered); The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty,LastAccess) all using a single type instance which is pretty cool. As you can see it's pretty easy to create an extensible type this way that can dynamically add members at runtime dynamically. The Alter Ego of IDynamicObject The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile time features. You can see this easily in the Visual Studio code editor. As soon as you assign a value to a dynamic you lose Intellisense and you see which means there's no Intellisense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior which accesses native members plus it uses IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:dynamic fooDynamic = new DynamicFoo(); fooDynamic.NewProperty = "New Property Value"; DynamicFoo foo = fooDynamic; foo.Bar = "Barred"; Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts back the value to the DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can induce the type so I don't need to specify as dynamic or as DynamicFoo. Moral of the Story This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface. 
    You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET
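    The "measure it before worrying" point above is easy to act on. A rough profiling sketch (mine, not from the post) that compares strongly typed access against dynamic access over the DynamicFoo type from the article:

```csharp
// A rough profiling sketch (assumes the DynamicFoo class from the article is available):
// time strongly typed access vs. dynamic access over many iterations.
using System;
using System.Diagnostics;

class PerfCheck
{
    static void Main()
    {
        var foo = new DynamicFoo();
        dynamic dynFoo = foo;
        const int iterations = 100000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            foo.Bar = "static " + i;             // strongly typed member access
        sw.Stop();
        Console.WriteLine("Static:  {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            dynFoo.NewProperty = "dynamic " + i; // routed through TrySetMember
        sw.Stop();
        Console.WriteLine("Dynamic: {0} ms", sw.ElapsedMilliseconds);
    }
}
```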

    Read the article

  • tomcat JAXB 1 and 2 linkageerror

    - by Alex
    Hi there, I'm running a Tomcat 6, Spring, Apache CXF webservice, and now I must add one third-party library to my webapp to fulfill an order. I have jaxb-impl-2.1.12.jar for Apache CXF in the WEB-INF/lib folder, and the new library contains the JAXB 1.0 runtime. JAXB 2 is used by Apache CXF for dynamic clients (I need them). So is there a possibility to run the webapp with both libraries? Best regards Alex Caused by: java.lang.LinkageError: You are trying to run JAXB 2.0 runtime but you have old JAXB 1.0 runtime earlier in the classpath. Please remove the JAXB 1.0 runtime for 2.0 runtime to work correctly.

    Read the article

  • Why is C# winforms application not working without VS.NET installed?

    - by Shane
    Hi folks, I have a winforms c# app that has an embedded webbrowser control inside it generated through VS.NET 2008. We sink events by inheriting our events class from HTMLDocumentEvents2. public class IEHTMLDocumentEvents : mshtml.HTMLDocumentEvents2 { public bool onclick(mshtml.IHTMLEventObj pEvtObj) { // Clicking on an input (checkbox, radio, button, image) if (pEvtObj.srcElement.tagName == "INPUT") { // The following will result in a null pointer without VS.NET installed HTMLInputElementClass input = pEvtObj.srcElement as HTMLInputElementClass; } } } The code above works fine when clicking on elements in the webbrowser control on our dev machines with VS.NET installed. However it fails to cast the pEvtObj.srcElement when VS.NET is not installed. This immediately starts working when we install the most basic VS.NET with C# that you can. To note: The rest of the c# app works fine, and you can browser the web through the control fine as well, just that the events like the above 'onclick' can't be handled properly. I thought it would be a DLL version loaded issue but doing a diff of the files loaded indicates only minor differences. 1c1 < Process: C# App without VS.NET installed --- > Process: C# App with VS.NET 2008 installed 18d17 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\CustomMarshalers\e148983beeb0f30918b0564849a16456\CustomMarshalers.ni.dll CustomMarshalers.ni.dll Microsoft .NET Framework Custom Marshalers Microsoft Corporation 2.0.50727.3053 36d34 < C:\Documents and Settings\XpHome\Local Settings\History\History.IE5\index.dat index.dat 37a36 > C:\Documents and Settings\XpHome\Local Settings\History\History.IE5\index.dat index.dat 44,45c43,44 < C:\Program Files\<hidden>\<hidden>\Microsoft.mshtml.dll Microsoft.mshtml.dll 7.0.3300.1 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\Microsoft.VisualBas#\5b3d048d8c003d743ea5e72caf07773a\Microsoft.VisualBasic.ni.dll Microsoft.VisualBasic.ni.dll Visual Basic Runtime Library Microsoft Corporation 8.0.50727.3053 --- > C:\WINDOWS\assembly\GAC\Microsoft.mshtml\7.0.3300.0__b03f5f7f11d50a3a\Microsoft.mshtml.dll Microsoft.mshtml.dll 7.0.3300.1 > C:\WINDOWS\assembly\GAC_MSIL\Microsoft.VisualBasic\8.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualBasic.dll Microsoft.VisualBasic.dll Visual Basic Runtime Library Microsoft Corporation 8.0.50727.3053 50,52c49,51 < c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorjit.dll mscorjit.dll Microsoft .NET Runtime Just-In-Time Compiler Microsoft Corporation 2.0.50727.3053 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\9adb89fa22fd5b4ce433b5aca7fb1b07\mscorlib.ni.dll mscorlib.ni.dll Microsoft Common Language Runtime Class Library Microsoft Corporation 2.0.50727.3053 < c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorwks.dll mscorwks.dll Microsoft .NET Runtime Common Language Runtime - WorkStation Microsoft Corporation 2.0.50727.3053 --- > c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorjit.dll mscorjit.dll Microsoft .NET Runtime Just-In-Time Compiler Microsoft Corporation 2.0.50727.3082 > C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\mscorlib\7124a40b9998f7b63c86bd1a2125ce26\mscorlib.ni.dll mscorlib.ni.dll Microsoft Common Language Runtime Class Library Microsoft Corporation 2.0.50727.3603 > c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorwks.dll mscorwks.dll Microsoft .NET Runtime Common Language Runtime - WorkStation Microsoft Corporation 2.0.50727.3603 94,98c93,97 < 
C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Configuration\cb4cb21d14767292e079366a5d3d76cd\System.Configuration.ni.dll System.Configuration.ni.dll System.Configuration.dll Microsoft Corporation 2.0.50727.3053 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Drawing\6978f2e90f13bc720d57fa6895c911e2\System.Drawing.ni.dll System.Drawing.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System\aa7926460a336408c8041330ad90929d\System.ni.dll System.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Windows.Forms\9a254c455892c02355ab0ab0f0727c5b\System.Windows.Forms.ni.dll System.Windows.Forms.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 < C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Xml\36f3953f24d4f0b767bf172331ad6f3e\System.Xml.ni.dll System.Xml.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 --- > C:\WINDOWS\assembly\GAC_MSIL\System.Configuration\2.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll System.Configuration.dll System.Configuration.dll Microsoft Corporation 2.0.50727.3053 > C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Drawing\abb2ac7e08bee026f857d8fa36f9fe6f\System.Drawing.ni.dll System.Drawing.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 > C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System\3de5bd01124463d7862bd173af90bc83\System.ni.dll System.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 > C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Windows.Forms\d2ea8d76f015817db1607075812b555f\System.Windows.Forms.ni.dll System.Windows.Forms.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3053 > C:\WINDOWS\assembly\NativeImages_v2.0.50727_32\System.Xml\5913d3f81e77194ec833991b1047a532\System.Xml.ni.dll System.Xml.ni.dll .NET Framework Microsoft Corporation 2.0.50727.3082
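    Not a confirmed fix, but a common COM interop guideline that fits the symptom: cast the event source element to the mshtml interface rather than the HTMLInputElementClass RCW class. Interface casts go through QueryInterface and are less sensitive to which copy of Microsoft.mshtml.dll (local copy vs. GAC PIA, as the diff above shows) was loaded. A sketch of the handler rewritten that way, to sit inside the poster's IEHTMLDocumentEvents class:

```csharp
// A hedged sketch (not a confirmed fix): cast to the mshtml *interface* instead of the
// HTMLInputElementClass RCW class. This method belongs inside the events class shown above.
public bool onclick(mshtml.IHTMLEventObj pEvtObj)
{
    if (pEvtObj.srcElement.tagName == "INPUT")
    {
        var input = pEvtObj.srcElement as mshtml.IHTMLInputElement;
        if (input != null)
        {
            // type and value are exposed directly on the interface.
            System.Diagnostics.Debug.WriteLine(input.type + ": " + input.value);
        }
    }
    return true;   // allow the default click action
}
```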

    Read the article

  • Tomcat JAXB 1 and 2 linkage error

    - by Alex
    I'm running a Tomcat 6, Spring, Apache CXF webservice, and now I must add one third-party library to my webapp to fulfill an order. I have jaxb-impl-2.1.12.jar for Apache CXF in the WEB-INF/lib folder, and the new library contains the JAXB 1.0 runtime. JAXB 2 is used by Apache CXF for dynamic clients (I need them). So is there a possibility to run the webapp with both libraries? Error: Caused by: java.lang.LinkageError: You are trying to run JAXB 2.0 runtime but you have old JAXB 1.0 runtime earlier in the classpath. Please remove the JAXB 1.0 runtime for 2.0 runtime to work correctly.

    Read the article

  • How to manipulate a gridview in asp.net when the columns in the gridview are generated during runtime

    - by Nandini
    Hi guys, in my application I have a gridview in which columns are created at runtime. Actually, these columns are created when I am entering data in a database table. My gridview is like this:

        Description  | Column1 | Column2 | Column3 | ...
        Input volt   | 55      | 56      | 553     | ...
        Output volt  | 656     | 45      | 67      | ...

    where Column1, Column2, Column3 may vary at runtime. I need to enter values into these columns at runtime, but I can't bind these columns because they are created at runtime. The first column "Description" will not change; it will remain constant. For each description there will be values for each column. Finally, this data will be saved to the database. How do I add columns to the gridview at runtime?
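    One way to approach this (a sketch with hypothetical control and column names, not a drop-in answer): re-create the BoundField columns on every request in Page_Init, shape a DataTable with matching column names, and bind it. Dynamically added columns have to be added again on each postback or the grid loses them.

```csharp
// A minimal sketch (hypothetical names) of building GridView columns at runtime and
// binding them from a DataTable; assumes AutoGenerateColumns="false" in the markup.
using System;
using System.Data;
using System.Web.UI.WebControls;

public partial class VoltagePage : System.Web.UI.Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // In the real app these names would come from the database table that defines them.
        string[] runtimeColumns = { "Column1", "Column2", "Column3" };

        GridView1.Columns.Add(new BoundField { DataField = "Description", HeaderText = "Description" });
        foreach (string name in runtimeColumns)
        {
            GridView1.Columns.Add(new BoundField { DataField = name, HeaderText = name });
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Shape the data to match the columns added above, then bind.
            var table = new DataTable();
            table.Columns.Add("Description");
            table.Columns.Add("Column1");
            table.Columns.Add("Column2");
            table.Columns.Add("Column3");
            table.Rows.Add("Input volt", "55", "56", "553");
            table.Rows.Add("Output volt", "656", "45", "67");

            GridView1.DataSource = table;
            GridView1.DataBind();
        }
    }
}
```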

    Read the article

  • typeset: not found error when executing shell script. Am I missing a package or something?

    - by user11045
    Hi, below is the error and the corresponding script lines: spec@Lucifer:~/Documents/seagull.svn.LINUX$ ./build.ksh ./build.ksh: 36: typeset: not found ./build.ksh: 39: typeset: not found ./build.ksh: 44: function: not found Command line syntax of - options -exec : mode used for compilation (default RELEASE) -target : target used for compilation (default all) -help : display the command line syntax ./build.ksh: 52: function: not found ERROR: spec@Lucifer:~/Documents/seagull.svn.LINUX$ Script (init of variables): BUILD_TARGET=${BUILD_DEFAULT_TARGET} BUILD_EXEC=${BUILD_DEFAULT_EXEC} typeset -u BUILD_OS=`uname -s | tr '-' '_' | tr '.' '_' | tr '/' '_'` BUILD_CODE_DIRECTORY=code BUILD_DIRECTORY=`pwd` typeset -u BUILD_ARCH=`uname -m | tr '-' '_' | tr '.' '_' | tr '/' '_'` BUILD_VERSION_FILE=build.conf BUILD_DIST_MODE=0 BUILD_FORCE_MODE=0

    Read the article

  • WF 4.0 can't get to resume workflow on the staging/production environment

    - by Yasmine Atta Hajjaj
    I have developed various registeration workflows using WF4.0. Each work flow has various bookmarks. I am using the registeration wf for an asp.net application. I tested the asp.net application locally and it is working fine( Starting WF, Persisting to db and resuming bookmarks). When I try to test it on the staging server, everything goes messy. I can no longer resume wfs and I get an error message : System.Runtime.DurableInstancing.InstancePersistenceCommandException was unhandled by user code Message=The execution of the InstancePersistenceCommand named {urn:schemas-microsoft-com:System.Activities.Persistence/command}LoadWorkflow was interrupted by an error. Source=System.Runtime.DurableInstancing StackTrace: at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.Runtime.DurableInstancing.InstancePersistenceContext.OuterExecute(InstanceHandle initialInstanceHandle, InstancePersistenceCommand command, Transaction transaction, TimeSpan timeout) at System.Runtime.DurableInstancing.InstanceStore.Execute(InstanceHandle handle, InstancePersistenceCommand command, TimeSpan timeout) at System.Activities.WorkflowApplication.PersistenceManager.Load(TimeSpan timeout) at System.Activities.WorkflowApplication.LoadCore(TimeSpan timeout, Boolean loadAny) at System.Activities.WorkflowApplication.Load(Guid instanceId, TimeSpan timeout) at System.Activities.WorkflowApplication.Load(Guid instanceId) at CEO_StartUpCEORegisterationTest.LoadInstance(Guid wfInstanceId) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 64 at CEO_StartUpCEORegisterationTest.Page_Load(Object sender, EventArgs e) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 44 at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) InnerException: System.Data.SqlClient.SqlException Message=Index 'NCIX_KeysTable_SurrogateInstanceId' on table 'KeysTable' (specified in the FROM clause) does not exist. Source=.Net SqlClient Data Provider ErrorCode=-2146232060 Class=16 LineNumber=211 Number=308 Procedure=LoadInstance Server= State=1 StackTrace: at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.Activities.DurableInstancing.SqlWorkflowInstanceStoreAsyncResult.SqlCommandAsyncResultCallback(IAsyncResult result) I know that this is quite verbose. But I have been banging my head against the wall for more than a week. I did search and all I came to know was to work on ms dtc. I enabled it on the staging server , I installed application server on the staging server and I am still getting the same error. I would appreciate if anyone could help with the problem. Thanks in advance :)
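    For context on what the failing call is doing (not a fix for the staging error, which from the missing 'NCIX_KeysTable_SurrogateInstanceId' index looks like a persistence-store schema or version mismatch on that database): a minimal load-and-resume sketch with illustrative names, assuming the standard SqlWorkflowInstanceStore schema is installed on the target database.

```csharp
// Illustrative sketch only, not the poster's code: load a persisted instance and resume
// one of its bookmarks, assuming the standard SqlWorkflowInstanceStore schema exists.
using System;
using System.Activities;
using System.Activities.DurableInstancing;

static class WorkflowResume
{
    public static void Resume(Activity workflowDefinition, Guid instanceId,
                              string connectionString, string bookmarkName, object payload)
    {
        var app = new WorkflowApplication(workflowDefinition)
        {
            InstanceStore = new SqlWorkflowInstanceStore(connectionString)
        };

        app.Load(instanceId);                       // the call failing in the stack trace above
        app.ResumeBookmark(bookmarkName, payload);  // continue from the persisted bookmark
    }
}
```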

    Read the article

  • DataContract Known Types - passing array

    - by Erup
    I'm having problems when passing a generic List through a WCF operation. In this case, it is a List of int. Example 4 is described here in MSDN. Note that the MSDN sample says: // This will serialize and deserialize successfully because the generic List is equivalent to int[], which was added to known types. Here is the DataContract: [DataContract] [KnownType(typeof(int[]))] [KnownType(typeof(object[]))] public class AccountData { [DataMember] public object accNumber1; [DataMember] public object accNumber2; [DataMember] public object accNumber3; [DataMember] public object accNumber4; } On the client side, I'm calling the operation like this: DataTransfer.Service.AccountData data = new DataTransfer.Service.AccountData() { accNumber1 = 100, accNumber2 = new int[100], accNumber3 = new List<int>(), accNumber4 = new ArrayList() }; cService.AddAccounts(data); Also, here are the decorations of the generated AccountData object (WCF proxy): [System.Diagnostics.DebuggerStepThroughAttribute()] [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")] [System.Runtime.Serialization.DataContractAttribute(Name="AccountData", Namespace="http://schemas.datacontract.org/2004/07/DataTransfer.Service")] [System.SerializableAttribute()] [System.Runtime.Serialization.KnownTypeAttribute(typeof(DataTransfer.Client.CustomerServiceReference.PurchaseOrder))] [System.Runtime.Serialization.KnownTypeAttribute(typeof(DataTransfer.Client.CustomerServiceReference.Customer))] [System.Runtime.Serialization.KnownTypeAttribute(typeof(int[]))] [System.Runtime.Serialization.KnownTypeAttribute(typeof(object[]))]
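    A small self-contained round trip (assuming the AccountData contract above, run outside WCF) can show how DataContractSerializer actually treats each member; per the quoted MSDN note, the List<int> member should come back typed as the known type int[]:

```csharp
// A diagnostic sketch (assumes the AccountData contract above is referenced): serialize
// and deserialize directly with DataContractSerializer and inspect the resulting types.
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

class KnownTypeCheck
{
    static void Main()
    {
        var data = new AccountData
        {
            accNumber1 = 100,
            accNumber2 = new int[] { 1, 2, 3 },
            accNumber3 = new List<int> { 4, 5, 6 },
            accNumber4 = new System.Collections.ArrayList { 7, 8 }
        };

        var serializer = new DataContractSerializer(typeof(AccountData));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, data);
            stream.Position = 0;
            var copy = (AccountData)serializer.ReadObject(stream);

            Console.WriteLine(copy.accNumber3.GetType());  // shows what the List<int> came back as
        }
    }
}
```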

    Read the article

  • Server won't start on using authlogic-oauth2

    - by Yahoo-Me
    I have included oauth2 and authlogic-oauth2 in the Gemfile as I want to use them and am trying to start the server. It doesn't start and gives me the error: /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails.rb:44:in `configuration': undefined method `config' for nil:NilClass (NoMethodError) from /Library/Ruby/Gems/1.8/gems/authlogic_oauth2-1.1.2/lib/authlogic_oauth2.rb:14 from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require' from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require' from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `each' from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `require' from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `each' from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `require' from /Library/Ruby/Gems/1.8/gems/bundler-1.0.7/lib/bundler.rb:112:in `require' from /Users/arkidmitra/Documents/qorm_bzar/buyzaar/config/application.rb:7 from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:28:in `require' from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:28 from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:27:in `tap' from /Library/Ruby/Gems/1.8/gems/railties-3.0.3/lib/rails/commands.rb:27 from script/rails:6:in `require' from script/rails:6 I am using Rails 3.0.3 and Ruby 1.8.7. Also the server seems to start fine until I add gem "authlogic-oauth2" to the Gemfile.

    Read the article

  • Why are virtual methods considered early bound?

    - by AspOnMyNet
    One definition of binding is that it is the act of replacing function names with memory addresses. a) Thus I assume early binding means function calls are replaced with memory addresses during the compilation process, while with late binding this replacement happens at runtime? b) Why are virtual methods also considered early bound (i.e. the target method is found at compile time, and code is created that will call this method)? As far as I know, with virtual methods the call to the actual method is resolved only at runtime, not at compile time?! Thanks. EDIT: 1) A a = new A(); a.M(); As far as I know, it is not known at compile time where on the heap (thus at which memory address) instance a will be created at runtime. Now, with early binding, function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address if it doesn't know where on the heap object a will be created at runtime (here I'm assuming the address of method a.M will also be at the same memory location as a)? 2) v-table calls are neither early nor late bound. Instead there's an offset into a table of function pointers. The offset is fixed at compile time, but which table the function pointer is chosen from depends on the runtime type of the object (the object contains a hidden pointer to its v-table), so the final function address is found at runtime. But assuming an object of type T is created via reflection (so the app doesn't even know of the existence of type T), how can an entry point for that type of object exist at compile time?
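    One way to make the terminology concrete (class names invented for the example): what the compiler fixes is the identity and v-table slot of the virtual method being called; what stays open until runtime is which type's v-table supplies the implementation, because that depends on the object actually behind the reference.

        using System;

        class Animal
        {
            public virtual void Speak() { Console.WriteLine("..."); }
        }

        class Dog : Animal
        {
            public override void Speak() { Console.WriteLine("Woof"); }
        }

        class Program
        {
            static Animal Create(bool wantDog)
            {
                // The runtime type of the result is not known to the caller at compile time.
                return wantDog ? (Animal)new Dog() : new Animal();
            }

            static void Main()
            {
                Animal a = Create(DateTime.Now.Ticks % 2 == 0);
                // Compile time: the call is bound to the Speak slot declared on Animal.
                // Run time: the v-table of the actual object (Animal or Dog) supplies the address.
                a.Speak();
            }
        }

    For the reflection case in 2): a type loaded only at runtime still carries its own v-table, and because it derives from a type the compiler did know about, that table keeps the same slot layout for the inherited virtual methods, which is why the offset fixed at compile time still lands on the right entry.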

    Read the article

  • Duplicate C# web service proxy classes generated for Java types

    - by Sergey
    My question is about integration between a Java web service and a C# .NET client. Service: CXF 2.2.3 with Aegis databinding. Client: C#, .NET 3.5 SP1. For some reason Visual Studio generates two C# proxy enums for each Java enum, and the generated C# code does not compile. For example, this Java enum: public enum SqlDialect { GENERIC, SYBASE, SQL_SERVER, ORACLE; } Produces this WSDL: <xsd:simpleType name="SqlDialect"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="GENERIC" /> <xsd:enumeration value="SYBASE" /> <xsd:enumeration value="SQL_SERVER" /> <xsd:enumeration value="ORACLE" /> </xsd:restriction> </xsd:simpleType> For this WSDL Visual Studio generates two C# enum declarations (generated comments removed): [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")] [System.Runtime.Serialization.DataContractAttribute(Name="SqlDialect", Namespace="http://somenamespace")] public enum SqlDialect : int { [System.Runtime.Serialization.EnumMemberAttribute()] GENERIC = 0, [System.Runtime.Serialization.EnumMemberAttribute()] SYBASE = 1, [System.Runtime.Serialization.EnumMemberAttribute()] SQL_SERVER = 2, [System.Runtime.Serialization.EnumMemberAttribute()] ORACLE = 3, } [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")] [System.SerializableAttribute()] [System.Xml.Serialization.XmlTypeAttribute(Namespace="http://somenamespace")] public enum SqlDialect { GENERIC, SYBASE, SQL_SERVER, ORACLE, } The resulting C# code does not compile: The namespace 'somenamespace' already contains a definition for 'SqlDialect'. I would appreciate any ideas...
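    A workaround that is often suggested for this symptom (not verified against this particular CXF service) is to force proxy generation down the DataContractSerializer path only, for example by regenerating with svcutil and its /serializer:DataContractSerializer option, so that a single declaration like the following survives instead of the extra XmlSerializer-flavoured duplicate:

        // The single enum that should remain once the duplicate generated for the
        // XmlSerializer import path is gone; this mirrors the first declaration above.
        [System.Runtime.Serialization.DataContractAttribute(Name = "SqlDialect", Namespace = "http://somenamespace")]
        public enum SqlDialect : int
        {
            [System.Runtime.Serialization.EnumMemberAttribute()] GENERIC = 0,
            [System.Runtime.Serialization.EnumMemberAttribute()] SYBASE = 1,
            [System.Runtime.Serialization.EnumMemberAttribute()] SQL_SERVER = 2,
            [System.Runtime.Serialization.EnumMemberAttribute()] ORACLE = 3,
        }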

    Read the article

  • How To? Use an Expression Tree to call a Generic Method when the Type is only known at runtime.

    - by David Williams
    Please bear with me; I am very new to expression trees and lambda expressions, but trying to learn. This is something I solved using reflection, but I would like to see how to do it using expression trees. I have a generic function: private void DoSomeThing<T>( params object[] args ) { // Some work is done here. } that I need to call from elsewhere in my class. Now, normally, this would be simple: DoSomeThing<int>( blah ); but only if I know, at design time, that I am working with an int. It is when I do not know the type until runtime that I need help. Like I said, I know how to do it via reflection, but I would like to do it via expression trees, as my (very limited) understanding is that I can do so. Any suggestions or pointers to sites where I can gain this understanding, preferably with sample code?
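    A sketch of the usual expression-tree route: close the open generic method over the runtime Type with MakeGenericMethod, wrap the call in an expression, and compile it to a reusable delegate. The containing class name Worker and the method shape are assumptions made for the example.

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        public class Worker
        {
            private void DoSomeThing<T>(params object[] args)
            {
                Console.WriteLine("Called with T = " + typeof(T));
            }

            public void CallForRuntimeType(Type runtimeType, object[] args)
            {
                // Close DoSomeThing<T> over the type discovered at runtime.
                MethodInfo closed = typeof(Worker)
                    .GetMethod("DoSomeThing", BindingFlags.Instance | BindingFlags.NonPublic)
                    .MakeGenericMethod(runtimeType);

                // Build: (Worker w, object[] a) => w.DoSomeThing<runtimeType>(a)
                ParameterExpression target = Expression.Parameter(typeof(Worker), "w");
                ParameterExpression arguments = Expression.Parameter(typeof(object[]), "a");
                MethodCallExpression call = Expression.Call(target, closed, arguments);
                Action<Worker, object[]> invoker =
                    Expression.Lambda<Action<Worker, object[]>>(call, target, arguments).Compile();

                invoker(this, args);
            }
        }

    The compiled delegate is the part worth keeping: caching it, for instance in a Dictionary<Type, Action<Worker, object[]>>, means the reflection and compilation cost is paid once per runtime type, after which each call is close to the speed of a normal delegate invocation.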

    Read the article

  • Calling from C# to a C function which accepts a struct array allocated by the caller

    - by lifey
    I have the following C struct: struct XYZ { void *a; char fn[MAX_FN]; unsigned long l; unsigned long o; }; And I want to call the following function from C#: extern "C" int func(int handle, int *numEntries, XYZ *xyzTbl); where xyzTbl is an array of XYZ of size numEntries, allocated by the caller. I have defined the following C# struct: [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential, CharSet = System.Runtime.InteropServices.CharSet.Ansi)] public struct XYZ { public System.IntPtr rva; [System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst = 128)] public string fn; public uint l; public uint o; } and a method: [System.Runtime.InteropServices.DllImport(@"xyzdll.dll", CallingConvention = CallingConvention.Cdecl)] public static extern Int32 func(Int32 handle, ref Int32 numntries, [MarshalAs(UnmanagedType.LPArray)] XYZ[] arr); Then I try to call the function: XYZ[] xyz = new XYZ[numEntries]; for (...) xyz[i] = new XYZ(); func(handle, numEntries, xyz); Of course it does not work. Can someone shed light on what I am doing wrong?
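    The usual pattern for a caller-allocated array is to keep the parameter as an LPArray marked [In, Out] (so the marshaller copies the native results back into the managed array) and to tie its length to numEntries via SizeParamIndex. The sketch below assumes MAX_FN is 128 (to match the SizeConst already used), that the native unsigned long is the 32-bit Windows flavour, and that a nonzero return means failure; also note that XYZ is a value type, so new XYZ[n] already gives usable elements without a per-element new.

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public struct XYZ
        {
            public IntPtr rva;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)]  // assumes MAX_FN == 128
            public string fn;
            public uint l;   // unsigned long is 32-bit in the Windows ABI
            public uint o;
        }

        public static class NativeMethods
        {
            // [In, Out] copies the array contents back after the call; SizeParamIndex = 1
            // tells the marshaller the array length comes from the numEntries argument.
            [DllImport("xyzdll.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int func(
                int handle,
                ref int numEntries,
                [In, Out, MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] XYZ[] arr);

            public static XYZ[] GetTable(int handle, int capacity)
            {
                int numEntries = capacity;
                var table = new XYZ[capacity];
                int rc = func(handle, ref numEntries, table);
                if (rc != 0)  // assumed error convention
                    throw new InvalidOperationException("func failed: " + rc);
                Array.Resize(ref table, numEntries);  // keep only the entries actually filled
                return table;
            }
        }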

    Read the article

  • Visual Studio 2010 RC – Silverlight 4 and WCF RIA Services Development - Updates from MIX Anno

    - by Harish Ranganathan
    MIX is happening and there is a lot of excitement around the various releases such as the Windows Phone 7 Developer Preview, the IE9 Platform Preview and a few other announcements that have been made. Clearly, the Windows Phone 7 Developer Preview has generated the most interest and opened a plethora of opportunities for .NET developers. It also takes mobile development to a new generation without forcing developers to learn a different programming language. Along with this, a few other releases are out. The much anticipated Silverlight 4 RC is out, and its corresponding templates are also out there for you to download. When VS 2010 RC was released, it was quite a disappointment that it didn't support SL4 development, nor SL4 Business Application Development (a.k.a. WCF RIA Services). There were a few workarounds, though nothing concrete. Earlier I had written about how the WCF RIA Services Preview does work with ASP.NET development using VS 2010 RC. However, with the release of the SL4 RC and the corresponding tooling updates, one can develop for both SL4 as well as SL4 + WCF RIA Services using VS 2010 RC. This is important and keeps the continuum going until VS 2010 RTMs. So, the purpose of this post is to quickly give the updates and links to install the relevant tools: the Silverlight 4 RC Runtime (Windows Runtime or the Mac Runtime); the Silverlight 4 RC Developer Tools for Visual Studio 2010 RC (Silverlight 4 Tools for Visual Studio 2010 - this installs the Runtime automatically as well); and Expression Blend 4 Beta. If you install the SL4 RC Developer Tools, they also install the WCF RIA Services Preview automatically. You just need to install the WCF RIA Services Toolkit, which can be downloaded from Install the WCF RIA Services Toolkit. Of course you can also just install WCF RIA Services for VS 2010 RC separately (without the SL4 Tools) from here. Kindly note, all the above mentioned links are with respect to the Visual Studio 2010 RC edition. If you are developing with VS 2008, then you can just target SL3 (as I write this, there seems to be no official support for developing SL4 with VS 2008) and the related tools can be downloaded from http://www.silverlight.net/getstarted/ Basically you need to download the SL3 Runtime, SDK, Expression Blend 3 and the Silverlight Toolkit. All the links for the downloads are available on the above mentioned page. Also, a version of WCF RIA Services that is supported in VS 2008 is available for download at WCF RIA Services Beta for VS 2008. I know there are far too many things to keep in mind, so I put together a flowchart that could help depict it pictorially. Note that this is just my own imagination and doesn't cover all scenarios; for example, if you are developing for neither WebForms nor Silverlight, you end up nowhere, whereas in an actual scenario you may want to develop Desktop, Services, Console, Game and what not. So, keep in mind this is just Web. Cheers !!!

    Read the article

  • JavaOne Session Report - Java ME SDK 3.2

    - by Janice J. Heiss
    Oracle Product Manager for Java ME SDK, Sungmoon Cho, presented a session, "Developing Java Mobile and Embedded Applications with Java ME SDK 3.2," wherein he covered the basic new features of the Java ME Platform SDK 3.2, a state-of-the-art toolbox for developing mobile and embedded applications. The session began with a summary of the four main components of the Java ME SDK. A device emulator allows developers to quickly run and test applications before commercialization; it supports CLDC/MIDP, CLDC/IMP-NG and CDC/AGUI. A development environment assists with writing, running, debugging and deploying, and enables on-device debugging. Samples provide developers with useful code and frameworks. IDE plugins – NetBeans and Eclipse – equip developers with a CPU Profiler, Memory Monitor, Network Monitor, and Device Selector, which means that manual integration is no longer necessary. Cho then talked about the Java ME SDK's on-device tooling architecture: * Java ME SDK provides an architecture ideal for on-device debugging. * The Device Manager plays the central role by managing different devices, whether it is the emulator, a device that Oracle provides or recommends, or a third-party device, as long as the device has a Java runtime that supports the designated protocol. * The Emulator provides accurate emulation, since it uses the same code base used in Oracle's Java ME runtime. * The Universal Emulator Interface (UEI) makes it easy for IDEs to detect the platform. He then focused on the Java ME SDK release highlights, which include: * Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime; a full emulation for the runtime is provided. * Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG), a new profile for embedded devices. * A new Custom Device Skin Creator. * An Eclipse plugin for CLDC/MIDP. * Profiling, network monitoring, and memory monitoring, now integrated with the NetBeans profiling tools. * The Java ME SDK Update Center. Cho summarized the main features: IDE Integration (NetBeans and Eclipse) enables developers to write, run, profile, and debug their applications in their favorite IDE. CPU Profiler: enables developers to more quickly detect hot spots and see where CPU time is being used; they can double-click a method to jump directly into the source code. Memory Monitor: developers can monitor objects and memory usage in real time. Debugger on the Emulator and Device: developers can run their applications step by step and inspect variables to pinpoint a problem; debugging can take place either on the emulator or the device. Embedded Application Development: IMP-NG, Device Access, Logging, and AMS API support are now available. On-Device Tooling: connect your device to your computer, and run and debug the application right on your device. Custom Device Skin Creator: define your own device and test in an environment that is closest to your target device. The informative session concluded with a demo that showed more concretely how to apply the new features in Java ME SDK 3.2.

    Read the article

  • Issues with ASP.NET via Apache/mod_mono on Ubuntu.

    - by Matthew Scharley
    I run an Ubuntu test server, and my deployment system is also Ubuntu. I've recently been trying to get ASP.NET to work on my test server so that we can take it live. I managed to get it installed and configured properly, and my application is installed and running, but I can't get anything to work. The error I keep receiving is below; if anyone has any clue what might be going on, it would be greatly appreciated. Server Error in '/' Application Standard output has not been redirected or process has not been started. Description: HTTP 500. Error processing request. Stack Trace: System.InvalidOperationException: Standard output has not been redirected or process has not been started. at System.Diagnostics.Process.CancelErrorRead () [0x00000] at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:CancelErrorRead () at Mono.CSharp.CSharpCodeCompiler.CompileFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000] at Mono.CSharp.CSharpCodeCompiler.CompileAssemblyFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000] at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromFile (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00000] at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath, System.CodeDom.Compiler.CompilerParameters options) [0x00000] at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000] at System.Web.Compilation.BuildManager.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000] at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000] at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000] Version information: Mono Version: 2.0.50727.42; ASP.NET Version: 2.0.50727.42 Apache version String: Apache/2.2.11 (Ubuntu) mod_mono/2.0 PHP/5.2.6-3ubuntu4.2 with Suhosin-Patch Server at dev Port 80 PS: I had to add three DLLs to the /bin directory of my application, copying them from Windows because I couldn't find them in any of Mono's packages. This might or might not be causing problems, I don't know. The DLLs I had to add are: System.Web.Abstractions, System.Web.Routing, System.Web.Mvc.

    Read the article
