Search Results

Search found 4689 results on 188 pages for 'weak references'.

Page 4 of 188

  • IntelliTrace collector error: "Some or all identity references could not be translated"

    - by Tarun Arora
    If you are running the IntelliTrace stand-alone collector against an application pool that runs under the identity “.\<username>”, you are likely to run into the following exception:

        Start-IntelliTraceCollection : Some or all identity references could not be translated.
        At line:1 char:29
        + Start-IntelliTraceCollection <<<< "FabrikamFiber.Web" C:\IntelliTraceCTP\collection_plan.ASP.NET.trace.xml C:\IntelliTraceLogs
        + CategoryInfo          : NotSpecified: (:) [Start-IntelliTraceCollection], IdentityNotMappedException
        + FullyQualifiedErrorId : System.Security.Principal.IdentityNotMappedException,Microsoft.VisualStudio.IntelliTrace.PowerShell.StartIntelliTraceCollectionCommand

    Steps to reproduce the issue: the application pool “FabrikamFiber.Web” is using the identity “.\Admin”.

    Workaround: change the identity of the application pool to <MachineName|Domain>\<UserName>. In the example above, changing the identity to “Production\Admin” stops IntelliTrace from throwing the exception. This error has been reported to Microsoft and is expected to be fixed in a future release. Enjoy!

    Read the article

  • New Exadata public references

    - by Javier Puerta
    The following customers are now public references for Exadata. Show your customers how other companies in their industries are leveraging Exadata to achieve their business objectives.

    MIGROS BANK - Financial Services - Switzerland - Oracle Exadata Database Machine + OBIEE 11g
    Migros Bank AG Makes Systems More Available and Improves Operational Insight and Analytics with a Scalable, Integrated Data Warehouse
    Success Story (English) | Success Story (German)

    TECH ACCESS - Professional Services - United Arab Emirates - Oracle Exadata Database Machine
    Tech Access Drives Compelling Proof-of-Concept Evaluations for Hardware Sales in Region's Largest Solutions Center
    Success Story

    BALUBAID GROUP - Wholesale Distribution - Saudi Arabia - Oracle Exadata Database Machine + OBIEE 11g
    Balubaid Group of Companies Reduces Help-Desk Complaints by 75%, Improves Business Continuity and System Response
    Success Story

    ETISALAT - Communications - Nigeria - Oracle Exadata Database Machine
    Etisalat Accelerates Data Retrieval and Analysis by 99 Percent with Oracle Communications Data Model Running on Oracle Exadata Database Machine
    Oracle Press Release

    Read the article

  • Do you have reference issues with Visual Studio 2008 and C#.NET?

    - by Brian T Hannan
    I'm working on a project and it seems that every time someone checks out the project from source control to build it on their local box, they have trouble building because references are no longer resolved. I can't figure out if it's a configuration issue or a Visual Studio 2008 issue. Is anyone else having this problem? If so, is there something you can do to fix it? Note: it might have something to do with explicit paths to the DLLs being referenced or how they are referenced ... I'm not quite sure.

    Read the article

  • Consolidation Strategy References

    - by BuckWoody
    I have a presentation that I give on SQL Server Consolidation Strategies, and in that presentation I talk about a few links that are useful. Here are some that I’ve found – feel free to comment on more, or if these links go stale:

    Consolidation using SQL Server: http://msdn.microsoft.com/en-us/library/ee692366.aspx
    SQL Server Consolidation Guidance: http://msdn.microsoft.com/en-us/library/ee819082.aspx
    More references for SQL Server and Hyper-V: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Virtualization-with-SQL-Server.aspx
    Quick overview of Virtual Server licensing implications: http://www.microsoft.com/uk/licensing/morethan250/learn/virtualisation.mspx
    SQL Server and Hyper-V best practices: http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx
    High-Availability and Hyper-V: http://technet.microsoft.com/en-us/magazine/2008.10.higha.aspx
    Virtualization Calculator: http://www.microsoft.com/Windowsserver2008/en/us/hyperv-calculators.aspx
    May not be current, but here’s a whitepaper from VMware for SQL Server: http://www.vmware.com/files/pdf/SQLServerWorkloads.pdf
    More information on SQL Server and VMware: http://blogs.msdn.com/cindygross/archive/2009/10/23/considerations-for-installing-sql-server-on-vmware.aspx
    Server Virtualization Validation Program: http://www.windowsservercatalog.com/svvp.aspx?svvppage=svvp.htm

    Read the article

  • Statistical Software Quality Control References

    - by Xodarap
    I'm looking for references about hypothesis testing in software management. For example, we might wonder whether "crunch time" leads to an increase in defect rate - this is a surprisingly difficult thing to test. There are many questions on how to measure quality - this isn't what I'm asking. And there are books like Kan which discuss various quality metrics and their utilities. I'm not asking this either. I want to know how one applies these metrics to make decisions. E.g. suppose we decide to go with critical errors / KLOC. One of the problems we'll have to deal with is that this is not a normally distributed data set (almost all patches have zero critical errors). And further, it's not clear that we really want to examine the difference in means. So what should our alternative hypothesis be? (Note: based on previous questions, my guess is that I'll get a lot of answers telling me that this is a bad idea. That's fine, but I'd request that it be based on published data, instead of your own experience.)

    Read the article

  • Are null references really a bad thing?

    - by Tim Goodman
    I've heard it said that the inclusion of null references in programming languages is the "billion dollar mistake". But why? Sure, they can cause NullReferenceExceptions, but so what? Any element of the language can be a source of errors if used improperly. And what's the alternative? I suppose instead of saying this:

        Customer c = Customer.GetByLastName("Goodman"); // returns null if not found
        if (c != null)
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    You could say this:

        if (Customer.ExistsWithLastName("Goodman"))
        {
            Customer c = Customer.GetByLastName("Goodman"); // throws error if not found
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }

    But how is that better? Either way, if you forget to check that the customer exists, you get an exception. I suppose that a CustomerNotFoundException is a bit easier to debug than a NullReferenceException by virtue of being more descriptive. Is that all there is to it?
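
    One alternative often mentioned in this context is a Try-style lookup in the spirit of int.TryParse. This is only a hedged sketch, not from the original post; Customer.TryGetByLastName is a hypothetical method assumed for illustration:

        // A minimal sketch of the "Try" pattern as one null-avoiding alternative.
        // TryGetByLastName (hypothetical) reports success through its return value,
        // so the success check and the lookup cannot be separated or forgotten.
        Customer c;
        if (Customer.TryGetByLastName("Goodman", out c))
        {
            Console.WriteLine(c.FirstName + " " + c.LastName + " is awesome!");
        }
        else
        {
            Console.WriteLine("There was no customer named Goodman. How lame!");
        }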

    Read the article

  • How to use references, avoid header bloat, and delay initialization?

    - by Kyle
    I was browsing for an alternative to using so many shared_ptrs, and found an excellent reply in a comment section: "Do you really need shared ownership? If you stop and think for a few minutes, I'm sure you can pinpoint one owner of the object, and a number of users of it, that will only ever use it during the owner's lifetime. So simply make it a local/member object of the owner, and pass references to those who need to use it." I would love to do this, but the problem becomes that the definition of the owning object now needs the owned object to be fully defined first. For example, say I have the following in FooManager.h:

        class Foo;

        class FooManager
        {
            shared_ptr<Foo> foo;
            shared_ptr<Foo> getFoo() { return foo; }
        };

    Now, taking the advice above, FooManager.h becomes:

        #include "Foo.h"

        class FooManager
        {
            Foo foo;
            Foo& getFoo() { return foo; }
        };

    I have two issues with this. First, FooManager.h is no longer lightweight. Every cpp file that includes it now needs to compile Foo.h as well. Second, I no longer get to choose when foo is initialized. It must be initialized simultaneously with FooManager. How do I get around these issues?

    Read the article

  • Conditional references in a .NET project: possible to get rid of the warning?

    - by Lasse V. Karlsen
    I have two references to a SQLite assembly, one for 32-bit and one for 64-bit, which look like this (this is a test project to try to get rid of the warning, don't get hung up on the paths):

        <Reference Condition=" '$(Platform)' == 'x64' " Include="System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=AMD64">
          <SpecificVersion>True</SpecificVersion>
          <HintPath>..\..\LVK Libraries\SQLite3\version_1.0.65.0\64-bit\System.Data.SQLite.DLL</HintPath>
        </Reference>
        <Reference Condition=" '$(Platform)' == 'x86' " Include="System.Data.SQLite, Version=1.0.65.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=x86">
          <SpecificVersion>True</SpecificVersion>
          <HintPath>..\..\LVK Libraries\SQLite3\version_1.0.65.0\32-bit\System.Data.SQLite.DLL</HintPath>
        </Reference>

    This produces the following warning:

        Warning 1: The referenced component 'System.Data.SQLite' could not be found.

    Is it possible for me to get rid of this warning? One way I've looked at is to just configure my project to be 32-bit when I develop, and let the build machine fix the reference when building for 64-bit, but this seems a bit awkward and probably prone to errors. Any other options? The reason I want to get rid of it is that the warning is apparently being picked up by TeamCity and periodically flagged as something I need to look into, so I'd like to get rid of it completely.

    Read the article

  • Getting my head around the practical applications of strong and weak pointers in Objective-C

    - by Chris Wilson
    I've just read the excellent accepted answer to this question, which clarifies the conceptual differences between strong and weak pointers in Objective-C, and I'm still trying to understand the practical differences. I come from a C++ background where these concepts don't exist, and I'm having trouble figuring out where I would use one vs. the other. Could someone please provide a practical example, using Objective-C code, that illustrates the different uses of strong and weak pointers?
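
    Not the Objective-C answer the poster is asking for, but for readers coming from the .NET side of this listing, the same strong-vs-weak distinction can be sketched in C# with System.WeakReference (a hedged analogue only; the Cache class below is my own illustration):

        using System;

        class Cache
        {
            // A strong field keeps the object alive for as long as the cache lives.
            private byte[] strongCopy;

            // A WeakReference does not keep its target alive; once no strong
            // references remain, the GC may collect it and Target becomes null.
            private WeakReference weakCopy;

            public void Remember(byte[] data)
            {
                strongCopy = data;                  // ownership: extends the data's lifetime
                weakCopy = new WeakReference(data); // observation: no lifetime guarantee
            }

            public byte[] TryGetWeak()
            {
                return weakCopy != null ? (byte[])weakCopy.Target : null;
            }
        }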

    Read the article

  • Disabling weak ciphers on Windows 2003

    - by Kev
    For PCI-DSS compliance you have to disable weak ciphers. PCI-DSS permits a minimum cipher size of 128 bits. However, for the highest score (0, I believe) you should only accept 168-bit ciphers, though you can still be compliant if you permit 128-bit ciphers. The trouble is that when we disable all but 168-bit encryption, it seems to disable both inbound and outbound secure channels. For example, we'd like to lock down inbound IIS HTTPS to 168-bit ciphers but permit outbound 128-bit SSL connections to payment gateways/services from service applications running on the server (not all payment gateways support 168-bit only, we just found out today). Is it possible to have cipher asymmetry on Windows 2003? I am told it is all or nothing.

    Read the article

  • How to fix: Handler “PageHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list

    - by ybbest
    Issue: Recently I ran into issues deploying an ASP.NET MVC 4 application to Windows Server 2008 R2. After adding the necessary roles and features, I set up an application in IIS. However, I received the following error message:

        Handler “PageHandlerFactory-Integrated” has a bad module “ManagedPipelineHandler” in its module list

    Solution: It turns out that this is because ASP.NET was not completely installed with IIS, even though I checked that box in the “Add Feature” dialog. To fix this, I simply ran the following command at the command prompt:

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i

    If I had been on a 32-bit system, it would have looked like the following:

        %windir%\Microsoft.NET\Framework\v4.0.21006\aspnet_regiis.exe -i

    References: http://stackoverflow.com/questions/6846544/how-to-fix-handler-pagehandlerfactory-integrated-has-a-bad-module-managedpip

    Read the article

  • Visual Studio 2008 doesn't create *.refresh files for external DLL references... what am I missing?

    - by Cory Larson
    Hi all-- I've got a question about something that's just been irritating me. A colleague and I are building a support framework for our current client that we want to reference in other projects. The DLL we want as a reference in our project would be an external reference. We're adding it by doing "Add Reference...", then browsing to the location of the .dll. What I want Visual Studio to do is only add the .xml, .pdb, and a .dll.refresh file, but instead it copies the actual .dll (and .xml and .pdb) into the bin. When we rebuild the framework project, the other project that uses its .dll gets all out of whack until we drop and re-add the reference. Everything I've read online says that VS2008 is supposed to create the .dll.refresh files for you, but it never does. Any ideas? Am I missing something or doing something wrong? At this point I'm ready to add a pre-build event to simply copy the framework .dll into my bin, but the .refresh file seems like less of a hassle if it would just work. Thanks, Cory

    Read the article

  • How do file references within PHP objects work?

    - by bender
    I'm trying to create a PHP object that can load objects in other files on demand. My problem is that when I reference the files based on file location for the class definition, it cannot find the files. So, the file structure is:

        /Test.php
        /os/os.php (extends kernel)
        /os/kernel.php
        /os/libraries/lib1.php
        /os/libraries/lib2.php
        /os/libraries/lib3.php

    In kernel.php, the libraries are referenced as 'libraries/lib1.php'. If I create an "os" object in Test.php, the lib files are not found.

    Read the article

  • Do variable references (aliases) incur runtime costs in C++?

    - by cheshirekow
    Maybe this is a compiler-specific thing. If so, how about for gcc (g++)? If you use a variable reference/alias like this:

        int x = 5;
        int& y = x;
        y += 10;

    does it actually require more cycles than if we didn't use the reference?

        int x = 5;
        x += 10;

    In other words, does the machine code change, or does the "alias" happen only at the compiler level? This may seem like a dumb question, but I am curious, especially in the case where it would be convenient to temporarily rename some member variables just so that the math code is a little easier to read. Sure, we're not exactly talking about a bottleneck here... but it's something that I'm doing, and so I'm just wondering if there is any 'actual' difference... or if it's only cosmetic.

    Read the article

  • Wireless signal changes from strong to weak after connecting

    - by gibberish
    Router (primary AP) is a WRVS4400N; WAP (signal booster) is a WAP4410N.

    Problem: The user is physically located within ten feet of the WAP (200 feet from the main wireless router). The signal is at 5 bars as the user connects to the wireless network. Within seconds, the signal is at or below two bars and the connection is poor.

    Background: I'm trying to solve a problem of weak wireless signal in the back offices. The desired result is for client laptops to automatically switch to the stronger signal. The WAP is connected to the network via an Ethernet cable and is set to AP mode (instead of Wireless Repeater mode). The WAP does appear to boost the signal: using the Windows 7 system-tray "Connect To A Network" applet, I can observe the signal boost as a laptop approaches the WAP. The problem described above happens to users located near or beyond the WAP. It does not happen to users in close proximity to the router.

    Secondary question: If using the WAP in AP mode, do the WAP and the router (primary AP) need to be on the same channel?

    Read the article

  • (Weak) ETags and Last-Modified

    - by Kai Moritz
    As far as I understand the specs, the ETag, which was introduced in RFC 2616 (HTTP/1.1), is a successor to the Last-Modified header, proposed to give the software architect more control over the cache-revalidation process. If both cache-validation headers (If-None-Match and If-Modified-Since) are present, then according to RFC 2616 the client (i.e. the browser) should use the ETag when checking whether a resource has changed. According to section 14.26 of RFC 2616, the server MUST NOT respond with a 304 Not Modified if the ETag presented in an If-None-Match header has changed, and it has to ignore an additional If-Modified-Since header, if present. If the presented ETag matches, the server MUST NOT perform the request unless the date in the If-Modified-Since header says so. (If the presented ETag matches, the server should respond with a 304 Not Modified in the case of a GET or HEAD request...) This section leaves room for some speculation: a strong ETag is supposed to change ''every time'' the resource changes. So having to respond with something other than 304 Not Modified to a request with an unchanged ETag and a non-matching If-Modified-Since header is a bit of a contradiction, because the strong ETag says that the resource was not modified. (Though this is not that fatal, because the server can send the same unchanged resource again.) ... o.k. While I was writing this, the question was boiling down to this answer: the (small) contradiction stated above exists because of weak ETags. A resource marked with a weak ETag may have changed even though the ETag has not. So, in the case of a weak ETag, it would be wrong to answer with 304 Not Modified when the ETag has not changed but the date presented in the If-Modified-Since header does not match, right?
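
    A minimal server-side sketch of that precedence rule, purely my own illustration (not from the question): a plain ASP.NET IHttpHandler that checks If-None-Match before If-Modified-Since, with stand-in GetCurrentETag/GetLastModified helpers and a deliberately simplified comparison:

        using System;
        using System.Web;

        public class ResourceHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string currentETag = GetCurrentETag();
                DateTime lastModified = GetLastModified();

                string ifNoneMatch = context.Request.Headers["If-None-Match"];
                bool notModified;

                // Simplified precedence: when If-None-Match is present, let the
                // ETag comparison decide and ignore If-Modified-Since entirely.
                if (ifNoneMatch != null)
                {
                    notModified = (ifNoneMatch == currentETag);
                }
                else
                {
                    string ifModifiedSince = context.Request.Headers["If-Modified-Since"];
                    DateTime since;
                    notModified = ifModifiedSince != null
                        && DateTime.TryParse(ifModifiedSince, out since)
                        && lastModified <= since.ToUniversalTime();
                }

                if (notModified)
                {
                    context.Response.StatusCode = 304;
                    return;
                }

                context.Response.AddHeader("ETag", currentETag);
                context.Response.AddHeader("Last-Modified", lastModified.ToString("R"));
                context.Response.Write("...full resource body...");
            }

            // Stand-in values for illustration only.
            private string GetCurrentETag() { return "\"v42\""; }
            private DateTime GetLastModified() { return DateTime.UtcNow.AddHours(-1); }
        }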

    Read the article

  • Reference manager for Ubuntu

    - by user36511
    I'm in dire need of a reference/citation manager for Ubuntu. The features I need most are:

    1) Metadata extraction/editing of PDFs
    2) Fetching metadata from online databases such as Google Scholar
    3) Attaching a PDF or other file to a reference
    4) Tagging references and recalling those with a given tag or set of tags
    5) Providing APA-style citations for references (in integration with OpenOffice and/or LaTeX)

    Optional: it would be great if it could annotate/highlight PDFs. Mendeley probably does all of these, but its behavior has driven me insane, especially when the number of references it's trying to handle is large. It constantly tries to sync with the web and creates duplicate references. I've tried JabRef, and while it looks like a decent piece of freeware, it doesn't do some of the above. I found others like Bibus, Referencer, etc. to be lacking, buggy, or no longer in active development. Is there another option, or should I give up the search?

    Read the article

  • Type Casting variables in PHP: Is there a practical example?

    - by Stephen
    PHP, as most of us know, has weak typing. For those who don't, PHP.net says: "PHP does not require (or support) explicit type definition in variable declaration; a variable's type is determined by the context in which the variable is used." Love it or hate it, PHP re-casts variables on the fly. So, the following code is valid:

        $var = "10";
        $value = 10 + $var;
        var_dump($value); // int(20)

    PHP also allows you to explicitly cast a variable, like so:

        $var = "10";
        $value = 10 + $var;
        $value = (string)$value;
        var_dump($value); // string(2) "20"

    That's all cool... but, for the life of me, I cannot conceive of a practical reason for doing this. I don't have a problem with strong typing in languages that support it, like Java. That's fine, and I completely understand it. Also, I'm aware of—and fully understand the usefulness of—type hinting in function parameters. The problem I have with type casting is explained by the above quote. If PHP can swap types at will, it can do so even after you force-cast a type; and it can do so on the fly when you need a certain type in an operation. That makes the following valid:

        $var = "10";
        $value = (int)$var;
        $value = $value . ' TaDa!';
        var_dump($value); // string(8) "10 TaDa!"

    So what's the point? Can anyone show me a practical application or example of type casting—one that would fail if type casting were not involved? I ask this here instead of SO because I figure practicality is too subjective.

    Edit in response to Chris' comment: Take this theoretical example of a world where user-defined type casting makes sense in PHP: You force-cast variable $foo as int -- (int)$foo. You attempt to store a string value in the variable $foo. PHP throws an exception!! <--- That would make sense. Suddenly the reason for user-defined type casting exists! The fact that PHP will switch things around as needed makes the point of user-defined type casting vague. For example, the following two code samples are equivalent:

        // example 1
        $foo = 0;
        $foo = (string)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

        // example 2
        $foo = 0;
        $foo = (int)$foo;
        $foo = '# of Reasons for the programmer to type cast $foo as a string: ' . $foo;

    UPDATE: Guess who found himself using typecasting in a practical environment? Yours truly. The requirement was to display money values on a website for a restaurant menu. The design of the site required that trailing zeros be trimmed, so that the display looked something like the following:

        Menu Item 1 .............. $ 4
        Menu Item 2 .............. $ 7.5
        Menu Item 3 .............. $ 3

    The best way I found to do that was to cast the variable as a float:

        $price = '7.50'; // a string from the database layer.
        echo 'Menu Item 2 .............. $ ' . (float)$price;

    PHP trims the float's trailing zeros, and then recasts the float as a string for concatenation.

    Read the article

  • How to fix the "Failed to initialize Windows Azure storage emulator" error

    - by ybbest
    When you press F5 to start debugging an Azure project, you might get an exception. If you go to the Output window, you will see the detailed error message below:

        Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Development Storage: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Development Storage using the ‘DSInit’ utility in the Windows Azure SDK.

    This is because, by default, Azure uses SQL Express to start Development Storage. To fix this, open a command prompt and navigate to C:\Program Files\Windows Azure SDK\v1.4\bin\devstore (depending on your Azure version, the path is slightly different). Next, run:

        DSInit /sqlInstance:.

    (The "." means SQL Server uses the default instance; if you have a named instance, change "." to the name of that instance.) After a short while, you should see a window showing that the configuration succeeded. You can download a batch file here. References: http://msdn.microsoft.com/en-us/library/gg433132.aspx

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently, and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

        public class DynamicFoo : DynamicObject
        {
            Dictionary<string, object> properties = new Dictionary<string, object>();

            public string Bar { get; set; }
            public DateTime Entered { get; set; }

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = null;
                if (!properties.ContainsKey(binder.Name))
                    return false;
                result = properties[binder.Name];
                return true;
            }

            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                properties[binder.Name] = value;
                return true;
            }
        }

    This class has an internal dictionary member, and I'm exposing this dictionary through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

    Strong Typing and Dynamic Casting

    I can now instantiate and use DynamicFoo in a couple of different ways.

    Strong typing:

        DynamicFoo fooExplicit = new DynamicFoo();
        var fooVar = new DynamicFoo();

    These two statements are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties:

        DynamicFoo fooExplicit = new DynamicFoo();

        // static typing assignments
        fooVar.Bar = "Barred!";
        fooExplicit.Entered = DateTime.Now;

        // echo back static values
        Console.WriteLine(fooVar.Bar);
        Console.WriteLine(fooExplicit.Entered);

    but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit, I get a compilation error that tells me the property doesn't exist. Again, it's clearly and utterly non-dynamic.

    Dynamic:

        dynamic fooDynamic = new DynamicFoo();

    fooDynamic, on the other hand, is created as a dynamic type, and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic, like this:

        DynamicFoo fooExplicit = new DynamicFoo();
        dynamic fooDynamic = fooExplicit;

    Note that dynamic typically doesn't require an explicit cast, as the compiler automatically performs the cast, so there's no need to use "as dynamic". Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically.
    A dynamic type will look for members to access or call in two places:

    - using the strongly typed members of the object
    - using the IDynamicMetaObjectProvider interface methods to access members

    So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed, *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

        dynamic fooDynamic = new DynamicFoo();

        // dynamic typing assignments
        fooDynamic.NewProperty = "Something new!";
        fooDynamic.LastAccess = DateTime.Now;

        // dynamically assigning static properties
        fooDynamic.Bar = "dynamic barred";
        fooDynamic.Entered = DateTime.Now;

        // echo back dynamic values
        Console.WriteLine(fooDynamic.NewProperty);
        Console.WriteLine(fooDynamic.LastAccess);
        Console.WriteLine(fooDynamic.Bar);
        Console.WriteLine(fooDynamic.Entered);

    The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can dynamically add members at runtime.

    The Alter Ego of IDynamicObject

    The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strongly typed, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic, you lose IntelliSense, which means there's no IntelliSense and no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

        dynamic fooDynamic = new DynamicFoo();
        fooDynamic.NewProperty = "New Property Value";

        DynamicFoo foo = fooDynamic;
        foo.Bar = "Barred";

    Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back, we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify "as dynamic" or "as DynamicFoo".

    Moral of the Story

    This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores as an object interface.
    You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime. Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET

    Read the article

  • Accessing Server-Side Data from Client Script: Using Ajax Web Services, Script References, and jQuery

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques. In a nutshell, the browser executes JavaScript code typically in response to the page loading or some user action. This JavaScript makes an asynchronous HTTP request to the server. The server processes this request and, perhaps, returns data that the browser can then seamlessly integrate into the web page. Typically, the information exchanged between the browser and server is serialized into JSON, an open, text-based serialization format that is both human-readable and platform independent. Adding such targeted, lightweight Ajax capabilities to your ASP.NET website requires two steps: first, you must create some mechanism on the server that accepts requests from client-side script and returns a JSON payload in response; second, you need to write JavaScript in your ASP.NET page to make an HTTP request to this service you created and to work with the returned results. This article series examines a variety of techniques for implementing such scenarios. In Part 1 we used an ASP.NET page and the JavaScriptSerializer class to create a server-side service. This service was called from the browser using the free, open-source jQuery JavaScript library. This article continues our examination of techniques for implementing lightweight Ajax scenarios in an ASP.NET website. Specifically, it examines how to create ASP.NET Ajax Web Services on the server-side and how to use both the ASP.NET Ajax Library and jQuery to consume them from the client-side. Read on to learn more! Read More >
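
    To give a flavor of the server half described above, here is a minimal sketch of an ASP.NET Ajax-enabled web service of the kind the article discusses. This is my own illustration of the general pattern, not code from the article; the ProductService class, Product type, and GetProducts method are hypothetical names:

        using System.Collections.Generic;
        using System.Web.Script.Services;
        using System.Web.Services;

        // Marking the .asmx service with [ScriptService] lets ASP.NET serialize
        // responses as JSON when the methods are called from client-side script.
        [WebService(Namespace = "http://example.com/")]
        [ScriptService]
        public class ProductService : WebService
        {
            [WebMethod]
            public List<Product> GetProducts(string category)
            {
                // A real service would query a data store using 'category';
                // a canned list keeps the sketch self-contained.
                return new List<Product>
                {
                    new Product { Name = "Widget", Price = 9.99m },
                    new Product { Name = "Gadget", Price = 19.99m }
                };
            }
        }

        public class Product
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

    On the client, a jQuery $.ajax POST with contentType "application/json; charset=utf-8" can then consume the JSON payload, which is the consumption path the article goes on to describe.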

    Read the article

  • Where do I place XNA content pipeline references?

    - by Zabby Wabby
    I am relatively new to XNA, and have started to delve into the use of the content pipeline. I have already figured out that tricky issue of adding a game library containing classes for any type of .xml file I want to read. Here's the issue. I am trying to handle the reading of all XML content through use of an XMLHandler object that uses the intermediate deserializer. Any time reading of such data is required, the appropriate method within this object would be called. So, as a simple example, something like this would occur when a character levels: public Spell LevelUp(int levelAchived) { XMLHandler.FindSkillsForLevel(levelAchived); } This method would then read the proper .xml file, sending back the spell for the character to learn. However, the XMLHandler is having issues even being created. I cannot get it to use the using namespace of Microsoft.Xna.Framework.Content.Pipeline. I get an error on my using statement in the XMLHandler class: using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate; The error is a typical reference error: Type or namespace name "'Pipeline' does not exist in the namespace 'Microsoft.Xna.Framework.Content' (are you missing an assembly reference?)" I THINK this is because this namespace is already referenced in my game's content. I would really have no issue placing this object within my game's content (since that is ALL it deals with anyways), but the Content project does not seem capable of holding anything but content files. In summary, I need to use the Intermediate Deserializer in my main project's logic, but, as far as I can make out, I can't safely reference the associated namespace for it outside of the game's content. I'm not a terribly well-versed programmer, so I may be just missing some big detail I've never learned here. How can I make this object accessible for all projects within the solution? I will gladly post more information if needed!

    Read the article

  • How do I import service references to Unity3D?

    - by Timothy Williams
    I'm attempting to access a service reference in Unity. I need two: the SOAP framework and a separate service called ContentVault. The respective service URLs are:

        SOAP: http://api.microsofttranslator.com/V2/Soap.svc
        ContentVault: http://ioun.wizards.com/ContentVault.svc

    Both services import fine into Visual Studio. I've tried everything I can think of, but they won't work with Unity. I just get various errors (changing depending on which solution I'm trying out). I've attempted using svcutil to export the services as external scripts, but all I got was a bunch of using errors. I've tried converting the code to work with .NET 2.0 to no avail; I've even tried making the services into .DLLs, with no success. How could I get these services working with Unity?

    Read the article

  • New Exadata, Exalogic, Exalytics Public References

    - by Javier Puerta
    CUSTOMER SUCCESS STORIES & SPOTLIGHTS

    AmerisourceBergen (US) - Oracle Exadata, Oracle Advanced Compression, Oracle Advanced Customer Support Services, Oracle Active Data Guard - Published: July 31, 2014
    Guangzhou Municipal Human Resources and Social Security Bureau (China) - Exalogic, Enterprise Mgr - Published: July 31, 2014
    Norfolk Southern Corp. (US) - Oracle Exadata, Oracle Exalytics, Oracle Business Intelligence Suite, Enterprise Edition - Published: July 30, 2014
    TDC (Denmark) - Oracle Exadata, Oracle ZFS Storage Appliance, SPARC T4-4, SPARC T4-1, Oracle Solaris, Oracle Consulting, Oracle Advanced Customer Support Services - Published: July 30, 2014
    Chosun Ilbo (Korea) - Oracle Exadata, Oracle GoldenGate - Published: July 29, 2014
    GIA (Gemological Institute of America) (US) - Exalogic, Exadata - Published: July 25, 2014
    City of Lakeland (US) - Oracle Exadata, Oracle Active Data Guard, Oracle Partitioning, Oracle Tuning Pack, Oracle Enterprise Manager, Oracle Diagnostics Pack, Oracle Enterprise Service Bus, Oracle Advanced Customer Support Services, Oracle Platinum Services - Published: July 15, 2014
    Tech Mahindra (India) - Oracle Exadata, SPARC T5-4, Oracle Solaris 11, PeopleSoft Human Resources, Oracle Advanced Customer Support Services - Published: July 01, 2014

    Read the article

  • C# Interview Preparation - References?

    - by Kanini
    This is a specific question relating to C#. However, it can be extrapolated to other languages too. While preparing for an interview as a C# developer (ASP.NET, WinForms, ...), what would be the typical reference material one should look at? Are there any good books or interview-question collections one should look at to be better prepared? This is just to know the different scenarios. For example, I might be comfortable writing SQL stored procedures and queries, but I might stumble when suddenly asked: "Given an Employee table with the columns EmployeeId, EmployeeName, and ManagerId, write a SQL query that returns the name of each employee and the name of their manager." NOTE: I am not asking for a question bank so that I can learn by rote what the questions are and reproduce them (which obviously will NOT work!)
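
    For what it's worth, a sketch of one common answer to that sample question (my own illustration, not from the post): the Employee table is self-joined on ManagerId, and a LEFT JOIN keeps employees who have no manager. Shown here embedded in a minimal C# snippet, with the connection string left as a placeholder:

        using System;
        using System.Data.SqlClient;

        class EmployeeManagerQuery
        {
            static void Main()
            {
                // LEFT JOIN the Employee table to itself so employees without
                // a manager (ManagerId IS NULL) still appear in the result.
                const string sql =
                    "SELECT e.EmployeeName, m.EmployeeName AS ManagerName " +
                    "FROM Employee e " +
                    "LEFT JOIN Employee m ON e.ManagerId = m.EmployeeId";

                using (var connection = new SqlConnection("<connection string here>"))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            string manager = reader.IsDBNull(1) ? "(none)" : reader.GetString(1);
                            Console.WriteLine("{0} reports to {1}", reader.GetString(0), manager);
                        }
                    }
                }
            }
        }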

    Read the article
