Search Results

Search found 97855 results on 3915 pages for 'code performance'.


  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team at Oracle has recently completed a series of benchmarks comparing Read/Write and Read-Only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the Read/Write test and 5x higher throughput on the Read-Only test, with 90% scalability across 36 CPU cores. A full analysis of the results and the MySQL configuration parameters used is documented in a new whitepaper. In addition to the benchmark, the whitepaper also includes:
    - A discussion of the use cases for each storage engine
    - Best practices for users considering the migration of existing applications from MyISAM to InnoDB
    - A summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1
    The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 here, and download it here to test whether you see similar performance gains in your real-world applications. By Mat Keep

    Read the article

  • Is JSF really ready to deliver high performance web applications?

    - by aklin81
    I have heard a lot of good things about JSF, but as far as I know people have also had serious complaints about this technology in the past, and I am not aware of how much the situation has improved. We are considering JSF as a probable technology for a social network project, but we do not know its performance characteristics, nor could we find any existing high-performance website that uses JSF. People complain about its performance and scalability issues. We are still not sure we are doing the right thing by choosing JSF, and would like to hear from you and take your input into consideration. Is it possible to configure JSF to satisfy the high-performance needs of a social networking service? And to what extent is it possible to live with the current problems in JSF?

    Read the article

  • Slides and Code from my Silverlight MVVM Talk at DevConnections

    - by dwahlin
    I had a great time at the DevConnections conference in Las Vegas this year where Visual Studio 2010 and Silverlight 4 were launched. While at the conference I had the opportunity to give a full-day Silverlight workshop as well as 4 different talks and met a lot of people developing applications in Silverlight. I also had a chance to appear on a live broadcast of Channel 9 with John Papa, Ward Bell and Shawn Wildermuth, record a video with Rick Strahl covering jQuery versus Silverlight and record a few podcasts on Silverlight and ASP.NET MVC 2. It was a really busy 4 days but I had a lot of fun chatting with people and hearing about different business problems they were solving with ASP.NET and/or Silverlight. Thanks to everyone who attended my sessions and took the time to ask questions and stop by to talk one-on-one. One of the talks I gave covered the Model-View-ViewModel pattern and how it can be used to build architecturally sound applications. Topics covered in the talk included:
    - Understanding the MVVM pattern
    - Benefits of the MVVM pattern
    - Creating a ViewModel class
    - Implementing INotifyPropertyChanged in a ViewModelBase class
    - Binding a ViewModel declaratively in XAML
    - Binding a ViewModel with code
    - ICommand and ButtonBase commanding support in Silverlight 4
    - Using InvokeCommandBehavior to handle additional commanding needs
    - Working with ViewModels and Sample Data in Blend
    - Messaging support with EventBus classes, EventAggregator and Messenger
    - My personal take on code in a code-beside file (I’m all in favor of it when used appropriately for message boxes, child windows, animations, etc.)
    One of the samples I showed in the talk was intended to teach all of the concepts mentioned above while keeping things as simple as possible. The sample demonstrates quite a few things you can do with Silverlight and the MVVM pattern, so check it out and feel free to leave feedback about things you like, things you’d do differently or anything else. MVVM is simply a pattern, not a way of life, so there are many different ways to implement it. If you’re new to the subject of MVVM check out the following resources. I wish this talk would’ve been recorded (especially since my live and canned demos all worked :-)) but these resources will help get you going quickly:
    - Getting Started with the MVVM Pattern in Silverlight Applications
    - Model-View-ViewModel (MVVM) Explained
    - Laurent Bugnion’s Excellent Talk at MIX10
    Download sample code and slides from my DevConnections talk. For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
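    To make the ViewModelBase and INotifyPropertyChanged items above concrete, here is a minimal sketch of the idea. It is not the conference sample (that is in the download above), and the CustomerViewModel name is only an illustration:

    using System.ComponentModel;

    // Base class: INotifyPropertyChanged lives here so concrete ViewModels
    // only have to call a helper when a bound property changes.
    public abstract class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    // Example ViewModel the View binds to, declaratively in XAML or in code.
    public class CustomerViewModel : ViewModelBase
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set
            {
                if (_name != value)
                {
                    _name = value;
                    OnPropertyChanged("Name"); // bindings refresh from this event
                }
            }
        }
    }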

    Read the article

  • On Her Majesty's Secret Source Code: .NET Reflector 7 Early Access Builds Now Available

    - by Bart Read
    Dodgy Bond references aside, I'm extremely happy to be able to tell you that we've just released our first .NET Reflector 7 Early Access build. We're going to make these available over the coming weeks via the main .NET Reflector download page at: http://reflector.red-gate.com/Download.aspx Please have a play and tell us what you think in the forum we've set up. Also, please let us know if you run into any problems in the same place. The new version so far comes with numerous decompilation improvements including (after 5 years!) support for iterator blocks - i.e., the yield statement first seen in .NET 2.0. We've also done a lot of work to solidify the support for .NET 4.0. Clive's written about the work he's done to support iterator blocks in much more detail here, along with the odd problem he's encountered when dealing with compiler generated code: http://www.simple-talk.com/community/blogs/clivet/96199.aspx. On the UI front we've started what will ultimately be a rewrite of the entire front-end, albeit broken into stages over two or three major releases. The most obvious addition at the moment is tabbed browsing, which you can see in Figure 1. Figure 1. .NET Reflector's new tabbed decompilation feature. Use CTRL+Click on any item in the assembly browser tree, or any link in the source code view, to open it in a new tab. This isn't by any means finished. I'll be tying up loose ends for the next few weeks, with a major focus on performance and resource usage. .NET Reflector has historically been a largely single-threaded application which has been fine up until now but, as you might expect, the addition of browser-style tabbing has pushed this approach somewhat beyond its limit. You can see this if you refresh the assemblies list by hitting F5. This shows up another problem: we really need to make Reflector remember everything you had open before you refreshed the list, rather than just the last item you viewed - I discovered that it's always done the latter, but it used to hide all panes apart from the treeview after a Refresh, including the decompiler/disassembler window. Ultimately I've got plans to add the whole VS/Chrome/Firefox style ability to drag a tab into the middle of nowhere to spawn a new window, but I need to be mindful of the add-ins, amongst other things, so it's possible that might slip to a 7.5 or 8.0 release. You'll also notice that .NET Reflector 7 now needs .NET 3.5 or later to run. We made this jump because we wanted to offer ourselves a much better chance of adding some really cool functionality to support newer technologies, such as Silverlight and Windows Phone 7. We've also taken the opportunity to start using WPF for UI development, which has frankly been a godsend. The learning curve is practically vertical but, I kid you not, it's just a far better world. Really. Stop using WinForms. Now. Why are you still using it? I had to go back and work on an old WinForms dialog for an hour or two yesterday and it really made me wince. The point is we'll be able to move the UI in some exciting new directions that will make Reflector easier to use whilst continuing to develop its functionality without (and this is key) cluttering the interface. The 3.5 language enhancements should also enable us to be much more productive over the longer term. 
I know most of you have .NET Fx 3.5 or 4.0 already but, if you do need to install a new version, I'd recommend you jump straight to 4.0 because, for one thing, it's faster, and if you're starting afresh there's really no reason not to. Despite the Fx version jump the Visual Studio add-in should still work fine in Visual Studio 2005, and obviously will continue to work in Visual Studio 2008 and 2010. If you do run into problems, again, please let us know here. As before, we continue to support every edition of Visual Studio exception the Express Editions. Speaking of Visual Studio, we've also been improving the add-in. You can now open and explore decompiled code for any referenced assembly in any project in your solution. Just right-click on the reference, then click Decompile and Explore on the context menu. Reflector will pop up a progress box whilst it decompiles your assembly (Figure 2) - you can move this out of the way whilst you carry on working. Figure 2. Decompilation progress. This isn't modal so you can just move it out of the way and carry on working. Once it's done you can explore your assembly in the Reflector treeview (Figure 3), also accessible via the .NET Reflector Explore Decompiled Assemblies main menu item. Double-click on any item to open decompiled source in the Visual Studio source code view. Use right-click and Go To Definition on the source view context menu to navigate through the code. Figure 3. Using the .NET Reflector treeview within Visual Studio. Double-click on any item to open decompiled source in the source code view. There are loads of other changes and fixes that have gone in, often under the hood, which I don't have room to talk about here, and plenty more to come over the next few weeks. I'll try to keep you abreast of new functionality and changes as they go in. There are a couple of smaller things worth mentioning now though. Firstly, we've reorganised the menus and toolbar in Reflector itself to more closely mirror what you might be used to in other applications. Secondly, we've tried to make some of the functionality more discoverable. For example, you can now switch decompilation target framework version directly from the toolbar - and the default is now .NET 4.0. I think that about covers it for the moment. As I said, please use the new version, and send us your feedback. Here's that download URL again: http://reflector.red-gate.com/Download.aspx. Until next time! Technorati Tags: .net reflector,7,early access,new version,decompilation,tabbing,visual studio,software development,.net,c#,vb
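    For context on the iterator-block support mentioned above, this is the kind of construct the decompiler now has to recognise: the compiler turns the method below into a hidden state-machine class, and Reflector has to fold that machinery back into a yield statement. This is an illustrative sample only, not code from Reflector itself:

    using System.Collections.Generic;

    static class Sequences
    {
        // An iterator block: compiled into a nested IEnumerator<int>
        // state machine rather than a straightforward method body.
        public static IEnumerable<int> Squares(int max)
        {
            for (int i = 1; i <= max; i++)
            {
                yield return i * i; // what .NET Reflector 7 can now reconstruct
            }
        }
    }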

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs. Challenges A word from Omnilogic “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment Demonstrate power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners Solutions Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, support seminars and to showcase live demonstrations for Oracle Partners Proved how Oracle Exadata’s pre-engineered systems, that come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve the full performance potential immediately after go live Increased processing 
speeds 10-fold and with zero data loss for a telecommunications provider’s client-facing customer relationship management solution Achieved performance improvements of between 6 and 100 times faster for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business, can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience Demonstrated that Oracle Exadata’s built-in analytics, data mining and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allows 10 times more data to be stored than on competitor platforms Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads on to fewer servers which delivers greater power efficiency and lower operating costs that competing systems from IBM and other manufacturers Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Cursors 1 Sets 0

    - by GrumpyOldDBA
    I had an interesting experience with a database I essentially know nothing about. On the server is a database which stores session state, Microsoft provide the code/database with their dot net, so I'm told. Anyway this database has sat happily on the production server for the past 4 years I guess, we've finally made the upgrade to SQL 2008 and the ASPState database has also been upgraded. It seems most likely that the performance increase of our upgrade tipped the usage of this database into...(read more)

    Read the article

  • Make your code gooder with the goodies gem

    - by kerry
    I have decided to publish all my Ruby tools via a gem called ‘goodies’.  To install this gem simply type ‘gem install goodies’. The source is hosted on GitHub.  The first version (0.1) has the Hash object accessors and the String file path utility methods discussed in the previous two posts. Enjoy!   Ruby Goodies @ GitHub Goodies on gemcutter.org

    Read the article

  • Mongodb performance on Windows

    - by Chris
    I've been researching nosql options available for .NET lately and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options:
    C:\mongodb\bin>mkdir data
    C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet
    I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance.
    Starting Tests
    encode (small).........................................320000 00:00:00.0156250
    encode (medium)........................................80000 00:00:00.0625000
    encode (large).........................................1818 00:00:02.7500000
    decode (small).........................................320000 00:00:00.0156250
    decode (medium)........................................160000 00:00:00.0312500
    decode (large).........................................2370 00:00:02.1093750
    insert (small, no index)...............................2176 00:00:02.2968750
    insert (medium, no index)..............................2269 00:00:02.2031250
    insert (large, no index)...............................778 00:00:06.4218750
    insert (small, indexed)................................2051 00:00:02.4375000
    insert (medium, indexed)...............................2133 00:00:02.3437500
    insert (large, indexed)................................835 00:00:05.9843750
    batch insert (small, no index).........................53333 00:00:00.0937500
    batch insert (medium, no index)........................26666 00:00:00.1875000
    batch insert (large, no index).........................1114 00:00:04.4843750
    find_one (small, no index).............................350 00:00:14.2812500
    find_one (medium, no index)............................204 00:00:24.4687500
    find_one (large, no index).............................135 00:00:37.0156250
    find_one (small, indexed)..............................352 00:00:14.1718750
    find_one (medium, indexed).............................184 00:00:27.0937500
    find_one (large, indexed)..............................128 00:00:38.9062500
    find (small, no index).................................516 00:00:09.6718750
    find (medium, no index)................................316 00:00:15.7812500
    find (large, no index).................................216 00:00:23.0468750
    find (small, indexed)..................................532 00:00:09.3906250
    find (medium, indexed).................................346 00:00:14.4375000
    find (large, indexed)..................................212 00:00:23.5468750
    find range (small, indexed)............................440 00:00:11.3593750
    find range (medium, indexed)...........................294 00:00:16.9531250
    find range (large, indexed)............................199 00:00:25.0625000
    Press any key to continue...
    For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing?
    Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running.
    Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.
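    One way to narrow a result like this down is to time a single operation in isolation with a small harness, so driver overhead, server time and test-loop overhead can be separated. A rough sketch follows; the FindOne call is only a placeholder for whatever driver operation you want to measure:

    using System;
    using System.Diagnostics;

    static class MiniBench
    {
        // Runs an operation repeatedly and reports operations per second,
        // in the same spirit as the driver benchmark output above.
        static void Measure(string name, int iterations, Action operation)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                operation();
            sw.Stop();
            Console.WriteLine("{0,-40}{1,8:N0}/s  {2}",
                name, iterations / sw.Elapsed.TotalSeconds, sw.Elapsed);
        }

        static void Main()
        {
            // Plug the real driver call in here, e.g. a FindOne against an
            // indexed collection; the empty delegate below is just a stand-in.
            Measure("find_one (small, indexed)", 1000, () => { /* collection.FindOne(query) */ });
        }
    }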

    Read the article

  • Performance Optimization &ndash; It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use Time Measurement Method Potential Pitfalls Stopwatch Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change luckily: Intel's Designer's vol3b, section 16.11.1 "16.11.1 Invariant TSC The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-. and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource." DateTime.Now Good but it has only a resolution of 16ms which can be not enough if you want more accuracy.   Reporting Method Potential Pitfalls Console.WriteLine Ok if not called too often. Debug.Print Are you really measuring performance with Debug Builds? Shame on you. Trace.WriteLine Better but you need to plug in some good output listener like a trace file. But be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section which does also take time.   In general it is a good idea to use some tracing library which does measure the timing for you and you only need to decorate some methods with tracing so you can later verify if something has changed for the better or worse. In my previous article I did compare measuring performance with quantum mechanics. This analogy does work surprising well. When you measure a quantum system there is a lower limit how accurately you can measure something. The Heisenberg uncertainty relation does tell us that you cannot measure of a quantum system the impulse and location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is the memory. But if your timing values do consume all available memory there is no memory left for the actual application to run. On the other hand if you try to record all memory allocations of your application you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there and regardless how good the marketing of tool vendors for performance and memory profilers are: Any measurement will disturb the system in a non predictable way. Commercial tool vendors will tell you they do calculate this overhead and subtract it from the measured values to give you the most accurate values but in reality it is not entirely true. After falling into the trap to trust the profiler timings several times I have got into the habit to Measure with a profiler to get an idea where potential bottlenecks are. 
Measure again with tracing only the specific methods to check if this method is really worth optimizing. Optimize it Measure again. Be surprised that your optimization has made things worse. Think harder Implement something that really works. Measure again Finished! - Or look for the next bottleneck. Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used and I was not sure if XML is really the most optimal wire format. After looking around I have found protobuf-net which uses Googles Protocol Buffer format which is a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes: using ProtoBuf; using System; using System.Diagnostics; using System.IO; using System.Reflection; using System.Runtime.Serialization; [DataContract, Serializable] class Data { [DataMember(Order=1)] public int IntValue { get; set; } [DataMember(Order = 2)] public string StringValue { get; set; } [DataMember(Order = 3)] public bool IsActivated { get; set; } [DataMember(Order = 4)] public BindingFlags Flags { get; set; } } class Program { static MemoryStream _Stream = new MemoryStream(); static MemoryStream Stream { get { _Stream.Position = 0; _Stream.SetLength(0); return _Stream; } } static void Main(string[] args) { DataContractSerializer ser = new DataContractSerializer(typeof(Data)); Data data = new Data { IntValue = 100, IsActivated = true, StringValue = "Hi this is a small string value to check if serialization does work as expected" }; var sw = Stopwatch.StartNew(); int Runs = 1000 * 1000; for (int i = 0; i < Runs; i++) { //ser.WriteObject(Stream, data); Serializer.Serialize<Data>(Stream, data); } sw.Stop(); Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs); Console.ReadLine(); } } The results are indeed promising: Serializer Time in ms N objects protobuf-net   807 1000000 DataContract 4402 1000000 Nearly a factor 5 faster and a much more compact wire format. Lets use it! After switching over to protbuf-net the transfered wire data has dropped by a factor two (good) and the performance has worsened by nearly a factor two. How is that possible? We have measured it? Protobuf-net is much faster! As it turns out protobuf-net is faster but it has a cost: For the first time a type is de/serialized it does use some very smart code-gen which does not come for free. Lets try to measure this one by setting of our performance test app the Runs value not to one million but to 1. Serializer Time in ms N objects protobuf-net 85 1 DataContract 24 1 The code-gen overhead is significant and can take up to 200ms for more complex types. The break even point where the code-gen cost is amortized by its faster serialization performance is (assuming small objects) somewhere between 20.000-40.000 serialized objects. As it turned out my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up was to reduce the number of types and to serialize primitive types via BinaryWriter directly which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far do not help much. After looking more deeper at the profiling data I did found that one of the 1000 calls did take 50% of the time. So how do I find out which call it was? 
Normal profilers do fail short at this discipline. A (totally undeserved) relatively unknown profiler is SpeedTrace which does unlike normal profilers create traces of your applications by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well and I could see that the slow serializer call did happen during the serialization of a bool value. When you encounter after much analysis something unreasonable you cannot explain it then the chances are good that your thread was suspended by the garbage collector. If there is a problem with excessive GCs remains to be investigated but so far the serialization performance seems to be mostly ok.  When you do profile a complex system with many interconnected processes you can never be sure that the timings you just did measure are accurate at all. Some process might be hitting the disc slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run because of the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again getting different and much lower numbers. Now try again at least two times to get some feeling how stable the numbers are. Oh and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff it can become pretty hard to find stuff worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I do make the code changes and do measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster I can blame the GC again. The thing is that if you optimize stuff and you allocate less objects the GC times will shift to some other location. If you are unlucky it will make your faster working code slower because you see now GCs at times where none were before. This is where the stuff does get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers do also start working again. As Vance Morrison does point out it is much more complex to profile a system against the wall clock compared to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system does work and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user and which threads can wait a long time without affecting the user experience at all. Next time: Commercial profiler shootout.
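    A practical footnote to the code-gen cost discussed above: when benchmarking protobuf-net it can help to pay that one-off cost before starting the stopwatch, so the steady-state numbers are not skewed by the first call. The sketch below is a hedged example; PrepareSerializer is the protobuf-net call I believe triggers the model compilation up front, so verify it against the version you are using:

    using System;
    using System.Diagnostics;
    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    class Payload
    {
        [ProtoMember(1)] public int IntValue { get; set; }
        [ProtoMember(2)] public string StringValue { get; set; }
    }

    class WarmBenchmark
    {
        static void Main()
        {
            // Pay the code-gen cost once, outside the timed region
            // (assumed API; check PrepareSerializer exists in your version).
            Serializer.PrepareSerializer<Payload>();

            var data = new Payload { IntValue = 100, StringValue = "warm-up sample" };
            const int Runs = 1000 * 1000;
            using (var ms = new MemoryStream())
            {
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < Runs; i++)
                {
                    ms.Position = 0;
                    ms.SetLength(0);
                    Serializer.Serialize(ms, data);
                }
                sw.Stop();
                Console.WriteLine("Warm: {0:N0} ms for {1:N0} objects",
                    sw.Elapsed.TotalMilliseconds, Runs);
            }
        }
    }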

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language and while that's a fundamental feature of the language there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C# Creating a dynamic, extensible C# Expando Object Creating a dynamic DataReader for dynamic Property Access Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code especially those concerning generic types. TypeCasting Generics Generic types have been around since .NET 2.0 I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control so I have to make due with what's available or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case. A Generic Example Imagine I've implemented a generic type like this: public class RazorEngine<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase, new() You can now happily instantiate new generic versions of this type with custom template bases or even a non-generic version which is implemented like this: public class RazorEngine : RazorEngine<RazorTemplateBase> { public RazorEngine() : base() { } } To instantiate one: var engine = new RazorEngine<MyCustomRazorTemplate>(); Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the Engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template: var template = new TBaseTemplateType() { Engine = this } The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface.  Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface: public interface IRazorEngine<TBaseTemplateType> it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter): public class RazorTemplateBase :MarshalByRefObject,IDisposable { public object Engine {get;set; } } Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly. 
What I really would need is: public RazorEngine<> Engine { get; set; } but that's not possible. Dynamic to the Rescue Luckily with the dynamic type this sort of thing can be mitigated fairly easily. For example here's a method that uses the Engine property and uses the well known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:/// <summary> /// Allows rendering a dynamic template from a string template /// passing in a model. This is like rendering a partial /// but providing the input as a /// </summary> public virtual string RenderTemplate(string template,object model) { if (template == null) return string.Empty; // if there's no template markup if(!template.Contains("@")) return template; // use dynamic to get around generic type casting dynamic engine = Engine; string result = engine.RenderTemplate(template, model); if (result == null) throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage); return result; } Prior to .NET 4.0  I would have had to use Reflection for this sort of thing which would have a been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner and in this case at least the overhead is negliable since it's a single dynamic operation on an otherwise very complex operation call. Dynamic as  a Bailout Sometimes this sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case this particular scenario wasn't planned for originally and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead dynamic provides a nice and simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner but given this isolated instance and relatively low profile operation use of dynamic seems a valid choice for me. This solution really works anywhere where you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting but there's no common base type that I can cast to. All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should definitely stick to static typing. In the example above the application already uses dynamics extensively for dynamic page page templating and passing models around so introducing dynamics here has very little additional overhead. The operation itself also fires of a fairly resource heavy operation where the overhead of a couple of dynamic member accesses are not a performance issue. So, what's your experience with dynamic as a bailout mechanism? 
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp.
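    As a self-contained illustration of the bailout described in the article, here is my own minimal sketch, not Rick's actual RazorHosting types; the Engine and TemplateBase names below are made up:

    using System;

    public class Engine<TTemplate> where TTemplate : TemplateBase, new()
    {
        public string RenderTemplate(string template, object model)
        {
            return "[rendered] " + template; // real work omitted
        }
    }

    public class TemplateBase
    {
        // Typed as object because there is no non-generic base or interface
        // common to every Engine<T> that could be used here.
        public object Engine { get; set; }

        public string Render(string template, object model)
        {
            // dynamic defers the member lookup to runtime, so any Engine<T>
            // works without knowing T statically.
            dynamic engine = Engine;
            return engine.RenderTemplate(template, model);
        }
    }

    class Demo
    {
        static void Main()
        {
            var template = new TemplateBase { Engine = new Engine<TemplateBase>() };
            Console.WriteLine(template.Render("Hello @Name", new { Name = "World" }));
        }
    }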

    Read the article

  • Visitor pattern and compiler code generation, how to get children attributes?

    - by LeleDumbo
    I'd like to modify my compiler's code generator to use the visitor pattern, since the current approach has to use multiple conditional statements to check the real type of a child before generating the corresponding code. However, I have problems getting the children's attributes after they're visited. For instance, for a binary expression I use this:
      LHSCode := GenerateExpressionCode(LHSNode);
      RHSCode := GenerateExpressionCode(RHSNode);
      CreateBinaryExpression(Self,LHS,RHS);
    In the visitor pattern the visit method is usually void, so I can't get the expression code from LHS and RHS. Keeping shared global variables isn't an option since expression code generation is recursive and thus could erase previous values kept in the variables. I'll just show the binary expression as this is the most complicated part (for now):
    function TLLVMCodeGenerator.GenerateExpressionCode(Expr: TASTExpression): TLLVMValue;
    var
      BinExpr: TASTBinaryExpression;
      UnExpr: TASTUnaryExpression;
      LHSCode, RHSCode, ExprCode: TLLVMValue;
      VarExpr: TASTVariableExpression;
    begin
      if Expr is TASTBinaryExpression then
      begin
        BinExpr := Expr as TASTBinaryExpression;
        LHSCode := GenerateExpressionCode(BinExpr.LHS);
        RHSCode := GenerateExpressionCode(BinExpr.RHS);
        case BinExpr.Op of
          '<': Result := FBuilder.CreateICmp(ccSLT, LHSCode, RHSCode);
          '<=': Result := FBuilder.CreateICmp(ccSLE, LHSCode, RHSCode);
          '>': Result := FBuilder.CreateICmp(ccSGT, LHSCode, RHSCode);
          '>=': Result := FBuilder.CreateICmp(ccSGE, LHSCode, RHSCode);
          '==': Result := FBuilder.CreateICmp(ccEQ, LHSCode, RHSCode);
          '<>': Result := FBuilder.CreateICmp(ccNE, LHSCode, RHSCode);
          '/\': Result := FBuilder.CreateAnd(LHSCode, RHSCode);
          '\/': Result := FBuilder.CreateOr(LHSCode, RHSCode);
          '+': Result := FBuilder.CreateAdd(LHSCode, RHSCode);
          '-': Result := FBuilder.CreateSub(LHSCode, RHSCode);
          '*': Result := FBuilder.CreateMul(LHSCode, RHSCode);
          '/': Result := FBuilder.CreateSDiv(LHSCode, RHSCode);
        end;
      end
      else if Expr is TASTPrimaryExpression then
        if Expr is TASTBooleanConstant then
          with Expr as TASTBooleanConstant do
            Result := FBuilder.CreateConstant(Ord(Value), ltI1)
        else if Expr is TASTIntegerConstant then
          with Expr as TASTIntegerConstant do
            Result := FBuilder.CreateConstant(Value, ltI32)
        else if Expr is TASTUnaryExpression then
        begin
          UnExpr := Expr as TASTUnaryExpression;
          ExprCode := GenerateExpressionCode(UnExpr.Expr);
          case UnExpr.Op of
            '~': Result := FBuilder.CreateXor(FBuilder.CreateConstant(1, ltI1), ExprCode);
            '-': Result := FBuilder.CreateSub(FBuilder.CreateConstant(0, ltI32), ExprCode);
          end;
        end
        else if Expr is TASTVariableExpression then
        begin
          VarExpr := Expr as TASTVariableExpression;
          with VarExpr.VarDecl do
            Result := FBuilder.CreateVar(Ident, BaseTypeLLVMTypeMap[BaseType]);
        end;
    end;
    Hope you understand it :)
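    One common way around the void-visit problem is to give the visit methods a return value (or, equivalently, to keep an explicit result stack inside the visitor rather than globals). Below is a small C# sketch of the return-value approach, using made-up node names rather than the TAST* classes above:

    using System;

    abstract class Expr
    {
        public abstract T Accept<T>(IExprVisitor<T> visitor);
    }

    class IntConst : Expr
    {
        public int Value;
        public override T Accept<T>(IExprVisitor<T> visitor) { return visitor.VisitIntConst(this); }
    }

    class BinaryExpr : Expr
    {
        public string Op;
        public Expr Lhs, Rhs;
        public override T Accept<T>(IExprVisitor<T> visitor) { return visitor.VisitBinary(this); }
    }

    // Visitor whose visit methods return a value, so the code generator can
    // collect the child results directly instead of using shared state.
    interface IExprVisitor<T>
    {
        T VisitIntConst(IntConst e);
        T VisitBinary(BinaryExpr e);
    }

    class CodeGen : IExprVisitor<string>
    {
        public string VisitIntConst(IntConst e) { return e.Value.ToString(); }

        public string VisitBinary(BinaryExpr e)
        {
            // The children's "attributes" come back as return values.
            string lhs = e.Lhs.Accept(this);
            string rhs = e.Rhs.Accept(this);
            return string.Format("({0} {1} {2})", lhs, e.Op, rhs);
        }
    }

    class VisitorDemo
    {
        static void Main()
        {
            Expr e = new BinaryExpr { Op = "+", Lhs = new IntConst { Value = 1 }, Rhs = new IntConst { Value = 2 } };
            Console.WriteLine(e.Accept(new CodeGen())); // prints (1 + 2)
        }
    }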

    Read the article

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    - by Bjørn
    Hello, I have a server running Windows 2008 64-bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three virtual machines, all running Windows 2008 32-bit:
    - Build server, running Team City
    - Staging server
    - SQL Server, running SQL Server 2005
    I have some trouble with the setup: the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.) So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am being told that I cannot reliably use the task manager to monitor CPU usage in a VM.
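    For what it's worth, the host-side Hyper-V counters can also be sampled programmatically; a rough sketch is below. The category and counter names ("Hyper-V Hypervisor Logical Processor" / "% Total Run Time") are assumptions based on the counter sets normally present on a Hyper-V host, so verify them in perfmon first:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class HyperVCpuSample
    {
        static void Main()
        {
            // Assumed counter names -- confirm them in Performance Monitor
            // on the host before relying on this.
            var counter = new PerformanceCounter(
                "Hyper-V Hypervisor Logical Processor",
                "% Total Run Time",
                "_Total");

            counter.NextValue();              // the first read always returns 0
            for (int i = 0; i < 10; i++)
            {
                Thread.Sleep(1000);
                Console.WriteLine("{0:T}  physical CPU busy: {1,5:F1} %",
                    DateTime.Now, counter.NextValue());
            }
        }
    }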

    Read the article

  • LVM2 vs MDADM performance

    - by archer
    I've used MDADM + LVM2 on many boxes for quite a while. MDADM was serving both RAID0 and RAID1 arrays, while LVM2 was used for logical volumes on top of MDADM. Recently I've found that LVM2 can be used without MDADM (thus minus one layer and, as a result, less overhead) for both mirroring and striping. However, some people claim that READ performance of an LVM2 mirrored array is not as fast as LVM2 (linear) on top of MDADM (RAID1), because LVM2 does not read from 2+ devices at a time, but only uses the 2nd and higher devices in case of 1st device failure. MDADM reads from 2 devices at a time (even in mirrored mode). Who can confirm that?

    Read the article

  • Tweaking Firefox for Performance

    - by Simon Sheehan
    As an avid Firefox user since it began, I've been looking to make some under the hood changes to it, in order to optimize it for speed and performance. I'd also like to limit my RAM usage with it. Are there any settings that can help this? What can be changed in about:config that affects this? I'd also like to know if themes or anything really boost RAM usage, as they are generally very small files to download. Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:7.0a1) Gecko/20110630 Firefox/7.0a1

    Read the article

  • Slides and code for MPI Cluster Debugger

    I've blogged before about the MPI Cluster Debugger in VS2010 that facilitates launching the application on the cluster and attaching the debugger (btw, a shorter version of the screencast I link to there is here). There have been requests for the code I use in the screencast, so please find a ZIP with that code. There have also been requests for a PowerPoint deck to use when showing this feature to others. Feel free to download some slides I threw together the other day. Comments about this post are welcome at the original blog.

    Read the article

  • CPU & Memory Usage Log & Performance

    - by wittythotha
    I want to get an idea of the amount of CPU and memory being used. I have a website hosted in IIS, with clients connecting to it. I want to find out how much load the CPU, RAM and network are under when multiple clients connect. I tried tools like Fiddler, the built-in Resource Manager, and also some other applications I found on the internet. I just want to keep track of all this data in a file, so I can plot a graph and find out how the CPU, etc. is performing. I read a few other posts, but didn't find anything that solves the problem. Is there a good CPU/memory logging tool available, just to plot a graph of the usage, etc.? EDIT: I want a tool that can save the performance details in a log file, so that I can use it to plot a graph, etc.
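    If a small custom logger is acceptable, the standard Windows counters can be sampled and written to a CSV for later graphing; a minimal sketch is below (the "usage.csv" path and the one-minute duration are just examples):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class UsageLogger
    {
        static void Main()
        {
            // "Processor \ % Processor Time" and "Memory \ Available MBytes"
            // are standard Windows performance counters.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var mem = new PerformanceCounter("Memory", "Available MBytes");

            using (var log = new StreamWriter("usage.csv", true)) // append to the log file
            {
                log.WriteLine("timestamp,cpu_percent,available_mb");
                cpu.NextValue();                 // prime the counter; first read is 0
                for (int i = 0; i < 60; i++)     // one sample per second for a minute
                {
                    Thread.Sleep(1000);
                    log.WriteLine("{0:o},{1:F1},{2:F0}",
                        DateTime.Now, cpu.NextValue(), mem.NextValue());
                }
            }
        }
    }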

    Read the article

  • How do I (tactfully) tell my project manager or lead developer that the project's codebase needs serious work?

    - by Adam Maras
    I just joined a (relatively) small development team that's been working on a project for several months, if not a year. As with most developers joining a project, I spent my first couple of days reviewing the project's codebase. The project (a medium- to large-sized ASP.NET WebForms internal line-of-business application) is, for lack of a more descriptive term, a disaster. There are three immediately noticeable problems with the coding standards:
    1. The standard is very loose. It describes more of what not to do (don't use Hungarian notation, etc.) than what to do.
    2. The standard isn't always followed. There are inconsistencies with the code formatting everywhere.
    3. The standard doesn't follow Microsoft's style guidelines. In my opinion, there's no value in deviating from the guidelines that were set forth by the developer of the framework and the largest contributor to the language specification.
    As for point 3, perhaps it bothers me more because I've taken the time to get my MCPD with a focus on web applications (specifically, ASP.NET). I'm also the only Microsoft Certified Professional on the team. Because of what I learned in all of my schooling, self-teaching, and on-the-job learning (including my preparation for the certification exams), I've also spotted several instances in the project's code where things are simply not done in the best way. I've only been on this team for a week, but I see so many issues with their codebase that I imagine I'll be spending more time fighting with what's already written to do things in "their way" than I would if I were working on a project that, for example, followed more widely accepted coding standards, architecture patterns, and best practices. This brings me to my question: Should I (and if so, how do I) propose to my project manager and team lead that the project needs to be majorly renovated? I don't want to walk into their office, waving my MCTS and MCPD certificates around, saying that their project's codebase is crap. But I also don't want to have to stay silent and write kludgey code atop their kludgey code, because I actually want to write quality software and I want the end product to be stable and easily maintainable.

    Read the article

  • Philly.NET Code Camp

    - by Steve Michelotti
    This Saturday I will be at the Philly.NET Code Camp presenting C# 4.0.  The code camp is currently registered to capacity (800 attendees), but you will be able to view certain presentations on a Live Meeting simulcast (and later on Channel 9).  You can tune in at 3:30 PM Eastern time to view my presentation. The attendee URL is here.

    Read the article

  • performance counter

    - by Abruzzo Forte e Gentile
    Hi all, I created a performance counter for my C# application. Its type is NumberOfItems32. I don't know why, but Performance Monitor shows a maximum value of only 100 on the y-axis, even though my counter is certainly much larger than that. Do you know if this is the correct behavior, or am I doing something wrong? Thanks all, AFG
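    For reference, a NumberOfItems32 counter itself has no 100 cap: 100 is just Performance Monitor's default vertical graph scale, which can be adjusted per counter in the graph properties. Below is a minimal sketch of creating and incrementing such a counter; the "MyApp" category and counter names are placeholders, not taken from the question:

    using System.Diagnostics;

    class CounterSetup
    {
        const string Category = "MyApp";                // example names only
        const string CounterName = "Items Processed";

        static void Main()
        {
            if (!PerformanceCounterCategory.Exists(Category))
            {
                var counters = new CounterCreationDataCollection
                {
                    new CounterCreationData(CounterName,
                        "Total items processed by MyApp",
                        PerformanceCounterType.NumberOfItems32)
                };
                PerformanceCounterCategory.Create(Category, "MyApp counters",
                    PerformanceCounterCategoryType.SingleInstance, counters);
            }

            using (var counter = new PerformanceCounter(Category, CounterName, false))
            {
                counter.RawValue = 0;
                counter.IncrementBy(500); // values above 100 are fine; only the
                                          // perfmon graph scale defaults to 0-100
            }
        }
    }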

    Read the article

  • 4th Annual Hartford Code Camp - The Code Camp Manifesto lives on!

    - by SB Chatterjee
    It is amazing that Thom Robbins' blog posting back in December 2004 laid the foundation of the Code Camps that have grown worldwide - there is at least one every weekend in some country (unscientific tweet-stats sampling). This weekend, we at the Connecticut .NET Developers Group held the 4th Annual Hartford Code Camp and it was well attended, with 120+ attendees and ~30 sessions. Our thanks to the speakers from near and far who made our event a success.

    Read the article

  • POI performance

    - by The Machine
    I am using POI in my J2EE web application to generate a workbook. However, I find that POI takes around 3 minutes to create a workbook with 25K rows (with around 15 columns each). Is this a POI performance issue, or is it justified to take that much time? Are there other APIs known for better performance?

    Read the article

  • My Code Kata–A Solution Kata

    - by Glav
    There are many developers and coders out there who like to do code Kata’s to keep their coding ability up to scratch and to practice their skills. I think it is a good idea. While I like the concept, I find them dead boring and of minimal purpose. Yes, they serve to hone your skills but that’s about it. They are often quite abstract, in that they usually focus on a small problem set requiring specific solutions. It is fair enough as that is how they are designed but again, I find them quite boring. What I personally like to do is go for something a little larger and a little more fun. It takes a little more time and is not as easily executed as a kata though, but it services the same purposes from a practice perspective and allows me to continue to solve some problems that are not directly part of the initial goal. This means I can cover a broader learning range and have a bit more fun. If I am lucky, sometimes they even end up being useful tools. With that in mind, I thought I’d share my current ‘kata’. It is not really a code kata as it is too big. I prefer to think of it as a ‘solution kata’. The code is on bitbucket here. What I wanted to do was create a kind of simplistic virtual world where I can create a player, or a class, stuff it into the world, and see if it survives, and can navigate its way to the exit. Requirements were pretty simple: Must be able to define a map to describe the world using simple X,Y co-ordinates. Z co-ordinates as well if you feel like getting clever. Should have the concept of entrances, exists, solid blocks, and potentially other materials (again if you want to get clever). A coder should be able to easily write a class which will act as an inhabitant of the world. An inhabitant will receive stimulus from the world in the form of surrounding environment and be able to make a decision on action which it passes back to the ‘world’ for processing. At a minimum, an inhabitant will have sight and speed characteristics which determine how far they can ‘see’ in the world, and how fast they can move. Coders who write a really bad ‘inhabitant’ should not adversely affect the rest of world. Should allow multiple inhabitants in the world. So that was the solution I set out to act as a practice solution and a little bit of fun. It had some interesting problems to solve and I figured, if it turned out ok, I could potentially use this as a ‘developer test’ for interviews. Ask a potential coder to write a class for an inhabitant. Show the coder the map they will navigate, but also mention that we will use their code to navigate a map they have not yet seen and a little more complex. I have been playing with solution for a short time now and have it working in basic concepts. Below is a screen shot using a very basic console visualiser that shows the map, boundaries, blocks, entrance, exit and players/inhabitants. The yellow asterisks ‘*’ are the players, green ‘O’ the entrance, purple ‘^’ the exit, maroon/browny ‘#’ are solid blocks. The players can move around at different speeds, knock into each others, and make directional movement decisions based on what they see and who is around them. It has been quite fun to write and it is also quite fun to develop different players to inject into the world. The code below shows a really simple implementation of an inhabitant that can work out what to do based on stimulus from the world. It is pretty simple and just tries to move in some direction if there is nothing blocking the path. 
public class TestPlayer:LivingEntity { public TestPlayer() { Name = "Beta Boy"; LifeKey = Guid.NewGuid(); } public override ActionResult DecideActionToPerform(EcoDev.Core.Common.Actions.ActionContext actionContext) { try { var action = new MovementAction(); // move forward if we can if (actionContext.Position.ForwardFacingPositions.Length > 0) { if (CheckAccessibilityOfMapBlock(actionContext.Position.ForwardFacingPositions[0])) { action.DirectionToMove = MovementDirection.Forward; return action; } } if (actionContext.Position.LeftFacingPositions.Length > 0) { if (CheckAccessibilityOfMapBlock(actionContext.Position.LeftFacingPositions[0])) { action.DirectionToMove = MovementDirection.Left; return action; } } if (actionContext.Position.RearFacingPositions.Length > 0) { if (CheckAccessibilityOfMapBlock(actionContext.Position.RearFacingPositions[0])) { action.DirectionToMove = MovementDirection.Back; return action; } } if (actionContext.Position.RightFacingPositions.Length > 0) { if (CheckAccessibilityOfMapBlock(actionContext.Position.RightFacingPositions[0])) { action.DirectionToMove = MovementDirection.Right; return action; } } return action; } catch (Exception ex) { World.WriteDebugInformation("Player: "+ Name, string.Format("Player Generated exception: {0}",ex.Message)); throw ex; } } private bool CheckAccessibilityOfMapBlock(MapBlock block) { if (block == null || block.Accessibility == MapBlockAccessibility.AllowEntry || block.Accessibility == MapBlockAccessibility.AllowExit || block.Accessibility == MapBlockAccessibility.AllowPotentialEntry) { return true; } return false; } } It is simple and it seems to work well. The world implementation itself decides the stimulus context that is passed to he inhabitant to make an action decision. All movement is carried out on separate threads and timed appropriately to be as fair as possible and to cater for additional skills such as speed, and eventually maybe stamina, strength, with actions like fighting. It is pretty fun to make up random maps and see how your inhabitant does. You can download the code from here. Along the way I have played with parallel extensions to make the compute intensive stuff spread across all cores, had to heavily factor in visibility of methods and properties so design of classes was paramount, work out movement algorithms that play fairly in the world and properly favour the players with higher abilities, as well as a host of other issues. So that is my ‘solution kata’. If I keep going with it, I may develop a web interface for it where people can upload assemblies and watch their player within a web browser visualiser and maybe even a map designer. What do you do to keep the fires burning?

    Read the article
