Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • BizTalk: mapping with Xslt

    - by Leonid Ganeline
    The BizTalk Map Editor (Mapper) is a good editor, especially in the latest 2010 version of BizTalk, but sometimes it still cannot do a task easily. That is when it is time for Xslt code, and time to remember that maps are executed by the Xslt engine. Right-click the Mapper grid (the field between the source and target schemas) and choose Properties / Custom XSLT Path. Enter the name of a file containing your Xslt code. Only this code will be executed; forget the picture in the Mapper, all those links and functoids. Let's look at a real-life example. There are two source Addresses: one on the top level and a second inside the Member_Address record with MaxOccurs=*. The target address is placed inside the Locator record with MaxOccurs=*. The requirement is to map all source addresses into the one target address structure. (The original post shows the source Xml document, the expected result Xml, and the final Xslt; they are not reproduced in this excerpt.) Try to do this mapping with the Mapper and you will spend a good amount of time, and the resulting map will be tricky. If we use Xslt code, the mapping is simple and unambiguous (a hypothetical sketch follows below). Simple, elegant.
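    Since the original source/target Xml and the final Xslt are not reproduced in this excerpt, here is a minimal, hypothetical sketch of the kind of stylesheet the post describes. Only Member_Address and Locator come from the text above; the other element names (Members, Address, Locators) are assumptions for illustration.

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- match the (hypothetical) source root -->
          <xsl:template match="/Members">
            <Locators>
              <!-- the single top-level address becomes one Locator -->
              <Locator>
                <xsl:copy-of select="Address/*" />
              </Locator>
              <!-- each repeating Member_Address contributes another Locator -->
              <xsl:for-each select="Member_Address/Address">
                <Locator>
                  <xsl:copy-of select="*" />
                </Locator>
              </xsl:for-each>
            </Locators>
          </xsl:template>
        </xsl:stylesheet>

    Pointing the map's Custom XSLT Path at a file like this replaces whatever links and functoids are drawn on the grid.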

    Read the article

  • Autoscaling in a modern world&hellip;. last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important. Our code will work right the first time. So, you can understand my surprise when the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale. I mean, it worked on my machine. How dare the datacenter argue with that. So, how did I track down the problem? (It turns out it was not so much code as the lack of the right certificate.) When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide whether or not to act. But that information was not making it to Azure storage. The diagnostics were not being transferred to where I could easily see them and use them to track down why things were not being cooperative. After a bit of digging, I discovered the problem. You need to add a bit of extra configuration to get the correct information stored for you. I added the following to my app.config:

        <system.diagnostics>
          <sources>
            <source name="Autoscaling General" switchName="SourceSwitch"
                    switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
            <source name="Autoscaling Updates" switchName="SourceSwitch"
                    switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="SourceSwitch"
                 value="Verbose, Information, Warning, Error, Critical" />
          </switches>
          <sharedListeners>
            <add name="AzureDiag"
                 type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </sharedListeners>
          <trace>
            <listeners>
              <add name="AzureDiagnostics"
                   type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                <filter type="" />
              </add>
            </listeners>
          </trace>
        </system.diagnostics>

    Suddenly all the rich tracing info I needed was filling up my storage account. After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went. I hope this was helpful.
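    The DiagnosticMonitorTraceListener above only buffers trace data locally; it reaches table storage once the role schedules a transfer. If that wiring is not already in place in your role, here is a minimal sketch (my assumption of the era's Azure SDK 1.x diagnostics API, not code from the original post):

        using System;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // start from the default configuration and turn on periodic log transfer
                DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
                config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
                config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

                // "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" is the conventional
                // setting name; substitute whatever connection string setting your role defines.
                DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

                return base.OnStart();
            }
        }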

    Read the article

  • Winners of Pete Brown's "Silverlight 5 In Action" Books

    - by Dave Campbell
    It's always a double-edged sword when I get to this point in a give-away... I want to give everyone something, but a deal is a deal :) It's also only through the benevolence of the folks at Manning Press that I can even do this, so thank you! The Winners Getting right to it, the winners are: Jaganadh G Stephen Owens Jan Hannemann Notice there are 3 names, not 2... I was told late last week to pick a 3rd name, so thanks again Manning! I've already received email from my contact, and they've been waiting for me to send them the email. You should be hearing from them shortly I think. For everyone else, keep your eyes on my blog... as I told Manning, I like giving away other people's stuff :) Have a great day, and if you're anywhere near Phoenix and interested in Silverlight, I'll see you tomorrow at the Scott Gu Event, and Stay in the 'Light!

    Read the article

  • VSDB to SSDT part 4 : Redistributable database deployment package with SqlPackage.exe

    - by Etienne Giust
    The goal here is to use SSDT SqlPackage to deploy the output of a Visual Studio 2012 Database project… a bit in the same fashion that was detailed here: http://geekswithblogs.net/80n/archive/2012/09/12/vsdb-to-ssdt-part-3--command-line-deployment-with-sqlpackage.exe.aspx  The difference is that we want to do it in an environment where Visual Studio 2012 and SSDT are not installed. This might be the case on your production server.

    Package structure
    To get started, create a folder named “DeploymentSSDTRedistributable” (the folder layout shown in the original post is not reproduced in this excerpt). The dacpac and dll files are the outputs of your Visual Studio 2012 Database project. If your database project references another database project, you need to put its dacpac and dll here too, otherwise deployment will not work. The publish.xml file is the publish configuration suitable for your target environment. It holds connection strings, SQLVARS parameters and deployment options. Review it carefully. The SqlDacRuntime folder (an arbitrarily chosen name) will hold the SqlPackage executable and supporting libraries.

    Contents of the SqlDacRuntime folder
    You will be able to find these files in the following locations, on a machine with Visual Studio 2012 Ultimate installed:

        C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin :
            SqlPackage.exe
            Microsoft.Data.Tools.Schema.Sql.dll
            Microsoft.Data.Tools.Utilities.dll
            Microsoft.SqlServer.Dac.dll
        C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.SqlServer.TransactSql.ScriptDom\v4.0_11.0.0.0__89845dcd8080cc91 :
            Microsoft.SqlServer.TransactSql.ScriptDom.dll

    Deploying
    Now take your DeploymentSSDTRedistributable deployment package to your remote machine. In a standard command window, place yourself inside the DeploymentSSDTRedistributable folder. You can first perform a check of what will be updated in the target database; the DeployReport action of SqlPackage.exe will help you do that. The following command will output an xml report of the changes:

        "SqlDacRuntime/SqlPackage.exe" /Action:DeployReport /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml /OutputPath:./ChangesToDeploy.xml

    You might get some warnings on the Log and Data files like I did; you can ignore them. Also, the tool warns about data loss when removing a column from a table. By default, the publish.xml options will prevent you from deploying when data loss would occur (see the BlockOnPossibleDataLoss option inside the publish.xml file). Before the actual deployment, take time to carefully review the changes to be applied in the ChangesToDeploy.xml file. When you are satisfied, you can deploy your changes with the following command:

        "SqlDacRuntime/SqlPackage.exe" /Action:Publish /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml

    Et voilà !  Your dacpac file has been deployed to your database. I’ve been testing this on a SQL 2008 server (not R2) but it should work on 2005, 2008 R2 and 2012 as well. Many thanks to Anuj Chaudhary for his article on the subject: http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html
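    For reference, a publish profile is just a small MSBuild-style XML file. The skeleton below is a sketch with placeholder values, not the author's actual Release.publish.xml, but the BlockOnPossibleDataLoss option mentioned above lives here:

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <TargetDatabaseName>OurDatabase</TargetDatabaseName>
            <TargetConnectionString>Data Source=TARGETSERVER;Integrated Security=True</TargetConnectionString>
            <!-- set to False only after reviewing and accepting the potential data loss -->
            <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
          </PropertyGroup>
          <ItemGroup>
            <!-- SQLCMD variables referenced by the database project -->
            <SqlCmdVariable Include="Environment">
              <Value>Release</Value>
            </SqlCmdVariable>
          </ItemGroup>
        </Project>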

    Read the article

  • Get to Know a Candidate (9 of 25): Gary Johnson&ndash;Libertarian Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Johnson served as the 29th Governor of New Mexico from 1995 to 2003, as a member of the Republican Party, and is known for his low-tax libertarian views and his strong emphasis on personal health and fitness. While a student at the University of New Mexico in 1974, Johnson sustained himself financially by working as a door-to-door handyman. In 1976 he founded Big J Enterprises, which grew from this one-person venture to become one of New Mexico's largest construction companies. He entered politics for the first time by running for Governor of New Mexico in 1994 on a fiscally conservative, low-tax, anti-crime platform. Johnson won the Republican Party of New Mexico's gubernatorial nomination, and defeated incumbent Democratic governor Bruce King by 50% to 40%. He cut the 10% annual growth in the budget: in part, due to his use of the gubernatorial veto 200 times during his first six months in office, which gained him the nickname "Governor Veto". Johnson sought re-election in 1998, winning by 55% to 45%. In his second term, he concentrated on the issue of school voucher reforms, as well as campaigning for marijuana decriminalization and opposition to the War on Drugs. During his tenure as governor, Johnson adhered to a stringent anti-tax and anti-bureaucracy policy driven by a cost–benefit analysis rationale, setting state and national records for his use of veto powers: more than the other 49 contemporary governors put together. Term-limited, Johnson could not run for re-election at the end of his second term. As a fitness enthusiast, Johnson has taken part in several Ironman Triathlons, and he climbed Mount Everest in May 2003. After leaving office, Johnson founded the non-profit Our America Initiative in 2009, a political advocacy committee seeking to promote policies such as free enterprise, foreign non-interventionism, limited government and privatization. The Libertarian Party is the third largest political party in the United States. It is also identified by many as the fastest growing political party in the United States. The political platform of the Libertarian Party reflects the ideas of libertarianism, favoring minimally regulated markets, a less powerful state, strong civil liberties (including support for Same-sex marriage and other LGBT rights), cannabis legalization and regulation, separation of church and state, open immigration, non-interventionism and neutrality in diplomatic relations (i.e., avoiding foreign military or economic entanglements with other nations), freedom of trade and travel to all foreign countries, and a more responsive and direct democracy. Members of the Libertarian Party have also supported the repeal of NAFTA, CAFTA, and similar trade agreements, as well as the United States' exit from the United Nations, WTO, and NATO. Although there is not an officially labeled political position of the party, it is considered by many to be more right-wing than the Democratic Party but more left-wing than the Republican Party when comparing the parties' positions to each other, placing it at or above the center. In the 30 states where voters can register by party, there are over 282,000 voters registered as Libertarians. Hundreds of Libertarian candidates have been elected or appointed to public office, and thousands have run for office under the Libertarian banner. 
    The Libertarian Party has many firsts to its credit, such as being the first party to get an electoral vote for a woman in a United States presidential election. Learn more about Gary Johnson and the Libertarian Party on Wikipedia.

    Read the article

  • Fibonacci numbers in F#

    - by BobPalmer
    As you may have gathered from some of my previous posts, I've been spending some quality time at Project Euler.  Normally I do my solutions in C#, but since I have also started learning F#, it only made sense to switch over to F# to get my math coding fix. This week's post is just a small snippet - specifically, a simple function to return a Fibonacci number given its place in the sequence.  One popular example uses recursion:

        let rec fib n =
            if n < 2 then 1
            else fib (n-2) + fib (n-1)

    While this is certainly elegant, the recursion is absolutely brutal on performance.  So I decided to spend a little time and find an option that achieved the same functionality, but used a tail-recursive function.  And since this is F#, I wanted to make sure I did it without the use of any mutable variables. Here's the solution I came up with:

        let rec fib n1 n2 c =
            if c = 1 then
                n2
            else
                fib n2 (n1+n2) (c-1);;

        let GetFib num =
            (fib 1 1 num);;

        printfn "%A" (GetFib 1000);;

    Essentially, this function works through the sequence moving forward, passing the two most recent numbers and a counter to the recursive calls until it has performed the desired number of iterations.  At that point, it returns the latest Fibonacci number. Enjoy!

    Read the article

  • KISS and Tell - MVVM and the ViewModelLocator

    - by Bobby Diaz
    A popular topic that comes up when talking about MVVM is the use of a ViewModelLocator and the many different ways one can be implemented.  Rather than getting into the pros and cons of when or why you should use it, I decided I would just post my version of a simple ViewModelLocator and let those who like it use it, and those who don't, well you know…  :) First, a disclaimer: I have not used this code in a production application; it is just something I was tossing around while reading others’ posts on the subject. 1. MainView.xaml  2. MainViewModel.cs  3. ViewModelLocator.cs (the three code listings are not reproduced in this excerpt). I have a codepaste of the ViewModelLocator.cs file if you are interested but don’t feel like re-typing the 50 lines of code! Enjoy! Additional Resources: Simple ViewModel Locator for MVVM: The Patients Have Left the Asylum - by John Papa; ViewModel binding with the Managed Extensibility Framework - by Jeremy Likness; MVVM Light Toolkit - by Laurent Bugnion
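    Since those listings are missing here, this is a hypothetical, bare-bones sketch of the ViewModelLocator idea being discussed - not Bobby's actual 50-line version (see the codepaste link for that):

        // assumed view model; the real MainViewModel.cs is in the original post
        public class MainViewModel
        {
            public string Title { get { return "Hello from the ViewModel"; } }
        }

        // exposed to XAML as a resource, e.g. <local:ViewModelLocator x:Key="Locator" />
        public class ViewModelLocator
        {
            private static MainViewModel _main;

            // MainView.xaml would bind with:
            // DataContext="{Binding Main, Source={StaticResource Locator}}"
            public MainViewModel Main
            {
                get { return _main ?? (_main = new MainViewModel()); }
            }
        }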

    Read the article

  • Executing a Batch file from remote Ax client on an AOS server

    - by Anisha
    Is it possible to execute a batch file on an AOS server from a remote AX client? The answer is yes, provided you have the necessary permissions for this execution on the server. First create a batch file on your AOS server - something like the following, which creates a directory on the server. Put a command like this in a .BAT file and place it anywhere on the server:

        Mkdir "c:\test"

    Copy the following code into a server static method of your class and call it from a button click on an AX form. Execute the button click from a remote AX client and see the result: it should run the batch file on the server and create a directory called 'test' in the root directory of the server.

        server static void AOS_batch_file_create()
        {
            boolean b;
            System.Diagnostics.Process process;
            System.Diagnostics.ProcessStartInfo processStartInfo;
            ;

            b = Global::isRunningOnServer();
            infolog.add(0, int2str(b));

            new InteropPermission(InteropKind::ClrInterop).assert();

            process = new System.Diagnostics.Process();
            processStartInfo = new System.Diagnostics.ProcessStartInfo();
            processStartInfo.set_FileName("C:\\create_dir.bat"); // batch file path on the AOS server
            process.set_StartInfo(processStartInfo);
            process.Start();
            //process.Refresh();
            //process.Close();
            //process.WaitForExit();

            info("Finished");
        }

    Read the article

  • TechEd 2012: Day 3 &ndash; Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server.  This has been a hot topic for our clients lately, so it really struck a chord. The speaker for the first session was from Boeing.  It was nice to hear how a company mixes both agile and waterfall project management.  The approaches that he presented were very pragmatic.  For their needs, reporting was the crucial part of their decision to use TFS.  This was interesting, since this is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience.  As with the other discussion points, he took a very level-headed stance.  The approach he was prescribing was to eat the elephant a bite at a time instead of all at once.  If you try to convert your entire shop at once, the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS.  If you don’t implement a tool and processes around it that comply with the standards bodies that govern your business, you are in for a world of hurt. Ultimately, the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed, and it reduced licensing costs compared with all the different tools they would otherwise need to buy to complete the same tasks.  They got to this point by doing an industry evaluation.  Although TFS came out on top, he said that it still has a big gap in the Java area.  Of course, in this market there are vendors helping to close that gap. The second session was on how continuous feedback in agile is a new focus in VS2012.  The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun.  I will just say that I am looking forward to digging into the product after seeing this presentation.  Beyond that, I will simply list some of the key features that caught my attention:
    - Ability to link documents into tasks as artifacts
    - Web access portal
    - PowerPoint storyboards
    - Exploratory testing
    - Request feedback (allows users to record notes, screen shots and video/audio)
    See you after the second half. del.icio.us Tags: TechEd,TechEd 2012,TFS,Team Foundation Server

    Read the article

  • Data and Secularism

    - by kaleidoscope
    Ever since we’ve been using data we’ve been religious - religious about the way we represent it and equally religious about the way we access it. Be it plain old SQL, DAO, ADO or ADO.NET, and I am just referring to religions in the MSFT world; a peek outside and I’d need a separate book to list out the data faiths. Various application areas in networked computing are converging under the HTTP umbrella, with a plausible transition to purist HTTP and in turn REST, fuelled by the Web 2.0 storm. It was time the data access faiths also gave up the religious silos wrapped around our long-worshipped data publishing and access methods. OData is the secular solution we have at hand today. It is an open protocol for sharing data. It can be exposed via REST. It is Open as in the Microsoft Open Specification Promise. This allows virtually everyone to build Data Services for any runtime. OData is one of the key standards for data publishing/subscribing on Microsoft Codename Dallas. For us .Netters, OData data sources can be exposed/consumed via WCF Data Services, and the process is very simple, elegant and intuitive (see the sketch below).
    Applications exposing OData Services:
    - SharePoint 2010
    - IBM WebSphere
    - Microsoft SQL Azure
    - Windows Azure Table Storage
    - SQL Server Reporting Services
    Live OData Services:
    - Netflix
    - Open Science Data Initiative
    - Open Government Data Initiatives
    - Northwind database exposed as OData Service
    - and many others
    Some may prefer to call it commoditization of data, unification of data access strategies or any other sweet name. I for one will stick to my secular definition. :) Technorati Tags: Sarang,OData,MOSP
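    To make the "simple, elegant and intuitive" claim concrete, here is a minimal consumption sketch against the public Northwind OData feed listed above, using the WCF Data Services client library. The trimmed-down Customer shape is an assumption for illustration; normally you would use Add Service Reference to generate the full types.

        using System;
        using System.Data.Services.Client;

        public class Customer
        {
            public string CustomerID { get; set; }
            public string CompanyName { get; set; }
        }

        public class ODataDemo
        {
            public static void Main()
            {
                // point the context at the public Northwind OData sample feed
                var context = new DataServiceContext(
                    new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

                // query the Customers entity set and enumerate the results
                foreach (var c in context.Execute<Customer>(new Uri("Customers", UriKind.Relative)))
                {
                    Console.WriteLine("{0}: {1}", c.CustomerID, c.CompanyName);
                }
            }
        }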

    Read the article

  • Silverlight Cream for June 16, 2010 -- #884

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Emiel Jongerius, Charles Petzold, Adam Kinney, Deepesh Mohnani, Timmy Kokke, and Damon Payne. Shoutouts: Andy Beaulieu reported his Coding4Fun: Shuffleboard Game for WP7 has been posted -- Big ol' Tutorial and 6 videos of WP7 goodness Karl Shifflett announced Three New WPF and Silverlight Designer Videos Posted Charles Petzold has a cool Flip-Number Clock in Silverlight posted... cool demo, and the source. From SilverlightCream.com: Data Driven Applications with MVVM Part II: Messaging, Unit Testing, and Live Data Sources Zoltan Arvai has part 2 of his Data-Driven Apps with MVVM up, and this one is also including Messaging, Unit Testing, and Live WCF Data... good tutorial and all the code. Silverlight DataContext Changed Event and Trigger Emiel Jongerius takes a hard swing at the lack of DataContextChanged... his solution involves two attached properties instead of one... check it out and see what you think! Orientation Strategies for Windows Phone 7 Charles Petzold is discussing WP7 Orientation... showing the problems you can get involved in, and how to work through them... and you might be surprised at how he does it :) ... pretty cool as usual, Charles! Debugging the TranslateZoomRotate WPF Behavior in Blend Adam Kinney talks through a bug reported about the WPF TranslateZoomRotate Behavior. Again, it's WPF, but it's in Blend, and ya never know when the solution might apply. I want my app to look like the Zune client Deepesh Mohnani demonstrates using the Cosmopolitan theme to get his app to have the same look as the Zune client. MVVM Project and Item Templates Timmy Kokke is continuing with his cool SilverAmp media player, using it to expand upon the new Blend and Silverlight 4 features. This episode touches very lightly on cranking up a new MVVM project in Blend. Great Features for MVVM Friendly Objects Part 0: Favor Composition Over Inheritance Damon Payne has the first part up of a series he's working on with 'MVVM Friendly' features... he's building out a lot of the infrastructure in this post for the ones that follow... all good stuff. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Calculating Screen Resolutions Using WPF

    - by Jeff Ferguson
    WPF measures all elements in device independent pixels (DIPs). These DIPs equate to device pixels if the current display monitor is set to the default of 96 DPI. However, for monitors set to a DPI setting different from 96 DPI, WPF DIPs will not correspond directly to monitor pixels. Consider, for example, the WPF properties SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth. If your monitor resolution is set to 1024 pixels wide by 768 pixels high, and your monitor is set to 96 DPI, then WPF will report the value of SystemParameters.PrimaryScreenHeight as 768 and the value of SystemParameters.PrimaryScreenWidth as 1024. No problem. This aligns nicely because the WPF device independent pixel value (96) matches your monitor's DPI setting (96). However, if your monitor is not set to display pixels at 96 DPI, then SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth will not return what you expect. The values returned by these properties may be greater than or less than what you expect, depending on whether your monitor's DPI value is less than or greater than 96. Since the SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth properties are WPF properties, their values are measured in WPF DIPs rather than taking monitor DPI into account. Once again: WPF measures all elements in device independent pixels (DIPs). To combat this issue, you must take your monitor's DPI setting into account if you're looking for the monitor's actual pixel width and height. The handy code block below will help you calculate these values regardless of the DPI setting on your monitor:

        Window MainWindow = Application.Current.MainWindow;
        PresentationSource MainWindowPresentationSource = PresentationSource.FromVisual(MainWindow);
        Matrix m = MainWindowPresentationSource.CompositionTarget.TransformToDevice;
        double DpiWidthFactor = m.M11;
        double DpiHeightFactor = m.M22;
        double ScreenHeight = SystemParameters.PrimaryScreenHeight * DpiHeightFactor;
        double ScreenWidth = SystemParameters.PrimaryScreenWidth * DpiWidthFactor;

    The values of ScreenHeight and ScreenWidth should, after this code is executed, match the resolution that you see in the display's Properties window.

    Read the article

  • OutOfMemoryException in Microsoft WSE 3.0 Diagnostics.TraceInputFilter

    - by Michael Freidgeim
    We are still using Microsoft WSE 3.0, and on a test server we started to get:

        Event Type:   Error
        Event Source: Microsoft WSE 3.0
        WSE054: An error occurred during the operation of the TraceInputFilter:
        System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.String.GetStringForStringBuilder(String value, Int32 startIndex, Int32 length, Int32 capacity)
           at System.Text.StringBuilder.GetThreadSafeString(IntPtr& tid)
           at System.Text.StringBuilder.set_Length(Int32 value)
           at System.Xml.BufferBuilder.Clear()
           at System.Xml.BufferBuilder.set_Length(Int32 value)
           at System.Xml.XmlTextReaderImpl.ParseText()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlLoader.LoadNode(Boolean skipOverWhitespace)
           at System.Xml.XmlLoader.LoadDocSequence(XmlDocument parentDoc)
           at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace)
           at System.Xml.XmlDocument.Load(XmlReader reader)
           at System.Xml.XmlDocument.Load(Stream inStream)
           at Microsoft.Web.Services3.Diagnostics.TraceInputFilter.OpenLoadExistingFile(String path)
           at Microsoft.Web.Services3.Diagnostics.TraceInputFilter.Load(String path)
           at Microsoft.Web.Services3.Diagnostics.TraceInputFilter.TraceMessage(String messageId, Collection`1 traceEntries)

    After investigation it was found that the problem was related to the trace files, which had become too big. When they were deleted and new files were created, the error went away.
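    For context, this is a sketch of where that tracing is switched on in web.config/app.config (the standard WSE 3.0 diagnostics section; the file names shown are the usual defaults). Turning it off in production, or regularly pruning the .webinfo files, avoids the giant trace files that triggered the exception above:

        <microsoft.web.services3>
          <diagnostics>
            <!-- set enabled="false" in production, or keep the trace files small -->
            <trace enabled="true"
                   input="InputTrace.webinfo"
                   output="OutputTrace.webinfo" />
          </diagnostics>
        </microsoft.web.services3>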

    Read the article

  • C#: Why Decorate When You Can Intercept

    - by James Michael Hare
    We've all heard of the old Decorator Design Pattern (here) or used it at one time or another, either directly or indirectly.  A decorator is a class that wraps a given abstract class or interface and presents the same (or a superset) public interface, but "decorated" with additional functionality.  As a really simplistic example, consider System.IO.BufferedStream: it is itself a descendant of System.IO.Stream and wraps the given stream with buffering logic while still presenting System.IO.Stream's public interface:

        Stream buffStream = new BufferedStream(rawStream);

    Now, let's take a look at a custom-code example.  Let's say that we have a class in our data access layer that retrieves a list of products from a database:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;
                    con.Open();

                    // create the command
                    using (var cmd = _factory.CreateCommand())
                    {
                        cmd.Connection = con;
                        cmd.CommandText = _getAllProductsStoredProc;
                        cmd.CommandType = CommandType.StoredProcedure;

                        // get a reader and pass back all results
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                results.Add(new Product
                                {
                                    Name = reader["product_name"].ToString(),
                                    ...
                                });
                            }
                        }
                    }
                }

                return results;
            }
        }

    Yes, you could use EF or any of a myriad of other choices for this sort of thing, but the germane point is that you have some operation that takes a non-trivial amount of time.  What if, during the production day, I notice that my application is performing slowly and I want to see how much of that slowness is in the query versus my code?  Well, I could easily wrap the logic block in a System.Diagnostics.Stopwatch and log the results to log4net or another logging flavor of choice:

        // a class that handles our CRUD operations for products
        public class ProductDao
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(ProductDao));
            ...

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                var results = new List<Product>();
                var timer = Stopwatch.StartNew();

                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;

                    // and all that other DB code...
                    ...
                }

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    In my eye, this is very ugly.  It violates the Single Responsibility Principle (SRP), which says that a class should only ever have one responsibility, where responsibility is often defined as a reason to change.
    This class (and in particular this method) has two reasons to change:
    - If the method of retrieving products changes.
    - If the method of logging changes.
    Well, we could "simplify" this using the Decorator Design Pattern (here).  If we followed the pattern to the letter, we'd need to create a base decorator that implements the DAO's public interface and forwards to the wrapped instance.  So let's assume we break out the ProductDao interface into IProductDao using your refactoring tool of choice (Resharper is great for this). Now, ProductDao will implement IProductDao and get rid of all logging logic:

        public class ProductDao : IProductDao
        {
            // this reverts back to the original version except for the interface added
        }

    And we create the base decorator that also implements the interface and forwards all calls:

        public class ProductDaoDecorator : IProductDao
        {
            private readonly IProductDao _wrappedDao;

            // constructor takes the dao to wrap
            public ProductDaoDecorator(IProductDao wrappedDao)
            {
                _wrappedDao = wrappedDao;
            }

            ...

            // and then all methods just forward their calls
            public IEnumerable<Product> GetAvailableProducts()
            {
                return _wrappedDao.GetAvailableProducts();
            }
        }

    This defines our base decorator; then we can create decorators that add items of interest, and for any methods we don't decorate, we'll get the default behavior which just forwards the call to the wrapped instance in the base decorator:

        public class TimedThresholdProductDaoDecorator : ProductDaoDecorator
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimedThresholdProductDaoDecorator));

            public TimedThresholdProductDaoDecorator(IProductDao wrappedDao) :
                base(wrappedDao)
            {
            }

            ...

            public IEnumerable<Product> GetAvailableProducts()
            {
                var timer = Stopwatch.StartNew();

                var results = base.GetAvailableProducts();

                timer.Stop();

                if (timer.ElapsedMilliseconds > 5000)
                {
                    _log.WarnFormat("Long query in GetAvailableProducts() took {0} ms",
                        timer.ElapsedMilliseconds);
                }

                return results;
            }
        }

    Well, it's a bit better.  Now the logging is in its own class, and the database logic is in its own class.  But we've essentially multiplied the number of classes.  We now have 3 classes and one interface!  Now if you want to do that same logging decoration on all your DAOs, imagine the code bloat!  Sure, you can simplify and avoid creating the base decorator, or chuck it all and just inherit directly.  But regardless, all of these have the problem of tying the logging logic into the code itself. Enter the Interceptors.  Things like this are, to me, a perfect example of when it's good to write an Interceptor using your class library of choice.  Sure, you could design your own perfectly generic decorator with delegates and all that, but personally I'm a big fan of Castle's DynamicProxy (here), which is actually used by many projects including Moq.
    What DynamicProxy allows you to do is intercept calls into any object by wrapping it with a proxy on the fly that intercepts the methods and allows you to add functionality.  Essentially, the code would now look like this using DynamicProxy:

        // Note: I like hiding DynamicProxy behind the scenes so users
        // don't have to explicitly add a reference to Castle's libraries.
        public static class TimeThresholdInterceptor
        {
            // Our logging handle
            private static readonly ILog _log = LogManager.GetLogger(typeof(TimeThresholdInterceptor));

            // Handle to Castle's proxy generator
            private static readonly ProxyGenerator _generator = new ProxyGenerator();

            // generic form for those who prefer it
            public static object Create<TInterface>(object target, TimeSpan threshold)
            {
                return Create(typeof(TInterface), target, threshold);
            }

            // Form that uses type instead
            public static object Create(Type interfaceType, object target, TimeSpan threshold)
            {
                return _generator.CreateInterfaceProxyWithTarget(interfaceType, target,
                    new TimedThreshold(threshold));
            }

            // The interceptor that is created to intercept the interface calls.
            // Hidden as a private inner class so as not to expose Castle libraries.
            private class TimedThreshold : IInterceptor
            {
                // The threshold as a positive timespan that triggers a log message.
                private readonly TimeSpan _threshold;

                // interceptor constructor
                public TimedThreshold(TimeSpan threshold)
                {
                    _threshold = threshold;
                }

                // Intercept functor for each method invocation
                public void Intercept(IInvocation invocation)
                {
                    // time the method invocation
                    var timer = Stopwatch.StartNew();

                    // the Castle magic that tells the method to go ahead
                    invocation.Proceed();

                    timer.Stop();

                    // check if threshold is exceeded
                    if (timer.Elapsed > _threshold)
                    {
                        _log.WarnFormat("Long execution in {0} took {1} ms",
                            invocation.Method.Name,
                            timer.ElapsedMilliseconds);
                    }
                }
            }
        }

    Yes, it's a bit longer, but notice that:
    - This class ONLY deals with logging long method calls, no DAO interface leftovers.
    - This class can be used to time ANY class that has an interface or virtual methods.
    Personally, I like to wrap and hide the usage of DynamicProxy and IInterceptor so that anyone who uses this class doesn't need to know to add a Castle library reference.  As far as they are concerned, they're using my interceptor.  If I change to a new library when a better one comes along, they're insulated. Now, all we have to do to use this is to tell it to wrap our ProductDao and it does the rest:

        // wraps a new ProductDao with a timing interceptor with a threshold of 5 seconds
        IProductDao dao = (IProductDao)TimeThresholdInterceptor.Create<IProductDao>(new ProductDao(), TimeSpan.FromSeconds(5));

    Automatic decoration of all methods!  You can even refine the proxy so that it only intercepts certain methods. This is ideal for so many things.  These are just some of the interceptors we've dreamed up and use:
    - Log parameters and returns of methods to XML for auditing.
    - Block invocations to methods and return a default value (stubbing).
    - Throw an exception if certain methods are called (good for blocking access to deprecated methods).
    - Log entrance and exit of a method and the duration.
    - Log a message if a method takes more than a given time threshold to execute.
    Whether you use DynamicProxy or some other technology, I hope you see the benefits this adds.
    Does it completely eliminate all need for the Decorator pattern?  No, there may still be cases where you want to decorate a particular class with functionality that doesn't apply to the world at large. But for all those cases where you are using Decorator to add functionality that's truly generic, I strongly suggest you give this a try!

    Read the article

  • Mini Book Review of IronRuby Unleashed by Shay Friedman

    - by Eric Nelson
    When I get some time (and hell starts to look a little chilly) I would love to do a more detailed review. But I wanted to get something “out there” as I really like this book and reviews of it seem a little thin on the ground. In brief:
    - Is it a good book? Yes
    - Would I recommend this book to a .NET developer who was new to Ruby? Yes (this is me, by the way)
    - Would I recommend this book to a Ruby developer who was new to .NET? Yes
    - Would I recommend this book to a developer who sometimes does Ruby and sometimes does .NET? Yes
    - Would I recommend this book to a developer new to .NET and new to Ruby? Yes
    The above demonstrates how well balanced this book is (IMHO). What I like about it: It assumes pretty much no knowledge of IronRuby or .NET. All it asks is that you are a developer interested in IronRuby. Yet it manages to cover off the topics in a good degree of detail. If you are a Ruby developer you skip Part 2; if you are a .NET developer you skip some of Part 1 and whizz through the short intros to the individual technologies such as WPF. It is definitely not a “let’s make the manual look pretty” book – this is original content thoughtfully written and presented. It is pretty comprehensive – in 500 pages it packs in:
    - Intro to IronRuby
    - Intro to .NET
    - Intro to Ruby
    - Using IronRuby with Windows Forms, ASP.NET, WPF, Silverlight etc.
    - Getting Rails working with IronRuby
    - Unit testing with IronRuby – which I think is an excellent way for a .NET developer to start using IronRuby
    - Embedding IronRuby in a .NET app – another interesting “first step” for a .NET developer
    What I didn’t like: Err… nothing yet. OK, if I am being picky then the start of chapter 2 irked me a little as it went through the history of .NET. “The first version [of the .NET Framework] wasn’t that great.”  Felt pretty good to me compared to Java and C++ development at the time :-) Buy on Amazon UK | Buy on Amazon USA Related Links: Posts from the author Shay Friedman on IronRuby; Guest Post: What's IronRuby, and how do I put it on Rails?; Guest Post: Using IronRuby and .NET to produce the ‘Hello World of WPF’; Getting PhP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • Whats new in My Life:Robotics,Azure

    - by sonam
    AZURE: I haven’t blogged in a long time. I was actually busy doing some Azure work. For anyone starting with Azure, I would recommend going with Neil: http://nmackenzie.spaces.live.com/Blog/cns!B863FF075995D18A!564.entry Awesome content.   Another thing that has caught my interest: robotics. Yes, I am finally reading up on robotics, especially mobile robotics. Since I don’t have any prof to guide me yet, I am doing it independently by reading research papers and books. My first robot is not autonomous, but I am actually making it for RoboWars. I got inspired by this video of Steve Jobs, and I think I love working on robotics. Perhaps that's my love. http://www.youtube.com/watch?v=Hd_ptbiPoXM Cya

    Read the article

  • Silverlight Cream for April 23, 2010 -- #845

    - by Dave Campbell
    In this Issue: Jason Allor, Bill Reiss, Mike Snow, Tim Heuer, John Papa, Jeremy Likness, and Dave Campbell. Shoutouts: You saw it at MIX10 and DevConnections... now you can give it a dance, John Papa announced eBay Simple Lister Beta Now Available Mike Snow posted some info about and a link to his new Flickr/Bing/Google High End Image Viewer and he's looking for feedback From SilverlightCream.com: Hierarchical Data Trees With A Custom DataSource Jason Allor is rounding out a series here in his new blog (bookmark it), and he's created his own custom HierarchicalDataSource class for use with the TreeView. Space Rocks game step 11: Start level logic Bill Reiss has Episode 11 up in his Space Rocks game ... working on NewGame and start level logic Silverlight Tip of the Day #3 – Mouse Right Clicks Mike Snow has Tip 3 up ... about handling right-mouse clicks in Silverlight 4 -- oh yeah, we got right mouse now ... grab Mike's project to check it out. Silverlight 4 enables Authorization header modification Tim Heuer talks about the ability to modify the Authorization header in network calls with Silverlight 4. He gives not only the quick-and-dirty of how to use it, but has some good examples, code, and code results for show and tell. WCF RIA Services - Hands On Lab John Papa built a bookstore app in roughly 10 minutes in the keynote at DevConnections. He now has a tutorial on doing just that plus all the code up. Transactions with MVVM Not strictly Silverlight (or WPF), but Jeremy Likness has an interesting article up on MVVM and transaction processing. Read the post then grab his helper class. Your First Windows Phone 7 Application As with the First Silverlight App a couple weeks ago, if you've got any WP7 experience at all, just keep going... this is for folks that have not looked at it yet, have not downloaded anything... oh, and it's by Dave Campbell Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Personal | First Stop on our trip, St. Louis

    - by Jeff Julian
    St. Louis is definitely a cool city. I have always looked at it as Kansas City’s big brother. I love the Arch, wonder what it would be like to have pro hockey, really like the downtown area, and have some great friends who live there. The reason we left for St. Louis on Thursday evening was to get a head start on our journey. Since we were doing a Diners, Drive-ins, and Dives tour, it made sense to have the journey start there. We picked the Hyatt Downtown as our hotel because they had an Arch Package which was supposed to get you tickets to the Arch so you didn’t need to arrive early and wait in line. That ended up not working because the Arch had been selling out every day and they were no longer accepting the hotel's tickets. No biggie, and the hotel did try very hard to get us tickets, but we just took our chances in the line and waited. We walked over to the park and had to wait about 20 minutes for the doors to open. We had tickets after another 20 minutes of waiting in line, and at that point walked right up and were able to get to the elevators. I want to stop here to have a little aside. I don’t know who started the rumor that the Arch ride is scary, but it is not. You do sit in a small pod, but it is like the ascent on a roller coaster to the top of the first drop, combined with an elevator with no windows. Nothing to be afraid of here if you aren’t claustrophobic. If you are afraid of small spaces, stay clear of this ride. Once you get to the top, you walk up 10 to 30 stairs depending on which car you were in (the lower the number, the fewer stairs you climb) and you are then at the top in a decent-sized room where you look out the windows. Beautiful view of the city. I don’t typically like heights, but this felt like being inside a building and not hanging out on a roof. Here is the view from the Arch (photo not reproduced in this excerpt). Related Tags: Diners, Drive-ins, and Dives, St. Louis, Vacation

    Read the article

  • Speaking at Dog Food Conference 2013

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/10/22/speaking-at-dog-food-conference-2013.aspx    It has been a couple years since I last attended / spoke at Dog Food Conference, but on Nov 21-22, 2013 I’ll be speaking at Dog Food Conference 2013 here in Columbus, OH.  For those of you confused by the name of the conference (no it’s not about dog food), read up on the concept of dogfooding .  This conference has a history of great sessions from local and regional speakers and I look forward to being a part of it once again.  Registration is now open (registration link) and is expected to sell out quickly.  Reserve your spot today.   Title: The Evolution of Social in SharePoint Audience and Level: IT Pro / Architect, Intermediate Abstract: Activities, newsfeed, community sites, following... these are just some of the big changes introduced to the social experience in SharePoint 2013. This class will discuss the evolution of the social components since SharePoint 2010, the architecture (distributed cache, microfeed, etc.) that supports the social experience, Yammer integration, and proper planning considerations when deploying social capabilities (personal sites, SkyDrive Pro and distributed cache). This session will include demos of the social newsfeed, community sites, and mentions. Attendees should have an intermediate knowledge of SharePoint 2010 or 2013 administration.         -Frog Out

    Read the article

  • Silverlight Cream for April 12, 2010 -- #837

    - by Dave Campbell
    In this Issue: Michael Washington, Joe McBride, Kirupa, Maurice de Beijer, Brad Abrams, Phil Middlemiss, and CorrinaB. Shoutout: Charlie Kindel has a post up about the incompatibility between VS2010RTM and what we currently have for WP7: Visual Studio 2010 RTM and the Windows Phone Developer Tools CTP and if you want to be notified when that changes, submit your email here. Erik Mork and Co. have their latest This Week in Silverlight 4.9.2010 posted. From SilverlightCream.com: Simplified MVVM: Silverlight Video Player Michael Washington created a 'designable' video player using MVVM that allows any set of controls to implement the player. Great tutorial and all the code. Windows Phone 7 Panorama Behaviors Joe McBride posted a link to a couple WP7 gesture behaviors and a link out to some more by smartyP. Event Bubbling and Tunneling Kirupa has a great article up on Event Bubbling and Tunneling... showing the route that events take through your WPF or Silverlight app. Using dynamic objects in Silverlight 4 Maurice de Beijer has a blog up about binding to indexed properties in Silverlight 4... in other words, you don't have to know what you're binging to at design time. Silverlight 4 + RIA Services - Ready for Business: Ajax Endpoint Brad Abrams is still continuing his RIA series. His latest is on exposing your RIA Services in JSON. Changing Data-Templates at run-time from the VM Looks like I missed Phil Middlemiss' latest post on Changing DataTemplates at run-time. He has a visual of why you might need this right up-front, and is a very common issue. Check out the solution he provides us. Windows System Color Theme for Silverlight - Part Three CorrinaB blogged screenshots and discussion of 3 new themes that are going to be coming up, and what they've done to the controls in general. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • What is PowerPivot?

    - by Enrique Lima
    Let’s start with this: it is a great way to visualize data and transform it into information.  It is intended to be a self-service business intelligence tool, provided as an add-in for a tool we already know … Microsoft Excel ... 2010. If you have been wondering where to go, what to do, and how to do it, I am including links that will help you on your way to a better understanding of the topic and tool (links to TechNet documentation):
    - Introduction to PowerPivot
    - Walkthrough: Create your first PowerPivot Workbook
    - Get to know the UI
    - PowerPivot for Excel Central
    - Get some samples

    Read the article

  • TechEd 2012 - day 3

    - by Stefan Barrett
    The content has become more useful for me as a developer, and I've now seen two things which I think will make a big difference:
    - Fakes in VS2012 - allows me to stub or fake out libraries, making unit testing easier/possible.
    - C++ AMP & auto - auto might get me to start using C++ again (it makes code like for-each loops much nicer/easier to write), while AMP is something I want to play with (it moves the processing onto the GPU).
    The food got a little better, while there was less sign of the snacks.

    Read the article

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way to prepare them which works really well for me. Most importantly, it would make it quite easy to overcome an emergency (for example if my laptop would suddenly lose data). The whole code as well as the slides and other documents are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of changes I made to my demos. Finally I have a system of code snippets which works great, and I often had very positive remarks from the audience regarding that. Putting everything in the cloud The one thing I used to be the most scared of was a sudden crash of my laptop, and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks…) in advance, but let's face it, we all have last minute changes to our talks and I always come in the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked not bad. However last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I for instance update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to backup everything on a memory stick… but I have to admit, I have been trusting my luck and working off my laptop HD and then synching everything to the cloud after landing. Of course on some US national flights you get WiFi on board, so in that case it is even simpler :) Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze). Outlining the presentations I like to have a written outline of my presentations written somewhere. I keep it simple, just write the various sections of the presentation with timing. I guess it is a remnant of the time when I was a private pilot, and using checklists for flight preparation. For example: Demo about designability 15' (0:37) Switch to Blend Open MainPage.xaml Create a DataTemplate ... Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable, I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks. Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps to feel more confident during the presentations. This is especially important for presentations that are long, like my MIX11 demo which clocked at 57 minutes (I had a lot of stuff to show…). Such presentations are risky, because if anything goes wrong, you will have to cut stuff, so the answer to that is: Rehearse, rehearse and when you're done rehearsing, rehearse a little more. I also have a "Preparation" section where I outline what I need to do before a presentation. For instance: Preparation Reboot in VHD Make sure MSN and Twitter are not running. Open VS10 and load demo Open Blend and load demo Run the WP7 emulator ... 
I typically start preparing my laptop an hour before the talk, starting everything I need to start and then putting my laptop to sleep. Saving and printing the outline, Timing Printing is a real problem because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synched to the cloud, now you start to know what I like ;) and then I print it to a PDF (I use CutePDFWriter) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how I am timing). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. In some conferences, they even have timers that the presenters can see, but they tend to count down and I prefer to count up… so I just use my own :) Source control for demos For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) to work offline too, so I can commit while on a plane, and all the history is saved. Then when I have connectivity I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick… though I have to admit I didn't do that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid…) And code snippets? I am one of these presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage, it really teaches you to touch type and be fast at it) but in the context of an audience, on a stage where it is often damn cold (an issue I had a lot in past conferences, air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know for you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than others ;) To solve that, I like to have my code ready in snippets, and drag them to the screen. Then I can spend time explaining each code snippet, while highlighting portions of the code (always highlight what you talk about, the audience often doesn't even see the cursor and doesn't know where you are on the screen!) Over the years I have used various solutions for code snippets, and now I have one which works really well… if you take a few precautions! I use the Visual Studio Toolbox. Preparing the code snippets You can store code snippets in the Toolbox for anything, XAML, C# etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make it easier to find them, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so any way to make it easier for you is going to be beneficial to the audience. To store a code snippet, I do the following: Open the final demo that you want to show to the audience in Visual Studio. In your code, select a snippet of code that you want to explain in particular. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox or Ctrl-Alt-X). Drag the selected snippet from the code window to the toolbox. 
(if needed) drag the snippet to the correct location (for example between two other code snippets so that you can access it as you speak through the demo). Right click on the snippet and select Rename Item from the context menu. Select a meaningful name. For me I use the following conventions: If it is a method, I use the method's name. If it is not a whole method, I use a descriptive name. If it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert that into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface’s implementation; then I only need to insert my snippet inside the generated method body. Saving the snippets This is the most important!! It happened to me a few times that VS10 lost its settings. When that happens, the snippets are lost too! Yeah that really sucks, especially (as it happened once) when this is the case about an hour before a talk… Stress and sweat follows, not good conditions to start a talk in front of an audience believe me. Thankfully, saving snippets is really easy with the following steps: Select the menu Tools, Import and Export Settings. Select Export selected environment settings and press Next. Uncheck All Settings. Then expand General Settings and select Toolbox (only!). Press Next. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings). Commit to source control and push to the cloud. By the way, this also has the advantage of applying source control to the snippets file (which is an XML file), so you get history for free on that file! Reimporting the snippets If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast. Make sure that the Toolbox is empty. When you import snippets, they are merged with existing ones, they do not replace the content of the Toolbox. Unless merging is really what you want, make sure that your Toolbox is clean before you import, it is really easier. Select the menu Tools, Import and Export Settings. Select Import selected environment settings and press Next. Select No, just import new settings and press Next. Press Browse and select the Snippets.vssettings file. Press Finish. Et voila, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine). What about special tools? When using special tools (for example beta versions of tools you have an early access to), or a special configuration of your laptop, things can get tricky because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I use the following precautions: I make my demos from a Virtual Hard Disk. The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 to it. This gives you the full power of your laptop (as fast as booting from the metal). For me, I have a basic configuration that I saved on a USB harddrive (Win7 plus drivers, basic settings for desktop, folder options, taskbar etc) and Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop. 
I install additional tools and configurations. I save the VHD back to the USB harddrive in a different folder. This would allow me to reinstall my demo environment quite fast, for example in case of harddrive failure. Replace the harddrive, copy the VHD to it, configure the BCD and you can start. Unfortunately this only works if the laptop itself still works. In the worst case of total failure, my security is to back all the installers up: The installers I use are synched on all my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installer from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well… I don't have an answer to that, except of course crossing my fingers. Still, all that gives me additional security. Conclusion Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for you, and to create an environment where everything is saved and easy to restore, where everything is well known and provides you with additional confidence. The cooler you feel before the presentation (and during ;)), the better your presentation will be. Here too, the goal is to provide the best user experience you can have, which in turn will make it more enjoyable for your audience! Happy presenting :) Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • More goodness: Scrum for Team System Migration Tool from EMC

    - by Enrique Lima
    I have always liked how SfTS works and the functionality it offers.  In an earlier assessment I wrote of their toolset, the only issue I had encountered was migrating environments that had been created and worked on for a while in the TFS 2008 world using SfTS 2.0 over to SfTS 3.0 on TFS 2010.  There were several elements (Work Items and such) that were not moving correctly. Anyway, that is now in the past! Congratulations to Crispin Parker and team for the release of the Migration Tool. You can see the release information and download link here … http://www.scrumforteamsystem.com/main-news/sfts-v3-migration-tool-goes-gold

    Read the article
