Search Results

Search found 17192 results on 688 pages for 'geeks with blogs'.


  • Free E-book on NHibernate from Syncfusion

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/07/free-e-book-on-nhibernate-from-syncfusion.aspx
    Syncfusion is providing a free e-book on NHibernate at http://www.syncfusion.com/resources/techportal/ebooks/nhibernate?utm_medium=edm: “Master the intricacies of NHibernate, an established and powerful Object/Relational Mapper (ORM), in NHibernate Succinctly. Let author Ricardo Peres guide you toward a fuller understanding of one of the oldest and most flexible ORMs available.”
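
    If you haven't tried NHibernate, the basic session workflow the book walks through looks roughly like this minimal sketch (Product is a hypothetical mapped entity, and connection/mapping details are assumed to live in hibernate.cfg.xml):

        // Namespaces: NHibernate, NHibernate.Cfg
        public static class NHibernateQuickstart
        {
            public static void Demo()
            {
                // Build the session factory once at startup (it is expensive to create),
                // then open lightweight sessions per unit of work.
                ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();

                using (ISession session = factory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    // Load an entity by primary key; NHibernate generates the SQL.
                    var product = session.Get<Product>(42);
                    product.Name = "Renamed product";
                    tx.Commit(); // dirty-checking flushes the UPDATE at commit
                }
            }
        }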

    Read the article

  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things:

    - The IDataReader is not passed out of the Data Access Layer, which prevents abstraction-leak and resource-leak potentials.
    - You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward-iterate once.
    - If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items.

    This could give us an example like:

        // a sample data access object class to do standard CRUD operations.
        public class ProductDao
        {
            private DbProviderFactory _factory = SqlClientFactory.Instance;

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;
                    con.Open();

                    // create the command
                    using (var cmd = _factory.CreateCommand())
                    {
                        cmd.Connection = con;
                        cmd.CommandText = _getAllProductsStoredProc;
                        cmd.CommandType = CommandType.StoredProcedure;

                        // get a reader and pass back all results
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                yield return new Product
                                {
                                    Name = reader["product_name"].ToString(),
                                    // ... remaining columns elided ...
                                };
                            }
                        }
                    }
                }
            }
        }

    The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider-specific counterparts directly (SqlCommand, OracleCommand, etc.). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library, which simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL because I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to let them mock their database for unit testing by providing a MockCommand, MockConnection, etc. that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested. Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force callers to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e., before the iterator reports there are no more items.
    So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived:

        // A disposable sequence of int -- yes, this is completely contrived...
        internal class Sequencer : IDisposable
        {
            private int _i = 0;
            private readonly object _mutex = new object();

            // Constructs an int sequence.
            public Sequencer(int start)
            {
                _i = start;
            }

            // Gets the next integer
            public int GetNext()
            {
                lock (_mutex)
                {
                    return _i++;
                }
            }

            // Dispose the sequence of integers.
            public void Dispose()
            {
                // force output immediately (flush the buffer)
                Console.WriteLine("Disposed with last sequence number of {0}!", _i);
                Console.Out.Flush();
            }
        }

    And then I created a generator (infinite-loop iterator) that did the using block for auto-disposal:

        // simply defines an extension method off of an int to start a sequence
        public static class SequencerExtensions
        {
            // generates an infinite sequence starting at the specified number
            public static IEnumerable<int> GetSequence(this int starter)
            {
                // note the using here; it will call Dispose() when the block terminates.
                using (var seq = new Sequencer(starter))
                {
                    // infinite loop on this generator means it must be bounded by the caller!
                    while (true)
                    {
                        yield return seq.GetNext();
                    }
                }
            }
        }

    This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (an infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and, hence, Dispose be called? Well, let's see:

        // The test program class
        public class IteratorTest
        {
            // The main test method.
            public static void Main()
            {
                Console.WriteLine("Going to consume 10 of infinite items");
                Console.Out.Flush();

                foreach (var i in 0.GetSequence())
                {
                    // could use TakeWhile, but wanted to output right at the break...
                    if (i >= 10)
                    {
                        Console.WriteLine("Breaking now!");
                        Console.Out.Flush();
                        break;
                    }

                    Console.WriteLine(i);
                    Console.Out.Flush();
                }

                Console.WriteLine("Done with loop.");
                Console.Out.Flush();
            }
        }

    So, what do we see? Do we see the "Disposed" message from our Dispose, or did the Dispose get skipped because, from an "eyeball" perspective, we should be locked in that infinite generator loop? Here are the results:

        Going to consume 10 of infinite items
        0
        1
        2
        3
        4
        5
        6
        7
        8
        9
        Breaking now!
        Disposed with last sequence number of 11!
        Done with loop.

    Yes indeed, when we break the loop, the state machine that C# generates for the yield iterator exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder, and that led to this test. If you've never seen iterators before (I wrote a previous entry here), the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code: every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But I've been bitten by assumptions before, so it's good to test and see.
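
    For the curious, the reason this works is visible in what the compiler does with foreach. Roughly (a sketch of the lowering, not the compiler's exact output), the loop above expands to the following, and it is the Dispose() call on the enumerator in the finally block that unwinds the iterator's pending using blocks when we break early:

        // approximate expansion of: foreach (var i in 0.GetSequence()) { ... }
        IEnumerator<int> e = 0.GetSequence().GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                int i = e.Current;
                if (i >= 10) break;   // leave the loop early...
                Console.WriteLine(i);
            }
        }
        finally
        {
            // ...but this still runs, and disposing the enumerator executes the
            // iterator's pending finally/using blocks -- hence our "Disposed" message.
            e.Dispose();
        }
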
    Yes, maybe you knew it would, or figured it would, but isn't it nice to know? And as those campy '80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle..." Technorati Tags: C#,.NET

    Read the article

  • Silverlight Cream for December 31, 2010 -- #1019

    - by Dave Campbell
    In this Issue: Michael Washington, Thomas Martinsen, Mike Ormond, William E. Burrows(-2-), Vangos Pterneas, Jesse Liberty, Diptimaya Patra, and Jeff Blankenburg(-2-).

    Above the Fold:
    - Silverlight: "Drag from Multiple Source In Silverlight 4" Diptimaya Patra
    - WP7: "What I Learned In WP7 – Issue 12" Jeff Blankenburg

    Shoutouts:
    - Paul Thurrott posted a great phone comparison chart: Great Windows Phone comparison chart
    - Kunal Chowdhury announced his new Silverlight site: Welcome to Silverlight-Zone - Site is Live Now ... Good luck, Kunal!

    From SilverlightCream.com:
    - MyStudioServer goes Open Source -- Michael Washington decided to put his "MyStudioServer" on CodePlex... I saw this last spring and it's pretty darn cool... check out the post and examples.
    - UriMapping for WP7 -- Thomas Martinsen discusses UriMapping in WP7, details the steps you need to follow, and has sample code to demonstrate.
    - More Monitoring Web Requests on Windows Phone -- Mike Ormond revisits a post about monitoring WP7 web requests and shows how to get the data via Fiddler.
    - New Tutorial – Windows Phone 7 (Getting Started) -- William E. Burrows has 2 parts of a video tutorial series on WP7 development up. This first part gets things rolling, explains what is going on, and gets far enough to display golf courses stored in the database.
    - WP7 Tutorial – Part 2: Managing Courses -- William E. Burrows's 2nd video tutorial is on building out the app to provide features to manage the golf courses for this golf handicap application.
    - Face detection in Windows Phone 7 -- Vangos Pterneas has a post up about a WP7 app he did using René Schulte's Facelight to do facial recognition. Source available and also on CodePlex.
    - Windows Phone From Scratch – Navigation II -- Jesse Liberty has up his latest WP7 From Scratch; it is the 2nd post in the Navigation series, combining the earlier navigation with the animation from the post before to produce a better navigation experience.
    - Drag from Multiple Source In Silverlight 4 -- Diptimaya Patra has a post up at DotNetSlackers on dragging into a drop area from multiple sources of different data templates and contexts.
    - What I Learned In WP7 – Issue 12 -- Jeff Blankenburg's number 12 is up, and he's got all the RGB colors on WP7 charted out: name, HEX, RGB, and visual... looks like a good one to bookmark.
    - What I Learned In WP7 – Issue 13 -- Jeff Blankenburg's number 13 is the chart I have listed in the Shoutout above... a complete phone comparison chart.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results.

    The Sample

    We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttpBinding. The configuration for consuming this web service was shown in a diagram (not included in this excerpt). You can see that, like before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application, which will asynchronously make the web service calls and handle the responses in a callback method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.

    Sample 1 – WCF with Default Timeouts

    In this test I set about recreating the same scenario as before, but this time using WCF as the messaging component. For the first test I used the default configuration settings which WCF had set up when we added a reference to the web service. The timeout values for this test are:

        closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service.

    Test Results (client-side and server-side traces captured as screenshots): Some observations on the results are as follows:

    - The timeouts happened quicker than in the previous tests, because some calls were timing out before they attempted to connect to the server
    - The first few calls that timed out did actually connect to the server and did execute successfully on the server

    Test 2 – Increase Open Connection Timeout & Send Timeout

    In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are:

        closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service.

    Test Results (traces captured as screenshots): Some observations on this test are:

    - This test proved that if the timeouts are high enough, everything will just go through

    Test 3 – Increase just the Send Timeout

    In this test we wanted to increase just the send timeout. The timeout values for this test are:

        closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service.

    Test Results (traces captured as screenshots): Some observations on this test are:

    - In this test, from both the client and server perspective, everything ran through fine
    - The open connection timeout did not seem to have any effect

    Test 4 – Increase Just the Open Connection Timeout

    In this test I wanted to validate the change to the open connection setting by increasing just this on its own.
    The timeout values for this test are:

        closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service.

    Test Results (traces captured as screenshots): Some observations on this test are:

    - In this test you can see that the increase to the open connection timeout (which relates to opening the channel) was not the thing which stopped the calls timing out
    - It is the send of data which is timing out
    - On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server yet timed out on the client
    - You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options

    Test 5 – Smaller Increase in Send Timeout

    In this test I wanted to make a smaller increase to the send timeout than before, just to prove that it was the key setting controlling what was timing out. The timeout values for this test are:

        openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"

    The Test: We simulated 21 calls to the web service.

    Test Results (traces captured as screenshots): Some observations on this test are:

    - You can see that most of the calls got through fine
    - On the client you can see that call 20 timed out, but it still hit the server and executed fine

    Summary

    At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks:

    ASMX: The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client-side to get a connection.

    WSE: The timeout value includes both the time to obtain a connection and the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40-second timeout setting may not throw the error until 60 seconds, when the connection to the server is made. If the connection to the server is made, you should be aware that your message will be processed, and you should design for this.

    WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, this setting's counter includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from the moment you made your call from user code, regardless of whether we are waiting for a connection or have an open connection to the server. To a user, this may appear to give better latency in getting an error response compared to WSE or ASMX.
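
    As a side note, the same four timeouts shown in the config snippets above can also be set in code. Here is a minimal sketch (values arbitrary) using the TimeSpan properties that every System.ServiceModel binding exposes:

        // The four timeouts from the config snippets map directly to
        // TimeSpan properties on the binding (System.ServiceModel).
        var binding = new BasicHttpBinding
        {
            OpenTimeout    = TimeSpan.FromMinutes(1),   // time allowed to open the channel
            SendTimeout    = TimeSpan.FromMinutes(10),  // covers the whole send, including waiting on a connection
            ReceiveTimeout = TimeSpan.FromMinutes(10),  // idle time allowed on the receive side
            CloseTimeout   = TimeSpan.FromMinutes(1)    // time allowed to close the channel
        };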

    Read the article

  • First Post

    - by Allan Ritchie
    It has been a while since I've had a blog, but I'm back into open source dev and decided to get back into things. I had a blog a few years back when NHibernate was an infant (0.8 or something) and I was working with the Wilson ORMapper (www.ormapper.net) at the time. Anyhow, I'm still working with NHibernate (particularly the exciting v3 alpha 1) and the Castle framework. I've also written a .NET ExtDirect stack, about which I'll be writing a few articles due to its flexibility. I decided to write yet another communication stack because all the implementations I found on the Ext forums were lacking any sort of flexibility. So stay tuned... I'll be presenting a bunch of the extension points.

    Read the article

  • XAF DSL Tool Needs a new Team Lead

    - by Patrick Liekhus
    I have enjoyed my time on this project and have used it in several production projects. However, with the enhancements in Visual Studio 2010 and the Entity Framework, the DSL tool doesn't make sense for me to support at this time. With that said, I am looking for someone who has an interest in continuing the project if they so desire. I have moved my attention to creating a new project, Entity Framework Extensions for XAF. We are converting the current DSL tool into the Entity Framework extensions. The same code generation and everything else works; however, the visual design surface is much easier to work with. If you have any questions, please let me know. Also, please take a moment to look at the new project; this is where all my effort going forward will be focused. Thanks again for all the support of my vision thus far, and enjoy.

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes

    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of Programmed I/O
    - Describe the principles of Interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices

    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:

    - Human readable – e.g. video display
    - Machine readable – e.g. magnetic disk
    - Communications – e.g. wifi card

    I/O Modules

    An I/O module has two major functions:

    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links

    Module Functions

    The major functions or requirements for an I/O module fall into the following categories:

    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection

    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following:

    - Command decoding
    - Data
    - Status reporting
    - Address recognition

    The I/O device must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure

    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:

    - Programmed I/O
    - Interrupt-driven I/O
    - DMA

    Programmed I/O

    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

    I/O Commands

    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:

    - Control – used to activate a peripheral and tell it what to do
    - Test – used to test various status conditions associated with an I/O module and its peripherals
    - Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral

    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions

    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system. Each device is given a unique identifier or address; when the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory, and I/O share a common bus, two modes of addressing are possible:

    - Memory-mapped I/O
    - Isolated I/O

    (For a detailed explanation read page 245 of the book.) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O

    Interrupt-driven I/O works as follows:

    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
    - The processor executes the data transfer and then resumes its former processing

    Interrupt Processing

    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:

    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
    - The processor then loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    - The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values from the stack are restored
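
    As a rough software analogy (purely illustrative; the Device class and the Process/DoOtherUsefulWork helpers are hypothetical placeholders, and real device I/O happens far below C#), the difference between programmed and interrupt-driven I/O is busy-waiting on a flag versus reacting to a callback:

        // A fake device standing in for hardware: ReadyFlag plays the role of a
        // status register, the Ready event plays the role of an interrupt line.
        class Device
        {
            public volatile bool ReadyFlag;
            public event EventHandler Ready;
            public int ReadData() { return 42; }
            public void RaiseReady() { Ready?.Invoke(this, EventArgs.Empty); }
        }

        // Programmed I/O analogy: the processor spins on the status flag,
        // doing no useful work until the device is ready.
        void PolledRead(Device device)
        {
            while (!device.ReadyFlag) { /* busy-wait: poll the status again */ }
            Process(device.ReadData());
        }

        // Interrupt-driven analogy: register a handler and carry on; the
        // "interrupt" (event) pulls the processor back only when needed.
        void InterruptDrivenRead(Device device)
        {
            device.Ready += (s, e) => Process(device.ReadData());
            DoOtherUsefulWork();
        }
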
    Design Issues

    Two design issues arise in implementing interrupt I/O:

    - Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    - If multiple interrupts have occurred, how does the processor decide which one to process?

    Addressing device recognition, 4 general categories of techniques are in common use:

    - Multiple interrupt lines
    - Software poll
    - Daisy chain
    - Bus arbitration

    For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:

    - The I/O transfer rate is limited by the speed with which the processor can test and service a device
    - The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer

    Direct Memory Access

    When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function

    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (most common – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending the DMA module the following information:

    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register

    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors

    Characteristics of I/O Channels

    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:

    - A selector channel controls multiple high-speed devices
    - A multiplexor channel can handle I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to multiple devices
    The External Interface: FireWire and InfiniBand

    Types of Interfaces

    One major characteristic of the interface is whether it is serial or parallel:

    - Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    - Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time

    With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:

    - The I/O module sends a control signal requesting permission to send data
    - The peripheral acknowledges the request
    - The I/O module transfers data
    - The peripheral acknowledges receipt of data

    For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook.

    Read the article

  • Everything changes, except that it doesn’t

    - by Dennis Vroegop
    You may have noticed this. Microsoft launched a new product this week. Well, launched is a strong word; they announced it. They call it... The Surface! What is it? Well, it’s a cool-looking family of tablets designed for Windows 8. And I have to say: they look stunning and I can’t wait to have one. There’s just one thing... The name... Where have I heard that before? Why, indeed! Yes, it is also the name of a new paradigm in computing: Surface Computing. You may have read something about that, here on this blog for instance. Well, in order to prevent confusion they have decided to rename Surface to PixelSense. You know, the technology that drives the ‘camera’ inside the Samsung SUR40 for Microsoft Surface... So now, when we talk about Surface, we mean the new tablet. When we talk about PixelSense, we mean that big table... So there you have it... Welcome PixelSense! For more info see http://www.surface.com for the tablet and http://www.pixelsense.com for PixelSense.

    Read the article

  • No VB6 to VS2010 direct upgrade path

    - by Chris Williams
    From the "is this really news?" department... From looking at the currently available versions of 2010, there is no direct upgrade path from VB6 to VS2010. Anyone still using VB6 and wishing to upgrade to VS2010 has two options:  Use the upgrade tool from an earlier version of VS (like 2005 or 2008) and then run the upgrade in VS2010 to get the rest of the way... or rewrite your code. I'll leave it as an exercise to the reader which is the better option. I'd like to take a moment to point out the obvious: A) If you're still using VB6 at this point, you probably don't care about VS2010 compatibility. B) Running your code through 2 upgrade wizards isn't going to result in anything resembling best practices. C) Bemoaning the lack of support in 2010 for a 12 year old version of an extinct programming language helps nobody. This public service announcement is brought to you by the letter C. Thank you.

    Read the article

  • SCCM2012 R2 – How to integrate MDT with SCCM

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/11/12/sccm2012-r2--how-to-integrate-mdt-with-sccm.aspx
    There are two maybe not competing but parallel products: Microsoft Deployment Toolkit and System Center Configuration Manager. A few years ago I wondered why they are separate, and why I cannot share Task Sequences between them. And as usually happens in real life, while I was focused on other technologies, MDT and SCCM became best friends. Let's integrate MDT with SCCM:

    - First, download MDT: http://www.microsoft.com/en-us/download/details.aspx?id=25175
    - Install MDT on your SCCM server box, accept the EULA, and join the CEIP if you like.
    - Once you have completed the installation, I recommend finishing the MDT configuration before integrating with SCCM.
    - Start the Deployment Workbench and install updates; you will need to download and install WAIK.
    - Create a new deployment share, leaving the default values.
    - Create the MDT database. Make sure you create a separate database; DO NOT use the existing SCCM DB.
    - Now we are ready to integrate MDT with SCCM. The integration tool should discover your server automatically.

    After reopening the SCCM console, you should have new cool features in task sequences. How to use them? That's another story …

    Read the article

  • Silverlight Cream for March 14, 2011 -- #1060

    - by Dave Campbell
    In this Issue: Lazar Nikolov, Rudi Grobler, WindowsPhoneGeek, Jesse Liberty, Pete Brown, Jessica Foster, Chris Rouw, Andy Beaulieu, and Colin Eberhardt.

    Above the Fold:
    - Silverlight: "A Silverlight Resizable TextBlock (and other resizable things)" Colin Eberhardt
    - WP7: "Retrofitting the Trial API" Jessica Foster

    Shoutouts:
    - Rudi Grobler has a post up that's not Silverlight, but it's cool stuff you may be interested in: WPF Themes now available on NuGet

    From SilverlightCream.com:
    - Simulating rain in Silverlight -- Lazar Nikolov has a cool tutorial up at SilverlightShow... simulating rain. Nice demo near the top, and source code plus a very nice tutorial on the entire process.
    - Making the ApplicationBar bindable -- Rudi Grobler has a couple of new posts up... first this one on making the WP7 AppBar bindable... he's created 2 simple wrappers that make it possible to bind to a method... with source, as usual!
    - All about Splash Screens in WP7 – Creating animated Splash Screen -- WindowsPhoneGeek's latest is about splash screens in WP7, but he goes one better with animated splash screens. Lots of good information, including closing points in more of a FAQ-style listing.
    - Testing Network Availability -- Jesse Liberty's latest is on testing for network availability in your WP7 app. This should be required reading for anyone building a WP7 app, since you never know when the network is going to be there or not.
    - Lighting up on Windows 7 with Native Extensions for Microsoft Silverlight -- Pete Brown's latest post is about the Native Extensions for Microsoft Silverlight, or NESL, library. Pete describes what NESL is, with a link to the library, installation notes, and tons more information. If you wanna know about or try NESL... this looks like the place to start.
    - Retrofitting the Trial API -- Jessica Foster paused on the way to shipping her WP7 app to add in the trial API code. Check out what she went through to complete that task, as she explains the steps and directions she took. Good description, links, and code.
    - WP7 Insights #2: Creating a Splash Screen in WP7 -- In the 2nd post in his series on WP7 development, Chris Rouw discusses a WP7 splash screen. He gives some good external links for reference, then gets right into discussing his code.
    - Air Hockey for Windows Phone 7 -- Andy Beaulieu shares a tutorial he wrote for the Expression Toolbox site, using the Physics Helper Library and Farseer Physics Engine: an Air Hockey game for WP7.
    - A Silverlight Resizable TextBlock (and other resizable things) -- I think Michael Washington had the best comment about Colin Eberhardt's latest outing: "Another WOW example"... drop this in the pretty darn cool category: an attached behavior that uses a Thumb control within a popup to adorn any UI element and allow the user to resize it!

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • MSDN Search Help

    - by George Evjen
    It looks like MSDN has made a change to their search. If you navigate to http://msdn.microsoft.com you will see the search box at the top of the page. If you put in a search term, such as "Silverlight DataBinding", you will see results pulled from the MSDN forums, Stack Overflow, and a few other places. You will also be able to see the number of replies to each thread and which threads have been answered. Great resource.

    Read the article

  • #altnetseattle – CQRS

    - by GeekAgilistMercenary
    This is a topic I know nothing about, and thus these may be supremely disparate notes. Have fun translating. : )  ...and coolness that the session is well past capacity.

    CQRS separates things from the UI, and everything that needs to be populated is done through commands. The domain and reports have separate storage. Events populate these stores of data, such as a "sold event". What it looks like is that the domain handles requests by event, which would be a product order or something similar. Event sourcing is a key element of the logic. DDD (Domain Driven Design) is part of the core basis for this methodology/structure. The architecture/methodology/structure is perfect for blade-style plugin hardware as needed.

    Good blog entries: "DDDD: Why I love CQRS", another on Command and Query Responsibility Segregation (CQRS), more on CQRS à la Greg Young, a bit by Udi Dahan, and there are more. Google, Bing, etc. are there for a reason.

    It appears the core underpinning architectural element of this is the breakout of uniquely identifiable actions, or I suppose better described as events. Those events then act upon specific pipelines such as read requests, write requests, etc. I will be doing more research on this topic and will have something written up shortly. At this time it seems like nothing new, just a large architectural breakout of the identifiable needs of the entire enterprise system. The reporting is in one segment of the architecture, the domain is in another, hydration is broken out to interfaces, and events are executed to incur events on the reports, or what appears by the description to be events on the domain. Anyway, more to come on this later.
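
    To make the command/event split concrete, here is a minimal illustrative sketch in C# (all names are hypothetical, not from the session): a command is handled on the write side, which records an event, and a separate read model is projected from that event stream.

        // Hypothetical names throughout; a bare-bones illustration of
        // command -> event -> read model, not a real CQRS framework.
        // Requires: System.Collections.Generic
        public class SellCommand { public int ProductId; public int Quantity; }
        public class SoldEvent   { public int ProductId; public int Quantity; }

        public class SalesCommandHandler
        {
            private readonly List<SoldEvent> _events = new List<SoldEvent>();

            public void Handle(SellCommand cmd)
            {
                // the domain validates the command, then records what happened
                _events.Add(new SoldEvent { ProductId = cmd.ProductId, Quantity = cmd.Quantity });
            }
        }

        // The report store is populated by applying events,
        // never by querying the write-side domain objects directly.
        public class SalesReportProjection
        {
            public int TotalUnitsSold { get; private set; }

            public void Apply(SoldEvent e)
            {
                TotalUnitsSold += e.Quantity;
            }
        }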

    Read the article

  • Adventures in Scrum: Lesson 2 - For the record

    - by Martin Hinshelwood
    At SSW we have always done Agile. Recently we have started doing Scrum, and we have nearly completed our first Sprint ever using Scrum. As you probably guessed from my previous post, it looks like it is going to be a "Failed Sprint", but the Scrum Team (this includes the ScrumMaster and the Product Owner) has learned a huge amount about working in the Scrum Framework. We have been running with a "Proxy Product Owner" for the last two weeks, but a simple mistake occurred either during the "Product Planning Meeting" or the "Sprint Planning Meeting" which, had it been caught, could have prevented this Sprint from failing. We had a heated discussion on the vision of someone not in the room, which ended with the assertion that the Product Owner would be quizzed again on their vision. This did not happen, and we ran with the Proxy Product Owner's vision for two weeks.

    Product Owner vision: Update Component A of Product A to Silverlight
    Proxy Product Owner vision: Update Product A to Silverlight

    Do you see the problem? Worse than that, as we had a lot of junior members on the Scrum Team and we are just feeling our way around how Scrum will work at SSW, I missed implementing a fundamental rule. That's right, it was me. It does not matter that I did not know about this rule; it's on the site and I should have read it. Would a police officer let you off if you did not know that a red light meant stop? I think not... But what is this amazing rule, I hear you shout? It's simple. As per our rule, I should have sent the following email:

    "Dear Proxy Product Owner, For the record, I disagree that the Product Owner wants us to 'Update Product A to Silverlight', as I still think that he wants us to 'Update Component A of Product A to Silverlight' and not the entire application. Regards, Martin" - 'For the record' - Rules to being Software Consultants - Dealing with Clients

    This email should have been copied to the entire Scrum Team, which would have included the Product Owner, who would have nipped this misunderstanding in the bud, and we would have had one less impediment. Technorati Tags: SSW,SSW Rules,SSW Standards,Scrum,Product Owner,ScrumMaster,Sprint,Sprint Planning Meeting,Product Planning Meeting

    Read the article

  • Visual Studio Little Wonders: Box Selection

    - by James Michael Hare
    So this week I decided I’d do a Little Wonder of a different kind and focus on an underused IDE improvement: Visual Studio’s Box Selection capability. This is a handy feature that many people still don’t realize was made available in Visual Studio 2010 (and beyond). True, there have been other editors in the past with this capability, but now that it’s fully part of Visual Studio we can enjoy its goodness from within our own IDE. So, for those of you who don’t know what box selection is and what it allows you to do, read on!

    Sometimes, we want to select beyond the horizontal…

    The problem with traditional text selection in many editors is that it is horizontally oriented. Sure, you can select multiple rows, but if you do, you will pull in the entire row (at least for the middle rows). Under the old selection scheme, if you wanted to select a portion of text from each row (a “box” of text) you were out of luck. Box selection rectifies this by allowing you to select a box of text bounded by a selection rectangle that you can grow horizontally or vertically. So let’s think of a situation where this comes in handy.

    Let’s say, for instance, that we are defining an enum in our code that we want to be able to translate into some string values (possibly to be stored in a database, output to screen, etc.). Perhaps such an enum would look like this:

        public enum OrderType
        {
            Buy,      // buy shares of a commodity
            Sell,     // sell shares of a commodity
            Exchange, // exchange one commodity for another
            Cancel,   // cancel an order for a commodity
        }

    Now, let’s say we are in the process of creating a Dictionary<K,V> to translate our OrderType:

        var translator = new Dictionary<OrderType, string>
        {
            // do I really want to retype all this???
        };

    Yes, the example above is contrived so that we will pull some garbage if we do a multi-line select. I could select the lines above using the traditional multi-line selection, and then paste them into the translator code, which would result in this:

        var translator = new Dictionary<OrderType, string>
        {
            Buy,      // buy shares of a commodity
            Sell,     // sell shares of a commodity
            Exchange, // exchange one commodity for another
            Cancel,   // cancel an order for a commodity
        };

    But I have a lot of junk there. Sure, I can manually clear it out, or use some search-and-replace magic, but if this were hundreds of lines instead of just a few, that would quickly become cumbersome.

    The Box Selection

    Now that we have the ability to create box selections, we can select the box of text to delete! Most of us are familiar with the fact we can drag the mouse (or hold [Shift] and use the arrow keys) to create a selection that spans multiple rows. Box selection, however, actually allows us to select a box instead of the typical horizontal lines. Then we can press the [Delete] key and the pesky comments are all gone! You can do this either by holding down [Alt] while you select with your mouse, or by holding down [Alt+Shift] and using the arrow keys on the keyboard to grow the box horizontally or vertically. So now we have:

        var translator = new Dictionary<OrderType, string>
        {
            Buy,
            Sell,
            Exchange,
            Cancel,
        };

    Which is closer, but we still need an opening curly, the string to translate to, and the closing curly and comma. Fortunately, again, this is easy with box selections due to the fact that box selection can even work for a zero-width selection!
    That is, hold down [Alt] and either drag down with no width, or hold down [Alt+Shift] and arrow down, and you will define a selection range with no width: essentially, a vertical line selection. So why is this useful? Well, just like with any selected range, we can type and it will replace the selection. What does this mean for box selections? It means that we can insert the same text all the way down on each line! If we have the same selection above, and type a curly and a space, every line gets them. Imagine doing this over hundreds of lines and think of what a time saver it could be! Now make a zero-width selection on the other side and type a curly and a comma. So close! Now finally, imagine we’ve already defined these strings somewhere and want to paste them in:

        private const string BuyText = "Buy Shares";
        private const string SellText = "Sell Shares";
        private const string ExchangeText = "Exchange";
        private const string CancelText = "Cancel";

    We can, again, use our box selection to pull out the constant names, click copy (or [Ctrl+C]), select a range to paste into, and click paste (or [Ctrl+V]) to get the final result:

        var translator = new Dictionary<OrderType, string>
        {
            { Buy, BuyText },
            { Sell, SellText },
            { Exchange, ExchangeText },
            { Cancel, CancelText },
        };

    Sure, this was a contrived example, but I’m sure you’ll agree that it adds myriad possibilities for new ways to copy and paste vertical selections, as well as inserting text across a vertical slice.

    Summary: While box selection has been around in other editors, we finally get to experience it in VS2010 and beyond. It is extremely handy for selecting columns of information for cutting, copying, and pasting. In addition, it allows you to create a zero-width vertical insertion point that can be used to enter the same text across multiple rows. Imagine the time you can save adding repetitive code across multiple lines! Try it; the more you use it, the more you’ll love it! Technorati Tags: C#,CSharp,.NET,Visual Studio,Little Wonders,Box Selection

    Read the article

  • Prologue…

    - by Chris Stewart
    Hi all, this is the beginning of my blog. I have been a software developer for going on 11 years using the Microsoft toolset (primarily VB 5, VB 6, VB.NET and SQL Server 6, 7, 2000, 2005, 2008). My coding interests are C#, ASP.NET, SQL and XNA. Here I will post my musings, things of interest, techie babble and sometimes random gibberish. My hope for this blog is to document my learning experiences with C#, and to help, encourage and in some way be useful to those who come after me. Thanks for reading!

    Read the article

  • Converting projects to use Automatic NuGet restore

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/11/converting-projects-to-use-automatic-nuget-restore.aspx

    Download tool

    In version 2.7 of NuGet, automatic NuGet restore was introduced, meaning you no longer need to distort your MSBuild project files with NuGet target information. Visual Studio and TFS 2013 Build have this enabled by default. However, if your project was created before this was introduced, and/or if you have used the "Enable NuGet Package Restore" function afterwards, you now have a series of unwanted things in your projects, and a series of project files that have been modified -- and you no longer want or need this! You might also run into some unwanted issues due to these modifications. This MSBuild modification was needed only before NuGet 2.7! So: DON'T USE THIS FUNCTION!!!

    There is an issue (https://nuget.codeplex.com/workitem/4019) on the NuGet project site to get this function removed, renamed, or at least moved farther away from the top level (please help vote it up!). The response seems to be that it WILL BE removed, around version 3.0. This function does nothing you need after the introduction of NuGet 2.7. What is also unfortunate is its naming: it implies that it is needed; it is not, and what is worse, there is no corresponding function to remove what it does!

    So, to fix this, use the tool named IFix, which will fix the issue for you -- all free of course, and the code is open source. Also report issues there: https://github.com/OsirisTerje/IFix

    IFix information

    DOWNLOAD HERE

    This command-line tool installs using an MSI and adds itself to the system path. If you work in a team, you will probably need to use the tool multiple times. Anyone on the team may at any time use the "Enable NuGet Package Restore" function and mess up your project again. The IFix program can be run either in check mode, where it does not write anything back -- it only checks if you have any issues -- or in fix mode, where it will also perform the necessary fixes for you.

    The IFix program is used like this:

        IFix <command> [-c/--check] [-f/--fix] [-v/--verbose]

    The command in this case is "nugetrestore". It will do a check from the location where it is called, running through all subfolders from that location. So "IFix nugetrestore --check" will do the check, and "IFix nugetrestore --fix" will perform the changes, for all files and folders below the current working directory. (Note that --check can be replaced with just -c, and --fix with -f, and so on.)

    BEWARE: When you run the fix option, all solutions to be affected must be closed in Visual Studio! So, if you just want to DO it, then run "IFix nugetrestore --check" to see if you have issues, then "IFix nugetrestore --fix" to fix them.

    How does it work

    IFix nugetrestore checks, and optionally fixes, four issues that the older enabling of NuGet restore created. The issues are related to the MSBuild process, and the fixes are:

    - Deleting the nuget.targets file
    - Deleting the nuget.exe that is located under the .nuget folder
    - Removing all references to nuget.targets in the solution file
    - Removing all properties and target imports of nuget.targets inside the csproj files

    IFix fixes these issues in the same sequence. The first step, removing the nuget.targets file, is the most critical one: all instances of the nuget.targets file within the scope of a solution have to be removed, and in addition it has to be done with the solution closed in Visual Studio.
    If Visual Studio finds a nuget.targets file, the csproj files will automatically be messed up again. This means the removal process above might need to be done multiple times, especially when you're working with a team and the solution context menu still has the "Enable NuGet Package Restore" function; someone on the team might inadvertently use it at any time. It can be a good idea to add this check to a check-in policy if you run TFS standard version control, but that will have no effect if you use TFS Git version control, of course. So, better be prepared to run the IFix check from time to time. Or, even better, install IFix on your build servers and add a call to "IFix nugetrestore --check" in the TFS build script.

    How does it look

    As a first example, I have run the IFix program from the top of a set of Git repositories, so it spans multiple repositories with multiple solutions. The result from the check option is as follows (output captured as a screenshot): we see four red lines, one for each of the four checks we talked about in the previous section. The fact that they are red means we have that particular issue.

    The first section (above the first red text line) is the nuget.targets section. Notice No. 1: it says it has found no paths to copy. What IFix does here is check if there are any defined paths to other NuGet galleries; if there are, those are copied over to the nuget.config file, which is where they should be in version 2.7 and above. No. 2 says it has found the particular nuget.targets file, and No. 3 states it HAS found some other NuGet galleries defined in the targets file, which it would like to copy to the config file. No. 4 is the section for nuget.exe files; it lists those it has found and would like to delete. No. 5 states it has found a reference to nuget.targets in the solution file. This reference comes from the fact that the .nuget folder is a solution folder, and the items within are described in the solution file.

    It then checks the csproj files, and as can be seen from the last red line, it has found issues in 96 out of 198 csproj files. There are two possible issues in a csproj file. No. 6 is the first one, the most common and most important: an "Import Project" section. This is the section that calls the nuget.targets file. No. 7 is another issue, which sometimes is there and sometimes not: a RestorePackages property, which should also go away.

    Now, if we run the IFix nugetrestore --fix command, and then the check again after that, the result is: All green!
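
    For context, the csproj remnants referred to in No. 6 and No. 7 typically look like the fragment below (a representative sketch; exact attributes vary by NuGet version):

        <!-- No. 7: the RestorePackages property the old function added -->
        <RestorePackages>true</RestorePackages>

        <!-- No. 6: the Import of nuget.targets that wired restore into MSBuild -->
        <Import Project="$(SolutionDir)\.nuget\NuGet.targets"
                Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />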

    Read the article

  • Silverlight Cream for April 26, 2010 -- #848

    - by Dave Campbell
    In this Issue: Viktor Larsson, Mike Snow(-2-), Jeff Brand, Marlon Grech(-2-, -3-), Jonathan van de Veen, and Phil Middlemiss.

    Shoutout:
    - Justin Angel wants everyone to know he is joining the Vertigo team!... congratulations, Justin!

    From SilverlightCream.com:
    - Learning Silverlight – Advanced Color Animations -- Viktor Larsson is demonstrating small pieces of Silverlight he's picked up in the course of his work project. This first one is on ColorAnimations using KeyFrames.
    - Silverlight Tip of the Day #4 – Enabling Out of Browser Applications -- Mike Snow has Tip #4 up, and it's all about OOB... from what you have to do, to what your user sees, including how to check whether you're running OOB... source project included.
    - Silverlight Tip of the Day #5 – Debugging Out of Browser Applications -- Following a fine tradition he started with his first series, Mike Snow is putting out more than one tip per day :) ... Number 5 is up and is all about debugging OOB apps.
    - Simplifying Page Transitions in Windows Phone 7 Silverlight Applications -- Jeff Brand has a WP7 post up discussing page transitions. He first discusses the most common brute-force method, then moves into the TransitioningContentControl from the Toolkit.
    - An introduction to MEFedMVVM – PART 1 -- Marlon Grech, Peter O'Hanlon, and Glenn Block worked together to produce an MEF and MVVM library that works for WPF and Silverlight and allows design-time goodness and a loosely-coupled bridge between the View and ViewModel... and it's on CodePlex... they're also looking for comments/additions, so check it out.
    - Leveraging MEFedMVVM ExportViewModel – MEFedMVVM Part 2 -- In Part 2, Marlon Grech demonstrates using MEFedMVVM and shows off some of the basics, such as importing services, design-time data, and DataContextAware ViewModels.
    - IContextAware services to bridge the gap between the View and the ViewModel – MEFedMVVM Part 3 -- Marlon Grech's 3rd post about MEFedMVVM is about IContextAwareService -- bridging the gap between the View and ViewModel -- a service that knows about its context.
    - Building a Web Setup that configures your Silverlight application -- Jonathan van de Veen has a post up at SilverlightShow on using a Web Setup project to configure your Silverlight app at startup... if you're not familiar with doing this... take note!
    - A Chrome and Glass Theme - Part 4 -- Phil Middlemiss has part 4 of his great tutorial series up on creating a theme in Expression Blend... this time tackling the ListBox.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Learning Resources for SharePoint

    - by Enrique Lima
    - SharePoint 2010 Reference: Software Development Kit
    - SharePoint 2010: Getting Started with Development on SharePoint 2010 – Hands-on Labs in C# and Visual Basic
    - SharePoint Developer Training Kit
    - Professional Development Evaluation Guide and Walkthrough
    - SharePoint Server 2010: Advanced Developer Training Presentations

    Read the article

  • Pimp my Silverlight Firestarter

    - by mbcrump
    So Silverlight Firestarter is over, and you're sitting on your couch thinking… what now? Well, it's time to take things further. So how exactly can you pimp the Silverlight Firestarter? Read below and you will find out.

    1) Pimp the videos: First we are going to use a program named Juice to download all of the Silverlight Firestarter videos. Go ahead and point your browser to http://juicereceiver.sourceforge.net/ and download the application. It works on Mac, Linux and PC. After it is downloaded, you are going to want to add an RSS feed by clicking the button highlighted below. At this point you are going to want to add the following URL inside the textbox and hit Save: http://channel9.msdn.com/Series/Silverlight-Firestarter/RSS

    This RSS feed includes all the Silverlight Firestarter labs and presentations listed below:

    - The Future of Silverlight
    - Data Binding Strategies with Silverlight and WP7
    - Building Compelling Apps with WCF using REST and LINQ
    - Building Feature Rich Business Apps Today with RIA Services
    - MVVM: Why and How? Tips and Patterns using MVVM and Service Patterns with Silverlight and WP7
    - Tips and Tricks for a Great Installation Experience
    - Tune Your Application: Profiling and Performance Tips
    - Performance Tips for Silverlight Windows Phone 7

    Select all the videos and click the Download button (the one with the blue arrow). Once all the videos are downloaded you will have about 4.64 GB of Silverlight fun. You can now move these videos to your media server and watch them with whatever device you want. Put them on an iPad, iPhone... emm, wait, I mean WP7 or WMC7.

    2) Pimp the training material: Download the offline installer for the labs here. This will give you almost a gig of free training materials. Here are the topics covered:

    Level 100: Getting Started
    - Lab 01 - WinForms and Silverlight
    - Lab 02 - ASP.NET and Silverlight
    - Lab 03 - XAML and Controls
    - Lab 04 - Data Binding

    Level 200: Ready for More
    - Lab 05 - Migrating Apps to Out-of-Browser
    - Lab 06 - Great UX with Blend
    - Lab 07 - Web Services and Silverlight
    - Lab 08 - Using WCF RIA Services

    Level 300: Take me Further
    - Lab 09 - Deep Dive into Out-of-Browser
    - Lab 10 - Silverlight Patterns: Using MVVM
    - Lab 11 - Silverlight and Windows Phone 7

    You will notice that it installs Firestarter to the default of C:\Firestarter, so you will have to navigate to that folder and double-click on Default.htm to get started. Now, if you followed part one of the pimping guide, then you already have all the videos on your PC. You will notice that once you go into a lab you will get a lab document and source at the bottom of the article. Now, instead of opening the source folder in a web browser, you can just copy the folder C:\Firestarter\Labs into your Visual Studio 2010 project folder. This will save a lot of time later.

    3) Pimp my Silverlight 5 knowledge: Always keep reading as much as possible, and remember that the Silverlight 5 beta should come in Q1 of 2011, with the final release at the end of 2011. Here are 5 great blog posts on Silverlight 5:

    - Scott Gu's Blog
    - Mary Jo's Article on Silverlight 5
    - The Future of Silverlight (Official)
    - Kunal Chowdhury Blog
    - Tim Heuer's Blog

    That's all I've got for now. Have fun with all the new Silverlight content. Subscribe to my feed

    Read the article

  • Bug with Set / Get Accessor in .Net 3.5

    - by MarkPearl
I spent a few hours scratching my head on this one... so I thought I would blog about it in case someone else has the same headache. Assume you have a class, and you want to use the INotifyPropertyChanged interface so that when you bind to an instance of the class, you get the magic sauce of binding to push your updates to the UI. Well, I had a special instance where I wanted one of the properties of the class to add some additional formatting to itself whenever someone changed its value (see the code below).

class Test : INotifyPropertyChanged
{
    private string _inputValue;

    public string InputValue
    {
        get
        {
            return _inputValue;
        }
        set
        {
            if (value != _inputValue)
            {
                _inputValue = value + "Extra Stuff";
                NotifyPropertyChanged("InputValue");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    public void NotifyPropertyChanged(string info)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(info));
        }
    }
}

Everything looked fine, but when I ran it in my WPF project, the textbox I was binding to would not update. I couldn't understand it! I thought the code made sense, so why wasn't it working? Eventually StackOverflow came to the rescue, where I was told that it was a bug in the .NET 3.5 runtime and that a fix was scheduled for .NET 4. For those who have the same problem, here is the workaround: you need to raise the NotifyPropertyChanged call on the application thread!

public string InputValue
{
    get
    {
        return _inputValue;
    }
    set
    {
        if (value != _inputValue)
        {
            _inputValue = value + "Extra Stuff";
            // Marshal the change notification onto the UI thread
            Application.Current.Dispatcher.BeginInvoke((Action)delegate
            {
                NotifyPropertyChanged("InputValue");
            });
        }
    }
}
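If several properties need this treatment, one way to avoid repeating the dispatcher dance in every setter is a small base class. This is a minimal sketch of my own (the class name is hypothetical, not from the original post):

using System;
using System.ComponentModel;
using System.Windows;

// Hypothetical reusable base class that always raises PropertyChanged
// on the UI thread, so individual property setters stay clean.
public abstract class UiThreadNotifier : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void NotifyPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler == null) return;

        var dispatcher = Application.Current.Dispatcher;
        if (dispatcher.CheckAccess())
        {
            // Already on the UI thread: raise synchronously.
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
        else
        {
            // Marshal onto the UI thread, as in the workaround above.
            dispatcher.BeginInvoke((Action)(() =>
                handler(this, new PropertyChangedEventArgs(propertyName))));
        }
    }
}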

    Read the article

  • Future of BizTalk

    - by Vamsi Krishna
The future of BizTalk: the last TechEd conference was a very important one from a BizTalk perspective. Microsoft will continue innovating on BizTalk and releasing BizTalk versions "for years to come", so all the investments that clients have made so far will continue giving returns. There are three flavors of BizTalk: BizTalk on-premises, BizTalk as IaaS, and BizTalk as PaaS (Azure Service Bus).

Windows Azure IaaS and How It Works (June 12, 2012; speaker: Corey Sanders). This session covers the significant investments that Microsoft is making in the Windows Azure Infrastructure-as-a-Service (IaaS) solution: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR201

TechEd provided two sessions around BizTalk futures:
AZR207: Application Integration Futures - The Road Map and What's Next on Windows Azure: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR207
AZR211: Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR211

Here are some highlights from the two sessions at TechEd. Bala Sriram, Director of Development for BizTalk, provided the introduction at the road map session and covered some key points around BizTalk:
- More than 12,000 customers
- 79% of BizTalk users have already adopted BizTalk Server 2010
- BizTalk Server will be released for "years to come" after this release, i.e. there will be more releases of BizTalk after BizTalk 2010 R2
- BizTalk 2010 R2 will be releasing 6 months after Windows 8
- New Azure (cloud-based) BizTalk scenarios will be available in IaaS and PaaS
- BizTalk Server on-premises, IaaS, and PaaS are complementary and designed to work together
- BizTalk investments will be taken forward

The second session was mainly around drilling into and demonstrating the upcoming capabilities in BizTalk Server on-premises, IaaS, and PaaS:
BizTalk IaaS: Users will be able to bring their own or choose from an Azure IaaS template to quickly provision BizTalk Server virtual machines (VHDs).
BizTalk Server 2010 R2: Native adapter to connect to Azure Service Bus Queues, Topics and Relay; native send port adapter for REST (demonstrated by connecting to Salesforce to get and update Salesforce information); the ESB Toolkit will be incorporated as part of the product and setup.
BizTalk PaaS: HTTP, FTP, Service Bus, and LOB application connectivity; XML and flat file processing; message properties; transformation, archiving, and tracking; content-based routing.

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
TFS 2012 introduces a new type of lab environment called a Standard Environment. This allows you to set up a full Build-Deploy-Test (BDT) workflow that will build your application, deploy it to your target machine(s), and then run a set of tests on that server to verify the deployment. In TFS 2010 you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

High Level Steps
1. Install and configure Visual Studio 2012 Test Controller on target server
2. Create Standard Environment
3. Create test plan with test case
4. Run test case
5. Create coded UI test from test case
6. Associate coded UI test with test case
7. Create build definition using LabDefaultTemplate

1. Install and Configure Visual Studio 2012 Test Controller on Target Server
First of all, note that you do not have to have the test controller running on the target server. It can run on another server, as long as the test agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully.

For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

2. Create Standard Environment
Now you need to create a lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center, then click New to create a new environment. Enter a name for the environment; since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server.
You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent on the machine later. On the next page you can add tags to the machine; this is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a test agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample); if you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.

3-5. Create Test Plan, Run Test Case, Create Coded UI Test
I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using coded UI tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about coded UI tests, see Verifying Code by Using Coded User Interface Tests.

6. Associate Coded UI Test with Test Case
OK, so now we want to automate our coded UI test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing test case, thereby making the test case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method, then open the Test Case work item that you want to automate. Go to the Associated Automation tab, click on the "…" button, and select the coded UI test that corresponds to the test case. Press OK and then save the test case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

7. Create Build Definition using LabDefaultTemplate
Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remotely deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual and define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller.
Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select LabDefaultTemplate.11.xaml; this is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the "…" button on the Lab Process Settings property. First, select the environment that you created before, then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.

On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database; if you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server; then you would just execute that deployment batch file here in one single step.

The workflow defines some variables that are useful when running the deployments:
- $(BuildLocation): the full path to where your build files are located
- $(InternalComputerName_<VM Name>): the computer name for a virtual machine in an SCVMM environment
- $(ComputerName_<VM Name>): the fully qualified domain name of the virtual machine

As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information on building your deployment scripts here.

On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; in there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should show the deployment and the test run completing successfully.
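For reference, here is roughly the shape of the coded UI test that backs the automated test case in steps 5-6. This is a minimal sketch of the scaffolding Visual Studio generates; the UIMap action and assertion method names are hypothetical recorded names, not from the original post:

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Sketch of a coded UI test class that could be associated with the
// test case. UIMap is the designer-generated class from recording.
[CodedUITest]
public class MyApplicationUITests
{
    [TestMethod]
    public void VerifyHomePageLoadsAfterDeployment()
    {
        this.UIMap.OpenHomePage();          // recorded action (hypothetical name)
        this.UIMap.AssertHomePageTitle();   // recorded assertion (hypothetical name)
    }

    public UIMap UIMap
    {
        get
        {
            if (map == null)
            {
                map = new UIMap();
            }
            return map;
        }
    }

    private UIMap map;
}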

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain.  This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types.

What is a Tuple?
The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate.  Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order.  That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int.  The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.  Some people tend to love tuples because they give you a quick way to combine multiple values into one result.  This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of.  They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine.  In these cases, you do not need to define a class; simply create a tuple containing the types you wish to return, and you are ready to go!

On the other hand, there are some people who see tuples as a crutch in object-oriented design.  They may view the tuple as a very watered-down class with very little inherent semantic meaning.  As an example, what if you saw this in a piece of code:

1: var x = new Tuple<int, int>(2, 5);

What are the contents of this tuple?  If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question.  The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as:

1: public sealed class RetrySettings
2: {
3:     public int TimeoutSeconds { get; set; }
4:     public int MaxRetries { get; set; }
5: }

Here, the meaning of each int in the class is much more clear, but it's a bit more work to create the class and it can clutter a solution with extra classes.

So, what's the correct way to go?  That's a tough call.  You will have people who will argue quite well for one or the other.  For me, I consider the Tuple to be a tool to make it easy to collect values together easily.  There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived, the meaning isn't easily lost, and I feel this is a good compromise.  If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class.

Finally, it should be noted that tuples are immutable.  That means they are assigned a value at construction, and that value cannot be changed.  Now, of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to.

Tuples from 1 to N
Tuples come in all sizes: you can have as few as one element in your tuple, or as many as you like.  However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly.
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments:

1: // a singleton tuple of integer
2: Tuple<int> x;
3: 
4: // or more
5: Tuple<int, double> y;
6: 
7: // up to seven
8: Tuple<int, double, char, double, int, string, uint> z;

Anything eight and above, and we have to nest tuples inside of tuples.  The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks at runtime to make sure that the type is a Tuple.  This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property.

1: // an 8-tuple
2: Tuple<int, int, int, int, int, double, char, Tuple<string>> t8;
3: 
4: // a 9-tuple
5: Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9;
6: 
7: // a 16-tuple
8: Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int,int>>> t16;

Notice that on the 16-tuple we had to have a nested tuple in the nested tuple.  Since the tuple can only support up to seven items and then a rest element, if the nested tuple needs more than seven items you must nest in it as well.

Constructing tuples
Constructing tuples is just as straightforward as declaring them.  That said, you have two distinct ways to do it.  The first is to construct the tuple explicitly yourself:

1: var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927);

This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively.  Make sure the order of the arguments supplied matches the order of the types!  Also notice that we can't half-assign a tuple or create a default tuple.  Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time.

Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods.  These methods (much like C++'s std::make_pair method) will infer the types from the method call so you don't have to type them in.  This can dramatically reduce the amount of typing required, especially for complex tuples!

1: // this 4-tuple is typed Tuple<int, double, string, char>
2: var t4 = Tuple.Create(42, 3.1415927, "Love", 'X');

Notice how much easier it is to use the factory methods and infer the types?  This can cut down on typing quite a bit when constructing tuples.  The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton, as we described before in nested tuples.

Accessing tuple members
Accessing a tuple's members is simplicity itself… mostly.  The properties for accessing up to the first seven items are Item1, Item2, …, Item7.  If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner.  Once again, keep in mind that these are read-only properties and cannot be changed.
1: // for septuples and below, use the Item properties
2: var t1 = Tuple.Create(42, 3.14);
3: 
4: Console.WriteLine("First item is {0} and second is {1}",
5:     t1.Item1, t1.Item2);
6: 
7: // for octuples and above, use Rest to retrieve nested tuple
8: var t9 = new Tuple<int, int, int, int, int, int, int,
9:     Tuple<int, int>>(1, 2, 3, 4, 5, 6, 7, Tuple.Create(8, 9));
10: 
11: Console.WriteLine("The 8th item is {0}", t9.Rest.Item1);

Tuples are IStructuralComparable and IStructuralEquatable
Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples.  These interfaces, IStructuralComparable and IStructuralEquatable, make it easy to compare two tuples for equality and ordering.  This is invaluable for sorting, and makes it easy to use tuples as a compound key to a dictionary (one of my favorite uses)!

Why is this so important?  Remember when we said that some folks think tuples are too generic and you should define a custom class?  This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task!  Thankfully the tuple does this all for you through the explicit implementations of these interfaces.

For equality, two tuples are equal if all elements are equal between the two tuples; that is, if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on.  For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1 and sees which one has a smaller Item1.  If one has a smaller Item1, it is the smaller tuple.  However, if both Item1 are the same, it compares Item2, and so on.

For example:

1: var t1 = Tuple.Create(1, 3.14, "Hi");
2: var t2 = Tuple.Create(1, 3.14, "Hi");
3: var t3 = Tuple.Create(2, 2.72, "Bye");
4: 
5: // true, t1 == t2 because all items are ==
6: Console.WriteLine("t1 == t2 : " + t1.Equals(t2));
7: 
8: // false, t2 != t3 because at least one item is different
9: Console.WriteLine("t2 == t3 : " + t2.Equals(t3));

The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface:

1: // true because t1.Item1 < t3.Item1; if they had been the same it would check Item2 and so on
2: Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0));

So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList:

1: var tupleDict = new Dictionary<Tuple<int, double, string>, string>();
2: 
3: tupleDict.Add(t1, "First tuple");
4: tupleDict.Add(t3, "Second tuple");
5: 
6: // note: adding t2 as well would throw, since t2 equals t1 and keys must be unique

Because Tuple's IStructuralEquatable implementation creates its hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1-day chart, a 1-week chart, etc.):

1: // the account number (string) and number of days (int) are the key to a cached chart
2: var chartCache = new Dictionary<Tuple<string, int>, IChart>();

Summary
The System.Tuple, like any tool, is best used where it will achieve a greater benefit.  I wouldn't advise overusing them on objects with a large scope, where they can become difficult to maintain.  However, when used properly in a well-defined scope, they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering.  They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter.
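To round out the "returning multiple values from methods" use mentioned in the summary, here is a minimal sketch of my own (the method and names are illustrative, not from the original post):

using System;

static class StatsHelper
{
    // Returns both the minimum and maximum of an array in one pass,
    // packaged in a tuple instead of using out/ref parameters.
    public static Tuple<int, int> MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (var v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return Tuple.Create(min, max);
    }
}

class Program
{
    static void Main()
    {
        var result = StatsHelper.MinMax(new[] { 7, 2, 9, 4 });
        Console.WriteLine("Min: {0}, Max: {1}", result.Item1, result.Item2);
    }
}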

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >