Search Results

Search found 4886 results on 196 pages for 'geeks on hugs'.


  • Chunking a List - .NET vs Python

    - by Abhijeet Patel
    As I mentioned last time, I'm knee deep in Python these days. I come from a statically typed background, so it's definitely a mental adjustment. List comprehensions are BIG in Python, and having worked with a few of them I can see why. Let's say we need to chunk a list into sublists of a specified size. Here is how we'd do it in C#:

        static class Extensions
        {
            public static IEnumerable<List<T>> Chunk<T>(this List<T> l, int chunkSize)
            {
                // Guard: a zero chunk size would loop forever, so require a positive value.
                if (chunkSize <= 0)
                {
                    throw new ArgumentException("chunkSize must be positive", "chunkSize");
                }
                for (int i = 0; i < l.Count; i += chunkSize)
                {
                    yield return new List<T>(l.Skip(i).Take(chunkSize));
                }
            }
        }

        static void Main(string[] args)
        {
            var l = new List<string> { "a", "b", "c", "d", "e", "f", "g" };
            foreach (var list in l.Chunk(7))
            {
                string str = list.Aggregate((s1, s2) => s1 + "," + s2);
                Console.WriteLine(str);
            }
        }

    A little wordy, but still pretty concise thanks to LINQ. On each iteration we skip ahead by chunkSize elements and yield out a new List of up to chunkSize elements. The Python implementation is a bit more terse:

        def chunkIterable(iterable, chunkSize):
            '''Chunks a sliceable iterable into sublists of the specified chunkSize'''
            assert hasattr(iterable, "__iter__"), "iterable is not an iterable"
            for i in xrange(0, len(iterable), chunkSize):
                yield iterable[i:i + chunkSize]

        if __name__ == '__main__':
            l = ['a', 'b', 'c', 'd', 'e', 'f']
            generator = chunkIterable(l, 2)
            try:
                while 1:
                    print generator.next()
            except StopIteration:
                pass

    xrange generates elements in the specified range, taking a seed and returning a generator that can be used in a for loop (much like using a C# iterator in a foreach loop). Since chunkIterable has a yield statement, it becomes a generator as well. iterable[i:i + chunkSize] essentially slices the list based on the current iteration index and chunkSize, creating a new list that we yield out to the caller one piece at a time. A generator, much like an iterator, is a state machine: each subsequent call remembers the state at which the last call left off and resumes execution from that point. The caveat to keep in mind is that since variables are not explicitly typed, we need to ensure the object passed in is iterable, using hasattr(iterable, "__iter__"). This way we can perform chunking on any object that is an "iterable", very similar to accepting an IEnumerable in .NET land.
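    For a list whose length is not a multiple of the chunk size, the final chunk simply comes back short, since Take stops at the end of the list. A quick sketch of calling the same Chunk extension (my own example, not from the original post):

        // Chunk 7 items into groups of 3: the last group holds the single leftover.
        var items = new List<string> { "a", "b", "c", "d", "e", "f", "g" };
        foreach (var chunk in items.Chunk(3))
        {
            Console.WriteLine(string.Join(",", chunk));
        }
        // Output:
        // a,b,c
        // d,e,f
        // g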

    Read the article

  • Creating custom controls for asp.net

    - by jaullo
    While it's true that asp.net contains many controls that make our lives easier, on many occasions we need additional functionality. One option is to turn to creating custom controls. This will be the first of several posts I will dedicate to showing how to build custom controls using extremely simple, easy-to-understand elements. For this we will use only the RegularExpressionValidator and a few regular expressions. In this example we will extend the functionality of a TextBox so that it validates credit card numbers. Our TextBox must verify that there are 16 digits, in groups of 4, separated by a -. So, we create a new asp.net server control project. First we import the namespaces:

        Imports System.ComponentModel
        Imports System.Web
        Imports System.Web.UI.WebControls
        Imports System.Web.UI

    Second, we create our class:

        Public Class TextboxCreditCardNumber
        End Class

    Now we tell our class to inherit from TextBox:

        Public Class TextboxCreditCardNumber
            Inherits TextBox
        End Class

    Once we have this, our programming base is ready, so let's code the new functionality. We declare our variables and a public property that will contain the error message returned to the user; it is public so that it can be customized:

        Private req As New RegularExpressionValidator
        Private mstrmensaje As String = "Número de Tarjeta Invalido"

        Public Property MensajeError() As String
            Get
                Return mstrmensaje
            End Get
            Set(ByVal value As String)
                mstrmensaje = value
            End Set
        End Property

    Now we define the OnInit method of our control, in which we assign the properties and initialize our functions:

        Protected Overrides Sub OnInit(ByVal e As System.EventArgs)
            req.ControlToValidate = MyBase.ID
            req.ErrorMessage = mstrmensaje
            req.Display = ValidatorDisplay.Dynamic
            req.ValidationExpression = "^(\d{4}-){3}\d{4}$|^(\d{4} ){3}\d{4}$|^\d{16}$"
            Controls.Add(New LiteralControl("&nbsp;"))
            Controls.Add(req)
            MyBase.OnInit(e)
        End Sub

    And finally, we define the Render method (which is in charge of drawing our control):

        Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
            MyBase.Render(writer)
            req.RenderControl(writer)
        End Sub

    All that remains now is to compile our class and add the new control to the controls ToolBox so it can be used.
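    The validation expression accepts 16 digits in groups of four separated by dashes or spaces, or run together. As a side note, a quick way to sanity-check the expression outside of ASP.NET is a tiny console program (my own throwaway snippet, not part of the original post):

        using System;
        using System.Text.RegularExpressions;

        class CardRegexCheck
        {
            static void Main()
            {
                // The same pattern the validator uses: dashed, spaced, or unbroken 16 digits.
                var pattern = @"^(\d{4}-){3}\d{4}$|^(\d{4} ){3}\d{4}$|^\d{16}$";
                var candidates = new[] { "1234-5678-9012-3456", "1234 5678 9012 3456", "1234567890123456", "1234-5678" };
                foreach (var candidate in candidates)
                {
                    // Prints True for the first three and False for the last one.
                    Console.WriteLine("{0} => {1}", candidate, Regex.IsMatch(candidate, pattern));
                }
            }
        }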

    Read the article

  • Silverlight Cream for June 15, 2010 -- #882

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Zoltan Arvai, Marcel du Preez, Mark Tucker, John Papa, Phil Middlemiss, Andy Beaulieu, and Chad Campbell.

    From SilverlightCream.com:

    • Throttling Silverlight Mouse Events to Keep the UI Responsive: Colin Eberhardt sent me this link to his latest at Scott Logic... about how to throttle Silverlight -- no, not that, you'd have to go to one of the *other* blogs for that :) ... this is throttling the mouse, particularly the mouse wheel, to keep the UI from freezing up... check out the demos, you'll want to read the code.
    • Data Driven Applications with MVVM Part I: The Basics: Zoltan Arvai started a series of tutorials on data-driven applications with MVVM at SilverlightShow... this is number 1, and it looks like it's going to be a good series to read.
    • Red-To-Green scale using an IValueConverter: Marcel du Preez has an interesting post up at SilverlightShow using an IValueConverter to do a red/yellow/green progress bar... this is pretty cool.
    • Infragistics XamWebOutlookBar & Caliburn: With assistance from Rob Eisenberg, Mark Tucker was able to build a Caliburn sample including the Infragistics XamWebOutlookBar, and he's sharing his experience (and code) with all of us.
    • Printing Tip – Handling User Initiated Dialogs Exceptions: John Papa responded to a common printing problem by writing it up in his blog. Note this problem quite often appears during debug, so check it out... John also has a quick tip on an update to the Print API in Silverlight 4.
    • Automatic Rectangle Radius X and Y: Phil Middlemiss has another great Blend post up -- this one on rounding off buttons... they look great to me, but he's looking for advice -- how about that, Phil? They look great to me :)
    • WP7 Back Button in Games: Planning on selling 'stuff' in the Windows Phone Marketplace? Are you familiar with the required use of the Back button? How about in a game? Andy Beaulieu discusses all this and has some code you'll want to use.
    • Windows Phone 7 – Call Phone Number from HyperlinkButton: Chad Campbell [no relation :)] is discussing dialing a number from a hyperlink in WP7 - oh yeah, it's a phone as well :) -- I think I've only seen a number attempt to be called -- hmm... and we're not there yet either, because we all have emulators, but this is a good intro to the functionality for when we may actually have devices!

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Silverlight Cream for November 08, 2011 -- #1165

    - by Dave Campbell
    In this Issue: Brian Noyes, Michael Crump, WindowsPhoneGeek, Erno de Weerd, Jesse Liberty, Derik Whittaker, Sumit Dutta, Asim Sajjad, Dhananjay Kumar, Kunal Chowdhury, and Beth Massi.

    Above the Fold:
    • Silverlight: "Working with Prism 4 Part 1: Getting Started" by Brian Noyes
    • WP7: "Getting Started with the Coding4Fun toolkit Tile Control" by WindowsPhoneGeek
    • LightSwitch: "How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch" by Beth Massi

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    • Working with Prism 4 Part 1: Getting Started: Brian Noyes has a series starting at SilverlightShow about Prism 4... this is the first one, so a good time to jump in and pick up an intro and basic info about Prism, plus building your first Prism app.
    • 10 Laps around Silverlight 5 (Part 5 of 10): Michael Crump has part 5 of his 10-part Silverlight 5 investigation up at SilverlightShow, talking about all the various text features added in the Silverlight 5 beta: text tracking and leading, linked and multi-column text, OpenType, etc.
    • Getting Started with the Coding4Fun toolkit Tile Control: WindowsPhoneGeek takes on the Tile control from the Coding4Fun toolkit... as usual, a great tutorial... diagrams, code, explanation.
    • Using AppHarbor, Bitbucket and Mercurial with ASP.NET and Silverlight – Part 2 CouchDB, Cloudant and Hammock: Erno de Weerd has part 2 of his trilogy, and he's trying to beat David Anson for the long title record :) ... in this episode, he's adding cloud storage to the mix in a 35-step tutorial.
    • Background Audio: Jesse Liberty's talking about background audio... and no, not the Muzak in the elevator (do they still have that?)... he's talking about the WP7.1 BackgroundAudioPlayer.
    • Using the ToggleSwitch in WinRT/Metro (for C#): Derik Whittaker shows off the ToggleSwitch for WinRT/Metro... not a lot to be said about it, but he says it all :)
    • Part 19 - Windows Phone 7 - Access Phone Contacts: Sumit Dutta has part 19! of his WP7 series up... talking today about getting a phone number from the directory using the PhoneNumberChooserTask.
    • ContextMenu using MVVM: Asim Sajjad shows how to make the ContextMenu ViewModel-friendly in this short tutorial.
    • Code to make call in Windows Phone 7: Dhananjay Kumar's latest WP7 post explains how to make a call programmatically using the PhoneCallTask launcher.
    • Silverlight Page Navigation Framework - Basic Concept: Kunal Chowdhury has a 3-part tutorial series on Silverlight navigation up. This is the first in the series, and he hits the basics... what constitutes a Page, and how to get started with the navigation framework.
    • How to Connect to and Diagram your SQL Express Database in Visual Studio LightSwitch: Beth Massi's latest LightSwitch post is on using the Data Designer to easily create and model database tables... during development this is in SQL Express, but it can be deployed to most any SQL Server database you like.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In PerformancePoint Services for SharePoint Server 2007, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is.

    PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization.

    Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.

    New PerformancePoint Services features: PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.

    New features and enhancements of SharePoint 2010 PerformancePoint Services:

    • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines the security model, simplifying access to report data.
    • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports, and ultimately in dashboards.
    • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated-KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
    • Better Time Intelligence filtering capabilities let you create dynamic time filters that are always up to date. Other improved filters make it easier for dashboard users to quickly focus on the information that is most relevant.
    • The ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.
    • Easier authoring and publishing of dashboard items using Dashboard Designer.
    • SQL Server Analysis Services 2008 support.
    • Increased support for accessibility compliance in individual reports and scorecards.
    • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.
    • Analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

    To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and the associated free SharePoint web parts and templates.

    Read the article

  • Review of ComponentOne Silverlight Controls (Free License Giveaway).

    - by mbcrump
    ComponentOne has several great products that target Silverlight developers. One of them is their Silverlight Controls and the other is the XAP Optimizer. I decided that I would check out the controls and XAP Optimizer and feature them on my blog. After talking with ComponentOne, they agreed to take part in my monthly Silverlight giveaway. The details are listed below.

    Win a FREE developer's license of ComponentOne Silverlight Controls + XAP Optimizer! (The winner also gets a license to Silverlight Spy.) A random winner will be announced on March 1st, 2011! To be entered into the contest, do the following things:
    • Subscribe to my feed.
    • Leave a comment below with a valid email account (I WILL NOT share this info with anyone.)
    • Retweet the following: I just entered to win free #Silverlight controls from @mbcrump and @ComponentOne http://mcrump.me/fTSmB8 ! Don't change the URL, because this allows me to track the users that tweet this page.
    Don't forget to visit ComponentOne, because they made this possible. MichaelCrump.Net provides Silverlight giveaways every month. You can also see the latest giveaway by bookmarking http://giveaways.michaelcrump.net .

    Before we get started with the Silverlight Controls, here are a couple of links to bookmark: the live demos of the Silverlight Controls are located here, and the XAP Optimizer page is here. One thing that I liked about the help documentation is that you can grab a PDF that only contains the documentation for that control. This allows you to get the information you need without going through several hundred pages. You can also download the full documentation from their site.

    ComponentOne Silverlight Controls: I recently built a hobby project and decided to use ComponentOne Silverlight Controls. The main reason for this is that the controls are heavily documented, they look great, and getting help was just a tweet or forum click away. So, the first question that you may ask is, "What is included?" Here is the official list; I wanted to show several of the controls that I think developers will use the most.
    1) ComponentOne's Image Control – Display animated GIF images on your Silverlight pages as you would in traditional Web apps. Add attractive visuals with minimal effort.
    2) HTML Host – Render HTML and arbitrary URI content from within Silverlight.
    3) Chart3D – Create 3D surface charts with options for contour levels, zones, a chart legend and more.
    4) PDFViewer – View PDF files in Silverlight!
    That is just a fraction of the controls available. If you want to check out several of them in a "real" application, then check out my Silverlight page at http://michaelcrump.info. This brings me to the second part of the giveaway.

    XAP Optimizer – It is designed to reduce the size of your XAP file, and it also includes built-in obfuscation and signing. With my personal project, I decided to use the XAP Optimizer by ComponentOne. It was so easy to use: you basically give it your .XAP file and it produces an output file. If you prefer to prune unused references manually, you can do so by selecting the option below. I went ahead and added obfuscation just to try it out, and it worked great. You may notice from the screenshot below that I only obfuscated the assemblies that I built. The other DLLs anyone can grab off the net, so we have no reason to obfuscate them. You also have the option to automatically sign your .xap with the SN.exe tool. So how did it turn out? Well, I reduced my XAP size from 2.4 MB to 1.8 MB with simply the click of a button, and I added obfuscation with the click of a button:

    Screenshot of no obfuscation on my XAP file.
    Screenshot of obfuscation on my XAP file with XAP Optimizer.

    So, with two button clicks, I reduced my XAP file and obfuscated my assembly. What else could you want? Well, they provide a nice HTML report that gives you an optimization summary. So what if you don't want to launch this tool every time you deploy a Silverlight application? The official documentation provides a way to do it in your build event in Visual Studio: click the Build Events tab on the left side of the Properties window, then enter the following command in the Post-build event command line:

        "$(ProgramFiles)\ComponentOne\XapOptimizer\XapOptimizer.exe" /cmd /p:$(ProjectDir)$(ProjectName).xoproj

    In the end, this is a great product. I love code that I don't have to write and utilities that just work. ComponentOne delivers with both the Silverlight Controls and the XAP Optimizer. Don't forget to leave a comment below in order to win a set of the controls! Subscribe to my feed

    Read the article

  • Using Dependency Walker

    - by Valter Minute
    Dependency Walker is a very useful tool that can be used to find the dependencies of a Portable Executable module. The PE format is also used on Windows CE, which means that Dependency Walker can also be used to analyze Windows CE/Windows Embedded Compact modules. On Win32 it can also be used to monitor the modules loaded by an application at runtime; this feature is not supported on CE. You can download Dependency Walker for free here: http://dependencywalker.com/. To analyze the dependencies of a Windows CE/Windows Embedded Compact 7 module, you can just open it using Dependency Walker. If you want to check whether a specific module can run on a Windows CE/Windows Embedded Compact 7 OS image, you can copy the executable into the same directory that contains your OS binaries (FLATRELEASEDIR). In this way Dependency Walker will highlight missing DLLs or missing entry points inside existing DLLs. Let's do a quick sample. You need to check if myapp.exe (an application from a third party) can run on an image generated with your Test01 OS design. Copy myapp.exe to the flat release directory of your OS design. Launch depends.exe and use the File\Open option of its main menu to open the application executable file you just copied. You may receive an error if some of the modules required by your application are missing. Before you analyze the module dependencies, it is important to configure Dependency Walker to check DLLs in the same folder where your application file is stored. This is needed because some Windows CE DLLs have the same names as Win32 system DLLs but different entry points. To configure the DLL search path, select "Options\Configure Module Search Order…" from Dependency Walker's main menu. Select "The application directory" from the "Current Search Order" list and move it to the top of the list using the "Move Up" button. The system will ask to refresh the window contents to reflect your configuration change; click "Yes" to proceed. Now you can inspect the dependencies of myapp.exe. Some DLLs are missing (XAMLRUNTIME.DLL and TILEENGINE.DLL), and OLE32.DLL exists but does not export the "CoInitialize" entry point that is required by myapp.exe. The bad news is that myapp.exe will not run on your OS image; the good news is that now you know what's missing, and you can add the required modules to your OS design and fix the problem!

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources; we can provision and deploy an application or service with multiple instances over multiple machines. With the increase in service instances, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

    Dispatcher Mode

    The dispatcher mode introduces a role that takes responsibility for finding the best service instance to process each request. The image below describes the shape of this mode. There are four clients communicating with the service through the underlying transport. For example, if we are using HTTP, the clients might be connecting to the same service URL. On the server side there's a dispatcher listening on this URL that tries to retrieve all messages. When a message comes in, the dispatcher will find a proper service instance to process it. There are three mechanisms for finding the instance:
    • Round-robin: The dispatcher always sends the message to the next instance. For example, if the dispatcher sent the last message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment.
    • Random: The dispatcher picks a service instance randomly, and as with round-robin it ignores whether the instance is busy or not.
    • Sticky: The dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based.

    But as you can see, none of these approaches is really load balanced. The clients send messages at any time, and each message might take a different processing duration on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle. This brings some problems to our architecture:
    • The response to the clients might be longer than it should be. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the dispatcher and round-robin mode.
    • If many requests come from the clients in a very short period, service instances might be filled with tons of pending tasks, and some instances might crash.
    • If we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage. This means that if any service instance is idle, it is wasting our money!
    • The dispatcher is the bottleneck of our system, since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer. If we want more capacity, we have to scale up, or buy a hardware load balancer, which is very expensive, as well as scaling out the service instances.

    Pulling Mode

    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen to the same transport and try to retrieve the next pending message to process when they are idle.
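    As a rough sketch of the idea (my own illustration, not code from the article; the queue abstraction and Process method are hypothetical placeholders), each self-managed instance simply runs the same loop against the shared queue:

        // Hypothetical bus abstraction: Receive blocks until a message is available.
        interface IMessageQueue
        {
            string Receive();
        }

        static class PullingInstance
        {
            // Every instance runs this same pump; the load balances itself because
            // only an idle instance asks the queue for the next pending message.
            public static void Pump(IMessageQueue queue)
            {
                while (true)
                {
                    string message = queue.Receive(); // wait for the next message
                    Process(message);                 // handle it, then loop for more
                }
            }

            static void Process(string message)
            {
                // business logic for one message goes here
            }
        }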
    Since there is no dispatcher in pulling mode, it requires some features of the transport:
    • The transport must support multiple client connections and server listening. HTTP and TCP don't allow multiple clients to listen on the same address and port, so they cannot be used in pulling mode directly.
    • All messages in the transport must be FIFO, which means the oldest message must be received before the newer ones.
    • Message selection would be a plus. This means both service and client can specify selection criteria and receive only specified kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

    A message bus, or message queue, is the best candidate as the transport when using pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary that can store the reply messages, for example Redis.

    The principle of pulling mode is to let the service instances be self-managed. This means each instance will try to retrieve the next pending incoming message once it has finished the current task. This gives us more benefits and solves the problems we met in dispatcher mode:
    • Each incoming message is received by the best instance to process it, which means the load is well balanced. It will not happen that some instances are busy while others are idle, since an idle instance will retrieve more tasks to keep itself busy.
    • Since all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost effective.
    • Since there's no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something to let the dispatcher know about the new instances. But in pulling mode, since all service instances are self-managed, there is no extra change at all.
    • If there are many incoming messages, since the message bus can queue them in the transport, service instances will not crash.

    All of the above are benefits of using the pulling mode, but it introduces some problems as well:
    • Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process a given message, so we need more information to support debugging and tracking.
    • Real-time response may not be supported. All service instances process the next message only after the current one is done; if we have real-time requests, this may not be a good solution.

    Comparing the pros and cons above, the pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost effectiveness and self-management.

    WCF and WCF Transport Extensibility

    Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I don't want to go deep into every related field of WCF, but I will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects we need to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent to the underlying transport; on the other side, the message is received from the transport and passed through the same channels until it reaches the business logic. This model is called the "channel stack" in WCF, and the last channel in the channel stack must always be a transport channel, which takes responsibility for sending and receiving the messages. As we are going to implement WCF over a message bus and implement the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

    Before we dig into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs, which are Datagram, Request-Reply and Duplex:
    • Datagram: Also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply from the service is needed. The client doesn't care about the message's result at all.
    • Request-Reply: A very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service.
    • Duplex: The client sends a message to the service, and while the service is processing the message it can call back to the client. During the callback the service acts like a client and the client acts like a service.

    In WCF, each MEP has associated channels:

        MEP             Channels
        Datagram        IInputChannel, IOutputChannel
        Request-Reply   IRequestChannel, IReplyChannel
        Duplex          IDuplexChannel

    The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figures in the original post show the transport channel objects when using the request-reply, datagram and duplex MEPs.

    After investigating the WCF transport architecture, channel model and MEPs, we have finally identified what we should do to extend our message-bus-based transport layer:
    • Binding: (Optional) Defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. But we can use the built-in CustomBinding as well.
    • TransportBindingElement: Defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint when using this transport.
    • ChannelListener: Creates the server-side channel based on the MEP in use. We can have one ChannelListener create channels for all supported MEPs, or a ChannelListener for each MEP. In this series I will use the second approach.
    • ChannelFactory: Creates the client-side channel based on the MEP in use. We can have one ChannelFactory create channels for all supported MEPs, or a ChannelFactory for each MEP. In this series I will use the second approach.
    • Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support request-reply mode, we should implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one.
    • Scaffold: In order to make our transport extension work, we also need to implement some scaffolding. For example, we need some classes to send and receive messages through our message bus, and some code to read and write the WCF message, etc. These are not strictly necessary but will be very useful in our example.

    Message Bus

    There is only one thing remaining before we can begin to implement our scaling-out-capable WCF transport, which is the message bus. As I mentioned above, the message bus must have certain features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product, and as I said before, we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF. This allows us to implement the bus operations with any kind of bus we choose to use. The interface looks like this:

        public interface IBus : IDisposable
        {
            string SendRequest(string message, bool fromClient, string from, string to = null);

            void SendReply(string message, bool fromClient, string replyTo);

            BusMessage Receive(bool fromClient, string replyTo);
        }

    There are only three methods on the bus interface. Let me explain them one by one. The SendRequest method takes responsibility for sending a request message onto the bus. Its parameters are:
    • message: The WCF message content.
    • fromClient: Indicates whether this message came from the client.
    • from: The ID of the channel this message was sent from. The channel ID is generated when any kind of channel is created, which will be explained in the following articles.
    • to: The ID of the channel that should receive this message. In the Request-Reply and Duplex MEPs this is necessary, since the reply message must be received by the channel that sent the related request message.

    The SendReply method takes responsibility for sending a reply message. It's very similar to the previous one but has no "from" parameter, because there is no need to reply to a reply message in any MEP.

    The Receive method takes responsibility for waiting for an incoming message, including request messages and specified reply messages. It returns a BusMessage object, which carries the channel information. The code of the BusMessage class is:

        public class BusMessage
        {
            public string MessageID { get; private set; }
            public string From { get; private set; }
            public string ReplyTo { get; private set; }
            public string Content { get; private set; }

            public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
            {
                MessageID = messageId;
                From = fromChannelId;
                ReplyTo = replyToChannelId;
                Content = content;
            }
        }
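    To make the contract concrete, here is a minimal sketch (my own illustration, not from the original article) of how a service-side pump might pair Receive with SendReply; the channel ID is a made-up placeholder:

        // Wait for client requests and reply to whichever channel sent each one.
        // "bus" is any IBus implementation, e.g. the InProcMessageBus shown below.
        static void ServicePump(IBus bus, string serviceChannelId)
        {
            while (true)
            {
                // fromClient = true: only pick up messages that came from clients.
                BusMessage request = bus.Receive(true, serviceChannelId);

                string reply = "processed: " + request.Content;

                // fromClient = false: the reply originates from the service side,
                // addressed back to the channel that sent the request.
                bus.SendReply(reply, false, request.From);
            }
        }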
    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes; it can only be used if the service and client are in the same process. Very straightforward:

        public class InProcMessageBus : IBus
        {
            private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
            private readonly object _lock;

            public InProcMessageBus()
            {
                _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
                _lock = new object();
            }

            public string SendRequest(string message, bool fromClient, string from, string to = null)
            {
                var entity = new InProcMessageEntity(message, fromClient, from, to);
                _queue.TryAdd(entity.ID, entity);
                return entity.ID.ToString();
            }

            public void SendReply(string message, bool fromClient, string replyTo)
            {
                var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
                _queue.TryAdd(entity.ID, entity);
            }

            public BusMessage Receive(bool fromClient, string replyTo)
            {
                InProcMessageEntity e = null;
                while (true)
                {
                    lock (_lock)
                    {
                        var entity = _queue
                            .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                            .FirstOrDefault();
                        if (entity.Key != Guid.Empty && entity.Value != null)
                        {
                            _queue.TryRemove(entity.Key, out e);
                        }
                    }
                    if (e == null)
                    {
                        Thread.Sleep(100);
                    }
                    else
                    {
                        return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                    }
                }
            }

            public void Dispose()
            {
            }
        }

    The InProcMessageBus stores the messages as InProcMessageEntity objects, which carry some extra information besides the WCF message itself:

        public class InProcMessageEntity
        {
            public Guid ID { get; set; }
            public string Content { get; set; }
            public bool FromClient { get; set; }
            public string From { get; set; }
            public string To { get; set; }

            public InProcMessageEntity()
                : this(string.Empty, false, string.Empty, string.Empty)
            {
            }

            public InProcMessageEntity(string content, bool fromClient, string from, string to)
            {
                ID = Guid.NewGuid();
                Content = content;
                FromClient = fromClient;
                From = from;
                To = to;
            }
        }

    Summary

    OK, now I have all the necessary pieces ready. The next step is implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side, especially relevant if we are using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel model and transport extension points, and identified what we should do to create our own WCF transport extension to let our WCF services use pulling mode on top of a message bus. Finally, I provided some classes, to be used in the future posts, that work against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure Evolution – TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure and the second part on Windows Azure Web Sites. But it is not focused solely on WAWS, since the function I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website in various ways. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it is deployed automatically once a new commit or build is available.

    Create New Empty Web Site

    In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This creates an empty web site with only one shared instance and no database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and now we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available in my website even though I didn't upload or deploy anything. This means the website is charged even before anything is deployed, similar to a cloud service (hosted service), because once we create a website, the Windows Azure platform arranges a hosting process (w3wp.exe) on its group of virtual machines.

    Create a Project in the TFS Preview Service and Set Up the Link

    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", a dialog helps us connect to the service. If you don't have an account, you can click the link shown below to request one. Assuming we already have an account for the TFS service, we first need to create a new project. Go to your TFS service website and create a new project, giving the project name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window, provide your TFS service URL and click the "Authorize now" link. Click the "Accept" button to allow Windows Azure to connect to the TFS service. It then returns to the developer portal and lists all projects in the account. Just select the one we created and click OK. Our website is now linked to the specified TFS project, and finally the portal shows something similar to the image below, which means the web site has been linked to TFS successfully.

    Work with the TFS Preview Service in VS2010

    In the figure above there are some links to guide us in connecting to the TFS server through Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC, you don't need any extension, but if you are using Visual Studio 2010, you must have SP1 and KB2581206 installed. To connect to my TFS service I just open Visual Studio and, in Team Explorer, add a new TFS server, pasting in the URL of my TFS service from the developer portal. Then I select the project I just created, and it is listed in my Team Explorer. Now let's start to build our website.
    Since the website we are going to build will be deployed to WAWS, it's NOT a cloud service and NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select "Add Solution to Source Control", and select the project we just created. Then check the code in. Once the check-in finishes, we can see that there is a build running on the TFS server, and if we go back to the developer portal, we will see a deployment running on our web site's deployment page. In fact, once we link our web site to TFS, it creates a new build definition in our TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so when our code has been compiled it is published to our web site from the TFS server. Once the build and deployment finish, we can see that the deployment is now active in the developer portal, and we can browse the web site that was created in Visual Studio and deployed by TFS.

    Continuous Deployment through VS and TFS

    A big benefit of TFS publishing is continuous deployment. Now if I change some code in Visual Studio, for example update some text on the home page, and check in my changes, a new build is triggered and deployed to my WAWS automatically. And even better, if we want to roll back to a previous version, we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site use Storage and work with a Worker Role?

    Stacy asked a question on my previous post: "Can a web site use Windows Azure Storage, and furthermore work with a worker role?" Since a web site is deployed on the Windows Azure virtual machines in the data center, it must be able to use all Windows Azure features, such as storage, SQL databases, CDN, etc. But since with a web site we normally have a standard ASP.NET web application, PHP website or Node.js application, the Windows Azure SDK is not referenced by default. We can add it ourselves, though. In our sample project, let's right-click on the MVC project and click "Manage NuGet packages". In the dialog, search for Windows Azure packages and select "Windows Azure Storage" to install. Then we will have the assemblies to access Windows Azure storage: tables, queues and blobs. Since I already have a storage account, let's have a quick demo, just listing all blobs in a container. The code would be like this:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        namespace WAASTFSDemo.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    ViewBag.Message = "Welcome to Windows Azure!";

                    var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                    var account = new CloudStorageAccount(credentials, false);
                    var client = account.CreateCloudBlobClient();
                    var container = client.GetContainerReference("shared");
                    ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                    return View();
                }

                public ActionResult About()
                {
                    return View();
                }
            }
        }

        @{
            ViewBag.Title = "Home Page";
        }

        <h2>@ViewBag.Message</h2>
        <p>
            To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
        <div>
            <ul>
            @foreach (var blob in ViewBag.Blobs)
            {
                <li>@blob</li>
            }
            </ul>
        </div>

    Then just check in the code, and it will be deployed to the web site; finally we can see the blobs from my storage account. This is just an example, but it proves that web sites can connect to storage: tables, blobs and queues as well. So the answer to Stacy is "yes": a web site can use queue storage to work with a worker role.

    Summary

    In this post I demonstrated how to integrate with TFS from Windows Azure Web Sites. You can see that our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. And it's not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, very similar to what is shown above. Currently, only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link the TFS in our enterprise, and some third-party TFS hosts such as CodePlex, to Windows Azure.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Virtualize a 64 bit guest OS on 32 bit host OS

    - by Manesh Karunakaran
    If you want to run a 64-bit virtual machine on a 32-bit host, you have two options:
    1. VMware Server (or a Workstation version that supports 64-bit guests)
    2. Sun VirtualBox
    Though 64-bit guests on 32-bit hosts are possible, it requires that you are running x64 (not IA-64) hardware. All the new Intel processors are 64-bit compatible (if you have a T5200/T550 on your laptop, you are out of luck). VMware has a free tool you can download to check whether your machine can run 64-bit guests. Microsoft Virtual PC and Microsoft Windows Virtual PC do not support 64-bit guests. Also, Hyper-V will run only on a 64-bit host. So if you are looking for a Microsoft solution, then tough luck! Technorati Tags: Virtualization, 64 bit guest OS on 32 bit host, VirtualBox, VMWare, VirtualPC, Windows Virtual PC, Hyper-V

    Read the article

  • O’Reilly Deal of the Day 10/June/2014 - AngularJS Directives

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/10/orsquoreilly-deal-of-the-day-10june2014---angularjs-directives.aspx

    Today’s half-price E-Book offer from O’Reilly at http://shop.oreilly.com/product/9781783280339.do is AngularJS Directives. “AngularJS, propelled by Google, is quickly becoming one of the most popular JavaScript MVC frameworks available, working to invert the development paradigm and bring data-driven modularity to the web frontend. Directives serve as the core building blocks in AngularJS and enable you to create reusable models that mold around your data structures and breathe new life into the intersection of HTML and JavaScript.”

    Read the article

  • Rouen Business School builds its entire back office UI with Visual WebGui

    - by Webgui
    Two years ago, Rouen Business School (an AMBA-accredited institution located in Rouen, Normandy, France) decided to develop and implement a proprietary information system in-house. The objective was to administer all the data encompassed by a classic 3,500-student business school: from on-line application forms to the registration system, including financial information, scheduling, grades management, etc. The development team at Rouen Business School chose Visual WebGui for the UI. “When we tested the Visual WebGui solution we were really amazed and enthusiastic. It was exactly the kind of solution we were looking for… The great performance of the solution allows us to manage a large amount of information with no delay with a very positive feedback at the user end,” said Stéphane Henry, the IT project manager of the school. As a result of the fast development, easy deployment, performance, and professional design that the team experienced with Visual WebGui, the entire back office of the Rouen Business School information system was developed with the Visual WebGui framework, “and after two years we do not see any reason to change this,” commented Stéphane Henry, who added that “all the original requirements were satisfied using Visual WebGui.” You can read the full case study here >

    Read the article

  • Silverlight Cream for April 20, 2010 -- #842

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Svetla Stoycheva, Alexey Zakharov, Chris Rouw, David Anson (x2), Bill Reiss, John Papa and Adam Kinney, Chris Klug, CorrinaB, and Mike Snow.

    Shoutouts: Pete Brown interviewed David Kelley at MIX10: Pete at MIX10: David Kelley on the Prototype WPF and Silverlight Retail Experience. Pete Brown also interviewed Emil Stoychev at MIX10: Pete at MIX10: Emil Stoychev on the CompletIT Silverlight Site. SilverlightShow has a MIX10 review by SilverlightShow live reporter Cigdem Patlak. SilverlightShow also has an interview with SilverlightShow article author Andrej Tozon.

    From SilverlightCream.com:

    • Implementing Push Notifications in Windows Phone 7: Zoltan Arvai has a post up on SilverlightShow discussing push notification on WP7... what it is, and how to use it.
    • Completit.com - the challenges behind building a corporate website in Silverlight: Svetla Stoycheva shows off the new CompletIT corporate website, which is pretty darn cool, and discusses some of the challenges and solutions.
    • Introducing to Halcyone - Silverlight Application Framework: Silverlight Rest Extensions: Alexey Zakharov has a tutorial up on a Silverlight application framework he's working on called Halcyone, which is available on CodePlex.
    • Using the Tag Property during Silverlight Binding: Chris Rouw details his SL3 to SL4 conversion, some issues he had, and how he was able to resolve a binding problem using the Tag property.
    • Using ContextMenu to implement SplitButton and MenuButton for Silverlight (or WPF): David Anson has a cool discussion of using the ContextMenu code he put up previously to build a split button, and includes all the code as usual.
    • Silverlight/WPF Data Visualization Development Release 4 and Windows Phone 7 Charting sample!: David Anson updated his Data Visualization because of the new releases, and this time he's including WP7... charting in WP7!
    • Space Rocks game step 10: More fun with rocks: In episode 10, Bill Reiss shows how to deal with multiple asteroids and all their interaction.
    • Silverlight Training Course (Silverlight 4): Get your serious Silverlight 4 mojo on with a new SL4 training kit on Channel 9... a bunch of folks, spearheaded (it looks like) by John Papa and Adam Kinney.
    • Plug-ins and composite applications in Silverlight – pt 3: Chris Klug is back with part 3 of his series on extensions and plug-in loading. So far he's covered a roll-your-own concept and MEF; now he digs into Prism.
    • Transitions, Animations, and Effects with Blend - Part One: How cool to have CorrinaB speak at your user group meeting! She did just that in Portland, and instead of simply dropping a deck and some code on her blog, she's giving the run-down on her presentation... always good stuff, Corrina!
    • Tip of the Day #110 – Using Static Resources in Class Libraries: Mike Snow's latest tip is about how to create and use a Resource Dictionary.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • South African .Net Bloggers

    - by MarkPearl
    Where would I be without the inspiration of the following South African developers, who are constantly contributing to the .NET community:
    • Robert MacLean
    • Hilton Giesenow
    • Rudi Grobler
    • Zayd Kara
    • Zlatan Dzinic
    • Dave Coates
    As well as the great input we get from the local Microsoft people.

    Read the article

  • Expert F# – Pattern Matching with Adam and Eve

    - by MarkPearl
    So I am loving my Expert F# book. I wish I had more time with it, but the little time I get I really enjoy. However today I was completely stumped by what the book was trying to get across with regards to pattern matching. On Page 38 – Chapter 3, it briefly describes F# option values. On this page it gives the code snippet along the code lines below and then goes on to speak briefly about pattern matching... open System type 'a option = | None | Some of 'a let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ] let showParents(name, parents) = match parents with | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum | None -> printfn "%s has no parents!" name Console.WriteLine(showParents("Adam", None))   Originally when I read this code I think I misunderstood the purpose of the example code. I for some reason thought that the showParents function would magically be parsing the people array and looking for a match of name and then showing the parents. But obviously it cannot do this since there is no reference to the people array in the showParents method. After rereading the page I realized that I had just combined the two segments of code together, possibly incorrectly, and that a better example would have been to have a code snippet like the following. let showParents(name, parents) = match parents with | Some(dad, mum) -> printfn "%s has father %s, mother %s" name dad mum | None -> printfn "%s has no parents!" name Console.WriteLine(showParents("Adam", None)) Console.WriteLine(showParents("Cain", Some("Adam", "Eve"))) Console.ReadLine()   However, what if I wanted to have a function that was passed a list of people and a name would then show the parents of the name if there were any, and if not would show that they had no parents… so that doesnt seem to difficult does it… lets look at my very unoptimized noob F# code to try and achieve this… open System let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ] // // returns the name of the person // let showName(person : string * (string * string) option) = let name = fst(person) name // // Returns a string with the parents details or not // let showParents(itemData : string * (string * string) option) = let name = fst(itemData) let parents = snd(itemData) match parents with | Some(dad, mum) -> "Father " + dad + " and Mother " + mum | None -> "Has no parents!" // // Prints the details // let showDetails(person : string * (string * string) option) = Console.WriteLine(showName(person)) Console.WriteLine(showParents(person)) // // Check if the name matches the first portion of person // if so, return true, else return false // let nameMatch(name : string , person : string * (string * string) option) = match name with | x when x = fst(person) -> true | _ -> false // // Searches an array of people and looks for a match of names // let findPerson(name : string, people : (string * (string * string) option) list) = let o = Seq.tryFind(fun x -> nameMatch(name, x)) people if Option.isSome o then o else Option.None // // Try and find a person, if found show their details // else show no match // let FoundPerson = findPerson("Cain", people) match FoundPerson with | None -> Console.WriteLine("Not found") | Some(x) -> showDetails(x) Console.ReadLine() So, my code isn’t the cleanest but it did teach me a bit more F#. The area that I learnt about was the option keyword. 
    The challenge was to react accordingly both when no match for the name is found, and when a name is found but the person doesn't have parents. I'm pretty sure I can optimize this code quite a bit more, and I think I may come back to it sometime in the future and relook at it, but for now at least I was able to achieve what I wanted... and my brain has gone just that wee little bit more functional.
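    For comparison, here is a rough C# analogue of the same lookup. This is a sketch of my own, not from the book, using a null Tuple reference to play the role of F#'s None (the type shapes and names here are my assumptions):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                // Parents are modelled as a nullable reference: null stands in for None.
                var people = new List<Tuple<string, Tuple<string, string>>>
                {
                    Tuple.Create("Adam", (Tuple<string, string>)null),
                    Tuple.Create("Eve", (Tuple<string, string>)null),
                    Tuple.Create("Cain", Tuple.Create("Adam", "Eve")),
                    Tuple.Create("Abel", Tuple.Create("Adam", "Eve"))
                };

                // FirstOrDefault returns null when no name matches, mirroring Seq.tryFind.
                var person = people.FirstOrDefault(p => p.Item1 == "Cain");

                if (person == null)
                    Console.WriteLine("Not found");
                else if (person.Item2 == null)
                    Console.WriteLine("{0} has no parents!", person.Item1);
                else
                    Console.WriteLine("{0} has father {1}, mother {2}",
                        person.Item1, person.Item2.Item1, person.Item2.Item2);

                Console.ReadLine();
            }
        }

    The null checks do the job, but they lose the exhaustiveness guarantee that the F# match expression gives you, which is really the point of the option type.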

    Read the article

  • Autoscaling in a modern world&hellip;. Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don't find the setup too practical. Using a local console application to host the Autoscaler and rules files is probably the (IMHO) least likely architecture. Far more common would be hosting in a service on premise (if you want to keep the Autoscaler local) or, most likely, hosting it in an Azure role of its own. I chose to go the Azure route.

    First step was to get the rules.xml and the services.xml files into the cloud. I tend to be a "one step at a time" sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed to be the logical first step. Here are the steps:

    1) Create a container in the storage account you wish to use. The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.

    2) Copy the two files from where you created them to your container (if you prefer to script this step rather than use a storage tool, see the code sketch at the end of this post). I used the same files I had locally. I made the container public to eliminate security issues, but in the final application a bit of security needs to be applied (one problem at a time). The content type was set to text/xml. I found one reference claiming the importance of this step, and it makes sense.

    3) Adjust the app.config to set the location of the files. This will let you set all the storage account and key information needed to reach into the cloud from your console application. The sections of your app.config will look like this:

        <rulesStores>
          <add name="Blob Rules Store"
               type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]"
               blobName="rules.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30"
               certificateThumbprint=""
               certificateStoreLocation="LocalMachine"
               checkCertificateValidity="false" />
        </rulesStores>

        <serviceInformationStores>
          <add name="Blob Service Information Store"
               type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]"
               blobName="services.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30"
               certificateThumbprint=""
               certificateStoreLocation="LocalMachine"
               checkCertificateValidity="false" />
        </serviceInformationStores>

    Once I had the files up in the sky, I renamed the local copies just to make myself feel better about the application using the correct set of rules and services. Deploy the web role to the cloud. Once it is up and running, start the console application. You should find the application scales up and down in response to the buttons on the web site. Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information through diagnostics into storage, and a set of discussions about certs and how they play a role.
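    Here is the code sketch promised in step 2. This is my own illustration, assuming the v1.x Microsoft.WindowsAzure.StorageClient library from the Azure SDK of that era, with the same placeholders as the app.config above:

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class UploadAutoscalerFiles
        {
            static void Main()
            {
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]");
                var client = account.CreateCloudBlobClient();

                var container = client.GetContainerReference("[ContainerName]");
                container.CreateIfNotExist();

                // Public access just to keep the walkthrough simple; lock this down later.
                container.SetPermissions(new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });

                foreach (var file in new[] { "rules.xml", "services.xml" })
                {
                    var blob = container.GetBlobReference(file);
                    blob.Properties.ContentType = "text/xml"; // the content type noted in step 2
                    blob.UploadFile(file);
                }
            }
        }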

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to comma-delimited strings and parsing each value out into a TABLE variable. In this post I'll extend the "Customer Transaction Summary" report example to see how we might leverage that same stored procedure from within a SQL Server Reporting Services (SSRS) report. I've worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I've been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let's take the "Customer Transaction Summary" report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

    Start Date – beginning of the date range for which the report will summarize customer transactions
    End Date – end of the date range for which the report will summarize customer transactions
    Customer Ids – one or more customer Ids representing the customers that will be included in the report

    The simplest way to get started with this report is to create a new dataset and point it at our Customer Transaction Summary stored procedure (note that I'm using SSRS 2012, but there should be little to no difference with SSRS 2008). When you initially create this dataset, the SSRS designer will try to invoke the stored procedure to determine the parameters and output fields for you automatically, and as part of this process a dialog pops up prompting for parameter values. Obviously I can't use this dialog to specify a value for the '@customerIds' parameter, since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting an error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense, considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there's little clue given as to how to remedy the situation. I don't know for sure, but I think that behind the scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured that it might just be a limitation of the dataset designer and that I'd be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO.NET, so I figured I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the "Operand type clash" error persisted.

    The Text Query Approach

    Just because we're using a stored procedure to create the dataset for this report doesn't mean that we can't use the 'Text' Query Type option and construct an EXEC statement that will invoke the stored procedure.
    In order for this to work properly, the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let's take a look at what we'll have to do to make this work.

    Manually Define Parameters

    First, we'll need to manually define the parameters for the report by right-clicking on the 'Parameters' folder in the 'Report Data' window. We'll define '@startDate' and '@endDate' as simple date parameters. We'll also create a parameter called '@customerIds' that will be a multi-valued Integer parameter. In the 'Available Values' tab we'll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs.

    Once we have these parameters properly defined, we can take another crack at creating the dataset that will invoke the 'rpt_CustomerTransactionSummary' stored procedure. This time we'll choose the 'Text' query type option and put the following into the 'Query' text area:

        1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
        2:     ' EXEC rpt_CustomerTransactionSummary
        3:         @startDate=''' + @startDate + ''',
        4:         @endDate=''' + @endDate + ''',
        5:         @customerIds=@customerIdList')

    By using the 'Text' query type we can enter any arbitrary SQL that we want and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer's point of view this query defines three parameters:

    @customerIdInserts – a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable being declared in the SQL. This parameter won't actually ever get passed into the stored procedure; I'll go into how this works in a bit.
    @startDate – a simple date parameter that gets passed through directly into the @startDate parameter of the stored procedure on line 3.
    @endDate – another simple date parameter that gets passed through into the @endDate parameter of the stored procedure on line 4.

    At this point the dataset designer will be able to correctly parse the query, and should even be able to detect the fields that the stored procedure will return without needing any values for the query when prompted. Once the dataset has been correctly defined, we'll have a @customerIdInserts parameter listed in the 'Parameters' tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the '@customerIds' parameter we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable defined in our Text query. In order to do this we'll need to add some custom code to our report using the 'Report Properties' dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB.NET. Note that you can also add references to custom .NET assemblies (which could be written in any language), but that's outside the scope of this post, so we'll stick with the "quick and dirty" VB.NET approach for now. Here's the VB.NET code (note that any embedded code you add here must be defined in a static/Shared function, though you can define as many functions as you want):

        Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
            Dim insertStatements As New System.Text.StringBuilder()
            For Each paramValue As Object In paramValues
                insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
            Next
            Return insertStatements.ToString()
        End Function

    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report, we can go back into the dataset designer's Parameters tab and update the expression for the '@customerIdInserts' parameter by clicking on the button with the "function" symbol that appears to the right of the parameter value. We'll set the expression to:

        =Code.BuildIntegerListInserts("@customerIdList", Parameters!customerIds.Value)

    In order to invoke our custom code method we simply invoke "Code.<method name>" and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the '@customerIds' parameter (this evaluates to an object array at run time). Finally, we'll need to edit the properties of the '@customerIdInserts' parameter on the report to mark it as a nullable internal parameter so that users aren't prompted to provide a value for it when running the report.

    Limitations And Final Thoughts

    When I first started looking into the text query approach described above, I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter with a lot of potential values or if you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven't found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there's a really compelling reason or requirement to use table-valued parameters from SSRS reports, I would probably stick with the tried and true "join-multi-valued-parameter-to-CSV-and-split-in-the-query" approach.
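    As an aside, the "special steps" I mentioned for invoking a stored procedure with a table-valued parameter from ADO.NET look roughly like the sketch below. This is my own illustration (the connection string and the single "Value" column of IntegerListTableType are assumptions); the key is a SqlParameter marked as SqlDbType.Structured with the UDT's type name:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class TableValuedParameterExample
        {
            static void Main()
            {
                // A DataTable whose shape matches IntegerListTableType
                // (assumed here to be a single int column).
                var customerIds = new DataTable();
                customerIds.Columns.Add("Value", typeof(int));
                customerIds.Rows.Add(1);
                customerIds.Rows.Add(2);

                using (var connection = new SqlConnection("server=.;database=ReportDemo;Integrated Security=true"))
                using (var command = new SqlCommand("rpt_CustomerTransactionSummary", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@startDate", new DateTime(2012, 1, 1));
                    command.Parameters.AddWithValue("@endDate", new DateTime(2012, 12, 31));

                    // The "special step": a Structured parameter tagged with the UDT name.
                    var tvp = command.Parameters.AddWithValue("@customerIds", customerIds);
                    tvp.SqlDbType = SqlDbType.Structured;
                    tvp.TypeName = "dbo.IntegerListTableType";

                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Consume the summary rows here.
                        }
                    }
                }
            }
        }

    It is exactly this Structured/TypeName pairing that the SSRS designer does not appear to set up for us, which is what forces the text query workaround above.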

    Read the article

  • Backup Azure Tables, schedule Azure scripts&hellip; and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it’s just the beginning?   Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following: * Backup SQL Database (and SQL Server to a limited extend) * Backup Azure Tables * Restore SQL Backups into another SQL environment * Restore Azure Tables in Azure Storage, or SQL Environment * Manage and schedule database maintenance scripts * Drop database schema containers (with preview) for SaaS environments * Receive alerts (SMTP) when operations complete or fail That’s it at a high level… but you need to see the flexibility around these features. For example you can select a specific backup strategy for Azure Tables allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time. So you can execute a backup request to both a local file, and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, backup specific schemas, and even drop all objects in a given schema. For example the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • A Closer Look at the HiddenInput Attribute in MVC 2

    - by Steve Michelotti
    MVC 2 includes an attribute for model metadata called the HiddenInput attribute. The typical usage of the attribute looks like this (line #3 below):

        1: public class PersonViewModel
        2: {
        3:     [HiddenInput(DisplayValue = false)]
        4:     public int? Id { get; set; }
        5:     public string FirstName { get; set; }
        6:     public string LastName { get; set; }
        7: }

    So if you displayed your PersonViewModel with Html.EditorForModel() or Html.EditorFor(m => m.Id), the framework would detect the [HiddenInput] attribute metadata and produce HTML like this:

        1: <input id="Id" name="Id" type="hidden" value="21" />

    This is pretty straightforward and allows an elegant way to keep the technical key for your model (e.g., a primary key from the database) in the HTML so that everything will be wired up correctly when the form is posted to the server, while of course not displaying this value visually to the end user. However, when I was giving a recent presentation, a member of the audience asked me (quite reasonably), "When would you ever set DisplayValue equal to true when using a HiddenInput?" To which I responded, "Well, it's an edge case. There are sometimes when…er…um…people might want to…um…display this value to the user." It was quickly apparent to me (and I'm sure everyone else in the room) what a terrible answer this was. I realized I needed to have a much better answer here.

    First off, let's look at what is produced if we change our view model to use "true" (which is equivalent to specifying [HiddenInput] by itself, since "true" is the default) on line #3:

        1: public class PersonViewModel
        2: {
        3:     [HiddenInput(DisplayValue = true)]
        4:     public int? Id { get; set; }
        5:     public string FirstName { get; set; }
        6:     public string LastName { get; set; }
        7: }

    This will produce the following HTML if rendered from Html.EditorForModel() in your view:

        1: <div class="editor-label">
        2:     <label for="Id">Id</label>
        3: </div>
        4: <div class="editor-field">
        5:     21<input id="Id" name="Id" type="hidden" value="21" />
        6:     <span class="field-validation-valid" id="Id_validationMessage"></span>
        7: </div>

    The key is line #5. We get the text of "21" (which happened to be my DB Id in this instance) and also a hidden input element (again with "21"). So the question is, why would one want to use this? The best answer I've found is contained in this MVC 2 whitepaper:

    When a view lets users edit the ID of an object and it is necessary to display the value as well as to provide a hidden input element that contains the old ID so that it can be passed back to the controller.

    Well, that actually makes sense. Yes, it seems like something that would happen *rarely*, but for those instances it would enable them easily. It's effectively equivalent to doing this in your view:

        1: <%: Html.LabelFor(m => m.Id) %>
        2: <%: Model.Id %>
        3: <%: Html.HiddenFor(m => m.Id) %>

    But it allows you to specify it in metadata on your view model (and thereby take advantage of templated helpers like Html.EditorForModel() and Html.EditorFor()) rather than having to specify everything explicitly in your view.
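    To round out the whitepaper's scenario, here is a hedged sketch of my own (not from the article) of the controller side of that round trip; the hidden input is what lets default model binding repopulate Id on the post-back:

        using System.Web.Mvc;

        public class PersonController : Controller
        {
            [HttpPost]
            public ActionResult Edit(PersonViewModel model)
            {
                // model.Id was rebound from the hidden <input name="Id"> emitted
                // by [HiddenInput], so we know which record is being updated.
                if (!ModelState.IsValid)
                {
                    return View(model);
                }

                // SavePerson is a hypothetical persistence call, for illustration only.
                // repository.SavePerson(model);
                return RedirectToAction("Index");
            }
        }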

    Read the article

  • Silverlight Cream for April 16, 2010 -- #838

    - by Dave Campbell
    In this Issue: Alan Beasley(-2-, -3-, -4-, -5-), Brian, Rishi, Pete Brown, Yavor Georgiev, and David Anson.

    Shoutouts: As usual, Tim Heuer has all the scoop on all the hot-off-the-presses releases: Silverlight 4 released. Availability of tools announcement. He covers all the main parts of interest. Tim Heuer also discusses Backward Compatibility with Silverlight 4 applications. And before you ask, Tim Heuer announced the Silverlight Client for Facebook updated for Silverlight 4 release. If you're having trouble with the install, Peter Bromberg has a post up to help bail you out: Get Silverlight 4 Installed: Tips and Tricks. Christian Schormann has a link to probably the fastest intro to SketchFlow I've seen: Video: SketchFlow in 90 seconds, with Jon Harris. Chris Rouw has a Summary of Silverlight at DevConnections on his site. I had the opportunity to spend some time with Chris and we had some good discussions. Rene Schulte describes how to get started with the new final Silverlight 4 RTW build and announces that he updated his samples and open source projects. He also shares what he wishes for the next Silverlight version: Silverlight 4 Up and Running.

    From SilverlightCream.com:

    Building Better Buttons in Expression Blend and Silverlight
    I generally end up missing articles embedded at CodeProject, so Alan Beasley emailed me a link to these; they were new to me. In this first one, he's got a very nice tutorial up on making some awesome buttons in Expression Blend.

    Arcade Button in Expression Blend and Silverlight
    Alan Beasley's second Expression Blend button tutorial is the classic 'arcade button'... this is great stuff, check it out.

    Picture Frame Control in Expression Blend and Silverlight
    I wasn't going to do the full list Alan Beasley had sent me in one post, but they're all so good! This third one takes an excursion away from buttons to do a Picture Frame control. Styled to the max, and another great Blend tutorial!

    The last building buttons article (Part 1), in Expression Blend and Silverlight
    Alan Beasley finishes what may be a definitive work on buttons in Blend... even if you don't want to follow the tutorials (and why wouldn't you??), he's got 10 buttons you can download!

    ListBox Styling (Part 1 - ScrollBars) in Expression Blend & Silverlight
    In Alan Beasley's 5th post at CodeProject, he has a great long tutorial on styling ListBox ScrollBars in Expression Blend... the ScrollBars are Part 1 of a series.

    Some Notes on DRM in Silverlight 4
    Brian at Silverlight SDK has a post up on DRM... WMDRM and PlayReady. If you're planning on utilizing this, Brian's post looks like a good starting point.

    nRoute: Now, More Wholesome
    Rishi has a detailed post up explaining the latest nRoute release, now supporting Silverlight 4, WP7, and WPF. What a piece of work!

    Scanning an Image from Silverlight 4 using WIA Automation
    Pete Brown demonstrates using VS2010 and SL4 to lash up to his scanner. Lots of code and external links... all good stuff, Pete!

    Dealing with those pesky WCF CommunicationException "NotFound" errors in Silverlight
    Yavor Georgiev has a quick post up discussing WCF CommunicationException errors in Silverlight, with a couple of external links to explain the solution.

    New Silverlight 4 Toolkit released with today's Silverlight 4 RTW!
    David Anson blogged about the new Toolkit release that is live right now along with the Silverlight 4 release, and has some release notes up on the Toolkit.

    Stay in the 'Light!

    Read the article

  • Generate Strongly Typed Observable Events for the Reactive Extensions for .NET (Rx)

    - by Bobby Diaz
    I must have tried reading through the various explanations and introductions to the new Reactive Extensions for .NET several times before the concepts finally started sinking in. The article that gave me the ah-ha moment was over on SilverlightShow.net, titled Using Reactive Extensions in Silverlight. The author did a good job comparing the "normal" way of handling events vs. the new "reactive" methods.

    Admittedly, I still have more to learn about the Rx Framework, but I wanted to put together a sample project so I could start playing with the new Observable and IObservable<T> constructs. I decided to throw together a whiteboard application in Silverlight based on the Drawing with Rx example in the aforementioned article. At the very least, I figured I would learn a thing or two about a new technology, but my real goal is to create a fun application that I can share with the kids, since they love drawing and coloring so much!

    Here is the code sample that I borrowed from the article:

        var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove");
        var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown");
        var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp");

        var draggingEvents = from pos in mouseMoveEvent
                                 .SkipUntil(mouseLeftButtonDown)
                                 .TakeUntil(mouseLeftButtonUp)
                                 .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>
                                     new
                                     {
                                         X2 = cur.EventArgs.GetPosition(this).X,
                                         X1 = prev.EventArgs.GetPosition(this).X,
                                         Y2 = cur.EventArgs.GetPosition(this).Y,
                                         Y1 = prev.EventArgs.GetPosition(this).Y
                                     })).Repeat()
                             select pos;

        draggingEvents.Subscribe(p =>
        {
            Line line = new Line();
            line.Stroke = new SolidColorBrush(Colors.Black);
            line.StrokeEndLineCap = PenLineCap.Round;
            line.StrokeLineJoin = PenLineJoin.Round;
            line.StrokeThickness = 5;
            line.X1 = p.X1;
            line.Y1 = p.Y1;
            line.X2 = p.X2;
            line.Y2 = p.Y2;
            this.LayoutRoot.Children.Add(line);
        });

    One thing that was nagging at the back of my mind was having to deal with the event names as strings, as well as the verbose syntax of the Observable.FromEvent<TEventArgs>() method. I came up with a couple of static/helper classes to resolve both issues, and also created a T4 template to auto-generate these helpers for any .NET type.
    Take the following code from the above example:

        var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove");
        var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown");
        var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp");

    With the new static Events class it turns into this:

        var mouseMoveEvent = Events.Mouse.Move.On(this);
        var mouseLeftButtonDown = Events.Mouse.LeftButtonDown.On(this);
        var mouseLeftButtonUp = Events.Mouse.LeftButtonUp.On(this);

    Or better yet, just remove the variable declarations altogether:

        var draggingEvents = from pos in Events.Mouse.Move.On(this)
                                 .SkipUntil(Events.Mouse.LeftButtonDown.On(this))
                                 .TakeUntil(Events.Mouse.LeftButtonUp.On(this))
                                 .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>
                                     new
                                     {
                                         X2 = cur.EventArgs.GetPosition(this).X,
                                         X1 = prev.EventArgs.GetPosition(this).X,
                                         Y2 = cur.EventArgs.GetPosition(this).Y,
                                         Y1 = prev.EventArgs.GetPosition(this).Y
                                     })).Repeat()
                             select pos;

    The Move, LeftButtonDown and LeftButtonUp members of the Events.Mouse class are readonly instances of the ObservableEvent<TTarget, TEventArgs> class that provide type-safe access to the events via the On() method. Here is the code for the class:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace System.Linq
        {
            /// <summary>
            /// Represents an event that can be managed via the <see cref="Observable"/> API.
            /// </summary>
            /// <typeparam name="TTarget">The type of the target.</typeparam>
            /// <typeparam name="TEventArgs">The type of the event args.</typeparam>
            public class ObservableEvent<TTarget, TEventArgs> where TEventArgs : EventArgs
            {
                /// <summary>
                /// Initializes a new instance of the <see cref="ObservableEvent"/> class.
                /// </summary>
                /// <param name="eventName">Name of the event.</param>
                protected ObservableEvent(String eventName)
                {
                    EventName = eventName;
                }

                /// <summary>
                /// Registers the specified event name.
                /// </summary>
                /// <param name="eventName">Name of the event.</param>
                public static ObservableEvent<TTarget, TEventArgs> Register(String eventName)
                {
                    return new ObservableEvent<TTarget, TEventArgs>(eventName);
                }

                /// <summary>
                /// Creates an observable sequence of event values for the specified target.
                /// </summary>
                /// <param name="target">The target.</param>
                public IObservable<IEvent<TEventArgs>> On(TTarget target)
                {
                    return Observable.FromEvent<TEventArgs>(target, EventName);
                }

                /// <summary>
                /// Gets the name of the event.
                /// </summary>
                public string EventName { get; private set; }
            }
        }

    And this is how it's used:

        /// <summary>
        /// Categorizes <see cref="ObservableEvent"/>s by class and/or functionality.
        /// </summary>
        public static partial class Events
        {
            /// <summary>
            /// Implements a set of predefined <see cref="ObservableEvent"/>s
            /// for the <see cref="System.Windows.UIElement"/> class
            /// that represent mouse related events.
            /// </summary>
            public static partial class Mouse
            {
                /// <summary>Represents the MouseMove event.</summary>
                public static readonly ObservableEvent<UIElement, MouseEventArgs> Move =
                    ObservableEvent<UIElement, MouseEventArgs>.Register("MouseMove");

                // additional members omitted...
            }
        }

    The source code contains a static Events class with predefined members for various categories (Key, Mouse, etc.). There is also an Events.tt template that you can customize to generate additional event categories for any .NET type. All you should have to do is add the name of your class to the types collection near the top of the template:

        types = new Dictionary<String, Type>()
        {
            //{ "Microsoft.Maps.MapControl.Map, Microsoft.Maps.MapControl", null }
            { "System.Windows.FrameworkElement, System.Windows", null },
            { "Whiteboard.MainPage, Whiteboard", null }
        };

    The template is also a bit rough at this point, but at least it generates code that *should* compile. Please let me know if you run into any issues with it. Some people have reported errors when trying to use T4 templates within a Silverlight project, but I was able to get it to work with a little black magic... You can download the source code for this project or play around with the live demo. Just be warned that it is at a very early stage, so don't expect to find much today. I plan on adding a lot more options like pen colors and sizes, saving, printing, etc. as time permits. HINT: hold down the ESC key to erase! Enjoy!

    Additional Resources
    Using Reactive Extensions in Silverlight
    DevLabs: Reactive Extensions for .NET (Rx)
    Rx Framework Part III - LINQ to Events - Generating GetEventName() Wrapper Methods using T4
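    As a quick usage sketch of my own (the Key category is mentioned in the post, but the Down member and the wiring below are my assumptions), adding and consuming another event helper looks something like this:

        using System.Diagnostics;
        using System.Linq;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;

        public static partial class Events
        {
            /// <summary>Keyboard events, mirroring the Mouse category above.</summary>
            public static partial class Key
            {
                /// <summary>Represents the KeyDown event.</summary>
                public static readonly ObservableEvent<UIElement, KeyEventArgs> Down =
                    ObservableEvent<UIElement, KeyEventArgs>.Register("KeyDown");
            }
        }

        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();

                // Type-safe subscription: no event-name strings at the call site.
                Events.Key.Down.On(this)
                    .Subscribe(e => Debug.WriteLine("Key pressed: " + e.EventArgs.Key));
            }
        }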

    Read the article

  • O' Reilly Deal of the Day 26/Jun/2012 - Developer's Guide to Collections in Microsoft® .NET

    - by TATWORTH
    Today's 50% off Deal of the Day at http://shop.oreilly.com/product/0790145317193.do?code=MSDEAL is Developer's Guide to Collections in Microsoft® .NET:

    "Put .NET collections to work—and manage issues with GUI data binding, threading, data querying, and storage. Led by a data collection expert, you'll gain task-oriented guidance, exercises, and extensive code samples to tackle common problems and improve application performance. This one-stop reference is designed for experienced Microsoft Visual Basic® and C# developers—whether you're already using collections or just starting out."

    I am reviewing this book; here are my initial comments. The code is well illustrated by diagrams, the approach is practical, and the code is well commented; however, the C# code samples would be better had they been fully StyleCop compliant. I recommend this book to all C# and VB.NET development teams. I concur with the author, who states that the book is not for learning C# or VB.NET; rather, it is an excellent book for C# and VB.NET developers to extend their knowledge of the .NET Framework.

    Read the article

  • Use Microsoft PowerPivot to Access Salesforce.com Through the OData Connector

    - by dataintegration
    This article will explain how to connect to any of our OData Connectors with Microsoft Excel's PowerPivot business intelligence tool. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and PowerPivot for Excel from Microsoft.

    Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide, which will walk you through setting up the Salesforce Connector.

    Step 3: Once you have successfully configured the Salesforce Connector application, open Excel and select the PowerPivot tab at the top of the window.

    Step 4: Here you will click on the button labeled PowerPivot Window at the top left.

    Step 5: A new pop-up will appear. Now select the option "From Data Feeds".

    Step 6: In the resulting Table Import Wizard you will enter the OData URL of the Salesforce Connector. You can find this by clicking on the Settings tab of the Salesforce Connector; it will look something like this: http://localhost:8181/sfconnector/data/conn/odata.rsc. You will also need to add authentication options in this step. To do this, click on the Advanced button and scroll down to the Security section of the resulting pop-up window. Change the Integrated Security option to "Basic". You will also need to enter the User ID and Password of a user who has access to the Salesforce Connector.

    Step 7: When the connection to the Salesforce Connector is successful, click the Next button at the bottom of the window.

    Step 8: A listing of the available tables will appear in the next window of the wizard. Here you will select which tables you want to import and click Finish.

    Step 9: If the import was successful, click Close and you are done! Your data is now in PowerPivot.
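    If the wizard reports a connection error in Step 6, it can help to sanity-check the feed URL and the Basic credentials outside of PowerPivot first. Here is a small sketch of my own that simply fetches the OData service document; the URL and credential placeholders are the same ones used in the steps above:

        using System;
        using System.Net;

        class ODataFeedCheck
        {
            static void Main()
            {
                // The same endpoint entered in the Table Import Wizard (Step 6).
                var feedUrl = "http://localhost:8181/sfconnector/data/conn/odata.rsc";

                using (var client = new WebClient())
                {
                    // Basic authentication, matching the wizard's Integrated Security = "Basic" setting.
                    client.Credentials = new NetworkCredential("[UserId]", "[Password]");

                    // A successful download of the service document suggests the URL and
                    // credentials are good; an exception points at the culprit.
                    string serviceDocument = client.DownloadString(feedUrl);
                    Console.WriteLine(serviceDocument);
                }
            }
        }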

    Read the article

  • It seems another season of previews is upon us

    - by Enrique Lima
    Originally posted on: http://geekswithblogs.net/enriquelima/archive/2013/06/26/it-seems-another-season-of-previews-is-upon-us.aspx

    The past couple of weeks have been packed with teasers and updates. Here they are:

    Visual Studio Update 3: http://www.microsoft.com/en-us/download/confirmation.aspx?id=39305
    Visual Studio 2013 and TFS 2013 Preview: http://www.microsoft.com/visualstudio/eng/2013-downloads
    SQL Server 2014 CTP1: http://technet.microsoft.com/en-us/evalcenter/dn205290.aspx
    Windows Server 2012 R2 Preview: http://technet.microsoft.com/en-us/evalcenter/dn205286.aspx
    Windows 8.1: http://preview.windows.com

    Read the article

  • Network Solutions Gold VIP Program

    - by GGBlogger
    Today I received an email advising me, "Congratulations! You have been selected for the Network Solutions Gold VIP Program." Now, I get a bunch of messages from Network Solutions because a long time ago I chose them as my registrar. I usually just save these to my Outlook Network Solutions folder and move on. This time my wife was in the room and I chuckled and said something about "Network Solutions has made me a Gold VIP member." She said, "So what does that mean?" Prompted by that, I scrolled down the page to look at my "perks". Let's see – special pricing, 1 year of free Web Forwarding, 1 hour of free support… etc., etc., until I hit "Enrollment in SafeRenew*":

    SafeRenew* simplifies your renewal process. It protects your domain name registrations and corresponding services in the event you forget or are unable to renew on time.

    I'm thinking, "Now ain't that neat." I don't use auto-renew anyway and never have. Then I continued reading:

    Beginning June 24th, 2012 your domains* (that's 10 days away) will automatically renew. To ensure continuation of service, please be certain you have a valid credit card on file. If you do not wish for your services to be automatically renewed, please click here to log into account manager and opt out of the SafeRenew™ service.

    Whoa! Network Solutions is going to start spending my money without so much as a by-your-leave, sir!!! I don't bloody think so! And how truly magnanimous of them that I can "click here" to opt out of what I didn't opt in for. Talk about furious! I still am. Now the kicker: if my wife hadn't been curious, and if I'd had a working credit card on file, I wouldn't have known about this until one of my unwanted domain names auto-renewed. Of course, I was told "Oh – we'd have refunded your money," to which I say bull. In my view Network Solutions is guilty of a crime of some sort. They DO NOT have a right to spend my money without asking!!!!! So watch out, folks.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >