Search Results



  • SPSiteDataQuery Returns Only One List Type At A Time

    - by Brian Jackett
    The SPSiteDataQuery class in SharePoint 2007 is very powerful, but it has a few limitations. One of these limitations, which I ran into this morning (and which caused hours of frustration), is that you can only return results from one list type at a time. For example, if you are trying to query items from an out-of-the-box custom list (list type = 100) and a document library (list type = 101), you will only get items from the custom list (SPSiteDataQuery defaults to list type = 100). In my situation I was attempting to query multiple lists (created from custom list templates 10001 and 10002), each with their own content types.

    Solution

    Since I am only able to return results from one list type at a time, I was forced to run my query twice, each time setting the ServerTemplate (which translates to ListTemplateId if you are defining custom list templates) before executing the query. Below is a snippet of the code to accomplish this.

    SPSiteDataQuery spDataQuery = new SPSiteDataQuery();
    spDataQuery.Lists = "<Lists ServerTemplate='10001' />";
    // ... set rest of properties for spDataQuery

    var results = SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable();

    // only change to SPSiteDataQuery is Lists property for ServerTemplate attribute
    spDataQuery.Lists = "<Lists ServerTemplate='10002' />";

    // re-execute query and concatenate results to existing entity
    results = results.Concat(SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable());

    Conclusion

    Overall this isn't an elegant solution, but it's a workaround for a limitation of the SPSiteDataQuery. I am now able to return data from multiple lists spread across various list templates. I'd like to thank those who commented on this MSDN page, which finally pointed out the limitation to me. Also, thanks to Mark Rackley for "name dropping" me in his latest article (though I humbly insist I don't belong in such company), as well as for encouraging me to write up a quick post on this issue despite my busy schedule. Hopefully this post saves some of you from the frustrations I experienced this morning using the SPSiteDataQuery. Until next time, Happy SharePoint'ing all.

    -Frog Out

    Links

    MSDN Article for SPSiteDataQuery
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsitedataquery.lists.aspx
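    For reference, a minimal sketch of how the two queries above could be generalized to any number of list templates. This is not code from the post; the helper name, template IDs and CAML snippets are illustrative assumptions only.

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Linq;
    using Microsoft.SharePoint;

    // Illustrative sketch: run the same SPSiteDataQuery once per list template
    // and concatenate the rows from each DataTable.
    public static class MultiTemplateQuery
    {
        public static IEnumerable<DataRow> QueryAcrossTemplates(SPWeb web, params int[] serverTemplates)
        {
            IEnumerable<DataRow> results = Enumerable.Empty<DataRow>();

            foreach (int template in serverTemplates)
            {
                SPSiteDataQuery query = new SPSiteDataQuery();
                query.Lists = string.Format("<Lists ServerTemplate='{0}' />", template);
                query.ViewFields = "<FieldRef Name='Title' />";          // placeholder fields
                query.Webs = "<Webs Scope='SiteCollection' />";

                // GetSiteData executes immediately and returns a DataTable;
                // ToList() materializes the rows from this iteration.
                results = results.Concat(web.GetSiteData(query).AsEnumerable().ToList());
            }

            return results;
        }
    }

    // Usage (assumed to run inside a SharePoint context):
    // var rows = MultiTemplateQuery.QueryAcrossTemplates(SPContext.Current.Web, 10001, 10002);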


  • Sam Abraham To Speak At The LI .Net User Group on June 3rd, 2010

    - by Sam Abraham
    As you might know, I lived and worked on LI, NY for 11 years before relocating to South Florida. As I will be visiting my family, who still live there, in the first week of June, I couldn't resist reaching out to Dan Galvez, LI .Net User Group Leader, and asking if he needed a speaker for June's meeting. Apparently the stars were lined up right and I am now scheduled to speak at my "home" group on June 3rd, which I am pretty excited about. Here is a brief abstract of my talk and my speaker bio.

    What's New in MVC2

    We will start by briefly reviewing the basics of the Microsoft MVC Framework. Next, we will look at the new features introduced in the latest and greatest MVC2. Many new enhancements were introduced to both the MS MVC Framework and to VS2010 to improve developers' experience and reduce development time. We will be talking about new MVC2 features such as Model Validation, Areas and Template Helpers. We will also discuss the new built-in MVC project templates that ship with VS2010.

    About the Speaker

    Sam Abraham is a Microsoft Certified Professional (MCP) and Microsoft Certified Technology Specialist (MCTS ASP.Net 3.5). He currently lives in South Florida, where he leads the West Palm Beach .Net User Group (www.fladotnet.com) and actively participates in various local .Net community events as organizer and/or technical speaker. Sam is also an active committee member on various initiatives at the South Florida Chapter of the Project Management Institute (www.southfloridapmi.org). Sam finds his passion in leveraging the latest and greatest .Net technologies along with proven project management practices and methodologies to produce high-quality, cost-competitive software. Sam can be reached through his blog: http://www.geekswithblogs.net/wildturtle


  • Silverlight Cream for June 28, 2011 -- #1112

    - by Dave Campbell
    In this Issue: WindowsPhoneGeek, John Papa, Mike Taulty, Erno de Weerd, Stephen Price, Chris Rouw, Peter Kuhn, Damian Schenkelman, Michael Washington, and Manas Patnaik.

    Above the Fold:
    Silverlight: "Binding to View Model properties in Data Templates. The RootBinding Markup Extension" Damian Schenkelman
    WP7: "Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3" Chris Rouw
    LightSwitch: "Saving Files To File System With LightSwitch (Uploading Files)" Michael Washington

    Shoutouts:
    Steve Wortham announced a change to his SilverlightXAP.com site... they're now accepting XAML illustrations: Introducing XAML Illustrations, Increased Payouts to Contributors, and More
    Amid all the discussions that I've tried to avoid, Michael Washington is Betting The House On LightSwitch

    From SilverlightCream.com:

    Dynamically updating a data bound LongListSelector in Windows Phone
    WindowsPhoneGeek's latest is on using the LongListSelector from the Toolkit and dynamically updating it with data... detailed guidelines and plenty of pictures and code as always.

    Silverlight TV 77: Exploring 3D with Aaron Oneal
    John Papa has Silverlight TV number 77 up and is chatting with Aaron Oneal, program manager of the Silverlight 3D efforts... too cool.

    Silverlight WebBrowser Control for Offline Apps (Part 2)
    Mike Taulty wrote this post in Silverlight 5 Beta, but says it should be fine in 4... a continuation of his HTML content display using the WebBrowser control while offline.

    Windows Phone 7: Databinding and the Pivot Control
    Erno de Weerd discusses the Pivot control in WP7 based on his attempts to use it in an app.

    Required Attribute on an Entity
    Stephen Price has a new post at XAML Source... first is this one on setting the required attribute and the troubles you can get into if it's not set correctly.

    Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3
    Chris Rouw has Part 3 of his series on storing files in SQL Server using FILESTREAM storage in SQL Server 2008 and Silverlight... this time he's viewing files stored in the FILESTREAM from the LOB app.

    Getting ready for the Windows Phone 7 Exam 70-599 (Part 4)
    Peter Kuhn has Part 4 of his series on getting ready for the WP7 exam up at SilverlightShow... the date is coming up soon... are you ready?

    Binding to View Model properties in Data Templates. The RootBinding Markup Extension
    Damian Schenkelman has a Silverlight 5 Beta post up... digging into the XAML Markup Extensions and popping out a RootBindingExtension that helps bind to a property in a view model from a DataTemplate.

    Saving Files To File System With LightSwitch (Uploading Files)
    Michael Washington has a cool tutorial up on his new LightSwitch Help Website... file upload to a server file system using LightSwitch, plus a project to download... good stuff!

    Microsoft Media Platform (MMPPF): Player Framework for Silverlight
    Manas Patnaik's latest post is about the Media Player project... some of the history of media with Silverlight and how to go about using the Media Player project bits.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • F# and ArcObjects, Part 3

    - by Marko Apfel
    Today I played a little bit with IFeature sequences and piping data. The result was a calculator for the bounding box around all features in a feature class. Maybe a little bit dirty, but for learning it was OK. ;-)

    open System;;
    #I "C:\Program Files\ArcGIS\DotNet";;
    #r "ESRI.ArcGIS.System.dll";;
    #r "ESRI.ArcGIS.DataSourcesGDB.dll";;
    #r "ESRI.ArcGIS.Geodatabase.dll";;
    #r "ESRI.ArcGIS.Geometry.dll";;

    open ESRI.ArcGIS.esriSystem;;
    open ESRI.ArcGIS.DataSourcesGDB;;
    open ESRI.ArcGIS.Geodatabase;;
    open ESRI.ArcGIS.Geometry;

    let aoInitialize = new AoInitializeClass();;
    let status = aoInitialize.Initialize(esriLicenseProductCode.esriLicenseProductCodeArcEditor);;

    let workspacefactory = new SdeWorkspaceFactoryClass();;
    let connection = "SERVER=okul;DATABASE=p;VERSION=sde.default;INSTANCE=sde:sqlserver:okul;USER=s;PASSWORD=g";;
    let workspace = workspacefactory.OpenFromString(connection, 0);;
    let featureWorkspace = (box workspace) :?> IFeatureWorkspace;;
    let featureClass = featureWorkspace.OpenFeatureClass("Praxair.SFG.BP_L_ROHR");;

    let queryFilter = new QueryFilterClass();;
    let featureCursor = featureClass.Search(queryFilter, true);;

    let featureCursorSeq (featureCursor : IFeatureCursor) =
        let actualFeature = ref (featureCursor.NextFeature())
        seq {
            while (!actualFeature) <> null do
                yield actualFeature
                do actualFeature := featureCursor.NextFeature()
        };;

    let min x y = if x < y then x else y;;
    let max x y = if x > y then x else y;;

    let info s (x : IEnvelope) =
        printfn "%s xMin:{%f} xMax: {%f} yMin:{%f} yMax: {%f}" s x.XMin x.XMax x.YMin x.YMax;;

    let con (env1 : IEnvelope) (env2 : IEnvelope) =
        let env = (new EnvelopeClass()) :> IEnvelope
        env.XMin <- min env1.XMin env2.XMin
        env.XMax <- max env1.XMax env2.XMax
        env.YMin <- min env1.YMin env2.YMin
        env.YMax <- max env1.YMax env2.YMax
        info "Intermediate" env
        env;;

    let feature = featureClass.GetFeature(100);;
    let ext = feature.Extent;;

    let BoundingBox featureClassName =
        let featureClass = featureWorkspace.OpenFeatureClass(featureClassName)
        let queryFilter = new QueryFilterClass()
        let featureCursor = featureClass.Search(queryFilter, true)
        let featureCursorSeq (featureCursor : IFeatureCursor) =
            let actualFeature = ref (featureCursor.NextFeature())
            seq {
                while (!actualFeature) <> null do
                    yield actualFeature
                    do actualFeature := featureCursor.NextFeature()
            }
        featureCursorSeq featureCursor
        |> Seq.map (fun feature -> (!feature).Extent)
        |> Seq.fold (fun (acc : IEnvelope) a ->
            info "Intermediate" acc
            (con acc a)) ext;;

    let boundingBox = BoundingBox "Praxair.SFG.BP_L_ROHR";;
    info "Ende-Info:" boundingBox;;


  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx

    As a Visual Studio extension developer you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with it. Your extension looks odd when you try to use plain Windows controls and dialogs in it. The Visual Studio SDK exposes many interfaces so that your extension looks as integrated with Visual Studio as possible. When your extension is performing a long-running task, you have many options for notifying the user of progress. One such option is the Visual Studio status bar; I have previously blogged about displaying progress through the Visual Studio status bar. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface.

    One thing to note: as the IVsThreadedWaitDialog2 interface name suggests, it is a dialog, hence the user cannot perform any action while the dialog is being shown. So Visual Studio seems responsive to the user even while a task is being performed. Visual Studio itself makes heavy use of this interface. One example is when you are loading a solution (.sln) with a lot of projects: Visual Studio displays a dialog implemented through this interface (screenshot below).

    So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface.

    var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory;
    IVsThreadedWaitDialog2 dialog = null;
    if (dialogFactory != null)
    {
        dialogFactory.CreateInstance(out dialog);
    }

    So if you have the package initialized properly, the out parameter dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once you have the instance, you call its methods to manage the dialog. I will cover three methods in this blog post: StartWaitDialog, EndWaitDialog and HasCanceled.

    You show the progress dialog as below.

    if (dialog != null && dialog.StartWaitDialog(
        "Threaded Wait Dialog",
        "VS is Busy",
        "Progress text",
        null,
        "Waiting status bar text",
        0,
        false,
        true) == VSConstants.S_OK)
    {
        Thread.Sleep(4000);
    }

    As you can see from the method signature, it is very similar to a standard Windows message box. If you pass true as the 7th parameter to the StartWaitDialog method, you will also see a cancel button allowing the user to cancel the running task. You can react when the user cancels the task as below.

    bool isCancelled;
    dialog.HasCanceled(out isCancelled);
    if (isCancelled)
    {
        MessageBox.Show("Cancelled");
    }

    Finally, you can close the dialog when the running task completes, as below.

    int usercancel;
    dialog.EndWaitDialog(out usercancel);

    To help you quickly try out the above code, I have created a sample. It is available for download from GitHub. The sample creates a tool window with two buttons to demo the scenarios explained above. The tool window can be accessed by clicking View -> Other Windows -> ProgressDialogDemo Window
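    Putting the snippets above together, a minimal end-to-end sketch might look like the following. The wrapper class, its constructor and the 4-second sleep standing in for real work are assumptions added for illustration; they are not part of the original sample.

    using System;
    using System.Threading;
    using System.Windows.Forms;
    using Microsoft.VisualStudio;
    using Microsoft.VisualStudio.Shell.Interop;

    // Hypothetical helper for illustration; _serviceProvider would normally come
    // from your Package instance inside an initialized VS extension.
    internal class WaitDialogDemo
    {
        private readonly IServiceProvider _serviceProvider;

        public WaitDialogDemo(IServiceProvider serviceProvider)
        {
            _serviceProvider = serviceProvider;
        }

        public void RunLongTaskWithWaitDialog()
        {
            var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory))
                                as IVsThreadedWaitDialogFactory;
            IVsThreadedWaitDialog2 dialog = null;
            if (dialogFactory != null)
            {
                dialogFactory.CreateInstance(out dialog);
            }

            if (dialog != null
                && dialog.StartWaitDialog(
                    "Threaded Wait Dialog",    // caption
                    "VS is Busy",              // wait message
                    "Progress text",           // progress text
                    null,                      // status bitmap animation
                    "Waiting status bar text", // status bar text
                    0,                         // delay before showing the dialog
                    true,                      // cancelable (true so the check below is meaningful)
                    true) == VSConstants.S_OK)
            {
                Thread.Sleep(4000);            // stand-in for the real long-running work

                bool isCancelled;
                dialog.HasCanceled(out isCancelled);
                if (isCancelled)
                {
                    MessageBox.Show("Cancelled");
                }

                int usercancel;
                dialog.EndWaitDialog(out usercancel);
            }
        }
    }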


  • SyncToBlog #10 Lots of Azure and Cloud Links including MIX10 videos

    - by Eric Nelson
    Just getting a few interesting cloud links "down on paper". I last did one of these on Azure in February 2010.

    Cloud Links:
    - Article on Debugging in the Cloud
    - http://code.msdn.microsoft.com/azurescale – a sample app that demonstrates monitoring and automatically scaling an Azure application in response to dropping performance etc. Basically a console app that checks perf stats and then uses the Service Management API to spin up new instances when needed.
    - Azure In Action book is imminent :)
    - Running Memcached in Windows Azure from the MS UK team
    - Using Microsoft Codename Dallas as a data source for Drupal, also from the MS UK team
    - I often mention them – but this post is the biz! Metodi on fault and upgrade domains
    - Detailed blog post comparing Azure AppFabric Service Bus REST support to the free Faye Ruby+JavaScript gem that implements the JSON publish/subscribe protocol Bayeux.
    - AppFabric LABS allows you to test out and play with experimental AppFabric technologies.
    - Details of the upcoming VM support in Windows Azure
    - Nice series of posts from J D Meier in the Patterns and Practices team:
      How To Use ASP.NET Forms Auth with Azure Tables
      How To Use ASP.NET Forms Auth with Roles in Azure Tables
      How To Use ASP.NET Forms Auth with SQL Server on Windows Azure

    And sessions from MIX10, held March 15th to 17th:
    - Lap around the Windows Azure Platform – Steve Marx
    - Building and Deploying Windows Azure Based Applications with Microsoft Visual Studio 2010 – Jim Nakashima
    - Building PHP Applications using the Windows Azure Platform – Craig Kitterman, Sumit Chawla
    - Using Ruby on Rails to Build Windows Azure Applications – Sriram Krishnan
    - Microsoft Project Code Name "Dallas": Data for your apps – Moe Khosravy
    - Using Storage in the Windows Azure Platform – Chris Auld
    - Building Web Applications with Windows Azure Storage – Brad Calder
    - Building Web Application with Microsoft SQL Azure – David Robinson
    - Connecting Your Applications in the Cloud with Windows Azure AppFabric – Clemens Vasters
    - Microsoft Silverlight and Windows Azure: A Match Made for the Web – Matt Kerner

    Something for everyone :)


  • Sound vs. Valid Argument

    - by MarkPearl
    Today I spent some time reviewing my Formal Logic course for my upcoming exam. I came across a section that I have never really explored in any proper depth… the difference between a valid argument and a sound argument. Here are some notes I made…

    What is an argument?
    In this case we are not referring to a verbal fight, but rather to what we call a set of premises followed by a conclusion. Before we go further we need to understand what a premise is… a premise is a statement that an argument claims will induce or justify a conclusion. Think of a premise as an assumption that something is true. So, an argument can consist of one or more premises and a conclusion…

    When is an argument valid?
    An argument can be either valid or invalid. An argument is valid if, and only if, it is impossible for there to be a situation in which all its premises are TRUE and its conclusion is FALSE. It is generally easier to determine if an argument is invalid. Do this by applying the following: assume that all the premises are true, then ask yourself if it is now possible for the conclusion to be false. If the answer is "yes," the argument is invalid. If it's "no," the argument is valid.

    Example 1…
    P1 – Mark is tall
    P2 – Mark is a boy
    C – Mark is a tall boy

    Walkthrough 1…
    Assume "Mark is tall" is true and also assume that "Mark is a boy" is true. Based on these two premises, the conclusion is also true (Mark is a tall boy), thus it is a valid argument. Let's make this an invalid argument…

    Example 2…
    P1 – Mark is tall
    P2 – Mark is a boy
    C – Mark is a short boy

    Walkthrough 2…
    This would be an invalid argument, since from the premises we assume that Mark is tall and that he is a boy, and then the conclusion goes against this by saying that Mark is short. Thus an invalid argument.

    When is an argument sound?
    An argument is said to be sound when it is valid and all the premises are indeed true (not just assumed to be true). Rephrased, an argument is said to be sound when the conclusion follows from the premises and the premises are indeed true in real life. In Example 1 we were referring to a specific person; if we generalized it a bit we could come up with the following example.

    Example 3
    P1 – All people called Mark are tall
    P2 – I know a specific person called Mark
    C – He is a tall person

    In this instance, it is a valid argument (we assume the premises are true, which leads to the conclusion being true), but the argument is NOT sound: in the real world there must be at least one person called Mark who is not tall. Something also to note: all invalid arguments are also unsound. This makes sense – if an argument is not even valid, how on earth can it be sound in the real world?

    What happens when the premises contradict themselves?
    This is an interesting one… An argument is valid if, and only if, it is impossible for there to be a situation in which all its premises are TRUE and its conclusion is FALSE. When premises are contradictory, the argument is always valid, because it is impossible for all the premises to be true at one time. Let's look at an example…

    P1 – Elvis is dead
    P2 – Elvis is alive
    C – Laura is a woolly mammoth

    This is a valid argument, but not a sound one. Think about it. Is it possible to have a situation in which the premises are true and the conclusion is false? Sure, it's possible to have a situation in which the conclusion is false, but for the argument to be invalid, it has to be possible for the premises to all be true at the same time the conclusion is false.
    So if the premises can't all be true, the argument is valid. (If you still think the argument is invalid, draw a picture in which the premises are all true and the conclusion is false. Remember, there's only one Elvis, and you can't be both dead and alive.) For more info on this I suggest reading the following blog post.
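    To make the validity check concrete, here is a small illustrative sketch (not part of the original notes) that brute-forces every truth assignment for the Elvis argument and looks for a situation where all the premises are true and the conclusion is false. No such situation exists, so by the definition above the argument is valid.

    // Illustrative only: d = "Elvis is dead", m = "Laura is a woolly mammoth".
    // Premises: P1 = d, P2 = !d ("Elvis is alive"). Conclusion: C = m.
    using System;

    class ValidityDemo
    {
        static void Main()
        {
            bool counterexampleFound = false;

            foreach (bool d in new[] { true, false })
            foreach (bool m in new[] { true, false })
            {
                bool p1 = d;          // Elvis is dead
                bool p2 = !d;         // Elvis is alive
                bool conclusion = m;  // Laura is a woolly mammoth

                // A counterexample is a situation where all premises are true
                // and the conclusion is false.
                if (p1 && p2 && !conclusion)
                {
                    counterexampleFound = true;
                }
            }

            // Prints "Valid": the premises can never both be true,
            // so the argument is valid (though clearly not sound).
            Console.WriteLine(counterexampleFound ? "Invalid" : "Valid");
        }
    }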


  • 10 steps to enable 'Anonymous Access' for your SharePoint 2010 site

    - by KunaalKapoor
    What's Anonymous Access?
    Anonymous access to your SharePoint site enables all visitors to view your SharePoint site anonymously, without having to log in. With this blog I'd like to go through an easy, step-wise procedure to enable and set up anonymous access. Before you actually enable anonymous access on the site, you'll have to change some settings at the web app level, so let's start with that.

    Prerequisite(s):
    1. A hosted SharePoint 2010 farm/server.
    2. An existing SharePoint site.
    I just thought I'd mention the above pre-reqs, since the steps mentioned below wouldn't be valid for a different type of site.

    Step 1: In Central Administration, under Application Management, click on Manage web applications.
    Step 2: Now select the site you want to enable anonymous access for and click on the Authentication Providers icon.
    Step 3: On the modal window, click on the Default zone.
    Step 4: Now under the Edit Authentication section, check Enable anonymous access and click Save. This is basically to make the Anonymous Access authentication mechanism available at the web app level @ IIS. The web application will now allow anonymous access to be set.
    Step 5: Going back to Web Application Management, click on the Anonymous Policy icon.
    Step 6: Also, before we proceed any further, under the Anonymous Access Restrictions (@ web app mgmt.) select your Zone, set the Permissions to None – No policy, and click Save.
    Step 7: Now let's navigate to the top-level site collection for the web application. Click Site Actions > Site Settings.
    Step 8: Under Users and Permissions, click on Site Permissions.
    Step 9: Under the Edit tab, click on Anonymous Access.
    Step 10: Choose whether you want anonymous users to have access to the entire Web site or to lists and libraries only, and then click OK.

    You should now be able to see the view as below under your permissions.

    Also keep in mind: if you are trying to access the site from a browser within the domain, then you'll need to change some browser settings to see the after-effects. Normally this is because the browser (Internet Explorer) is set to log in automatically to the intranet zone only – not sure if you have explicitly changed the zones and added the site to trusted sites. If this is from a box within your domain, please try to access the site by temporarily changing the Internet Explorer setting to Anonymous Logon for the zone the site is added to, for example "Intranet", and try again. You will find these settings by clicking Tools > Internet Options > Security tab.


  • Sending Parameters with the BizTalk HTTP Adapter

    - by Christopher House
    I've never had occasion to use the BizTalk HTTP adapter since I've always needed SOAP rather than just POX (plain old XML). Yesterday we decided that we're going to expose some data via a Java servlet that will accept an HTTP post and respond with POX. I knew BizTalk had an HTTP adapter but I had no idea what its capabilities were.

    After a quick read through the BizTalk docs, it was apparent that the HTTP send adapter does in fact do posts. The concern I had, though, was how we were going to supply parameters to the servlet. The examples I had seen using the HTTP adapter all involved posting an XML message to some HTTP location. Our Java guy, however, didn't want to take that approach. He wanted us to provide a query string via post, much like you'd expect to see on an HTTP get.

    I decided to put together a little test scenario and see what I could come up with. We didn't have a test servlet I could go against and my Java experience is virtually nil, so I decided to put together an ASP.Net project to act as the servlet. It didn't need to be fancy, just one HttpHandler that accepts a post, reads a parameter and returns XML. With the HttpHandler done, I put together a simple orchestration to send a message to the handler. I started by having the orchestration send a message of type System.String to see what it would look like when the handler received it. I set a breakpoint in my handler and kicked off the orchestration. Below is what I saw:

    As I suspected, because of BizTalk's XML serialization, System.String was not going to work. I thought back to my BizTalk 2004 days and a project I worked on that required sending HTML-formatted emails via the SMTP adapter. To accomplish that, I had used a .Net class with a custom serialization formatter that I got from a Microsoft sample. The code for the class, RawString, can be found here.

    I created a new class library with the RawString class as well as a static factory class, referenced that in my orchestration project and changed my message type from System.String to RawString. Below is what the code in my message construction looks like:

    After deploying the updated orchestration, I fired it off again and checked the breakpoint in my HttpHandler. This is what I saw:

    And there you have it. The RawString message type allowed me to pass a query string in the HTTP post without wrapping it in XML.
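    The post's screenshots of the message construction are not reproduced here, but a rough sketch of what the factory class and the message-assignment expression could look like follows. The factory name, the query-string content and the context-property line are assumptions for illustration; RawString itself comes from the Microsoft sample the post links to and is assumed to expose a string constructor.

    // Hypothetical factory class referenced from the orchestration project.
    public static class RawStringFactory
    {
        public static RawString Create(string value)
        {
            return new RawString(value);
        }
    }

    // Inside the orchestration's Message Assignment shape (XLANG/s expression),
    // the construction might then look roughly like this:
    //
    //   msgRequest = RawStringFactory.Create("param1=value1&param2=value2");
    //   msgRequest(HTTP.ContentType) = "application/x-www-form-urlencoded";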


  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for ASP.NET MVC. It was introduced in the "WebMatrix" tool and has now been released as part of the ASP.NET MVC 3 Preview 1. Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML. Additionally, it provides some really nice features for master page type scenarios, and you don't lose access to any of the features you are currently familiar with, such as HTML helper methods.

    First, download and install the ASP.NET MVC Preview 1. You can find it at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en.

    Now, follow these steps to create your first ASP.NET MVC project using Razor:

    1. Open Visual Studio 2010.
    2. Create a new project. Select File->New->Project (Shift Control N).
    3. You will see the list of project types, which should look similar to what's shown:
    4. Select "ASP.NET MVC 3 Web Application (Razor)." Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard ASP.NET view engine and template, which isn't what you want.
    5. For this tutorial, and ONLY for this tutorial, select "No, do not create a unit test project." In general, you should create and use a unit test project. Code without unit tests is kind of like diet ice cream. It just isn't very good.

    Now, once we have this done, our brand new project will be created. In all likelihood, Visual Studio will leave you looking at the "HomeController.cs" class, as shown below:

    Immediately, you should notice one difference. The Index action used to look like:

    public ActionResult Index()
    {
        ViewData["Message"] = "Welcome to ASP.Net MVC!";
        return View();
    }

    While this will still compile and run just fine, ASP.NET MVC 3 has a much nicer way of doing this:

    public ActionResult Index()
    {
        ViewModel.Message = "Welcome to ASP.Net MVC!";
        return View();
    }

    Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic typing of .NET 4.0 to allow us to express ourselves much more cleanly. This isn't a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor.

    What comes in the box?

    When we create a project using the ASP.NET MVC 3 template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0 but with some differences. Instead of seeing ".aspx" view files and ".ascx" files, we see files with the ".cshtml" extension, which is the default Razor extension. Before we discuss the details of a Razor file, one thing to keep in mind is that since this is an extremely early preview, IntelliSense is not currently enabled with the Razor view engine. This is promised as an update before the final release. Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word "Controller" still stands. Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory. Remember, in ASP.NET MVC, convention over configuration is key to successful development!

    The initial template organizes views in the following folders, located in the project under Views:

    - Account – The default account management views used by the Account controller. Each file represents a distinct view.
    - Home – Views corresponding to the appropriate actions within the Home controller.
    - Shared – This contains common view objects used by multiple views. Master pages are stored here, as well as partial page views (user controls). By convention, these partial views are named "_XXXPartial.cshtml", where XXX is the appropriate name, such as _LogonPartial.cshtml. Additionally, display templates are stored under here.

    With this in mind, let us take a look at the index.cshtml file under the Home view directory. When you open up index.cshtml you should see:

    1:  @inherits System.Web.Mvc.WebViewPage
    2:  @{
    3:      View.Title = "Home Page";
    4:      LayoutPage = "~/Views/Shared/_Layout.cshtml";
    5:  }
    6:  <h2>@View.Message</h2>
    7:  <p>
    8:      To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC
    9:      Website">http://asp.net/mvc</a>.
    10: </p>

    So looking through this, we observe the following facts:

    Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage. Note that this is different from System.Web.Mvc.ViewPage, which is used by ASP.NET MVC 2.0. Also note that instead of the <% %> syntax, we use the very simple '@' sign. The view engine contains enough context-sensitive logic that it can even distinguish between @ in code and @ in an email address. It's a very clean markup.

    Line 2 introduces the idea of a code block in Razor. A code block is a scoping mechanism just like it is in a normal C# class. It is designated by @{ … } and any C# code can be placed in between. Note that this is all server-side code, just like it is when using the aspx engine and <% %>.

    Line 3 allows us to set the page title in the client page's file. This is a new feature which I'll talk more about when we get to master pages, but it is another of the nice things Razor brings to ASP.NET MVC development.

    Line 4 is where we specify our "master" page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located. A layout page is similar to a master page, but it gains a bit when it comes to flexibility. Again, we'll come back to this in a later installment.

    Line 6 and beyond is where we display the contents of our view. No more using <%: %> intermixed with code. Instead, we get to use very clean syntax such as @View.Message. This is a lot easier to read than <%: View.Message %>, especially when intermixed with HTML. For example:

    <p>
        My name is @View.Name and I live at @View.Address
    </p>

    Compare this to the equivalent using the aspx view engine:

    <p>
        My name is <%: View.Name %> and I live at <%: View.Address %>
    </p>

    While not an earth-shaking simplification, it is easier on the eyes. As we explore other features, this clean markup will become more and more valuable.


  • Steps for MySQL DB Replication

    - by Manish Agrawal
    Following are the steps for MySQL replication implementation on a Linux machine:

    Pre-implementation steps for DB Replication:

    1. Identify the databases to be replicated.
    2. Identify the tables to be ignored during replication per database, for example log tables.
    3. Carefully identify and replace the variables and paths (locations) mentioned (in bold) in the commands given below with appropriate values.
    4. Schedule the maintenance activity in odd hours, as these activities will affect all the databases on the Master database server.

    Implementation steps for DB Replication:

    1. Configure the /etc/my.cnf file on the Master database server to enable binary logging, set the server id and configure the db names for which logging should be done.

    [mysqld]
    log-bin=mysql-bin
    server-id=1
    binlog-do-db = dbname

    Note: You can specify multiple DBs in binlog-do-db by using comma-separated dbname values like: dbname1, dbname2, …, dbnameN

    2. On the Master database, grant replication slave privileges by executing the following command at the mysql prompt:

    mysql> GRANT REPLICATION SLAVE ON *.* TO slaveuser@<hostname> identified by 'slavepassword';

    3. Stop the Master & Slave databases by giving the command:

    mysqladmin shutdown

    4. Start the Master database by giving the command:

    /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user &

    5. mysql> FLUSH TABLES WITH READ LOCK;

    Note: Leave the client (putty session) from which you issued the FLUSH TABLES statement running, so that the read lock remains in effect. If you exit the client, the lock is released.

    6. mysql> SHOW MASTER STATUS;

    +---------------+----------+--------------+------------------+
    | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +---------------+----------+--------------+------------------+
    | mysql-bin.003 | 117      | dbname       |                  |
    +---------------+----------+--------------+------------------+

    Note: Note this information, as it will be required when starting the Slave and replication in later steps.

    7. Take a MySQL dump by giving the following command. In another session window (putty window) run:

    mysqldump -u user --ignore-table=dbname.tbl_name --ignore-table=dbname.tbl_name2 --master-data dbname > dbname_dump.db

    Note: When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.

    8. Unlock the tables on the Master by giving the following command:

    mysql> UNLOCK TABLES;

    9. Copy the dump file to the Slave DB server.

    10. Start up the Slave by using the option --skip-slave:

    /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --skip-slave &

    11. Restore the dump file on the Slave DB server:

    mysql -u user dbname < dbname_dump.db

    12. Stop the Slave database by giving the command:

    mysqladmin shutdown

    13. Configure the /etc/my.cnf file on the Slave database server:

    [mysqld]
    server-id=2
    replicate-ignore-table = dbname.tablename

    14. Start the Slave MySQL server with the 'replicate-do-db=DB name' option:

    /usr/local/mysql-5.0.22/bin/mysqld_safe --user=user --replicate-do-db=dbname --skip-slave

    15. Configure the settings on the Slave server for the Master host name, log file name and position within the log file, as recorded in Step 6 above. Use the CHANGE MASTER statement in the MySQL session:

    mysql> CHANGE MASTER TO MASTER_HOST='<master_host_name>', MASTER_USER='<replication_user_name>', MASTER_PASSWORD='<replication_password>', MASTER_LOG_FILE='<recorded_log_file_name>', MASTER_LOG_POS=<recorded_log_position>;

    16. At the Slave server's mysql prompt, give the following commands:

    a. mysql> START SLAVE;
    b. mysql> SHOW SLAVE STATUS;

    Note: To stop the slave for backup or any other activity, you can use the following command at the Slave server's mysql prompt:

    mysql> STOP SLAVE

    Refer to the following links for more information on MySQL DB Replication:
    http://dev.mysql.com/doc/refman/5.0/en/replication-options.html
    http://crazytoon.com/2008/04/21/mysql-replication-replicate-by-choice/
    http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html


  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility on the computing resource, we can provision and deploy an application or service with multiple instances over multiple machines. With the increment of the service instances, how to balance the incoming message and workload would become a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances, I would like call them dispatcher mode and pulling mode.   Dispatcher Mode The dispatcher mode introduces a role which takes the responsible to find the best service instance to process the request. The image below describes the sharp of this mode. There are four clients communicate with the service through the underlying transportation. For example, if we are using HTTP the clients might be connecting to the same service URL. On the server side there’s a dispatcher listening on this URL and try to retrieve all messages. When a message came in, the dispatcher will find a proper service instance to process it. There are three mechanism to find the instance: Round-robin: Dispatcher will always send the message to the next instance. For example, if the dispatcher sent the message to instance 2, then the next message will be sent to instance 3, regardless if instance 3 is busy or not at that moment. Random: Dispatcher will find a service instance randomly, and same as the round-robin mode it regardless if the instance is busy or not. Sticky: Dispatcher will send all related messages to the same service instance. This approach always being used if the service methods are state-ful or session-ful. But as you can see, all of these approaches are not really load balanced. The clients will send messages at any time, and each message might take different process duration on the server side. This means in some cases, some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could be happened that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle. This brings some problem in our architecture. The first one is that, the response to the clients might be longer than it should be. As it’s shown in the figure above, message 6 and 9 can be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 since the dispatcher and round-robin mode. Secondly, if there are many requests came from the clients in a very short period, service instances might be filled by tons of pending tasks and some instances might be crashed. Third, if we are using some cloud platform to host our service instances, for example the Windows Azure, the computing resource is billed by service deployment period instead of the actual CPU usage. This means if any service instance is idle it is wasting our money! Last one, the dispatcher would be the bottleneck of our system since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balance. If we wants more capacity, we have to scale-up, or buy a hardware load balance which is very expensive, as well as scaling-out the service instances. Pulling Mode Pulling mode doesn’t need a dispatcher to route the messages. All service instances are listening to the same transport and try to retrieve the next proper message to process if they are idle. 
Since there is no dispatcher in pulling mode, it requires some features on the transportation. The transportation must support multiple client connection and server listening. HTTP and TCP doesn’t allow multiple clients are listening on the same address and port, so it cannot be used in pulling mode directly. All messages in the transportation must be FIFO, which means the old message must be received before the new one. Message selection would be a plus on the transportation. This means both service and client can specify some selection criteria and just receive some specified kinds of messages. This feature is not mandatory but would be very useful when implementing the request reply and duplex WCF channel modes. Otherwise we must have a memory dictionary to store the reply messages. I will explain more about this in the following articles. Message bus, or the message queue would be best candidate as the transportation when using the pulling mode. First, it allows multiple application to listen on the same queue, and it’s FIFO. Some of the message bus also support the message selection, such as TIBCO EMS, RabbitMQ. Some others provide in memory dictionary which can store the reply messages, for example the Redis. The principle of pulling mode is to let the service instances self-managed. This means each instance will try to retrieve the next pending incoming message if they finished the current task. This gives us more benefit and can solve the problems we met with in the dispatcher mode. The incoming message will be received to the best instance to process, which means this will be very balanced. And it will not happen that some instances are busy while other are idle, since the idle one will retrieve more tasks to make them busy. Since all instances are try their best to be busy we can use less instances than dispatcher mode, which more cost effective. Since there’s no dispatcher in the system, there is no bottleneck. When we introduced more service instances, in dispatcher mode we have to change something to let the dispatcher know the new instances. But in pulling mode since all service instance are self-managed, there no extra change at all. If there are many incoming messages, since the message bus can queue them in the transportation, service instances would not be crashed. All above are the benefits using the pulling mode, but it will introduce some problem as well. The process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process the message. So we need more information to support debug and track. Real-time response may not be supported. All service instances will process the next message after the current one has done, if we have some real-time request this may not be a good solution. Compare with the Pros and Cons above, the pulling mode would a better solution for the distributed system architecture. Because what we need more is the scalability, cost-effect and the self-management.   WCF and WCF Transport Extensibility Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement the service. In this series I’m going to demonstrate how to implement the pulling mode on top of a message bus by extending the WCF. I don’t want to deep into every related field in WCF but will highlight its transport extensibility. 
When we implemented an RPC foundation there are many aspects we need to deal with, for example the message encoding, encryption, authentication and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally send to the underlying transportation. And on the other side the message will be received from the transport and though the same channels until the business logic. This mode is called “Channel Stack” in WCF, and the last channel in the channel stack must always be a transport channel, which takes the responsible for sending and receiving the messages. As we are going to implement the WCF over message bus and implement the pulling mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we deep into the transport channel, let’s have a look on the message exchange patterns that WCF defines. Message exchange pattern (MEP) defines how client and service exchange the messages over the transportation. WCF defines 3 basic MEPs which are datagram, Request-Reply and Duplex. Datagram: Also known as one-way, or fire-forgot mode. The message sent from the client to the service, and no need any reply from the service. The client doesn’t care about the message result at all. Request-Reply: Very common used pattern. The client send the request message to the service and wait until the reply message comes from the service. Duplex: The client sent message to the service, when the service processing the message it can callback to the client. When callback the service would be like a client while the client would be like a service. In WCF, each MEP represent some channels associated. MEP Channels Datagram IInputChannel, IOutputChannel Request-Reply IRequestChannel, IReplyChannel Duplex IDuplexChannel And the channels are created by ChannelListener on the server side, and ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel mode, please refer to the MSDN document. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigated the WCF transport architecture, channel mode and MEP, we finally identified what we should do to extend our message bus based transport layer. They are: Binding: (Optional) Defines the channel elements in the channel stack and added our transport binding element at the bottom of the stack. But we can use the build-in CustomBinding as well. TransportBindingElement: Defines which MEP is supported in our transport and create the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint if using this transport. ChannelListener: Create the server side channel based on the MEP it’s. We can have one ChannelListener to create channels for all supported MEPs, or we can have ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: Create the client side channel based on the MEP it’s. We can have one ChannelFactory to create channels for all supported MEPs, or we can have ChannelFactory for each MEP. In this series I will use the second approach. 
    Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support the Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one.

    Scaffold: In order to make our transport extension work we also need to implement some scaffolding. For example, we need some classes to send and receive messages through our message bus. We also need some code to read and write the WCF message, etc. These are not strictly necessary but will be very useful in our example.

    Message Bus

    There is only one thing remaining before we can begin to implement our scaling-out WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And as I said before, we can use any message bus product as long as it satisfies our requirements. Here I would like to introduce an interface to separate the message bus from the WCF code. This allows us to implement the bus operations with any kind of bus we are going to use. The interface would be like this.

    public interface IBus : IDisposable
    {
        string SendRequest(string message, bool fromClient, string from, string to = null);

        void SendReply(string message, bool fromClient, string replyTo);

        BusMessage Receive(bool fromClient, string replyTo);
    }

    There are only three methods on the bus interface. Let me explain them one by one.

    The SendRequest method is responsible for sending the request message into the bus. The parameter descriptions are:
    - message: The WCF message content.
    - fromClient: Indicates if this message came from the client.
    - from: The channel ID that this message was sent from. The channel ID will be generated when any kind of channel is created, which will be explained in the following articles.
    - to: The channel ID that should receive this message. In the Request-Reply and Duplex MEPs this is necessary, since the reply message must be received by the channel which sent the related request message.

    The SendReply method is responsible for sending the reply message. It's very similar to the previous one but has no "from" parameter. This is because there is no need to reply to a reply message in any MEP.

    The Receive method is responsible for waiting for an incoming message, including request messages and specified reply messages. It returns a BusMessage object, which carries some information about the channel. The code of the BusMessage class is:

    public class BusMessage
    {
        public string MessageID { get; private set; }
        public string From { get; private set; }
        public string ReplyTo { get; private set; }
        public string Content { get; private set; }

        public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
        {
            MessageID = messageId;
            From = fromChannelId;
            ReplyTo = replyToChannelId;
            Content = content;
        }
    }

    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is only for test and sample purposes. It can only be used if the service and client are in the same process. Very straightforward.
    public class InProcMessageBus : IBus
    {
        private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
        private readonly object _lock;

        public InProcMessageBus()
        {
            _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
            _lock = new object();
        }

        public string SendRequest(string message, bool fromClient, string from, string to = null)
        {
            var entity = new InProcMessageEntity(message, fromClient, from, to);
            _queue.TryAdd(entity.ID, entity);
            return entity.ID.ToString();
        }

        public void SendReply(string message, bool fromClient, string replyTo)
        {
            var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
            _queue.TryAdd(entity.ID, entity);
        }

        public BusMessage Receive(bool fromClient, string replyTo)
        {
            InProcMessageEntity e = null;
            while (true)
            {
                lock (_lock)
                {
                    var entity = _queue
                        .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                        .FirstOrDefault();
                    if (entity.Key != Guid.Empty && entity.Value != null)
                    {
                        _queue.TryRemove(entity.Key, out e);
                    }
                }
                if (e == null)
                {
                    Thread.Sleep(100);
                }
                else
                {
                    return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                }
            }
        }

        public void Dispose()
        {
        }
    }

    The InProcMessageBus stores the messages in objects of InProcMessageEntity, which can carry some extra information besides the WCF message itself.

    public class InProcMessageEntity
    {
        public Guid ID { get; set; }
        public string Content { get; set; }
        public bool FromClient { get; set; }
        public string From { get; set; }
        public string To { get; set; }

        public InProcMessageEntity()
            : this(string.Empty, false, string.Empty, string.Empty)
        {
        }

        public InProcMessageEntity(string content, bool fromClient, string from, string to)
        {
            ID = Guid.NewGuid();
            Content = content;
            FromClient = fromClient;
            From = from;
            To = to;
        }
    }

    Summary

    OK, now I have all the necessary stuff ready. The next step will be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side, especially relevant when we are using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, channel modes and the transport extension points, and identified what we should do to create our own WCF transport extension to let our WCF services use pulling mode on top of a message bus. And finally I provided some classes that will be used in the future posts, working against an in-process memory message bus, for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • C#/.NET Little Wonders: The Timeout static class

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. When I started the “Little Wonders” series, I really wanted to pay homage to parts of the .NET Framework that are often small but can help in big ways.  The item I have to discuss today really is a very small item in the .NET BCL, but once again I feel it can help make the intention of code much clearer and thus is worthy of note. The Problem - Magic numbers aren’t very readable or maintainable In my first Little Wonders Post (Five Little Wonders That Make Code Better) I mention the TimeSpan factory methods which, I feel, really help the readability of constructed TimeSpan instances. Just to quickly recap that discussion, ask yourself what the TimeSpan specified in each case below is 1: // Five minutes? Five Seconds? 2: var fiveWhat1 = new TimeSpan(0, 0, 5); 3: var fiveWhat2 = new TimeSpan(0, 0, 5, 0); 4: var fiveWhat3 = new TimeSpan(0, 0, 5, 0, 0); You’d think they’d all be the same unit of time, right?  After all, most overloads tend to tack additional arguments on the end.  But this is not the case with TimeSpan, where the constructor forms are:     TimeSpan(int hours, int minutes, int seconds);     TimeSpan(int days, int hours, int minutes, int seconds);     TimeSpan(int days, int hours, int minutes, int seconds, int milliseconds); Notice how in the 4 and 5 parameter version we suddenly have the parameter days slipping in front of hours?  This can make reading constructors like those above much harder.  Fortunately, there are TimeSpan factory methods to help make your intention crystal clear: 1: // Ah! Much clearer! 2: var fiveSeconds = TimeSpan.FromSeconds(5); These are great because they remove all ambiguity from the reader!  So in short, magic numbers in constructors and methods can be ambiguous, and anything we can do to clean up the intention of the developer will make the code much easier to read and maintain. Timeout – Readable identifiers for infinite timeout values In a similar way to TimeSpan, let’s consider specifying timeouts for some of .NET’s (or our own) many methods that allow you to specify timeout periods. For example, in the TPL Task class, there is a family of Wait() methods that can take TimeSpan or int for timeouts.  Typically, if you want to specify an infinite timeout, you’d just call the version that doesn’t take a timeout parameter at all: 1: myTask.Wait(); // infinite wait But there are versions that take the int or TimeSpan for timeout as well: 1: // Wait for 100 ms 2: myTask.Wait(100); 3:  4: // Wait for 5 seconds 5: myTask.Wait(TimeSpan.FromSeconds(5); Now, if we want to specify an infinite timeout to wait on the Task, we could pass –1 (or a TimeSpan set to –1 ms), which what the .NET BCL methods with timeouts use to represent an infinite timeout: 1: // Also infinite timeouts, but harder to read/maintain 2: myTask.Wait(-1); 3: myTask.Wait(TimeSpan.FromMilliseconds(-1)); However, these are not as readable or maintainable.  If you were writing this code, you might make the mistake of thinking 0 or int.MaxValue was an infinite timeout, and you’d be incorrect.  Also, reading the code above it isn’t as clear that –1 is infinite unless you happen to know that is the specified behavior. 
To make the code like this easier to read and maintain, there is a static class called Timeout in the System.Threading namespace which contains definition for infinite timeouts specified as both int and TimeSpan forms: Timeout.Infinite An integer constant with a value of –1 Timeout.InfiniteTimeSpan A static readonly TimeSpan which represents –1 ms (only available in .NET 4.5+) This makes our calls to Task.Wait() (or any other calls with timeouts) much more clear: 1: // intention to wait indefinitely is quite clear now 2: myTask.Wait(Timeout.Infinite); 3: myTask.Wait(Timeout.InfiniteTimeSpan); But wait, you may say, why would we care at all?  Why not use the version of Wait() that takes no arguments?  Good question!  When you’re directly calling the method with an infinite timeout that’s what you’d most likely do, but what if you are just passing along a timeout specified by a caller from higher up?  Or perhaps storing a timeout value from a configuration file, and want to default it to infinite? For example, perhaps you are designing a communications module and want to be able to shutdown gracefully, but if you can’t gracefully finish in a specified amount of time you want to force the connection closed.  You could create a Shutdown() method in your class, and take a TimeSpan or an int for the amount of time to wait for a clean shutdown – perhaps waiting for client to acknowledge – before terminating the connection.  So, assume we had a pub/sub system with a class to broadcast messages: 1: // Some class to broadcast messages to connected clients 2: public class Broadcaster 3: { 4: // ... 5:  6: // Shutdown connection to clients, wait for ack back from clients 7: // until all acks received or timeout, whichever happens first 8: public void Shutdown(int timeout) 9: { 10: // Kick off a task here to send shutdown request to clients and wait 11: // for the task to finish below for the specified time... 12:  13: if (!shutdownTask.Wait(timeout)) 14: { 15: // If Wait() returns false, we timed out and task 16: // did not join in time. 17: } 18: } 19: } We could even add an overload to allow us to use TimeSpan instead of int, to give our callers the flexibility to specify timeouts either way: 1: // overload to allow them to specify Timeout in TimeSpan, would 2: // just call the int version passing in the TotalMilliseconds... 3: public void Shutdown(TimeSpan timeout) 4: { 5: Shutdown(timeout.TotalMilliseconds); 6: } Notice in case of this class, we don’t assume the caller wants infinite timeouts, we choose to rely on them to tell us how long to wait.  So now, if they choose an infinite timeout, they could use the –1, which is more cryptic, or use Timeout class to make the intention clear: 1: // shutdown the broadcaster, waiting until all clients ack back 2: // without timing out. 3: myBroadcaster.Shutdown(Timeout.Infinite); We could even add a default argument using the int parameter version so that specifying no arguments to Shutdown() assumes an infinite timeout: 1: // Modified original Shutdown() method to add a default of 2: // Timeout.Infinite, works because Timeout.Infinite is a compile 3: // time constant. 4: public void Shutdown(int timeout = Timeout.Infinite) 5: { 6: // same code as before 7: } Note that you can’t default the ShutDown(TimeSpan) overload with Timeout.InfiniteTimeSpan since it is not a compile-time constant.  The only acceptable default for a TimeSpan parameter would be default(TimeSpan) which is zero milliseconds, which specified no wait, not infinite wait. 
Summary While Timeout.Infinite and Timeout.InfiniteTimeSpan are not earth-shattering in terms of functionality, they do give you handy, readable constant values that you can use in your programs to help increase readability and maintainability when specifying infinite timeouts, both for the various timeout parameters in the BCL and in your own applications. Technorati Tags: C#,CSharp,.NET,Little Wonders,Timeout,Task

    Read the article

  • Office add on saves you time if you use Moodle

    - by Brian Scarbeau
Moodle is a free e-learning content management program. It can take a great deal of time to set up because you need to upload your Office files to Moodle. Now, Microsoft has made that job easier with their new Office add-in for Moodle. With it you can save directly into Moodle.   Here are the instructions on how to use it; just change the URL to the one for your own Moodle site. 1. Go to this site and download and install the software. http://www.educationlabs.com/projects/officeaddinformoodle/Pages/default.aspx 2. Open Office Word (in this example) and then select Save to Moodle. (Notice you can also open files that you have stored in Moodle, make changes, and then save them back to Moodle. Wow!) 3. Now, because this is the first time you are using this feature, you will see a dialog box that looks like this: enter the Moodle website address exactly as you see here, along with your username and password for Moodle, and click the checkbox to be remembered. 4. After you click on Save to Moodle you should see a dialog box like this: 5. Click the plus sign on the left next to Lake Highland Preparatory School - Online Learning. 6. You will now see the listing of your Moodle classes. Click on the class that you want your file to go to and save. Now you can use this file in Moodle. Good luck!

    Read the article

  • Cross platform application revolution

    - by anirudha
Every developer knows that if they make a Windows application, it works only on Windows. That is a small, obvious limitation, but it is also a losing point for Windows applications: it makes developers think small, building only for Windows while others build only for Mac. This is a big reason behind the success of the web, because who would purchase an operating system just to use an application from another platform, when they cannot even try it first? That is what is better about the web: whether it is IE 6 (no problem, anything from IE 6 to IE 8), Chrome up to Chrome 8, or Firefox up to Firefox 3.6.13 (even a beta), a good website is shown the same in every browser, with perhaps only minor differences. Thinking in terms of cross-platform application development is much bigger than making an application that is only for some audience, where the audience is divided by the OS people use: if they use a Mac they cannot use this, if they use Windows they cannot use that. The web is for everyone, from a child to a grandfather, male and female; everyone can use the internet without worrying about what they have, Windows or Mac, or which browser, even one as silly as IE 6. The cross-platform approach has one great thing going for it: people. Everyone can use it without running into the kind of problem that comes up on Windows: "some component is missing, click here to get it", or "you cannot use this software because you have Windows SP1, SP2 or SP3; you need to install this first". That nonsense comes mainly from Microsoft software. Last year I found an issue with the Web Platform Installer where it forced the user to install another product when getting something from WPI. For example: you need to install Visual Studio 2008 before installing Visual Studio 2010 Express. Can anyone tell me why a user should get the old 2008 version when what they want is the latest Express version? I never went back to check whether the issue was solved. Another thing is that you cannot get IE 9 on Windows XP. In that case, do not think or worry about it, because Firefox and Chrome are much better. The silliness from Microsoft goes too far: they never tell you about Firebug, and even when they discuss the weak tool in IE they call it a developer tool, because they are Microsoft and they only think about how to market their products. You need to install many things for no reason, such as various SQL Server components even if you use another RDBMS; you cannot say no, because you need a tool and the tool requires a useless SQL Server component. I have never found web software that forces me to install this for that, and that for this, before it will let me in. That is another good thing about the web: nothing is required up front. You do not need to install the .NET Framework 4 before enjoying Facebook or Twitter. You may have noticed that Microsoft's failed project, Window Planet, forces you to get Silverlight before going there; I had never heard of it until a friend told me about it some months ago, and I found nothing better there. What would users do if Facebook forced them to install Silverlight, Adobe Flash, or maybe the Microsoft .NET Framework 4? If you did not install it, Facebook would tell you "bye bye, never come here before installing the Microsoft .NET Framework 4". The door opens for you only after installing it, not before, just like a mother telling her child "say sorry before you come back in the house" when they have done something wrong. The web never forces you to do something like that for it. Sometimes a site even lets you use an account from another website there, which is a very fast login for you, because they know the importance of your time.

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
What is PerformancePoint Services? Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us see what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved along with numerous enhancements and additional functionality. New PerformancePoint Services features PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. New features and enhancements of SharePoint 2010 PerformancePoint Services • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You also can include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data. • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards. • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks. 
• Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on information that is most relevant. • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web parts on the same page. • Easier to author and publish dashboard items by using Dashboard Designer. • SQL Server Analysis Services 2008 support. • Increased support for accessibility compliance in individual reports and scorecards. • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web part can be added to PerformancePoint dashboards or any SharePoint Server page. • Create analytics reports to better understand underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting. To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn’t include this feature. There are still many choices in SharePoint family of products that include SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an unwanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (e.g. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex, it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure. 
The parameter list for the stored procedure definition might look like: 1: CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary 2: @startDate datetime, 3: @endDate datetime, 4: @customerIds IntegerListTableType READONLY   Note the ‘READONLY’ keyword following the declaration of the @customerIds parameter. SQL Server requires that any table-valued parameter be marked as ‘READONLY’, and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this: 1: DECLARE @customerIdList IntegerListTableType 2: INSERT @customerIdList VALUES (1) 3: INSERT @customerIdList VALUES (2) 4: INSERT @customerIdList VALUES (3) 5:  6: EXEC dbo.rpt_CustomerTransactionSummary 7: @startDate = '2012-05-01', 8: @endDate = '2012-06-01', 9: @customerIds = @customerIdList   Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO .NET Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure: 1: var customerIdsParameter = new SqlParameter(); 2: customerIdsParameter.Direction = ParameterDirection.Input; 3: customerIdsParameter.TypeName = "IntegerListTableType"; 4: customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");   All we’re doing here is new’ing up an instance of SqlParameter, setting the parameter’s direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this: 1: public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName) 2: { 3: var integerListDataTable = new DataTable(); 4: integerListDataTable.Columns.Add(columnName, typeof(int)); 5: foreach (var intValue in intValues) 6: { 7: var nextRow = integerListDataTable.NewRow(); 8: nextRow[columnName] = intValue; 9: integerListDataTable.Rows.Add(nextRow); 10: } 11:  12: return integerListDataTable; 13: }   Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time. 
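Before moving on to the more advanced options, it may help to see the parameter above actually attached to a command. Here is a hedged sketch of executing the stored procedure from ADO.NET (the connection string, the result column read from the reader, and the runner class name are assumptions for illustration only; explicitly marking the parameter as SqlDbType.Structured is the conventional way to send a table-valued parameter):

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class CustomerTransactionSummaryRunner
{
    // Assumed connection string; substitute your own.
    private const string ConnectionString =
        @"Server=.;Database=Sales;Integrated Security=true";

    public static void Run(DateTime startDate, DateTime endDate,
                           IEnumerable<int> selectedCustomerIds)
    {
        // Build the single-column DataTable, as described above.
        var customerIdTable = new DataTable();
        customerIdTable.Columns.Add("Value", typeof(int));
        foreach (var id in selectedCustomerIds)
        {
            customerIdTable.Rows.Add(id);
        }

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("dbo.rpt_CustomerTransactionSummary", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@startDate", startDate);
            command.Parameters.AddWithValue("@endDate", endDate);

            // Table-valued parameters are passed as Structured parameters.
            var customerIdsParameter = command.Parameters.AddWithValue("@customerIds", customerIdTable);
            customerIdsParameter.SqlDbType = SqlDbType.Structured;
            customerIdsParameter.TypeName = "dbo.IntegerListTableType";

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The column name read here is illustrative only.
                    Console.WriteLine(reader["CustomerId"]);
                }
            }
        }
    }
}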
I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters. What About Reporting Services? Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post which I’ll be putting together and posting sometime in the near future.
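As a rough illustration of the SqlDataRecord approach mentioned above (a sketch based on the general ADO.NET API rather than on Erland's article; the type name and the Value column assume the IntegerListTableType defined earlier), the id list can be streamed to the server without materializing a DataTable at all:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class IntegerListStreaming
{
    // Streams integers as SqlDataRecords so ADO.NET never has to build
    // an in-memory DataTable; useful for large id lists.
    public static IEnumerable<SqlDataRecord> ToSqlDataRecords(this IEnumerable<int> values)
    {
        // The metadata must match the user defined type: one INT column named Value.
        var metaData = new SqlMetaData("Value", SqlDbType.Int);
        foreach (var value in values)
        {
            var record = new SqlDataRecord(metaData);
            record.SetInt32(0, value);
            yield return record;
        }
    }

    public static SqlParameter ToIntegerListParameter(this IEnumerable<int> values, string parameterName)
    {
        // Note: ADO.NET rejects an empty stream of records, so pass a null
        // Value (or omit the parameter) when the list is empty.
        return new SqlParameter(parameterName, SqlDbType.Structured)
        {
            TypeName = "dbo.IntegerListTableType",
            Value = values.ToSqlDataRecords()
        };
    }
}

A parameter built this way is attached to a SqlCommand and executed exactly like the DataTable-backed version shown earlier.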

    Read the article

  • Sevensteps and I are joining forces

    - by Dennis Vroegop
As of today, I will be partnering with Sevensteps when it comes to developing great Surface, Windows Phone 7 and Windows 7 Touch applications. Below you’ll find the press release we sent out today. I am looking forward to this partnership and expect great things coming from us both in the future!   Dennis Vroegop, Microsoft MVP, joins Sevensteps partner network 1 March 2011, Seattle / Amersfoort Today Dennis Vroegop and Bart Roozendaal, both Microsoft Most Valuable Professionals for Microsoft Surface, announce that Dennis Vroegop is joining the Sevensteps partner network. Dennis and Bart already worked together very closely through the Microsoft MVP connection, but decided to combine their efforts to make the new Microsoft Surface, and our solutions for it, a success. Dennis will join the other Sevensteps partners in creating state-of-the-art solutions for Microsoft Surface, Windows Phone 7 and Windows 7 Touch. Dennis brings a vast amount of knowledge about these technologies, as well as his network in the Dutch developer community. With Dennis joining the Sevensteps partner network we bring unique expertise, power and insight into these platforms that no other company worldwide can offer. This step brings us a whole lot closer to our goal of making Sevensteps the knowledge hub of choice for Microsoft Surface. About Dennis Vroegop Dennis is a Microsoft MVP for Microsoft Surface and chairman of the Dutch dotNed user group. He has a long history of promoting Microsoft Surface in the developer community. Dennis is a regular speaker at local and international conferences and a frequent writer of articles, on Microsoft Surface among other topics. Dennis has a bachelor’s degree in computer sciences and has spent all of his professional life writing software for the Microsoft platform. About Sevensteps For more information about Sevensteps and Bart Roozendaal please visit http://www.sevensteps.com Tags: surface,wp7,windows touch

    Read the article

  • Chunking a List - .NET vs Python

    - by Abhijeet Patel
Chunking a List As I mentioned last time, I'm knee deep in Python these days. I come from a statically typed background so it's definitely a mental adjustment. List comprehensions are big in Python, and having worked with a few of them I can see why. Let's say we need to chunk a list into sublists of a specified size. Here is how we'd do it in C#:  static class Extensions  {      public static IEnumerable<List<T>> Chunk<T>(this List<T> l, int chunkSize)      {          if (chunkSize <= 0)          {              throw new ArgumentException("chunkSize must be positive", "chunkSize");          }          for (int i = 0; i < l.Count; i += chunkSize)          {              yield return new List<T>(l.Skip(i).Take(chunkSize));          }      }  }    static void Main(string[] args)  {          var l = new List<string> { "a", "b", "c", "d", "e", "f", "g" };          foreach (var list in l.Chunk(7))          {              string str = list.Aggregate((s1, s2) => s1 + "," + s2);              Console.WriteLine(str);          }  }   A little wordy but still pretty concise thanks to LINQ. On each iteration we skip the elements we have already processed and yield out a new List of (at most) chunkSize elements. The Python implementation is a bit more terse: def chunkIterable(iter, chunkSize):      '''Chunks an iterable         object into a list of the specified chunkSize     '''        assert hasattr(iter, "__iter__"), "iter is not an iterable"      for i in xrange(0, len(iter), chunkSize):          yield iter[i:i + chunkSize]    if __name__ == '__main__':      l = ['a', 'b', 'c', 'd', 'e', 'f']      generator = chunkIterable(l, 2)      try:          while(1):              print generator.next()      except StopIteration:          pass   xrange generates the indices in the specified range (taking a start, stop and step) and returns a lazily evaluated sequence that can be used in a for loop (much like using a C# iterator in a foreach loop). Since chunkIterable has a yield statement, the method becomes a generator as well. iter[i:i + chunkSize] essentially slices the list based on the current iteration index and chunkSize, creating a new list that we yield out to the caller one chunk at a time. A generator, much like an iterator, is a state machine: each subsequent call to it remembers where the last call left off and resumes execution from that point. The caveat to keep in mind is that since variables are not explicitly typed we need to ensure that the object passed in is iterable using hasattr(iter, "__iter__"). This way we can perform chunking on any object which is an "iterable", very similar to accepting an IEnumerable in the .NET world.
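As a follow-up sketch (not from the original post), here is a C# version that accepts any IEnumerable<T> rather than just List<T>, mirroring the Python version's willingness to take any iterable and avoiding the repeated Skip/Take scans of the List-only extension above:

using System;
using System.Collections.Generic;

public static class ChunkingExtensions
{
    // Chunks any sequence in a single pass, yielding lists of up to chunkSize items.
    public static IEnumerable<List<T>> ChunkAny<T>(this IEnumerable<T> source, int chunkSize)
    {
        if (source == null) throw new ArgumentNullException("source");
        if (chunkSize <= 0) throw new ArgumentException("chunkSize must be positive", "chunkSize");

        var currentChunk = new List<T>(chunkSize);
        foreach (var item in source)
        {
            currentChunk.Add(item);
            if (currentChunk.Count == chunkSize)
            {
                yield return currentChunk;
                currentChunk = new List<T>(chunkSize);
            }
        }

        // Emit the final partial chunk, if any.
        if (currentChunk.Count > 0)
        {
            yield return currentChunk;
        }
    }
}

public static class ChunkingDemo
{
    public static void Main()
    {
        var letters = new[] { "a", "b", "c", "d", "e", "f", "g" };
        foreach (var chunk in letters.ChunkAny(2))
        {
            Console.WriteLine(string.Join(",", chunk));
        }
    }
}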

    Read the article

  • Review of ComponentOne Silverlight Controls (Free License Giveaway).

    - by mbcrump
    ComponentOne has several great products that target Silverlight Developers. One of them is their Silverlight Controls and the other is the XAP Optimizer. I decided that I would check out the controls and Xap Optimizer and feature them on my blog. After talking with ComponentOne, they agreed to take part in my Monthly Silverlight giveaway. The details are listed below: ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer’s license of ComponentOne Silverlight Controls + XAP Optimizer! (the winner also gets a license to Silverlight Spy) Random winner will be announced on March 1st, 2011! To be entered into the contest do the following things: Subscribe to my feed. Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) Retweet the following : I just entered to win free #Silverlight controls from @mbcrump and @ComponentOne http://mcrump.me/fTSmB8 ! Don’t change the URL because this will allow me to track the users that Tweet this page. Don’t forget to visit ComponentOne because they made this possible. MichaelCrump.Net provides Silverlight Giveaways every month. You can also see the latest giveaway by bookmarking http://giveaways.michaelcrump.net . ---------------------------------------------------------------------------------------------------------------------------------------------------------- Before we get started with the Silverlight Controls, here is a couple of links to bookmark: The Live Demos of the Silverlight Controls is located here. The XAP Optimizer page is here. One thing that I liked about the help documentation is that you can grab a PDF that only contains documentation for that control. This allows you to get the information you need without going through several hundred pages. You can also download the full documentation from their site.  ComponentOne Silverlight Controls I recently built a hobby project and decided to use ComponentOne Silverlight Controls. The main reason for this is that the controls are heavily documented, they look great and getting help was just a tweet or forum click away. So, the first question that you may ask is, “What is included?” Here is the official list below. I wanted to show several of the controls that I think developers will use the most. 1) ComponentOne’s Image Control – Display animated GIF images on your Silverlight pages as you would in traditional Web apps. Add attractive visuals with minimal effort. 2) HTML Host - Render HTML and arbitrary URI content from within Silverlight. 3) Chart3D - Create 3D surface charts with options for contour levels, zones, a chart legend and more. 4) PDFViewer - View PDF files in Silverlight! That is just a fraction of the controls available. If you want to check out several of them in a “real” application then check out my Silverlight page at http://michaelcrump.info. This brings me to the second part of the giveaway. XAP Optimizer – Is designed to reduce the size of your XAP File. It also includes built-in obfuscation and signing. With my personal project, I decided to use the XAP Optimizer by ComponentOne. It was so easy to use. You basically give it your .XAP file and it provides an output file. If you prefer to prune unused references manually then you can prune your XAP file manually by selecting the option below. I went ahead and added Obfuscation just to try it out and it worked great. 
You may notice from the screenshot below that I only obfuscated the assemblies that I built. Anyone can grab the other DLLs off the net, so there is no reason to obfuscate them. You also have the option to automatically sign your .xap with the SN.exe tool. So how did it turn out? Well, I reduced my XAP size from 2.4 to 1.8 with simply a click of a button, and I added obfuscation with another click of a button: Screenshot of no obfuscation on my XAP File   Screenshot of obfuscation on my XAP File with XAP Optimizer.   So, with two button clicks, I reduced my XAP file and obfuscated my assembly. What more could you want? Well, they also provide a nice HTML report that gives you an optimization summary. So what if you don’t want to launch this tool every time you deploy a Silverlight application? Well, the official documentation provides a way to do it in your build events in Visual Studio. Click the Build Events tab on the left side of the Properties window. Enter the following command in the Post-build event command line: $Program Files\ComponentOne\XapOptimizer\XapOptimizer.exe /cmd /p:$(ProjectDir)$(ProjectName).xoproj In the end, this is a great product. I love code that I don’t have to write and utilities that just work. ComponentOne delivers with both the Silverlight Controls and the XAP Optimizer. Don’t forget to leave a comment below in order to win a set of the controls! Subscribe to my feed

    Read the article

  • Visual WebGui launches a CompanionKit for enhanced developers experience

    - by Webgui
Visual WebGui has launched a major new live demo of the platform's concepts, features and controls, and the code behind them. The new Developer CompanionKit is a huge leap forward in the developer experience: by allowing developers a hands-on exploration of Visual WebGui, it should provide a better understanding of the system and the ability to utilize the great advantages of Visual WebGui in order to develop better-performing rich web applications. The CompanionKit is available online at companionkit.visualwebgui.com/main.wgx. We invite you to explore Visual WebGui via the new CompanionKit and to watch the CompanionKit intro video. Below is a screenshot taken from the live CompanionKit which shows developers how applying an alternate style to the appearance of a DataGridView is done, how it looks running live, and its code (C# or VB.NET). You can access the different controls (within the Controls section) from the left navigation bar or perform a free text search which shows the relevant results from all the sections; additional sections, such as a Concepts section, are expected to be added in the near future.   In addition, the new Developer CompanionKit, which was itself built with Visual WebGui, showcases the enhanced UI design capabilities for building more engaging, modern Web 2.0 applications. The CompanionKit will also be available for download in the next few days as part of the media for the 6.4 beta 2 SDK (.NET 2.0 or .NET 3.5), under "Help and Documentation".

    Read the article

  • O&rsquo;Reilly Deal of the Day 10/June/2014 - AngularJS Directives

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/10/orsquoreilly-deal-of-the-day-10june2014---angularjs-directives.aspxToday’s half-price E-Book offer from O’Reilly at http://shop.oreilly.com/product/9781783280339.do is AngularJS Directives. “AngularJS, propelled by Google, is quickly becoming one of the most popular JavaScript MVC frameworks available, working to invert the development paradigm and bring data-driven modularity to the web frontend. Directives serve as the core building blocks in AngularJS and enable you to create reusable models that mold around your data structures and breathe new life into the intersection of HTML and JavaScript.”

    Read the article

  • Windows Azure Evolution &ndash; TFS Integration (WAWS Part 2)

    - by Shaun
So this is the fourth blog post about the new features of Windows Azure and the second part on Windows Azure Web Sites. It does not focus only on WAWS, though, since the feature I am going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website through various approaches. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it will be deployed automatically once a new commit or build is available.   Create New Empty Web Site In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This will create an empty web site with only one shared instance and no database associated. Let’s specify the URL, region and subscription and click OK. After a few seconds our website will be ready, and we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available in my website even though I didn’t upload or deploy anything. This means the website will be charged even before anything is deployed, similar to a cloud service (hosted service). That is because once we create a website, the Windows Azure platform arranges a hosting process (w3wp.exe) on its group of virtual machines.   Create Project in TFS Preview Service and Set Up the Link Currently Windows Azure Web Sites can integrate with TFS and Git as its deployment source, and it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we had just created and then click “Set up TFS publishing”, there will be a dialog helping us to connect to this service. If you don’t have an account you can click the link shown below to request one. Assuming we already have an account for the TFS service, we first need to create a new project. Go to your TFS service website and create a new project, giving the project name, description and the process template. Then, back in the developer portal, click the “Set up TFS publishing” link. In the window that pops up I provide my TFS service URL and click the “Authorize now” link. Click the “Accept” button to allow Windows Azure to connect to my TFS service. Then it returns to the developer portal and lists all the projects in my account. Just select the one I had just created and click OK. Then our website is linked to the TFS project I specified and finally it will show something similar to the image below. This means the web site has been linked to TFS successfully.   Work with TFS Preview Service in VS2010 In the figure above there are some links that guide us on how to connect to the TFS server through Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC, you don’t need any extension. But if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to my TFS service, just open Visual Studio and, in the Team Explorer, add a new TFS server and paste in the URL of the TFS service from the developer portal. Then select the project I had just created and it will be listed in my Team Explorer. Now let’s start to build our website. 
Since the website we are going to build will be deployed to WAWS, it is NOT a cloud service and NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select “Add Solution to Source Control”, select the project I had just created, and then check the code in. Once the check-in finishes we can see that there is a build running on the TFS server. And if we go back to the developer portal, we will see on our web site's deployment page that there is a deployment running. In fact, once we link our web site to TFS, it creates a new build definition in our TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so that once our code has been compiled it is published to our web site from the TFS server. Once the build and deployment finish, we can see it is now active in our developer portal. Now we can see the web site that was created from my Visual Studio and deployed by my TFS.   Continuous Deployment through VS and TFS A big benefit of using TFS publishing is continuous deployment. Now if I change some code in Visual Studio, for example update some text on the home page and check in my changes, it will trigger a new build and deploy to my WAWS automatically. Even better, if we want to roll back to a previous version we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.   Q&A: Can a Web Site Use Storage and Work with a Worker Role? Stacy asked a question in my previous post: “can a web site use Windows Azure Storage and, furthermore, work with a worker role?” Since the web site is deployed on Windows Azure virtual machines in the data center, it must be able to use all Windows Azure features such as storage, SQL databases, CDN, etc. But since a web site is normally a standard ASP.NET web application, PHP website or NodeJS application, the Windows Azure SDK is not referenced by default. We can add it ourselves, though. In our sample project, let's right-click on my MVC project and click “Manage NuGet Packages”. In the dialog I search for Windows Azure packages and select “Windows Azure Storage” to install. Then we will have the assemblies to access Windows Azure storage, such as tables, queues and blobs. Since I already have a storage account, let’s have a quick demo, just to list all the blobs in a container. The code would be like this. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Web; 5: using System.Web.Mvc; 6: using Microsoft.WindowsAzure; 7: using Microsoft.WindowsAzure.StorageClient; 8:  9: namespace WAASTFSDemo.Controllers 10: { 11: public class HomeController : Controller 12: { 13: public ActionResult Index() 14: { 15: ViewBag.Message = "Welcome to Windows Azure!"; 16:  17: var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]"); 18: var account = new CloudStorageAccount(credentials, false); 19: var client = account.CreateCloudBlobClient(); 20: var container = client.GetContainerReference("shared"); 21: ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri); 22:  23: return View(); 24: } 25:  26: public ActionResult About() 27: { 28: return View(); 29: } 30: } 31: } 1: @{ 2: ViewBag.Title = "Home Page"; 3: } 4:  5: <h2>@ViewBag.Message</h2> 6: <p> 7: To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>. 8: </p> 9: <div> 10: <ul> 11: @foreach (var blob in ViewBag.Blobs) 12: { 13: <li>@blob</li> 14: } 15: </ul> 16: </div> Then just check in the code and it will be deployed to my web site. Finally we can see the blobs in my storage.   This is just an example, but it proves that web sites can connect to storage, and to tables, blobs and queues as well. So the answer to Stacy should be “yes”: the web site can use queue storage to work with a worker role.   Summary In this post I demonstrated how to integrate with TFS from Windows Azure Web Sites. You can see that our website can be built, uploaded and deployed automatically by the TFS service. All we need to do is to provide the TFS name and select the project. And it is not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, very similar to what we have shown below. Currently, only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link our enterprise TFS, and third-party TFS hosts such as CodePlex, to Windows Azure.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >