Search Results


  • A better way of handling the below program (maybe with Take/Skip/TakeWhile... or anything better)

    - by Newbie
    I have a data table which has only one row, but it has 44 columns. My task is to get the columns from the 4th column till the end. Hence, I have written the program below, which suits my requirement (kindly note that dt is the DataTable): List<decimal> lstDr = new List<decimal>(); Enumerable.Range(0, dt.Columns.Count).ToList().ForEach(i => { if (i > 3) lstDr.Add(Convert.ToDecimal(dt.Rows[0][i])); }); There is no harm in the program; it works fine. But I feel that there may be a better way of handling it, maybe with Skip or Take or TakeWhile or some other stuff. I am looking for a better solution than the one I implemented. Is it possible? I am using C# 3.0. Thanks.
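
    One way to express this with Skip (a minimal sketch; it assumes, like the original code, that the first four columns are dropped and every remaining cell converts cleanly to decimal):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        // DataRow.ItemArray exposes the row as object[], so the whole
        // operation becomes a single Skip/Select/ToList pipeline.
        List<decimal> lstDr = dt.Rows[0].ItemArray
            .Skip(4)                      // skip columns 0..3, same as "i > 3"
            .Select(Convert.ToDecimal)    // convert each remaining cell
            .ToList();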

    Read the article

  • Calling a view from the second project

    - by user3128303
    I have 2 projects in the solution (ASP.NET MVC). The first project is the main one; the other has 1 simple controller and views (Index, Layout). I want, directly from the menu in project 1, to refer to the Index view of the second project. I added a reference but I do not know what to do next. Can someone help? PS: Sorry for my English. Project 1 _Layout.cshtml <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>@ViewBag.Title</title> <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/modernizr-1.7.min.js")" type="text/javascript"></script> </head> <body> <div> <nav> <a href="@Url.Content("~")" id="current">Home</a> <a href="@Url.Content( /* via a link you want to get to the index from Project 2 */)">TEST</a> </nav> </div> <div id="main"> @RenderBody() </div> <div id="footer"> </div> </body> </html> Project 2 HomeController.cs namespace Panel.Controllers { public class HomeController : Controller { public ActionResult Index() { return View(); } } }
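
    One common approach (an assumption here, since the question doesn't say how the second project is hooked into routing) is to register the second project as an ASP.NET MVC Area and link to it with Url.Action; the area name "Panel" below is hypothetical, taken from the controller's namespace:

        @* In the Project 1 layout: route to Project 2's Home/Index via its area. *@
        <a href="@Url.Action("Index", "Home", new { area = "Panel" })">TEST</a>

    A plain project reference is not enough on its own: MVC must also be able to locate the second project's views at runtime, which is what an Area registration (or a custom view engine path) provides.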

    Read the article

  • Generating an identifier for objects so that they can be added to a hashtable I have created

    - by dukenukem
    I have a hashtable base class and I am creating different types of hashtable by deriving from it. I only allow it to accept objects that implement my IHashable interface. For example - class LinearProbingHashTable<T> : HashTableBase<T> where T: IHashable { ... ... ... } interface IHashable { /** * Every IHashable implementation should provide an identifying value for use in generating a hash key. */ int getIdentifier(); } class Car : IHashable { public String Make { get; set; } public String Model { get; set; } public String Color { get; set; } public int Year { get; set; } public int getIdentifier() { /// ??? } } Can anyone suggest a good method for generating an identifier for the car that can be used by the hash function to place it in the hash table? I am actually really looking for a general-purpose solution to generating an id for any given class. I would like to have a base class for all classes, HashableObject, that implements IHashable and its getIdentifier method. So then I could just derive from HashableObject, which would automatically provide an identifier for any instance. That means I wouldn't have to write a different getIdentifier method for every object I add to the hashtable. public class HashableObject : IHashable { public int getIdentifier() { // Looking for code here that would generate an id for any object... } } public class Dog : HashableObject { // Don't need to implement getIdentifier because the parent class does it for me }
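
    One general-purpose possibility (a sketch; it uses reference identity, so the id is stable for the lifetime of an object but differs between two value-equal instances and across runs):

        using System.Runtime.CompilerServices;

        public class HashableObject : IHashable
        {
            public int getIdentifier()
            {
                // Identity-based hash: ignores any GetHashCode override on
                // derived classes and never changes while the object is alive.
                return RuntimeHelpers.GetHashCode(this);
            }
        }

    If objects must keep the same identifier across runs (e.g. after serialization), derived classes still need to supply a value computed from their own fields, which is exactly what this shortcut cannot do.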

    Read the article

  • HELP IN JAVA!! URGENT

    - by annayena
    The following function accepts 2 strings, the 2nd (not the 1st) possibly containing *'s (asterisks). An * is a replacement for a string (empty, 1 char or more); it can appear (only in s2) once, twice, more or not at all, and it cannot be adjacent to another * (ab**c), no need to check that. public static boolean samePattern(String s1, String s2) It returns true if the strings are of the same pattern. It must be recursive, and not use any loops, static & global variables. Also it's PROHIBITED to use the method equals in the String class. Can use local variables & method overloading. Can use only these methods: charAt(i), substring(i), substring(i, j), length(). Examples: 1: TheExamIsEasy; 2: "The*xamIs*y" --- true 1: TheExamIsEasy; 2: "Th*mIsEasy*" --- true 1: TheExamIsEasy; 2: "*" --- true 1: TheExamIsEasy; 2: "TheExamIsEasy" --- true 1: TheExamIsEasy; 2: "The*IsHard" --- FALSE I have been stuck on this question for many hours now! I need the solution in Java; please kindly help me.
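
    The heart of the problem is a three-case recursion; a sketch (using only length(), charAt() and substring(), as the assignment requires):

        public static boolean samePattern(String s1, String s2) {
            // Pattern exhausted: match only if the text is exhausted too.
            if (s2.length() == 0)
                return s1.length() == 0;

            if (s2.charAt(0) == '*')
                // '*' matches the empty string (skip it), or consumes
                // one character of s1 (keep the '*' for the rest).
                return samePattern(s1, s2.substring(1))
                    || (s1.length() > 0 && samePattern(s1.substring(1), s2));

            // Literal character: heads must agree, then both strings advance.
            return s1.length() > 0
                && s1.charAt(0) == s2.charAt(0)
                && samePattern(s1.substring(1), s2.substring(1));
        }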

    Read the article

  • Doing a DELETE request from <a> tag without constructing forms?

    - by Mike Trpcic
    Is there any way to force an <a href> to do a DELETE request when clicked instead of a regular GET? The "Rails" way is to dynamically generate a <form> tag that adds the appropriate params so that it gets routed to the right place, however I'm not too fond of that much DOM manipulation for such a simple task. I thought about going the route of something like: $("a.delete").click(function(){ var do_refresh = false; $.ajax({ type: "DELETE", url: "my/path/to/delete", success: function(){ do_refresh = true; }, failure: function(){ alert("There is an error"); } }); return do_refresh; // If this is false, no refresh would happen. If it is true, it will do a page refresh. // Note: $.ajax is asynchronous, so do_refresh will still be false when this line runs. }); I'm not sure if the above AJAX stuff will work, as it's just theoretical. Is there any way to go about this? Note: Please don't recommend that I use the rails :method => :delete on link_to, as I'm trying to get away from using the rails helpers in many senses as they pollute your DOM with obtrusive javascript. I'd like this solution to be put in an external JS file and be included as needed.
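
    A sketch of a working variant (URL assumed from the question): it refreshes in the success callback instead of trying to return an asynchronous result, and it uses jQuery's error option, since "failure" is not a $.ajax option:

        $("a.delete").click(function () {
            $.ajax({
                type: "DELETE",
                url: "my/path/to/delete",
                success: function () {
                    // Reload only after the server has confirmed the delete.
                    window.location.reload();
                },
                error: function () {
                    alert("There is an error");
                }
            });
            return false; // always cancel the default GET navigation
        });

    Note that some older browsers and proxies handle the DELETE verb unreliably, which is why many frameworks tunnel it through POST with a _method override.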

    Read the article

  • Jquery .next() function not working

    - by Sundhar
    Guys, I am trying to do something like this: I have two hrefs and a text box in the middle of those (- TEXT +), so when I press the - and + the value in the text box must decrease or increase by one. " value="<%=addProduct.getInteger("ATR_WebMinQuantity",1)/addProduct.getInteger(MCRConstants.DM_ATR_LEGACY_CASE_VENDOR_PACK_SIZE,1) %" name="ADD_CART_ITEM<quantity" class="text" maxlength="3" / --! and I am using jQuery to + and - the value in the text box. Whenever I press + it works correctly, but for - it takes the TEXT field's name instead of its value. Any solution to make it take the value of the TEXT box? The jQuery used follows: $(".quantity .subtract").click(function () { var qtyInput = $(this).next('input'); var qty = parseInt(qtyInput.val()); if (qty > 1) qtyInput.val(qty - 1); qtyInput.focus(); return false; }); $(".quantity .add").click(function () { var qtyInput = $(this).prev('input'); var qty = parseInt(qtyInput.val()); if (qty >= 0 && (qty + 1 <= 999)) qtyInput.val(qty + 1); qtyInput.focus(); return false; });
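
    If the DOM order doesn't match what next()/prev() expect, one robust alternative (a sketch, assuming the two links and the input share a parent element with the class "quantity") is to find the input from the shared container instead of by sibling position:

        $(".quantity .subtract, .quantity .add").click(function () {
            // Locate the input relative to the container, not by position.
            var qtyInput = $(this).closest(".quantity").find("input.text");
            var qty = parseInt(qtyInput.val(), 10) || 0;
            var next = $(this).hasClass("add") ? qty + 1 : qty - 1;
            if (next >= 0 && next <= 999) qtyInput.val(next);
            qtyInput.focus();
            return false;
        });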

    Read the article

  • Generic list/sublist handling

    - by user628661
    Let's say we have a class class ComplexCls { public int Fld1; public string Fld2; //could be more fields } class Cls { public int SomeField; } and then some code class ComplexClsList : List<ComplexCls> { } ComplexClsList myComplexList; // fill myComplexList // same for Cls class ClsList : List<Cls> { } ClsList myClsList; We want to populate myClsList from myComplexList, something like (pseudocode): foreach (ComplexItem in myComplexList) { Cls ClsItem = new Cls(); ClsItem.SomeField = ComplexItem.Fld1; } The code to do this is easy and will be put in some method of ClsList. However I'd like to design this as generically as possible, for a generic ComplexCls. Note that the exact ComplexCls is known at the moment of using this code; only the algorithm should be generic. I know it can be done using (direct) reflection, but is there another solution? Let me know if the question is not clear enough. (It probably isn't.) [EDIT] Basically, what I need is this: having myClsList, I need to specify a DataSource (ComplexClsList) and a field from that DataSource (Fld1) that will be used to populate my SomeField
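
    A delegate avoids the reflection entirely (a sketch; the extension-method name is illustrative). The caller names the data source and the field once, as a lambda:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ListMapper
        {
            // Build a list of Cls from any source list, given a selector
            // that says which member feeds SomeField.
            public static List<Cls> ToClsList<TSource>(
                this IEnumerable<TSource> source,
                Func<TSource, int> fieldSelector)
            {
                return source
                    .Select(item => new Cls { SomeField = fieldSelector(item) })
                    .ToList();
            }
        }

        // Usage: DataSource = myComplexList, field = Fld1.
        // var myClsList = myComplexList.ToClsList(c => c.Fld1);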

    Read the article

  • How to find next (by a single parameter) element in c++? (stl) [closed]

    - by user2136963
    I have n humans of the THuman class. Each human has scored some points in each of two rounds (score1 and score2). Each human has its unique id. Score1 and score2 are also unique. Besides, a human has a score_t = score1 + score2, which can be the same for two of them. I need to add 6 fields to THuman which return the id of a human with: bigger score1, smaller score1, bigger score2, smaller score2, bigger score_t, smaller score_t (if there are many humans that satisfy these conditions, the one with the smallest difference in the corresponding parameter should be chosen, like score1 for fields 1 and 2). In other words, it's some kind of storing 3 sortings of the humans. Two more functions I need should get an argument x, set score1 or score2 to x, and then refresh some of the 6 fields above. If I needed sorting by only one variable, I would simply create a set and define the < operator for my class. But what is the solution for three parameters? Is it possible to use the STL here, or should I create my own lists/treaps? Answer: How to update set of pointers c++?
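
    The STL can handle this (a sketch, with illustrative names): keep three std::sets of THuman pointers, each with its own comparator; the six "neighbour" lookups are then just ++/-- on a set iterator, and a score update is erase, mutate, re-insert:

        #include <set>

        struct THuman { int id; int score1; int score2;
                        int total() const { return score1 + score2; } };

        struct ByScore1 {
            bool operator()(const THuman* a, const THuman* b) const
            { return a->score1 < b->score1; }
        };
        struct ByTotal {   // score_t can tie, so break ties by the unique id
            bool operator()(const THuman* a, const THuman* b) const
            { return a->total() != b->total() ? a->total() < b->total()
                                              : a->id < b->id; }
        };

        std::set<THuman*, ByScore1> byScore1;   // plus byScore2, byTotal
        std::set<THuman*, ByTotal>  byTotal;

        void setScore1(THuman* h, int x) {
            byScore1.erase(h); byTotal.erase(h);     // remove under the old key
            h->score1 = x;                           // mutate
            byScore1.insert(h); byTotal.insert(h);   // re-insert under the new key
        }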

    Read the article

  • Change value of adjacent vertices and remove self loop

    - by StereoMatching
    I am trying to write Karger's algorithm with a boost::graph example (the first column is the vertex, the others are adjacent vertices): 1 2 3 2 1 3 4 3 1 2 4 4 2 3 Assume I merge 2 into 1; I get the result 1 2 3 2 1 1 3 4 2 1 3 4 3 1 2 4 4 2 3 First question: how could I change the adjacent vertices ("2" to "1") of vertex 1? My naive solution template<typename Vertex, typename Graph> void change_adjacent_vertices_value(Vertex input, Vertex value, Graph &g) { for (auto it = boost::adjacent_vertices(input, g); it.first != it.second; ++it.first){ if(*it.first == value){ *(it.first) = input; //error C2106: '=' : left operand must be l-value } } } Apparently, I can't set the value of the adjacent vertices to "1" this way. The result I want after "change_adjacent_vertices_value": 1 1 3 1 1 1 3 4 2 1 3 4 3 1 2 4 4 2 3 Second question: how could I pop out the adjacent vertices? Assume I want to pop out the consecutive 1s from vertex 1. The result I expect: 1 1 3 1 3 4 2 1 3 4 3 1 2 4 4 2 3 Is there any function like "pop_adjacent_vertex" I could use?
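
    BGL adjacency iterators are read-only, so "renaming" a neighbour means rewiring edges. For an edge contraction, one usual shape (a sketch, assuming an undirected adjacency_list) is: snapshot v's neighbours, re-attach them to u, then clear and remove v, which also disposes of the would-be self-loops:

        #include <vector>
        #include <boost/graph/adjacency_list.hpp>

        typedef boost::adjacency_list<boost::vecS, boost::vecS,
                                      boost::undirectedS> Graph;
        typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;

        // Contract edge (u, v): move v's edges onto u, then delete v.
        void contract(Vertex u, Vertex v, Graph& g) {
            // Snapshot the neighbours first: we must not mutate the graph
            // while the adjacency iterators are live.
            std::vector<Vertex> neighbours(
                boost::adjacent_vertices(v, g).first,
                boost::adjacent_vertices(v, g).second);
            for (std::size_t i = 0; i < neighbours.size(); ++i)
                if (neighbours[i] != u)          // skip would-be self-loops
                    boost::add_edge(u, neighbours[i], g);
            boost::clear_vertex(v, g);           // drop all edges incident to v
            boost::remove_vertex(v, g);          // note: with vecS storage this
                                                 // invalidates vertex descriptors
        }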

    Read the article

  • Best (in your opinion) GIT workflow for case when releases are done on demand (in most cases 1-2 tickets at once)

    - by Robert
    I'm rather a Git newbie and I'm looking for your advice. In the company I work for we have a "workflow" where we have a single Git repo for our project with 2 branches: master and prod. All devs work on the master branch. If a ticket is done (from the dev perspective), we push to the repo. If all tests pass, we make a release. The issue is that in most cases, the request from the business guys sounds like: "please release ticket A" or "A && B". In most cases, I end up doing something like git checkout prod git cherry-pick --no-commit commit_hash git commit -m "blah blah to prod" -a As you can see this is not a perfect solution, and I'm under the strong impression that this is a perfect way to nowhere, especially when change A depends on changes B and C. Do you have any suggestions how to handle releases on demand if more devs work on the same branch and the flow looks like I described above? All suggestions are welcome. I cannot change the business processes; it will have to stay as it is - unfortunately.
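
    One common remedy (a sketch of a plain feature-branch flow; branch and tag names are illustrative) is to keep each ticket on its own branch cut from prod, so that "release ticket A" becomes a merge instead of a cherry-pick:

        git checkout -b ticket-A prod     # one branch per ticket
        # ...commit the work for ticket A...
        git checkout prod
        git merge --no-ff ticket-A        # release exactly what was asked for
        git tag release-A

    Dependencies also stop being a surprise: if ticket A's branch was started on top of B and C, merging A automatically brings B and C along, because they are in its history.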

    Read the article

  • Error 1606. Could not access network location %SystemDrive%\inetpub\wwwroot\ while installing on IIS

    - by Mark
    I'm trying to port our software installer which currently supports Windows 2000 and Windows 2003 to a Windows 2008 environment. Currently, the installer gets an error which reads "Error 1606. Could not access network location %SystemDrive%\inetpub\wwwroot." %SystemDrive% is without a doubt C:\, and C:\inetpub\wwwroot\ has the correct accessibility. It is interesting that if I hardcode the path in the following keys in the registry to C:\inetpub\wwwroot\, without using the environment variable, the installer works correctly. • HKLM/Software/Wow6432Node/Microsoft/InetStp/PathWWWRoot • KHLM/Software/Microsoft/InetStp/PathWWWRoot. This seems like a very poor hack. I do not want to tell our clients that they need to hack their registry before they will be able to install our product. Another option is to change the registry behind the scenes, do our install, and revert the registry keys to their original values at the end of the install, but obviously I don't like this solution either. I find it hard to believe that Microsoft would have done this without reason, so there must be an alternate approach to get these installers to work without modifying the registry. Any tips appreciated.

    Read the article

  • Git development/production workflow – how to set up repo?

    - by Blixt
    I'm working on a relatively small, but fast-changing project (a web application) with a few other developers. We're using Git for source control. We started out creating a stable branch which is what is deployed to the live production web server. The master branch is what is deployed to a secondary "unstable" server for testing purposes. Whenever we felt that the master branch was ready to go live, we merged it into stable. However, we came to a point where we wanted one of the later master commits, but not some of the commits before it, so we used cherry-pick to pull that change into stable. This creates a new commit with the same change as the one in master, and it feels as if we're losing the nice history that Git otherwise provides. Are there better ways of handling this type of unstable/stable deployment model? One solution I thought of was using feature branches, and only ever merging a feature branch into master once we want it to go live. Then we'll tag every deployment instead of having a stable branch.

    Read the article

  • MS Ajax Libraries and Configured Assemblies

    - by smehaffie
    Use Case You have brand new IIS servers that have .Net 3.5 installed and are migrating sites to the new servers.  In the process of migrating sites you come across some sites that get an error about the version of the AJAX libraries being referenced in the web.config.  In the web.config all the entries reference 1.0.61025.0, but the older version of the AJAX libraries is not installed on the new servers; only the latest version is installed, which comes with .Net 3.5.  So what are the options to fix this issue? Solutions 1) Install the older version of the AJAX Libraries: Although this works, IMO it is never a great idea to install an older version of a library after a newer version has been installed.  Plus, if all new applications use the latest version, is it worth the effort of installing the older version for a few legacy applications? 2) Update the web.config files so all references use the latest version (3.5.0.0):  This option is very time consuming and error prone. In addition, you will also have to update any pages where there is a register tag for the older libraries as well.  This would require you to redeploy any applications that have this issue. 3) Use the Configured Assembly capabilities of .Net (aka Assembly Bindings) to make any application that uses the older AJAX libraries use the new AJAX libraries.  IMO, this is the easiest, quickest and least invasive way to fix the issue.  Below are the steps to implement this fix. Solution #3 Do the following steps on the IIS servers where the issue is occurring.  The 2 assemblies that need assembly bindings created are System.Web.Extensions & System.Web.Extensions.Design. 1) Go to Start -> All Programs -> Administrative Tools -> Microsoft .NET Framework 2.0 Configuration. 2) Right click on "Configured Assemblies" to view the list of configured assemblies. 3) Left click on the right pane to bring up the menu and choose "Add". 4) Make sure "Choose an assembly from the assembly cache" is checked and click the "Choose Assembly" button. 5) Choose System.Web.Extensions (it does not matter what version). 6) Click the "Finish" button. 7) Binding Policy Tab      - Enter Requested Version = 1.0.61025.0      - Enter New Version = 3.5.0.0 8) Repeat steps 2-7 for the System.Web.Extensions.Design assembly. --------------------------------------------------------------------------------------------------------------------------------------------------------- Note: If "Microsoft .NET Framework 2.0 Configuration" does not exist under Admin Tools, use MMC to access it (see below): 1) Start -> Run -> Enter MMC 2) File -> Add/Remove Snap-In, then click the "Add" button 3) Choose ".Net 2.0 Configuration", then click the "Add" button and then the "Close" button. 4) On the "Add/Remove Snap-in" window, click the "OK" button. 5) Expand the tree on the right and you can start following the directions above for adding the configured assemblies. ---------------------------------------------------------------------------------------------------------------------------------------------------------
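
    For comparison, the same redirect can be expressed per application in web.config instead of machine-wide (a sketch; verify the public key token shown here against the assemblies in your own GAC):

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions"
                                publicKeyToken="31bf3856ad364e35" culture="neutral" />
              <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
            </dependentAssembly>
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions.Design"
                                publicKeyToken="31bf3856ad364e35" culture="neutral" />
              <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>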

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations), however recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about ambiguous behaviour of SQL Server Configurations I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations; however, the pertinent bits are: As the utility loads and runs the package, events occur in the following order: The dtexec utility loads the package. The utility applies the configurations that were specified in the package at design time and in the order that is specified in the package. (The one exception to this is the Parent Package Variables configurations. The utility applies these configurations only once and later in the process.) The utility then applies any options that you specified on the command line. The utility then reloads the configurations that were specified in the package at design time and in the order specified in the package. (Again, the exception to this rule is the Parent Package Variables configurations). The utility uses any command-line options that were specified to reload the configurations. Therefore, different values might be reloaded from a different location. The utility applies the Parent Package Variable configurations. The utility runs the package. To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager’s blog post Understand how SSIS package configurations are applied. The very nature of SQL Server Configurations means that the Connection String for the database holding the configuration values needs to be supplied from the command-line. Typically then the call to execute your package resembles this: dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\"", The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the Connection String stored in the package for the "SSISConfigurations" Connection Manager before then (2) applying the Connection String from the command-line and then (3) applying the same configurations all over again. In the packages that I inherited, that first attempt to apply the configurations would time out (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the Configurations timed out (i.e. 15 seconds per Configuration) - in a package that only executes for ~8 seconds when it gets to do its actual work, a delay of 2 minutes was simply unacceptable. We had three options for dealing with this: (1) get rid of the use of SQL Server Configurations and use .dtsConfig files instead; (2) edit the packages when they get deployed; (3) change the timeout on the "SSISConfigurations" Connection Manager. #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using Configurations in the first place. This left us with #3 - change the timeout on the Connection Manager. 
This is done by going into the properties of the Connection Manager, opening the "All" tab and changing the Connect Timeout property to some suitable value (in the screenshot below I chose 2 seconds). This change meant that the attempts to apply the SQL Server configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution but it's certainly better than it was. So there you have it - if you are having problems with SQL Server configuration timeouts within SSIS, try changing the timeout of the Connection Manager. Better still - don't bother using SQL Server Configurations in the first place. Even better - install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you. @Jamiet * Basically, we are leveraging an SSIS execution/logging framework in which the client had invested a lot of resources, and SQL Server Configurations are an integral part of that.

    Read the article

  • April 14th Links: ASP.NET, ASP.NET MVC, ASP.NET Web API and Visual Studio

    - by ScottGu
    Here is the latest in my link-listing blog series: ASP.NET Easily overlooked features in VS 11 Express for Web: Good post by Scott Hanselman that highlights a bunch of easily overlooked improvements that are coming to VS 11 (and specifically the free express editions) for web development: unit testing, browser chooser/launcher, IIS Express, CSS Color Picker, Image Preview in Solution Explorer and more. Get Started with ASP.NET 4.5 Web Forms: Good 5-part tutorial that walks through building an application using ASP.NET Web Forms and highlights some of the nice improvements coming with ASP.NET 4.5. What is New in Razor V2 and What Else is New in Razor V2: Great posts by Andrew Nurse, a dev on the ASP.NET team, about some of the new improvements coming with ASP.NET Razor v2. ASP.NET MVC 4 AllowAnonymous Attribute: Nice post from David Hayden that talks about the new [AllowAnonymous] filter introduced with ASP.NET MVC 4. Introduction to the ASP.NET Web API: Great tutorial by Stephen Walther that covers how to use the new ASP.NET Web API support built into ASP.NET 4.5 and ASP.NET MVC 4. Comprehensive List of ASP.NET Web API Tutorials and Articles: Tugberk Ugurlu links to a huge collection of articles, tutorials, and samples about the new ASP.NET Web API capability. Async Mashups using ASP.NET Web API: Nice post by Henrik on how you can use the new async language support coming with .NET 4.5 to easily and efficiently make asynchronous network requests that do not block threads within ASP.NET. ASP.NET and Front-End Web Development Visual Studio 11 and Front End Web Development - JavaScript/HTML5/CSS3: Nice post by Scott Hanselman that highlights some of the great improvements coming with VS 11 (including the free express edition) for front-end web development. HTML5 Drag/Drop and Async Multi-file Upload with ASP.NET Web API: Great post by Filip W. that demonstrates how to implement an async file drag/drop uploader using HTML5 and ASP.NET Web API. Device Emulator Guide for Mobile Development with ASP.NET: Good post from Rachel Appel that covers how to use various device emulators with ASP.NET and VS to develop cross-platform mobile sites. Fixing these jQuery: A Guide to Debugging: Great presentation by Adam Sontag on debugging with JavaScript and jQuery.  Some really good tips, tricks and gotchas that can save a lot of time. ASP.NET and Open Source Getting Started with ASP.NET Web Stack Source on CodePlex: Fantastic post by Henrik (an architect on the ASP.NET team) that provides step-by-step instructions on how to work with the ASP.NET source code we recently open sourced. Contributing to ASP.NET Web Stack Source on CodePlex: Follow-on to the post above (also by Henrik) that walks through how you can submit a code contribution to the ASP.NET MVC, Web API and Razor projects. Overview of the WebApiContrib project: Nice post by Pedro Reys on the new open source WebApiContrib project that has been started to deliver cool extensions and libraries for use with ASP.NET Web API. Entity Framework Entity Framework 5 Performance Improvements and Performance Considerations for EF5:  Good articles that describe some of the big performance wins coming with EF5 (which will ship with both .NET 4.5 and ASP.NET MVC 4). Automatic compilation of LINQ queries will yield some significant performance wins (up to 600% faster). 
ASP.NET MVC 4 and EF Database Migrations: Good post by David Hayden that covers the new database migrations support within EF 4.3 which allows you to easily update your database schema during development - without losing any of the data within it. Visual Studio What's New in Visual Studio 11 Unit Testing: Nice post by Peter Provost (from the VS team) that talks about some of the great improvements coming to VS11 for unit testing - including built-in VS tooling support for a broad set of unit test frameworks (including NUnit, XUnit, Jasmine, QUnit and more) Hope this helps, Scott

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard:     So, let's see what the code looks like, using the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object: TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs")); server.EnsureAuthenticated(); IBuildServer buildServer = (IBuildServer) server.GetService(typeof(IBuildServer)); General First we create an IBuildDefinition object for the team project and set a name and description for it: var buildDefinition = buildServer.CreateBuildDefinition(teamProject); buildDefinition.Name = "TestBuild"; buildDefinition.Description = "description here..."; Trigger Next up, we set the trigger type. For this one, we set it to individual, which corresponds to the Continuous Integration - Build each check-in trigger option: buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; Workspace For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build. buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map); buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak); Build Defaults In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name: buildDefinition.BuildController = buildServer.GetBuildController(buildController); buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild"; Process So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:   //Get default template var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First(); buildDefinition.Process = defaultTemplate;   There are several process parameters that can be set for the default build process template. Only one of them is required, the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value which can be any kind of object. This is rather messy, but fortunately, there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace, that simplifies working with this persistence format a bit. 
The following code shows how to set the BuildSettings information for a build definition: //Set process parameters var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters); //Set BuildSettings properties BuildSettings settings = new BuildSettings(); settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln"); settings.PlatformConfigurations = new PlatformConfigurationList(); settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug")); process.Add("BuildSettings", settings); buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process); The other process parameters of a build definition can be set using the same approach.   Retention Policy This one is easy: we just clear the default settings and set our own: buildDefinition.RetentionPolicyList.Clear(); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All); Save It! And we’re done, let's save the build definition: buildDefinition.Save(); That’s it!
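
    To verify the result, you might queue a build of the new definition right away (a sketch, reusing the buildServer instance from the start of the post):

        // Queue a build of the newly created definition.
        IQueuedBuild queuedBuild = buildServer.QueueBuild(buildDefinition);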

    Read the article

  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are “upgraded” in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don’t have the time to rewrite? Then one option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section: This activity wraps the call to MSBuild.exe and has the following parameters: Most of these properties are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties: CommandLineArguments: use this to send in/override MSBuild properties in your project, e.g. “/p:MyProperty=SomeValue” or MSBuildArguments (this will let you define the arguments in the build definition or when queuing the build). LogFile: name of the log file where MSBuild will log the output, e.g. “MyBuild.log”. LogFileDropLocation: location of the log file, e.g. BuildDetail.DropLocation + “\log”. Project: the project to execute, e.g. SourcesDirectory + “\BuildExtensions.targets”. ResponseFile: the name of the MSBuild response file, e.g. SourcesDirectory + “\BuildExtensions.rsp”. Targets: the target(s) to execute, e.g. New String() {“Target1”, “Target2”}. Verbosity: logging verbosity, e.g. Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal. Integrating with Team Build   If your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:   <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Target Name="MyTarget"> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="MyBuildStep" Message="My build step executed" Status="Succeeded"></BuildStep> </Target> </Project> When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message: The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. You can see that the path to the ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties. 
One solution here is to pass these properties in using the Command Line Arguments parameter:   Here we pass in the parameters with the corresponding values from the current build. The build log shows that the build step has in fact been inserted:   The problem, as you probably spotted, is that the build step is inserted at the top of the build log, instead of next to the MSBuild activity call. This is because we are using a legacy team build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate, where custom build steps show up at the top of the build log.
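
    The argument expression itself might look something like this (a sketch; the exact workflow expression depends on the variables available in your build process template):

        ' Workflow (VB) expression for the CommandLineArguments property:
        "/p:TeamFoundationServerUrl=" + BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString() +
        " /p:BuildUri=" + BuildDetail.Uri.ToString()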

    Read the article

  • Removing the XML Formatter from ASP.NET Web API Applications

    - by Rick Strahl
    ASP.NET Web API's default output format is supposed to be JSON, but when I access my Web APIs using the browser address bar I'm always seeing an XML result instead. When working on AJAX applications I like to test many of my AJAX APIs with the browser while working on them. While I can't debug all requests this way, GET requests are easy to test in the browser especially if you have JSON viewing options set up in your various browsers. If I preview a Web API request in most browsers I get an XML response like this: Why is that? Web API checks the HTTP Accept headers of a request to determine what type of output it should return by looking for content types that it has formatters registered for. This automatic negotiation is one of the great features of Web API because it makes it easy and transparent to request different kinds of output from the server. In the case of browsers it turns out that most send Accept headers that look like this (Chrome in this case): Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Web API inspects the entire list of headers from left to right (plus the quality/priority flag q=) and tries to find a media type that matches its list of supported media types in the list of formatters registered. In this case it matches application/xml to the Xml formatter and so that's what gets returned and displayed. To verify that Web API indeed defaults to JSON output you can open the request in Fiddler and pop it into the Request Composer, remove the application/xml header and see that the output returned comes back in JSON instead. An Accept header like this: Accept: text/html,application/xhtml+xml,*/*;q=0.9 or leaving the Accept header out altogether should give you a JSON response. Interestingly enough Internet Explorer 9 also displays JSON because it doesn't include an application/xml Accept header: Accept: text/html, application/xhtml+xml, */* which for once actually seems more sensible. Removing the XML Formatter We can't easily change the browser Accept headers (actually you can by delving into the config but it's a bit of a hassle), so can we change the behavior on the server? When working on AJAX applications I tend to not be interested in XML results and I always want to see JSON results at least during development. Web API uses a collection of formatters and you can go through this list and remove the ones you don't want to use - in this case the XmlMediaTypeFormatter. To do this you can work with the HttpConfiguration object and the static GlobalConfiguration object used to configure it: protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // remove default Xml handler var matches = config.Formatters .Where(f => f.SupportedMediaTypes .Where(m => m.MediaType.ToString() == "application/xml" || m.MediaType.ToString() == "text/xml") .Count() > 0) .ToList(); foreach (var match in matches) config.Formatters.Remove(match); } That LINQ code is quite a mouthful of nested collections, but it does the trick to remove the formatter based on the content type. 
You can also look for the specific formatter (XmlMediaTypeFormatter) by its type name, which is simpler, but it's better to search for the supported types as this will work even if there are other custom formatters added. Once removed, the browser request now results in a JSON response: It's a simple solution to a small debugging task that's made my life easier. Maybe you find it useful too… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api  ASP.NET
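
    The simpler, type-based removal mentioned above might look like this (a sketch; Formatters.XmlFormatter is Web API's built-in reference to the XmlMediaTypeFormatter instance):

        // Drop the built-in XML formatter directly. Unlike the media-type
        // search, this won't catch other custom formatters that emit XML.
        var config = GlobalConfiguration.Configuration;
        config.Formatters.Remove(config.Formatters.XmlFormatter);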

    Read the article

  • Improved appointment rendering in RadScheduler for ASP.NET AJAX, Q1 2010

    Now that the Q1 2010 release is out in the wild, we can sit down and discuss some of the changes we decided to make in the new release. One of them is the new appointment rendering of RadScheduler - a potentially breaking change, but a much needed one. If you have problems with your old custom skins, include the old base stylesheet along with your RadScheduler and set EnableEmbeddedBaseStylesheet=false in your RadScheduler. You can find the said base stylesheet attached to this post.   While trying to improve the performance of RadScheduler, I noticed that the number of resources slows down the rendering and overall performance considerably. This had to be expected - the images to support the appointment rounded corners (and the predefined resources) were quite large. However, I didn't take into account that all browsers keep, for performance reasons, their images uncompressed in memory and with the color depth of the current desktop. A simple calculation later I discovered that the appointment sprite itself takes 25MB of memory when loaded. Add 5 resources to the fray and you have 150MB of memory down with a single blow. As it turns out, a sprite image is not a panacea; if it gets too big, don't be afraid to break it in two. The loading time may suffer, but your browser suffers more while rendering a 25MB monster. First I thought of undertaking the aforementioned solution - breaking the appointment sprite in two and thus reducing the two appointment sprites to a mere 2MB uncompressed. Then I thought - the rounded corners are small - I can use borders and backgrounds to simulate rounded appointment borders while still keeping the same HTML structure. The gradients can be done with a single 10x50px image, plus we have a gain - border colors and backgrounds can be changed on the fly.  I started with five rendering elements at first, then tried with four and finally I settled on only three elements.  Behold the new appointment rendering (quite simple really):       On the left you can see that the first container has only top and bottom borders and a background. In fact, the background isn't even needed since it will be obscured by the elements on top of it. The whole first container is only needed for the four dots that reside in the four corners of the appointment. On top of this container is another one that holds the left and right borders and a slightly lighter background to create the illusion of a second lighter border beside the other two. At last, on top of all others is placed the text container that also holds the top and bottom borders and the gradient background. On the right you can see the final result - I'm quite happy with it and I hope you will be too. After creating the new rendering we took another step further - we decided to use alpha gradients for the resource rendering, thus supporting any color appointments with rounded corners and gradients. You can see some examples below: We plan on adding BorderColor and BackColor properties to the ResourceStyles definitions for Q1 SP1. However, with the new rendering in Q1 2010 we do support BackColor and BorderColor appointment properties - you only need to set AppointmentStyleMode=Default to keep RadScheduler from switching to Simple appointment rendering. Here is one screenshot of RadScheduler with appointments set to different colors: I hope that you will enjoy working with the new appointments in RadScheduler. RadScheduler base stylesheet

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received the following email. “Dear Pinal, [Removed unrelated content] We looked at your script and found out that in your script for disabling indexes, you have only included non-clustered indexes during the bulk insert and missed disabling the clustered indexes. Our DBA [name removed] has changed your script a bit and included all the clustered indexes. Since then our application is not working. When DBA [name removed] tried to enable the clustered indexes again he is facing an incorrect syntax error. We are in deep problem [word replaced] [Removed identity of organization and few unrelated stuff]“ I have replied to my client and helped them fix the problem. What really came to my attention is the concept of disabling a clustered index. Let us try to learn a lesson from this experience. In this case, there was no need to disable the clustered index at all. I had done the necessary work when I was called in to work on the tuning project. I had removed unused indexes, created a few optimal indexes and written a script to disable a few selected high-cost indexes when bulk insert (and similar) operations are performed. There was another script which rebuilt all the indexes as well. The solution worked till they included the clustered index in the disabling script. A clustered index is in fact the original table (or heap) physically ordered (further details are beyond the scope of this article) according to one or more keys (columns). When a clustered index is disabled, the data rows of the disabled clustered index cannot be accessed. This means no insert is possible. When non-clustered indexes are disabled, all the data related to them is physically deleted, but the definition of the index is kept in the system. For the same reason, even reorganization of the index is not possible till the clustered index (which was disabled) is rebuilt. Now let us come to the second part of the question, regarding receiving the error when the clustered index is ‘enabled’. This is a very common question I receive on the blog. (The following statement is written keeping the syntax of T-SQL in mind.) Clustered indexes can be disabled but cannot be enabled; they have to be rebuilt. It is intuitive to think that something which we have ‘disabled’ can be ‘enabled’, but the syntax for the same is ‘rebuild’. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’. Let us go over this example where inserting data is not possible when the clustered index is disabled. USE AdventureWorks GO -- Create Table CREATE TABLE [dbo].[TableName]( [ID] [int] NOT NULL, [FirstCol] [varchar](50) NULL, CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO -- Create Nonclustered Index CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC) GO -- Populate Table INSERT INTO [dbo].[TableName] SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third' GO -- Disable Nonclustered Index ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE GO -- Insert Data should work fine INSERT INTO [dbo].[TableName] SELECT 4, 'Fourth' UNION ALL SELECT 5, 'Fifth' GO -- Disable Clustered Index ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE GO -- Insert Data will fail INSERT INTO [dbo].[TableName] SELECT 6, 'Sixth' UNION ALL SELECT 7, 'Seventh' GO /* Error: Msg 8655, Level 16, State 1, Line 1 The query processor is unable to produce a plan because the index 'PK_TableName' on table or view 'TableName' is disabled. 
*/ -- Reorganizing the index will also throw an error ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE GO /* Error: Msg 1973, Level 16, State 1, Line 1 Cannot perform the specified operation on disabled index 'PK_TableName' on table 'dbo.TableName'. */ -- Rebuilding should work fine ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD GO -- Insert Data should work fine INSERT INTO [dbo].[TableName] SELECT 6, 'Sixth' UNION ALL SELECT 7, 'Seventh' GO -- Clean Up DROP TABLE [dbo].[TableName] GO I hope this example is clear enough. There were a few additional posts I had written years ago; I am listing them here. SQL SERVER – Enable and Disable Non Clustered Indexes Using T-SQL SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5 and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high level overview on what ASP.NET Web API is and how it fits into the ASP.NET stack you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by example introduction to ASP.NET Web API. All the code discussed in this article is available in GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine Article and updated for RTM release of ASP.NET Web API] Getting Started To start I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers. Add a Web API Controller Class Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1. Figure 1: This is how you create a new Controller Class in Visual Studio   Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I’ll use a Music Album model to demonstrate basic behavior of Web API. The model consists of albums and related songs where an album has properties like Name, Artist and YearReleased and a list of songs with a SongName and SongLength as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on Github. To add the file manually, create a new folder called Model, and add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current that I’ll use for the examples. Before we look at what goes into the controller class though, let’s hook up routing so we can access this new controller. Hooking up Routing in Global.asax To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods. 
Using an extension method to ASP.NET’s static RouteTable class, you can use the MapHttpRoute() (in the System.Web.Http namespace) method to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1. using System; using System.Web.Routing; using System.Web.Http; namespace AspNetWebApi { public class Global : System.Web.HttpApplication { protected void Application_Start(object sender, EventArgs e) { RouteTable.Routes.MapHttpRoute( name: "AlbumVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller="AlbumApi" } ); } } } This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate respectively. HTTP Verb Routing Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I’ve defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller, and a GET request maps to any method that starts with Get. So methods called Get() or GetAlbums() are matched by a GET request and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against the route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don’t want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST style resource APIs, it’s not required and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I’ll talk more about alternate routes later. When you’re finished with the initial creation of files, your project should look like Figure 2.   Figure 2: The initial project has the new API Controller and Album model   Creating a small Album Model Now it’s time to create some controller methods to serve data. For these examples, I’ll use a very simple Album and Songs model to play with, as shown in Listing 2. 
public class Song
{
    public string AlbumId { get; set; }
    [Required, StringLength(80)]
    public string SongName { get; set; }
    [StringLength(5)]
    public string SongLength { get; set; }
}

public class Album
{
    public string Id { get; set; }
    [Required, StringLength(80)]
    public string AlbumName { get; set; }
    [StringLength(80)]
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    [StringLength(150)]
    public string AlbumImageUrl { get; set; }
    [StringLength(200)]
    public string AmazonUrl { get; set; }

    public virtual List<Song> Songs { get; set; }

    public Album()
    {
        Songs = new List<Song>();
        Entered = DateTime.Now;

        // Poor man's unique Id off GUID hash
        Id = Guid.NewGuid().GetHashCode().ToString("x");
    }

    public void AddSong(string songName, string songLength = null)
    {
        this.Songs.Add(new Song()
        {
            AlbumId = this.Id,
            SongName = songName,
            SongLength = songLength
        });
    }
}

Once the model has been created, I also added an AlbumData class that generates some static data in memory, loaded onto a static .Current member. The signature of this class looks like this, and it's what I'll access to retrieve the base data:

public static class AlbumData
{
    // sample data - static list
    public static List<Album> Current = CreateSampleAlbumData();

    /// <summary>
    /// Create some sample data
    /// </summary>
    public static List<Album> CreateSampleAlbumData()
    { … }
}

You can check out the full code for the data generation online.

Creating an AlbumApiController

Web API shares many concepts with ASP.NET MVC, and you implement your API logic by subclassing the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb-based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

public class AlbumApiController : ApiController
{
    public IEnumerable<Album> GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
        return albums;
    }

    public Album GetAlbum(string title)
    {
        var album = AlbumData.Current
                             .SingleOrDefault(alb => alb.AlbumName.Contains(title));
        return album;
    }
}

To access these two methods, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you're not specifying the actions GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first URL mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
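As an aside (this is my illustration, not part of the original sample), if you'd rather not follow the Get/Post naming conventions, the verb attributes mentioned earlier let you keep verb routing with arbitrary method names. Here's a minimal sketch of an equivalent controller; the method names AllAlbums() and AlbumByTitle() are hypothetical:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class AlbumApiController : ApiController
{
    // Matched by HTTP GET because of the explicit [HttpGet] attribute,
    // even though the method name doesn't start with "Get"
    [HttpGet]
    public IEnumerable<Album> AllAlbums()
    {
        return AlbumData.Current.OrderBy(alb => alb.Artist);
    }

    // Also matched by GET; selected over AllAlbums() when a title
    // route or query string value is present
    [HttpGet]
    public Album AlbumByTitle(string title)
    {
        return AlbumData.Current
                        .SingleOrDefault(alb => alb.AlbumName.Contains(title));
    }
}

The matching behavior is the same as before: the parameterless method serves /albums and the title overload serves /albums/Dirty%20Deeds.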
Content Negotiation

When you access any of the URLs above from a browser, you get either an XML or a JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:

Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

IE9 asks for:

Accept: text/html, application/xhtml+xml, */*

Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that Web API supports by default, so it returns the default format, which is JSON.

This is an important and very useful feature that was missing from previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don't have to worry about the output format in your code.

Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute.

Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code – Web API automatically performs the deserialization from the content.

Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

$.getJSON("albums/", function (albums) {
    // make knockout template visible
    $(".album").show();

    // create view object and attach array
    var view = { albums: albums };
    ko.applyBindings(view);
});

Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

Figure 4: The Album Display sample uses JSON data loaded from Web API.
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it's entirely up to you to decide what to do with the data in your client code.

Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

$(".albumlink").live("click", function () {
    var id = $(this).data("id"); // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters collection and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).

If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return static results – into an HttpResponseMessage instance. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API's post-processing of the HTTP response on your behalf.

HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes.

For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation.
To do this, I can adjust the output format and headers as shown in Listing 4.

public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    // Create a new HttpResponse with Json Formatter explicitly
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new ObjectContent<IEnumerable<Album>>(
                           albums, new JsonMediaTypeFormatter());

    // Get Default Formatter based on Content Negotiation
    //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    resp.Headers.ConnectionClose = true;
    resp.Headers.CacheControl = new CacheControlHeaderValue();
    resp.Headers.CacheControl.Public = true;

    return resp;
}

This example returns the same IEnumerable<Album> value, but it wraps the response in an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:

var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

This provides you with an HttpResponseMessage object that's pre-configured with the default formatter based on Content Negotiation. Once you have an HttpResponseMessage object, you can easily control most HTTP aspects on it. What's sweet here is that there are many more detailed properties on HttpResponseMessage than on the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

Non-Serialized Results

The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method.

[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current.FirstOrDefault(alb => alb.AlbumName.StartsWith(title));
    if (album == null)
    {
        var resp = Request.CreateResponse<ApiMessageError>(
                        HttpStatusCode.NotFound,
                        new ApiMessageError("Album not found"));
        return resp;
    }

    // kinda silly - we would normally serve this directly
    // but hey - it's a demo.
    var http = new WebClient();
    var imageData = http.DownloadData(album.AlbumImageUrl);

    // create response and return
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageData);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

    return result;
}

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can't be found or a validation error – you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error object, as it's an image).
Note that if you are not using HTTP Verb-based routing, or are not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn't have an image.

When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

return Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found")
);

If the album can be found, the image is returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser.

There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out.

Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again

OK, let's get back to the image example. With the original HTTP Verb routing setup, there's no good way to serve the image. In order to return my album art image, I'd like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order to create a URL like this, I have to create a new Controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums: you can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action-based routing into an HTTP Verb routing controller – you can only map HTTP Verbs, and each method has to be unique based on its parameter signature. You can't have multiple GET operations mapped to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller.

There are a number of ways around this, but all involve additional controllers. Personally, I think it's easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.
For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

Now I can use either of the following URLs to access the image:

Custom route: (/albums/{title}/image)
http://localhost/aspnetWebApi/albums/PowerAge/image

Action route: (/albums/rpc/{action}/{title})
http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        // add errors into our client error model for client
        foreach (var prop in ModelState.Values)
        {
            var modelError = prop.Errors.FirstOrDefault();
            if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                error.errors.Add(modelError.ErrorMessage);
            else
                error.errors.Add(modelError.Exception.Message);
        }

        return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
    }

    // update song ids, which aren't provided by the client
    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    // see if album exists already
    var matchedAlbum = AlbumData.Current
                                .SingleOrDefault(alb => alb.Id == album.Id ||
                                                        alb.AlbumName == album.AlbumName);
    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
    {
        // replace the existing album in the static list
        var index = AlbumData.Current.IndexOf(matchedAlbum);
        AlbumData.Current[index] = album;
    }

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
    resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
                                     Encoding.UTF8, "text/plain");
    return resp;
}

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods.

Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiMessageError object. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client, along with my Conflict HTTP Status Code response.

If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the Post operation, you can hit the /albums/ URL to see that the new album was added.
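As an aside (not part of the original sample), you can exercise the same PostAlbum() endpoint from .NET client code using the HttpClient class that ships alongside Web API. Here's a minimal sketch; the base URL is a placeholder, and it assumes the Album model from Listing 2 plus references to the Web API client assemblies (System.Net.Http and System.Net.Http.Formatting):

using System;
using System.Net.Http;

class PostAlbumClient
{
    static void Main()
    {
        var client = new HttpClient();
        client.BaseAddress = new Uri("http://localhost/aspnetWebApi/");

        var album = new Album() { AlbumName = "Power Age", Artist = "AC/DC", YearReleased = 1977 };
        album.AddSong("Rock 'n Roll Damnation", "3:12");

        // PostAsJsonAsync serializes the album to JSON and sets the
        // Content-Type header to application/json
        var response = client.PostAsJsonAsync("albums/", album).Result;

        Console.WriteLine(response.StatusCode);
        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}

Note that PostAsJsonAsync() is an extension method from the System.Net.Http.Formatting assembly; without that reference you can build the JSON payload manually with StringContent instead.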
The client jQuery code to call the POST operation from the browser is shown in Listing 7.

var id = new Date().getTime().toString();
var album = {
    "Id": id,
    "AlbumName": "Power Age",
    "Artist": "AC/DC",
    "YearReleased": 1977,
    "Entered": "2002-03-11T18:24:43.5580794-10:00",
    "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
    "AmazonUrl": "http://www.amazon.com/…",
    "Songs": [
        { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
        { "SongName": "Downpayment Blues", "SongLength": 4.22 },
        { "SongName": "Riff Raff", "SongLength": 2.42 }
    ]
}

$.ajax({
    url: "albums/",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    processData: false,
    beforeSend: function (xhr) {
        // not required since JSON is default output
        xhr.setRequestHeader("Accept", "application/json");
    },
    success: function (result) {
        // reload list of albums
        page.loadAlbums();
    },
    error: function (xhr, status, p3, p4) {
        var err = "Error";
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server via POST. The data is turned into JSON, and the content type is set to application/json so that the server knows what to convert when deserializing into the Album instance.

The jQuery code hooks up success and failure events. On success, the list of albums is reloaded. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property.

REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In the example in Listing 7, I used JSON to post a serialized object to a server method that accepts a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods.

For example, another common way is to use plain UrlEncoded POST values. Web API supports model binding that works similarly to (but not the same as) MVC's model binding, where POST variables are mapped to properties of object parameters on the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

$.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before when passing JSON, here the data passed consists of URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle both the JSON object and the UrlEncoded POST values.

Dynamic Access to POST Data

There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically using a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
                         form.Get("AlbumName"), form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However, as a general rule, while ASP.NET's functionality is always available when Web API is hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET.

If your client is sending JSON to your server and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job.

Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content – the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API

Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here's the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current
                                .Where(alb => alb.AlbumName == title)
                                .SingleOrDefault();
    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "DELETE",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via a route value or the query string.

Routing Conflicts

In all of the requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing that I set up in Listing 1.
HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn't really fit – the return of an image, where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller, because of the limited mapping of HTTP verbs imposed by HTTP Verb routing.

To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:

[HttpGet]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // generally this should be done only on actual queryable results (EF etc.)
    // Done here because we're running with a static list, but otherwise it might be slow
    return albums.AsQueryable();
}

If you compile this code and try to access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn't work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL – it looks less like a method and more like a noun. I can then use the route that handles direct-action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    }
);

As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I've already done some minimal error handling in the examples. For example, in Listing 6, I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the Controller action.
[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
                    HttpStatusCode.BadRequest,
                    new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the installed Exception Filters and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
                                HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(
                            statusCode,
                            new ApiMessageError() { message = message });
    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic Exception Filter implementation.

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();

        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiError object
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception Filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

or you can globally assign the filter to all controllers by adding it to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can expect to see, on failure, a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.
You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions – you often don't want to display error information from 'unknown' exceptions, as it may contain sensitive system information or info that's not generally useful to users of your application/site.

This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one example of how you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • SSAS processing error: Client unable to establish connection; 08001; Encryption not supported on the client.; 08001

    - by Kevin Shyr
    After getting the cube to successfully deploy and process on Friday, I was baffled on Monday that the newly added dimension caused the cube processing to break. I followed my first instinct, discarded all my changes to revert back to Friday's version, and had no luck. The error message (attached below) did not help, as I was looking for some kind of SQL service error. After examining the Windows server log and the SQL Server log, I just couldn't see anything wrong.

After swearing for some time, and with the help of going off and working on something else for a while, I came back to the solution and looked at the data source. Even though I know I have never changed the provider (the default setup gave me SQL Native Client), I decided to change it and give OLE DB a try. This simple change allows my cube to process successfully again. While I don't understand why the same settings that worked last week don't work this week, I don't have all the information to say with certainty that nothing has changed in the environment (firewall changes, server updates, etc.).

SSAS processing error:

<Batch>
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
      <Object>
        <DatabaseID>DWH Sales Facts</DatabaseID>
        <CubeID>DWH Sales Facts</CubeID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>

Processing Dimension 'Date' completed.

Errors and Warnings from Response:

OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.

Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DWH Sales Facts', Name of 'DWH Sales Facts'.

Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Currency', Name of 'Currency' was being processed.

Errors in the OLAP storage engine: An error occurred while the 'Currency Dim ID' attribute of the 'Currency' dimension from the 'DWH Sales Facts' database was being processed.

Internal error: The operation terminated unsuccessfully.

Server: The operation has been cancelled.
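For illustration only (the original post doesn't show the connection strings), the provider switch described above boils down to changing the Provider keyword in the data source's connection string; the server and database names below are placeholders:

Before (SQL Server Native Client):
Provider=SQLNCLI10.1;Data Source=MyServer;Integrated Security=SSPI;Initial Catalog=DWHSalesFacts

After (OLE DB provider for SQL Server):
Provider=SQLOLEDB.1;Data Source=MyServer;Integrated Security=SSPI;Initial Catalog=DWHSalesFacts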

    Read the article

  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
    Laerte Junior

Laerte Junior has previously helped me personally to resolve an issue with Powershell installation on my computer. He did an awesome job. He has sent this wonderful article regarding performance counters for the readers of this blog. I really liked it, and I expect that all of you Powershell geeks will like it as well.

As a good DBA, you know that our social life is restricted to a few movies over the year and, when possible, a pizza in a restaurant next to your company's place, of course. So what we have to do is create methods through which we can streamline our daily processes, go home early, and eventually have a nice time with our family (and not sleep on the couch). As a consultant or fixed employee, one of our daily tasks is to monitor performance counters using Perfmon. To be honest, the IDE is getting more complicated. To deal with this, I thought of a solution using Powershell. Yes, with some lines of Powershell you can configure which counters to use, and with one more line you can already start collecting data.

Let's see one scenario: You are a consultant who has several clients and has just closed another project troubleshooting a SQL Server environment. You are to use Perfmon to collect data from the server, and you already have XML configuration files made with the counters that you will be using – one file for memory bottlenecks, one for CPU, etc. With one Powershell command line for each XML file, you start collecting. The output of the collection is a TXT file, set up to bulk insert into a SQL Server. With two command lines for each XML file, you complete the whole data collection process.

Creating an XML configuration file for memory counters:

Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance | Get-PerfCounterCounters | Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -NewFile

Creating an XML configuration file for the Buffer Manager counters Page lookups/sec, Page reads/sec, Page writes/sec and Page life expectancy:

Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" -NewFile

Then you start the collection:

Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt

To have the Buffer Manager collection cover one more counter, the Buffer cache hit ratio, just add the new counter to BufferManager.xml, omitting the -NewFile parameter:

Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml"

And start the collection:

Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt

You do not know which counters are in the Buffer Manager category? Simple!

Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters

Let's see one output file, as shown below. It is ready to bulk insert into SQL Server. As you can see, Powershell makes this process incredibly easy and fast. Do you want to see more examples?
Visit my blog at Shell Your Experience.

You can find more about Laerte Junior over here:
www.laertejuniordba.spaces.live.com
www.simple-talk.com/author/laerte-junior
www.twitter.com/laertejuniordba
SQL Server Powershell Extension Team: http://sqlpsx.codeplex.com/

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: SQL, SQL Add-On, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Powershell

    Read the article

  • AIX Grid Control 10.2.0.5 Communication and Monitoring Issue since 31-DEC-2010

    - by jayatheertha.rao(at)oracle.com
    Detailed symptoms for Oracle Management Service (OMS) 10.2.0.5 on AIX

Oracle Management Service 10.2.0.5 instances on AIX 5L remain active and functional, but the OMS instances fail to communicate with the Grid Control Management Agents. An SSLPeerUnverifiedException will be reported in the file $ORACLE_HOME/sysman/log/emoms.trc when the OMS attempts to connect to an Agent:

javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA12275)
at oracle.sysman.emSDK.emd.comm.EMDClient.authenticateHTTPConnection(EMDClient.java:2002)
at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1877)
at oracle.sysman.emSDK.emd.comm.EMDClient.getConnection(EMDClient.java:1810)
at oracle.sysman.emSDK.emd.comm.EMDClient.verifyHttpConnection(EMDClient.java:2540)
at oracle.sysman.emSDK.emd.comm.EMDClient.getResponseForRequest(EMDClient.java:2323)
at oracle.sysman.emSDK.emd.comm.EMDClient.getUploadManagerStatus(EMDClient.java:4853)
at oracle.sysman.eml.admin.rep.emdConfig.EmdConfigTargetsData.getEmdUploadData(EmdConfigTargetsData.java:1640)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

This error may be reported when:
- Accessing the Agent home page in Grid Control
- Setting preferred credentials for a target monitored by the Agent
- Managing metrics for a target monitored by the Agent

The jobs scheduled to be run by Agents can become non-responsive. The OMS log file $ORACLE_HOME/sysman/log/emoms.trc can show:

2010-12-31 00:06:58,204 [JobWorker 430:Thread-34] DEBUG emSDK.comm getStreamResponse.4015 - oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse_(EMDClient.java:4088)
at oracle.sysman.emSDK.emd.comm.EMDClient.getStreamResponse(EMDClient.java:4009)
at oracle.sysman.emSDK.emd.comm.EMDClient.remoteOperation(EMDClient.java:3404)
at oracle.sysman.emdrep.jobs.CommandManager.requestRemoteCommand(CommandManager.java:765)
at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:434)
at oracle.sysman.emdrep.jobs.commands.RemoteOp.executeCommand(RemoteOp.java:491)
at oracle.sysman.emdrep.jobs.BaseJobWorker.runStep(BaseJobWorker.java:614)
at oracle.sysman.emdrep.jobs.BaseJobWorker.doOneOperation(BaseJobWorker.java:738)
at oracle.sysman.emdrep.jobs.JobWorker.doOneOperation(JobWorker.java:306)
at oracle.sysman.emdrep.jobs.JobWorker.run(JobWorker.java:288)
at oracle.sysman.util.threadPoolManager.WorkerThread.run(Worker.java:261)

Detailed symptoms for Grid Control Management Agent 10.2.0.5 on AIX

Beginning 31-DEC-2010 00:00:00, 10.2.0.5 Management Agents running on the AIX 5L operating system will fail to monitor Oracle Application Server targets. As a result, the Availability Status for the Oracle Application Server targets will be in the "Metric Error" state.
NOTE: The 10.2.0.5.0 Agents would experience these errors regardless of the version/platform of the OMS.

The following metric error is seen in the console for the Oracle Application Server targets monitored by a Grid Control Management Agent 10.2.0.5 installed on AIX and experiencing a Root Certificate Authority issue:

Message oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated

The Grid Control Management Agent log file $ORACLE_HOME/sysman/log/emagentfetchlet.log (or $ORACLE_HOME/hostname/sysman/log/emagentfetchlet.log for a clustered Agent) includes the following errors:

2010-12-31 00:01:03,626 [nmefmgr_getJNIFetchlet] ERROR ias.ResponseMetric getResponseMetric.154 - Unable to compute application server status
oracle.sysman.emSDK.emd.fetchlet.FetchletException: oracle.sysman.emSDK.emd.comm.CommException: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at oracle.sysman.ias.ias.ResponseMetric.getResponseMetric(ResponseMetric.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:618)
at oracle.sysman.emd.fetchlets.JavaWrapperFetchlet.getMetric(JavaWrapperFetchlet.java:217)
at oracle.sysman.emd.fetchlets.FetchletWrapper.getMetric(FetchletWrapper.java:382)

Beginning 31-DEC-2010, 10.2.0.5 Management Agents on the AIX 5L platform will fail to secure or re-secure with the Oracle Management Service (OMS). This failure will cause installation of 10.2.0.5 Agents on the AIX 5L platform to fail.

NOTE: The 10.2.0.5.0 Agents would experience these errors regardless of the version/platform of the OMS.

The "emctl secure agent" command will fail with the following error, which will be written to the $ORACLE_HOME/sysman/log/secure.log file (or $ORACLE_HOME/hostname/sysman/log/secure.log for a clustered Agent):

2011-01-03 21:06:11,941 [main] ERROR agent.SecureAgentCmd main.207 - Failed to secure the Agent:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at com.sun.net.ssl.internal.ssl.SSLSessionImpl.getPeerCertificateChain(DashoA6275)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.checkUpload(SecureAgentCmd.java:478)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:249)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:200)

For the solution, refer to AIX Grid Control 10.2.0.5 SSL Communication and Monitoring Issue since 31-DEC-2010 (Doc ID 1275070.1)

    Read the article

< Previous Page | 921 922 923 924 925 926 927 928 929 930 931 932  | Next Page >