Search Results

Search found 65196 results on 2608 pages for 'add service reference'.


  • Running mysql 5.5 on centos 5.9

    - by gerrytan
    I installed mysql using yum install mysql-server on centos 5.9 and realized it's version 5.0. I need version 5.5, so I then did yum install mysql55-server, however I couldn't find a way to start the 5.5 server instead of 5.0. service mysqld start will start the 5.0 server, and removing mysql 5.0 doesn't help either because service mysqld start then fails to find the mysqld service. Update 1 Nov 2013: I noticed the mysql55 package was installed to /opt/rh/mysql55/root/usr/bin, so I prepended that to the start of my PATH env var, but service mysqld start still runs the 5.0 server. I tried running the server using the mysqld_safe located on the above mysql55 path, but it says [root@***** bin]# mysqld_safe Use "scl enable mysql55 'service ...'" invocation Not quite sure what that means. I checked the running mysql version by connecting to it using the mysql command line client. [root@***** bin]# mysql Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 2 Server version: 5.0.95 Source distribution

    Read the article

  • Unable to create a VSS snapshot of the source

    - by SuperFurryToad
    The following error is preventing me from cloning a Windows 7 64bit computer. Unable to create a VSS snapshot of the source volume(s). Error code: 2147754754 (0x80042302) I'm using VMWare vCenter Converter Standalone Client 4.0.1. Any ideas on what might be causing this? When I checked the services running, I noticed that the Volume Shadow Copy Service was set to manual. So I started the service and switched it to automatic. It still didn't work after that. I checked the event logs and I got the following errors: Event ID: 22 Description: Volume Shadow Copy Service error: A critical component required by the Volume Shadow Copy service is not registered. This might happened if an error occurred during Windows setup or during installation of a Shadow Copy provider. The error returned from CoCreateInstance on class with CLSID {e579ab5f-1cc4-44b4-bed9-de0991ff0623} and Name IVssCoordinatorEx2 is [0x80040154, Class not registered Event ID: 8193 Description: Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance. hr = 0x80040154, Class not registered

    Read the article

  • SQL SERVER – Fix : Error : 8501 MSDTC on server is unavailable. Changed database context to publishe

    - by pinaldave
    While configuring replication on one of the servers, I received the following error. This is a very common error and the solution is even simpler. MSDTC on server is unavailable. Changed database context to publisherdatabase. (Microsoft SQL Server, Error: 8501) Solution: Enable the “Distributed Transaction Coordinator” Windows service. Method 1: Click on Start–>Control Panel->Administrative Tools->Services Select the service “Distributed Transaction Coordinator” Right-click on the service and choose “Start” Method 2: Type services.msc in the run command box and hit Enter to open the “Services” manager Select the service “Distributed Transaction Coordinator” Right-click on the service and choose “Start” Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Replication
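
    For completeness, here is a minimal C# sketch (not part of the original post) that performs the same check and start programmatically. It assumes the standard Windows service name "MSDTC" behind the "Distributed Transaction Coordinator" display name and requires a reference to System.ServiceProcess plus administrative rights.

        using System;
        using System.ServiceProcess;

        class StartMsdtc
        {
            static void Main()
            {
                // "MSDTC" is the service name; "Distributed Transaction Coordinator" is its display name.
                using (var dtc = new ServiceController("MSDTC"))
                {
                    if (dtc.Status != ServiceControllerStatus.Running)
                    {
                        dtc.Start(); // requires elevated privileges
                        dtc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }
                    Console.WriteLine("MSDTC status: " + dtc.Status);
                }
            }
        }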

    Read the article

  • How Social Is Your Contact Center?

    - by Charles Knapp
    More than 75% of consumers have complained on a social site after a poor customer experience. Yet, 70% of companies have little understanding of the social media conversations about their brand. To deliver upon your brand promise, retain customers, and increase their lifetime value, you must deliver great customer experiences across social, mobile, phone, and chat channels. Siloed channels produce poor customer experiences. Social channels must integrate with the people, processes, technology, and traditional channels used to satisfy customers. The more effective a company’s social marketing, the greater the demand for effective social service. However, service is not a job for social marketers. It is a job for service specialists, focused on KPIs such as response time, first contact resolution, satisfaction, churn, retention, and customer lifetime value. Most social-enabled contact centers are at the early adopter stage, attempting to “bolt on” social media as a side process. Many are experiencing inconsistent customer experiences, higher costs, and negligible return on investments. Service leaders should consider carefully how to integrate social channels with their current customer service and support people, processes, technology, and channels. Here is one company realizing success: the pre-integrated Oracle RightNow Social Experience “empowers our contact center operations by enabling our agents to join customer conversations that are happening on social sites like Twitter and Facebook and integrate those conversations into our overall multichannel customer engagement processes.” — Lisa Larson, Drugstore.com

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
The Routing and core infrastructure of Web API are very similar to how MVC works, providing many of the benefits of MVC, but with a focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If the primary focus of an application, or even a part of an application, is some sort of API then Web API makes great sense. HTTP Services: If you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX Backends: If you're building rich client Web applications that rely heavily on AJAX callbacks to serve their data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX, which means the business logic required for data retrieval and data acceptance and validation also lives in the Web API. Perfect fit. Generic HTTP Endpoints: Another good fit is generic HTTP endpoints that serve data or handle 'utility' type functionality in typical Web applications. If you needed to implement an image server or an upload handler, in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or be separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well, so there's some confusion over where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either, since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case. Some Issues To Think About: Web API is similar to MVC but not the same. Although Web API looks a lot like MVC it's not the same, and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication: I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML: On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that. 
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API
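
As a concrete illustration of the controller-based model described above, here is a minimal ASP.NET Web API controller sketch (not from the article; the Product type and in-memory store are illustrative). Under the default route, GET /api/products and GET /api/products/{id} map to the methods below, and content negotiation picks JSON or XML based on the request's Accept header.

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductsController : ApiController
    {
        // Illustrative in-memory data; a real API would call into application services.
        private static readonly List<Product> Store = new List<Product>
        {
            new Product { Id = 1, Name = "Widget" },
            new Product { Id = 2, Name = "Gadget" }
        };

        // The return value is serialized to JSON or XML depending on the Accept header.
        public IEnumerable<Product> Get()
        {
            return Store;
        }

        // HttpResponseMessage gives full control over the status code and headers.
        public HttpResponseMessage Get(int id)
        {
            var product = Store.FirstOrDefault(p => p.Id == id);
            if (product == null)
            {
                return Request.CreateResponse(HttpStatusCode.NotFound);
            }
            return Request.CreateResponse(HttpStatusCode.OK, product);
        }
    }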

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
    A simple and straightforward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4 Drop the Soap: WCF, REST, and Pretty URIs in .NET 4 Years ago I was working in libraries when the Web 2.0 revolution began.  One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications.  Those were my first impressions of REST; pretty URIs.  Turns out there is a little more to it than that. REST is an architectural style that focuses on resources and structured ways to access those resources via the web.  REST evolved as an “anti-SOAP” movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is a lot when you don’t have frameworks hiding it all).  One of the biggest benefits to REST is that browsers can talk to REST services directly because REST works using URIs, QueryStrings, Cookies, SSL, and all those HTTP verbs that we don’t have to think about anymore. If you are familiar with ASP.NET MVC then you have been exposed to REST at some level.  MVC relies heavily on routing to generate consistent and clean URIs.  REST for WCF gives you the same type of feel for your services.  Let’s dive in. WCF REST in .NET 3.5 SP1 and .NET 4 This post will cover WCF REST in .NET 4 which drew heavily from the REST Starter Kit and community feedback.  There is basic REST support in .NET 3.5 SP1 and you can also grab the REST Starter Kit to enable some of the features you’ll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010. Getting Started To get started we’ll create a basic WCF Rest Service Application using the new on-line templates option in VS 2010: When you first install a template you are prompted with this dialog: Dude Where’s my .Svc File? The WCF REST template shows us the new way we can simply build services.  Before we talk about what’s there, let’s look at what is not there: The .Svc File An Interface Contract Dozens of lines of configuration that you have to change to make your service work REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks.  With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class.  From there, the WCF runtime handles dispatching service calls to the methods based on the Uri Templates. global.asax using System; using System.ServiceModel.Activation; using System.Web; using System.Web.Routing; namespace Blog.WcfRest.TimeService {     public class Global : HttpApplication     {         void Application_Start(object sender, EventArgs e)         {             RegisterRoutes();         }         private static void RegisterRoutes()         {             RouteTable.Routes.Add(new ServiceRoute("TimeService",                 new WebServiceHostFactory(), typeof(TimeService)));         }     } } The web.config contains some new structures to support a configuration free deployment.  Note that this is the default config generated with the template.  I did not make any changes to web.config. 
web.config <?xml version="1.0"?> <configuration>   <system.web>     <compilation debug="true" targetFramework="4.0" />   </system.web>   <system.webServer>     <modules runAllManagedModulesForAllRequests="true">       <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,            System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     </modules>   </system.webServer>   <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>     <standardEndpoints>       <webHttpEndpoint>         <!--             Configure the WCF REST service base address via the global.asax.cs file and the default endpoint             via the attributes on the <standardEndpoint> element below         -->         <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>       </webHttpEndpoint>     </standardEndpoints>   </system.serviceModel> </configuration> Building the Time Service We’ll create a simple “TimeService” that will return the current time.  Let’s start with the following code: using System; using System.ServiceModel; using System.ServiceModel.Activation; using System.ServiceModel.Web; namespace Blog.WcfRest.TimeService {     [ServiceContract]     [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]     public class TimeService     {         [WebGet(UriTemplate = "CurrentTime")]         public string CurrentTime()         {             return DateTime.Now.ToString();         }     } } The endpoint for this service will be http://[machinename]:[port]/TimeService.  To get the current time http://[machinename]:[port]/TimeService/CurrentTime will do the trick. The Results Are In Remember That Route In global.asax? Turns out it is pretty important.  When you set the route name, that defines the resource name starting after the host portion of the Uri. Help Pages in WCF 4 Another feature that came from the starter kit are the help pages.  To access the help pages simply append Help to the end of the service’s base Uri. Dropping the Soap Having dabbled with REST in the past and after using Soap for the last few years, the WCF 4 REST support is certainly refreshing.  I’m currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
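
To round out the walkthrough, here is a small hedged C# sketch (not part of the original post) of one way to call the CurrentTime resource from a console client; the host name and port are placeholders for wherever the service is hosted.

    using System;
    using System.Net;

    class TimeServiceClient
    {
        static void Main()
        {
            // Placeholder address - substitute the machine name and port the service runs on.
            const string uri = "http://localhost:8080/TimeService/CurrentTime";

            using (var client = new WebClient())
            {
                // The template's web.config sets automaticFormatSelectionEnabled="true",
                // so sending an Accept header of application/json returns JSON instead of the default XML.
                client.Headers[HttpRequestHeader.Accept] = "application/json";
                string currentTime = client.DownloadString(uri);
                Console.WriteLine(currentTime);
            }
        }
    }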

    Read the article

  • SQL Authority News – Download Microsoft SQL Server 2014 Feature Pack and Microsoft SQL Server Developer’s Edition

    - by Pinal Dave
    Yesterday I attended the SQL Server Community Launch in Bangalore and presented on Performing an Effective Presentation. It was a fun presentation and people received it very well. No matter what subject I present on, I always end up talking about SQL. Here are two of the questions I received during the event. Q1) I want to install SQL Server on my development server, where can we get it for free or at an economical price (I do not have MSDN)? A1) If you are not going to use your server in a production environment, you can just get SQL Server Developer’s Edition and you can read more about it over here. Here is another favorite question which I keep on receiving during events. Q2) I already have SQL Server installed on my machine, what different feature packs should I install and where can I get them from? A2) Just download and install the Microsoft SQL Server 2014 Feature Pack. Here is the link for downloading it. The Microsoft SQL Server 2014 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server. It includes tools and components for Microsoft SQL Server 2014 and add-on providers for Microsoft SQL Server 2014. Here is the list of components this product contains: Microsoft SQL Server Backup to Windows Azure Tool Microsoft SQL Server Cloud Adapter Microsoft Kerberos Configuration Manager for Microsoft SQL Server Microsoft SQL Server 2014 Semantic Language Statistics Microsoft SQL Server Data-Tier Application Framework Microsoft SQL Server 2014 Transact-SQL Language Service Microsoft Windows PowerShell Extensions for Microsoft SQL Server 2014 Microsoft SQL Server 2014 Shared Management Objects Microsoft Command Line Utilities 11 for Microsoft SQL Server Microsoft ODBC Driver 11 for Microsoft SQL Server – Windows Microsoft JDBC Driver 4.0 for Microsoft SQL Server Microsoft Drivers 3.0 for PHP for Microsoft SQL Server Microsoft SQL Server 2014 Transact-SQL ScriptDom Microsoft SQL Server 2014 Transact-SQL Compiler Service Microsoft System CLR Types for Microsoft SQL Server 2014 Microsoft SQL Server 2014 Remote Blob Store SQL RBS codeplex samples page SQL Server Remote Blob Store blogs Microsoft SQL Server Service Broker External Activator for Microsoft SQL Server 2014 Microsoft OData Source for Microsoft SQL Server 2014 Microsoft Balanced Data Distributor for Microsoft SQL Server 2014 Microsoft Change Data Capture Designer and Service for Oracle by Attunity for Microsoft SQL Server 2014 Microsoft SQL Server 2014 Master Data Service Add-in for Microsoft Excel Microsoft SQL Server StreamInsight Microsoft Connector for SAP BW for Microsoft SQL Server 2014 Microsoft SQL Server Migration Assistant Microsoft SQL Server 2014 Upgrade Advisor Microsoft OLEDB Provider for DB2 v5.0 for Microsoft SQL Server 2014 Microsoft SQL Server 2014 PowerPivot for Microsoft SharePoint 2013 Microsoft SQL Server 2014 ADOMD.NET Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2014 Microsoft SQL Server 2014 Analysis Management Objects Microsoft SQL Server Report Builder for Microsoft SQL Server 2014 Microsoft SQL Server 2014 Reporting Services Add-in for Microsoft SharePoint Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design For the latest code go to http://rapidioc.codeplex.com/ When building our proxy type, the first thing we need to do is build the constructors. There needs to be a corresponding constructor for each constructor on the passed in base type. We also want to create a field to store the interceptors and construct this list within each constructor. So assuming the passed in base type is a User<int, IRepository> class, we're looking to generate constructor code like the following:   Default Constructor public User`2_RapidDynamicBaseProxy() {     this.interceptors = new List<IInterceptor<User<int, IRepository>>>();     DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();     this.interceptors.Add(item); }     Parameterised Constructor public User`2_RapidDynamicBaseProxy(IRepository repository1) : base(repository1) {     this.interceptors = new List<IInterceptor<User<int, IRepository>>>();     DefaultInterceptor<User<int, IRepository>> item = new DefaultInterceptor<User<int, IRepository>>();     this.interceptors.Add(item); }   As you can see, we first populate a field on the class with a new interceptor list for the passed in base type. Construct our DefaultInterceptor class. Add the DefaultInterceptor instance to our interceptor collection. Although this seems like a relatively small task, there is a fair amount of work required to get this going. Instead of going through every line of code – please download the latest from http://rapidioc.codeplex.com/ and debug through. In this post I’m going to concentrate on explaining how it works. TypeBuilder The TypeBuilder class is the main class used to create the type. You instantiate a new TypeBuilder using the assembly module we created in part 1. /// <summary> /// Creates a type builder. /// </summary> /// <typeparam name="TBase">The type of the base class to be proxied.</typeparam> public static TypeBuilder CreateTypeBuilder<TBase>() where TBase : class {     TypeBuilder typeBuilder = DynamicModuleCache.Get.DefineType         (             CreateTypeName<TBase>(),             TypeAttributes.Class | TypeAttributes.Public,             typeof(TBase),             new Type[] { typeof(IProxy) }         );       if (typeof(TBase).IsGenericType)     {         GenericsHelper.MakeGenericType(typeof(TBase), typeBuilder);     }       return typeBuilder; }   private static string CreateTypeName<TBase>() where TBase : class {     return string.Format("{0}_RapidDynamicBaseProxy", typeof(TBase).Name); } As you can see, I’ve created a new public class derived from TBase which also implements my IProxy interface; this is used later for adding interceptors. If the base type is generic, the following GenericsHelper.MakeGenericType method is called. GenericsHelper using System; using System.Reflection.Emit; namespace Rapid.DynamicProxy.Types.Helpers {     /// <summary>     /// Helper class for generic types and methods.     /// </summary>     internal static class GenericsHelper     {         /// <summary>         /// Makes the typeBuilder a generic.         
/// </summary>         /// <param name="concrete">The concrete.</param>         /// <param name="typeBuilder">The type builder.</param>         public static void MakeGenericType(Type baseType, TypeBuilder typeBuilder)         {             Type[] genericArguments = baseType.GetGenericArguments();               string[] genericArgumentNames = GetArgumentNames(genericArguments);               GenericTypeParameterBuilder[] genericTypeParameterBuilder                 = typeBuilder.DefineGenericParameters(genericArgumentNames);               typeBuilder.MakeGenericType(genericTypeParameterBuilder);         }           /// <summary>         /// Gets the argument names from an array of generic argument types.         /// </summary>         /// <param name="genericArguments">The generic arguments.</param>         public static string[] GetArgumentNames(Type[] genericArguments)         {             string[] genericArgumentNames = new string[genericArguments.Length];               for (int i = 0; i < genericArguments.Length; i++)             {                 genericArgumentNames[i] = genericArguments[i].Name;             }               return genericArgumentNames;         }     } }       As you can see, I’m getting all of the generic argument types and names, creating a GenericTypeParameterBuilder and then using the typeBuilder to make the new type generic. InterceptorsField The interceptors field will store a List<IInterceptor<TBase>>. Fields are simple made using the FieldBuilder class. The following code demonstrates how to create the interceptor field. FieldBuilder interceptorsField = typeBuilder.DefineField(     "interceptors",     typeof(System.Collections.Generic.List<>).MakeGenericType(typeof(IInterceptor<TBase>)),       FieldAttributes.Private     ); The field will now exist with the new Type although it currently has no data – we’ll deal with this in the constructor. Add method for interceptorsField To enable us to add to the interceptorsField list, we are going to utilise the Add method that already exists within the System.Collections.Generic.List class. We still however have to create the methodInfo necessary to call the add method. This can be done similar to the following: Add Interceptor Field MethodInfo addInterceptor = typeof(List<>)     .MakeGenericType(new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) })     .GetMethod     (        "Add",        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,        null,        new Type[] { typeof(IInterceptor<>).MakeGenericType(typeof(TBase)) },        null     ); So we’ve create a List<IInterceptor<TBase>> type, then using the type created a method info called Add which accepts an IInterceptor<TBase>. Now in our constructor we can use this to call this.interceptors.Add(// interceptor); Building the Constructors This will be the first hard-core part of the proxy building process so I’m going to show the class and then try to explain what everything is doing. For a clear view, download the source from http://rapidioc.codeplex.com/, go to the test project and debug through the constructor building section. Anyway, here it is: DynamicConstructorBuilder using System; using System.Collections.Generic; using System.Reflection; using System.Reflection.Emit; using Rapid.DynamicProxy.Interception; using Rapid.DynamicProxy.Types.Helpers; namespace Rapid.DynamicProxy.Types.Constructors {     /// <summary>     /// Class for creating the proxy constructors.     
/// </summary>     internal static class DynamicConstructorBuilder     {         /// <summary>         /// Builds the constructors.         /// </summary>         /// <typeparam name="TBase">The base type.</typeparam>         /// <param name="typeBuilder">The type builder.</param>         /// <param name="interceptorsField">The interceptors field.</param>         public static void BuildConstructors<TBase>             (                 TypeBuilder typeBuilder,                 FieldBuilder interceptorsField,                 MethodInfo addInterceptor             )             where TBase : class         {             ConstructorInfo interceptorsFieldConstructor = CreateInterceptorsFieldConstructor<TBase>();               ConstructorInfo defaultInterceptorConstructor = CreateDefaultInterceptorConstructor<TBase>();               ConstructorInfo[] constructors = typeof(TBase).GetConstructors();               foreach (ConstructorInfo constructorInfo in constructors)             {                 CreateConstructor<TBase>                     (                         typeBuilder,                         interceptorsField,                         interceptorsFieldConstructor,                         defaultInterceptorConstructor,                         addInterceptor,                         constructorInfo                     );             }         }           #region Private Methods           private static void CreateConstructor<TBase>             (                 TypeBuilder typeBuilder,                 FieldBuilder interceptorsField,                 ConstructorInfo interceptorsFieldConstructor,                 ConstructorInfo defaultInterceptorConstructor,                 MethodInfo AddDefaultInterceptor,                 ConstructorInfo constructorInfo             ) where TBase : class         {             Type[] parameterTypes = GetParameterTypes(constructorInfo);               ConstructorBuilder constructorBuilder = CreateConstructorBuilder(typeBuilder, parameterTypes);               ILGenerator cIL = constructorBuilder.GetILGenerator();               LocalBuilder defaultInterceptorMethodVariable =                 cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase)));               ConstructInterceptorsField(interceptorsField, interceptorsFieldConstructor, cIL);               ConstructDefaultInterceptor(defaultInterceptorConstructor, cIL, defaultInterceptorMethodVariable);               AddDefaultInterceptorToInterceptorsList                 (                     interceptorsField,                     AddDefaultInterceptor,                     cIL,                     defaultInterceptorMethodVariable                 );               CreateConstructor(constructorInfo, parameterTypes, cIL);         }           private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)         {             cIL.Emit(OpCodes.Ldarg_0);               if (parameterTypes.Length > 0)             {                 LoadParameterTypes(parameterTypes, cIL);             }               cIL.Emit(OpCodes.Call, constructorInfo);             cIL.Emit(OpCodes.Ret);         }           private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)         {             for (int i = 1; i <= parameterTypes.Length; i++)             {                 cIL.Emit(OpCodes.Ldarg_S, i);             }         }           private static void AddDefaultInterceptorToInterceptorsList             (                 FieldBuilder interceptorsField,     
            MethodInfo AddDefaultInterceptor,                 ILGenerator cIL,                 LocalBuilder defaultInterceptorMethodVariable             )         {             cIL.Emit(OpCodes.Ldarg_0);             cIL.Emit(OpCodes.Ldfld, interceptorsField);             cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);             cIL.Emit(OpCodes.Callvirt, AddDefaultInterceptor);         }           private static void ConstructDefaultInterceptor             (                 ConstructorInfo defaultInterceptorConstructor,                 ILGenerator cIL,                 LocalBuilder defaultInterceptorMethodVariable             )         {             cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);             cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable);         }           private static void ConstructInterceptorsField             (                 FieldBuilder interceptorsField,                 ConstructorInfo interceptorsFieldConstructor,                 ILGenerator cIL             )         {             cIL.Emit(OpCodes.Ldarg_0);             cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);             cIL.Emit(OpCodes.Stfld, interceptorsField);         }           private static ConstructorBuilder CreateConstructorBuilder(TypeBuilder typeBuilder, Type[] parameterTypes)         {             return typeBuilder.DefineConstructor                 (                     MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.RTSpecialName                     | MethodAttributes.HideBySig, CallingConventions.Standard, parameterTypes                 );         }           private static Type[] GetParameterTypes(ConstructorInfo constructorInfo)         {             ParameterInfo[] parameterInfoArray = constructorInfo.GetParameters();               Type[] parameterTypes = new Type[parameterInfoArray.Length];               for (int p = 0; p < parameterInfoArray.Length; p++)             {                 parameterTypes[p] = parameterInfoArray[p].ParameterType;             }               return parameterTypes;         }           private static ConstructorInfo CreateInterceptorsFieldConstructor<TBase>() where TBase : class         {             return ConstructorHelper.CreateGenericConstructorInfo                 (                     typeof(List<>),                     new Type[] { typeof(IInterceptor<TBase>) },                     BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic                 );         }           private static ConstructorInfo CreateDefaultInterceptorConstructor<TBase>() where TBase : class         {             return ConstructorHelper.CreateGenericConstructorInfo                 (                     typeof(DefaultInterceptor<>),                     new Type[] { typeof(TBase) },                     BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic                 );         }           #endregion     } } So, the first two tasks within the class should be fairly clear, we are creating a ConstructorInfo for the interceptorField list and a ConstructorInfo for the DefaultConstructor, this is for instantiating them in each contructor. We then using Reflection get an array of all of the constructors in the base class, we then loop through the array and create a corresponding proxy contructor. Hopefully, the code is fairly easy to follow other than some new types and the dreaded Opcodes. ConstructorBuilder This class defines a new constructor on the type. 
ILGenerator The ILGenerator allows the use of Reflection.Emit to create the method body. LocalBuilder The local builder allows the storage of data in local variables within a method; in this case it’s the constructed DefaultInterceptor. Constructing the interceptors field The first bit of IL you’ll come across as you follow through the code is the following private method used for constructing the field list of interceptors. private static void ConstructInterceptorsField             (                 FieldBuilder interceptorsField,                 ConstructorInfo interceptorsFieldConstructor,                 ILGenerator cIL             )         {             cIL.Emit(OpCodes.Ldarg_0);             cIL.Emit(OpCodes.Newobj, interceptorsFieldConstructor);             cIL.Emit(OpCodes.Stfld, interceptorsField);         } The first thing to know about generating code using IL is that you are using a stack: if you want to use something, you need to push it up the stack etc. etc. OpCodes.Ldarg_0 This opcode is a really interesting one, basically each method has a hidden first argument of the containing class instance (apart from static classes), constructors are no different. This is the reason you can use syntax like this.myField. So back to the method, as we want to instantiate the List in the interceptorsField, first we need to load the class instance onto the stack, we then load the new object (the new List<IInterceptor<TBase>>) and finally we store it in the interceptorsField. Hopefully, that should follow easily enough in the method. In each constructor you would now have this.interceptors = new List<IInterceptor<User<int, IRepository>>>(); Constructing and storing the DefaultInterceptor The next bit of code we need to create is the constructed DefaultInterceptor. Firstly, we create a local builder to store the constructed type. Create a local builder LocalBuilder defaultInterceptorMethodVariable =     cIL.DeclareLocal(typeof(DefaultInterceptor<>).MakeGenericType(typeof(TBase))); Once our local builder is ready, we then need to construct the DefaultInterceptor<TBase> and store it in the variable. Construct DefaultInterceptor private static void ConstructDefaultInterceptor     (         ConstructorInfo defaultInterceptorConstructor,         ILGenerator cIL,         LocalBuilder defaultInterceptorMethodVariable     ) {     cIL.Emit(OpCodes.Newobj, defaultInterceptorConstructor);     cIL.Emit(OpCodes.Stloc, defaultInterceptorMethodVariable); } As you can see, using the ConstructorInfo named defaultInterceptorConstructor, we load the new object onto the stack. Then using the store local opcode (OpCodes.Stloc), we store the new object in the local builder named defaultInterceptorMethodVariable. Add the constructed DefaultInterceptor to the interceptors field collection Using the add method created earlier in this post, we are going to add the new DefaultInterceptor object to the interceptors field collection. Add Default Interceptor private static void AddDefaultInterceptorToInterceptorsList     (         FieldBuilder interceptorsField,         MethodInfo AddDefaultInterceptor,         ILGenerator cIL,         LocalBuilder defaultInterceptorMethodVariable     ) {     cIL.Emit(OpCodes.Ldarg_0);     cIL.Emit(OpCodes.Ldfld, interceptorsField);     cIL.Emit(OpCodes.Ldloc, defaultInterceptorMethodVariable);     cIL.Emit(OpCodes.Callvirt, AddDefaultInterceptor); } So, here’s what’s going on. 
The class instance is first loaded onto the stack using the load argument at index 0 opcode (OpCodes.Ldarg_0) (remember the first arg is the hidden class instance). The interceptorsField is then loaded onto the stack using the load field opcode (OpCodes.Ldfld). We then load the DefaultInterceptor object we stored locally using the load local opcode (OpCodes.Ldloc). Then finally we call the AddDefaultInterceptor method using the call virtual opcode (OpCodes.Callvirt). Completing the constructor The last thing we need to do is complete the constructor. Complete the constructor private static void CreateConstructor(ConstructorInfo constructorInfo, Type[] parameterTypes, ILGenerator cIL)         {             cIL.Emit(OpCodes.Ldarg_0);               if (parameterTypes.Length > 0)             {                 LoadParameterTypes(parameterTypes, cIL);             }               cIL.Emit(OpCodes.Call, constructorInfo);             cIL.Emit(OpCodes.Ret);         }           private static void LoadParameterTypes(Type[] parameterTypes, ILGenerator cIL)         {             for (int i = 1; i <= parameterTypes.Length; i++)             {                 cIL.Emit(OpCodes.Ldarg_S, i);             }         } So, the first thing we do again is load the class instance using the load argument at index 0 opcode (OpCodes.Ldarg_0). We then load each parameter using OpCodes.Ldarg_S; this opcode allows us to specify an index position for each argument. We then set up calling the base constructor using OpCodes.Call and the base constructor's ConstructorInfo. Finally, all methods are required to return, even when they have a void return. As there are no values on the stack after the OpCodes.Call line, we can safely call OpCodes.Ret to give the constructor a void return. If there was a value, we would have to pop the value off the stack before calling return; otherwise, the method would try and return a value. Conclusion This was a slightly hardcore post but hopefully it hasn’t been too hard to follow. The main thing is that a number of the really useful opcodes have been used and now the dynamic proxy is capable of being constructed. If you download the code and debug through the tests at http://rapidioc.codeplex.com/, you’ll be able to create proxies at this point; they cannot do anything in terms of interception but you can happily run the tests, call base methods and properties and also take a look at the created assembly in Reflector. Hope this is useful. The next post should be up soon, it will be covering creating the private methods for calling the base class methods and properties. Kind Regards, Sean.
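
To make the end result concrete, here is a small hedged C# sketch (not from the article) of how the pieces described above could be tied together: it materializes the generated type and instantiates it, which runs the constructor IL emitted in this post. The CreateProxy helper name is illustrative, not part of the Rapid IOC code.

    using System;
    using System.Reflection.Emit;

    internal static class ProxyActivation
    {
        // Illustrative sketch only - the typeBuilder is assumed to have been built as described above.
        public static TBase CreateProxy<TBase>(TypeBuilder typeBuilder, params object[] constructorArgs)
            where TBase : class
        {
            // Bake the dynamic type; this compiles the constructor IL emitted above.
            Type proxyType = typeBuilder.CreateType();

            // Invoking a constructor runs the generated IL: it news up the List<IInterceptor<TBase>> field,
            // adds the DefaultInterceptor and then calls the matching base constructor.
            return (TBase)Activator.CreateInstance(proxyType, constructorArgs);
        }
    }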

    Read the article

  • What is a "Public DNS registered IP address"?

    - by Emma
    I've been reading this ICANN agreement on new TLDs and it has this section on DNS service availability, which I don't completely understand: Refers to the ability of the group of listed-as-authoritative name servers of a particular domain name (e.g., a TLD), to answer DNS queries from DNS probes. For the service to be considered available at a particular moment, at least, two of the delegated name servers registered in the DNS must have successful results from “DNS tests” to each of their public-DNS registered “IP addresses” to which the name server resolves. If 51% or more of the DNS testing probes see the service as unavailable during a given time, the DNS service will be considered unavailable. I'm not 100% sure of the part in bold: does "public" refer to the DNS or to the IP addresses? It looks like there's a mistake and that the hyphen should have been after "DNS". So basically does it mean "the public IP addresses registered in the DNS"?

    Read the article

  • pingback / trackback support for photo sharing?

    - by cboettig
    Is there a photo sharing service, such as flickr or picasa, that will collect urls of the locations where the photo has been posted on other blogs (or mentioned in tweets, etc?) This could be accomplished by posting each photo as a blog entry on a site like wordpress, which would then automatically handle pingbacks, but of course a blog doesn't perform quite like a proper photo service. Perhaps this could be done with a private photo hosting server like zenphoto by editing the php, but that seems rather involved. Does such a service already exist?

    Read the article

  • Webcast with Brian Griffin, Ancestry, 2013 Winner 10 Best Web Support Sites

    - by Tuula Fai
    The web is one of the fastest growing channels for providing service, support and information, as seen in The Service Council's (TSC) latest multi-channel research survey. Join TSC's Chief Customer Officer Sumair Dutta as he shares key findings from his current customer experience research from over 200 organizations. Sumair will be joined by Brian Griffin, Senior Program Manager, Global Support Experience, Ancestry.com who will show how Ancestry is using the web as a powerful tool to enhance self-service opportunities and increase customer engagement. Smarter Web Service Educast Thursday, November 14th 2 pm ET / 11 am PT Register: http://bit.ly/1cwz4Ns  

    Read the article

  • runit - unable to open supervise/ok: file does not exist

    - by Alexandr Kurilin
    I'm trying to figure out why runit will not boot or give me the status for the managed applications. Running on Ubuntu 12.04. I created /service, /etc/sv/myapp (with a run script, a config file, a log folder and a run script inside of it). I created a symlink from /service/ to /etc/sv/myapp When I run sudo sv s /service/* I get the following error message: warning: /service/myapp: unable to open supervise/ok: file does not exist Some of my Googling revealed that supposedly rebooting the svscan service might fix this, but killing it and running svscanboot didn't make a difference. Any suggestions? Am I missing a step here somewhere?

    Read the article

  • Tracking My Internet Provider Speeds

    - by Scott Weinstein
    Of late, our broadband internet has been feeling sluggish. A call to the company took way more hold-time than I wanted to spend, and it only fixed the problem for a short while. Thus a perfect opportunity to play with some new tech to solve a problem, in this case, documenting a systemic issue from a service provider. The goal – log internet speeds, taken say every 15 min. Recording ping time, upload speed, download speed, and local LAN usage.   The solution A WCF service to measure speeds Internet speed was measured via speedtest.net LAN usage was measured by querying my router for packets received and sent A SQL express instance to persist the data A PowerShell script to invoke the WCF service – launched by Windows’ Task Scheduler An OData WCF Data Service to allow me to read the data MS PowerPivot to show a nice viz (scratch that, the beta expired) LinqPad to get the data, export it to excel Tableau Public to show the viz
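
    As a rough illustration of the architecture described above, here is a hedged C# sketch of what the measuring WCF service's contract and record type might look like; the names (ISpeedLogService, SpeedMeasurement, MeasureAndRecord) are hypothetical and not taken from the post. The PowerShell script launched by Task Scheduler would call the single operation every 15 minutes.

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class SpeedMeasurement
        {
            [DataMember] public DateTime TakenAt { get; set; }
            [DataMember] public double PingMilliseconds { get; set; }
            [DataMember] public double DownloadMbps { get; set; }
            [DataMember] public double UploadMbps { get; set; }
            [DataMember] public long LanPacketsReceived { get; set; }
            [DataMember] public long LanPacketsSent { get; set; }
        }

        [ServiceContract]
        public interface ISpeedLogService
        {
            // Runs the speed test and router query, persists the result to SQL Express, and returns it.
            [OperationContract]
            SpeedMeasurement MeasureAndRecord();
        }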

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is a very invasive one and requires you to overwrite default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach, check Martin’s post about subscribers). So let’s get started. The technique consists in versioning TFS Web Services .asmx service classes. If you look into TFS’s ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let’s take the Work Item management service .asmx. Check the following .asmx file located at: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3: <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%> <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %> The inheritance hierarchy for this service class follows: Note the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic other than the provided validations/constraints of the process template. Important: Backup the original .asmx file and create one of your own. 
    1. Create a Visual Studio Web App Project and include a new ASMX Web Service in the project.
    2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\): Microsoft.TeamFoundation.Framework.Server.dll, Microsoft.TeamFoundation.Server.dll, Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll, Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll, Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll.
    3. Replace the default service implementation with something similar to the following code:

    /// <summary>
    /// Inherit from ClientService3 to overwrite the default implementation
    /// </summary>
    [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03", Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
    public class CustomTfsClientService : ClientService3
    {
        [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
        public override bool BulkUpdate(
            XmlElement package,
            out XmlElement result,
            MetadataTableHaveEntry[] metadataHave,
            out string dbStamp,
            out Payload metadata)
        {
            var xe = XElement.Parse(package.OuterXml);

            // We only intercept WorkItem updates (we can easily extend this sample to capture any operation).
            var wit = xe.Element("UpdateWorkItem");
            if (wit != null)
            {
                if (wit.Attribute("WorkItemID") != null)
                {
                    int witId = (int)wit.Attribute("WorkItemID");
                    // With this Id we can query TFS for more detailed information, using the TFS Client API (assuming the WIT already exists).
                    var stateChanged =
                        wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                    if (stateChanged != null)
                    {
                        var newStateName = stateChanged.Element("Value").Value;
                        if (newStateName == "Resolved")
                        {
                            throw new Exception("Cannot change state to Resolved!");
                        }
                    }
                }
            }

            // Finally, we call the base method implementation
            return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
        }
    }

    4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don’t forget to back it up first).
    5. Copy your project’s .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin
    6. Try saving a WorkItem into the Resolved state. Enjoy!

    Read the article

  • Using Teleriks new LINQ implementation to create OData feeds

    This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, you can connect to any database that OpenAccess can connect to such as: SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess, however, you can always hook into the advanced ORM features like caching, fetch plan optimization, etc, if needed. Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using Data Services Update for .NET Framework 3.5 SP1. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise in about 2-3 weeks that you can do this in less than 30 seconds. Stay tuned for that.)  Step 1 (15-20 seconds): Building your Domain Model In your web project in Visual Studio, right click on the project and select Add|New Item and select Telerik OpenAccess Domain Model as your item template. Give the file a meaningful name as well. Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc) and build the connection string. If you already have a Visual Studio connection string saved, this step is trivial.  Then select your tables, enter a name for your model and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details.  I named my model NorthwindEntities and will use that in my DataService. Step 2 (20-25 seconds): Adding and Configuring your Data Service In your web project in Visual Studio, right click on the project and select Add|New Item and select ADO .NET Data Service as your item template and name your service. In the code behind for your Data Service you have to make three small changes. Add the name of your Telerik Domain Model (entered in Step 1) as the DataService name (shown on line 6 below as NorthwindEntities) and uncomment line 11 and add a * to show all entities. Optionally if you want to take advantage of the DataService 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration in line 9.)
    1: using System.Data.Services; 2: using System.Data.Services.Common; 3:   4: namespace Telerik.RLINQ.Astoria.Web 5: { 6: public class NorthwindService : DataService<NorthwindEntities> 7: { 8: //change the IDataServiceConfiguration to DataServiceConfiguration 9: public static void InitializeService(DataServiceConfiguration config) 10: { 11: config.SetEntitySetAccessRule("*", EntitySetRights.All); 12: //take advantage of the "Astoria 3.5 Update" features 13: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 14: } 15: } 16: }   Step 3 (~30 seconds): Adding the DataServiceKeys You now have to tell your data service what the primary keys of each entity are. To do this you have to create a new code file and create a few partial classes. If you type fast, use copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code or so in about 30 seconds. This is the most tedious step, but don't worry, I've bribed some of the developers and our next update will eliminate this step completely. Just create a partial class for each entity you have mapped and add the attribute [DataServiceKey] on top of it along with the key's field name. If you have any complex properties, you will need to make them a primitive type, as I do in line 15. Create this as a separate file, don't manipulate the generated data access classes in case you want to regenerate them again later (even though that would be much faster.) 1: using System.Data.Services.Common; 2:   3: namespace Telerik.RLINQ.Astoria.Web 4: { 5: [DataServiceKey("CustomerID")] 6: public partial class Customer 7: { 8: } 9:   10: [DataServiceKey("OrderID")] 11: public partial class Order 12: { 13: } 14:   15: [DataServiceKey(new string[] { "OrderID", "ProductID" })] 16: public partial class OrderDetail 17: { 18: } 19:   20: }   Done! Time to run the service. Now, let's run the service! Select the svc file, right click and say View in Browser. You will see your OData service and can interact with it in the browser. Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, or PHP, etc. Happy Data Servicing!
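    As a follow-up, here is a minimal sketch of consuming the feed from .NET with the ADO.NET Data Services client and LINQ; the URL is hypothetical, and it assumes a Customer type with CustomerID, CompanyName and Country properties is available on the client (for example via Add Service Reference):

    using System;
    using System.Data.Services.Client;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Hypothetical URL - point it at wherever NorthwindService.svc is hosted.
            var context = new DataServiceContext(new Uri("http://localhost:1234/NorthwindService.svc"));

            // CreateQuery returns a DataServiceQuery<T>, which translates LINQ into an OData URI.
            var customers = context.CreateQuery<Customer>("Customers")
                                   .Where(c => c.Country == "Germany")
                                   .ToList();

            foreach (var c in customers)
                Console.WriteLine("{0} - {1}", c.CustomerID, c.CompanyName);
        }
    }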

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an additional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations.
    (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
    Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.
    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • CodePlex Daily Summary for Wednesday, April 21, 2010

    CodePlex Daily Summary for Wednesday, April 21, 2010New ProjectsA WPF ImageButton that uses pixel shader for dynamically change the image status: A WPF ImageButton that uses pixel shader for dynamically change the image status.CCLI SongSelect Library: A managed code library for reading and working with CCLI SongSelect files in C#, VB.NET or any other language that can use DLLs built with Managed ...code paste: A code paste with idea, insight and funny from web. All the paster rights belong the original author.CodeBlock: CodeBlock is CLI implementation which uses LLVM as its native code generator. CodeBlock provides LLVM binding for .NET which is written in C#, a...CSS 360 Planetary Calendar: This is the Planetary Calendar for UW Bothell -- CSS 360DNN 4 Friendly Url Modification: DNN 4.6.2.0 source code modification for Friendly URL support.Event Scavenger: "Event Scavenger" was created out of a need to manage and monitor multiple servers and have a global view of what's happening in the logs of multip...FastBinaryFileSearch: General Purpose Binary Search Implementation Of BoyerMooreHorspool Algorithmgotstudentathelaticprofile: Test project Grate: Grate process photos by color-dispersion-tree It's developed in C#GZK2010: GZK Project is used for sdo.gzk 2010JpAccountingBeta: JpAccountingBetaLog4Net Viewer: Log4Net Viewer is a log4net files parser and exposes them to webservices. You can then watch them on a rich interface. This code is .NET4 with an ...MarkLogic Toolkit for Excel: The MarkLogic Toolkit for Excel allows you to quickly build content applications with MarkLogic Server that extend the functionality of Microsoft E...MarkLogic Toolkit for PowerPoint: The MarkLogic Toolkit for PowerPoint allows you to quickly build content applications with MarkLogic Server that extend the functionality of Micros...MarkLogic Toolkit for Word: The MarkLogic Toolkit for Word allows you to quickly build content applications with MarkLogic Server that extend the functionality of Microsoft Wo...MvcContrib Portable Areas: This project hosts mvccontrib portable areas.OgmoXNA: OgmoXNA is an XNA 3.1 Game Library and set of Content Pipeline Extensions for Matt Thorson's multi-platform Ogmo Editor. Its goal is to provide ne...Pdf ebook seaerch engine and viewer: PDF Search Engine is a book search engine search on sites, forums, message boards for pdf files. You can find and download a tons of e-books but p...ResizeDragBehavior: This C# Silverlight 3 ResizeDragBehavior is a simpel implementation of resizing columns left, right, above or under a workingspace. It allows you ...Roguelike school project: A simple Rogue-like game made in c# for my school project. Uses Windows forms for GUI and ADO.NET + SQL Server CE for persistency.SharePoint Service Account Password Recovery Tool: A utility to recover SharePoint service account and application pool passwords.Smart Include - a powerful & easy way to handle your CSS and Javascript includes: Smart Include provides web developers with a very easy way to have all their css and javascript files compressed and/or minified, placed in a singl...sNPCedit: Perfect World NPC Editorstatusupdates: Generic status updatesTRXtoHTML: This is a command line utility to generate html report files from VSTS 2008 result files (trx). Usage: VSTSTestReport /q <input trx file> <outpu...WawaWuzi: 网页版五子棋,欢迎大家参与哦,呵呵。WPF Alphabet: WPF Alphabet is a application that I created to help my child learn the alphabet. 
It displays each letter and pronounces it using speech synthesis....WPF AutoCompleteBox for Chinese Spell: CSAutoBox is a type of WPF AutoCompleteBox for Chinese Spell in Input,Like Google,Bing.WpfCollections: WpfCollections help in WPF MVVM scenarios, resolving some of common problems and tasks: - delayed CurrentChange in ListCollectionView, - generate...XML Log Viewer in the Cloud: Silvelright 4 application hosted on Windows Azure. Upload any log file based on xml format. View log, search log, diff log, catalog etc.New ReleasesA Guide to Parallel Programming: Drop 3 - Guide Preface, Chapters 1, 2, 5, and code: This is Drop 3 with Guide Preface, Chapters 1, 2, 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta ...Artefact Animator: Artefact Animator - Silverlight 4 and WPF .Net 4: Artefact Animator Version 4.0.4.6 Silverlight 4 ArtefactSL.dll for both Debug and Release. WPF 4 Artefact.dll for both Debug and Release.ASP.NET Wiki Control: Release 1.2: Includes SyntaxHighlighter integration. The control will display the functionality if it detects that http://alexgorbatchev.com/wiki/SyntaxHighli...C# to VB.NET VB.NET To C# Code Convertor: CSharp To VB.Net Convertor VS2010 Support: !VS2010 Support Added To The Addon Visual Studio buildin VB.Net To C# , C# To VB.Net Convertor using NRefactor from icsharpcode's SharpDevelop. The...CodeBlock: LLVM - 2010-04-20: These are precompiled LLVM dynamic link libraries. One for AMD64 architecture and one for IA32. To use these DLL's you should copy them to corresp...crudwork library: crudwork 2.2.0.4: What's new: qapp (shorts for Query Analyzer Plus Plus) is a SQL query analyzer previously for SQLite only now works for all databases (SQL, Oracle...DiffPlex - a .NET Diff Generator: DiffPlex 1.1: Change listFixed performance bug caused by logging for debug build Added small performance fix in the core diff algorithm Package the release b...Event Scavenger: First release (version 3.0): This release does not include any installers yet as the system is a bit more complex than a simple MSI installer can handle. Follow the instruction...Extend SmallBasic: Teaching Extensions v.012: added archiving for screen shots (Tortoise.approve) ColorWheel exits if emptyFree Silverlight & WPF Chart Control - Visifire: Charts for Silverlight 4!: Hi, Visifire now works with Silverlight 4. Microsoft released Silverlight 4 last week. Some of the new features include more fluid animations, Web...IST435: Lab 5 - DotNetNuke Module Development: Lab 5 - DotNetNuke Module DevelopmentThis is the instructions for Lab 5 on. This lab must be completed in-class. It is based on your Lab 4.KEMET_API: Kemet API v0.2d: new platform with determiners and ideograms ... please consult the "release_note.txt" for more informations.MDT Scripts, Front Ends, Web Services, and Utilities for use with ConfigMgr/SCCM: PrettyGoodFrontEndClone (v1.0): This is a clone of the great PrettyGoodFrontEnd written by Johan Arwidmark that uses the Deployment Webservice as a backend so you don't need to ho...NMigrations: 1.0.0.3: CHG: upgraded solution/projects to Visual Studio 2010 FIX: removed precision/scale from MONEY data type (issue #8081) FIX: added support for binary...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.6: Minor release.OgmoXNA: OgmoXNA Alpha Binaries: Binaries Release build binaries for the Windows and Xbox 360 platforms. 
Includes the Content Pipeline Extensions needed to build your projects in ...Ox Game Engine for XNA: Release 70 - Fixes: Update in 2.2.3.2 Removed use of 'reflected' render state. May fix some render errors. Original Hi all! I fixed all of the major known problems...patterns & practices – Enterprise Library: Enterprise Library 5.0 - April 2010: Microsoft Enterprise Library 5.0 official binaries can be downloaded from MSDN. Please be patient as the bits get propagated through the download s...Pdf ebook seaerch engine and viewer: Codes PDf ebook: CodesPlay-kanaler (Windows Media Center Plug-in): Playkanaler 1.0.4: Playkanaler version 1.0.4 Viasatkanalerna kanske fungerar igen, tack vare AleksandarF. Pausa och spola fungerar inte ännu.PokeIn Comet Ajax Library: PokeIn v06 x64: Bug fix release of PokeIn x64 Security Bugs Fixed Encoding Bugs Fixed Performance Improvements Made New Method in BrowserHelper classPokeIn Comet Ajax Library: PokeIn v06 x86: Bug fix release of PokeIn x86 Security Bugs Fixed Encoding Bugs Fixed Performance Improvements Made New Method in BrowserHelper classPowerSlim - Acceptance Testing for Enterprise Applications: PowerSlim 0.2: We’re pleased to announce the PowerSlim 0.2. The main feature of this release is Windows Setup which installs all you need to start doing Acceptan...Rapidshare Episode Downloader: RED v0.8.5: This release fixes some bugs that mainly have to do with Next and Add Show functionality.Rawr: Rawr 2.3.15: - Improvements to Wowhead/Armory parsing. - Rawr.Mage: Fix for calculations being broken on 32bit OSes. - Rawr.Warlock: Lots more work on fleshin...ResizeDragBehavior: ResizeDragBehavior 1.0: First release of the ResizeDragBehavior. Also includes a sampleproject to see how this behavior can be implemented.RoTwee: RoTwee (11.0.0.0): 17316 Follow Visual Studio 2010/.NET Framework 4.0SharePoint Service Account Password Recovery Tool: 1.0: This is the first release of the password recovery toolSilverlightFTP: SilverlightFTP Beta RUS: SilverlightFTP with drag-n-drop support. Russian.SqlCe Viewer (SeasonStar Database Management): SqlCe Viewer(SSDM) 0.0.8.3: 1:Downgrade to .net framework 3.5 sp1 2:Fix some bugs 3:Refactor Mysql EntranceThe Ghost: DEL3SWE: DEL3SWETMap for VS2010: TMap for Visual Studio 2010: TMap for Visual Studio 2010Sogeti has developed a testing process template that integrates the TMap test approach with Visual Studio 2010 (VS2010)....TRXtoHTML: TRXtoHTML v1.1: Minor updateVisual Studio Find Results Window Tweak: Find Results Window Tweak: First stable release of the tool, which enables you to tweak the find results window.Web Service Software Factory: 15 Minute Walkthrough for WSSF2010: This walkthrough provides a very brief introduction for those who either do not have a lot of time for a full introduction, or those who are lookin...Web Service Software Factory: Hands On Lab - Building a Web Service (VS2010): This hands-on lab provides eight exercies to briefly introduce most of the experiences of building a Web service using the Service Factory 2010. Th...Web Service Software Factory: Web Service Software Factory 2010 Source Code: System Requirements • Microsoft Visual Studio 2010 (Premium, Professional or Ultimate Edition) • Guidance Automation Extensions 2010 • Visu...WPF Alphabet: Source Code plus Binaries: Compete C# and WPF source code available below. 
I have also included the binary for those that just want to run it.WPF AutoCompleteBox for Chinese Spell: CSAutoBox V1.0: This is CSAutoBox V1.0 Beta,if you have any questions,please email me.Most Popular ProjectsRawrWBFS ManagerSilverlight ToolkitAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active ProjectsRawrpatterns & practices – Enterprise LibraryBlogEngine.NETIonics Isapi Rewrite FilterFarseer Physics EnginePHPExcelTweetSharpCaliburn: An Application Framework for WPF and SilverlightNB_Store - Free DotNetNuke Ecommerce Catalog ModulePokeIn Comet Ajax Library

    Read the article

  • Windows Azure Evolution &ndash; Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high-performance, highly scalable system, not only on the cloud platform but in on-premise environments as well. In March 2011 Windows Azure AppFabric Caching was launched to production. It provides an in-memory, distributed caching service over the cloud. And now, in the June 2012 update, the cache team announced a brand new caching solution on Windows Azure, called Windows Azure Caching (Preview). The original Windows Azure AppFabric Caching was renamed Windows Azure Shared Caching.   What’s Caching (Preview)? If you have been using Shared Caching you will know that it is built from a pool of cache servers. When you want to use it you first create a cache account from the developer portal and specify the size you want, which means how much memory you can use to store the data you want cached. Then you can add, get and remove items from your code through the cache URL. Shared Caching is a multi-tenancy system which hosts all cached items across all users, so you don’t know which server your data is located on. This caching mode works well and covers most cases, but it has some problems. The first one is performance. Since Shared Caching is a multi-tenancy system, all cache operations must go through the Shared Caching gateway and then be routed to the server which has the data you are looking for. Even though there are some caches inside the Shared Caching system, it still takes time to get from your cloud services to the cache service. Second, the Shared Caching service works as a black box to the developer. The only thing we know is our cache endpoint, and that’s all. Some may be satisfied with that since they don’t want to care about anything underneath, but if you need to know more and want more control, that’s impossible with Shared Caching. The last problem is price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If we use Shared Caching we have to pay for the cache service while wasting some of our memory and CPU locally. Because of the issues above, Microsoft offers a new caching mode, which is Caching (Preview). Instead of having a separate cache service, Caching (Preview) leverages the memory and CPU in our cloud services (web roles and worker roles) as the cache clusters. Hence Caching (Preview) runs on the virtual machines which host, or sit near, our cloud applications. Without any gateway and routing, and since it is located in the same data center and the same racks, it provides much higher performance than Shared Caching. Caching (Preview) works side-by-side with our application, initialized and run as a Windows Service in the virtual machines and invoked by the startup tasks of our roles, so we get more information about it and more control over it. And since Caching (Preview) utilizes the memory and CPU of our existing cloud services, it’s free – what we pay is the original compute price – and the resources on each machine are used more efficiently.   Enable Caching (Preview) It’s very simple to enable Caching (Preview) in a cloud service. Let’s create a new Windows Azure cloud project in Visual Studio and add an ASP.NET Web Role. Then open the role settings and select the Caching page.
    This is where we enable and configure Caching (Preview) on a role. To enable Caching (Preview) just tick the “Enable Caching (Preview Release)” check box. Then we need to specify which caching cluster mode we want to use. There are two kinds of caching mode, co-located and dedicated. The co-located mode means we use the memory of the instances that run our cloud services (web role or worker role). With this mode we must specify what percentage of the memory will be used as the cache; the default value is 30%, so make sure it will not affect the role’s business execution. The dedicated mode uses all the memory in the virtual machine as the cache. In fact it reserves some for the operating system, Azure hosting, etc., but it will try to use as much of the available memory as possible for the cache. As you can see, Caching (Preview) is defined per role, which means all instances of the role apply the same setting and act as one cache pool, and you consume it by specifying the name of the role, which I will demonstrate later. In a Windows Azure project we can have more than one role with Caching (Preview) enabled, and then we will have more caches. For example, let’s say I have a web role and a worker role; for the web role I specified 30% co-located caching and for the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As in the figure above, cache 1 is contributed by the three web role instances while cache 2 is contributed by the 2 worker role instances. We can then add items into cache 1 and retrieve them from web role code and worker role code, but the items stored in cache 1 cannot be retrieved from cache 2 since they are isolated. Back in Visual Studio we specify 30% co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches; for now we just use the default one. Now we have enabled Caching (Preview) in our web role settings. Next, let’s have a look at how to consume our cache.   Consume Caching (Preview) Caching (Preview) can only be consumed by roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be reached from a worker role if they are in the same cloud service, but you cannot consume a Caching (Preview) cache from other cloud services. This is different from Shared Caching: Shared Caching is open to all services that have the connection URL and authentication token. To consume Caching (Preview) we need to add some references to our project as well as some configuration in the Web.config. NuGet makes our life easy. Right click on our web role project and select “Manage NuGet packages”, then search for the package named “WindowsAzure.Caching”. In the package list install “Windows Azure Caching Preview”. It will download all necessary references from the NuGet repository and update our Web.config as well. Open the Web.config of our web role and find the “dataCacheClients” node. Under this node we can specify the cache clients we are going to use. Each cache client uses the role name to identify and find the cache. Since only this web role has Caching (Preview) enabled, I pasted the current role name into the configuration. Then, in the default page I will add some code to show how to use the cache.
    I will have a textbox on the page where the user can input his or her name, then press a button to generate an email address for him/her. In the backend code I will check if this name has already been added to the cache. If yes I will return the email immediately. Otherwise, I will sleep the thread for 2 seconds to simulate latency, then add it into the cache and return it to the page.

    protected void btnGenerate_Click(object sender, EventArgs e)
    {
        // check if name is specified
        var name = txtName.Text;
        if (string.IsNullOrWhiteSpace(name))
        {
            lblResult.Text = "Error. Please specify name.";
            return;
        }

        bool cached;
        var sw = new Stopwatch();
        sw.Start();

        // create the cache factory and cache
        var factory = new DataCacheFactory();
        var cache = factory.GetDefaultCache();

        // check if the name specified is in cache
        var email = cache.Get(name) as string;
        if (email != null)
        {
            cached = true;
            sw.Stop();
        }
        else
        {
            cached = false;
            // simulate the latency
            Thread.Sleep(2000);
            email = string.Format("{0}@igt.com", name);
            // add to cache
            cache.Add(name, email);
        }

        sw.Stop();
        lblResult.Text = string.Format(
            "Cached = {0}. Duration: {1}s. {2} => {3}",
            cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
    }

    Caching (Preview) can be used on the local emulator, so we can just press F5. The first time I enter my name it takes about 2 seconds to get the email back since it was not in the cache, but if I re-enter my name it comes back at once from the cache. Since Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances, tweak some code to show the current instance ID on the page, and have another try. Then we can see the cached value can be retrieved even though it was added by another instance.   Consume Caching (Preview) Across Roles As I mentioned, Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let’s add another web role to our cloud solution and add the same code to its default page. In the Web.config we point the cache client at the cache enabled in the first role, by specifying that role’s name. Then we start the solution locally and go to web role 1, specify the name and let it generate the email for us. Since there’s no cache entry for this name it takes about 2 seconds, but the email is saved into the cache. Then we go to web role 2 and specify the same name, and you can see it retrieves the email saved by web role 1 and returns it very quickly. Finally we can upload our application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to a real Azure storage account.   More Awesome Features As an in-memory distributed caching solution, Caching (Preview) has some fancy features I would like to highlight here. The first one is high availability support. This is the first time I have heard of a distributed cache supporting high availability. In the distributed cache world, if a cache node fails, the data it stored is lost. This behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether a named cache will be backed up automatically. If yes, the data belonging to this named cache will be replicated to another instance of the role.
    Then if one of the instances fails, the data can be retrieved from its backup instance. To enable the backup just open the Caching page in Visual Studio and, for the named cache you want to back up, change the Backup Copies value from 0 to 1. The Backup Copies value can only be 0 or 1: “0” means no backup and no high availability, while “1” means high availability is enabled, with the data backed up to another instance. When using the high availability feature there are a few things we need to keep in mind. First, high availability does NOT mean the data in the cache will never be lost under any kind of failure. For example, if we have a role with caching enabled that has 10 instances, and 9 of them fail, then most of the cached data will be lost since the primary and backup instances may fail together. Normally this will not happen, since MS guarantees that it will use an instance in a different fault domain for the backup cache. Another point is that enabling backup means you store two copies of your data: if you think 100MB of memory is enough for the cache, you need at least 200MB once backup is enabled. Besides high availability, Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than Windows Azure Shared Caching does. It supports local cache with notifications, and it supports both absolute and sliding window expiration types as well. Caching (Preview) also supports the Memcached protocol, which means if you have an application based on Memcached, you can use Caching (Preview) without any code changes; what you need to change is the configuration of how you connect to the cache. Similar to Windows Azure Shared Caching, MS also offers an out-of-the-box ASP.NET session state provider and output cache provider on top of Caching (Preview).   Summary Caching is a very important component when building a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, better performance and better cost efficiency. So now we have two caching solutions in Windows Azure, Shared Caching and Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use Shared Caching. But if you only need a fast, nearby distributed cache, then you’d better use Caching (Preview).   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
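    As a small follow-up to the named cache and expiration features mentioned above, here is a rough sketch of getting a named cache and adding an item with an absolute expiration; the cache name "orders", the key and the timeout are example values of mine, and such a named cache would first have to be defined on the role's Caching page:

    using System;
    using Microsoft.ApplicationServer.Caching;

    public static class OrderCacheExample
    {
        public static void Demo()
        {
            // Reads the dataCacheClients configuration added by the NuGet package.
            var factory = new DataCacheFactory();

            // Get a named cache instead of the default one ("orders" is an assumed name).
            DataCache cache = factory.GetCache("orders");

            // Absolute expiration: the item is evicted ten minutes after it is added.
            cache.Put("order-42", "some serializable payload", TimeSpan.FromMinutes(10));

            var value = cache.Get("order-42") as string;
            Console.WriteLine(value ?? "expired or not found");
        }
    }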

    Read the article

  • Oracle Outsourced Repair Solution: The “Control Tower” for the Reverse Supply Chain

    - by John Murphy
    By Hannes Sandmeier, Vice President of cMRO and Depot Repair Development Smart businesses are increasing their focus on core competencies and aggressively cutting costs in their supply chains. Outsourcing repairs can enable a business to focus on what they do best and most profitably while delivering top-notch customer service through partners that specialize in reverse logistics and repair. A well managed “virtual service organization” can deliver fast turn times, lower costs and high customer satisfaction. A poorly managed partner network can deliver disaster for your business. Managing a virtual service organization requires accurate, real-time information and collaboration tools that enable smart, informed and immediate corrective action. To meet this need, Oracle has released the Oracle Outsourced Repair Solution to provide the “control tower” for managing outsourced reverse supply chain operations from customer complaint through remediation to partner claim settlement. The new solution provides real-time visibility to return status, location, turn time, discrepancies and partner performance. Additionally, its web portals allow partners and carriers to view assigned work, request parts, enter data, capture time and submit claims. Leveraging the combined power of Oracle E-Business Suite and Oracle E-Business Suite Extensions for Oracle Endeca, the Oracle Outsourced Repair Solution provides a comprehensive set of tools that range from quick online partner registration to partner claim reconciliation, from capturing parts and labor to Oracle Cost Management and Financials integration, and from part requisition to waste and hazmat controls. These tools empower service operations managers to: · Increase customer satisfaction Ensure customers are satisfied by holding partners accountable for the speed and quality of repairs, and taking immediate corrective action when things go wrong · Reduce costs: Remove waste from the repair process using accurate job cost and cost breakdown data · Increase return velocity: Users have the tools to view all orders in flight and immediately know the current location, status, owner and contact point for repairs so as to be able to remove bottlenecks, resolve discrepancies and manage escalations The Oracle Outsourced Repair Solution further demonstrates Oracle’s commitment to helping supply chain professionals and service managers deliver high customer satisfaction at the lowest cost. For more information on the Oracle Outsourced Repair Solution, visit here. 

    Read the article

  • WCF client hell (2 replies)

    I've a remote service available via tcp://. When I add a service reference in my client project, VS doesn't create all the proxy objects! I'm missing every xxxClient class, and I only have the types used as parameters in my methods. I tried starting a new empty project and adding the same service reference, and in that project I can see all the proxy objects! It's hell, what can I do? thanks

    Read the article

  • NServiceBus Generic Host and mqsvc.exe high CPU

    - by Michael Stephenson
    We have been doing some work with NServiceBus recently and observed some unusual behaviour which was caused by our mistake and seemed worthy of a small post.   The Scenario In our solution we were doing some standard NServiceBus stuff by pushing a message to a queue using NServiceBus.  We had a direct send/receive scenario rather than a publish/subscribe one.   The background process which was meant to collect the message and then process it was a normal NServiceBus message handler.  We would run NServiceBus.Host.exe, which would find the handler and then do the usual NServiceBus magic.   The Problem In this solution we were creating some automated tests around this module of the integration process to ensure that it would work well.  We had two tests.   Test 1 This test would start NServiceBus.Host.exe using the Process object, then seed a message to the queue via our web service façade sitting above the queue which wrapped NServiceBus.  The background process would then process the message and the test would check the message had been processed fine.   If all was well then the NServiceBus.Host.exe process was stopped.   Test 2 In test 2 we would do a very similar thing, except that instead of starting the process the test would install NServiceBus.Host.exe as a Windows service and then start the service before the test, and once the test was executed it would stop the service.   The Results of the Tests Test 1 worked really well; however in test 2 we found that it didn't really work at all. Instead of the background processing happening, we found that between mqsvc.exe and NServiceBus.Host.exe the CPU on the machine was maxed and nothing was really happening.   The Solution After trying a few things we found that the permissions on the queue were not set correctly.  Once this was resolved it all worked fine, CPU was no longer excessive, and it ran just like the console application.   I think the key takeaways from this are:   Make sure you set the Windows service for the NServiceBus Generic Host to the right credentials When you install the generic host as a Windows service, by default it will use the default Windows credentials.  For any production-like scenario you should be using a domain account to run the process via the Windows service. Make sure you have the queue set with the right permissions For the credentials you have used to configure the generic host as a Windows service, you should ensure that this user has the appropriate permissions for any queues it will interact with. Make sure you turn on the right logging configuration in NServiceBus When this wasn't working correctly we didn't know there was an issue; we were just experiencing the high CPU condition.  I am a little surprised that there wasn't something logged and that the process didn't crash.  I guess this could be by design, bearing in mind that the process could be monitoring many queues.  At this point I'm just saying that originally we didn't have all of the log4net logging which is available from NServiceBus turned on.  It's probably a good idea to have this turned on and configured until you are happy your solution is working fine.   Thanks to Ahmed Hashmi on my team who got this working in the end.
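    For reference, the queue permissions mentioned in the second takeaway can also be granted programmatically with System.Messaging; a minimal sketch, where the queue path and service account are placeholders for whatever your NServiceBus endpoint and Windows service actually use:

    using System.Messaging;

    public static class QueuePermissions
    {
        public static void GrantServiceAccount()
        {
            // Placeholder path and account - substitute the queue your endpoint reads from
            // and the domain account the Windows service runs under.
            const string queuePath = @".\private$\myendpoint";
            const string serviceAccount = @"MYDOMAIN\svc-nservicebus";

            if (!MessageQueue.Exists(queuePath))
                return; // nothing to do if the endpoint has not created its queue yet

            using (var queue = new MessageQueue(queuePath))
            {
                // The endpoint needs to receive and peek messages, read queue properties and write messages.
                queue.SetPermissions(serviceAccount,
                    MessageQueueAccessRights.ReceiveMessage |
                    MessageQueueAccessRights.PeekMessage |
                    MessageQueueAccessRights.GetQueueProperties |
                    MessageQueueAccessRights.WriteMessage);
            }
        }
    }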

    Read the article


< Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >