Search Results

Search found 22891 results on 916 pages for 'service layer'.


  • CodePlex Daily Summary for Wednesday, January 19, 2011

    Popular Releases

    - AutoSPInstaller: AutoSPInstaller for SharePoint 2010 (v2 Beta): New package consisting of the AutoSPInstaller v2 files. If you have not previously downloaded AutoSPInstaller, or are interested in the new functionality and format, this is the recommended release.
    - MiniTwitter: 1.65: MiniTwitter 1.65 ???? ?? List ????? in-reply-to ???????? ????????????????????????? ?? OAuth ????????????????????????????
    - iTracker Asp.Net Starter Kit: Version 3.0.0: This is the initial release of the version 3.0.0 Visual Studio 2010 (.Net 4.0) remake of the ITracker application. I consider this a working, stable application, but since there are still some features missing to make it "complete" I'm leaving it listed as a "beta" release. I am hoping to make it feature-complete for v3.1.0, but anything is possible.
    - mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.52.1 beta 2: New: MVC3 RTM. Fixed bugs: dropdown select; Add Store/Department and Add Store/Producer. WEB.mytrip.mvc 1.0.52.1 (web, for installing to hosting) system requirements: .NET 4.0, MSSQL 2008 or MySQL (tables are created in the database automatically; with .\SQLEXPRESS the database is created automatically in the App_Data folder). SRC.mytrip.mvc 1.0.52.1 system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySQL (tables are created in the database automatically; with .\SQLEXPRESS the database is created automatically in the App_Data folder). Connector/Net...
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. Changes: the RenderView controller extension now works for Razor as well; the live demo switched to Razor.
    - BloodSim: BloodSim - 1.3.3.1: Priority update to resolve a bug that was causing Boss damage to ignore Blood Shields entirely.
    - Rawr: Rawr 4.0.16 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm beta release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of this release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. W...
    - MvcContrib: an Outer Curve Foundation project: MVC 3 - 3.0.51.0: Please see the Change Log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip (MvcContrib.dll, MvcContrib.TestHelper.dll); MvcContrib.Extras.Release.zip (T4MVC, the extra view engines / controller factories and other functionality which is in the project; this file includes the main MvcContrib assembly). Samples are included in the release. You do not need MvcContrib if you download the Extras.
    - Yahoo! UI Library: YUI Compressor for .Net: Version 1.5.0.0 - Jalthi: Updated solution to VS2010. New: Work Item #4450 - optional MSBuild task parameter: do not error if no files were found. Fixed: Work Item #5028 - output file encoding is the same as the optional MSBuild task encoding argument. Fixed: Work Item #5824 - MSBuilds were slow after the first build, due to the current thread being forced to en-GB on non-en-GB systems. Changed: Work Item #6873 - project license changed from MS-PL to GPLv2. New: added all the unit tests from the Java YU...
    - N2 CMS: 2.1.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 2.1.1 is a maintenance release. Major changes in 2.1: support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen); file manager improvements (multiple file upload, resize images to fit); new image gallery; infinite scroll paging on news; content templates. First time with N2? Try the demo site. Download one of the template packs (above) and open...
    - VidCoder: 0.8.1: Adds the ability to choose an arbitrary range (in seconds or frames) to encode. Adds the ability to override the title number in the output file name when enqueuing multiple titles. Updated presets: added iPhone 4 and Apple TV 2; fixed some existing presets that should have had weightp=0 or trellis=0 on them. Added the {parent} option to the auto-name format; use {parent:2} to refer to a folder 2 levels above the input file. Added the {title:2} option to the auto-name format; adds leading zeroes to reach the sp...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks2008R2 without filestream: This download contains a version of the AdventureWorks2008R2 OLTP database without FILESTREAM properties. You do not need to have FILESTREAM enabled to attach this database. No additional schema or data changes have been made. To install the version of AdventureWorks2008R2 that includes FILESTREAM, use the SR1 installer available here. Prerequisites: Microsoft SQL Server 2008 R2 must be installed; Full-Text Search must be enabled. Installing the AdventureWorks2008R2 OLTP database: 1. Cl...
    - NuGet: NuGet 1.0 RTM: NuGet is a free, open-source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL of the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669
    - MVC Music Store: MVC Music Store v2.0: This is the 2.0 release of the MVC Music Store tutorial. This tutorial is updated for ASP.NET MVC 3 and Entity Framework Code-First, and contains fixes and improvements based on feedback and common questions from previous releases. The main download, MvcMusicStore-v2.0.zip, contains everything you need to build the sample application, including a detailed tutorial document in PDF format and assets you will need to build the project, including images, a stylesheet, and a pre-populated databas...
    - Fluent Validation for .NET: 2.0: Changes since 2.0 RC: fixed a typo in the name of FallbackAwareResourceAccessorBuilder; fixed issue #7062 - allow validator selectors to work against nullable properties with overridden names; fixed an error in the German localization; better support for client-side validation messages in the MVC integration. All changes since 1.3: allow custom MVC ModelValidators to be added to the FVModelValidatorProvider; support resource provider for custom property validators through the new IResourceAccessorBuilder ...
    - EnhSim: EnhSim 2.3.2 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Quick update to ...
    - All-In-One Code Framework (Chinese edition): update of 2011-01-12: the January 2011 update of the All-In-One Code Framework is out! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 This release adds 13 code samples covering ASP.NET, AJAX, WinForm and Windows Shell. More details: http://blog.csdn.net/sjb5201/archive/2011/01/13/6135037.aspx Questions can also be raised on the MSDN Chinese forums, http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads or sent by email.
    - patterns & practices - Enterprise Library: Enterprise Library 5.0 - Extensibility Labs: This is a preview release of the hands-on labs to help you learn and practice different ways the Enterprise Library can be extended. Learning map: custom exception handler (estimated time to complete: 1 hr 15 mins); custom logging trace listener (1 hr); custom configuration source (registry-based) (30 mins). System requirements: Enterprise Library 5.0 / Unity 2.0 installed, SQL Express 2008 installed, Visual Studio 2010 Pro (or better) installed. Authors: Chris Tavares, Microsoft Corporation ...
    - Orchard Project: Orchard 1.0: Orchard release notes. Build: 1.0.20. Published: 1/12/2010. How to install Orchard: to install Orchard using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.0.20.zip file. http://orchardproject.net/docs/Manually-installing-Orchard-zip-file.ashx The zip contents are pre-built and ready-to-run...
    - Umbraco CMS: Umbraco 4.6.1: The Umbraco 4.6.1 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and nearly 200 bug fixes since the 4.5.2 release. Getting started: a great place to start is with our Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're ...

    New Projects

    - BAM Service Generator: BizTalk BAM Service Generator is a command-line tool to generate .NET 4.0 WCF services for BizTalk BAM activities.
    - Don't Drink and Code Library: Don't Drink and Code Library is a collection of useful services to help you build functional websites and applications.
    - F# Sample: Transportation Algorithm: Implementation of the transportation algorithm in F#.
    - Fast Replace: This application replaces characters in big files very quickly, by not loading the whole file into memory. Useful because Microsoft's Word, WordPad and Notepad cannot handle big files (more than 50 MB) very well, so replacing characters in them is a painful process.
    - Hall: ????
    - I Know: Post anonymous information online.
    - jQuery Image Rotator Plugin: jQuery Image Rotator Plugin makes it easier for web developers to create a rotating display of images. You'll no longer have to write this logic on a case-by-case basis. It's developed in JavaScript using jQuery.
    - MetaREST: MetaREST makes it easier for web or service developers to publish REST services using simple Class, Method and Parameters attributes. You can publish your legacy business class as a REST service in a couple of minutes.
    - Multi-Project Templates with Wizard: Visual Studio 2010 Sample: This project shows you a simple example of creating multi-project templates with a wizard using Visual Studio 2010, which generates a VSIX file. A VSIX file enables us to install Visual Studio extensions (tools, controls, templates etc.) with a single click.
    - Orchard Contact Form: This project is to create an Orchard module that allows you to create one or more contact pages; alternatively you can assign a ContactForm as a content part or widget to an arbitrary page.
    - Orchard LDAP module: Active Directory authentication module for Orchard.
    - ProcessDomain: ProcessDomain allows developers to isolate the execution of code in a separate process using semantics similar to those of AppDomain. It's developed in C# and the binaries can be used with .NET Framework 2.0 or later.
    - QRCode Helper: This helper for WebMatrix and ASP.NET Web Pages allows you to easily display QR Code graphics on your site.
    - QuantaOS: QuantaOS is a 32-bit operating system; multitasking and GUI support are soon to be added. Written in C++ and assembly.
    - Shuttle Service Bus: Shuttle Service Bus is a free, open-source software project that aims to provide an enterprise-ready service bus without any restrictions.
    - Socket Policy File Server: The socket policy file server is a simple TCP server that serves the socket policy file required by Adobe Flash Player for cross-domain resource access.
    - TibiaPinger: Tibia Pinger, Tools, Bot, ArkBot, Tibia, NeoBot, TibiaAuto, NG, ElfBot
    - Vingy - Visual Studio Instant Search Addin: A simple but effective add-in for Visual Studio 2010 so that you can search the web in a non-intrusive way, and filter results based on sources.

    Read the article

  • CodePlex Daily Summary for Saturday, March 12, 2011

    Popular Releases

    - XML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS-style Error List; double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added a Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...
    - Mobile Device Detection and Redirection: 1.0.0.0: Stable release. 51Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We're now highly confident in the product and have designated this release as stable. We recommend all users update to this version. New capabilities mappings: to improve compatibility with other libraries, some new .NET capabilities are now populated with WURFL data: "maximumRenderedPageSize" populated with "max_deck_size"; "rendersBreaksAfterWmlAnchor" populated ...
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. Added interactive search for the Lookup.
    - WPF Inspector: WPF Inspector 0.9.7: New features in version 0.9.7: support for .NET 3.5 and 4.0; multi-inspection of the same process; property filtering for multiple keywords, e.g. "Height Width"; smart element selection (select controls by clicking CTRL, select template parts by clicking CTRL+SHIFT); the possibility to hide the element adorner (via the context menu on the visual tree); many bugfixes.
    - All-In-One Code Framework (Chinese edition): update of 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 The latest update of the All-In-One Code Framework adds 20 new samples. http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ASP.NET samples: CSASPNETBingMaps, VBASPNETRemoteUploadAndDownload, CS/VBASPNETSerializeJsonString, CSASPNETIPtoLocation, CSASPNETExcelLikeGridView, ... WinForms samples: FTPDownload, FTPUpload, MultiThreadedWebDownloader...
    - Rawr: Rawr 4.1.0: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php This is the Cataclysm release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a release of the WPF version; most of the general issues have been resolved. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. Whe...
    - PHP Manager for IIS: PHP Manager 1.1.2 for IIS 7: This is a localization release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus a few bug fixes (see the change list for more details). Most importantly, this release is translated into five languages: German (translation provided by Christian Graefe), Dutch (Harrie Verveer), Turkish (Yusuf Oztürk), Japanese (Kenichi Wakasa), Russian (the translation is provid...
    - TweetSharp: TweetSharp v2.0.0: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Beta changes: added user streams support; serialization is not attempted for Twitter 5xx errors; fixes based on feedback. Third-party library versions: Hammock v1.2.0: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.com
    - Microsoft All-In-One Code Framework - a centralized code sample library: Visual Studio 2008 Code Samples 2011-03-09: Code samples for Visual Studio 2008.
    - Office Web.UI: Version 2.4: After having lost all modifications done for 2.3, I finally did it again... Have a look at http://www.officewebui.com/change-log Also, the documentation continues to grow... http://www.officewebui.com/category/kb Thanks
    - myCollections: Version 1.3: New in version 1.3: added Editor management for Books; added Amazon API for Books (US, FR, DE); added Amazon US, FR, DE for Movies; added The MovieDB for FR and DE; added Author for Books; added Editor and Platform for Games; added Amazon US, DE for Games; added Studio for XXX; added Background for XXX; bug fixing with the Softonic API; bug fixing with IMDB; UI improvements; removed GraceNote; added Amazon US, FR, DE for Series; added TVDB FR and DE for Series; added Tracks for Musi...
    - Facebook Graph Toolkit: Facebook Graph Toolkit 1.1: Version 1.1 (8 Mar 2011): new Dialog class for redirecting users to Facebook dialogs; new async publishing methods; new Check for Extended Permissions option; fixed bug: inappropriate condition for redirecting to login in the Api class; fixed bug: IframeRedirect method not working.
    - patterns & practices: Composite Services: Composite Services Guidance - CTP2: Overview: The Composite Services guidance (codename Reykjavik) provides best practices and capabilities for applying industry-known SOA design patterns when building robust, connected, service-oriented composite enterprise applications. These capabilities are implemented as a set of reusable components for analytic tracing, service virtualization, metadata centralization and versioning, and policy centralization, as well as exception management, included in this release. Changes in this CTP ...
    - Python Tools for Visual Studio: 1.0 Beta 1: Beta 1. Known issues: you can't install IronPython Tools for Visual Studio side-by-side with Python Tools for Visual Studio; a race condition sometimes causes local MPI debugging to miss breakpoints; when MPI jobs on a cluster fail they don't get cleaned up correctly, which can cause debugging to stall because the associated MPI job is stuck in the queue; the "Threads" view has a race condition which can cause it not to display properly at times; VS2010 shortcuts that are pinned to the taskbar are so...
    - DotNetAge - a lightweight MVC jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0? Completely updated DJME to DJME2, enhancing the user experience - more beautiful and more interactive. Visit the DJME project home to learn more about DJME: http://www.dotnetage.com/sites/home/djme.html A new widget engine has arrived - faster and easier. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. BlogML support added that allows you to import/export your blog data to/from a DotNetAge publishi...
    - Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this download: kooboo_CMS.zip - the Kooboo application files; Content_DBProvider.zip - additional content database implementations for MSSQL, SQLCE, RavenDB and MongoDB (the default is an XML-based database; to use them, copy the related DLLs into the web root bin folder and remove the old content provider DLLs; a content provider has a name like "Kooboo.CMS.Content.Persistence.SQLServer.dll"); View_Engines.zip - support for the Razor, WebForm and NVelocity view engines (copy the DLLs into the web root bin folder t...
    - IronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The release contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.
    - LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.
    - Minemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support the new biome format.
    - Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...

    New Projects

    - 8428-3127-6884-4328: No summary provided.
    - Angua R.P.G. Engine: The Angua R.P.G. Engine is a C# open-source system for playing turn-based role-playing games over the Internet or TCP/IP-based networks.
    - AppServices - SOA for greenhorns: AppServices is a simple service container. It can inject services into any public or private property or field with a declared ServiceAttribute. It supports Windows Forms, WPF and ASP.NET. Silverlight is not tested yet, but it should also work. AppServices simplifies service reference...
    - AxHibernate: AxHibernate aims to produce a POCO-mapped-style domain library for Dynamics AX, i.e. an AX business-connector-ignorant ORM domain library for Dynamics AX.
    - Business localizer, translation: Translation and localization software to use in your business. Helps to localize resource files on top of the .NET Microsoft stack.
    - cloudsync: cloudsync
    - Daily Deals .Net - Groupon Clone: Daily Deals .Net is a "daily deals" application based on .NET technology, utilizing MVC 3 to provide a Groupon-like experience. This project needs your support. Please sign up today to help bring a viable daily deals project to the .Net open source world.
    - ENGUILABE.INFO - Discover a new world: reusable and learnable code
    - euler 40: euler 40
    - euler 42: euler 42
    - gibon: gibbon blog + portfolio + community
    - Holey Ship: Battleship project.
    - Improved CAPTCHA Control ASP.NET 2.0 C#: Improved C# port of Jeff Atwood's custom CAPTCHA control (image verification against robots).
    - IslandUnit: IslandUnit helps you isolate the dependencies of your system with a fluent interface that makes it easier to produce mocks and stubs with existing frameworks (Moq, NMock, NBuilder, AutoPoco) and put the isolated dependencies in IoC containers, leaving your system highly testable.
    - KidsChores: Helps keep track of to-do lists for kids, allowing them to earn points, allowances, etc.
    - Local Copy SharePoint Items: "Local Copy SharePoint Items" is a PowerShell (v1) script written to grab all documents out of libraries/lists from a given SharePoint 2007 site. Run the script as follows: .\LCSPI.ps1 -url http://moss2007 -bkploc c:\destination
    - mkfashio: mkfashion
    - MyTest: MyTest
    - Naja, a Cobra IDE: Naja (a.k.a. Cobra IDE) is an Integrated Development Environment for the Cobra programming language (www.cobra-language.com).
    - NDasm: Visual Studio add-in that displays IL code for any managed method in your .NET solution. Helpful for studying the .NET platform - for example, you can easily see what happens when you declare a delegate type, or what your cool lambda expression actually looks like.
    - olympicgameslondon: The aim of the Olympicgameslondon project is to provide a platform where sports professionals and athletes can broadcast themselves, and where individuals/supporters, Londoners, visitors etc. can post views, share information and find interesting news about the London 2012 Olympic Games.
    - Populate SharePoint Form Fields from QueryString via Javascript: This project makes it possible to pre-fill SharePoint Edit form fields using JavaScript and field values passed via the querystring - without server-side deployment.
    - Project Server: Hello to all members of the graduation internship team at Toàn Cầu Thịnh. This is a server used to help everyone edit the project as easily as possible while working as a team. Support: ngtrongtri@yahoo.com.vn
    - Pure Midi: Pure Midi - handling MIDI communication in .NET, with full sequencing support and arranger-like musical style playing.
    - Relative Time: Renders a TimeSpan object as something that's consumable by humans, like "about 9 minutes ago" or "yesterday".
    - RPG Character Creation: RPG Character Creation makes it easier to create and maintain characters using the Avalon 2.0 ruleset. It is developed in C#.
    - SearchBox for WPF: Customizable search box for WPF with auto-complete capabilities.
    - Sebro: Sebro is a freelance-like platform for the SEO-related market. The main feature of the platform is automated tracking and statistics gathering of work, and strongly formalized criteria for its completion or failure.
    - ShoppingApp2: This project will cover the basics of creating an ASP.NET application. The aim of the project is to create an application which will help users, via the web, to manage their home budget.
    - Socket Redirection for Terminal Services: Socket Redirection for Terminal Services allows connecting to the Internet on a machine which does not have access to the Internet, using another machine in the network which has that access.
    - Sort Calculator: A C# sort calculator that generates random array numbers and implements several sorting algorithms.
    - SpSource 2010: Port of SPSource to work with VS 2010 Tools for SharePoint 2010.
    - SQL Script Executer: SQL Executer makes it easy to deploy and execute multiple scripts on SQL Server, as you could do in VS 2008. In Visual Studio 2008's Database Project, one had the option to select multiple SQL files and run them on a chosen DB; Visual Studio 2010 doesn't come with this functionality.
    - Test By Wire: Test By Wire is a unit test framework which handles the automatic setup and orchestration of a test target and its dependencies. In addition, Test By Wire features automatic mocking of interfaces with a pure BDD-style syntax.
    - Time Manager System: The TMS (Time Manager System) allows you to plan, organize and schedule your daily activities. The TMS is based on the Pomodoro Technique, which improves your productivity.
    - Windows Azure Service Instances Auto Scaling: Windows Azure Service Instances Auto Scaling is a way of dynamically scaling the instance count of a running hosted service up and down. In this version this management is done based on a time schedule by an Azure Worker Role.

    Read the article

  • ASP.NET WebAPI Security 4: Examples for various Authentication Scenarios

    - by Your DisplayName here!
    The Thinktecture.IdentityModel.Http repository includes a number of samples for the various authentication scenarios. All the clients follow a basic pattern: acquire a client credential (a single token, multiple tokens, or username/password), then call the service. The service simply enumerates the claims it finds on the request and returns them to the client. I won't show that part of the code, but rather focus on steps 1 and 2.

    Basic Authentication
    This is the most basic (pun intended) scenario. My library contains a class that can create the Basic Authentication header value. Simply set username and password and you are good to go.

        var client = new HttpClient { BaseAddress = _baseAddress };
        client.DefaultRequestHeaders.Authorization =
            new BasicAuthenticationHeaderValue("alice", "alice");

        var response = client.GetAsync("identity").Result;
        response.EnsureSuccessStatusCode();

    SAML Authentication
    To integrate a Web API with an existing enterprise identity provider like ADFS, you can use SAML tokens. This is certainly not the most efficient way of calling a "lightweight service" ;) But very useful if that's what it takes to get the job done.

        private static string GetIdentityToken()
        {
            var factory = new WSTrustChannelFactory(
                new WindowsWSTrustBinding(SecurityMode.Transport),
                _idpEndpoint);
            factory.TrustVersion = TrustVersion.WSTrust13;

            var rst = new RequestSecurityToken
            {
                RequestType = RequestTypes.Issue,
                KeyType = KeyTypes.Bearer,
                AppliesTo = new EndpointAddress(Constants.Realm)
            };

            var token = factory.CreateChannel().Issue(rst) as GenericXmlSecurityToken;
            return token.TokenXml.OuterXml;
        }

        private static Identity CallService(string saml)
        {
            var client = new HttpClient { BaseAddress = _baseAddress };
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("SAML", saml);

            var response = client.GetAsync("identity").Result;
            response.EnsureSuccessStatusCode();

            return response.Content.ReadAsAsync<Identity>().Result;
        }

    SAML to SWT conversion using the Azure Access Control Service
    Another possible option for integrating SAML-based identity providers is to use an intermediary service that converts the SAML token to the more compact SWT (Simple Web Token) format. This way you only need to roundtrip the SAML once and can use the SWT afterwards. The code for the conversion uses the ACS OAuth2 endpoint. The OAuth2Client class is part of my library.

        private static string GetServiceTokenOAuth2(string samlToken)
        {
            var client = new OAuth2Client(_acsOAuth2Endpoint);

            return client.RequestAccessTokenAssertion(
                samlToken,
                SecurityTokenTypes.Saml2TokenProfile11,
                Constants.Realm).AccessToken;
        }
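    An SWT, incidentally, is nothing more than a set of form-encoded name/value pairs (typically Issuer, Audience, ExpiresOn and the claims) followed by an HMACSHA256 signature. A minimal sketch of inspecting one on the client, handy for debugging - this is illustrative code, not part of the library:

        // Splits an SWT into its name/value pairs for inspection.
        // Requires System.Collections.Generic and System.Web (HttpUtility).
        // Note: this does NOT validate the HMACSHA256 signature - that is
        // the service's job.
        private static Dictionary<string, string> ParseSwt(string swt)
        {
            var claims = new Dictionary<string, string>();

            foreach (var pair in swt.Split('&'))
            {
                var parts = pair.Split(new[] { '=' }, 2);
                if (parts.Length == 2)
                {
                    claims[HttpUtility.UrlDecode(parts[0])] =
                        HttpUtility.UrlDecode(parts[1]);
                }
            }

            return claims;
        }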
    SWT Authentication
    When you have an identity provider that directly supports a (simple) web token, you can acquire the token directly without the conversion step. Thinktecture.IdentityServer, for example, supports the OAuth2 resource owner credential profile to issue SWT tokens.

        private static string GetIdentityToken()
        {
            var client = new OAuth2Client(_oauth2Address);
            var response = client.RequestAccessTokenUserName("bob", "abc!123", Constants.Realm);

            return response.AccessToken;
        }

        private static Identity CallService(string swt)
        {
            var client = new HttpClient { BaseAddress = _baseAddress };
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", swt);

            var response = client.GetAsync("identity").Result;
            response.EnsureSuccessStatusCode();

            return response.Content.ReadAsAsync<Identity>().Result;
        }

    So you can see that it's pretty straightforward to implement various authentication scenarios using Web API and my authentication library. Stay tuned for more client samples!
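    For completeness, the "identity" endpoint the samples call is little more than a controller that projects the current claims principal back to the caller. A minimal sketch of what such a service might look like - the controller and the DTO below are my assumptions, not the actual sample code:

        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Claims;
        using System.Web.Http;

        // Hypothetical DTO matching what the clients deserialize into.
        public class ClaimDto
        {
            public string Type { get; set; }
            public string Value { get; set; }
        }

        public class IdentityController : ApiController
        {
            // GET /identity - echoes back the claims found on the request.
            public IEnumerable<ClaimDto> Get()
            {
                var principal = User as ClaimsPrincipal;

                if (principal == null)
                {
                    return Enumerable.Empty<ClaimDto>();
                }

                return principal.Claims
                    .Select(c => new ClaimDto { Type = c.Type, Value = c.Value })
                    .ToList();
            }
        }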

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspx

    Setting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here's a step-by-step which gets you from zero to a fully-redundant, 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up).

    First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows, but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.). While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell.

    We'll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it's the same from step 3 onwards):

    1. Setup Mongo
    Log into the first node, download Mongo and unzip it to C:. Rename the folder to remove the version - so you have c:\MongoDB\bin etc. - and create a new folder for the logs, c:\MongoDB\logs.

    2. Setup your data disk
    When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a default 120GB disk and host your data on C:, then Mongo will allocate 6GB for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10GB, and then creating a new logical disk for your data from that spare 10GB, which will be allocated as E:. Create a new folder, e:\data.

    3. Start Mongo
    When that's done, start a command line, point to the Mongo binaries folder, install Mongo as a Windows Service running in replica set mode, and start the service:

        cd c:\mongodb\bin
        mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet --install
        net start mongodb

    4. Open the ports
    Mongo uses port 27017 by default, so you need to allow access in the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab, add a new rule, again to allow port 27017.

    5. Initialise the replica set
    Start up your local Mongo shell, connecting to your Azure VM, and initiate the replica set:

        c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net
        rs.initiate()

    This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you're using the default C: drive with 120GB, it may take so long that rs.initiate() never responds; if you're sat waiting more than 20 minutes, start another instance of the Mongo shell pointing to the same machine to check on it). Run rs.conf() and you should see one node configured.

    6. Fix the host name for the primary - *don't miss this one*
    For the first node in the replica set, Mongo on Windows doesn't populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn't accessible to the outside world.
    The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this:

        cfg = rs.conf()
        cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017'
        rs.reconfig(cfg)

    When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there's no replication yet.

    7. Add more nodes
    For the next two VMs, follow steps 1 through 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you're using:

        rs.add('sc-xyz-db2.cloudapp.net:27017')

    Run rs.status() and you'll see your new node in the STARTUP2 state, which means it's initializing and replicating from the PRIMARY. Repeat for your third node:

        rs.add('sc-xyz-db3.cloudapp.net:27017')

    When all nodes have finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service.

    Note - the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.
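    From application code, give the client all of the members' addresses plus the replica set name, so the driver can discover the current PRIMARY and fail over with the set. A minimal sketch using the official MongoDB C# driver of the era (the 1.x API; the database and collection names here are my assumptions):

        using MongoDB.Bson;
        using MongoDB.Driver;

        class Program
        {
            static void Main()
            {
                // Name every member and the replica set; writes are routed
                // to whichever node is currently PRIMARY.
                var connectionString =
                    "mongodb://sc-xyz-db1.cloudapp.net:27017," +
                    "sc-xyz-db2.cloudapp.net:27017," +
                    "sc-xyz-db3.cloudapp.net:27017" +
                    "/?replicaSet=TheReplicaSet";

                var client = new MongoClient(connectionString);
                var server = client.GetServer();              // 1.x driver API
                var database = server.GetDatabase("demo");    // assumed name
                var collection = database.GetCollection<BsonDocument>("docs");

                collection.Insert(new BsonDocument("hello", "replica set"));
            }
        }

    If db1 is stopped, the driver re-discovers the new PRIMARY from the remaining members, and the insert above keeps working once the election completes.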

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.

    Q: How do you effectively engage employees to drive business value?
    A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it. Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employees' motivations. Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function, or the frequency of innovation created. Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or, ideally, less time adds value for that individual and, in turn, drives their level of engagement to drive business value.

    Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be?
    A: Alignment through shared purpose. Zappos is an excellent example of a culture that arguably has higher-than-average levels of employee engagement, and it permeates every aspect of their organization - embodied externally through their customer experience. I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail, down to the green packaging, that they have an amazingly strong shared purpose. The Zappos.com 'About' page outlines their "Family Core Values", the first three being "Deliver WOW Through Service", "Embrace and Drive Change" and "Create Fun and A Little Weirdness" - all reflected externally in my interaction with them. Strong shared purpose enables a higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded market share.

    Q: Have you seen any trends in the market regarding employee engagement?
    A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure, like email or instant messaging. Originally offered as standalone tools, the value is now seen when these capabilities are offered in an integrated fashion in the context of business entities. An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter). Companies have reached a formal understanding that this now-established communication medium requires strategies allowing employees to engage. I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting them through internal channels.

    Q: Employee engagement is important, but what about engaging customers and partners?
    A: Over the last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service. From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets. Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate. As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint, greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors. This same model can be extended to blending services and product delivery models, based on a closeness not easily possible before the increased capability of engagement mechanisms.

    Q: What types of solutions are available to successfully deliver employee engagement?
    A: Solutions enabling higher levels of engagement do so on the basis of relevancy. This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies. These technologies can help deliver an experience tailored to a given role or process within an organization that applies equally to work that is structured or unstructured, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature-rich business intelligence dashboards and portals.

    Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick.

    Read the article

  • Adding SQL Cache Dependencies to the Loosely coupled .NET Cache Provider

    - by Rhames
    This post adds SQL Cache Dependency support to the loosely coupled .NET Cache Provider that I described in the previous post (http://geekswithblogs.net/Rhames/archive/2012/09/11/loosely-coupled-.net-cache-provider-using-dependency-injection.aspx). The sample code is available on github at https://github.com/RobinHames/CacheProvider.git.

    Each time we want to apply a cache dependency to a call to fetch or cache a data item, we need to supply an instance of the relevant dependency implementation. This suggests an Abstract Factory will be useful to create cache dependencies as needed. We can then use Dependency Injection to inject the factory into the relevant consumer. Castle Windsor provides a typed factory facility that will be utilised to implement the cache dependency abstract factory (see http://docs.castleproject.org/Windsor.Typed-Factory-Facility-interface-based-factories.ashx).

    Cache Dependency Interfaces
    First I created a set of cache dependency interfaces in the domain layer, which can be used to pass a cache dependency into the cache provider.

    ICacheDependency
    The ICacheDependency interface is simply an empty interface that is used as a parent for the specific cache dependency interfaces. This will allow us to place a generic constraint on the Cache Dependency Factory, and will give us a type that can be passed into the relevant Cache Provider methods.

        namespace CacheDiSample.Domain.CacheInterfaces
        {
            public interface ICacheDependency
            {
            }
        }

    ISqlCacheDependency.cs
    The ISqlCacheDependency interface provides specific SQL caching details, such as a SQL command or a database connection and table. It is the concrete implementation of this interface that will be created by the factory and passed into the Cache Provider.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace CacheDiSample.Domain.CacheInterfaces
        {
            public interface ISqlCacheDependency : ICacheDependency
            {
                ISqlCacheDependency Initialise(string databaseConnectionName, string tableName);
                ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand);
            }
        }

    If we want other types of cache dependencies, such as by key or file, interfaces may be created to support these (the sample code includes an IKeyCacheDependency interface).

    Modifying ICacheProvider to accept Cache Dependencies
    Next I modified the existing ICacheProvider<T> interface so that cache dependencies may be passed into a Fetch method call. I did this by adding two overloads to the existing Fetch methods, which take an IEnumerable<ICacheDependency> parameter (the IEnumerable allows more than one cache dependency to be included). I also added a method to create cache dependencies. This means that the implementation of the Cache Provider will require a dependency on the Cache Dependency Factory. It is pretty much down to personal choice as to whether this approach is taken, or whether the Cache Dependency Factory is injected directly into the repository or other consumer of the Cache Provider. I think, because the cache dependency cannot be used without the Cache Provider, placing the dependency on the factory into the Cache Provider implementation is cleaner.

    ICacheProvider.cs

        using System;
        using System.Collections.Generic;

        namespace CacheDiSample.Domain.CacheInterfaces
        {
            public interface ICacheProvider<T>
            {
                T Fetch(string key, Func<T> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
                T Fetch(string key, Func<T> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry,
                    IEnumerable<ICacheDependency> cacheDependencies);

                IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
                IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry,
                    IEnumerable<ICacheDependency> cacheDependencies);

                U CreateCacheDependency<U>() where U : ICacheDependency;
            }
        }

    Cache Dependency Factory
    Next I created the interface for the Cache Dependency Factory in the domain layer.

    ICacheDependencyFactory.cs

        namespace CacheDiSample.Domain.CacheInterfaces
        {
            public interface ICacheDependencyFactory
            {
                T Create<T>() where T : ICacheDependency;

                void Release<T>(T cacheDependency) where T : ICacheDependency;
            }
        }

    I used the ICacheDependency parent interface as a generic constraint on the Create and Release methods in the factory interface.

    Now the interfaces are in place, I moved on to the concrete implementations.

    ISqlCacheDependency Concrete Implementation
    The concrete implementation of ISqlCacheDependency will need to provide an instance of System.Web.Caching.SqlCacheDependency to the Cache Provider implementation. Unfortunately this class is sealed, so I cannot simply inherit from it. Instead, I created an interface called IAspNetCacheDependency that provides a Create method to create an instance of the relevant System.Web.Caching cache dependency type. This interface is specific to the ASP.NET implementation of the Cache Provider, so it should be defined in the same layer as the concrete implementation of the Cache Provider (the MVC UI layer in the sample code).

    IAspNetCacheDependency.cs

        using System.Web.Caching;

        namespace CacheDiSample.CacheProviders
        {
            public interface IAspNetCacheDependency
            {
                CacheDependency CreateAspNetCacheDependency();
            }
        }

    Next, I created the concrete implementation of the ISqlCacheDependency interface. This class also implements the IAspNetCacheDependency interface, and is defined in the same layer as the Cache Provider implementation.

    AspNetSqlCacheDependency.cs

        using System.Web.Caching;
        using CacheDiSample.Domain.CacheInterfaces;

        namespace CacheDiSample.CacheProviders
        {
            public class AspNetSqlCacheDependency : ISqlCacheDependency, IAspNetCacheDependency
            {
                private string databaseConnectionName;
                private string tableName;
                private System.Data.SqlClient.SqlCommand sqlCommand;

                #region ISqlCacheDependency Members

                public ISqlCacheDependency Initialise(string databaseConnectionName, string tableName)
                {
                    this.databaseConnectionName = databaseConnectionName;
                    this.tableName = tableName;
                    return this;
                }

                public ISqlCacheDependency Initialise(System.Data.SqlClient.SqlCommand sqlCommand)
                {
                    this.sqlCommand = sqlCommand;
                    return this;
                }

                #endregion

                #region IAspNetCacheDependency Members

                public System.Web.Caching.CacheDependency CreateAspNetCacheDependency()
                {
                    if (sqlCommand != null)
                        return new SqlCacheDependency(sqlCommand);
                    else
                        return new SqlCacheDependency(databaseConnectionName, tableName);
                }

                #endregion
            }
        }

    ICacheProvider Concrete Implementation
    The ICacheProvider interface is implemented by the CacheProvider class. This implementation is modified to include the changes to the ICacheProvider interface. First I needed to inject the Cache Dependency Factory into the Cache Provider:

        private ICacheDependencyFactory cacheDependencyFactory;

        public CacheProvider(ICacheDependencyFactory cacheDependencyFactory)
        {
            if (cacheDependencyFactory == null)
                throw new ArgumentNullException("cacheDependencyFactory");

            this.cacheDependencyFactory = cacheDependencyFactory;
        }

    Next I implemented the CreateCacheDependency method, which simply passes on the create request to the factory:

        public U CreateCacheDependency<U>() where U : ICacheDependency
        {
            return this.cacheDependencyFactory.Create<U>();
        }

    The signature of the FetchAndCache helper method was modified to take an additional IEnumerable<ICacheDependency> parameter:

        private U FetchAndCache<U>(string key, Func<U> retrieveData,
            DateTime? absoluteExpiry, TimeSpan? relativeExpiry,
            IEnumerable<ICacheDependency> cacheDependencies)

    and the following code added to create the relevant System.Web.Caching.CacheDependency object for any dependencies and pass them to the HttpContext Cache:

        CacheDependency aspNetCacheDependencies = null;

        if (cacheDependencies != null)
        {
            if (cacheDependencies.Count() == 1)
                // We know that the implementations of ICacheDependency will also
                // implement IAspNetCacheDependency, so we can use a cast here and
                // call the CreateAspNetCacheDependency() method
                aspNetCacheDependencies =
                    ((IAspNetCacheDependency)cacheDependencies.ElementAt(0)).CreateAspNetCacheDependency();
            else if (cacheDependencies.Count() > 1)
            {
                AggregateCacheDependency aggregateCacheDependency = new AggregateCacheDependency();
                foreach (ICacheDependency cacheDependency in cacheDependencies)
                {
                    aggregateCacheDependency.Add(
                        ((IAspNetCacheDependency)cacheDependency).CreateAspNetCacheDependency());
                }
                aspNetCacheDependencies = aggregateCacheDependency;
            }
        }

        HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies,
            absoluteExpiry.Value, relativeExpiry.Value);

    The full code listing for the modified CacheProvider class is shown below:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;
        using CacheDiSample.Domain.CacheInterfaces;

        namespace CacheDiSample.CacheProviders
        {
            public class CacheProvider<T> : ICacheProvider<T>
            {
                private ICacheDependencyFactory cacheDependencyFactory;

                public CacheProvider(ICacheDependencyFactory cacheDependencyFactory)
                {
                    if (cacheDependencyFactory == null)
                        throw new ArgumentNullException("cacheDependencyFactory");

                    this.cacheDependencyFactory = cacheDependencyFactory;
                }

                public T Fetch(string key, Func<T> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry, null);
                }

                public T Fetch(string key, Func<T> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry,
                    IEnumerable<ICacheDependency> cacheDependencies)
                {
                    return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry,
                        cacheDependencies);
                }

                public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
                {
                    return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry,
                        relativeExpiry, null);
                }

                public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry,
                    IEnumerable<ICacheDependency> cacheDependencies)
                {
                    return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry,
                        relativeExpiry, cacheDependencies);
                }

                public U CreateCacheDependency<U>() where U : ICacheDependency
                {
                    return this.cacheDependencyFactory.Create<U>();
                }

                #region Helper Methods

                private U FetchAndCache<U>(string key, Func<U> retrieveData,
                    DateTime? absoluteExpiry, TimeSpan? relativeExpiry,
                    IEnumerable<ICacheDependency> cacheDependencies)
                {
                    U value;
                    if (!TryGetValue<U>(key, out value))
                    {
                        value = retrieveData();
                        if (!absoluteExpiry.HasValue)
                            absoluteExpiry = Cache.NoAbsoluteExpiration;

                        if (!relativeExpiry.HasValue)
                            relativeExpiry = Cache.NoSlidingExpiration;

                        CacheDependency aspNetCacheDependencies = null;

                        if (cacheDependencies != null)
                        {
                            if (cacheDependencies.Count() == 1)
                                // We know that the implementations of ICacheDependency will also
                                // implement IAspNetCacheDependency, so we can use a cast here and
                                // call the CreateAspNetCacheDependency() method
                                aspNetCacheDependencies =
                                    ((IAspNetCacheDependency)cacheDependencies.ElementAt(0))
                                        .CreateAspNetCacheDependency();
                            else if (cacheDependencies.Count() > 1)
                            {
                                AggregateCacheDependency aggregateCacheDependency =
                                    new AggregateCacheDependency();
                                foreach (ICacheDependency cacheDependency in cacheDependencies)
                                {
                                    aggregateCacheDependency.Add(
                                        ((IAspNetCacheDependency)cacheDependency)
                                            .CreateAspNetCacheDependency());
                                }
                                aspNetCacheDependencies = aggregateCacheDependency;
                            }
                        }

                        HttpContext.Current.Cache.Insert(key, value, aspNetCacheDependencies,
                            absoluteExpiry.Value, relativeExpiry.Value);
                    }
                    return value;
                }

                private bool TryGetValue<U>(string key, out U value)
                {
                    object cachedValue = HttpContext.Current.Cache.Get(key);
                    if (cachedValue == null)
                    {
                        value = default(U);
                        return false;
                    }
                    else
                    {
                        try
                        {
                            value = (U)cachedValue;
                            return true;
                        }
                        catch
                        {
                            value = default(U);
                            return false;
                        }
                    }
                }

                #endregion
            }
        }

    Wiring up the DI Container
    Now the implementations for the Cache Dependency are in place, I wired them up in the existing Windsor CacheInstaller. First I needed to register the implementation of the ISqlCacheDependency interface:

        container.Register(
            Component.For<ISqlCacheDependency>()
                .ImplementedBy<AspNetSqlCacheDependency>()
                .LifestyleTransient());

    Next I registered the Cache Dependency Factory. Notice that I have not implemented the ICacheDependencyFactory interface; Castle Windsor will do this for me by using the Typed Factory Facility. I do need to bring the Castle.Facilities.TypedFactory namespace into scope:

        using Castle.Facilities.TypedFactory;

    Then I registered the factory:

        container.AddFacility<TypedFactoryFacility>();

        container.Register(
            Component.For<ICacheDependencyFactory>()
                .AsFactory());

    The full code for the CacheInstaller class is:

        using Castle.MicroKernel.Registration;
        using Castle.MicroKernel.SubSystems.Configuration;
        using Castle.Windsor;
        using Castle.Facilities.TypedFactory;

        using CacheDiSample.Domain.CacheInterfaces;
        using CacheDiSample.CacheProviders;

        namespace CacheDiSample.WindsorInstallers
        {
            public class CacheInstaller : IWindsorInstaller
            {
                public void Install(IWindsorContainer container, IConfigurationStore store)
                {
                    container.Register(
                        Component.For(typeof(ICacheProvider<>))
                            .ImplementedBy(typeof(CacheProvider<>))
                            .LifestyleTransient());

                    container.Register(
                        Component.For<ISqlCacheDependency>()
                            .ImplementedBy<AspNetSqlCacheDependency>()
                            .LifestyleTransient());

                    container.AddFacility<TypedFactoryFacility>();

                    container.Register(
                        Component.For<ICacheDependencyFactory>()
                            .AsFactory());
                }
            }
        }

    Configuring the ASP.NET SQL Cache Dependency
    There are a couple of configuration steps required to enable SQL Cache Dependency for the application and database. From the Visual Studio Command Prompt, the following commands should be used to enable cache polling of the relevant database tables:

        aspnet_regsql -S <servername> -E -d <databasename> -ed
        aspnet_regsql -S <servername> -E -d <databasename> -et -t <tablename>

    (The second command, with the -t option, should be repeated for each table that is to be made available for cache dependencies.)

    Finally, SQL cache polling needs to be enabled by adding the following configuration to the <system.web> section of web.config:

        <caching>
          <sqlCacheDependency pollTime="10000" enabled="true">
            <databases>
              <add name="BloggingContext" connectionStringName="BloggingContext"/>
            </databases>
          </sqlCacheDependency>
        </caching>

    (Obviously the name and connection string name should be altered as required.)

    Using a SQL Cache Dependency
    Now all the coding is complete. To specify a SQL Cache Dependency, I can modify my BlogRepositoryWithCaching decorator class (see the earlier post) as follows:

        public IList<Blog> GetAll()
        {
            var sqlCacheDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>()
                .Initialise("BloggingContext", "Blogs");

            ICacheDependency[] cacheDependencies =
                new ICacheDependency[] { sqlCacheDependency };

            string key = string.Format("CacheDiSample.DataAccess.GetAll");

            return cacheProvider.Fetch(key, () =>
                {
                    return parentBlogRepository.GetAll();
                },
                null, null, cacheDependencies)
                .ToList();
        }

    This adds a dependency on the "Blogs" table in the database. The data will remain in the cache until the contents of this table change; then the cache item will be invalidated, and the next call to the GetAll() repository method will be routed to the parent repository to refresh the data from the database.
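    Because the Fetch overloads take an IEnumerable<ICacheDependency>, several dependencies can be combined, and the provider wraps them in an AggregateCacheDependency internally. A small sketch in the same repository style, showing a cached item invalidated by a change to either of two tables - the second table name and the method are my assumptions for illustration:

        public IList<Blog> GetAllWithAuthors()
        {
            // One dependency per table; the provider aggregates them, so a
            // change to either table invalidates the cached item.
            var blogsDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>()
                .Initialise("BloggingContext", "Blogs");
            var authorsDependency = cacheProvider.CreateCacheDependency<ISqlCacheDependency>()
                .Initialise("BloggingContext", "Authors");   // assumed table name

            ICacheDependency[] cacheDependencies =
                new ICacheDependency[] { blogsDependency, authorsDependency };

            string key = "CacheDiSample.DataAccess.GetAllWithAuthors";

            return cacheProvider.Fetch(key, () =>
                {
                    return parentBlogRepository.GetAll();
                },
                null, null, cacheDependencies)
                .ToList();
        }

    Remember that each table referenced this way must also have been enabled for notifications with the aspnet_regsql -et -t command shown above.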

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have setup your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with a flexibility to define new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made); the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to actual question as to what is the impact in having a user-defined customer event.   System defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are few programs which have routines to first validate the completion of disconnection field activities, which were raised as a result of customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from other Severance Event, specifically CNP - Cut for Non-Payment. 
The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):

"This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter."

Notice the underlined text. This algorithm implicitly checks for field activities having a completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

Now if we look back at the customer's issue, we can relate that the Post Cancel Algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

To conclude: if you introduce new customer events that extend or simulate the base customer events (the ones included in the base product), ensure that there is no other direct or indirect impact on the other business functions the application has to offer.

    Read the article

  • Row Count Plus Transformation

As the name suggests, we have taken the Row Count Transform provided by Microsoft in the Integration Services toolbox, recreated its functionality, and extended upon it. There are two things about the current version that we thought could do with cleaning up:

- Lack of a custom UI
- You have to type the variable name yourself

In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again, we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation (a sketch of that approach follows at the end of this entry). Our philosophy is that if you have to write or copy and paste the same piece of code more than once, then you should be thinking about a custom component, and here it is.

The Row Count Plus Transformation populates variables with the values returned from:

- Counting the rows that have flowed through the path
- Returning the time in seconds between when it first saw a row come down this path and when it saw the final row

It is possible to leave both these boxes blank and the component will still work. All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve the visibility of this information, such that it is captured in package logging, or it can be used to drive workflow in the case of an error event.

Properties

Property               | Data Type                       | Description
OutputRowCountVariable | String                          | The name of the variable into which the number of rows read will be passed (optional).
OutputDurationVariable | String                          | The name of the variable into which the duration in seconds will be passed (optional).
EventType              | RowCountPlusTransform.EventType | The type of event to fire during PostExecute, which includes the row count and duration values.

RowCountPlusTransform.EventType Enumeration

Name        | Value | Description
None        | 0     | Do not fire any event.
Information | 1     | Fire an Information event.
Warning     | 2     | Fire a Warning event.
Error       | 3     | Fire an Error event.

Installation

The component is provided as an MSI file, which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about which components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry, How do I install a task or transform component?
We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

Downloads

The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

Row Count Plus Transformation for SQL Server 2005
Row Count Plus Transformation for SQL Server 2008
Row Count Plus Transformation for SQL Server 2012

Version History

SQL Server 2012
Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

SQL Server 2008
Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)

SQL Server 2005
Version 1.1.0.43 - Bug fix for duration. For long-running processes the duration second count may have been incorrect. (8 Sep 2006)
Version 1.1.0.42 - SP1 compatibility testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
Version 1.0.0.1 - SQL Server 2005 RTM. Made available as general public release. (20 Mar 2006)

Screenshot

Troubleshooting

Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - see How do I install a task or transform component?
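For comparison, here is a rough sketch of the Script Component pattern this custom component replaces, as mentioned above. This is not the component's source code: the variable names RowCountVar and DurationVar are assumptions and must be listed in the component's ReadWriteVariables property, and Input0Buffer/UserComponent are the types the SSIS script designer generates for you.

using System;

// Designer-generated scaffolding (Input0Buffer, UserComponent) is assumed.
public class ScriptMain : UserComponent
{
    private int rowCount;
    private DateTime? firstRowSeen;
    private DateTime lastRowSeen;

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Note the time the first row arrives, and keep updating the last-seen time.
        if (!firstRowSeen.HasValue)
            firstRowSeen = DateTime.Now;
        lastRowSeen = DateTime.Now;

        rowCount++;
    }

    public override void PostExecute()
    {
        base.PostExecute();

        // ReadWriteVariables may only be written during PostExecute.
        Variables.RowCountVar = rowCount;
        Variables.DurationVar = firstRowSeen.HasValue
            ? (int)(lastRowSeen - firstRowSeen.Value).TotalSeconds
            : 0;
    }
}

Having to paste this into every data flow that needs counting is exactly the duplication the custom component avoids.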

    Read the article

  • CodePlex Daily Summary for Wednesday, June 20, 2012

    CodePlex Daily Summary for Wednesday, June 20, 2012Popular ReleasesApex: Apex 1.4: Apex 1.4Apex 1.4 provides a framework for rapid MVVM development. Download Apex 1.4 to get the core binaries, Visual Studio Extensions, Project Templates, Samples and Documentation. The 1.4 Release provides a vast number of enhancements via the Apex Broker. The Apex Broker is an object that can be used to retrieve models, get the view for a view model and more, much like an IoC container. The new Zune Style application templates for WPF and Silverlight give a great starting point for makin...Auto Proxy Configuration: V 1.0: This tool consists of a windows application that allows you to define a proxy server for each DNSDomain and a windows service that matches the DNSDomain to the database and set the proxy accordingly.51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.6.11: One Click Install from NuGet Changes to Version 2.1.6.111. Altered the GetIsCrawler method in MobileCapabilities.cs to return null if no value for IsCrawler is provided. Previously the method returned false breaking values provided via the BrowserCap file. 2. Enhanced Xml/Reader.cs to reduce the number of byte objects created when decompressing zipped XML files. 3. Altered Location.cs GetUrl method to remove replacement string tags {0} from the new url if no values were found to replace them...NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support new VS2012 as well as VS2010. This release is also fixing the issue The "Comment Selection" include the first line after the selection If the new NShader version doesn't highlight your shader, you can try to: Remove the registry entry: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache Remove all lines using "fx" or "hlsl" in file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...asp.net membership: v1.0 Membership Management: Abmemunit is a Membership Management Project where developer can use their learning purposes of various issues related to ASP.NET, MVC, ASP.NET membership, Entity Framework, Code First, Ajax, Unit Test, IOC Container, Repository, and Service etc. Though it is a very simple project and all of these topics are used in an easy manner gathering from various big projects, it can easily understand. 
User End Functional Specification The functionalities of this project are very simple and straight...JSON Toolkit: JSON Toolkit 4.0: Up to 2.5x performance improvement in stringify operations Up to 1.7x performance improvement in parse operations Improved error messages when parsing invalid JSON strings Extended support to .Net 2.0, .Net 3.5, .Net 4.0, Silverlight 4, Windows Phone, Windows 8 metro apps and Xbox JSON namespace changed to ComputerBeacon.Json namespaceXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0: System Requirements OS Windows 7 Windows Vista Windows Server 2008 Windows Server 2008 R2 Web Server Internet Information Service 7.0 or above .NET Framework .NET Framework 4.0 WCF Activation feature HTTP Activation Non-HTTP Activation for net.pipe/net.tcp WCF bindings ASP.NET MVC ASP.NET MVC 3.0 Database Microsoft SQL Server 2005 Microsoft SQL Server 2008 Microsoft SQL Server 2008 R2 Additional Deployment Configuration Started Windows Process Activation service Start...ASP.NET REST Services Framework: Release 1.3 - Standard version: The REST-services Framework v1.3 has important functional changes allowing to use complex data types as service call parameters. Such can be mapped to form or query string variables or the HTTP Message Body. This is especially useful when REST-style service URLs with POST or PUT HTTP method is used. Beginning from v1.1 the REST-services Framework is compatible with ASP.NET Routing model as well with CRUD (Create, Read, Update, and Delete) principle. These two are often important when buildin...NanoMVVM: a lightweight wpf MVVM framework: v0.10 stable beta: v0.10 Minor fixes to ui and code, added error example to async commands, separated project into various releases (mainly into logical wholes), removed expression blend satellite assembliesCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.1: Added screenshot support that takes screenshot of user's desktop on application crash and provides option to include screenshot with crash report. Added Windows version in crash reports. Added email field and exception type field in crash report dialog. Added exception type in crash reports. Added screenshot tab that shows crash screenshot.MFCMAPI: June 2012 Release: Build: 15.0.0.1034 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeMonoGame - Write Once, Play Everywhere: MonoGame 2.5.1: Release Notes The MonoGame team are pleased to announce that MonoGame v2.5.1 has been released. This release contains important bug fixes and minor updates. Recent additions include project templates for iOS and MacOS. The MonoDevelop.MonoGame AddIn also works on Linux. We have removed the dependency on the thirdparty GamePad library to allow MonoGame to be included in the debian/ubuntu repositories. 
There have been a major bug fix to ensure textures are disposed of correctly as well as some ...????: ????2.0.2: 1、???????????。 2、DJ???????10?,?????????10?。 3、??.NET 4.5(Windows 8)????????????。 4、???????????。 5、??????????????。 6、???Windows 8????。 7、?????2.0.1???????????????。 8、??DJ?????????。Azure Storage Explorer: Azure Storage Explorer 5 Preview 1 (6.17.2012): Azure Storage Explorer verison 5 is in development, and Preview 1 provides an early look at the new user interface and some of the new features. Here's what's new in v5 Preview 1: New UI, similar to the new Windows Azure HTML5 portal Support for configuring and viewing storage account logging Support for configuring and viewing storage account monitoring Uses the Windows Azure 1.7 SDK libraries Bug fixesCodename 'Chrometro': Developer Preview: Welcome to the Codename 'Chrometro' Developer Preview! This is the very first public preview of the app. Please note that this is a highly primitive build and the app is not even half of what it is meant to be. The Developer Preview sports the following: 1) An easy to use application setup. 2) The Assistant which simplifies your task of customization. 3) The partially complete Metro UI. 4) A variety of settings 5) A partially complete web browsing experience To get started, download the Ins...Cosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites Visual Studio 2010 - Any version including Express. Express users must also install Visual Studio 2010 Integrated Shell runtime VMWare - Cosmos can run on real hardware as well as other virtualization environments but our default debug setup is configured for VMWare. VMWare Player (Free). or Workstation VMWare VIX API 1.11AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes New feature added that allows user to select remind later interval.Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. 
Maybe in the future...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()New ProjectsAMPlebrot: Sample code for Microsoft AMP including a auto-zooming Mandlebrot set browser and various random number generator implementations.Aquasoft ISZR web services Proxy: Knihovna pro pripojení k Informacnímu Systému Základních Registru (ISZR).Article Authoring Add-in for Word (NLM JATS): Use Microsoft Word to create, edit, save, and upload journal articles in the NLM Journal Article Tag Suite (JATS) DTDBetter Calculator: Scientific calculator with support for user variables and full expression evaluation.Cosmos Image Converter (CIC) GUI: CIC Gui is a program developed for COSMOS (http://cosmos.codeplex.com/) that processes an image and creates code compatible with COSMOS for you.CSNN: ?????? ?????? ????????? ???????????? ???????????? ????????? ????, ??????????? ????????? ????????? ? ??????????? ???????? ?? ????????.Cuddy Chat Server Client: Small Chat Server & Client - basierend auf C# mit SocketsdemoASP: asp.net mvcEspera: Espera is a portable music player, specialized for partys. It is written in C# with WPF as frontend technology.ExamEvaluator: Utility to create excelspreadsheets that can be used to evaluate the results of an exam. (Dutch only for now)FTToolkit for WinRT: FTToolkit for WinRT is the Windows RT Version of FTToolkit. The framework is designed for using in Metro Style Apps on Windows 8HydroServer Lite: HydroServer Lite is a lightweight version of the CUAHSI HydroServer written in PHP. It can be run on any webhosting service that supports PHP and MySQL. ImmunityBusterWP7: We will let you know about this soon...JavaVM for small microcontrollers: uJavaVM is a Java virtual machine for small microcontrollers written in portable C. The project is organized to support multiple microcontroller platforms. Jaw-Breaker: Hello,JIMS: Jangids Inventory Management SystemLee's Simple HTA Template: A SIMPLE HTA that can be used in various scripts and OSD. Very Minimal, intended to be used a starting template.Log4netExtensions: Log4net extensions for none static classMedical information Management System: The Medical Information Management System (MIMS) is a comprehensive solution for the various stakeholders in Medical Industry in the Nigeria.Minimap XNA game component for TemporalWars Indie Game Engine: This Minimap XNA Component is designed to show unit movement, structures placed in the game world, and take direct orders (Windows Platform). MusicCreator: This is a program that make sounds. Just open a sound and add into the program. then press the record button and the sound will be recorded.Osnova CMS: Osnova CMS, content management frameworkPicBin: PicBin is like PasteBin only that it is for pics. The Project uses .NET 4.5, ASP.NET MVC 4 and HTML5.PowerShell scripts to enable and disable tracing for Microsoft Dynamics CRM 2011: These are PowerShell scripts (files). There are 2 scripts in the zip file "CRM2011EnableDisableTrace.zip".PrepareQuery for MVC: ??????????? ??? ASP.NET MVC. Central.Linq.Mvc ?????? ?????????? ??? Central.Linq, ??????? ????????? ??????????? ???????????? ??????? ?? ?????? ?? ??????.Proggy: This will be a code-driven CMS for .NET developers. 
If I ever get the time to finish it.QuickSL: SL????, ??:??????、??????。 ?UI???????Temporal-Wars XNA Indie Game Engine: Temporal Wars 3D Engine includes a full suite of WYSIWG tools designed for rapid creation of your game world. Created for myself (Ben), now available for FREE.VIPER: VIPER RuntimeWeibo API library: Sina Weibo API library is NOT based on OAuth, but it is much more powerful in operating weibo activities via program. Major usage: 1. Rebot 2. Archive content

    Read the article

  • CodePlex Daily Summary for Thursday, June 27, 2013

    CodePlex Daily Summary for Thursday, June 27, 2013Popular ReleasesV8.NET: V8.NET Beta Release v1.2.23.5: 1. Fixed some minor bugs. 2. Added the ability to reuse existing V8NativeObject types. Simply set the "Handle" property of the objects to a new native object handle (just write to it directly). Example: "myV8NativeObject.Handle = {V8Engine}.GlobalObject.Handle;" 3. Supports .NET 4.0! (forgot to compile against .NET 4.0 instead of 4.5, which is now fixed)Stored Procedure Pager: LYB.NET.SPPager 1.10: check bugs: 1 the total page count of default stored procedure ".LYBPager" always takes error as this: page size: 10 total item count: 100 then total page count should be 10, but last version is 11. 2 update some comments with English forbidding messy code.StyleMVVM: 3.0.3: This is a minor feature and bug fix release Features: SingletonAttribute - Shared(Permanently=true) has been obsoleted in favor of SingletonAttribute StyleMVVMServiceLocator - releasing new nuget package that implements the ServiceLocator pattern Bug Fixes: Fixed problem with ModuleManager when used in not XAML application Fixed problem with nuget packages in MVC & WCF project templatesWinRT by Example Sample Applications: Chapters 1 - 10: Source code for chapters 1 - 9 with tech edits for chapters 1 - 5.Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...xFunc: xFunc 2.3: Improved the Simplify method; Rewrote the Differentiate method in the Derivative class; Changed API; Added project for .Net 2.0/3.0/3.5/4.5/Portable;Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for WAMS. Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. Some videos would cause currentTime to be infinity which could cause errors in plugins expecting only finite values. (...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. 
If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance Improvements Fix for deleted row in a data table Added ability to ignore the collection order Fix for Ignoring by AttributesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsWPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. 
- Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???New ProjectsAriana - .NET Userkit: Ariana Userkit is a library which will allow a file to persist after being executed. It can allow manipulating processes, registry keys, directories etc.Baidu Map: Baidu MapBTC Trader: BTC Trader is a tool for the automated and manual trading on BTC exchanges.Cafe Software Management: This is software for SokulChangix.Sql: Very, very simple ORM framework =)Composite WPF Extensions: Extensions to Composite Client Application Library for WPFCross Site Collection Search Configurator: A solution for SharePoint to allow you to share your site collection based search configuration between site collections in the same web application.CSS Extractor: Attempts to extract inline css styles from a HTML file and put them into a separate CSS fileCSS600Summer2013: This is a project for students enrolled in CSS 600 at University of Washington, Bothell campus.Easy Backup Application: Easy Backup Application - application for easy scheduled backup local or network folders.Easy Backup Windows Service: Easy Backup Windows Service - application for easy scheduled backup local or network folders.fangxue: huijiaGamelight: Gamelight is a 2D game library for Silverlight. It features 2D graphics manipulation to make the creation of games that don't adhere to traditional UI (such as sprite-based games) much easier. It also streamlines sound and input management. General Method: General MethodICE - Interface Communications Engine: ICE is an engine which makes it easy to create .NET plugins for running on servers for the purpose of communication between systems.LogWriterReader using Named pipe: LogWriterReader using Named pipeLu 6.2 Viewer: Tool for analysing LU62 strings. Visualisation of node lists in tree structure, it offers the possability to compare two lu62 strings in seperate windows and visualise the differences between the two segments selected. Fw 2.0 developed by Riccardo di Nuzzo (http://www.dinuzzo.it)Math.Net PowerShell: PowerShell environment for Math.Net Numerics library, defines a few cmdlets that allow to create and manipulate matrices using a straightforward syntax.mycode: my codeNFC#: NFCSharp is a project aimed at building a native C# support for NFC tag manipulation.NJquery: jquery???????????????Page Pro Image Viewer and Twain: Low level image viewer Proyecto Ferreteria "CristoRey": FirstRobótica e Automação: Sistema desenvolvido por Alessandro José Segura de Oliveira - MERodolpho Brock Projects: A MVC 'like' start application for MSVS2010 - Microsoft Visual Studio 2010 UIL - User Interface Layer : <UIB - User Interface Base> - GUI: Graphic User Interface - WEB: ASP.NET MVC BLL - Business Logic Layer : <BLB - Business Logic Base> - Only one core!!! 
DAL - Data Access Layer : <DAB - Data Access Base> - SQL Server - PostgreSQL ORM - Object-Relational Mapping <Object-Relational Base> - ADO.NET Entity FrameworkSilverlight Layouts: Silverlight Layouts is a project for controls that behave as content placeholders with pre-defined GUI layout for some of common scenarios: - frozen headers, - frozen columns, - cyrcle layouts etc.Simple Logging Façade for C++: Port for Simple Logging Façade for C++, with a Java equivalent known as slf4j. SQL Server Data Compare: * Compare the *data* in two databases record by record. * Generate *html compare results* with interactive interface. * Generate *T-SQL synchronization scripts*UserVisitLogsHelp: UserVisitLogsHelp is Open-source the log access records HttpModule extension Components .The component is developed with C#,it's a custom HttpModule component.V5Soft: webcome v5softvcseVBA - version controlled software engineering in VBA: Simplify distributed application development for Visual Basic for Applications with this Excel AddinVersionizer: Versionizer is a configuration management command-line tool developed using C# .NET for editing AssemblyInfo files of Microsoft .NET projects.WebsiteFilter: WebsiteFilter is an open source asp.net web site to access IP address filtering httpmodule components. Wordion: Wordion is a tool for foreign language learners to memorize words.????????: ????????-?????????,??@???? ????。1、????????????,2、??????????????(????,???????,“?? ??”),3、?????>??>>????5???,4、??"@?/?"?Excel??????@??,5、??“??@",???????@ 。????QQ:6763745

    Read the article

  • Easing the Journey to the Private Cloud with Oracle Consulting

    - by MichaelM-Oracle
By Sanjai Marimadaiah, Senior Director, Strategy & Business Development - Cloud Solutions, Oracle Consulting Services

Business leaders are now leading the charge on how their firms can profit from cloud solutions. Agility and innovation are becoming the primary drivers of the business case for the cloud, even more than the anticipated cost savings. Leaders need to find the right strategy and optimize the use of cloud-based applications across their enterprise-computing infrastructure.

The Problem - Current State

With prevalent IT practices, many organizations find that they run multiple IT solutions serving similar business needs. This has led to the proliferation of technology stacks, for example: Oracle 10g on Sun T4 running Solaris 9; Oracle 11g on Exadata running Linux; or Oracle 12c on commodity x86 servers. This variance has a huge impact on an organization's agility and expenses, and requires IT professionals with varied skills as well as on-going training for different systems and tools. Fortunately, there is a practical business strategy to overcome this unneeded redundancy. Thus begins the journey to the right cloud computing solution.

The Solution - Cloud Services from Oracle Consulting Services (OCS)

Oracle Consulting Services (OCS) works closely with our clients as trusted advisors to proactively respond to business needs and IT concerns. OCS understands that making the transition to cloud solutions begins with a strategic conversation, informed by its deep expertise in successfully completing private cloud service engagements with several companies. For the journey to the cloud, Oracle Consulting Services leads the client through four phases (standardization, consolidation, service delivery, and enterprise cloud) to achieve optimal returns.

Phase 1 - Standardization

Oracle Consulting Services (OCS) works with clients to evaluate their business requirements and propose a set of standard solution stacks for various IT solutions. This is an opportune time to evaluate cloud-ready solutions, such as Oracle 12c, Oracle Exadata, and the Oracle Database Appliance (ODA). The OCS consultants, together with the delivery team, then turn to upgrading and migrating existing solution stacks to the standardized offerings. OCS has the expertise and tools to complete this stage in a fraction of the time required by other IT services companies. Clients quickly realize cost savings in tools, processes, and the type and number of resources required. This standardization also improves the agility of IT organizations and their ability to respond to the needs of various business units.

Phase 2 - Consolidation

During the consolidation phase, OCS consultants programmatically consolidate hundreds of databases onto a smaller number of servers to improve utilization, reduce floor space, and optimize maintenance costs. Consolidation helps clients realize huge savings in CapEx investments and shrink OpEx costs. The use of engineered systems, such as Oracle Exadata, greatly reduces the client's risk of moving to a new solution stack. OCS recommends that clients pursue Phase 1 (Standardization) and Phase 2 (Consolidation) simultaneously to reduce the overall time, effort, and expense of the cloud journey.

Phase 3 - Service Delivery

Once a client is on the path of standardization and consolidation, OCS consultants create Service Catalogues based on SLA requirements and the criticality of the solutions. The number and types of Service Catalogues (Platinum, Gold, Silver, Bronze, etc.) vary from client to client.
OCS consultants also implement a variety of value-added cloud solutions, including monitoring, metering, and charge-back solutions. At this stage, clients have achieved a high level of maturity in their cloud journey. Their IT organizations are operating efficiently and are more agile in responding to the needs of business units.

Phase 4 - Enterprise Cloud

In the final phase of the cloud journey, the economics of the IT organization change. Business units can request services on demand; applications can be deployed and consumed on a pay-as-you-go model. OCS has the expertise and capabilities to establish the processes, programs, and solutions required for IT organizations to transform how they interact with business units.

The Promise of Cloud Solutions

Depending on the size and complexity of their business model, some clients are able to abbreviate some phases of their cloud journey. Cloud solutions are still evolving, and there is a rapid pace of innovation transforming how IT organizations operate. The lesson is clear: cloud solutions hold a lot of promise for business agility. Business leaders can now leverage an additional set of capabilities and services. They can ramp up their pace of innovation. With cloud maturity, they can compete more effectively in their respective markets. But there are certainly challenges ahead. A skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. Oracle Consulting Services has the expertise and a portfolio of services to help clients succeed on their journey to the cloud.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
CC&B automatically creates field activities when a specific customer event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate.

Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), and Start/Stop (STSP). Note the field values/codes defined for each event.

CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.

So what's the use of having user-defined customer events? And how will the system detect such events in order to create field activities?

Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events.

One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of field activity was generated for disconnection purposes. The field activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.

Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.

So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event?

System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.

There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.
The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):

"This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter."

Notice the underlined text. This algorithm implicitly checks for field activities having a completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

Now if we look back at the customer's issue, we can relate that the Post Cancel Algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.

To conclude: when you introduce new customer events, be aware that you should not extend or simulate the base customer events (the ones included in the base product), as these are used elsewhere to provide and validate additional business functions.

    Read the article

  • Enabling DNS for IPv6 infrastructure

After successful automatic distribution of IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually, we would use host names to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolving. This is the third article in a series on IPv6 configuration:

- Configure IPv6 on your Linux system
- DHCPv6: Provide IPv6 information in your local network
- Enabling DNS for IPv6 infrastructure
- Accessing your web server via IPv6

Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system.

What's your name and your IPv6 address?

$ sudo service bind9 status
 * bind9 is running

If the service is not recognised, you have to install it first on your system. This is done very easily and quickly, like so:

$ sudo apt-get install bind9

Once again, there is no specialised package for IPv6. Just the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file.

$ sudo nano /etc/bind/named.conf.options

acl iosnet {
        127.0.0.1;
        192.168.1.0/24;
        ::1/128;
        2001:db8:bad:a55::/64;
};

listen-on { iosnet; };
listen-on-v6 { any; };
allow-query { iosnet; };
allow-transfer { iosnet; };

The most important directive is listen-on-v6. This will enable named to bind to the IPv6 addresses specified on your system. The easiest option is to specify any as the value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man-pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate whether the service is running and which IP and IPv6 addresses it is bound to, like so:

$ sudo service bind9 restart
$ sudo netstat -lnptu | grep "named\W*$"
tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
udp6       0      0 :::53                 :::*                                1734/named

Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server.

$ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
Using domain server:
Name: 2001:db8:bad:a55::2
Address: 2001:db8:bad:a55::2#53
Aliases:

www.6bone.net is an alias for 6bone.net.
6bone.net has IPv6 address 2001:5c0:1000:10::2

Alright, our newly configured BIND named is fully operational. Eventually, you might be more familiar with the dig command. Here is the same kind of IPv6 host name resolution, but it will provide more details about that particular host as well as the domain in general.

$ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

More details on the Berkeley Internet Name Domain (BIND) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.
Setting up your own DNS zone

Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolving. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. In order to achieve this, we have to define the zone first in the configuration file named.conf.local.

$ sudo nano /etc/bind/named.conf.local

//
// Do any local configuration here
//
zone "ios.mu" {
        type master;
        file "/etc/bind/zones/db.ios.mu";
};

Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses.

$ sudo nano /etc/bind/zones/db.ios.mu

$ORIGIN .
$TTL 259200     ; 3 days
ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                2014031101 ; serial
                                28800      ; refresh (8 hours)
                                7200       ; retry (2 hours)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      server.ios.mu.
$ORIGIN ios.mu.
server                  A       192.168.1.2
server                  AAAA    2001:db8:bad:a55::2
client1                 A       192.168.1.3
client1                 AAAA    2001:db8:bad:a55::3
client2                 A       192.168.1.4
client2                 AAAA    2001:db8:bad:a55::4

With a couple of machines in place, it's time to reload that new configuration. Note: Each time you change your zone databases you have to modify the serial information, too. Named loads the plain text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, then named will not use the new records from the text file but the indexed ones, unless you flush the index and force a reload of the zone. This can be done easily by either restarting the named:

$ sudo service bind9 restart

or by reloading the configuration file using the name server control utility, rndc:

$ sudo rndc reconfig

Check your log files for any error messages and whether the new zone database has been accepted. Next, we are going to resolve a host name, trying to get its IPv6 address, like so:

$ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
Using domain server:
Name: 2001:db8:bad:a55::2
Address: 2001:db8:bad:a55::2#53
Aliases:

server.ios.mu has IPv6 address 2001:db8:bad:a55::2

Looks good. Alternatively, you could have just ping'd the system as well, using the ping6 command instead of the regular ping:

$ ping6 server
PING server(2001:db8:bad:a55::2) 56 data bytes
64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
^C
--- ios1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
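If you later consume these records from application code, resolving the AAAA entries is straightforward. Here is a minimal C# sketch; the host name server.ios.mu comes from the zone file above, and it assumes the machine's resolver points at the new name server:

using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

class ResolveV6
{
    static void Main()
    {
        // Returns both A and AAAA records via the system resolver.
        IPAddress[] addresses = Dns.GetHostAddresses("server.ios.mu");

        // Keep only the IPv6 addresses.
        var v6 = addresses.Where(a => a.AddressFamily == AddressFamily.InterNetworkV6);

        foreach (IPAddress address in v6)
            Console.WriteLine(address);   // e.g. 2001:db8:bad:a55::2
    }
}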

    Read the article

  • Loosely coupled .NET Cache Provider using Dependency Injection

    - by Rhames
I have recently been reading the excellent book "Dependency Injection in .NET", written by Mark Seemann. I do not generally buy software development related books, as I never seem to have the time to read them, but I have found the time to read Mark's book, and it was time well spent I think. Reading the ideas around Dependency Injection made me realise that the Cache Provider code I wrote about earlier (see http://geekswithblogs.net/Rhames/archive/2011/01/10/using-the-asp.net-cache-to-cache-data-in-a-model.aspx) could be refactored to use Dependency Injection, which should produce cleaner code. The goals are to:

- Separate the cache provider implementation (using the ASP.NET data cache) from the consumers (loose coupling). This will also mean that the dependency on System.Web for the cache provider does not ripple down into the layers where it is being consumed (such as the domain layer).
- Provide a decorator pattern to allow a consumer of the cache provider to be implemented separately from the base consumer (i.e. if we have a base repository, we can decorate this with a caching version). Although I used the term repository, in reality the cache consumer could be just about anything.
- Use constructor injection to provide the Dependency Injection, with a suitable DI container (I use Castle Windsor).

The sample code for this post is available on github, https://github.com/RobinHames/CacheProvider.git

ICacheProvider

In the sample code, the key interface is ICacheProvider, which is in the domain layer.

using System;
using System.Collections.Generic;

namespace CacheDiSample.Domain
{
    public interface ICacheProvider<T>
    {
        T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
        IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry);
    }
}

This interface contains two methods to retrieve data from the cache, either as a single instance or as an IEnumerable. The second parameter is of type Func<T>; this is the method used to retrieve data if nothing is found in the cache. The ASP.NET implementation of the ICacheProvider interface needs to live in a project that has a reference to System.Web; typically this will be the root UI project, or it could be a separate project. The key thing is that the domain and data access layers do not need System.Web references adding to them. In my sample MVC application, the CacheProvider is implemented in the UI project, in a folder called "CacheProviders":

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Caching;
using CacheDiSample.Domain;

namespace CacheDiSample.CacheProvider
{
    public class CacheProvider<T> : ICacheProvider<T>
    {
        public T Fetch(string key, Func<T> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<T>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        public IEnumerable<T> Fetch(string key, Func<IEnumerable<T>> retrieveData, DateTime? absoluteExpiry, TimeSpan? relativeExpiry)
        {
            return FetchAndCache<IEnumerable<T>>(key, retrieveData, absoluteExpiry, relativeExpiry);
        }

        #region Helper Methods

        private U FetchAndCache<U>(string key, Func<U> retrieveData, DateTime? absoluteExpiry, TimeSpan?
            relativeExpiry)
        {
            U value;
            if (!TryGetValue<U>(key, out value))
            {
                value = retrieveData();
                if (!absoluteExpiry.HasValue)
                    absoluteExpiry = Cache.NoAbsoluteExpiration;

                if (!relativeExpiry.HasValue)
                    relativeExpiry = Cache.NoSlidingExpiration;

                HttpContext.Current.Cache.Insert(key, value, null, absoluteExpiry.Value, relativeExpiry.Value);
            }
            return value;
        }

        private bool TryGetValue<U>(string key, out U value)
        {
            object cachedValue = HttpContext.Current.Cache.Get(key);
            if (cachedValue == null)
            {
                value = default(U);
                return false;
            }
            else
            {
                try
                {
                    value = (U)cachedValue;
                    return true;
                }
                catch
                {
                    value = default(U);
                    return false;
                }
            }
        }

        #endregion

    }
}

The FetchAndCache helper method checks if the specified cache key exists; if it does not, the Func<U> retrieveData method is called, and the results are added to the cache.

Using Castle Windsor to register the cache provider

In the MVC UI project (my application root), Castle Windsor is used to register the CacheProvider implementation, using a Windsor Installer:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

using CacheDiSample.Domain;
using CacheDiSample.CacheProvider;

namespace CacheDiSample.WindsorInstallers
{
    public class CacheInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(
                Component.For(typeof(ICacheProvider<>))
                    .ImplementedBy(typeof(CacheProvider<>))
                    .LifestyleTransient());
        }
    }
}

Note that the cache provider is registered as an open generic type.

Consuming a Repository

I have an existing couple of repository interfaces defined in my domain layer:

IRepository.cs

using System;
using System.Collections.Generic;

using CacheDiSample.Domain.Model;

namespace CacheDiSample.Domain.Repositories
{
    public interface IRepository<T>
        where T : EntityBase
    {
        T GetById(int id);
        IList<T> GetAll();
    }
}

IBlogRepository.cs

using System;
using CacheDiSample.Domain.Model;

namespace CacheDiSample.Domain.Repositories
{
    public interface IBlogRepository : IRepository<Blog>
    {
        Blog GetByName(string name);
    }
}

These two repositories are implemented in the DataAccess layer, using Entity Framework to retrieve data (this is not important though). One important point is that in the BaseRepository implementation of IRepository, the methods are virtual. This will allow the decorator to override them. The BlogRepository is registered in a RepositoriesInstaller, again in the MVC UI project.
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

using CacheDiSample.Domain.CacheDecorators;
using CacheDiSample.Domain.Repositories;
using CacheDiSample.DataAccess;

namespace CacheDiSample.WindsorInstallers
{
    public class RepositoriesInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(Component.For<IBlogRepository>()
                .ImplementedBy<BlogRepository>()
                .LifestyleTransient()
                .DependsOn(new
                {
                    nameOrConnectionString = "BloggingContext"
                }));
        }
    }
}

Now I can inject a dependency on the IBlogRepository into a consumer, such as a controller in my sample code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

using CacheDiSample.Domain.Repositories;
using CacheDiSample.Domain.Model;

namespace CacheDiSample.Controllers
{
    public class HomeController : Controller
    {
        private readonly IBlogRepository blogRepository;

        public HomeController(IBlogRepository blogRepository)
        {
            if (blogRepository == null)
                throw new ArgumentNullException("blogRepository");

            this.blogRepository = blogRepository;
        }

        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";

            var blogs = blogRepository.GetAll();

            return View(new Models.HomeModel { Blogs = blogs });
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

Consuming the Cache Provider via a Decorator

I used a Decorator pattern to consume the cache provider; this means my repositories follow the open/closed principle, as they do not require any modifications to implement the caching. It also means that my controllers do not have any knowledge of the caching taking place, as the DI container will simply inject the decorator instead of the root implementation of the repository. The first step is to implement a BlogRepository decorator, with the caching logic in it. Note that this can reside in the domain layer, as it does not require any knowledge of the data access methods.

BlogRepositoryWithCaching.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using CacheDiSample.Domain.Model;
using CacheDiSample.Domain;
using CacheDiSample.Domain.Repositories;

namespace CacheDiSample.Domain.CacheDecorators
{
    public class BlogRepositoryWithCaching : IBlogRepository
    {
        // The generic cache provider, injected by DI
        private ICacheProvider<Blog> cacheProvider;
        // The decorated blog repository, injected by DI
        private IBlogRepository parentBlogRepository;

        public BlogRepositoryWithCaching(IBlogRepository parentBlogRepository, ICacheProvider<Blog> cacheProvider)
        {
            if (parentBlogRepository == null)
                throw new ArgumentNullException("parentBlogRepository");

            this.parentBlogRepository = parentBlogRepository;

            if (cacheProvider == null)
                throw new ArgumentNullException("cacheProvider");

            this.cacheProvider = cacheProvider;
        }

        public Blog GetByName(string name)
        {
            string key = string.Format("CacheDiSample.DataAccess.GetByName.{0}", name);
            // hard code 5 minute expiry!
            TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
            return cacheProvider.Fetch(key, () =>
                {
                    return parentBlogRepository.GetByName(name);
                },
                null, relativeCacheExpiry);
        }

        public Blog GetById(int id)
        {
            string key = string.Format("CacheDiSample.DataAccess.GetById.{0}", id);

            // hard code 5 minute expiry!
            TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
            return cacheProvider.Fetch(key, () =>
                {
                    return parentBlogRepository.GetById(id);
                },
                null, relativeCacheExpiry);
        }

        public IList<Blog> GetAll()
        {
            string key = string.Format("CacheDiSample.DataAccess.GetAll");

            // hard code 5 minute expiry!
            TimeSpan relativeCacheExpiry = new TimeSpan(0, 5, 0);
            return cacheProvider.Fetch(key, () =>
                {
                    return parentBlogRepository.GetAll();
                },
                null, relativeCacheExpiry)
                .ToList();
        }
    }
}

The key things in this caching repository are: I inject the ICacheProvider<Blog> implementation into the repository via the constructor, which makes the cache provider functionality available to the repository. I also inject the parent IBlogRepository implementation (which has the actual data access code) via the constructor, which allows the parent's methods to be called if nothing is found in the cache. Finally, I override each of the methods implemented in the repository, including those implemented in the generic BaseRepository. Each override follows the same pattern: it calls CacheProvider.Fetch, passing in the parentBlogRepository implementation of the method as the retrieval method to be used if nothing is present in the cache.

Configuring the Caching Repository in the DI Container

The final piece of the jigsaw is to tell Castle Windsor to use the BlogRepositoryWithCaching implementation of IBlogRepository, but to inject the actual data access implementation into this decorator. This is easily achieved by modifying the RepositoriesInstaller to use Windsor's implicit decorator wiring:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

using CacheDiSample.Domain.CacheDecorators;
using CacheDiSample.Domain.Repositories;
using CacheDiSample.DataAccess;

namespace CacheDiSample.WindsorInstallers
{
    public class RepositoriesInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            // Use Castle Windsor implicit wiring for the blog repository decorator.
            // Register the outermost decorator first.
            container.Register(Component.For<IBlogRepository>()
                .ImplementedBy<BlogRepositoryWithCaching>()
                .LifestyleTransient());

            // Next, register the IBlogRepository implementation to inject into the outer decorator.
            container.Register(Component.For<IBlogRepository>()
                .ImplementedBy<BlogRepository>()
                .LifestyleTransient()
                .DependsOn(new
                {
                    nameOrConnectionString = "BloggingContext"
                }));
        }
    }
}

This is all that is needed. Now if the consumer of the repository calls any of the repository's methods, the call is routed via the caching mechanism. You can test this by stepping through the code and seeing that the DataAccess.BlogRepository code is only called if there is no data in the cache, or the cached data has expired. The next step is to add SQL Cache Dependency support into this pattern; that will be the subject of a future post.
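As a footnote, here is a minimal usage sketch of what the final wiring gives you. This is a hypothetical harness, not part of the original sample: the blog name is made up, and since CacheProvider uses HttpContext.Current.Cache, Fetch only does real work inside a web request.

using Castle.Windsor;
using CacheDiSample.Domain.Repositories;
using CacheDiSample.WindsorInstallers;

// Hypothetical composition root, assuming the installers shown above.
var container = new WindsorContainer();
container.Install(new CacheInstaller(), new RepositoriesInstaller());

// Windsor's implicit decorator wiring returns the outermost registration:
// BlogRepositoryWithCaching, with the EF-backed BlogRepository injected into it.
IBlogRepository repository = container.Resolve<IBlogRepository>();

// The first call within a request hits DataAccess.BlogRepository and caches the
// result for 5 minutes; subsequent calls are served from the ASP.NET cache.
var blog = repository.GetByName("My first blog");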

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting

    Making the Cloud Journey Matter

    There’s much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers’ cloud journey by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives.

    10-Minute Application Provisioning Times at 7-Eleven

    As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world’s largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology, reported at Oracle OpenWorld: "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases." OCS understands the complex nature of innovative solutions and has the processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale.

    Business Agility

    Today’s business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek, to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate the version mismatches among databases, application servers, and SOA suite components, delivered both by Oracle and other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog.
    Striking the Balance between Development and Operations

    As a result, development groups have the flexibility to choose among a menu of available services with descriptions of standard business functions, service level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group is relying on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security and continue best practices for applications/systems monitoring and management. Moreover, faced with the challenges of delivering on service level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud-computing environment - the ability to rapidly add virtual machines and storage in response to computing demands - makes a difference for hardware utilization and efficiency.

    Contending with Continuous Change

    What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple departments and projects over time - developing the management, governance, and monitoring capabilities within IT - is an often unmentioned but all-too-important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has the expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys as well as other converging mega trends.

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. With almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VM's, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location.
    Storage is inexpensive, and that second copy can be used not only for HA but also for DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
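    To make the manual/automatic task-switching idea above concrete, here is a minimal C# sketch of client-side failover between two geos. The endpoint URLs and path are hypothetical, and state handling is deliberately omitted - as the post explains, that part belongs to the application layer.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class FailoverClient
    {
        // Hypothetical endpoints in two geographic locations.
        static readonly string[] Endpoints =
        {
            "https://myapp-us.example.com",
            "https://myapp-eu.example.com"
        };

        static async Task<string> GetWithFailoverAsync(string path)
        {
            using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
            {
                foreach (var endpoint in Endpoints)
                {
                    try
                    {
                        // The first healthy geo serves the request.
                        return await client.GetStringAsync(endpoint + path);
                    }
                    catch (Exception)
                    {
                        // Endpoint unreachable or timed out - try the next geo.
                    }
                }
            }
            throw new InvalidOperationException("No endpoint available in any geo.");
        }
    }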

    Read the article

  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC) I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either a user in the web GUI or an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input? To help with this I created a component that uses a Java filter to dump out the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook, which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output:

    system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start --
    system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData ***
    system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults)
    system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8
    system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL
    system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0
    system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601
    system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC
    system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder
    system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document
    system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public
    system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2
    system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin
    system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  ***
    system/6    10.09 09:57:40.698    IdcServer-1    binder end --------------------------------------------

    See the readme included in the component for more details. You can download the component from here.

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers, and most of them had serious interest in all things cloud. Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn’t really call them revelations, since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While the interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about. Even the handful of those who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn’t something they see an immediate need for. This is in stark contrast to the picture being painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution, and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter on the blogosphere, and it will get even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs than constrained by operational and procedural inefficiencies.
    It is the new way of delivering and consuming IT services - where IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower end users via self-service access/control and to provide business stakeholders a transparent view of how the resources are being used, what the cost of delivering a given service is, how well customers are being served, and so on. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn’t be. Just remember how business users have been at the forefront of public cloud adoption within enterprises - and private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand in hand with process re-engineering and organizational changes to unlock the true value of enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on the Oracle enterprise private cloud solution will join me on this blog, "Zero to Cloud", and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback and suggestions, and to having an engaging conversation with you on this blog.

    Read the article

  • How to use Ninject with XNA?

    - by Rosarch
    I'm having difficulty integrating Ninject with XNA. static class Program { /** * The main entry point for the application. */ static void Main(string[] args) { IKernel kernel = new StandardKernel(NinjectModuleManager.GetModules()); CachedContentLoader content = kernel.Get<CachedContentLoader>(); // stack overflow here MasterEngine game = kernel.Get<MasterEngine>(); game.Run(); } } // constructor for the game public MasterEngine(IKernel kernel) : base(kernel) { this.inputReader = kernel.Get<IInputReader>(); graphicsDeviceManager = kernel.Get<GraphicsDeviceManager>(); Components.Add(kernel.Get<GamerServicesComponent>()); // Tell the loader to look for all files relative to the "Content" directory. Assets = kernel.Get<CachedContentLoader>(); //Sets dimensions of the game window graphicsDeviceManager.PreferredBackBufferWidth = 800; graphicsDeviceManager.PreferredBackBufferHeight = 600; graphicsDeviceManager.ApplyChanges(); IsMouseVisible = false; } Ninject.cs: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Ninject.Modules; using HWAlphaRelease.Controller; using Microsoft.Xna.Framework; using Nuclex.DependencyInjection.Demo.Scaffolding; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; namespace HWAlphaRelease { public static class NinjectModuleManager { public static NinjectModule[] GetModules() { return new NinjectModule[1] { new GameModule() }; } /// <summary>Dependency injection rules for the main game instance</summary> public class GameModule : NinjectModule { #region class ServiceProviderAdapter /// <summary>Delegates to the game's built-in service provider</summary> /// <remarks> /// <para> /// When a class' constructor requires an IServiceProvider, the dependency /// injector cannot just construct a new one and wouldn't know that it has /// to create an instance of the Game class (or take it from the existing /// Game instance). /// </para> /// <para> /// The solution, then, is this small adapter that takes a Game instance /// and acts as if it was a freely constructable IServiceProvider implementation /// while in reality, it delegates all lookups to the Game's service container. /// </para> /// </remarks> private class ServiceProviderAdapter : IServiceProvider { /// <summary>Initializes a new service provider adapter for the game</summary> /// <param name="game">Game the service provider will be taken from</param> public ServiceProviderAdapter(Game game) { this.gameServices = game.Services; } /// <summary>Retrieves a service from the game service container</summary> /// <param name="serviceType">Type of the service that will be retrieved</param> /// <returns>The service that has been requested</returns> public object GetService(Type serviceType) { return this.gameServices; } /// <summary>Game services container of the Game instance</summary> private GameServiceContainer gameServices; } #endregion // class ServiceProviderAdapter #region class ContentManagerAdapter /// <summary>Delegates to the game's built-in ContentManager</summary> /// <remarks> /// This provides shared access to the game's ContentManager. A dependency /// injected class only needs to require the ISharedContentService in its /// constructor and the dependency injector will automatically resolve it /// to this adapter, which delegates to the Game's built-in content manager. 
/// </remarks> private class ContentManagerAdapter : ISharedContentService { /// <summary>Initializes a new shared content manager adapter</summary> /// <param name="game">Game the content manager will be taken from</param> public ContentManagerAdapter(Game game) { this.contentManager = game.Content; } /// <summary>Loads or accesses shared game content</summary> /// <typeparam name="AssetType">Type of the asset to be loaded or accessed</typeparam> /// <param name="assetName">Path and name of the requested asset</param> /// <returns>The requested asset from the the shared game content store</returns> public AssetType Load<AssetType>(string assetName) { return this.contentManager.Load<AssetType>(assetName); } /// <summary>The content manager this instance delegates to</summary> private ContentManager contentManager; } #endregion // class ContentManagerAdapter /// <summary>Initializes the dependency configuration</summary> public override void Load() { // Allows access to the game class for any components with a dependency // on the 'Game' or 'DependencyInjectionGame' classes. Bind<MasterEngine>().ToSelf().InSingletonScope(); Bind<NinjectGame>().To<MasterEngine>().InSingletonScope(); Bind<Game>().To<MasterEngine>().InSingletonScope(); // Let the dependency injector construct a graphics device manager for // all components depending on the IGraphicsDeviceService and // IGraphicsDeviceManager interfaces Bind<GraphicsDeviceManager>().ToSelf().InSingletonScope(); Bind<IGraphicsDeviceService>().To<GraphicsDeviceManager>().InSingletonScope(); Bind<IGraphicsDeviceManager>().To<GraphicsDeviceManager>().InSingletonScope(); // Some clever adapters that hand out the Game's IServiceProvider and allow // access to its built-in ContentManager Bind<IServiceProvider>().To<ServiceProviderAdapter>().InSingletonScope(); Bind<ISharedContentService>().To<ContentManagerAdapter>().InSingletonScope(); Bind<IInputReader>().To<UserInputReader>().InSingletonScope().WithConstructorArgument("keyMapping", Constants.DEFAULT_KEY_MAPPING); Bind<CachedContentLoader>().ToSelf().InSingletonScope().WithConstructorArgument("rootDir", "Content"); } } } } NinjectGame.cs /// <summary>Base class for Games making use of Ninject</summary> public class NinjectGame : Game { /// <summary>Initializes a new Ninject game instance</summary> /// <param name="kernel">Kernel the game has been created by</param> public NinjectGame(IKernel kernel) { Type ownType = this.GetType(); if(ownType != typeof(Game)) { kernel.Bind<NinjectGame>().To<MasterEngine>().InSingletonScope(); } kernel.Bind<Game>().To<NinjectGame>().InSingletonScope(); } } } // namespace Nuclex.DependencyInjection.Demo.Scaffolding When I try to get the CachedContentLoader, I get a stack overflow exception. I'm basing this off of this tutorial, but I really have no idea what I'm doing. Help?
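    One plausible cause, offered here as an assumption rather than something stated in the original post: MasterEngine's constructor calls kernel.Get<...>() while the Game/MasterEngine singleton is still being constructed, and both adapter bindings (ServiceProviderAdapter, ContentManagerAdapter) depend on Game, so resolving CachedContentLoader can re-enter the MasterEngine constructor and recurse until the stack overflows. A common way to break such a cycle with Ninject's Rebind/ToConstant API is to publish the instance itself as soon as the base constructor runs - a minimal sketch, not a drop-in fix:

    public class NinjectGame : Game
    {
        public NinjectGame(IKernel kernel)
        {
            // Publish this exact instance so that any resolution triggered
            // later (even from a subclass constructor) sees the existing
            // Game rather than trying to construct MasterEngine again.
            kernel.Rebind<Game>().ToConstant(this);
            kernel.Rebind<NinjectGame>().ToConstant(this);
        }
    }

    With the instance published this way, the type-to-type bindings for Game and NinjectGame in GameModule (and the re-binding inside the original NinjectGame constructor) become unnecessary.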

    Read the article

  • iOS bluetooth low energy not detecting peripherals

    - by user3712524
    My app won't detect peripherals. Im using light blue to simulate a bluetooth low energy peripheral and my app just won't sense it. I even installed light blue on two devices to make sure it was generating a peripheral signal properly and it is. Any suggestions? My labels are updating and the NSLog is showing that the scanning is starting. Thanks in advance. #import <UIKit/UIKit.h> #import <CoreBluetooth/CoreBluetooth.h> @interface ViewController : UIViewController @property (weak, nonatomic) IBOutlet UITextField *navDestination; @end #import "ViewController.h" @implementation ViewController - (IBAction)connect:(id)sender { } - (IBAction)navDestination:(id)sender { NSString *destinationText = self.navDestination.text; } - (void)viewDidLoad { } - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } @end #import <UIKit/UIKit.h> #import "ViewController.h" @interface BlueToothViewController : UIViewController @property (strong, nonatomic) CBCentralManager *centralManager; @property (strong, nonatomic) CBPeripheral *discoveredPerepheral; @property (strong, nonatomic) NSMutableData *data; @property (strong, nonatomic) IBOutlet UITextView *textview; @property (weak, nonatomic) IBOutlet UILabel *charLabel; @property (weak, nonatomic) IBOutlet UILabel *isConnected; @property (weak, nonatomic) IBOutlet UILabel *myPeripherals; @property (weak, nonatomic) IBOutlet UILabel *aLabel; - (void)centralManagerDidUpdateState:(CBCentralManager *)central; - (void)centralManger:(CBCentralManager *)central didDiscoverPeripheral: (CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI; -(void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error; -(void)cleanup; -(void)centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)peripheral; -(void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error; -(void)peripheral:(CBPeripheral *)peripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error; -(void)centralManager:(CBCentralManager *)central didDisconnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error; -(void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error; -(void)peripheral:(CBPeripheral *)peripheral didUpdateNotificationStateForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error; @end @interface BlueToothViewController () @end @implementation BlueToothViewController - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { // Custom initialization } return self; } - (void)viewDidLoad { _centralManager = [[CBCentralManager alloc]initWithDelegate:self queue:nil options:nil]; _data = [[NSMutableData alloc]init]; } - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; [_centralManager stopScan]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. 
} - (void)centralManagerDidUpdateState:(CBCentralManager *)central { //you should test all scenarios if (central.state == CBCentralManagerStateUnknown) { self.aLabel.text = @"I dont do anything because my state is unknown."; return; } if (central.state == CBCentralManagerStatePoweredOn) { //scan for devices [_centralManager scanForPeripheralsWithServices:nil options:@{ CBCentralManagerScanOptionAllowDuplicatesKey : @YES }]; NSLog(@"Scanning Started"); } if (central.state == CBCentralManagerStateResetting) { self.aLabel.text = @"I dont do anything because my state is resetting."; return; } if (central.state == CBCentralManagerStateUnsupported) { self.aLabel.text = @"I dont do anything because my state is unsupported."; return; } if (central.state == CBCentralManagerStateUnauthorized) { self.aLabel.text = @"I dont do anything because my state is unauthorized."; return; } if (central.state == CBCentralManagerStatePoweredOff) { self.aLabel.text = @"I dont do anything because my state is powered off."; return; } } - (void)centralManger:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI { NSLog(@"Discovered %@ at %@", peripheral.name, RSSI); self.myPeripherals.text = [NSString stringWithFormat:@"%@%@",peripheral.name, RSSI]; if (_discoveredPerepheral != peripheral) { //save a copy of the peripheral _discoveredPerepheral = peripheral; //and connect NSLog(@"Connecting to peripheral %@", peripheral); [_centralManager connectPeripheral:peripheral options:nil]; self.aLabel.text = [NSString stringWithFormat:@"%@", peripheral]; } } -(void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error { NSLog(@"Failed to connect"); [self cleanup]; } -(void)cleanup { //see if we are subscribed to a characteristic on the peripheral if (_discoveredPerepheral.services != nil) { for (CBService *service in _discoveredPerepheral.services) { if (service.characteristics != nil) { for (CBCharacteristic *characteristic in service.characteristics) { if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:@"508EFF8E-F541-57EF-BD82-B0B4EC504CA9"]]) { if (characteristic.isNotifying) { [_discoveredPerepheral setNotifyValue:NO forCharacteristic:characteristic]; return; } } } } } } [_centralManager cancelPeripheralConnection:_discoveredPerepheral]; } -(void)centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)peripheral { NSLog(@"Connected"); [_centralManager stopScan]; NSLog(@"Scanning stopped"); self.isConnected.text = [NSString stringWithFormat:@"Connected"]; [_data setLength:0]; peripheral.delegate = self; [peripheral discoverServices:nil]; } -(void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error { if (error) { [self cleanup]; return; } for (CBService *service in peripheral.services) { [peripheral discoverCharacteristics:nil forService:service]; } //discover other characteristics } -(void)peripheral:(CBPeripheral *)peripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error { if (error) { [self cleanup]; return; } for (CBCharacteristic *characteristic in service.characteristics) { [peripheral setNotifyValue:YES forCharacteristic:characteristic]; } } -(void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error { if (error) { NSLog(@"Error"); return; } NSString *stringFromData = [[NSString 
alloc]initWithData:characteristic.value encoding:NSUTF8StringEncoding]; self.charLabel.text = [NSString stringWithFormat:@"%@", stringFromData]; //Have we got everything we need? if ([stringFromData isEqualToString:@"EOM"]) { [_textview setText:[[NSString alloc]initWithData:self.data encoding:NSUTF8StringEncoding]]; [peripheral setNotifyValue:NO forCharacteristic:characteristic]; [_centralManager cancelPeripheralConnection:peripheral]; } } -(void)peripheral:(CBPeripheral *)peripheral didUpdateNotificationStateForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error { if ([characteristic.UUID isEqual:nil]) { return; } if (characteristic.isNotifying) { NSLog(@"Notification began on %@", characteristic); } else { [_centralManager cancelPeripheralConnection:peripheral]; } } -(void)centralManager:(CBCentralManager *)central didDisconnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error { _discoveredPerepheral = nil; self.isConnected.text = [NSString stringWithFormat:@"Connecting..."]; [_centralManager scanForPeripheralsWithServices:nil options:@{ CBCentralManagerScanOptionAllowDuplicatesKey : @YES}]; } @end
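    A note on the code above: the discovery callback is declared and implemented as centralManger:didDiscoverPeripheral:advertisementData:RSSI: (note the missing "a" in "Manager"). CoreBluetooth invokes delegate methods by their exact selector, so the misspelled method is never called and no peripherals are ever reported, even though scanning starts. The CBCentralManagerDelegate method must be spelled:

    - (void)centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI

    (The view controller should also declare conformance to <CBCentralManagerDelegate, CBPeripheralDelegate>, though that omission alone only produces compiler warnings.)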

    Read the article

  • How do I configure Tomcat services in Ubuntu?

    - by Karan
    I have created a Tomcat script inside the /etc/init.d directory, which is:

    #!/bin/bash
    # description: Tomcat Start Stop Restart
    # processname: tomcat
    # chkconfig: 234 20 80
    JAVA_HOME=/usr/java/jdk1.6.0_30
    export JAVA_HOME
    PATH=$JAVA_HOME/bin:$PATH
    export PATH
    CATALINA_HOME=/usr/tomcat/apache-tomcat-6.0.32
    case $1 in
    start)
        sh $CATALINA_HOME/bin/startup.sh
        ;;
    stop)
        sh $CATALINA_HOME/bin/shutdown.sh
        ;;
    restart)
        sh $CATALINA_HOME/bin/shutdown.sh
        sh $CATALINA_HOME/bin/startup.sh
        ;;
    esac
    exit 0

    After this I am trying to add it into chkconfig:

    [root@blanche init.d]# chkconfig --add tomcat
    [root@blanche init.d]# chkconfig --level 234 tomcat on

    But it is giving me the following error:

    [root@blanche init.d]:/etc/init.d$ chkconfig --add tomcat
    insserv: warning: script 'K20acpi-support' missing LSB tags and overrides
    insserv: warning: script 'tomcat' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'failsafe-x' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'acpid' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'dmesg' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'udevmonitor' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'ufw' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'module-init-tools' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'plymouth-splash' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'gdm' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'rsyslog' missing LSB tags and overrides
    insserv: warning: current start runlevel(s) (0 6) of script `wpa-ifupdown' overwrites defaults (empty).
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'hwclock' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'console-setup' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'udev' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'plymouth-log' missing LSB tags and overrides
    insserv: warning: current start runlevel(s) (0) of script `halt' overwrites defaults (empty).
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'mysql' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'atd' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'network-manager' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'alsa-mixer-save' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'udev-finish' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'screen-cleanup' missing LSB tags and overrides
    insserv: warning: script 'acpi-support' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'avahi-daemon' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'dbus' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'procps' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'irqbalance' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'plymouth-stop' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'anacron' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'plymouth' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'udevtrigger' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'hostname' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'hwclock-save' missing LSB tags and overrides
    insserv: warning: current start runlevel(s) (0 6) of script `networking' overwrites defaults (empty).
    insserv: warning: current start runlevel(s) (0 6) of script `umountfs' overwrites defaults (empty).
    insserv: warning: current start runlevel(s) (0 6) of script `umountnfs.sh' overwrites defaults (empty).
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'network-interface' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'network-interface-security' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'cron' missing LSB tags and overrides
    The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
    insserv: warning: script 'apport' missing LSB tags and overrides
    insserv: warning: current start runlevel(s) (6) of script `reboot' overwrites defaults (empty).
    insserv: warning: current start runlevel(s) (0 6) of script `umountroot' overwrites defaults (empty).
    insserv: warning: current start runlevel(s) (0 6) of script `sendsigs' overwrites defaults (empty).
    insserv: There is a loop between service rsyslog and pulseaudio if stopped
    insserv: loop involving service pulseaudio at depth 3
    insserv: loop involving service rsyslog at depth 2
    insserv: loop involving service udev at depth 1
    insserv: There is a loop between service rsyslog and pulseaudio if stopped
    insserv: loop involving service bluetooth at depth 2
    insserv: exiting now without changing boot order!
    /sbin/insserv failed, exit code 1
    tomcat 0:off 1:off 2:off 3:off 4:off 5:off 6:off

    Please suggest what to do for configuring a Tomcat server as a service.
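    A note on the warning for the tomcat script itself ("missing LSB tags and overrides"): one plausible fix, sketched here with runlevels and dependencies assumed from the script's own chkconfig line rather than taken from the question, is to add a standard LSB header at the top of /etc/init.d/tomcat:

    #!/bin/bash
    ### BEGIN INIT INFO
    # Provides:          tomcat
    # Required-Start:    $remote_fs $syslog $network
    # Required-Stop:     $remote_fs $syslog $network
    # Default-Start:     2 3 4
    # Default-Stop:      0 1 6
    # Short-Description: Tomcat Start Stop Restart
    ### END INIT INFO

    On Ubuntu, the native way to register the script afterwards is update-rc.d tomcat defaults rather than chkconfig.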

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by ErikE
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database need to gain access to network resources such as file shares (the most common) or a database on a different server. Permission is denied. This is because the user the service is running under doesn't have network permissions in the first place, or if it does, it doesn't have rights to access the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are some workarounds I'm aware of: Run IIS as a custom-created domain user who is granted high permissions If permissions are granted one file share at a time, then every time I want to read from a new share, I would have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it is going to get really complicated. If permissions are just opened up wide for the user to access any file shares in our domain, then this seems like an unnecessary security surface area to present. This also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface area problem. Still use the IUSR account but give it network permissions and set up the same user name on the remote resource (not a domain user, a local user) This also has its problems. For example, there's a file share I am using that I have full rights to for sharing, but I can't log in to the machine. So I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place. Connect using Kerberos, then delegate This sounds good in principle but has all sorts of problems. First of all, if you're using virtual web sites where the domain name you connect to the site with is not the base machine name (as we do frequently), then you have to set up a Service Principal Name on the webserver using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. Also, you have to ask your network/domain admin to change security policy for both the web server and the domain account so they are "trusted for delegation." If you don't get everything perfectly right, suddenly your intended Kerberos authentication is NTLM instead, and you can only impersonate rather than delegate, and thus no reaching out over the network as the user. Also, this method can be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have. Create a service or COM+ application that fetches the resource for the web site Services and COM+ packages are run with their own set of credentials. Running as a high-privilege user is okay since they can do their own security and deny requests that are not legitimate, putting control in the hands of the application developer instead of the network admin. Problems: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely registry permissions. 
I trolled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user. Note: This is actually really simple. In a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), and build a dll, which you register as a COM+ object. In your web server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. Map drive letters to shares This could theoretically work, but in my mind it's not really a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by others than a network admin, this also is going to mean that there are either way too many shared drives (small granularity) or too much permission is granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets the drives. Mapping a drive is for the current user, I don't know the IUSR account password to log in as it and create the mappings. Move the resources local to the web server/database There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes, it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution, either. And it won't work when the resource is a service rather than a file. Move the service to the final web server/database I suppose I could run a web server on my SQL Server database, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No. Virtual directories in IIS I know that virtual directories can help make remote resources look as though they are local, and this supports using custom credentials for each virtual directory. I haven't been able to come up with, yet, how this would solve the problem for system calls. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but this isn't going to help me make a connection to an Access database, a SQL server database, or any other resource that uses a connection library rather than being able to just read all the bytes and work with them. I wish there was some kind of "service tunnel" that I could create. Think about how a VPN makes remote resources look like they are local. With a richer aliasing mechanism, perhaps code-based, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost talking about a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, I need a web site to connect to an Access database on a file share. Here we go again...
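    For reference, the COM+ approach described above can be sketched in a few lines of C#. This is a minimal, hypothetical example - the class name, provider string and share path are illustrative, not from the original - of a ServicedComponent that performs the network access under the COM+ application's identity rather than the caller's:

    using System.Data;
    using System.Data.OleDb;
    using System.EnterpriseServices;

    // Hypothetical broker, registered with regsvcs.exe into a COM+ server
    // application configured to run under an identity that has rights to
    // the file share.
    public class DataBroker : ServicedComponent
    {
        public DataSet GetData(string query)
        {
            // The network access happens under the COM+ process identity,
            // not IUSR or the impersonated web user.
            using (var connection = new OleDbConnection(
                @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\fileserver\share\app.mdb"))
            using (var adapter = new OleDbDataAdapter(query, connection))
            {
                var results = new DataSet();
                adapter.Fill(results);
                return results;   // DataSet is serializable, so it marshals cleanly
            }
        }
    }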

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).

    This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:

    What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).

    Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?
    Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim the upper limit is the maximum of real-time bandwidth + link-share bandwidth. What is the truth?

    Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not the average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than the upper limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there is at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
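    For readers trying to map these questions onto a real configuration, here is a minimal Linux tc sketch of the sample setup described above. The interface name and class ids are assumptions; the rates mirror the question's numbers, and it simply shows where the rt, ls and ul curves under discussion would actually be specified:

    # Root qdisc; unclassified traffic is assumed to fall into class 1:12.
    tc qdisc add dev eth0 root handle 1: hfsc default 12

    # Root class: 512 kbit/s total, upper-limited to the line speed.
    tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 512kbit ul m2 512kbit

    # Inner classes A and B: 256 kbit/s link-share each.
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
    tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit

    # Leaf A1: two-piece real-time curve [256kbit/s 20ms 128kbit/s] plus link-share.
    tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit

    # Leaf A2: link-share only (B1 and B2 would be configured analogously under 1:20).
    tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit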

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always mentions only ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there were never two independent service curves for them as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add a queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). All of this makes HFSC scheduling even more complex than the algorithm described in the original paper, and the tons of tutorials on the Internet regularly contradict each other, one claiming the opposite of the next. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works.

    Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below (in short: a root class with 512 kbit/s to distribute, split into classes A and B with 256 kbit/s each, which are in turn split into the leaf classes A1, A2, B1, and B2 with 128 kbit/s each).
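    To make the setup concrete, here is roughly how that hierarchy could be written with the Linux tc frontend. Please treat it as a sketch only: eth0, the class ids, the choice of default class, and the [256 kbit/s 20 ms 128 kbit/s] real-time curves on A1 and B1 are placeholders I made up for illustration, not something prescribed by the paper or by the man pages.

        # root qdisc; unclassified traffic goes to leaf 1:12 (arbitrary choice)
        tc qdisc add dev eth0 root handle 1: hfsc default 12
        # root class: 512 kbit/s to distribute, capped at the line speed
        tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 512kbit ul m2 512kbit
        # inner classes A and B: link-share only, 256 kbit/s each
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit
        # leaves A1 and B1: 128 kbit/s link-share plus a two-piece real-time curve
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        # leaves A2 and B2: 128 kbit/s link-share only
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit

    Whether the rt curves on A1 and B1 actually buy anything over pure link-share in this setup is exactly what questions 1, 2, and 4 below are about.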
    Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, and B2 are all 128 kbit/s link-share (no real-time curve for any of them), then each of them will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that is what HFSC is all about, after all. By giving both classes a curve of [256 kbit/s 20 ms 128 kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both have only 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean that once they have been served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does it mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (i.e. the current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other; some even claim that upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priority of classes, why would I need "curves" at all? Why is real-time not a flat value and link-share another flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the shape of the curves (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic? (I give a small worked example of what I mean at the end of this post.)

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are maybe both wrong?

    8. One tutorial said to forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); for all of this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is the prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, by which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how would I find out the right slope?

    I still haven't lost hope that there is at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
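    PS: To make question 5 a little more concrete, here is a small worked example of what I mean by the slope of a curve (the numbers are mine and purely illustrative, not taken from any implementation). A two-piece concave curve [m1 = 256 kbit/s, d = 20 ms, m2 = 128 kbit/s] guarantees a class that becomes backlogged at time t = 0 the cumulative service

        S(t) = m1 * t                   for t <= d
        S(t) = m1 * d + m2 * (t - d)    for t >  d

    i.e. 256 kbit/s during the first 20 ms and 128 kbit/s afterwards. A 512-byte packet (4096 bits) at the head of such a class must therefore be served within 4096 bit / 256 kbit/s = 16 ms, while a flat 128 kbit/s curve would only guarantee 4096 bit / 128 kbit/s = 32 ms, even though the long-term rate is 128 kbit/s in both cases. As far as I understand the paper, this is exactly the sense in which a steeper first segment buys lower delay without buying more bandwidth; what I cannot tell is to which of the three curves of the Linux/BSD implementations this reasoning is supposed to apply.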

    Read the article

  • Windows 7 explorer always crashes, opens small "Personalized Settings" window

    - by Ian Sellar
    My Windows 7 desktop PC, built by me, started acting very weird in the last couple of days. I use it quite often, about half of the time through TeamViewer. Explorer would crash and restart randomly, almost always while I was connected through TeamViewer. This made me suspect that TeamViewer was the problem, but I have reproduced the crash both with and without TeamViewer several times. The only way I can seem to get the problem not to occur is by booting into Safe Mode. I have used CCleaner and Malwarebytes to make sure it wasn't a registry error or malware causing the problem, and I have tried the fix in the seemingly related issue here, as well as every other fix I have found online, including removing security updates KB980408 and KB2926765, running "sfc /scannow", and a bunch of other things I can't remember.

    More recently, when I try to start Explorer it pops up a small window that says "Personalized Settings" at the top but is completely empty and crashes instantly. The only way I can get it to disappear is to kill the explorer.exe process. I wish I could take a screenshot, but I can't seem to open Paint or even find its exe. I have tried restarting Explorer, and I have tried starting it while the Personalized Settings window was open.

    I have come up with two lists of processes. The first is the list of active processes when I boot into Safe Mode, where Explorer seems to work fine. The second is the smallest set of processes I can narrow it down to in a normal boot while still replicating the problem. There is one process that I can't seem to close: NisSrv.exe, which is described as "Microsoft Network Realtime Inspection Service". When I try to kill the NisSrv.exe process it says "The operation could not be completed. Access is denied.", and trying to stop the related service gives the same message.

    Image Name                     PID Session Name   Session#   Mem Usage
    ========================= ======== ============ ========== ===========
    System Idle Process              0 Services              0        24 K
    System                           4 Services              0     2,660 K
    smss.exe                       304 Services              0     1,196 K
    csrss.exe                      408 Services              0     4,156 K
    wininit.exe                    444 Services              0     4,608 K
    csrss.exe                      452 Console               1     8,700 K
    services.exe                   492 Services              0     7,700 K
    winlogon.exe                   524 Console               1     5,756 K
    lsass.exe                      536 Services              0    10,644 K
    lsm.exe                        544 Services              0     4,316 K
    svchost.exe                    652 Services              0     8,976 K
    MsMpEng.exe                    804 Services              0    40,696 K
    explorer.exe                  1332 Console               1    85,220 K
    ctfmon.exe                    1376 Console               1     3,680 K
    dllhost.exe                   1624 Console               1     8,656 K
    chrome.exe                    1408 Console               1    98,504 K
    WmiPrvSE.exe                  2352 Services              0     6,472 K
    chrome.exe                    1744 Console               1    65,116 K
    taskmgr.exe                    372 Console               1    14,948 K
    cmd.exe                       2776 Console               1     2,960 K
    conhost.exe                   1816 Console               1     3,580 K
    tasklist.exe                  2308 Console               1     5,868 K

    And the list of processes I have narrowed it down to:
    Image Name                     PID Session Name   Session#   Mem Usage
    ========================= ======== ============ ========== ===========
    System Idle Process              0 Services              0        24 K
    System                           4 Services              0     2,808 K
    smss.exe                       316 Services              0     1,216 K
    csrss.exe                      484 Services              0     4,532 K
    wininit.exe                    596 Services              0     4,604 K
    csrss.exe                      604 Console               1    23,676 K
    services.exe                   652 Services              0    11,344 K
    lsass.exe                      668 Services              0    12,692 K
    lsm.exe                        676 Services              0     4,464 K
    MsMpEng.exe                    972 Services              0    68,436 K
    winlogon.exe                   168 Console               1     7,784 K
    svchost.exe                    496 Services              0    19,140 K
    NisSrv.exe                    3176 Services              0       808 K
    svchost.exe                   1684 Services              0    11,260 K
    taskmgr.exe                   4524 Console               1    20,696 K
    cmd.exe                       4764 Console               1     7,224 K
    conhost.exe                   4772 Console               1     6,916 K
    sublime_text.exe              2340 Console               1    45,012 K
    dllhost.exe                   4476 Console               1     8,736 K
    tasklist.exe                  3796 Console               1     5,768 K
    WmiPrvSE.exe                  1768 Services              0     6,344 K

    Here is the event data XML from Event Viewer for the error I am getting:

    <EventData>
      <Data>explorer.exe</Data>
      <Data>6.1.7601.17567</Data>
      <Data>4d672ee4</Data>
      <Data>vrfcore.dll</Data>
      <Data>6.3.9600.16384</Data>
      <Data>5215f8f5</Data>
      <Data>80000003</Data>
      <Data>0000000000003a00</Data>
      <Data>12e4</Data>
      <Data>01cfb84fa70f89dc</Data>
      <Data>C:\Windows\system32\explorer.exe</Data>
      <Data>C:\Windows\SYSTEM32\vrfcore.dll</Data>
      <Data>e5957093-2442-11e4-9f8a-94de806ed9cb</Data>
    </EventData>

    I was looking through the eventvwr log again and I found this, possibly related:

    <EventData>
      <Data>runonce.exe</Data>
      <Data>6.1.7601.17514</Data>
      <Data>4ce7a253</Data>
      <Data>MSVCR100.dll</Data>
      <Data>10.0.40219.325</Data>
      <Data>4df2bcac</Data>
      <Data>c0000005</Data>
      <Data>000000000003c145</Data>
      <Data>670</Data>
      <Data>01cfb8dabbd85942</Data>
      <Data>C:\Windows\system32\runonce.exe</Data>
      <Data>C:\Windows\system32\MSVCR100.dll</Data>
      <Data>fa6f82b9-24cd-11e4-80a8-94de806ed9cb</Data>
    </EventData>

    And the general error details:

    Faulting application name: Explorer.EXE, version: 6.1.7601.17567, time stamp: 0x4d672ee4
    Faulting module name: vrfcore.dll, version: 6.3.9600.16384, time stamp: 0x5215f8f5
    Exception code: 0x80000003
    Fault offset: 0x0000000000003a00
    Faulting process id: 0xc38
    Faulting application start time: 0x01cfb84e5e852c5f
    Faulting application path: C:\Windows\Explorer.EXE
    Faulting module path: C:\Windows\SYSTEM32\vrfcore.dll
    Report Id: 9dc19e6d-2441-11e4-9f8a-94de806ed9cb

    Another, probably unrelated, error that I seem to be getting pretty often:

    Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected.

    My Explorer tab in Autoruns is seen below, along with the error when I try to uncheck something. I should add that I seem to be able to disable shell extensions with ShellExView, but I still can't get Explorer to start correctly.

    EXPLORER SHELL UPDATE - See screenshot below. I can access the Explorer right-click menu through a file manager I downloaded called NexusFile, but still no luck starting Explorer.

    Another round of errors that I am getting, regarding the Windows Search service:

    The search service has detected corrupted data files in the index {id=4700}. The service will attempt to automatically correct this problem by rebuilding the index. Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    followed by

    The Windows Search Service is being stopped because there is a problem with the indexer: The catalog is corrupt. Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    and

    The plug-in in <Search.JetPropStore> cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    and

    The gatherer object cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    and

    The Windows Search Service cannot load the property store information. Context: Windows Application, SystemIndex Catalog Details: The content index database is corrupt. (HRESULT : 0xc0041800) (0xc0041800)

    WER Log: http://pastebin.com/WXKGDT4Q

    I'll add information as I remember it or people request it.

    Read the article
