Search Results

Search found 13451 results on 539 pages for 'physical environment'.


  • CodePlex Daily Summary for Friday, November 26, 2010

    CodePlex Daily Summary for Friday, November 26, 2010

    Popular Releases

    - Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.
    - WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.02: What's new: a new Tax Filter; you can now select which Terms you want to use.
    - TextGen - Another Template Based Text Generator: TextGen v0.2: This is the first version of TextGen exposing its core functionality to COM. See the Access demo (Access 2000 file format) included in the package. For installation and usage instructions see ReadMe.txt. Have fun and provide feedback!
    - Minecraft GPS: Minecraft GPS 1.1: 1.1 release. New features: Compass! New style. Set opacity on the main window to allow overlay of Minecraft.
    - Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010.
    - Typps (formerly jiffycms) wysiwyg rich text HTML editor for ASP.NET AJAX: Typps 2.9: When uploading files (not images) through the file uploader and the multi-file uploader, the FileUploaded and MultiFileUploaded event handlers were reporting an empty event argument; this is fixed now. Also fixed the url field not updating when uploading a file (not an image).
    - Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: WBFS repair (default) options fixed; transfer to image fixed; settings UI widget names fixed; some little bug fixes. You need to reset the settings! Delete WiiBaFu's config file or registry entries: Linux: ~/.config/WiiBaFu/wiibafu.conf; Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu; Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist. Caution: This is a BETA version! Errors, crashes and data loss are not impossible! Use in test environments only, not on productive syste...
    - Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added a View > 'Status Bar' menu item to show/hide the status bar. The status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.
    - SQL Monitor: SQL Monitor 1.4: 1. Added automatic loading of SQL Server instances. 2. Added a friendly wait cursor. 3. Fixed a problem with the 4.0 fx. 4. Added exception handling.
    - LateBindingApi.Excel: LateBindingApi.Excel Release 0.7f (fixed): Differences from the previous version: XlConverter.ToRgb renamed to XlConverter.ToDouble; XlConverter.GetFileExtension added (.xls or .xlsx); Insert methods and overloads added for Range and ShapeNodes; XML documentation removed from the code. Release+Samples V0.7f contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independent COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Examp...
    - Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).
    - BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's new: this release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please note: there is a new option in the setup program to install for "Just Me" or "Everyone." Most people...
    - VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - released 2010.11.22. FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for the whole Visual FoxPro community. What's new: added a third parameter to the BarcodeImage() method; fixed some minor bugs. History: FoxBarcode v.0.10 - released 2010.11.19 - 85 downloads. Project page: FoxBarcode.
    - DotNetAge - a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5? Document Library features and template added; template issues resolved; publishing service performance improved; OPML support added. What is new in DotNetAge 1.1? D.N.A core updates improve runtime performance and stability; the DNA core objects model added; personalization features added that allow users to create a personal website, manage their resources and store personal data; DynamicUI: fixed the bug where the PageManager could not move page nodes. ...
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager, tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.
    - MDownloader: MDownloader-0.15.24.6966: Fixed updater; fixed minor bugs.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications use Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...
    - .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.01: Added new extensions for object.CountLoopsToNull. Added new extensions for DateTime: DateTime.IsWeekend, DateTime.AddWeeks. Added new extensions for string: string.Repeat, string.IsNumeric, string.ExtractDigits, string.ConcatWith, string.ToGuid, string.ToGuidSave. Added new extensions for Exception: Exception.GetOriginalException. Added new extensions for Stream: Stream.Write (overload). And other new methods ... Release as of dotnetpro 01/2011.
    - Free language translator and file converter: Free Language Translator 2.2: Starting with version 2.0, the translator underwent a major redesign that uses MEF-based plugins and .NET 4.0. I've also fixed some bugs and added support for translating subtitles that can show up in video media players. Version 2.1 shows the context menu 'Translate' in Windows Explorer on right click. Version 2.2 has links to start the media file with its associated subtitle. Download the zip file and expand it in a temporary location on your local disk. At a minimum, you should uninstal...

    New Projects

    - .NET DroneController: The .NET DroneController makes it easy to write applications that allow you to control an ARDrone quadricopter.
    - BELT (A PowerShell Snapin for IE Browser Automation): BELT is a PowerShell snapin for IE browser automation. BELT makes it easier to control IE from PowerShell. "BELT" originally stands for "Browser Element Locating Tool".
    - BusStationInfo: BusStationInfo.
    - Dadaist: Dadaist is a random natural language generator that allows creation of random yet understandable (and often crazy) placeholder text for web designers. It could one day replace "Lorem ipsum" altogether! It is developed in C# and is a console application.
    - Darskade LMS: Darskade is a kind of Learning Management System (LMS as a branch of CMS) and its main goal is to help professors and teaching assistants to manage classes, communicate with students, upload course contents, post assignments and grades, and more.
    - ESRI for TableTops: An extension of the ESRI API for use on digital tabletops and multitouch surfaces.
    - Execute a SQL Server Agent Job - SSIS Package: The focus of SSMSAGENTJOBVBS is to explain how you can execute a SQL Server Agent Job remotely with the SSIS package, or DTSX file, as a job step. Look at the source code and dissect it.
    - Format.NET: Format.NET is an easy-to-use library that enables advanced and smart object formatting in .NET projects. Extends the default String.Format(...), allowing property resolution, custom formatting and text alignment.
    - Gallery.net: Gallery.net is a tag-based image management solution, initially targeting high schools that need to manage and organize their large digital image assets. It is developed in C#, ASP.NET MVC3.
    - GroupChallenge: GroupChallenge is a trivia game for one or more simultaneous players to interactively answer questions and submit questions to a hosted game server. Source code includes a WCF Data Service (OData) server, a Windows Phone client, and a Silverlight client. Great for user groups.
    - GSWork: Manage clients and events in your company in a fast and efficient way, without relying on a physical file. Ideal for companies whose workers do not need to be in the same place.
    - hermesystest1: ????? test ???.
    - HTC Sense Util: A Windows Mobile application for managing HTC Sense tabs. The application will control Sense and manage the tab control file.
    - IdentityChecker: IdentityChecker.
    - Indexed Material Splatting for Terrain Rendering Demo: This project is a demo using the Indexed Material Splatting technique for terrain rendering.
    - Integra: PFC.
    - Kinet SDK: Kinect SDK.
    - Lyra 3: Lyra is a small Windows application to create and manage song-books. Each song-book may contain an arbitrary number of songs which can be created or edited within Lyra and projected using any screen device (e.g. a beamer). Supports full-text search and presentation templates.
    - NETWorking: NETWorking allows developers to easily integrate Internet connectivity into their applications. Be it a fault-tolerant distributed cluster or a peer-to-peer system, NETWorking exposes high-performance classes to facilitate application needs in a concise and understandable manner.
    - RAD Platform: Rapid Application Development (RAD) Platform is a free run-time report/form/chart designer and generator engine and multi-platform web/windows application server. A new generation of database-independent .NET software framework. Design once, run at once on web and windows. Enjoy!
    - Razor Repository Engine: Project name: RazorRepository. Programming language: C#. Activity: finished.
    - Scientific Calculator: HTML 5 and CSS 3: An online scientific calculator that implements the latest features available in HTML 5, CSS 3 and jQuery. Developed as a Rich Internet Application (RIA) with an extremely small footprint (<20kb), it demonstrates best scripting techniques and coding practices; it does not require any image files.
    - Seguimiento Facon: Facon tracking control.
    - SharePoint Social: Have you ever needed to show the Facebook and Twitter updates of your organization on your SharePoint portal? If yes, this project is for you.
    - Simple auto update tool: Makes it easy to auto-update your client software.
    - SteelBattalion.NET: A .NET-based library for interfacing with the original Steel Battalion X-Box controller. Utilizes LibUSB drivers. Works on 32-bit and 64-bit platforms, including Windows 7.
    - SystemTimeFreezer: Freezes the system datetime to a value you've selected.
    - Team Explorer Remover: Utility to completely remove TFS Team Explorer from VS.NET 2010. Removes all registry entries regarding Team Explorer; removes all files related to Team Explorer; removes all VS.NET commands, menu items and hyperlinks related to Team Explorer (incl. the one on the StartUp page).
    - TIMESHARE-SELLERS: TIMESHARE-SELLERS.
    - Vsi Builder 2010: Vsi Builder is an extension for Visual Studio 2010 which allows packaging code snippets and old-style add-ins into .Vsi redistributable packages.
    - WCF Peer Resolver: WCF Peer Resolver is a very extendable and simplified resolver framework to use instead of the built-in CustomPeerResolver in the .NET Framework. It is completely developed in C# with .NET Framework 3.5.
    - wsPDV2010: First experimental point-of-sale (PDV) development.


  • Finally, upgrade from Nokia X3 to Samsung Galaxy S III

    This time, something slightly different but nonetheless no less interesting, hopefully. Living on a remote island like Mauritius, the ill-praised 'Cyber Island' in the Indian Ocean, has its advantages in lifestyle and the relaxed environment to live in, but in terms of technology it can be quite a nightmare. Well, I guess this might be a different story to report about... one day.

    Cyber Island Mauritius

    Despite its shiny advertisement as Cyber Island and as an ICT business hub to Africa, Mauritius is not on the latest track of available models in computer hardware or, in the context of this article, cellulars or smart-phones, or communication technology in general. Okay, I have to admit that this statement is only partly true. Money can buy, even here in Mauritius. Luckily, there are ways and ways to deal with this outcry of modern, read: technological, civilisation issues. Online shopping, you might think? Yes, for sure, until you discover in your checkout procedure that a small island in the Indian Ocean isn't a preferred destination for delivery, and the precious time you spent on putting your items into your cart and feeding your personal level of anticipation gets ruined on the last stint.

    Ordering from abroad saves you money

    Anyway, I got in touch with my personal courier and luckily there were some extra kilos left in the luggage. First obstacle sorted, we have a Transporter! Okay, on the next occasion off to Amazon online, using their Prime service for fast delivery. Actually, the order was placed on Saturday evening and everything got delivered on Tuesday morning - a nice job in less than 72 hours. Okay, among the items of that shopping rush I ordered a shiny Samsung Galaxy S III 16GB in oceanic blue - did I mention that you hardly get a blue model in Mauritius? - for my BWE. Interesting side notes: First, Amazon Germany dropped the price of the S3 by roughly 30%, and we got the 16GB model for less than 500 Euro (or approx. Rs. 19,500) compared to the usual Rs. 27,000 on the local market. It even varies whether the local price is inclusive or exclusive of VAT (15%). Second, for a while she had been bothering me to get an iPhone and an iPad for her; fair enough, I thought - decent hardware, posh design and reliable services. Until we watched the 'magical' introduction of Samsung's new models at the IFA exhibition, she read the bashing comments on Google+ about the iPhone 5, and I gave her a brief summary of the lawsuit between Apple and Samsung in the USA. So, yes, Samsung USA is right, the next big thing is already here - literally. My BWE loves the look and touch of the Galaxy S3. And for me it was more cost-effective in terms of purchases done at the App Store, oops, Play Store.

    Transfer of contacts, text messages and media files

    Okay, now that the hardware is in place, how to transfer all those contacts, text messages, media files, etc. between those two devices? In the past, I used the Nokia Communication Suite between various models, but now for Android? Well, as usual Google and Bing are reliable friends, and among the first hits I came across an article about How to Transfer Contacts from Nokia to Android. Couldn't be easier, right? Well, sort of... my main Windows systems are already running on Windows 8, and this actually caused problems with the mobile/smart-phone device drivers. The article provides the download for an older version 1.10 which upgrades to 2.11 (at the time of writing this entry), but neither could get the Galaxy S3 and the Nokia connected. Shame on me... the product page clearly doesn't mention Windows 8 (for now), and Windows 8 isn't available to the general audience at all... After I took a spare machine running Windows Vista, everything went smoothly. Software installed, upgrade done, device drivers for Android automatically downloaded and installed, and the same painless routine for the Nokia part. I think I rebooted the system twice during the whole setup procedure, but hey, it was more or less a distraction while coding some stuff in ASP.NET MVC and Telerik Kendo UI. The transfer of contacts and text messages was done via Wondershare MobileGo for Android, and all media files were moved by shifting the additional microSD card from one device to the other. But even without an external SD card, it would have been very easy to copy the files via Windows Explorer directly.

    Little catch and excellent service

    Fine, we are almost done and the only step left is to shift the SIM card... Ouch, gotcha! The X3 uses a standard-size SIM card while the S III only accepts the microSIM form factor. What an irony: a bigger smartphone needs a smaller SIM card. Luckily, the nearest Emtel showroom is just 5 minutes up the road, and the service staff over there know their job. Finally, after roughly 10 minutes of paperwork, activation and small chit-chat, the S3 came to life on the mobile network. Owning a smartphone now, and knowing that my BWE would like to interact more on social networks away from home, especially to upload pictures and provide local 'check-ins', I activated a data package for her in advance, too. Even though it was Saturday, everything was already done and ready to be used. Nice bonus: the Emtel clerk directly offered to set up the configuration for the Emtel data services - yes sure, go ahead, this saves me searching for it in the settings. Okay, spoiler alert here: setting a static APN to access the Emtel network and the internet wouldn't have been a challenge. But hey, she already had the phone in her hands and I could keep my eyes on the children. Well done, Emtel!

    Resume

    Thanks to the useful software package by Wondershare, it was a hands-free experience to transfer all the data from a Nokia mobile on Symbian S60 to a Samsung Galaxy S III on Android Ice Cream Sandwich (ICS). In the future, this won't be a serious issue at all anymore thanks to synchronisation services and cloud storage. And for now, I'm only waiting for the official upgrade to Jelly Bean.


  • Real World Nuget

    - by JoshReuben
    Why NuGet

    A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects, and so on, NuGet lets you escape from Solution Hell.

    Links

    - Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    - Creating a Nuspec File - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    - Consuming a NuGet package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    - Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    - Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    - Versioning - http://docs.nuget.org/docs/reference/versioning

    POC Folder Structure / POC Setup Steps

    - Install Package Explorer.
    - Source: create a source solution and configure the output directory for projects (Project > Properties > Build > Output Path).
    - Package: add assemblies to the package from the output directory (drag and drop) - add the net folder. File > Export saves the .nuspec file and lib contents:

    <?xml version="1.0" encoding="utf-16"?>
    <package >
      <metadata>
        <id>MyPackage</id>
        <version>1.0.0.3</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <summary />
      </metadata>
    </package>

      File > Save saves the .nupkg file.
    - Create the target solution. In Tools > Options, configure the package source, then add the package and select projects. Output from the package manager (PowerShell console):

    ------- Installing...MyPackage 1.0.0 -------
    Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
    Successfully installed 'MyPackage 1.0.0'.
    Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
    Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
    Added file 'packages.config'.
    Added file 'packages.config' to project 'AssemblyX'
    Added file 'repositories.config'.
    Successfully added 'MyPackage 1.0.0' to AssemblyX.
    ==============================

    A packages folder is created at the solution level, and a packages.config file is generated in each project:

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="MyPackage" version="1.0.0" targetFramework="net40" />
    </packages>

    A local packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents, e.g. the DLLs that the project references. Note: this folder is not checked in.

    Update Packages

    - Configure the Package Manager to automatically check for updates.
    - Browse packages - it automatically picks up the updates.

    Update Procedure

    - Modify the source.
    - Change the source version in the assembly info.
    - Build the source.
    - Open the last package in Package Explorer.
    - Increment the package version number and re-add the assemblies.
    - Save the package with the new version number and export its definition.
    - In the target solution: Tools > Manage NuGet Packages - click on All to trigger a refresh, then click on recent packages to see the updates.
    - If problematic, delete the packages folder.

    Versioning

    uninstall-package mypackage
    install-package mypackage -version 1.0.0.3
    uninstall-package mypackage
    install-package mypackage -version 1.0.0.4

    Dependencies

    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
      <metadata>
        <id>MyDependentPackage</id>
        <version>1.0.0</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <dependencies>
          <group targetFramework=".NETFramework4.0">
            <dependency id="MyPackage" version="1.0.0.4" />
          </group>
        </dependencies>
      </metadata>
    </package>

    Using NuGet without committing packages to source control

    http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages
    Right-click on the Solution node in Solution Explorer and select Enable NuGet Package Restore. Recall that the packages folder is not part of the solution. If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org. To test connectivity: get-package -listavailable. To test NuGet Package Restore, delete the packages folder and open Visual Studio as admin. In the NuGet MSBuild targets: <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

    TFS Build Integration

    Modify the NuGet.targets file:

    <RestorePackages Condition="  '$(RestorePackages)' == '' ">True</RestorePackages>
    ...
    <PackageSource Include="\\IL-CV-004-W7D\Packages" />

    Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.

    Nugetter TFS Build Integration

    - Add the Nugetter build process templates to TFS source control.
    - For the Build Controller, specify the location of custom assemblies.
    - Generate the .nuspec file from Package Explorer: File > Export.
    - Edit the file elements - remove path info from the src and target attributes:

    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
      <metadata>
        <id>Common</id>
        <version>1.0.0</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <dependencies>
          <group targetFramework=".NETFramework3.5" />
        </dependencies>
      </metadata>
      <files>
        <file src="CommonTypes.dll" target="CommonTypes.dll" />
        <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
    ...

    - Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec
    - Add a Build Process Definition based on the Nugetter build process template. Configure the build process and specify:
      - the .sln to build
      - the base path (output directory)
      - the NuGet.exe file path
      - the .nuspec file path

    Copy DLLs to a binary folder

    1) Set copy local for an assembly reference to false.
    2) MSBuild Copy Task - modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx

    <ItemGroup>
      <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
    </ItemGroup>
    <Target Name="BeforeBuild">
      <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
    </Target>

    3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="SourceAssemblies"/>
        </assemblyBinding>
      </runtime>
    </configuration>

    Forcing 'copy local = false'

    The following generic PowerShell script was added to the package's install.ps1:

    param($installPath, $toolsPath, $package, $project)
    if ($project.Object.Project.Name -ne "CopyPackages")
    {
        $asms = $package.AssemblyReferences | %{ $_.Name }
        foreach ($reference in $project.Object.References)
        {
            if ($asms -contains $reference.Name + ".dll")
            {
                $reference.CopyLocal = $false;
            }
        }
    }

    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.
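    As a possible alternative to re-opening Package Explorer for every version bump, the same exported .nuspec can also be packed from the command line with NuGet.exe and the resulting .nupkg dropped onto the UNC share used above as the package source. A minimal sketch; the version number and output folder are illustrative assumptions, not taken from the post:

    REM pack the exported nuspec with an explicit version into a local dist folder
    nuget pack Common.nuspec -Version 1.0.0.4 -OutputDirectory dist

    REM a plain file copy is enough to publish to a UNC-share package source
    copy dist\Common.1.0.0.4.nupkg \\IL-CV-004-W7D\Packages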


  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies.  And in setting up a new business (I'll speak more about that in the future), I have also become a customer of Microsoft's BPOS (Business Productivity Online Services).  In short, it has been a fortnight of Microsoft cloud computing.

    On the Azure side, I've looked, of course, at Web Roles and Worker Roles.  But I've also looked at Azure Storage's REST API (including coding to it directly), I've looked at Azure Drive and the new VM Role; I've looked quite a bit at SQL Azure (including the project "Houston" Silverlight UI) and I've looked at SQL Azure Labs' OData service too.  I've also looked at DataMarket and its integration with both PowerPivot and native Excel.  Then there's AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials.  And to round it out with some user stuff, on the BPOS side, I've been working with Exchange Online, SharePoint Online and LiveMeeting.

    I have to say I like a lot of what I've been seeing.  Azure's not perfect, and BPOS certainly isn't either.  But there's good stuff in all these products, and there's a lot of value.

    Azure Goes Deep

    Most people know that Web and Worker Roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date.  But you can go way beyond that now.  The still-in-beta VM Role gives you the power to craft the machine (much as Amazon's EC2 does), though it takes away the platform's self-managing attributes.  It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.  Whether with Azure Storage or SQL Azure, Azure does data.  And OData is everywhere.  Azure Table Storage supports an OData interface.  So does SQL Azure and so does DataMarket (the former project "Dallas").  That means that Azure data repositories aren't just straightforward to provision and configure... they're also easy to program against, from just about any programming environment, in a RESTful manner.  And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as "Velocity" and throws it up into the cloud, speeding data access even more.

    Snapping in Place

    Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand.  I wasn't expecting that, and I was really happy to discover it.  In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond's innovation right now.  The products bear this out, and so do my observations of the product teams' motivation and high morale.  It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so.  With Azure, both requirements are in place.

    BPOS: Bad Acronym, Easy Setup

    BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server.  As such, it's hard not to be underwhelmed by BPOS.  Until you realize how easy it makes it to get all that stuff set up.  I would say that from sign-up to productive use took me about 45 minutes... and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I Want from My Azure Christmas Next Year

    Not everything about Microsoft's cloud is good.  I close this post with a list of things I'd like to see addressed:

    - BPOS offerings are still based on the 2007 wave of Microsoft server technologies.  We need to get to 2010, and fast.  Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones.  Office 365 can't come fast enough.
    - Azure's Internet tooling and domain naming is scattered and confusing.  Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure Storage work off windows.net.  The Azure portal and Project Houston are at azure.com.  Then there's appfabriclabs.com and sqlazurelabs.com.  There is a new Silverlight portal that replaces most, but not all, of the HTML ones.  And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
    - Microsoft is the king of tooling.  They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces, and do so much CTRL-C/CTRL-V work.  I'd like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup.  Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
    - Beta programs, if they're open, should onboard me quickly.  I know the process is difficult and everyone's going as fast as they can.  But I don't know why it's so difficult or why it takes so long.  Getting developers up to speed on new features quickly helps popularize the platform.  Make this a priority.
    - Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch.  Support .NET 4 now.  Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric.  Have HTML helpers make Azure programming easier.  Have LightSwitch work with SQL Azure and not require SQL Express.  LightSwitch has some promising Azure integration now, but we need more.  WebMatrix has none and that's just silly, now that the Extra Small Instance is being introduced.
    - The Windows Azure Platform Training Kit is great.  But I want Microsoft to make it even better and I want them to evangelize it much more aggressively.  There's a lot of good material on Azure development out there, but it's scattered in the same way that the platform is.  The Training Kit ties a lot of disparate stuff together nicely.  Make it known.

    Should Old Acquaintance Be Forgot

    All in all, diving deep into Azure was a good way to end the year.  Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.


  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template", I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server because I did not require a graphical interface, but it is a simple change to create it with a GUI.

    Required Software

    - VirtualBox.
    - Oracle Enterprise Linux.

    Creating the VM

    I'll assume that the reader is experienced with VirtualBox and installing OEL, and hence will make this section brief.

    Create VirtualBox Guest

    Create a new VirtualBox guest and select Oracle Linux 64-bit. Follow through the create process and select Dynamic Disk Size and the default 12GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 iso to the cd drive and start the guest.

    Install OEL

    On starting the guest, the system will boot off the associated OEL 6u3 iso and take you through the standard installation process. Select all the appropriate information, but when you reach the installation type select Basic Server because we do not need the additional packages and only need access through the command line interface. Complete the installation and reboot the guest. At this point we now have a basic OEL server running.

    Installing Guest Add-ons

    Before we can easily access the guest we will need to add the VirtualBox guest add-ons. These will provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do the following:

    - Enable networking.
    - Install additional RPMs.

    To enable the networking (eth0), which appears to be disabled by default, we can execute:

    ifup eth0

    This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes" (a sketch of the edited file follows below). Now that we have enabled the network, we will need to install a number of additional RPMs.
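    For reference, here is a minimal sketch of what the edited interface file could look like on a default OEL 6u3 install; the exact set of keys (and the DHCP choice) varies per machine, so treat this as an illustrative assumption rather than a definitive template:

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    TYPE=Ethernet
    BOOTPROTO=dhcp
    ONBOOT=yes    # changed from "no" so that eth0 comes up automatically at boot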
    First we will need to configure the yum repository as follows:

    [ol6_latest]
    name=Oracle Linux $releasever Latest ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    [ol6_ga_base]
    name=Oracle Linux $releasever GA installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u1_base]
    name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u2_base]
    name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u3_base]
    name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_UEK_latest]
    name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    [ol6_UEK_base]
    name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    Once the repository has been edited we will need to execute the following yum commands:

    yum update
    yum install gcc
    yum install kernel-uek-devel
    yum install kernel-devel
    yum install createrepo

    At this point we have all the additional packages required to install the VirtualBox guest add-ons. So select Devices > Install Guest Additions on your running guest. This simply places the VirtualBoxGuestAdditions.iso in the virtual cd, and we will need to execute the following before we can run it:

    mkdir /media/cdrom
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom/
    ls
    ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. What you will notice is that during the installation a "Failed" is displayed, but this is simply because we have no graphical components. The installation will also have added the vboxsf group to the system, and to access any shared folders we create, our user will need to be a member of this group, so the next stage is to add the root user to this group as follows:

    usermod -G vboxsf root
    cat /etc/group
    cat /etc/passwd
    init 0

    Now simply shut down the guest and add the shared folder within your guest's settings.

    Install ModifyJeOS

    Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS RPMs are located in the shared folder.
    We can simply execute:

    rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
    # Test with modifyjeos

    Using ModifyJeOS

    I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this), and from there it can be executed from any location:

    MountSystemImg.sh

    #!/bin/sh
    # The script assumes it's being run from the directory containing the System.img
    # Export for later i.e. during unmount
    export LOOP=`losetup -f`
    export SYSTEMIMG=/mnt/elsystem
    export TEMPLATEDIR=`pwd`
    # Make Temp Mount Directory
    mkdir -p $SYSTEMIMG
    # Create Loop for the System Image
    losetup $LOOP System.img
    kpartx -a $LOOP
    mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
    # Change Dir into mounted Image
    cd $SYSTEMIMG
    echo "######################################################################"
    echo "###                                                                ###"
    echo "###  Starting Bash shell for editing. When completed log out to    ###"
    echo "###  unmount the System.img file.                                  ###"
    echo "###                                                                ###"
    echo "######################################################################"
    echo
    bash
    cd ~
    cd $TEMPLATEDIR
    umount $SYSTEMIMG
    kpartx -d $LOOP
    losetup -d $LOOP
    rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These can be created under the mounted shared directory. In the example below I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.
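    The screenshots of that example are not reproduced in this snippet, so here is a hedged sketch of what the command sequence could look like. The shared-folder and template archive names are illustrative assumptions; only the OEL_40GB_ROOT rename comes from the text above:

    cd /media/sf_Templates            # host shared folder (name is an assumption)
    tar -xzf base_template.tgz        # extract the base template (archive name is an assumption)
    mv BASE OEL_40GB_ROOT             # rename the extracted directory (original name is an assumption)
    cd OEL_40GB_ROOT                  # this directory now contains System.img
    MountSystemImg.sh                 # mounts System.img, opens a shell for editing, unmounts on exit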


  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known "obj_filter_list" parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outlined below with "obj_filter_list" still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled.

    The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on the 5 partitioned tables as quickly as possible to ensure that it all finished before morning.

    Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of these sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely keep my colleague up all night!

    In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table.

    This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn't just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTIONS parameter set to 'GATHER STALE'. Unfortunately the statistics on the 5 partitioned tables were not stale, and enabling incremental statistics does not mark the existing statistics stale. Plus, how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the "obj_filter_list" parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The "obj_filter_list" parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB.

    Each entry in the collection has five fields: the schema name or object owner, the object type (i.e., 'TABLE' or 'INDEX'), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcards (similar to the '*' character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object qualifies for statistics gathering as long as it satisfies the filter conditions in one entry.

    You first must create the collection of objects, and then gather statistics for the specified collection. It's probably easier to explain this with an example. I'm using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach.

    Step 0. Delete the statistics on the tables in the SH schema.
    Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level.
    Step 2. Enable incremental statistics for the 5 partitioned tables.
    Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command.

    Here, you will notice that we defined two variables of the DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly, due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command.

    Step 4. Confirm statistics were gathered on the 5 partitioned tables.

    Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := 'SH'; You will get the same result, since you have specified gather_schema_stats and so there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased, if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := 'sales';. However, if you have a table called 'MyTab' instead of 'MYTAB', then you need to specify Obj_filter_lst(1).objname := '"MyTab"'; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here.
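    The original post illustrated these steps with screenshots that are not reproduced in this snippet. Below is a minimal SQL*Plus/PL-SQL sketch of what the preference settings and the filtered gather could look like, using the five tables named above; the preference values shown are illustrative, so check the DBMS_STATS documentation for your release before relying on them:

    -- Step 1: enable concurrent statistics gathering (global preference)
    exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');

    -- Step 2: enable incremental statistics for the 5 partitioned tables
    exec DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE');
    exec DBMS_STATS.SET_TABLE_PREFS('SH','SALES2','INCREMENTAL','TRUE');
    -- ...repeat for SALES3, COSTS and COSTS2

    -- Step 3: build the filter list and gather schema stats for just those tables
    DECLARE
      filter_lst  DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
      obj_lst     DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
    BEGIN
      filter_lst.extend(5);
      filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
      filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
      filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
      filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
      filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';

      DBMS_STATS.GATHER_SCHEMA_STATS(ownname         => 'SH',
                                     obj_filter_list => filter_lst,
                                     objlist         => obj_lst);
    END;
    /

    -- Step 4: confirm statistics were gathered on the 5 partitioned tables
    SELECT table_name, partition_name, last_analyzed
    FROM   dba_tab_statistics
    WHERE  owner = 'SH'
    AND    table_name IN ('SALES','SALES2','SALES3','COSTS','COSTS2');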
+Maria Colgan


  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me I won't go."

    Is it too much to ask of a quite expensive 3rd party backup tool to be way faster than the SQL Server native backup? Or at least to save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean at least getting a file half the size. After Googling the internet in an attempt to understand what other "SQL people" are using for database backups, I see that most people are using one of three tools, which are the main players in the SQL backup area:

    - LiteSpeed by Quest
    - SQL Backup by Red Gate
    - SQL Safe by Idera

    The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered whether many simply became accustomed to using the above tools back in the SQL 2000 and 2005 days. This can easily be understood due to the fact that a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes resulting in a 30GB file, leaving you speechless, and you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore - features that are still missing from SQL Server. But in case you, as well as me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium size database

    Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools. The 3rd party tools' maximum compression level takes twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price that you pay is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast with its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks, and, obviously, on the faster storage I got my backup ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I'm definitely not going to do so.

    Very large database

    As the next phase of this benchmark, I moved to a 6 terabyte database which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU; in the case above, 8 files for a 2 quad-core CPU server. The impact of additional files is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU usage are variables that should be taken into consideration. As for us, the backup speed is more critical than the storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs
    Database location: NetApp FC 15K aggregate, 53 disks

    Backup statements: No matter how good that UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command line execution is also an option; I haven't noticed any impact on the backup performance).

    SQL Server native backup:
    backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

    LiteSpeed:
    EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1

    SQL Backup (Red Gate):
    EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL Safe:
    EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:

    - Red Gate: use the THREADPRIORITY option (values 0 - 6)
    - LiteSpeed: use @throttle (a percentage, like 70%)
    - SQL Safe: the only thing I have found was the @Threads option.

    Yours, Maria


  • Visual Studio 2010 Productivity Tips and Tricks - Part 1: Extensions

    - by ToStringTheory
    I don’t know about you, but when it comes to development, I prefer my environment to be as free of clutter as possible.  It may surprise you to know that I have tried ReSharper, and did not like it, for the reason that I stated above.  In my opinion, it had too much clutter.  Don’t get me wrong, there were a couple of features that I did like about it (inversion of if blocks, code feedback), but for the most part, I actually felt that it was slowing me down. Introduction Another large factor besides intrusiveness/speed in my choice to dislike ReSharper would probably be that I have become comfortable with my current setup and extensions.  I believe I have a good collection, and am quite happy with what I can accomplish in a short amount of time.  I figured that I would share some of my tips/findings regarding Visual Studio productivity here, and see what you had to say. The first section of things that I would like to cover, are Visual Studio Extensions.  In case you have been living under a rock for the past several years, Extensions are available under the Tools menu in Visual Studio: The extension manager enables integrated access to the Microsoft Visual Studio Gallery online with access to a few thousand different extensions.  I have tried many extensions, but for reasons of lack reliability, usability, or features, have uninstalled almost all of them.  However, I have come across several that I find I can not do without anymore: NuGet Package Manager (Microsoft) Perspectives (Adam Driscoll) Productivity Power Tools (Microsoft) Web Essentials (Mads Kristensen) Extensions NuGet Package Manager To be honest, I debated on whether or not to put this in here.  Most people seem to have it, however, there was a time when I didn’t, and was always confused when blogs/posts would say to right click and “Add Package Reference…” which with one of the latest updates is now “Manage NuGet Packages”.  So, if you haven’t downloaded the NuGet Package Manager yet, or don’t know what it is, I would highly suggest downloading it now! Features Simply put, the NuGet Package Manager gives you a GUI and command line to access different libraries that have been uploaded to NuGet. Some of its features include: Ability to search NuGet for packages via the GUI, with information in the detail bar on the right. Quick access to see what packages are in a solution, and what packages have updates available, with easy 1-click updating. If you download a package that requires references to work on other NuGet packages, they will be downloaded and referenced automatically. Productivity Tip If you use any type of source control in Visual Studio as well as using NuGet packages, be sure to right-click on the solution and click "Enable NuGet Package Restore". What this does is add a NuGet package to the solution so that it will be checked in along side your solution, as well as automatically grab packages from NuGet on build if needed. This is an extremely simple system to use to manage your package references, instead of having to manually go into TFS and add the Packages folder. Perspectives I can't stand developing with just one monitor. Especially if it comes to debugging. The great thing about Visual Studio 2010, is that all of the panels and windows are floatable, and can dock to other screens. The only bad thing is, I don't use the same toolset with everything that I am doing. By this, I mean that I don't use all of the same windows for debugging a web application, as I do for coding a WPF application. 
    The only thing is, Visual Studio doesn't save the screen positions for all of the undocked windows. So, I got curious one day and decided to check and see if there was an extension to help out. This is where I found Perspectives.
    Features Perspectives gives you the ability to configure window positions across any of your monitors, and then to save the positions in a profile. Perspectives offers a panel to manage different presets/favorites, a toolbar to add to the toolbars at the top of Visual Studio, and the ability to 'Favorite' a profile to add it to the Perspectives toolbar.
    Productivity Tip Take the time to set up profiles for each of your scenarios - debugging web/winforms/xaml, coding, maintenance, etc. Try to remember to use the profiles for a few days, and at the end of a week, you may find that your productivity was never better.
    Productivity Power Tools Ah, the Productivity Power Tools... Quite possibly one of my most used extensions, if not my most used. The tool pack gives you a variety of enhancements, ranging from key shortcuts and interface tweaks to completely new features for Visual Studio 2010.
    Features I don't want to bore you with all of the features here, so here are my favorites:
    Quick Find - Unobtrusive search box in the upper-right corner of the code window. Great for searching in general, especially in a file.
    Solution Navigator - The 'Solution Explorer' on steroids. Easy to search for files, see defined members/properties/methods in files, and my favorite feature is the 'set as root' option.
    Updated 'Add Reference...' Dialog - This is probably my favorite enhancement, period... The 'Add Reference...' dialog redone in a manner that resembles the Extension/Package managers. I especially love the ability to search through all of the references.
    "Ctrl - Click" for Definition - I am still getting used to this as I usually try to use my keyboard for everything, but I love the ability to hold Ctrl and turn properties/methods/variables into hyperlinks that you click on to see their definitions. Great for travelling down a rabbit hole in an application to research problems.
    While there are other commands/utilities, I find these to be the ones that I lean on the most for their usefulness.
    Web Essentials If you do any type of web development in ASP.NET, ASP.NET MVC, or even plain HTML, I highly suggest grabbing Web Essentials right NOW! This extension alone is great for productivity in web development, and greatly decreases my development time on new features.
    Features Some of its best features include:
    CSS Previews - I say 'previews' because of the multiple kinds of previews you get in CSS: font-family, color, and background/background-image previews. This is great for tweaking the UI slightly in different ways and seeing how the variations look in the CSS window at a glance.
    Live Preview - One word - awesome! This goes well with my multi-monitor setup. I put the site on one monitor in a Live Preview panel, and then as I make changes to CSS/cshtml/aspx/html, the preview window will update with each save/build automatically. For CSS, you can even turn on live-update, so as you are tweaking CSS, the style changes in real time. Great for tweaking colors or font-sizes.
    Outlining - Small, but I like to be able to collapse regions/declarations that are in the way of new work, or are just distracting.
    Commenting Shortcuts - I don't know why it wasn't included by default, but it is nice to have the key shortcuts for commenting working in the CSS editor as well.
    Productivity Tip When working on a site, hit CTRL-ALT-ENTER to launch the Live Preview window. Dock it to another monitor. When you make changes to the document/css, just save and glance at the other monitor. No need to Alt+Tab back and forth before you continue editing.
    Conclusion These extensions are only the most useful and least intrusive - the ones that I use every day. The great thing about Visual Studio 2010 is the extensibility that it offers developers. Have an extension that you use that isn't intrusive, but isn't listed here? Please, feel free to comment. I love trying new things, and am always looking for useful new additions to my toolset. Finally, please keep an eye out for Part 2 on key shortcuts in Visual Studio. Also, if you are visiting my site (http://tostringtheory.com || http://geekswithblogs.net/tostringtheory) from an actual browser and not a feed, please let me know what you think of the new styling!

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in Enterprise development was "always program against an interface".  This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface and had a matching implementation class.  Why?  "It's more reusable" was one answer.  "It doesn't tie you to a specific implementation" was a slightly more knowing answer.  And let's not forget the discussion-ending "it's a standard".  The problem with these responses was that senior people didn't really understand the reason we were doing the things we were doing, and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.
    It wasn't until a few years later that I finally heard the term "Inversion of Control".  Simply put, "Inversion of Control" takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force.  For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:
        ICatalog catalog = new Catalog();
        Category[] categories = catalog.GetCategories();
    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object.  If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality.  That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.
    So how do I test my code without needing Catalog to work?  A very primitive approach I've seen is to change the line that instantiates catalog to read:
        ICatalog catalog = new FakeCatalog();
    Once the test is run and passes, the code is switched back to the real thing.  This obviously poses a huge risk for introducing test code into production, and in my opinion is worse than just keeping the dependency and its associated setup work.  Another popular approach is to make use of factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object.  Using this approach, the code may look something like this:
        ICatalog catalog = CatalogFactory.GetCatalog();
    The code inside the factory is responsible for deciding "what kind" of catalog is needed.  This is a far better approach than the previous one, but it does make projects grow considerably, because now in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces.  Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server". This is where software intended specifically to facilitate Inversion of Control comes into play.
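    Before looking at those tools, here is a minimal sketch of the hand-rolled factory approach just described, offered only as an illustration: CatalogFactory and its test hook are hypothetical names, while ICatalog, Catalog and FakeCatalog are the example types from above. It also hints at why this pattern multiplies the code you have to maintain.
        using System;

        // Hypothetical hand-rolled factory for the ICatalog example above.
        // Every interface tends to need one of these, which is exactly the
        // maintenance burden described in this post.
        public static class CatalogFactory
        {
            // Tests can swap in a different creator; production code leaves it null.
            private static Func<ICatalog> overrideFactory;

            public static ICatalog GetCatalog()
            {
                // Decide "what kind" of catalog is needed.
                return overrideFactory != null ? overrideFactory() : new Catalog();
            }

            public static void SetFactory(Func<ICatalog> factoryMethod)
            {
                overrideFactory = factoryMethod;
            }
        }
    In a unit test you could call CatalogFactory.SetFactory(() => new FakeCatalog()) before exercising the code under test, which is essentially a hand-rolled version of what an IoC container automates for you.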
    There are many libraries that take on the Inversion of Control responsibilities in .NET, and most of them have their own pros and cons.  From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team.  I'm primarily focusing on this library because questions about it inspired this post.
    At Unity's core, and that of most any IoC framework, is a catalog or registry of components.  This registry can be configured either through code or using the application's configuration file, and in the simplest terms says "interface X maps to concrete implementation Y".  It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it".  The object that exposes most of the Unity functionality is the UnityContainer.  This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T.  When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object; it can also track dependencies of that object and ensure that the entire dependency chain is satisfied.
    There are three basic ways that I have seen Unity used within projects: through classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern.
    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface.  The up side of this approach is that IoC is utilized, but the down side is that every class has to be aware that Unity is being used and is tied directly to that implementation.
    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea.  As an alternative, classes can be designed for Dependency Injection.  Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world.  This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers.  When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use.  During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if it is unable to meet the advertised needs of the class.  The up side of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used.  The down side of this approach is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and that objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).
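    To make the first two usage styles concrete, here is a minimal sketch based on the Unity API of that era (RegisterType and Resolve on UnityContainer). CatalogService and CompositionRoot are hypothetical names invented for the example; ICatalog and Catalog are the types from the earlier snippets.
        using Microsoft.Practices.Unity;

        // A class designed for constructor injection: its needs are advertised
        // in the constructor, and it knows nothing about Unity itself.
        public class CatalogService
        {
            private readonly ICatalog catalog;

            public CatalogService(ICatalog catalog)
            {
                this.catalog = catalog;
            }

            public int CountCategories()
            {
                return catalog.GetCategories().Length;
            }
        }

        public static class CompositionRoot
        {
            public static CatalogService Build()
            {
                // "Interface X maps to concrete implementation Y."
                var container = new UnityContainer();
                container.RegisterType<ICatalog, Catalog>();

                // Unity finds the injection constructor and satisfies the
                // ICatalog dependency before handing back the service.
                return container.Resolve<CatalogService>();
            }
        }
    In a test, the same registration line could map ICatalog to a fake implementation instead, and nothing in CatalogService would need to change.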
The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation.  When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container.  Like using Unity directly, it does tie you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the up side of giving you the freedom to swap out the underlying IoC container if necessary.  I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult to track runtime errors. This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code.  I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".
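    To round out the three usage styles, here is a minimal sketch of the Service Locator flavor. It assumes the Patterns and Practices Common Service Locator library and its Unity adapter; the adapter's exact type name and namespace vary by Unity version, and where the post mentions a Retrieve call, this sketch uses GetInstance<T>, which is the method the Common Service Locator interface exposes.
        using Microsoft.Practices.ServiceLocation;
        using Microsoft.Practices.Unity;

        public static class ServiceLocatorExample
        {
            public static void Configure(IUnityContainer container)
            {
                // Point the shared locator at the Unity container once, at startup.
                ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(container));
            }

            public static Category[] LoadCategories()
            {
                // Anywhere in the code base, ask the locator instead of the container.
                var catalog = ServiceLocator.Current.GetInstance<ICatalog>();
                return catalog.GetCategories();
            }
        }
    Notice that nothing here advertises the ICatalog requirement up front, which is exactly the drawback described above.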

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012Popular Releasesmenu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version-1.3.1, Modifications and Bug Fixed (from v 1.3.0) New User Manual for iPDC-v1.3.1 available on websites. Bug resolved : PMU Simulator TCP connection error and hang connection for client (PDC). Now PMU Simulator (server) can communicate more than one PDCs (clients) over TCP and UDP parallely. PMU Simulator is now sending the exact data frames as mentioned in data rate by user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. 
Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.comExt Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsAny-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.BabyCloudDrives - the multi cloud drive desktop's application: wpf ????BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely.. CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add, "New Version Available!" 
functionality to your CodePlex project.Collect: ????????!CSVManager: CSV??CSV?????,????CSV??,??????Exam Project: My Exam Project. Computer Vision, C and OpenCV-FTP: Hey guys thanks for checking out my ftp!Haushaltsbuch: 1ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.MyJabbr: MyJabbr netduinoscope: Design shield and software to use netduino as oscilloscopeNetSurveillance Web Application: Net Surveillance Web ApplicationNiconicoApiHelper: ????API?????????OStega: A simple library for encrypt text into an bmp or png image.OURORM: ormTFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!WinRT LineChart: An attempt at creating an usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks that are gone every week, mind you, although in the last month I’ve been to: Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK; Boston, MA; and I’m currently en route to Anaheim, CA.  While this many places in a month is a bit unusual for me, I would say I travel frequently. I’ve travelled for most of my 28+ years in IT, and at one time was a consultant traveling weekly.
    With that much time away from my primary work location, I have to find ways to stay productive. Some might say “just rest – take a nap!” – but I’m not able to do that. For one thing, I’m a very light sleeper and I’ve never slept on a plane - even on a 30+ hour trip to New Zealand in Business Class - so that just isn’t an option. I also am not always in the plane, of course. There’s the hotel, the taxi/bus/train, the airport, and then all that over again when I arrive. Since my regular jobs have many demands, I have to get work done.
    Note: No, I’m not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes – and I give myself permission to do that anytime – sometimes the whole trip. I have too few heartbeats left in life to only focus on work – it’s just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I’m talking about here is a plan, not some task list I have to follow. When I get to the location I’m traveling to, I always build in as much time as I can to ensure I enjoy the sights and the people I’m with. I would find traveling to be a waste if not for that.
    The Unrealistic Expectation As I would evaluate the trip I was taking – say a 6-8 hour flight – I would expect to get 10-12 hours of work done. After all, there’s the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA.
    Right. On return from the trip, I had accomplished little, had more e-mails and other work that had piled up, and I was tired, hungry, and unorganized. This had to change. So, I decided to do three things: segment my work, set realistic expectations, and plan accordingly.
    Segmenting By Available Resources The first task was to decide what kind of work I could do in each location – if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to get all of the work I’d like to accomplish while away segmented into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I’m all set for the adventure ahead. Then I simply do each kind of work whenever I’m in that situation.
    No power There are certain times when I don’t have power available. But not only that, I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train ride and the airports. I have a paper notebook (Moleskine, of course) and a pencil, and I print out any notes or numbers I need prior to the trip. Once I’m airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can.  If I can’t use any electronics, or once the power runs out, I schedule time for reading.
    I can read at the airport or anywhere, actually, even in-flight or on any other transport. I “read with a pencil”, meaning I take a lot of notes, which I like to put in OneNote, but since in most cases I don’t have power, I use the Moleskine to do that. Speaking of which, sometimes as I’m thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write it down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I’ve tried those “smart pens” and so on to automate this, but it just never works out. Pencil and paper are just fine. As I mentioned, sometimes I just need to think. I’ll do nothing, and let my mind wander, thinking of nothing in particular, or some math problem or science question I’m interested in. My only issue with this is that I communicate to think, and I don’t want to drive people crazy by being that guy that won’t shut up, so I think in a different way.
    Power, but no Internet or Phone If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets.
    Power, Internet, Phone and a Place to Work At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There are simply too many distractions, things you need, or other issues that get in the way. Of course, I can work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I’ve done some measurements, and this holds true almost every time. The key is that I re-set my expectations (and my co-workers’ expectations as well) that this is the case. I use the Out-Of-Office notices to let people know that I’m just not going to be 100% at this time – it’s hard for everyone, but it’s more honest and realistic, and I’d rather they know that – and that I realize that – than to let them think I’m totally available. Because I’m not – I’m traveling. I don’t tend to put in too much detail, because after all I don’t necessarily want to let people know when I’m not home :) but I do think it’s important to let people that depend on me know that I’ll get back with them later.
    I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline, and don’t worry about any of this – good for you! That’s completely valid as well.   (Oh, and yes, I wrote this at 35K feet, on Alaska Airlines on a trip. :)  Practice what you preach, Buck.)

    Read the article

  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're about nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic, and Oracle 11g database - all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and Oracle database, then you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation is comprised of two procedures, the first covers the installation, which is running the installer and answering some questions. This will lay down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we? Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next. Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default, however you can put this wherever you want. Click Next. Select the database type, database connection type – note that the database name should match the value used for the connection type (e.g. if using SID, then the name should be IDMAKER; if using ServiceName, the name should be “idmaker.us.oracle.com”). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for Oracle 11g Advanced Compression option do not use this option! Terrible, terrible calamities will befall you if you do! Click Next. Enter the Documaker Admin user name (default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next. Enter the Assembly Line user name (default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Click Next. Note: you can set the DB folder if needed (typically you don’t – see ODEE Installation Guide for specifics. Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria on password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password. 
Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer.  You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to AdminServer). You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user, however this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next. Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory/directories that is/are monitored by ODEE for jobs to process. Click Next. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here so if the recipient clicks "reply", this is where it will go). Click Next. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next. If you will be using Oracle UMS for sending MMS/text messages, you can enable and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next. Verify the installation details and click Install. You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion). Allow the installation to progress... Click Next. You can save the response file if needed (e.g. in case you forgot to save it earlier!) Click Finish. That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment where we complete and verify the installation. 

    Read the article

  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency. EFFECTIVENESS The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate or am I still operating at the previous level)? Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc, change of direction of team/org/etc, re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those at each annual review anyway. If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment, don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way! That's my view on effectiveness. You have to fix that before moving to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it. EFFICIENCY The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and it can change with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... 
All you need to really be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at. If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take so that you can have repeatable success. That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: You spend more time than necessary dealing with email, and you still end up with dropped email threads and with slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you. Parting Thoughts Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess what things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of “Good Enough”. My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work it is because of two reasons: They are investing energy in stuff that they enjoy doing which the business regards as having a lower priority than a lot of other things on their plate. They are completing tasks to a level of higher quality than what is required (due to personal pride) missing the big picture which almost always mandates completing three tasks at good enough quality than knocking only one of them out of the park while the other two come in late or not at all. There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • SQL Server Developer Tools &ndash; Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need for configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you will need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS.
    So, exactly what do we need to configure?
    The binding type and the binding constraints
    The metadata behavior
    The debug behavior
    You can of course configure even more if you want to; WCF is, after all, the king of configuration… As an example, I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, in regards to security, it is probably better to have some constraints on the server, but this is only a demonstration.
    The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I’ll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own configuration; with this piece of code, they will be sharing the same one.
        public static class ServiceConfig
        {
            public static Binding DefaultBinding
            {
                get
                {
                    var binding = new BasicHttpBinding();
                    Configure(binding);
                    return binding;
                }
            }

            public static void Configure(HttpBindingBase binding)
            {
                if (binding == null)
                {
                    throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
                }

                binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
                binding.MaxBufferSize = Int32.MaxValue;
                binding.MaxBufferPoolSize = 2147483647;
                binding.MaxReceivedMessageSize = Int32.MaxValue;
                binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
                binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
                binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
                binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
                binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
            }

            public static ServiceMetadataBehavior ServiceMetadataBehavior
            {
                get
                {
                    return new ServiceMetadataBehavior
                    {
                        HttpGetEnabled = true,
                        MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                    };
                }
            }

            public static ServiceDebugBehavior ServiceDebugBehavior
            {
                get
                {
                    var smb = new ServiceDebugBehavior();
                    Configure(smb);
                    return smb;
                }
            }

            public static void Configure(ServiceDebugBehavior behavior)
            {
                if (behavior == null)
                {
                    throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
                }

                behavior.IncludeExceptionDetailInFaults = true;
            }
        }
    Configuring the server There are basically two ways to host a WCF service: in IIS, or self-hosting. When hosting a WCF service in a production environment using a SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do: test the same configuration in both your test and production environments.
    Configuring when Self-hosting When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service, and open it:
        // Create the service host.
        var host = new ServiceHost(typeof(MyService), endpoint);

        // Configure the binding.
        host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

        // Configure metadata behavior.
        host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

        // Configure debug behavior.
        ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

        // Start listening to the service.
        host.Open();
    Configuring when hosting in IIS When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running: an svc-file with code-behind, an interface to the service, and a Web.config. In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the svc-file:
        <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>
    The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:
        System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
        System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    In this example, we'll be using the last one, so our implementation looks like this:
        public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
        {
            protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var host = base.CreateServiceHost(serviceType, baseAddresses);
                host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
                ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
                return host;
            }
        }
    As you can see, we are using the same configuration helper we used when self-hosting. Now that you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can still do so in other sections.
    Configuring the client Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, like in the layer diagram below, you can share the same piece of code.
    The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:
        var identity = EndpointIdentity.CreateDnsIdentity("localhost");
        var endpointAddress = new EndpointAddress(endPoint, identity);
        var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
        var myService = factory.CreateChannel();
        using ((IDisposable)myService)
        {
            myService.Hello();
        }
        factory.Close();
    Happy configuration!
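    For completeness, here is a minimal sketch of the shared contract assembly that the snippets above assume. IMyService and its Hello operation are placeholder names matching the fragments in this post, and the data contract is only there to show where custom types would live.
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Shared between the host and the client so that both sides compile
        // against the same service contract.
        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            void Hello();
        }

        // Example of a custom type the service could expose.
        [DataContract]
        public class Greeting
        {
            [DataMember]
            public string Text { get; set; }
        }
    Putting these types in their own class library keeps the ChannelFactory<IMyService> client and the ServiceHost-based host free of generated proxies.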

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wed, April 11th. For those of you who couldn’t join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING’s drivers and rationale for the platform approach, the phased implementation strategy, results & metrics, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications. While we tackled a few questions during the webcast, we have captured the responses to those that we weren’t able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING’s implementation and strategy.
    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors? A. We include very business-friendly descriptions.  The OIA tool has the facility to allow this.
    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice? A. The new and old manager are in the workflow.  The tool can check for any Separation of Duties (SOD) violations with both having similar accesses.  It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.
    Q. What versions of OIM and OIA are being used at ING? A. OIM 11gR1 and OIA 11gR1; the very latest versions available.
    Q. Are you using an entitlements / role catalog? A. Yes. We use both roles and entitlements.
    Q. What specific unexpected benefits did the Identity Warehouse provide ING? A. The most unanticipated was to help Legal Hold identify user IDs in the various applications.  Other benefits included providing a one-stop shop for all aggregated ID information.
    Q. How fine grained are your applications and entitlements? Did OIA and OIM support that level of granularity? A. We have some very fine grained entitlements, but we roll these up into approved Roles to allow for easier management.  For managing very fine grained entitlements, Oracle offers the Oracle Entitlement Server.  We currently do not own this software but are considering it.
    Q. Do you allow any individual access or is everything truly role based? A. We are a hybrid environment with roles and individual positive and negative entitlements.
    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project? A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.
    Q. How did you handle rolling out the standard ID format to existing users? A. We just used the standard IDs for new users.  We have not taken on a project to address the existing nonstandard IDs.
    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement TYPES? For example, an app may have different levels of access and it may need to know the user's country/state to associate them with particular customers. A. We focus on the functional user and craft the role around their daily job requirements.  The role captures the required application entitlements.  To keep role explosion down, we use role mining in OIA and also meet and interview the business.  It is an iterative process to get role consensus.
    Q. Great presentation! How many rounds of certifications has ING performed so far? A. Around 7 quarters, plus constant certifications on transfer.
    Q. Did you have executive support from the top down? A. Yes. The executive support was key to our success.
    Q. For your cloud instance are you using OIA or OIM as SaaS? A. No.  We are just provisioning and deprovisioning to various cloud providers.  (ServiceNow is an example.)
    Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role. A. We have SOD checks.  Also, all Roles are initially approved by external audit, and the role owners have to certify the roles and any changes.
    Q. What is your ratio of employees to roles? A. We are still in the process of going through our various lines of business, so I do not have a final ratio.  From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions.  For standardized lines of business such as call centers, the ratio is very good, where we can have a single role that covers many employees.  For specialized lines of business like treasury, it can be one or two people per role.
    Q. Is ING using the Oracle On Demand service? A. No.
    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the identity warehouse as well if you haven't reached OIM yet? A. No, OIM deployment is not required to implement OIA’s Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together.
    Q. When is the Security Governor product coming out? A. Oracle Security Governor for Healthcare is available today.
    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you where Aberdeen’s Derek Brink will discuss the survey results from the recently published report “Analyzing Platform vs. Point Solution Approach in Identity”. And, you can do a quick (& free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here’s the slide deck from our ING webcast: ING webcast platform.

    Read the article

  • CodePlex Daily Summary for Thursday, October 18, 2012

    CodePlex Daily Summary for Thursday, October 18, 2012Popular ReleasesGac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.4.0.0: Gaclib.zip contains the following content GacUIDemo Demo solution and projects Public Source GacUI library Document HTML document. Please start at reference_gacui.html Content Necessary CSS/JPG files for document. Improvements to the previous release Added Windows 8 theme (except tab control, will be provided for the next release) GacUI will choose to use Windows 7 theme or Windows 8 theme as default theme according to OS version Added 1 new demos Template.Window.CustomizedBorder ...Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): - Revered back the embedding of the 2x assemblies.Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide What is new? The Version Control specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details Documentatio...D3 Loot Tracker: 1.5.5: Compatible with 1.05.Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the up coming MonoGame 3.0. It contains an Installer which will install a binary release of MonoGame on windows boxes with the following platforms. Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time as the installers for those platforms are not available yet. The installer will also install a bunch of Project templates for Visual Studio 2010 , 2012 and MonoDevleop. For those of you wish...Windawesome: Windawesome v1.4.1 x64: Fixed switching of applications across monitors Changed window flashing API (fix your config files) Added NetworkMonitorWidget (thanks to weiwen) Any issues/recommendations/requests for future versions? This is the 64-bit version of the release. Be sure to use that if you are on a 64-bit Windows. Works with "Required DLLs v3".CODE Framework: 4.0.21017.0: See change log in the Documentation section for details.Global Stock Exchange (Hobby Project): Global Stock Exchange - Invst Banking (Hobby Proj): Initial VersionMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1: Add support for .net 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3 Scheduler Import & Export feature (for Professional and Entreprise Editions) UTC datetime and timezone support .net 4.5 and Visual Studio 2012 migration client magelia global refactoring release of a nugget package to help developers speed up development http://nuget.org/packages/Magelia.Webstore.Client optimization of the data update mechanism (a.k.a. "burst") Performance improvment of the d...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video Sencha Touch 2 example app using JayData: Netflix browser. 
What's new in JayData 1.2.2 For detailed release notes check the release notes. Revitalized IndexedDB providerNow you c...JaySvcUtil - generate JavaScript context from OData metadata: JaySvcUtil 1.2.2: You will need the JayData library to use contexts generated with JaySvcUtil! Get it from here: http://jaydata.codeplex.comVFPX: FoxcodePlus: FoxcodePlus - Visual Studio like extensions to Visual FoxPro IntelliSense.Droid Explorer: Droid Explorer 0.8.8.8 Beta: fixed the icon for packages on the desktop fixed the install dialog closing right when it starts removed the link to "set up the sdk for me" as this is no longer supported. fixed bug where the device selection dialog would show, even if there was only one device connected. fixed toolbar from having "gap" between other toolbar removed main menu items that do not have any menus Fiskalizacija za developere: FiskalizacijaDev 1.0: Prva verzija ovog projekta, još je uvijek oznacena kao BETA - ovo znaci da su naša testiranja prošla uspješno :) No, kako mi ne proizvodimo neki software za blagajne, tako sve ovo nije niti isprobano u "realnim" uvjetima - svaka je sugestija, primjedba ili prijava bug-a je dobrodošla. Za sve ovo koristite, molimo, Discussions ili Issue Tracker. U ovom trenutku runtime binary je raspoloživ kao Any CPU za .NET verzije 2.0. Javite ukoliko trebaju i verzije buildane za 32-bit/64-bit kao i za .N...Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development): NOTE: This is development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins Support for extensions Communication layer and protocol is platform independent (ZeroMQ, ProtocolBuffers) Bug fixes New /invite command Edit the sent message Disable update check NOTE: This is development release and not recommended for production use.AcDown????? - AcDown Downloader Framework: AcDown????? v4.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...PHPExcel: PHPExcel 1.7.8: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated. Note changes to the PDF Writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that 3rd-Party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF Rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...DirectX Tool Kit: October 12, 2012: October 12, 2012 Added PrimitiveBatch for drawing user primitives Debug object names for all D3D resources (for PIX and debug layer leak reporting)Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.70: Fixed issue described in discussion #399087: variable references within case values weren't getting resolved.GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and couple of new features were added. There are no major changes to the sample website. 
Source code could be downloaded from the Source Code section selecting branch release-6.1. Thus just builds of GoogleMap Control are issued here in this release. Update 14.Oct.2012 - Client side access fixed NuGet Package GoogleMap Control 6.1 NuGet Package FeaturesBounds property to provide ability to create a map by center and bounds as well; Setting in markup <artem:Goog...New ProjectsAnonymous InfoPath Browser Forms Web Part: Web part for SharePoint 2007 and 2010 that enables the anonymous submission of InfoPath forms.ath sem5 gr3 proj1: just a simple app to learn how to work in groups, just a start in our programming careerAtif M DNNTaskManager: This is a simple task manager AWS for .NET Sample (Amazon EC2, S3, SQS, DynamoDB): Amazon Web Services (AWS) Sample for .NET (C#) with Asp.NET MVC and Web API. Including S3, DynamoDB, Elastic Beanstalk and SQSCodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these data objects are just doubles or ints floating through your code with abbreviations on them describing what they represent.Display attachments (list view) SP 2010: Display attachments (SP 2010) is a field for all type lists without types: Document library, Image library, Links, Surveys.Donation Tracker: This is a school projectEveHQ : The Internet Spaceship Toolkit: Multi-faceted character application for Eve-Online. Includes pilot monitoring, skill queue planning, ship fitting, industry and more.EventConni: EventConniFileCloud.Net: FileCloud.NET is a .Net wrapper for http://filecloud.io/ service API.Foodies: FoodiesForce PowerPivot Refresh: This project contains an c# utility class (CustomDataRefreshUtils) to refresh a PowerPivot file in SharePoint 2010. Geometric Modeling: REnder a mesh with Visual Studio - C#Heavysoft Mince: Heavysoft Mince is an opportunity to enhance my skills and to show how free software can be used in enterprise-level systems.HP iLO Management Utility: A small utility to provide a nice interface for managing multiple servers over the iLO interface.iMeeting: iMeeting is a simple application built in Java. iTask: iTask for Windows 8 LamWebOS: this is test project.MackProjects_Musicallis: Projeto da disciplina de Técnicas de Programação Aplicada II da Universidade Presbiteriana Mackenzie - SP.Merge PDF: MergePDF is an easy and simple tool which could help you batch merge PDF files (any number you want) into one PDF file quickly. It is completely free. With MerNetGen: O NetGen é um projeto de geração automatizada de interface Web em ASP.NET a partir de modelo de classes do LINQ-TO-SQL, estando disponível como um Addin para o Visual Studio 2008.Other Ways: ?? ! ????!!!!!Physical Therapy Timer: A simple Windows application that makes it easy to set a countdown timer for a required time period, and keep track of how many times the timer has be run.PicoMax: This version of PicoMax has everything you need to use the Seeed OLED without the MS Graphics class. It can be used in Gadgeteer or a regular NETMF project.post tracker: ASP, WEB API, WINDOWS PhoneRandomEngine: Open Source Game Engine.Resident Raver: This is Windows 8 version of Resident Raver which I wrote for my Intro To HTML5 Game Dev book. It requires the Impact JavaScript game framework.RogueTS: RogueTS is a Roguelike engine written in TypeScript. 
It features randomly generated maps, basic lighting effects, collision detection and additional stub code.RonYeeStores: ????????????,??C2C?B2C???????????。ShangHaiTunnel monitor: shang hai tunnel projectSharePoint List Security: Free, open source set of features for SharePoint 2010 to enhance the OOTB list permissions by adding column level permissions for users and groups.Skywave Code Lines Counter: It is a tiny program which will count real written lines of code. Real lines means that we removed autogenerated lines, empty lines and single '{' or '}' lines (in C#) and ... Currently supports .cs and .vb files (for C# and VB) It will say how much you worked ;-)switchcontroles: HAT Switch controle sy seguimientoSyscore: Steps : 1 - Definitions 2 - Requirements 3 - Projecting 4 - Delegation 5 - Prototyping 6 - Testing/Analizing 7 - Finalizing 8 - Distribution 9 - Congratulationstarikscodeplexproject: tarikscodeplexprojectTaskManager04: This project is an effort to implement the coding examples given in the TASK MANAGER video series into a working DotNetNuke program.testc: this is a test ,heihei,yongshengtesttom10172012git01: fds fs fs fs tsUnit - TypeScript Unit Testing Framework: tsUnit is a unit testing framework for TypeScript, written in TypeScript. It allows you to encapsulate your test functions in classes and modules.WebGrease: WebGrease is a suite of tools for optimizing javascript, css files and images.????: ??《?????》、《??》、《???》、《??》???。。。

    Read the article

  • Testing Workflows &ndash; Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx
This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one will focus on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.
Ping Pong
Ping Pong is a methodology commonly used in pair programming. One developer will write a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming. That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).
Test Blazer
Test Blazing, in some respects, is also a pairing strategy. The developers don’t work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task. The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch from source control the latest version of the test project to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number. This can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn’t slowed down writing tests. The tests are in another project from the production code, so there shouldn’t be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests. Can another developer figure out what the system should do just by reading the tests? This question will be answered as the production coder works their way through the test blazer’s tests.
Test Driven Development (TDD)
TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing tests or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle. The goal of TDD isn’t the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. a refactoring), the design of the system emerges. For further information on TDD, I highly recommend the series “Is TDD Dead?”. It discusses its pros and cons and when it is best used.
Acceptance Test Driven Development (ATDD)
Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren’t competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.
Behaviour Driven Development (BDD)
BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests. Typically different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow for test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour-driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can’t get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren’t called tests. Usually they’re called “examples”, “behaviours”, “scenarios”, or “specifications”. This may seem like a very subtle difference, but I’ve seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you’ll get more cooperation.
Keep these test-first and test-after workflows in your tool belt. With them you’ll be able to find new opportunities to apply them.
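To make the red-green-refactor loop concrete, here is a minimal sketch in C#, assuming an xUnit-style test framework and a hypothetical PriceCalculator class; it illustrates the cycle described above rather than reproducing code from the original post.

    using Xunit;

    // Step 1 (red): write a failing test that states the desired behaviour.
    public class PriceCalculatorTests
    {
        [Fact]
        public void Applies_ten_percent_discount_to_orders_of_100_or_more()
        {
            var calculator = new PriceCalculator();
            Assert.Equal(180m, calculator.Total(orderAmount: 200m));
        }
    }

    // Step 2 (green): write the simplest production code that makes the test pass.
    public class PriceCalculator
    {
        public decimal Total(decimal orderAmount)
        {
            // Deliberately naive; later tests will force a more general rule.
            return orderAmount >= 100m ? orderAmount * 0.9m : orderAmount;
        }
    }

    // Step 3 (refactor): with the test green, rename, extract and clean up freely,
    // rerunning the suite after each small change to keep the safety net intact.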

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser Editor’s Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects. Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems.  At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve “problems” anymore; it is, “I want our products to delight and excite!” The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective. I’m not sure where I first heard “delight and excite” (A book? blog post? Facebook  status? Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, “Make it like Apple.” In past days, it was “make it like Yahoo (or Amazon or Google“) but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs’ passing and Apple becoming the world’s most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users’ expectations have gone way up. Being an enterprise company is no shield to the general expectations that users now have, for all products. Designing a “Minimum Viable Product” The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup) , to create a “minimum viable product”: the proverbial, “make it good enough”. But, in our profession, the “minimum viable” part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is, how are we going to evaluate these new emerging “delight and excite” experiences, which are further customized to each particular domain? How to Measure “Delight and Excite” Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences “Satisfaction” is usually assessed in user testing, in roughly equivalent importance to the above objective metrics. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses. 
We have had success assessing custom experiences, including “delight and excite”, by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we’ve been using for a long time, Magnitude Estimation Technique (MET). But it’s not necessarily about MET as a measure, rather how it is created. Caption: For one client, EchoUser did two rounds of testing.  Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person moderated 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements. MET is based on a definition of the desired experience, which users will then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks; “delight and excite”; or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: “User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks.” Articulating the User Experience We’ve helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: “User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. ‘Safe’ is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury.” Another example is from a company that is pushing hard to incorporate “delight” into their enterprise business line: “User experience is your perception of a product’s ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives.” I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products. We have gone further than the above, including “sexy” and “cool” where decision-makers insisted they were part of the desired experience. We also applied it to completely different experiences where the “interface” was, for example, riding public transit, the “tasks” were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: “A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time.” To construct these definitions, we’ve employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., “delight and excite”). We like a combination of both approaches to push the innovation envelope, but still be mindful of current user concerns. 
Hopefully the idea of crafting your own custom experience, and a way to measure it, can provide you with some ideas about how you can adapt your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience.
The Bottom Line
Creating great experiences may have been popularized by Steve Jobs and Apple, but I’ll be honest, it’s a good feeling to be moving from “good enough” to “delight and excite,” despite the challenge that entails. In fact, it’s because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I’m excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article

  • Clouds Everywhere But not a Drop of Rain – Part 3

    - by sxkumar
    I was sharing with you how a broad-based transformation such as cloud will increase agility and efficiency of an organization if process re-engineering is part of the plan.  I have also stressed on the key enterprise requirements such as “broad and deep solutions, “running your mission critical applications” and “automated and integrated set of capabilities”. Let me walk you through some key cloud attributes such as “elasticity” and “self-service” and what they mean for an enterprise class cloud. I will also talk about how we at Oracle have taken a very enterprise centric view to developing cloud solutions and how our products have been specifically engineered to address enterprise cloud needs. Cloud Elasticity and Enterprise Applications Requirements Easy and quick scalability for a short-period of time is the signature of cloud based solutions. It is this elasticity that allows you to dynamically redistribute your resources according to business priorities, helps increase your overall resource utilization, and reduces operational costs by allowing you to get the most out of your existing investment. Most public clouds are offering a instant provisioning mechanism of compute power (CPU, RAM, Disk), customer pay for the instance-hours(and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed. This type of “just-in-time” serving of compute resources is well known for mid-tiers “state less” servers such as web application servers and web servers that just need another machine to start and run on it but what does it really mean for an enterprise application and its underlying data? Most enterprise applications are not as quite as “state less” and justifiably so. As such, how do you take advantage of cloud elasticity and make it relevant for your enterprise apps? This is where Cloud meets Grid Computing. At Oracle, we have invested enormous amount of time, energy and resources in creating enterprise grid solutions. All our technology products offer built-in elasticity via clustering and dynamic scaling. With products like Real Application Clusters (RAC), Automatic Storage Management, WebLogic Clustering, and Coherence In-Memory Grid, we allow all your enterprise applications to benefit from Cloud elasticity –both vertically and horizontally - without requiring any application changes. A number of technology vendors take a rather simplistic route of starting up additional or removing unneeded VM as the "Cloud Scale-Out" solution. While this may work for stateless mid-tier servers where load balancers can handle the addition and remove of instances transparently but following a similar approach for the database tier - often called as "database sharding" - requires significant application modification and typically does not work with off the shelf packaged applications. Technologies like Oracle Database Real Application Clusters, Automatic Storage Management, etc. on the other hand bring the benefits of incremental scalability and on-demand elasticity to ANY application by providing a simplified abstraction layers where the application does not need deal with data spread over multiple database instances. Rather they just talk to a single database and the database software takes care of aggregating resources across multiple hardware components. It is the technologies like these that truly make a cloud solution relevant for enterprises.  
For customers who are looking for a next generation hardware consolidation platform, our engineered systems (e.g. Exadata, Exalogic) not only provide incredible amount of performance and capacity, they also reduce the data center complexity and simplify operations. Assemble, Deploy and Manage Enterprise Applications for Cloud Products like Oracle Virtual assembly builder (OVAB) resolve the complex problem of bringing the cloud speed to complex multi-tier applications. With assemblies, you can not only provision all components of a multi-tier application and wire them together by push of a button, other aspects of application lifecycle, such as real-time application testing, scale-up/scale-down, performance and availability monitoring, etc., are also automated using Oracle Enterprise Manager.  An essential criteria for an enterprise cloud to succeed is the ability to ensure business service levels especially when business users have either full visibility on the usage cost with a “show back” or a “charge back”. With Oracle Enterprise Manager 12c, we have created the most comprehensive cloud management solution in the industry that is capable of managing business service levels “applications-to-disk” in a enterprise private cloud – all from a single console. It is the only cloud management platform in the industry that allows you to deliver infrastructure, platform and application cloud services out of the box. Moreover, it offers integrated and complete lifecycle management of the cloud - including planning and set up, service delivery, operations management, metering and chargeback, etc .  Sounds unbelievable? Well, just watch this space for more details on how Oracle Enterprise Manager 12c is the nerve center of Oracle Cloud! Our cloud solution portfolio is also the broadest and most deep in the industry  - covering public, private, hybrid, Infrastructure, platform and applications clouds. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And to a large part, this has been made possible thanks to our years on investment in creating cloud enabling technologies.  Summary  But the intent of this blog post isn't to dwell on how great our solutions are (these are just some examples to illustrate how we at Oracle have approached this problem space). Rather it is to help you ask the right questions before you embark on your cloud journey.  So to summarize, here are the key takeaways.       It is critical that you are clear on why you are building the cloud. Successful organizations keep business benefits as the first and foremost cloud objective. On the other hand, those who approach this purely as a technology project are more likely to fail. Think about where you want to be in 3-5 years before you get started. Your long terms objectives should determine what your first step ought to be. As obvious as it may seem, more people than not make the first move without knowing where they are headed.  Don’t make the mistake of equating cloud to virtualization and Infrastructure-as-a-Service (IaaS). Spinning a VM on-demand will give some short term relief to your IT staff but is unlikely to solve your larger business problems. As such, even if IaaS is your first step towards a more comprehensive cloud, plan the roadmap around those higher level services before you begin. And ask your vendors on how they are going to be your partners in this journey. 
Capabilities like self-service access and chargeback/showback are absolutely critical if you really expect your cloud to be transformational. Your business won't see the full benefits of the cloud until it empowers them with the same kind of control and transparency that they are used to when using a public cloud service.  Evaluate the benefits of integration, as opposed to blindly following a best-of-breed strategy. Integration is a huge challenge, and even more so in a cloud environment. There are enormous costs associated with stitching a solution together out of disparate components, and even more in maintaining it. Hope you found these ideas helpful. Looking forward to hearing your thoughts and experiences.

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements. Easy on/offI want to be able to turn list-logging on and off.Easy loggingFor me, this means being able to use string.Format.Easy filteringLet's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error". Easy on/off Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately: public void CreateLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListName);             if (list == null)             {                 var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);                 list = web.Lists[listGuid];                 list.Title = LogListTitle;                 list.Update();                 list.Fields.Add(Category, SPFieldType.Text, false);                 var stringColl = new StringCollection();                 stringColl.AddRange(new[]{Error, Information, Verbose});                 list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);                 ModifyDefaultView(list);             }         }Should be self explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name, and, if necessary, give it a better title. ...because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields; a field for category, and a 'severity'. Both to make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:         public void DeleteLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListTitle);             if (list != null)             {                 list.Delete();             }         } So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message) which again calls another method which calls: private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)         {             if (web != null)             {                 var list = web.Lists.TryGetList(LogListTitle);                 if (list != null)                 {                     var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?                     
var text = string.Format(textformat, texts);                     if (text.Length > 255) // because the title field only holds so many chars. Sigh.                         text = text.Substring(0, 254);                     item[SPBuiltInFieldId.Title] = text;                     item[Degree] = severity;                     item[Category] = category;                     item.Update();                 }             } // omitted: Also log to SharePoint log.         } By adding a params parameter I can call it as if I was doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); Ok, that was a silly example, a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier.         private void ModifyDefaultView(SPList list)         {             // Add fields to default view             var defaultView = list.DefaultView;             var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);               if (!exists)             {                 var field = list.Fields.GetFieldByInternalName(Severity);                 if (field != null)                     defaultView.ViewFields.Add(field);                 field = list.Fields.GetFieldByInternalName(Category);                 if (field != null)                     defaultView.ViewFields.Add(field);                 defaultView.Update();                   var sortDoc = new XmlDocument();                 sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));                 var orderBy = (XmlElement) sortDoc.SelectSingleNode("//OrderBy");                 if (orderBy != null && sortDoc.DocumentElement != null)                     sortDoc.DocumentElement.RemoveChild(orderBy);                 orderBy = sortDoc.CreateElement("OrderBy");                 sortDoc.DocumentElement.AppendChild(orderBy);                 field = list.Fields[SPBuiltInFieldId.Modified];                 var fieldRef = sortDoc.CreateElement("FieldRef");                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                   fieldRef = sortDoc.CreateElement("FieldRef");                 field = list.Fields[SPBuiltInFieldId.ID];                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                 defaultView.Query = sortDoc.DocumentElement.InnerXml;                 //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";                 defaultView.Update();             }         } First two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done.Adding "severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the dom to build XML, is that you may get compile time errors for spelling mistakes. 
I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with the normal functionality for creating your own filtered views. I should probably have added some more views in code, ready-filtered for "only errors", "errors and warnings", etc. And it would be nice to block verbose logging completely, but I'm not happy with the alternatives (yet another feature or an admin page seems like overkill - perhaps just removing it as one of the choices, and not logging if it isn't there?). Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
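On that last point, here is a minimal sketch, not from the original post, of one way to guard the logging calls so that a failure in the logging code can never take the page down; the SafeLogging class name and the fallback to the trace log are illustrative assumptions.

    using System;
    using System.Diagnostics;

    // Wrap any list-logging call in a guard that swallows and traces failures
    // instead of letting them bubble up into page, event receiver or feature code.
    public static class SafeLogging
    {
        public static void Run(Action logCall)
        {
            try
            {
                logCall();
            }
            catch (Exception ex)
            {
                // The list may be missing, locked or permission-restricted;
                // degrade to the trace log rather than fail the request.
                Trace.TraceError("List logging failed: {0}", ex);
            }
        }
    }

Usage, assuming the LogError helper shown earlier: SafeLogging.Run(() => LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex));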

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I’d like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was “BI From A-Z”, and its main objective was not only to introduce people to the BI world from a technical point of view, but also to insist a lot on a methodological, “engineered” approach. The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines “just” by putting people together and giving them some rules to follow (of course this is an oversimplification, but I think you get what I mean). The key point of engineering is that, after having defined the project blueprint, you have the possibility to give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is now limited to BI. So that’s the main point of my workshop: an entry-level approach to BI (the level was 200) in order to allow attendees to know the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All done not focusing only on the practice, but also explaining the theory behind it, to show how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language. As you may understand, I was looking forward to the feedback, since you may have noticed that there’s a lot of “architectural” stuff in IT but really nothing on “engineering”. So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more “engineering” is something only I feel, or whether it is something broader. I’m very happy to be able to say that the overall score of 4.75 put my workshop in the TOP 20 sessions (out of nearly 200 sessions)! Here are the detailed evaluations:
How would you rate the usefulness of the information presented in your day-to-day environment? 4.75 (1 response of 3, 12 of 4, 42 of 5)
How would you rate the Speaker's presentation skills? 4.80 (1 response of 3, 9 of 4, 45 of 5)
How would you rate the Speaker's knowledge of the subject? 4.95 (3 responses of 4, 52 of 5)
How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75 (2 responses of 3, 10 of 4, 43 of 5)
How would you rate the amount of time allocated to cover the topic/session? 4.44 (7 responses of 3, 17 of 4, 31 of 5)
How would you rate the quality of the presentation materials? 4.62 (21 responses of 4, 34 of 5)
The comments were all very positive. Many of them asked for more time on the subject (or to shorten the very last topics). I’ll treasure these comments and will review the content accordingly. We’ll organize a two-day class on this topic, where more examples will be shown and some topics will be explained in more depth. I’d just like to answer a comment that asks how much of what I showed is “universally applicable”. I can tell you that all of our BI projects follow these rules; they’ve been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they have allowed us to be “Adaptive” towards the customer. The better the rules are defined, and the more tools there are that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle’s Engineer Chris Kasso, and Technology Evangelist Arun Gupta, presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth with each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips that are intended for both seasoned and new users, that are meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish. Attendees were encouraged to pursue tips that contained new information for them. All 50 tips can be accessed here.Below are several examples of more elaborate tips and a final practical tip on how to get in touch with these folks. Tip #1: Using the login Command * To execute a remote command with asadmin you must provide the admin's user name and password.* The login command allows you to store the login credentials to be reused in subsequent commands.* Can be logged into multiple servers (distinguish by host and port). Example:     % asadmin --host ouch login     Enter admin user name [default: admin]>     Enter admin password>     Login information relevant to admin user name [admin]     for host [ouch] and admin port [4848] stored at     [/Users/ckasso/.asadminpass] successfully.     Make sure that this file remains protected.     Information stored in this file will be used by     asadmin commands to manage the associated domain.     Command login executed successfully.     % asadmin --host ouch list-clusters     c1 not running     Command list-clusters executed successfully.Tip #4: Using the AS_DEBUG Env Variable* Environment variable to control client side debug output* Exposes: command processing info URL used to access the command:                           http://localhost:4848/__asadmin/uptime Raw response from the server Example:   % export AS_DEBUG=true  % asadmin uptime  CLASSPATH= ./../glassfish/modules/admin-cli.jar  Commands: [uptime]  asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm      ------- RAW RESPONSE  ---------   Signature-Version: 1.0   message: Up 7 mins 10 secs   milliseconds_value: 430194   keys: milliseconds   milliseconds_name: milliseconds   use-main-children-attribute: false   exit-code: SUCCESS  ------- RAW RESPONSE  ---------Tip #11: Using Password Aliases * Some resources require a password to access (e.g. 
DB, JMS, etc.).* The resource connector is defined in the domain.xml.Example:Suppose the DB resource you wish to access requires an entry like this in the domain.xml:     <property name="password" value="secretp@ssword"/>But company policies do not allow you to store the password in the clear.* Use password aliases to avoid storing the password in the domain.xml* Create a password alias:     % asadmin create-password-alias DB_pw_alias     Enter the alias password>     Enter the alias password again>     Command create-password-alias executed successfully.* The password is stored in domain's encrypted keystore.* Now update the password value in the domain.xml:     <property name="password" value="${ALIAS=DB_pw_alias}"/>Tip #21: How to Start GlassFish as a Service * Configuring a server to automatically start at boot can be tedious.* Each platform does it differently.* The create-service command makes this easy.   Windows: creates a Windows service Linux: /etc/init.d script Solaris: Service Management Facility (SMF) service * Must execute create-service with admin privileges.* Can be used for the DAS or instances* Try it first with the --dry-run option.* There is a (unsupported) _delete-serverExample:     # asadmin create-service domain1     The Service was created successfully. Here are the details:     Name of the service:application/GlassFish/domain1     Type of the service:Domain     Configuration location of the service:/work/gf-3.1.2.2/glassfish3/glassfish/domains     Manifest file location on the system:/var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.     You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:     * /usr/bin/svcs  -a | grep domain1  // status     * /usr/sbin/svcadm enable domain1 // start     * /usr/sbin/svcadm disable domain1 // stop     * /usr/sbin/svccfg delete domain1 // uninstallTip #34: Posting a Command via REST* Use wget/curl to execute commands on the DAS.Example:  Deploying an application   % curl -s -S \       -H 'Accept: application/json' -X POST \       -H 'X-Requested-By: anyvalue' \       -F id=@/path/to/application.war \       -F force=true http://localhost:4848/management/domain/applications/application* Use @ before a file name to tell curl to send the file's contents.* The force option tells GlassFish to force the deployment in case the application is already deployed.* Use wget/curl to execute commands on the DAS.Example:  Deploying an application   % curl -s -S \       -H 'Accept: application/json' -X POST \       -H 'X-Requested-By: anyvalue' \       -F id=@/path/to/application.war \       -F force=true http://localhost:4848/management/domain/applications/application* Use @ before a file name to tell curl to send the file's contents.* The force option tells GlassFish to force the deployment in case the application is already deployed.Tip #46: Upgrading to a Newer Version * Upgrade applications and configuration from an earlier version* Upgrade Tool: Side-by-side upgrade– GUI: asupgrade– CLI: asupgrade --c– What happens ?* Copies older source domain -> target domain directory* asadmin start-domain --upgrade* Update Tool and pkg: In-place upgrade– GUI: updatetool, install all Available Updates– CLI: pkg image-update– Upgrade the domain* asadmin start-domain --upgradeTip #50: How to reach us?* GlassFish Forum: http://www.java.net/forums/glassfish/glassfish* [email protected]* @glassfish* facebook.com/glassfish* 
youtube.com/GlassFishVideos * blogs.oracle.com/theaquarium
Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta’s new book, Java EE 6 Pocket Guide.
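Referring back to Tip #34 above, here is a rough .NET equivalent of that REST deployment call, offered only as a sketch: it assumes a local DAS on the default admin port 4848, a placeholder WAR path, and anonymous local admin access (add credentials via HttpClientHandler otherwise), and it is not code from the session.

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    class GlassFishDeploy
    {
        static async Task Main()
        {
            using var client = new HttpClient();
            client.DefaultRequestHeaders.Add("Accept", "application/json");
            // GlassFish's REST admin interface rejects requests without this header (CSRF guard).
            client.DefaultRequestHeaders.Add("X-Requested-By", "anyvalue");

            using var form = new MultipartFormDataContent();
            // "id" carries the archive itself, mirroring curl's -F id=@/path/to/application.war
            form.Add(new ByteArrayContent(File.ReadAllBytes("/path/to/application.war")),
                     "id", "application.war");
            // "force" redeploys if the application is already deployed.
            form.Add(new StringContent("true"), "force");

            var response = await client.PostAsync(
                "http://localhost:4848/management/domain/applications/application", form);
            Console.WriteLine(response.StatusCode);
        }
    }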

    Read the article

  • Automatic Maintenance Jobs in every PDB? New SPM Evolve Advisor Task in Oracle 12.1.0.2

    - by Mike Dietrich
    A customer checking out our slides from the OTN Tour in August 2014 asked me a finicky question the other day: "According to the documentation the Automatic SQL Tuning Advisor maintenance task gets executed only within the CDB$ROOT, but not within each PDB - but the slides are not clear here. So what is the truth?" OK, that's a good question. In my understanding, all tasks will get executed within each PDB - that's why we recommend (based on experience) breaking up the default maintenance windows when using Oracle Multitenant. Otherwise all PDBs will have the same maintenance windows, and guess what will happen when 25 PDBs start gathering object statistics at the same time ... The documentation indeed says: "Automatic SQL Tuning Advisor data is stored in the root. It might have results about SQL statements executed in a PDB that were analyzed by the advisor, but these results are not included if the PDB is unplugged. A common user whose current container is the root can run SQL Tuning Advisor manually for SQL statements from any PDB. When a statement is tuned, it is tuned in any container that runs the statement." This sounds reasonable. But when we have a look into our PDBs or into the CDB_AUTOTASK_CLIENT view, the result is different from what the doc says. In my environment I created just two fresh, empty PDBs (CON_ID 3 and 4):

SQL> select client_name, status, con_id from cdb_autotask_client;

CLIENT_NAME                           STATUS         CON_ID
------------------------------------- ---------- ----------
auto optimizer stats collection       ENABLED             1
sql tuning advisor                    ENABLED             1
auto space advisor                    ENABLED             1
auto optimizer stats collection       ENABLED             4
sql tuning advisor                    ENABLED             4
auto space advisor                    ENABLED             4
auto optimizer stats collection       ENABLED             3
sql tuning advisor                    ENABLED             3
auto space advisor                    ENABLED             3

9 rows selected.

I haven't verified the reason why this is different from the docs, but it may be related to one change in Oracle Database 12.1.0.2: the new SPM Evolve Advisor task (SYS_AUTO_SPM_EVOLVE_TASK) for automatic plan evolution for SQL Plan Management. This new task doesn't appear as a stand-alone job (client) in the maintenance window but runs as a sub-entity of the Automatic SQL Tuning Advisor task. And (I'm just guessing) this may be one of the reasons why every PDB will have to have its own Automatic SQL Tuning Advisor task. Here you'll find more information about how to enable, disable and configure the new Oracle 12.1.0.2 SPM Evolve Advisor task: Oracle Database 12.1.0.2 SQL Tuning Guide: Managing the SPM Evolve Advisor Task -Mike

    Read the article

  • MySQL Utility Users' Console Overview

    - by rudrap
    MySQL Utility Users' Console (mysqluc): The MySQL Utilities Users' Console is designed to make using the utilities easier via a dedicated console. It lets you use the utilities without worrying about the Python and utility paths.
    Why do we need a special console?
    - It provides a dedicated shell environment with command completion, help for each utility, user-defined variables, and type completion for options.
    - You no longer have to type out the entire name of a utility.
    - You don't need to remember the name of the database utility you want to use.
    - You can define variables and reuse them in your utility commands.
    - You can run a single utility command through mysqluc and drop straight back to your shell, without staying in the mysqluc console.
    Console commands:
    mysqluc> help
    Command                    Description
    -------------------------  ---------------------------------------------------
    help utilities             Display list of all utilities supported.
    help <utility>             Display help for a specific utility.
    help or help commands      Show this list.
    exit or quit               Exit the console.
    set <variable>=<value>     Store a variable for recall in commands.
    show options               Display list of options specified by the user on launch.
    show variables             Display list of variables.
    <ENTER>                    Press ENTER to execute a command.
    <ESCAPE>                   Press ESCAPE to clear the command entry.
    <DOWN>                     Press DOWN to retrieve the previous command.
    <UP>                       Press UP to retrieve the next command in history.
    <TAB>                      Press TAB for type completion of utility, option, or variable names.
    <TAB><TAB>                 Press TAB twice for a list of matching type completions (context sensitive).
    How do I use it?
    Pre-requisites:
    - Download the latest version of MySQL Workbench.
    - The MySQL servers are running.
    - Your PYTHONPATH is set, e.g. export PYTHONPATH=/...../mysql-utilities/
    Check the version of the mysqluc utility:
    /usr/bin/python mysqluc.py --version
    It should display something like this:
    MySQL Utilities mysqluc.py version 1.1.0 - MySQL Workbench Distribution 5.2.44
    Copyright (c) 2010, 2012 Oracle and/or its affiliates. All rights reserved. This program is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, to the extent permitted by law.
    Use TAB to see the available utilities:
    mysqluc> mysqldb<TAB><TAB>
    Utility          Description
    -------------    ------------------------------------------------------------
    mysqldbcopy      copy databases from one server to another
    mysqldbexport    export metadata and data from databases
    mysqldbimport    import metadata and data from files
    mysqluc> mysqldbcopy --source=$se<TAB>
    Variable Value
    -------- ----------------------------------------------------------------------
    server1  root@localhost:3306
    server2  root@localhost:3307
    You can see the variables starting with "se" and then decide which one to use.
    Run a utility via the console:
    /usr/bin/python mysqluc.py -e "mysqldbcopy --source=root@localhost:3306 --destination=root@localhost:3307 dbname"
    Get help for utilities in the console:
    mysqluc> help utilities        (display help for each utility)
    mysqluc> help mysqldbcopy      (details about mysqldbcopy and its options)
    Set variables and use them in commands:
    mysqluc> set server1 = root@localhost:3306
    mysqluc> show variables
    Variable Value
    -------- ----------------------------------------------------------------------
    server1  root@localhost:3306
    server2  root@localhost:3307
    mysqluc> mysqldbcopy --source=$server1 --destination=$server2 dbname <ENTER>
    The mysqldbcopy utility output will be displayed.
    mysqluc> show options
    Displays the list of options specified by the user on launch, e.g. after starting the console with:
    mysqluc SERVER=root@host123 VAR_A=57 -e "show variables"
    Variable Value
    -------- -----------------------------------------------------------------
    SERVER   root@host123
    VAR_A    57
    Finding option names for a utility:
    mysqluc> mysqlserverclone --n
    Option               Description
    -------------------  ---------------------------------------------------------
    --new-data=NEW_DATA  the full path to the location of the data directory for the new instance
    --new-port=NEW_PORT  the new port for the new instance - default=3307
    --new-id=NEW_ID      the server_id for the new instance - default=2
    Limitations: user-defined variables last only for the lifetime of the console session.
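    If you want to wrap the non-interactive -e form shown above in a script, a minimal Python sketch could look like this. The PYTHONPATH value is a placeholder and the server addresses are the example ones from this post, not a verified setup.

        # Minimal sketch: drive mysqluc non-interactively via its documented -e option.
        # PYTHONPATH and the server addresses are placeholders from the examples above.
        import os
        import subprocess

        env = dict(os.environ, PYTHONPATH="/path/to/mysql-utilities/")
        cmd = ["python", "mysqluc.py", "-e",
               "mysqldbcopy --source=root@localhost:3306 "
               "--destination=root@localhost:3307 dbname"]
        result = subprocess.run(cmd, env=env, capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print("mysqluc failed:", result.stderr)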

    Read the article
