Search Results



  • CodePlex Daily Summary for Sunday, May 30, 2010

    New Projects

    - Aviva Solutions C# Coding Guidelines: A set of C# coding guidelines, coding standards, layout rules, FxCop rulesets and (upcoming) custom FxCop and StyleCop rules for improving the over...
    - BKWork: private project.
    - classbook: A project to be completed within 10 days. Team: Do Bao Linh, Phan Thanh Tai, Nguyen Dang Loc.
    - Du an - 01: A project for Mr. Luong's course.
    - EndNote助手 (EndNote Helper): An assistant tool for EndNote that makes reference management more convenient.
    - Evoucher: Simple Evoucher Sales System.
    - Fiddler Delayed Responses Extension: A Fiddler extension that helps developers delay the delivery of HTML responses to applications. Some delay user stories: delivery of CSS to HTML ...
    - Generic Entity Model 2: GEM2 is a lightweight entity framework for building custom business solutions. It enables a rapid approach to entity logic design, while offering out o...
    - GY06: A project launched in 2009 at the Changda Institute of Technology; it was never completed for various reasons.
    - JoshDOS: JoshDOS is a command line operating system kernel based on COSMOS. It can be booted from actual hardware and built in Visual Studio using .NET la...
    - PhysicsFMUDeluxe.NET: This project was developed with the aim of practicing C# application development. It contains tools for physics and mathematics calculations...
    - Reactor Services Platform: Reactor is a service composition and deployment grid that streamlines developing, composing, deploying and managing services. Reactor Services are ...
    - SerafinApartment: Simple MVC 2 application for apartment rental. It is a multilingual booking system that allows users to register, book and subscribe for notifications...
    - Silverlight Audio Effect Box: This is a C# Silverlight 4 sample application which processes audio samples in near real time. It allows capturing the default audio input device and...
    - Smart Voice: Smart Voice lets you control Skype using your voice. It allows you to write messages, issue phone calls, etc. This application was developed think...
    - WebDotNet - The minimalist web framework inspired by web.py: WebDotNet is an experiment in web frameworks. Inspired by the Python web framework web.py, it is an exercise in extreme minimalism in a framework...

    New Releases

    - Acies: Acies - Alpha Build 0.0.7: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...)
    - AdventureWorksLT 2008 Sample Database Script: AdventureWorksLT 2008 R2 DB Script: This script is based on the latest download from the sample database for SQL Server 2008 R2. The original download from the sample is approx 84 MB and ...
    - ASP.NET Wiki Control: Release 1.3.1: Removed ASP.NET Session dependency. BreadCrumbs will now work with sessions disabled. Can now also share a URL and have the breadcrumb appropriat...
    - Aviva Solutions C# Coding Guidelines: Visual Studio 2010 Rule Sets: Rule sets targeting different styles of projects.
    - bvcms - Bellevue Church Management System: Source: This source was used to build the latest {church}.bvcms.com.
    - CC.Yacht: CC.Yacht 1.0.10.529: This is the initial release of CC.Yacht. Marked as beta since I don't have any testing/feedback beyond that of myself and my wife.
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V14: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V15: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
    - Coronasoft Cryostasis scripting engine: CCSE v0.0.1.0 BETA: This is the 0.0.1.0 beta release. If you find any bugs, post them as a comment or in the Discussions tab.
    - EndNote助手 (EndNote Helper): EndNote助手 2.1.0.0: Removed the registration verification mechanism.
    - Fiddler Delayed Responses Extension: v0.1: Version 0.1 of Fiddler Delayed Responses Extension. See ChangeLog for more information.
    - Generic Entity Model 2: GEM2 build 52510: This is the first BETA release of GEM2! The following implementation is still missing from the initial plan: detailed documentation, MySQL operational databa...
    - JoshDOS: JoshDOS Source: This is the source for the JoshDOS 1.0 OS kernel. You need the COSMOS user kit to use it.
    - JoshDOS: Shell Version 1.0: What's in this download: JoshDOS user kit, JoshDOS VStudio starter kit, JoshDOS documentation. Note: You will need the COSMOS user kit to start deve...
    - miniTodo: mini Todo version 0.3: Fixed a bug where no sound played when a todo was completed.
    - Model Virtual Casting - ASPItalia.com: Model Virtual Casting 0.2: This second release of ModelVC corresponds to the version shown at the Real Code Conference 4.0, held ...
    - PhysicsFMUDeluxe.NET: PhysicsFMUDeluxe.NET - Setup: First public release of PhysicsFMUDeluxe.NET.
    - Silverlight Audio Effect Box: Echo Box 1.0: First release - zip contains: web page + xap file.
    - Smart Voice: Smart Voice 0.1: Here is the first alpha release of Smart Voice. Please remember this was done for a curricular unit project at my university and I understand that ...
    - StyleCop Contrib: Custom rules v0.2: This release of the custom rules targets StyleCop 4.3.3. Included rules are: Spacing Rules (NoTrailingWhiteSpace, IndentUsingTabs), Ordering Rules ...
    - System.ComponentModel.DataAnnotations Contrib: 0.2.46280.0: Built with Visual Studio 2010/.NET 4.0. Compiled from source code changeset 46280.
    - VB Styler: VB Styler Suite V 1.3.0.0: This is the newest version of the VB Styler. New features: a new Imaging ColorPicker, a template for getting colors via sliders, ColorR...
    - VCC: Latest build, v2.1.30529.0: Automatic drop of latest build.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.90 RC: Version 1.0.0.90 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. ...
    - XNA Collision Detection: XNA Collision Detection Sample Program: I have coded a compact program which shows the collision detection working, and provides a camera class for rendering and moving the "player." Agai...

    Most Popular Projects

    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - PHPExcel
    - ASP.NET

    Most Active Projects

    - AStar.net
    - patterns & practices – Enterprise Library
    - Community Forums NNTP bridge
    - BlogEngine.NET
    - GMap.NET - Great Maps for Windows Forms & Presentation
    - Ionics Isapi Rewrite Filter
    - Rawr
    - Customer Portal Accelerator for Microsoft Dynamics CRM
    - Facebook Developer Toolkit
    - PAP


  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL-based environment, unlike the CLR world, we pass around the 'state' of an object and not a reference to an object. Firstly, what does 'state' mean? And does it also mean that we can send a struct where a class is expected (or vice-versa), as long as their 'state' is one and the same? Let's see. I have an operation contract defined as below:

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract]
            Employee SaveEmployee(Employee employee);
        }

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Quite a simplistic operation there (which translates to 'absolutely no business value'). Now, the data contract Employee mentioned above is a struct:

        public struct Employee
        {
            public int EmployeeId { get; set; }

            public string FName { get; set; }
        }

    After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I've ignored the rest of the details just to avoid unwanted confusion):

        public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    I call the service with the code below:

        private static void CallWcfService()
        {
            Employee employee = new Employee { FName = "A" };
            Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType);
            Console.WriteLine("IsClass: {0}", employee.GetType().IsClass);
            Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName);
            employee = LearnWcfServiceClient.SaveEmployee(employee);
            Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName);
        }

    The output is: [screenshot of console output] I now change my Employee type from a struct to a class in the proxy class and run the application:

        public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    The output this time is: [screenshot of console output] The 'state' of an object refers to its composition - the properties and the values of those properties - and not to whether it is a reference type (class) or a value type (struct). And as shown above, we are actually passing an object by its state and not by reference.

    Continuing on the same topic of 'type-interchangeability': WCF treats two data contracts as equivalent if they have the same 'wire-representation'. We can arrange this using the Name property of the DataContract and DataMember attributes:

        [DataContract]
        public struct Person
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }
        }

        [DataContract(Name = "Person")]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name = "FirstName")]
            public string FName { get; set; }
        }

    I've created two data contracts with the exact same wire-representation. Just remember that the names and the types of the data members need to match for the contracts to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted - Person. This is because we're saying that the Employee type has the same wire-representation as the Person type.
    Note also that the signature of the SaveEmployee operation changes on the proxy side:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
        [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")]
        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")]
            ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee);
        }

    But on the service side, SaveEmployee still accepts and returns an Employee data contract:

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Despite all these changes, our output remains the same as the last one: [screenshot of console output] This is type-interchangeability at work! Here's one more thing to ponder: our Person type is a struct and our Employee type is a class. Then how is it that the Person type got emitted as a 'class' in the proxy? It's worth mentioning that the WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below):

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:tem="http://tempuri.org/"
                          xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication">
           <soapenv:Header/>
           <soapenv:Body>
              <tem:SaveEmployee>
                 <!--Optional:-->
                 <tem:employee>
                    <!--Optional:-->
                    <ser:EmployeeId>?</ser:EmployeeId>
                    <!--Optional:-->
                    <ser:FName>?</ser:FName>
                 </tem:employee>
              </tem:SaveEmployee>
           </soapenv:Body>
        </soapenv:Envelope>

    There are some differences between how 'Add Service Reference' and svcutil.exe generate the proxy class, but it turns out both do some kind of reflection to determine the type of the data contract and emit the code accordingly. So since the Employee type is a class, the proxy 'Person' type gets generated as a class. In fact, reflecting on the svcutil.exe application, you'll see that there are a couple of places where a flag determines whether a type is emitted as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. It seems these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool, though: the WSDL tool does not do any kind of reflection of the data contract / serialized type; it emits the type as a class by default. You can check this using the two command lines below: [screenshot of the two command lines]

    Note to self: remember 'state' and type-interchangeability when traversing the WSDL planet!
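
    To see the merged contract from the client's perspective, only the type at the call site changes. A minimal sketch (not from the original post), assuming the same generated LearnWcfServiceClient proxy used above:

        // After the Name-based merge, the proxy exposes only the Person type,
        // so the client builds a Person even though the service works with Employee.
        Person person = new Person { FirstName = "A" };
        person = LearnWcfServiceClient.SaveEmployee(person);
        // The service sets EmployeeId = 123, which travels back as Person.Id:
        Console.WriteLine("{0} - {1}", person.Id, person.FirstName); // 123 - A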


  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially, and every organization with growing data is thinking about the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is finding an ideal platform that supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts:

    - SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    - SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    - SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I've received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process:

    1. NuoDB Migrator generates a schema for a target NuoDB database
    2. It dumps data from the source database
    3. It loads the data into the target NuoDB database

    Let's see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in MSSQL; later we will migrate that same database to NuoDB.

    Setup Step 1: Build sample data

        CREATE DATABASE [Test];

        CREATE TABLE [Department](
            [DepartmentID] [smallint] NOT NULL,
            [Name] VARCHAR(100) NOT NULL,
            [GroupName] VARCHAR(100) NOT NULL,
            [ModifiedDate] [datetime] NOT NULL,
            CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED
            (
                [DepartmentID] ASC
            )
        ) ON [PRIMARY];

        INSERT INTO Department
        SELECT * FROM AdventureWorks2012.HumanResources.Department;

    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer.

    Setup Step 2: Install 64-bit Java

    Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is because the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OS X, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that the path in your environment settings is set to your JAVA_HOME directory, or else the tool will not work. Here is how you can do it: go to My Computer >> right click >> select Properties >> click on Advanced System Settings >> click on Environment Variables >> click on New and enter the following values:

        Variable Name: JAVA_HOME
        Variable Value: C:\Program Files\Java\jre7

    Make sure you enter your own Java installation directory in the Variable Value field.

    Setup Step 3: Install a JDBC driver for SQL Server.
    There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below:

    - Microsoft JDBC Driver
    - jTDS JDBC Driver

    In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.

    Migration Step 1: NuoDB Schema Generation

    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test'. Additionally, my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'.

        nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command will generate a schema of all my SQL Server tables and put it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how to execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

    Migration Step 2: Generate the Dump File of the Data

    Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file, which will contain all the data from all the tables of the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time depending on your database size.

        nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command has executed successfully, you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step will complete the migration process.

    Migration Step 3: Load the Data into NuoDB

    After building the schema and taking a dump of the data, the next step is essential and crucial: it takes the CSV file and loads it into the NuoDB database.

        nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name "MyTest". If the database does not exist, create it manually before executing the above command. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.

    Voila! You're Done

    That's it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and building excellent, scale-out applications.
    In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.

    Download NuoDB

    I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs (Transaction Engines) and SMs (Storage Managers), but all are managed within the Admin Console for that particular domain.

    - http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
    - http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Additional options in MDL

    - by Jane Zhang
    The Metadata Loader (MDL) enables you to populate a new repository as well as transfer, update, or restore a backup of existing repository metadata. It consists of two utilities: metadata export and metadata import. The export utility extracts metadata objects from a repository and writes the information into a file. The import utility reads the metadata information from an exported file and inserts the metadata objects into a repository.

    While the Design Client provides an intuitive UI that helps you perform the most commonly used export and import tasks, OMBPlus scripting enables you to specify some additional options and manage a control file that allows you to perform more specialized export and import tasks. Is it possible to use these options from the Design Client? This article will tell you how.

    A property file named mdl.properties is used to configure the additional options. It stores options in name/value pairs. This file can be created and placed under the directory <owb installation path>/owb/bin/admin/. Below we introduce the options that can be specified in the mdl.properties file.

    1. DEFAULTDIRECTORY

    When we open a Metadata Export/Import dialog in the Design Client, a default directory is provided for the MDL file and log file. For MDL Export, the default directory is <owb installation path>/owb/bin/. For MDL Import, the default directory is <owb installation path>/owb/mdl/. This may not be the one you want as a default. You can specify the DEFAULTDIRECTORY option in the mdl.properties file to set your own default directory for MDL Export/Import, for example:

        DEFAULTDIRECTORY=/tmp/

    In this example, the default directory is set to /tmp/. Be sure the value ends with a file separator, since it represents a directory. In Windows, the file separator is "\"; in Linux, it is "/".

    2. MDLTRACEFILE

    Sometimes we would like to trace the whole MDL Export/Import process and get detailed information about its operations to help developers or support engineers troubleshoot. To turn on MDL tracing, set the MDLTRACEFILE option in the mdl.properties file:

        MDLTRACEFILE=/tmp/mdl.trc

    The right side of the equals sign specifies the file to which MDL trace information will be written. If no path is specified, the file will be placed under the directory <owb installation path>/owb/bin/admin/. Note that the trace file may be large if the MDL file contains a large number of metadata objects, so please use this option sparingly.

    3. CONTROLFILE

    We can use a control file to specify how objects are imported or exported. By setting the CONTROLFILE option in the mdl.properties file, the control file can also be used from the Design Client, for example:

        CONTROLFILE=/tmp/mdl_control_file.ctl

    The control file stores options in name/value pairs. When using a control file, be sure the file exists, otherwise an exception (java.lang.Exception: CNV0002-0031(ERROR): Cannot find specified file) will be thrown during MDL Export/Import.

    Next we introduce some options that can be specified in the control file.

    ZIPFILEFORMAT

    By default, MDL exports objects into a zip-format file. This zip file has an .mdl extension and contains two files. For example, suppose you export the repository metadata into a file called projects.mdl. When you unzip this MDL file, you obtain two files: projects.mdx, which contains the repository objects, and mdlcatalog.xml, which contains internal information about the MDL XML file.
    Another choice is to combine these two files into one unzipped, text-format file when exporting. In the OMBPlus commands related to MDL, there is an option called FILE_FORMAT which specifies the file format for the exported file. Its acceptable values are ZIP or TEXT. When TEXT is selected, the exported file is in text format, for example:

        OMBEXPORT MDL_FILE '/tmp/options_file_format_test.mdl' FILE_FORMAT TEXT FROM PROJECT 'MY_PROJECT'

    How can we achieve this from the Design Client when exporting? Here we have another option, ZIPFILEFORMAT, which has the same function as FILE_FORMAT. The difference is that the acceptable values for ZIPFILEFORMAT are Y or N. When the value is set to N, the exported file is in text format; otherwise it is in zip format.

    LOGMESSAGELEVEL

    Whenever you export or import repository metadata, MDL writes diagnostic and statistical information to a log file. There are three types of status messages: Informational, Warning and Error. By default, the log file includes all types of messages. Sometimes users may only care about one type of message; for example, they may want only error messages written to the log file. To achieve this, set the LOGMESSAGELEVEL option in the control file. The acceptable values for LOGMESSAGELEVEL are:

    - ALL: all types of messages (Informational, Warning and Error) will be written into the log file.
    - WARNING: only warning messages will be written into the log file.
    - ERROR: only error messages will be written into the log file.

    UPDATEPROJECTATTRIBUTES, UPDATEMODULEATTRIBUTES

    These two options decide whether the attributes of projects/modules are updated. They take effect when the projects/modules being imported already exist in the repository and we use update metadata mode or replace metadata mode to do the MDL import. The acceptable values for these two options are Y or N. If the value is set to Y, the attributes of projects/modules will be updated; otherwise they will not.

    Next, let's walk through an example to see how these options take effect in MDL.

    1. First of all, create the property file mdl.properties under the directory <owb installation path>/owb/bin/admin/.
    2. Specify the options in the mdl.properties file; see the following screenshot.
    3. Create the control file mdl_control_file.ctl under the directory /tmp/. Set the following options in the control file.
    4. Log into the OWB Design Client.
    5. Create an Oracle module named ORA_MOD_1 under the project MY_PROJECT, then export the project MY_PROJECT into the file my_project.mdl.
    6. Check the trace file mdl.trc under the directory /tmp/. In this file, we can see very detailed information for the above export task.
    7. Check the exported MDL file. The file my_project.mdl is in text format. Opening the file, you can see its content directly: it concatenates the files my_project.mdx and mdlcatalog.xml.
    8. Modify the project MY_PROJECT and the Oracle module ORA_MOD_1, adding descriptions for each separately. Delete the location created in step 5.
    9. Import the MDL file my_project.mdl. From the Metadata Import dialog, we can see the default directory for the MDL file and log file has been changed to /tmp/. Here we use update metadata mode, match by names, to do the import.
    10. After importing, check the description of the project MY_PROJECT; we can see the description is still there.
    But the description of the Oracle module ORA_MOD_1 is gone. That is because we set the option UPDATEPROJECTATTRIBUTES to N, and the option UPDATEMODULEATTRIBUTES to Y.
    11. Check the log file. It contains only warning messages, since the log message level was set to WARNING.

    For more details about the three types of status messages, see the Oracle® Warehouse Builder Installation and Administration Guide 11g Release 2.
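
    Putting the pieces together, the two files used in this walkthrough would contain the following values (reconstructed from the option values described in the steps above; verify them against your own environment):

        # mdl.properties (<owb installation path>/owb/bin/admin/mdl.properties)
        DEFAULTDIRECTORY=/tmp/
        MDLTRACEFILE=/tmp/mdl.trc
        CONTROLFILE=/tmp/mdl_control_file.ctl

        # mdl_control_file.ctl (/tmp/mdl_control_file.ctl)
        ZIPFILEFORMAT=N
        LOGMESSAGELEVEL=WARNING
        UPDATEPROJECTATTRIBUTES=N
        UPDATEMODULEATTRIBUTES=Y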


  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well.

    The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no Ux Architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y, etc. As the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is:

    1. Decorate every action you want as a menu item with a custom attribute
    2. Reflect out all menu items into a structure at load time
    3. Render the menu as CSS-friendly <ul><li> HTML.

    The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
        public class MvcMenuItemAttribute : Attribute
        {
            public string MenuText { get; set; }
            public int Order { get; set; }
            public string ParentLink { get; set; }
            internal string Controller { get; set; }
            internal string Action { get; set; }

            #region ctor
            public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }

            public MvcMenuItemAttribute(string menuText, int order)
            {
                MenuText = menuText;
                Order = order;
            }

            internal string Link
            {
                get { return string.Format("/{0}/{1}", Controller, this.Action); }
            }

            internal MvcMenuItemAttribute ParentItem { get; set; }
            #endregion
        }

    The MenuText property allows overriding the text displayed on the menu. The Order property allows the items to be ordered. The ParentLink property allows you to make this item a child of another menu item. An example action could then be decorated thusly:

        [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]

    All pretty straightforward, methinks. The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item, or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.NET sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely - one might say they wanted the second of the 2 links to be "current" - it allows one to include a link twice (home->deals and products->deals etc), and the logic of deciding "current" is easy enough to explain to the customer.
    Now that we got that out of the way, let's build the menu data structure:

        public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
        {
            var result = new List<MvcMenuItemAttribute>();
            foreach (var type in assembly.GetTypes())
            {
                if (!type.IsSubclassOf(typeof(Controller)))
                {
                    continue;
                }
                foreach (var method in type.GetMethods())
                {
                    var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
                    if (items == null)
                    {
                        continue;
                    }
                    foreach (var item in items)
                    {
                        if (String.IsNullOrEmpty(item.Controller))
                        {
                            item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                        }
                        if (String.IsNullOrEmpty(item.Action))
                        {
                            item.Action = method.Name;
                        }
                        result.Add(item);
                    }
                }
            }
            return result.OrderBy(i => i.Order).ToList();
        }

    Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

        public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
        {
            _MenuItems = items;
            _MenuItems.ForEach(i => i.ParentItem =
                items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
        }

    The _MenuItems field is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

        public static void RegisterMenuItems(Type mvcApplicationType)
        {
            RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
        }

    To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication, and that is why the Register call passes in that type.

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);

            MvcMenu.RegisterMenuItems(typeof(MvcApplication));
        }

    What else is left to do? Oh, right, render!
        public static void ShowMenu(this TextWriter output)
        {
            var writer = new HtmlTextWriter(output);

            renderHierarchy(writer, _MenuItems, null);
        }

        public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
        {
            var writer = new HtmlTextWriter(output);
            string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

            var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
            if (menuItem != null)
            {
                renderBreadCrumb(writer, _MenuItems, menuItem);
            }
        }

        private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
        {
            if (current == null)
            {
                return;
            }
            var parent = current.ParentItem;
            renderBreadCrumb(writer, menuItems, parent);
            writer.Write(current.MenuText);
            writer.Write(" / ");
        }

        static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
        {
            if (!hierarchy.Any(i => i.ParentItem == root))
                return;

            writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
            {
                if (ItemFilter == null || ItemFilter(current))
                {
                    writer.RenderBeginTag(HtmlTextWriterTag.Li);
                    writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                    writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                    writer.RenderBeginTag(HtmlTextWriterTag.A);
                    writer.WriteEncodedText(current.MenuText);
                    writer.RenderEndTag(); // link
                    renderHierarchy(writer, hierarchy, current);
                    writer.RenderEndTag(); // li
                }
            }
            writer.RenderEndTag(); // ul
        }

    The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, generating string and byte intermediaries (yes, StringBuilder being no exception) rather than writing with the actual writer on the actual stream disturbs me. To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something; that delegate is the hook for such a future feature (a sketch of wiring it up appears at the end of this post). To render the breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, rendering on the return from the recursion rather than as we go deeper. I guess I was stuck in LISP that day... recursion is fun though.

    Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

        <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %>

    to show a breadcrumb trail (notice the lack of "=" after <%, and the semicolon), and

        <% MvcMenu.ShowMenu(Writer); %>

    to show the menu.

    As mentioned before, the HTML output is nested <UL> <LI> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs.

    This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.
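
    As a sketch of that ItemFilter hook: the post never shows its declaration, so assume MvcMenu exposes something like a public static Func<MvcMenuItemAttribute, bool> ItemFilter field (the renderHierarchy code above is compatible with that shape). Role-based filtering could then be wired up at startup:

        // Hypothetical wiring, e.g. in Application_Start().
        // Hide any menu item rooted under /Admin from users outside the "Admin" role.
        MvcMenu.ItemFilter = item =>
            !item.Link.StartsWith("/Admin", StringComparison.OrdinalIgnoreCase)
            || HttpContext.Current.User.IsInRole("Admin");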


  • The last MVVM you'll ever need?

    - by Nuri Halperin
    As my MVC projects mature and grow, the need for some omnipresent, ambient model properties quickly emerges. The application no longer has only one dynamic piece of data on the page: a sidebar with a shopping cart, some news flash on the side - pretty common stuff. The rub is that a controller is invoked in the context of a single intended request. The rest of the data, even though it could be just as dynamic, is expected to appear on its own.

    There are many solutions to this scenario. MVVM prescribes creating elaborate objects which expose your new data as a property on some uber-object, with more properties exposing the "side show" ambient data. The reason I don't love this approach is because it forces fairly acute awareness of the view, and soon enough you have many MVVM objects lying around, and views have to start doing null checks in order to ensure you really supplied all the values before binding to them. Ick.

    Just as unattractive is the ViewData dictionary. It's not strongly typed, and in both this and the MVVM approach someone has to populate these properties - n'est-ce pas? Where does that live?

    With MVC 2, we get the formerly-futures feature Html.RenderAction(). The feature allows you to plant a line in a view, of the format:

        <% Html.RenderAction("SessionInterest", "Session"); %>

    While this syntax looks very clean, I can't help being bothered by it. MVC was touting a very strong separation of concerns: the Model taking on the role of the business logic, the controller handling routing and performing minimal view-choosing operations, and the views strictly focused on rendering out angled-bracket tags. The RenderAction() syntax has the view calling some controller and invoking it inline with its runtime rendering. This - to my taste - embeds too much knowledge of controllers into the view's code, which was allegedly forbidden. The one-way flow "controller receives data -> controller invokes model -> controller selects view -> controller hands data to view" now gets a "view calls controller and gets its own data" step, which is not so one-way anymore. Ick.

    I toyed with some other solutions a bit, including some base controllers, special view classes, etc. My current favorite though is making use of the ExpandoObject and dynamic features of C# 4.0. If you follow Phil Haack or read a bit from David Heyden you can see the general picture emerging. The game changer is that using the new dynamic syntax, one can sprout properties on an object and make use of them in the view. Well, that beats having a bunch of uni-purpose MVVMs any day! Rather than statically exposed properties, we'll just use the capability of adding members at runtime.
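
    As a quick illustration of that capability (a minimal sketch, not from the original post):

        // C# 4.0: members can be attached to an ExpandoObject at runtime
        // and read back later through the dynamic binder.
        dynamic bag = new System.Dynamic.ExpandoObject();
        bag.Value = "the main payload";
        bag.SessionInterest = new System.Collections.Generic.Dictionary<string, int> { { "MVC", 42 } };
        System.Console.WriteLine(bag.SessionInterest["MVC"]); // prints 42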
    Armed with new ideas and syntax, I went to work. First, I created a factory method to enrich the focus object:

        public static class ModelExtension
        {
            public static dynamic Decorate(this Controller controller, object mainValue)
            {
                dynamic result = new ExpandoObject();
                result.Value = mainValue;
                result.SessionInterest = CodeCampBL.SessoinInterest();
                result.TagUsage = CodeCampBL.TagUsage();
                return result;
            }
        }

    This gives me a nice fluent way to have the controller add the rest of the ambient "side show" items (SessionInterest and TagUsage in this demo) and expose them all as the Model:

        public ActionResult Index()
        {
            var data = SyndicationBL.Refresh(TWEET_SOURCE_URL);
            dynamic result = this.Decorate(data);
            return View(result);
        }

    So now what remains is that my view knows to expect a dynamic object (rather than a statically typed one), so that the ASP.NET page compiler won't barf:

        <%@ Page Language="C#" Title="Ambient Demo" MasterPageFile="~/Views/Shared/Ambient.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>

    Notice the generic ViewPage<dynamic>. It doesn't work otherwise. In the page itself, the Model.Value property contains the main data returned from the controller. The nice thing about this is that the master page (Ambient.Master) also inherits from the generic ViewMasterPage<dynamic>. So rather than the page worrying about all this ambient stuff, the sidebars and panels for ambient data all reside in a master page, and can be rendered using the RenderPartial() syntax:

        <% Html.RenderPartial("TagCloud", Model.SessionInterest as Dictionary<string, int>); %>

    Note here that a cast is necessary. This is because although dynamic is magic, it can't figure out what type this property is, and wants you to give it a type so its binder can figure out the right property to bind to at runtime. I use as; you can cast if you like.

    So there we go - no violation of MVC, no explosion of MVVM models and voila - right? Well, I could not let this go without a tweak or two more. The first thing to improve is that some views may not need all the properties. In that case, it would be a waste of resources to populate every property. The solution to this is simple: rather than exposing properties, I changed the factory method to expose lambdas - Func<T> really. So only if and when a view accesses a member of the dynamic object does it load the data.

        public static class ModelExtension
        {
            // take two... lazy loading!
            public static dynamic LazyDecorate(this Controller c, object mainValue)
            {
                dynamic result = new ExpandoObject();
                result.Value = mainValue;
                result.SessionInterest = new Func<Dictionary<string, int>>(() => CodeCampBL.SessoinInterest());
                result.TagUsage = new Func<Dictionary<string, int>>(() => CodeCampBL.TagUsage());
                return result;
            }
        }

    Now that lazy loading is in place, there's really no reason not to hook up any and all possible ambient properties. Go nuts! Add them all in - they won't get invoked unless used. This now requires changing the signature of usage on the ambient property methods - adding some parentheses in the master view:

        <% Html.RenderPartial("TagCloud", Model.SessionInterest() as Dictionary<string, int>); %>

    And, of course, the controller needs to call LazyDecorate() rather than the old Decorate().
    The final touch is to introduce a convenience method on my Controller class, so that the tedium of calling Decorate() everywhere goes away. This is done quite simply by adding a bunch of methods matching the View(object) and View(string, object) signatures of the Controller class:

        public ActionResult Index()
        {
            var data = SyndicationBL.Refresh(TWEET_SOURCE_URL);
            return AmbientView(data);
        }

        // These methods can reside in a base controller for the solution:
        public ViewResult AmbientView(dynamic data)
        {
            dynamic result = ModelExtension.LazyDecorate(this, data);
            return View(result);
        }

        public ViewResult AmbientView(string viewName, dynamic data)
        {
            dynamic result = ModelExtension.LazyDecorate(this, data);
            return View(viewName, result);
        }

    The call to AmbientView now replaces any call to View() that requires the ambient data. DRY satisfied, lazy loading, and no need to replace core pieces of the MVC pipeline. I call this a good MVC day. Enjoy!
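
    Extending the pattern is one lambda per new ambient property. A sketch, using a hypothetical CodeCampBL.NewsFlash() source that is not in the original post:

        // In LazyDecorate(), alongside SessionInterest and TagUsage:
        result.NewsFlash = new Func<List<string>>(() => CodeCampBL.NewsFlash()); // hypothetical source

        // And in the master page, with the same cast-and-invoke usage:
        // <% Html.RenderPartial("NewsFlash", Model.NewsFlash() as List<string>); %>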


  • AdvancedFormatProvider: Making string.format do more

    - by plblum
    When I have an integer that I want to format within the String.Format() and ToString(format) methods, I'm always forgetting the format symbol to use with it. That's probably because it's not very intuitive. Use {0:N0} if you want it with group (thousands) separators:

        text = String.Format("{0:N0}", 1000); // returns "1,000"

        int value1 = 1000;
        text = value1.ToString("N0");

    Use {0:D} or {0:G} if you want it without group separators:

        text = String.Format("{0:D}", 1000); // returns "1000"

        int value2 = 1000;
        text2 = value2.ToString("D");

    The {0:D} is especially confusing because Microsoft gives the token the name "Decimal". I thought it reasonable to have a new format symbol for String.Format, "I" for integer, and the ability to tell it whether it shows the group separators. Along the same lines, why not expand the format symbols for currency ({0:C}) and percent ({0:P}) to let you omit the currency or percent symbol, omit the group separator, and even drop the decimal part when the value is equal to a whole number? My solution is an open source project called AdvancedFormatProvider, a group of classes that provide the new format symbols, continue to support the rest of the native symbols, and make it easy to plug in additional format symbols. Please visit https://github.com/plblum/AdvancedFormatProvider to learn about it in detail and explore how it is implemented. The rest of this post will explore some of the concepts it takes to expand String.Format() and ToString(format).

    AdvancedFormatProvider benefits:

    - Supports the {0:I} token for integers. It offers the {0:I-,} option to omit the group separator.
    - Supports the {0:C} token with several options: {0:C-$} omits the currency symbol, {0:C-,} omits group separators, and {0:C-0} hides the decimal part when the value would show ".00". For example, 1000.0 becomes "$1000" while 1000.12 becomes "$1000.12".
    - Supports the {0:P} token with several options: {0:P-%} omits the percent symbol, {0:P-,} omits group separators, and {0:P-0} hides the decimal part when the value would show ".00". For example, 1 becomes "100 %" while 1.1223 becomes "112.23 %".
    - Provides a plug-in framework that lets you create new formatters to handle specific format symbols. You register them globally, so you can just pass the AdvancedFormatProvider object into String.Format and ToString(format) without having to figure out which plug-ins to add.

        text = String.Format(AdvancedFormatProvider.Current, "{0:I}", 1000); // returns "1,000"
        text2 = String.Format(AdvancedFormatProvider.Current, "{0:I-,}", 1000); // returns "1000"
        text3 = String.Format(AdvancedFormatProvider.Current, "{0:C-$-,}", 1000.0); // returns "1000.00"

    The IFormatProvider parameter

    Microsoft has made formatting in String.Format() and ToString(format) extensible. They each take an additional parameter: an object that implements System.IFormatProvider. This interface has a single member, the GetFormat() method, which returns an object that knows how to convert the format symbol and value into the desired string. There are already a number of web-based resources to teach you about IFormatProvider and the companion interface ICustomFormatter; I'll defer to them if you want to dig more into the topic. The only thing I want to point out is what I think are implementation considerations.

    Why GetFormat() always tests for ICustomFormatter

    When you see examples of implementing IFormatProvider, the GetFormat() method always tests the parameter against the ICustomFormatter type. Why is that?
        public object GetFormat(Type formatType)
        {
            if (formatType == typeof(ICustomFormatter))
                return this;
            else
                return null;
        }

    The value of formatType is already predetermined by the .NET framework. String.Format() uses the StringBuilder.AppendFormat() method to parse the string, extracting the tokens and calling GetFormat() with the ICustomFormatter type. (The .NET framework also calls GetFormat() with the types System.Globalization.NumberFormatInfo and System.Globalization.DateTimeFormatInfo, but these are exclusive to how the System.Globalization.CultureInfo class handles its implementation of IFormatProvider.)

    Your code replaces instead of expands

    I would have expected the caller to pass the format string into GetFormat() to allow your code to determine if it handles the request. My vision would be to return null when the format string is not supported; the caller would iterate through IFormatProviders until it finds one that handles the format string. Unfortunately, that is not the case. The reason you write GetFormat() as above is because the caller expects an object that handles all formatting cases. You are effectively supposed to write enough code in your formatter to handle your new cases and call .NET functions (like String.Format() and ToString(format)) to handle the original cases. It's not hard to support the native functions from within your ICustomFormatter.Format method: just test the format string to see if it applies to you, and if not, call String.Format() with a token using the format passed in (a complete minimal sketch follows at the end of this post):

        public string Format(string format, object arg, IFormatProvider formatProvider)
        {
            if (format.StartsWith("I"))
            {
                // handle the "I" formatter
            }
            else
                return String.Format(formatProvider, "{0:" + format + "}", arg);
        }

    Formatters are only used by explicit request

    Each time you write a custom formatter (an implementer of ICustomFormatter), it is not used unless you explicitly pass an IFormatProvider object that supports your formatter into String.Format() or ToString(). This has several disadvantages:

    - Suppose you have several ICustomFormatters. In order to have all of them available to String.Format() and ToString(format), you have to merge their code and create an IFormatProvider that returns an instance of your new class.
    - You have to remember to utilize the IFormatProvider parameter. It's easy to overlook, especially when you have existing code that calls String.Format() without using it.
    - Some APIs may call String.Format() themselves. If those APIs do not offer an IFormatProvider parameter, your ICustomFormatter will not be available to them.

    The AdvancedFormatProvider solves the first two of these problems by providing a plug-in architecture.
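
    To make the plumbing concrete, here is a minimal, self-contained sketch of a provider/formatter pair handling a custom "I" symbol. It is illustrative only, not the AdvancedFormatProvider source; note the fallback passes CultureInfo.CurrentCulture (rather than this provider) back into String.Format, so native symbols are handled without re-entering the custom formatter:

        using System;
        using System.Globalization;

        public class IntegerFormatProvider : IFormatProvider, ICustomFormatter
        {
            public object GetFormat(Type formatType)
            {
                // String.Format() asks for an ICustomFormatter; hand back ourselves.
                return formatType == typeof(ICustomFormatter) ? this : null;
            }

            public string Format(string format, object arg, IFormatProvider formatProvider)
            {
                if (format != null && format.StartsWith("I") && arg is IConvertible)
                {
                    long value = Convert.ToInt64(arg);
                    // "I-," omits group separators; plain "I" keeps them.
                    return format.Contains("-,")
                        ? value.ToString("D", CultureInfo.CurrentCulture)
                        : value.ToString("N0", CultureInfo.CurrentCulture);
                }
                // Fall back to native formatting. Use the culture, not this provider,
                // to avoid recursing back into this Format() method.
                return String.Format(CultureInfo.CurrentCulture, "{0:" + format + "}", arg);
            }
        }

        // Usage:
        //   String.Format(new IntegerFormatProvider(), "{0:I}", 1000)   -> "1,000"
        //   String.Format(new IntegerFormatProvider(), "{0:I-,}", 1000) -> "1000"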


  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, I shouldn't, but I MUST solve this).

    1) Comcast Cable Internet
    2) Motorola SURFboard eXtreme Cable Modem
       a) Model: SB6120
       b) DOCSIS 3.0 and 2.0 support
       c) IPv4 and IPv6 support
    3-A) Cisco Small Business RV220W Wireless N Firewall
       a) Latest firmware
       b) Model: RV220W-A-K9-NA
       c) WAN port to modem (2)
       d) vlan 1: work
       e) vlan 2: everything else
    3-B) D-Link DIR-615 Draft 802.11 N Wireless Router
       a) Latest firmware
       b) WAN port to modem (2)
    4) Servers connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) CAT5e patch cables
       c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (Domain Controller, DNS, former DHCP)
       d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMware Server)
    4) Linksys EZXS88W unmanaged workgroup 10/100 switch
       a) If firewall 3-A, then vlan 2
       b) 25' CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects Xbox 360, Blu-ray player, PC at TV
    5) Office equipment connected directly to firewall
       a) If firewall 3-A, then vlan 1
       b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B)
       c) Connects:
          1) Dell Latitude laptop 10/100/1000
          2) Dell Inspiron laptop 10/100
          3) Dell Workstation 10/100/1000 (pristine host, VMware Workstation 7.x with many bridged VMs)
          4) Brother laser printer 10/100
          5) Epson All-In-One Workforce 310 10/100
    5-A) NetGear FS116 unmanaged 10/100 switch
       a) I've had this switch for a long time and never had issues.
    5-B) NetGear GS108 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-C) Linksys SE2500 unmanaged 10/100/1000 switch
       a) Bought new for this issue and returned.
    5-D) TP-Link TL-SG10008D unmanaged 10/100/1000 switch
       a) Bought new for this issue and still have.
    6) VLan 1 wireless connections (on the same subnet if 3-B)
       a) Any of those at 5c
       b) HP laptop
    7) VLan 2 wireless connections (on the same subnet if 3-B)
       a) iPad, iPod
       b) Compaq laptop
       c) Epson wireless printer

    Shew; without hosting a diagram, I hope that paints a good picture.

    The Issue

    The breakdown here is at item 5. No matter what I do, I cannot have a switch at 5, and I have to run everything wireless regardless of router.

    Issues related to using a switch (point 5 above):

    - SpeedTest is good.
    - Poor throughput to other devices, if they can communicate at all.
    - Usually cannot ping other devices, even on the same switch, although when able, ping times are good.
    - Eventual loss of connectivity, which can "sometimes" be restored by unplugging everything for several days - not minutes or hours, we're talking a week, if at all.
    - Connecting directly to a computer gives a good internet connection; however, throughput to other devices connected to the firewall is at best horrible. Yet printing doesn't seem to be an issue as long as the printers are connected via wireless.
    - I have to force the RV220W to 1000Mb on the respective port if using a gigabit switch.

    Issues related to using wireless in place of a switch (point 5 above):

    - Poor throughput to other devices, if they can communicate at all.
    - SpeedTest is good.

    Bottom line:

    - Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them. They rewired EVERYTHING, which did solve internet drops.
    - Computer to computer connections are garbage.
    - Cannot get a switch at 5 to work, yet the one at 4 has never had an issue.
    - Direct connection, bypassing the switch, is good for DHCP and internet.
    - DNS must be on the server, not the firewall.

    Cisco insists it's my switches, but as you can see I have used four switches and two different cables with the same result. My gut feeling is something is happening with routing, but I'm not smart enough to know that answer. I run a lot of VMs at 5-c-3; could that cause it? What's different compared to my previous house is that I have introduced gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on if I haven't turned it off already. I'm truly at a loss and hope anyone has some crazy idea how to solve this. Bottom line, I need a switch in my office behind the firewall. I've changed everything. The real crux is I will find a working solution and, again, after days it will stop working. So this means I cannot isolate whether it's a computer, since I have to use them. Oh, and a solution is not throwing more money at this. I'm well into $1k already. Yah, lame.


  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature for dimensional objects, called Orphan Management, to address this. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example shows you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let's start...

    1. Import MDL file and examine sample project

    First, download the zip file from here, which includes an MDL file and three source data files. Then open the OWB Design Center and import orphan_management.mdl using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project, as below:

    - Mapping LOAD_CHANNELS_OM: The mapping for dimension loading.
    - Mapping LOAD_SALES_OM: The mapping for cube loading.
    - Dimension CHANNELS_OM: The dimension that contains channels data.
    - Cube SALES_OM: The cube that contains sales data.
    - Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM.
    - Table SALES_OM: The star implementation table of cube SALES_OM.
    - Table SRC_CHANNELS: The source table of channels data, which will be loaded into dimension CHANNELS_OM.
    - Tables SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data, which will be loaded into cube SALES_OM.
    - Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM.

    Dimension CHANNELS_OM

    This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots. The orphan management policy options that you can set for loading are:

    - Reject Orphan: The record is not inserted.
    - Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates that record.
    - No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

    While removing data from a dimension, you can select one of the following orphan management policies:

    - Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
    - No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)

    Cube SALES_OM

    This cube references the dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST.
    The orphan management policy settings are shown in the following screenshot. The orphan management policy options that you can set for loading are:
    No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
    Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
    Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.
    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)
    Mapping LOAD_CHANNELS_OM
    This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so we can see what happens when channel records with an “invalid” parent are loaded into the dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below:
    Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set it to YES manually.
    LOAD policy for INVALID keys / LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are copied from the dimension when the user drops the dimension into the mapping. The user does not need to modify these properties.
    Record Error Rows: If YES, error rows will be inserted into the error table when loading the dimension.
    REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled.
    Mapping LOAD_SALES_OM
    This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping looks a little complicated, but the operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below:
    Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table.
    LOAD policy for INVALID keys / LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are copied from the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties.
    Record Error Rows: If YES, error rows will be inserted into the error table when loading the cube.
    2. Deploy objects and mappings
    We can now deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured.
    Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node on the project tree. We can see the following objects. Deploy all the objects in the following order:
    Sequence CLASS_OM_DIM_SEQ
    Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
    Dimension CHANNELS_OM
    Cube SALES_OM
    Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM
    Note that we deployed the source tables as well. Normally, we import source tables from the database instead of deploying them to the target schema. However, in this example, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.
    3. Prepare and examine source data
    Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user. Then we check the data in these three tables.
    Table SRC_CHANNELS
    SQL> select rownum, id, class, name from src_channels;
    Records 1~5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value, and should be inserted into the error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by the dimension and inserted into the error table.
    Tables SRC_ORDERS and SRC_ORDER_ITEMS
    SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id;
    Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference, and should be inserted into the error table at the same time. Record 179 has an “invalid” dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.
    4. Run the mappings and examine the target data
    In the Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right click on the LOAD_CHANNELS_OM node and click Start. Run mapping LOAD_SALES_OM the same way. When they finish successfully, we can check the data in the target tables.
    Table CHANNELS_OM
    SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name, class_source_id, channel_id, channel_name, channel_source_id from channels_om order by abs(dimension_key);
    Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents. We see their parent name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of the invalid parent.
    Error Table CHANNELS_OM_ERR
    SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id, err$$$_error_reason from channels_om_err order by channel_name;
    We can see all the records with a null or invalid parent are inserted into this error table. The error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three.
    Table SALES_OM
    SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id;
    We can see the order record with a null channel_name has been loaded into the target table with a default channel_name. The ones with an “invalid” channel_name are not loaded.
    Error Table SALES_OM_ERR
    SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+);
    We can see the order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”.
    Summary
    In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.
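    For readers who think in code rather than mappings, the load-time policies above can be pictured with a small sketch. This is purely illustrative C# (no OWB API involved; the policy names, error messages, and the DEF_CLASS_NAME default mirror the example above, everything else is hypothetical):

        // Hypothetical illustration of the three load-time orphan policies.
        using System.Collections.Generic;

        enum OrphanPolicy { NoMaintenance, RejectOrphan, DefaultParent }

        static class OrphanPolicyDemo
        {
            static readonly HashSet<string> KnownParents =
                new HashSet<string> { "Direct", "Indirect" };   // assumed valid CLASS values

            const string DefaultParentName = "DEF_CLASS_NAME";  // default parent, as in CHANNELS_OM

            // Returns the parent to load for a source row, or null if the row is rejected.
            static string ResolveParent(string sourceParent, OrphanPolicy policy, IList<string> errorRows)
            {
                if (sourceParent != null && KnownParents.Contains(sourceParent))
                    return sourceParent;                        // valid parent: load as-is

                switch (policy)
                {
                    case OrphanPolicy.RejectOrphan:
                        errorRows.Add("No parent found for record");     // rejected; error row only
                        return null;
                    case OrphanPolicy.DefaultParent:
                        errorRows.Add("Default parent used for record"); // loaded and logged
                        return DefaultParentName;
                    default:
                        return sourceParent;                    // No Maintenance: pass through
                }
            }
        }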

    Read the article

  • TFS 2010 SDK: Smart Merge - Programmatically Create your own Merge Tool

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, TFS SDK, TFS API, TFS Merge Programmatically, TFS Work Items Programmatically, TFS Administration Console, ALM
    The information available in the Merge window in Team Foundation Server 2010 is very important to decision making during the merging process. However, at present the merge window shows very limited information; more often than not you are interested in the work items, files modified, code reviewer notes, policies overridden, etc. associated with the changeset. Our friends at Microsoft are working hard to change the game again with vNext, but because at present the merge window is a modal window, you have to cancel the merge process and go back through them one after the other to check the additional information you need. If you can relate to what I am saying, you will enjoy this blog post! I will show you how to programmatically create your own merging window using the TFS 2010 API. A few screenshots of the WPF TFS 2010 API custom merging application that we will be creating programmatically follow. Excited??? Let’s start coding…
    1. Get All Team Project Collections for the TFS Server
    You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically

        public static ReadOnlyCollection<CatalogNode> GetAllTeamProjectCollections()
        {
            TfsConfigurationServer configurationServer =
                TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/"));

            CatalogNode catalogNode = configurationServer.CatalogNode;
            return catalogNode.QueryChildren(new Guid[] { CatalogResourceTypes.ProjectCollection },
                false, CatalogQueryOptions.None);
        }

    2. Get All Team Projects for the selected Team Project Collection
    You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically

        public static ReadOnlyCollection<CatalogNode> GetTeamProjects(string instanceId)
        {
            ReadOnlyCollection<CatalogNode> teamProjects = null;

            TfsConfigurationServer configurationServer =
                TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/"));

            CatalogNode catalogNode = configurationServer.CatalogNode;
            var teamProjectCollections = catalogNode.QueryChildren(new Guid[] { CatalogResourceTypes.ProjectCollection },
                false, CatalogQueryOptions.None);

            foreach (var teamProjectCollection in teamProjectCollections)
            {
                if (string.Compare(teamProjectCollection.Resource.Properties["InstanceId"], instanceId, true) == 0)
                {
                    teamProjects = teamProjectCollection.QueryChildren(new Guid[] { CatalogResourceTypes.TeamProject },
                        false, CatalogQueryOptions.None);
                }
            }

            return teamProjects;
        }

    3. Get All Branches within a Team Project programmatically
    I will be passing the name of the Team Project for which I want to retrieve all the branches. When consuming the ‘Version Control Service’ you have the method QueryRootBranchObjects; you need to pass the recursion type => none, one, or full. Full implies you are interested in all branches under that root branch.
        public static List<BranchObject> GetParentBranch(string projectName)
        {
            var branches = new List<BranchObject>();

            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<teamProjectName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            var allBranches = versionControl.QueryRootBranchObjects(RecursionType.Full);

            foreach (var branchObject in allBranches)
            {
                if (branchObject.Properties.RootItem.Item.ToUpper().Contains(projectName.ToUpper()))
                {
                    branches.Add(branchObject);
                }
            }

            return branches;
        }

    4. Get All Branches associated with the Parent Branch programmatically
    Now that we have the parent branch, it is important to retrieve all child branches of that parent branch. Let’s see how we can achieve this using the TFS API.

        public static List<ItemIdentifier> GetChildBranch(string parentBranch)
        {
            var branches = new List<ItemIdentifier>();

            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            var i = new ItemIdentifier(parentBranch);
            var allBranches = versionControl.QueryBranchObjects(i, RecursionType.None);

            foreach (var branchObject in allBranches)
            {
                foreach (var childBranch in branchObject.ChildBranches)
                {
                    branches.Add(childBranch);
                }
            }

            return branches;
        }

    5. Get Merge candidates between two branches programmatically
    Now that we have the parent and the child branch that we want to merge between, we will use the method ‘GetMergeCandidates’ in the namespace ‘Microsoft.TeamFoundation.VersionControl.Client’ => http://msdn.microsoft.com/en-us/library/bb138934(v=VS.100).aspx

        public static MergeCandidate[] GetMergeCandidates(string fromBranch, string toBranch)
        {
            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            return versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full);
        }

    6. Get changeset details programmatically
    Now that we have the changeset id that we are interested in, we can get the details of the changeset. The Changeset object contains the following properties => http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.changeset.aspx
    - Changes: Gets or sets an array of Change objects that comprise this changeset.
    - CheckinNote: Gets or sets the check-in note of the changeset.
    - Comment: Gets or sets the comment of the changeset.
    - PolicyOverride: Gets or sets the policy override information of this changeset.
    - WorkItems: Gets an array of work items that are associated with this changeset.

        public static Changeset GetChangeSetDetails(int changeSetId)
        {
            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            return versionControl.GetChangeset(changeSetId);
        }

    7. Possibilities
    In future posts I will try to extend this idea to explore further possibilities, but a few features that I am sure will further help during the merge decision-making process would be:
    - View changed files
    - Compare modified file with current/previous version
    - Merge Preview
    - Last Merge date
    Any other features that you can think of?
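    As a closing worked example, here is a minimal sketch (not from the original post) that ties the helpers above together: it lists the merge candidates between two branches and prints the changeset details that the standard merge window hides behind extra clicks. The branch paths are placeholders, and the use of the MergeCandidate.Changeset property is an assumption to verify against your TFS SDK version:

        // Minimal sketch: print merge candidates with their changeset details.
        // Branch paths below are placeholders.
        public static void PrintMergeCandidates()
        {
            var candidates = GetMergeCandidates("$/MyProject/Main", "$/MyProject/Dev");

            foreach (var candidate in candidates)
            {
                // Each candidate wraps a changeset; fetch the full details.
                var changeset = GetChangeSetDetails(candidate.Changeset.ChangesetId);
                Console.WriteLine("{0} by {1}: {2} ({3} work item(s))",
                    changeset.ChangesetId,
                    changeset.Committer,
                    changeset.Comment,
                    changeset.WorkItems.Length);
            }
        }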

    Read the article

  • CodePlex Daily Summary for Friday, February 25, 2011

    CodePlex Daily Summary for Friday, February 25, 2011
    Popular Releases
    Mono.Addins: Mono.Addins 0.6: The 0.6 release of Mono.Addins includes many improvements, bug fixes and new features. Add-in engine: add-in name and description can now be localized; there are new custom attributes for defining them, and they can also be specified as xml elements in an add-in manifest instead of attributes. Support for custom add-in properties: it is now possible to specify arbitrary properties in add-ins, which can be queried at install time (using the Mono.Addins.Setup API) or at run-time. Custom extensio...
    patterns & practices: Project Silk: Project Silk Community Drop 3 - 25 Feb 2011: Introduction: Welcome to the third community drop of Project Silk. For this drop we are requesting feedback on overall application architecture, code review of the JavaScript Conductor and Widgets, and general direction of the application. Project Silk provides guidance and sample implementations that describe and illustrate recommended practices for building modern web applications using technologies such as HTML5, jQuery, CSS3 and Internet Explorer 9. This guidance is intended for experien...
    PhoneyTools: Initial Release (0.1): This is the 0.1 version for preview of the features.
    Minemapper: Minemapper v0.1.5: Now supports the new Minecraft beta v1.3 map format, thanks to updated mcmap. Disabled biomes, until Minecraft Biome Extractor supports the new format.
    Smartkernel: Smartkernel.
    Document.Editor: 2011.7: What's new for Document.Editor 2011.7: New Find dialog, improved Email dialog, improved Home tab, improved Format tab, minor bug fixes, improvements and speed-ups.
    Chiave File Encryption: Chiave 0.9.1: Application for file encryption and decryption using a 512-bit Rijndael encryption algorithm with a simple to use UI. It's written in C# and compiled in .NET version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Change log from 0.9 Beta to 0.9.1: added option for system shutdown, sleep, hibernate after operation completed; minor changes to the UI; numerous bug fixes. Feedback is welcome!
    Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since Silverlight Toolkit Feb 2010 has it.
    Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7. Important when upgrading: upgrade as if it was a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation. 4.7 requires the .NET 4.0 framework. Web.Config changes: update the web.config to include the 4 changes found in (they're clearly marked in...
    HubbleDotNet - Open source full-text search engine: V1.1.0.0: Add Sqlite3 DBAdapter. Add App Report when Query Cache is collecting. Improve the performance of the index through Synchronize. Add top 0 feature so that we can get only the count of the result. Improve the score calculating algorithm of match: let the score of a record that matches all items be larger than the others. Add MySql DBAdapter. Improve performance for multi-field sorts. Using hash table to access the Payload data; the version before used bin search. Using heap sort instead of qui...
    Silverlight????[???]: silverlight????[???]2.0.
    DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0. Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense (further testing needed). Highlight field and table names in sql scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in tables. Fixed comment / uncomment bug as reported by tareq. Included synonyms in scripting engine (nickt_ch).
    IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython Team
    Caliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candidate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS Templates: the templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet: this release does not have a corresponding NuGet package. The NuGet pack...
    Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.
    Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php. This is the Cataclysm Beta Release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262. As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...
    Azure Storage Samples: Version 1.0 (February 2011): These downloads contain source code. Each is a complete sample that fully exercises Windows Azure Storage across blobs, queues, and tables. The difference between the downloads is implementation approach. Storage DotNet CS.zip is a .NET StorageClient library implementation in the C# language. This library comes with the Windows Azure SDK. Contains helper classes for accessing blobs, queues, and tables. Storage REST CS.zip is a REST implementation in the C# language. The code to implement R...
    PowerGUI Visual Studio Extension: PowerGUI VSX 1.3.2: New Features: PowerGUI Console Tool Window, PowerShell Project Type, PowerGUI 2.4 Support.
    MiniTwitter: 1.66: MiniTwitter 1.66 (User Streams).
    Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above). Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File. Explorer supports operations running on multiple remote applications at the same time. Explorer detects application disconnect (1-2 second delay). Explorer confirms operation completed status. Explorer d...
    New Projects
    Agriscope: This is an open information visualization tool used to assist RADA and other Agriculture officers in retrieving and analyzing data in day to day tasks.
    AVCampos NF-e: Issue and control NF-e (electronic invoices) from mobile environments.
    Babel Obfuscator NAnt Tasks: This is an NAnt task for Babel Obfuscator. Babel Obfuscator protects software components built with the Microsoft .NET Framework in order to make reverse engineering difficult. Babel Obfuscator can be downloaded at http://www.babelfor.net
    Concurrent Programming Library: Concurrent Programming Library provides an opportunity to develop parallel programs using .NET Framework 2.0 and above. It includes an implementation of various parallel algorithms, thread-safe collections and patterns.
    EOrg: Work I have done for development purposes.
    Extend Grid View: Extend Grid View is a user control. It helps with paging a dataset bound to a GridView.
    FinlogiK ReSharper Contrib: FinlogiK ReSharper Contrib is a plugin for ReSharper 5.1 which adds code cleanup and inspection options for static qualifiers.
    Game development with Playstation Move and Ogre3D: This project is research aiming to develop a program which can handle the Playstation Move on PC. After that, we will implement a game based on it. The programming language is C++. The graphics are handled by Ogre3D.
    JAD: A software project.
    JSARP: This tool allows describing and verifying Petri Nets with the support of a graphical interface. This tool is being developed in Java.
    KangmoDB - A replacement for the storage engine of SQLite: KangmoDB claims to be a real-time storage engine that replaces the one in SQLite. KangmoDB tries to achieve the lowest latency time for a transaction with ACID properties. It will be mainly used for the stock market, which requires the lowest latency with the highest stability.
    MetaprogrammingInDotNetBook: This project will contain code and other artifacts related to the "Metaprogramming in .NET" book that should be available in October 2011.
    munix workstation: The µnix project is an endeavour to create a complete workstation and UNIX-like OS using standard logic ICs and 8-bit AVR microcontrollers. The goal isn't to make something that will compete with a traditional workstation in computation but instead to have a great DIY project.
    PhoneyTools: Set of controls and utilities for WP7 development.
    Plist Builder: Serialize non-circular-referencing .NET objects to plist in .NET.
    Quake3.NET: A port of the Quake 3 engine to C#. This is not merely a port of Quake 3 to run in a managed environment, but a complete rewrite of the engine using C# 4.0's powerful language features.
    SecViz: Web server security attack graph alert correlation IDS.
    SerialNome: This is a multiport serial application.
    sprout sms: A wp7 cabbage client.
    Using external assembly in Biztalk 2009 map: Using external assembly in Biztalk 2009 map.

    Read the article

  • Too Many Kittens To Juggle At Once

    - by Bil Simser
    Ahh, the Internet. That crazy, mixed up place where one tweet turns into a conversation between dozens of people and spawns a blog post. This is the direct result of such an event this morning. It started innocently enough, with this: Then followed up by a blog post by Joel here. In the post, Joel introduces us to the term Business Solutions Architect with mad skillz like InfoPath, Access Services, Excel Services, building Workflows, and SSRS report creation, all while meeting the business needs of users in a SharePoint environment. I somewhat disagreed with Joel: this really isn’t a new role (at least IMHO), and a good Architect or BA should really be doing this job. As Joel pointed out, when you’re building a SharePoint team this kind of role is often overlooked. Engineers might be able to build workflows, but is it the right workflow for the right problem? Michael Pisarek wrote about a SharePoint Business Architect a few months ago and it’s a pretty solid assessment. Again, I argue you really shouldn’t be looking for roles that don’t exist, and I don’t suggest anyone create roles just to hire people to fill them. That’s basically creating a solution looking for problems. Michael’s article does have some great points if you’re lost in the quagmire of SharePoint duties though (and I especially like John Ross’ quote “The coolest shit is worthless if it doesn’t meet business needs”). SharePoinTony summed it up nicely with “SharePoint Solutions knowledge is both lacking and underrated in most environments. Roles help”. Having someone on the team who can dance between a business user and a coder can be difficult. Remember the idea of telling something to someone and having them pass it on to the next person. By the time the story comes round the circle it’s a shadow of its former self with little resemblance to the original tale. This is very much business requirements as they’re told by the user to a business analyst, written down on paper, read by an architect, turned into a solution plan, and implemented by a developer. What was said, what was heard, what was written down, and what was developed can end up distant cousins. Not everyone has the skill of communication and even fewer have negotiation skills to suit the SharePoint platform. Negotiation is important because not everything can be (or should be) done in SharePoint. Sometimes it’s just not appropriate to build it on the SharePoint platform, but someone needs to know enough about the platform and what limitations it might have, then communicate that (and/or negotiate) with a customer or user so the conversation moves from “You can’t have this” to “Let’s try it this way”. Visualize the possible instead of denying the impossible. So what is the right SharePoint team? My cromag brain came up with a fairly simpleton answer (and I’m sure people will just say this is a cop-out). The perfect SharePoint team is just enough people to do the job who know the technology and the business problem they’re solving. Bridge the gap between business need and technology platform and you have an architect. Communicate the needs of the business effectively so the entire team understands them and you have a business analyst. Can you get this with full time workers? Maybe, but don’t expect miracles out of the gate. Also don’t take a consultant’s word as gospel. Some consultants just don’t have enough breadth across the SharePoint platform to be worth their value, so be careful.
    You really need someone who knows enough about SharePoint to be able to validate a consultant’s knowledge level. This is basically true for any consultant, not just a SharePoint one. Specialization is good and needed. A good, well-balanced SharePoint team is made of people who can solve problems by working with the technology, not against it. Having a top developer is great, but don’t rely on them to solve world hunger if they can’t communicate very well with users. An expert business analyst might be great at gathering requirements so the entire team can understand them, but that isn’t of much value if it means building 100% custom solutions because the requirements don’t fit inside the SharePoint boundaries. Just repeat. There is no silver bullet. There is no silver bullet. There is no silver bullet. A few people pointed out Nick Inglis’ article Excluding The Information Professional In SharePoint. It’s a good read too and hits home that maybe some developers and IT pros need some extra help in the information space. If you’re in an organization that needs labels on people, come up with something everyone understands and go with it. If that’s Business Solutions Architect, SharePoint Advisor, or Guy Who Knows A Lot About Portals, make it work for you. We all wish that one person could master all that is SharePoint, but we also know that doesn’t scale very well and you quickly get into the hit-by-a-bus syndrome (with the organization slowing to a crawl when the guy or girl goes on vacation, gets sick, or pops out a baby). There are too many gaps in SharePoint knowledge to have any one person know it all and too many kittens to juggle all at once. We like to consider ourselves experts in our field, but try to tackle too many roles at once and we end up being mediocre jacks of all trades, masters of none. Don’t fall into this pit. It’s a deep, dark hole you don’t want to try to claw your way out of. Trust me. Been there. Done that. Got the t-shirt. In the end I don’t disagree with Joel. SharePoint is a beast and not something that should be taken on by newbies. If you just read “Teach Yourself SharePoint in 24 Hours” and want to go build your corporate intranet or the next killer business solution with all your new-found knowledge, plan to pony up consultant dollars a few months later when everything goes to Hell in a handbasket and falls over. I’m not saying don’t build solutions in SharePoint. I’m just saying that building effective ones takes skill, like any craft, and isn’t something you can just cobble together with a little bit of cursory knowledge. Thanks to *everyone* who participated in this tweet rush. It was fun and educational.

    Read the article

  • Career development as a Software Developer without becoming a manager.

    - by albertpascual
    I’m a developer. I like to write new, exciting code every day; my perfect day at work is one where, when I wake up, I know that I have to write some code that I haven’t written before or use a new framework/language/platform that is unknown to me. The best days in the office are when a project is waiting for me to architect or write. In my 15 years in the development field, in order to get a better salary I have had to manage people, not just lead developers but actually manage people. Something I found out when I got into a management position is that I’m not that good at managing people, and I’m not afraid to say it. I do not enjoy that part of the job, and worse, it takes time away from what I really like. Leading developers and managing people are very different things. I do like teaching and leading developers in a project. Yet most people believe, and it is true in most companies, that the way to get a better salary is to be promoted to a manager position. In order to advance in your career you need to let go of the everyday writing of code and become a supervisor or manager. This is the path for developers after they become senior developers. As you get older and your family grows, the only way to hit your salary requirements is to advance your career to become a manager and get that manager salary. That path is common in most companies; the most intelligent companies out there have learned that promoting a good developer means getting a crappy manager and losing a good resource. Now scratch everything I said, because as I previously stated, I don’t see myself going to the office every day and just managing people until it is time to go home. I like to spend hours working on some code to accomplish a task, learning new platforms and languages or patterns for existing languages. Being interrupted every 15 minutes by emails or people stopping by my office to resolve their problems is not something I could enjoy. Then all of a sudden, riding my motorcycle to work one cold morning over the Redlands Canyon and listening to the .NET Rocks podcast, I heard Michael “Doc” Norton explaining how to take control of your development career without necessarily going down the manager’s track. I know, I should not have headphones under my helmet when riding a motorcycle in California. His conversation with Carl Franklin and Richard Campbell confirmed everything I have ever done, with more detail, and assured me that there are other paths. His method is simple, and most of us already do many of those steps. Mr. Michael “Doc” Norton believes that it pays off in the long run, that companies ultimately prefer to pay higher salaries to those developers; I would actually think that many companies do not see developers that way, though this is not true for bigger companies. However, I do believe the value of those developers increases and, most of the time, changing companies can increase their salary more than staying in the same one. In short, without even trying to step into the shadow of Mr. Norton, and without following the steps in order: you should love to learn new technologies, and then teach them to other geeks. I personally have learned many technologies and I haven’t stopped doing that; I am a professor at UCR where I teach ASP.NET and Silverlight. Mr. Norton continues that after that, you want to be involved in the development community: user groups, online forums, open source projects.
    I personally talk at user groups and I’m very active in forums, asking and answering questions, and for those efforts I was awarded the Microsoft MVP for ASP.NET. After you accomplish all those, you should also expose yourself for what you know and what you do not know; learning a new language will make you humble again as well as extremely happy. There is no better feeling than learning a new language or pattern in your daily job. If you love your job every day and what you do, I really recommend you follow Michael’s presentation, which he kindly shared at the link below. His confirmation is refreshing, knowing that my future is not behind a desk where the computer screen is on my right-hand side instead of in front of me, where I don’t have to spend the days filling out performance forms for people, and where the new platforms that I haven’t been using yet are just at my fingertips. Presentation here: http://www.slideshare.net/LeanDog/take-control-of-your-development-career-michael-doc-norton?from=share_email_logout3
    The deck, “Take Control of Your Development Career” by Michael “Doc” Norton (@DocOnDev, http://docondev.blogspot.com/), opens with “Recovering Post Technical: I love to learn. I love to teach. I love to work in teams. I love to write code. I really love to write code. What about YOU? Do you love your job?” and then walks through five themes: Get Noticed (know your business, understand management, do your existing job, make yourself expendable); Get Together (join a user group, help run a user group, start a user group); Get Your Mojo (kata, koans, breakable toys, open source); Get Naked (run with group A, do something different, own your mistakes, admit you don’t know); and Get Schooled (choose a mentor, attend conferences, teach a new subject), closing with “Get Started. Read These (Again).”
    In short summary, I recommend any developer check out his blog and, more importantly, his presentation. I haven’t been lucky enough to watch him live; I’m looking forward to the day I have the opportunity. He gives us hope for the future of developers. When I see some of my geek friends move into positions that within a few short years they begin to regret, I get more unsure of my future doing what I love. I would say the answer now is to look at the spectrum of companies that understand and appreciate developers. There are a few out there; hopefully with time code sweatshops will start disappearing and being a developer will feed a family of four. Cheers, Al

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup. Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort. First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough. (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.) To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps:
    Step 1: Determine if this server is hosting the primary replica.
    This is a TSQL step using this script:

        declare @agName sysname = 'AGTest'
        set nocount on
        declare @primaryReplica sysname

        select @primaryReplica = agState.primary_replica
        from sys.dm_hadr_availability_group_states agState
            join sys.availability_groups ag on agstate.group_id = ag.group_id
        where ag.name = @AGname

        if not exists(
            select *
            from sys.dm_hadr_availability_group_states agState
                join sys.availability_groups ag on agstate.group_id = ag.group_id
            where @@Servername = agstate.primary_replica
                and ag.name = @AGname)
        begin
            raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s',17,1,@Agname, @@Servername, @primaryReplica)
        end

    This script determines if the primary replica value of the AG is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @AgName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.
    Step 2: Update the CNAME entry in DNS with this server's name.
    I used a PowerShell script to accomplish this:

        $cname = "SPSQL.contoso.com"
        $query = "Select * from MicrosoftDNS_CNAMEType"
        $dns1 = "dc01.contoso.com"
        $dns2 = "dc02.contoso.com"
        if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns1
        }
        elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns2
        }
        else
        {
            $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
            Throw $msg
        }
        $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
        $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
        $currentServer = $record.RecordData
        if ($currentServer -eq $thisServer)
        {
            $cname + " CNAME is up to date: " + $currentServer
        }
        else
        {
            $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
            $record.RecordData = $thisServer
            $record.put()
        }

    This script does a few things:
    - finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    - makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    - gets the FQDN of this server (GetHostEntry)
    - checks if the CNAME record is correct and updates it if necessary
    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)
    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period, so the comparison of the CNAME record has to take the trailing period into account. When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential... Then, under SQL Agent-->Proxies, right click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down. I created this two-step job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute. When the server that is hosting the primary replica runs the job, the job history looks like this: The job history on the secondary server looks like this: When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent job on all servers hosting AG replicas, I was able to create a pseudo-listener that automatically changes the name of the server hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).
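    From the application side, the whole scheme rests on clients connecting through the alias and simply retrying while a stale CNAME works its way out of the DNS caches. Here is a minimal C# sketch of that client behavior; the alias, database name, and retry numbers are placeholder assumptions, not values from this project:

        // Minimal client-side sketch (hypothetical values throughout).
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static class PseudoListenerClient
        {
            static SqlConnection OpenViaAlias()
            {
                // Always the CNAME alias, never a physical server name;
                // the SQL Agent jobs repoint the alias after a failover.
                const string connStr =
                    "Data Source=SPSQL.contoso.com;Initial Catalog=SharePoint_Config;Integrated Security=True";

                // Retry to ride out the window between failover and CNAME TTL expiry.
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        var conn = new SqlConnection(connStr);
                        conn.Open();
                        return conn;
                    }
                    catch (SqlException)
                    {
                        if (attempt >= 5) throw;   // give up after a few minutes
                        Thread.Sleep(TimeSpan.FromSeconds(30));
                    }
                }
            }
        }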

    Read the article

  • New Features in ASP.NET Web API 2 - Part I

    - by dwahlin
    I’m a big fan of ASP.NET Web API. It provides a quick yet powerful way to build RESTful HTTP services that can easily be consumed by a variety of clients. While it’s simple to get started with, it has a wealth of features such as filters, formatters, and message handlers that can be used to extend it when needed. In this post I’m going to provide a quick walk-through of some of the key new features in version 2. I’ll focus on two of my favorite features, related to routing and HTTP responses, and cover additional features in a future post.
    Attribute Routing
    Routing has been a core feature of Web API since its initial release and something that’s built into new Web API projects out-of-the-box. However, there are a few scenarios where defining routes can be challenging, such as nested routes (more on that in a moment) and any situation where a lot of custom routes have to be defined. For this example, let’s assume that you’d like to define the following nested route:
    /customers/1/orders
    This type of route would select a customer with an Id of 1 and then return all of their orders. Defining this type of route in the standard WebApiConfig class is certainly possible, but it isn’t the easiest thing to do for people who don’t understand routing well. Here’s an example of how the route shown above could be defined:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                config.Routes.MapHttpRoute(
                    name: "CustomerOrdersApiGet",
                    routeTemplate: "api/customers/{custID}/orders",
                    defaults: new { custID = 0, controller = "Customers", action = "Orders" }
                );

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );

                GlobalConfiguration.Configuration.Formatters.Insert(0, new JsonpFormatter());
            }
        }

    With attribute based routing, defining these types of nested routes is greatly simplified. To get started you first need to make a call to the new MapHttpAttributeRoutes() method in the standard WebApiConfig class (or a custom class that you may have created that defines your routes) as shown next:

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration config)
            {
                // Allow for attribute based routes
                config.MapHttpAttributeRoutes();

                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                );
            }
        }

    Once attribute based routes are configured, you can apply the Route attribute to one or more controller actions.
    Here’s an example:

        [HttpGet]
        [Route("customers/{custId:int}/orders")]
        public List<Order> Orders(int custId)
        {
            var orders = _Repository.GetOrders(custId);
            if (orders == null)
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
            }
            return orders;
        }

    This example maps the custId route parameter to the custId parameter in the Orders() method and also ensures that the route parameter is typed as an integer. The Orders() method can be called using the following route:
    /customers/2/orders
    While this is extremely easy to use and gets the job done, it doesn’t include the default “api” string on the front of the route that you might be used to seeing. You could add “api” in front of the route and make it “api/customers/{custId:int}/orders”, but then you’d have to repeat that across other attribute-based routes as well. To simplify this type of task you can add the RoutePrefix attribute above the controller class as shown next so that “api” (or whatever the custom starting point of your route is) is applied to all attribute routes:

        [RoutePrefix("api")]
        public class CustomersController : ApiController
        {
            [HttpGet]
            [Route("customers/{custId:int}/orders")]
            public List<Order> Orders(int custId)
            {
                var orders = _Repository.GetOrders(custId);
                if (orders == null)
                {
                    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
                }
                return orders;
            }
        }

    There’s much more that you can do with attribute-based routing in ASP.NET. Check out the following post by Mike Wasson for more details.
    Returning Responses with IHttpActionResult
    The first version of Web API provided a way to return custom HttpResponseMessage objects which were pretty easy to use overall. However, Web API 2 now wraps some of the functionality available in version 1 to simplify the process even more. A new interface named IHttpActionResult (similar to ActionResult in ASP.NET MVC) has been introduced which can be used as the return type for Web API controller actions. To return a custom response you can use new helper methods exposed through ApiController such as: Ok, NotFound, Exception, Unauthorized, BadRequest, Conflict, Redirect, and InvalidModelState. Here’s an example of how IHttpActionResult and the helper methods can be used to clean up code. This is the typical way to return a custom HTTP response in version 1:

        public HttpResponseMessage Delete(int id)
        {
            var status = _Repository.DeleteCustomer(id);
            if (status)
            {
                return new HttpResponseMessage(HttpStatusCode.OK);
            }
            else
            {
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
        }

    With version 2 we can replace HttpResponseMessage with IHttpActionResult and simplify the code quite a bit:

        public IHttpActionResult Delete(int id)
        {
            var status = _Repository.DeleteCustomer(id);
            if (status)
            {
                //return new HttpResponseMessage(HttpStatusCode.OK);
                return Ok();
            }
            else
            {
                //throw new HttpResponseException(HttpStatusCode.NotFound);
                return NotFound();
            }
        }

    You can also clean up post (insert) operations using the helper methods.
    Here’s a version 1 post action:

        public HttpResponseMessage Post([FromBody]Customer cust)
        {
            var newCust = _Repository.InsertCustomer(cust);
            if (newCust != null)
            {
                var msg = new HttpResponseMessage(HttpStatusCode.Created);
                msg.Headers.Location = new Uri(Request.RequestUri + newCust.ID.ToString());
                return msg;
            }
            else
            {
                throw new HttpResponseException(HttpStatusCode.Conflict);
            }
        }

    This is what the code looks like in version 2:

        public IHttpActionResult Post([FromBody]Customer cust)
        {
            var newCust = _Repository.InsertCustomer(cust);
            if (newCust != null)
            {
                return Created<Customer>(Request.RequestUri + newCust.ID.ToString(), newCust);
            }
            else
            {
                return Conflict();
            }
        }

    More details on IHttpActionResult and the different helper methods provided by the ApiController base class can be found here.
    Conclusion
    Although there are several additional features available in Web API 2 that I could cover (CORS support for example), this post focused on two of my favorite features. If you have .NET 4.5.1 available then I definitely recommend checking the new features out. Additional articles that cover features in ASP.NET Web API 2 can be found here.
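    To see the attribute route from the other side of the wire, here is a small sketch of a client calling the nested endpoint with HttpClient; the base address is a placeholder assumption:

        // Hypothetical client for the attribute-routed endpoint shown above.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        static class OrdersClient
        {
            static async Task PrintOrdersAsync(int custId)
            {
                using (var client = new HttpClient())
                {
                    client.BaseAddress = new Uri("http://localhost:9000/"); // placeholder host

                    // Matches [RoutePrefix("api")] + [Route("customers/{custId:int}/orders")].
                    var response = await client.GetAsync("api/customers/" + custId + "/orders");
                    if (response.IsSuccessStatusCode)
                    {
                        Console.WriteLine(await response.Content.ReadAsStringAsync());
                    }
                    else
                    {
                        // NotFound() in the controller surfaces here as 404.
                        Console.WriteLine("Request failed: " + response.StatusCode);
                    }
                }
            }
        }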

    Read the article

  • Blogging tips for SQL Server professionals

    - by jamiet
    For some time now I have been intending to put some material together about my blogging experiences since I began blogging in 2004, and that led to me submitting a session for SQLBits recently where I intended to do just that. That didn’t get enough votes to allow me to present, however, so instead I resolved to write a blog post about it, and Simon Sabin’s recent post Blogging – how do you do it? has prompted me to get around to completing it. So, here I present a compendium of tips that I’ve picked up from authoring a fair few blog posts over the past 6 years.
    Feedburner
    Feedburner.com is a service that can consume your blog’s default RSS feed and provide another, replacement, feed that has exactly the same content. You can then supply that replacement feed on your blog site for other people to consume in their RSS readers. Why would you want to do this? Well, two reasons actually:
    It makes your blog portable. If you ever want to move your blog to a different URL you don’t have to tell your subscribers to move to a different feed. The Feedburner feed is a pointer to your blog content rather than being a copy of it.
    Feedburner will collect stats telling you how many people are subscribed to your feed, which RSS readers they use, stuff like that.
    Here’s a sample screenshot for http://sqlblog.com/blogs/jamie_thomson/: It also tells you what your most viewed posts are: Web stats like these are notoriously inaccurate, but then again the method of measurement here is not important; what IS important is that it gives you a trustworthy ranking of your blog posts, and (in my opinion) knowing which are your most popular posts is more important than knowing exactly how many views each post has had. This is just the tip of the iceberg of what Feedburner provides and I recommend every new blogger try it!
    Monitor subscribers using Google Reader
    If for some reason Feedburner is not to your taste or (more likely) you already have an established RSS feed that you do not want to change, then Google provide another way in which you can monitor your readership in the shape of their online RSS reader, Google Reader. It provides, for every RSS feed, a collection of stats including the number of Google Reader users that have subscribed to that RSS feed. This is really valuable information and in fact I have been recording this statistic for mine and a number of other blogs for a few years now, and as such I can produce the following chart that indicates how readership is trending for those blogs over time: [Good news for my fellow SQLBlog bloggers.] As Stephen Few readily points out, it’s not the numbers that are important but the trend.
    Search Engine Optimisation (SEO)
    SEO (or “How do I get my blog to show up in Google”) is a massive area of expertise which I don’t want (and am unable) to cover in much detail here, but there are some simple rules of thumb that will help:
    Tags – If your blog engine offers the ability to add tags to your blog post, use them. Invariably those tags go into the meta section of the page HTML and search engines lap that stuff up. For example, from my recent post Microsoft publish Visual Studio 2010 Database Project Guidance:
    Title – Search engines take notice of web page titles as well, so make them specific and descriptive (e.g. “Configuring dtsConfig connection strings”) rather than esoteric and meaningless in a vain attempt to be humorous (e.g. “Last night a DJ saved my ETL batch”)!
    Title(2) – Make your title even more search engine friendly by mentioning high level subject areas, not dissimilar to Twitter hashtags. For example, if you look at all of my posts related to SSIS you will notice that nearly all contain the word “SSIS” in the title, even if I had to shoehorn it in there by putting it in square brackets or similar. Another tip: if you ARE putting words into your titles in this artificial manner then put them at the end so that they’re not that prominent in search engine results; they’re there for the search engines to consume, not for human beings.
    Images – Always add titles and alternate text (ALT attribute) to images in your blog post. If you use Windows 7 or Windows Vista then you can use Live Writer (which Simon recommended), which makes this easy for you.
    Headings – If you want to highlight section headings use heading tags (e.g. <H1>, <H2>, <H3> etc…) rather than just formatting the text appropriately – again, Live Writer makes this easy. These tags give your blog posts structure that is understood by search engines and RSS readers alike. (I believe it makes them more amenable to CSS as well – though that’s not something I know too much about.) If you check the HTML source for the blog post you’re reading right now you’ll be able to scan through and see where I have used heading tags.
    Microsoft provide a free tool called the SEO Toolkit that will analyse your blog site (for free) and tell you what things you should change to improve SEO. Go read more and download for free at Search Engine Optimization Toolkit. Did I mention that it was free?
    Miscellaneous Tips
    If you are including code in your blog post then ensure it is formatted correctly. Use SQL Server Central’s T-SQL prettifier for formatting T-SQL code.
    Use images and videos. Personally speaking there’s nothing I like less when reading a blog than paragraph after paragraph of text. Images make your blog more appealing, which means people are more likely to read what you have written.
    Be original. Don’t plagiarise other people’s content and don’t simply rewrite the contents of Books Online.
    Every time you publish a blog post tweet a link to it. Include hashtags in your tweet that are more likely to grab people’s attention.
    That’s probably enough for now - I hope this blog post proves useful to someone out there. If you would appreciate a related session at a forthcoming SQLBits conference then please let me know. This will likely be my last blog post for 2010, so I would like to take this opportunity to thank everyone that has commented on, linked to or read any of my blog posts in that time. 2011 is shaping up to be a very interesting year for SQL Server observers, with the impending release of SQL Server code-named Denali, and I promise I’ll have lots more content on that as the year progresses. Happy New Year. @Jamiet

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer's Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them.

Virtualization
Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can "host" or run several "virtual" computers. These virtual computers can run anywhere - including at a vendor's location. Some companies refer to this as Cloud Computing, since the hardware is operated and maintained elsewhere.

IaaS
The more detailed definition of this type of computing is called Infrastructure as a Service (IaaS), since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, which runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud.

State-ful and Stateless Programming
This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains "state" - and that requires a little explanation. "State-ful programming" means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back, all in a single unit of time. In "Stateless" computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you're doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the "send" button (the network has the state) and you go to a meeting. The server receives the message and stores it on a mail program's database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state), responds to it, and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn't maintain the state; each component does. This is the exact concept behind coding for Windows Azure.
The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs, as Windows Azure does, you have built-in redundancy and recovery. It's just built into the design.

The Difference Between Infrastructure Designs and Platform Designs
When you simply take a physical server running software and virtualize it, either privately or publicly, you haven't done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines, which have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It's also not as easy to deploy these VMs, and more importantly, you're often charged on a longer basis to remove them. Your agility in IaaS is more limited. Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of Stateless programming. You don't worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn't what you want to manage, monitor, maintain or license. You don't want to deploy an operating system - you want to deploy an application. You want your code to run, and you don't care how it does that. Another benefit to PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there's no provisioning, it just happens. And you can stop using them more quickly - and the base code for your application does not have to change to make this happen.

Windows Azure Roles and Their Use
If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn't involve a user interface you can add a Worker Role. They are just containers that act a certain way. I'll provide more detail on those later. Note: that's a general description, so it's not entirely accurate, but it's accurate enough for this discussion. So now we're back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can't. That's not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run. Let's take an example. You've heard about Windows Azure, and Platform programming. You're convinced it's the right way to code. But you have a lot of things you've written in another way at your company. Re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can code (or already have coded) the software to be "Stateless"; you just need more control over the place where the code runs. That's the place where a VM Role makes sense.
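The original post contains no code, but a minimal sketch may help make the stateless Worker Role shape concrete. This is a hypothetical illustration, not Buck's code: the RoleEntryPoint base class comes from the Windows Azure SDK's Microsoft.WindowsAzure.ServiceRuntime assembly, while the helpers TryDequeueOrder and Process are invented placeholders for whatever messaging client you use. The point is only that each message carries everything needed to do the work, so any instance (or a replacement instance) can pick it up:

// A hypothetical stateless worker - a sketch under the assumptions above.
// Each loop iteration is self-contained: pull one message, act on it, and
// hold no conversation state between iterations, so Azure can add or
// recycle copies of this role at will.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class OrderWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            string order;
            if (TryDequeueOrder(out order))
                Process(order);     // should be idempotent: safe to retry on another node
            else
                Thread.Sleep(1000); // nothing queued; poll again shortly
        }
    }

    // Placeholders standing in for a real queue client (e.g. Azure queue storage).
    private bool TryDequeueOrder(out string order) { order = null; return false; }
    private void Process(string order) { /* business logic goes here */ }
}

Contrast this with the telephone-conversation model described above: nothing in the loop assumes that the same instance that dequeued one message will see the next one.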
Recap
Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft's offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running Stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.

    Read the article

  • Silverlight for Windows Embedded Tutorial (step 5 and a bit of Windows Phone 7)

    - by Valter Minute
    If you haven't spent the last week in the middle of the Sahara desert or traveling on a sled around the North Pole, you should have heard something about the launch of Windows Phone 7 Series (or Windows Phone Series 7, or Windows Series Phone 7, or something like that). Even if you are in the middle of the desert or somewhere around the North Pole you may have been reached by the news, since it seems that WP7S (using the full name will kill my available bandwidth!) is generating a lot of buzz in the development and IT communities. One of the most important aspects of this new platform is that it will be programmed using a new set of tools and frameworks, completely different from the ones used on older releases of Windows Mobile (or SmartPhone, or PocketPC, or whatever…). WP7S applications can be developed using Silverlight or XNA. If you want to learn something more about WP7S development you can download the preview of Charles Petzold's book about it: http://www.charlespetzold.com/phone/index.html Charles Petzold is also the author of "Programming Windows", the first book I ever read about programming on Windows (it was Windows 3.0 at that time!). The fact that even I was able to learn how to develop Windows applications is proof of the quality of Petzold's work. This book is up to his standards, and the 150-page preview is already rich in technical content without being boring or complicated to understand. I may be able to become a Windows Phone developer thanks to Mr. Petzold. Mr. Petzold uses some nice samples to introduce the basic concepts of Silverlight development on WP7S. On this new platform you'll use managed code to develop your application, so those samples can't be ported to Windows CE R3 as they are, but I would like to take one of the first samples (called "SilverlightTapHello1") and adapt it to Silverlight for Windows Embedded to show that even plain old native code can be used to develop "cool" user interfaces! The sample shows the standard WP7S title header and a text block with a hello world message inside it. When the user touches the text block, it will change its color. When the user touches the background (Grid) behind it, its default color (plain old White) will be restored. Let's see how we can implement the same features on our embedded device! I took the XAML code of the sample (you can download the book samples here: http://download.microsoft.com/download/1/D/B/1DB49641-3956-41F1-BAFA-A021673C709E/CodeSamples_DRAFTPreview_ProgrammingWindowsPhone7Series.zip) and changed it a little bit to remove references to WP7S or the managed runtime. If you compare the resulting files you will see that I was able to keep all the resources inside the App.xaml files and the structure of MainPage.XAML almost intact.
This is the Silverlight for Windows Embedded version of MainPage.XAML:

<UserControl x:Class="SilverlightTapHello1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phoneNavigation="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Navigation"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="800"
    FontFamily="{StaticResource PhoneFontFamilyNormal}"
    FontSize="{StaticResource PhoneFontSizeNormal}"
    Foreground="{StaticResource PhoneForegroundBrush}"
    Width="640" Height="480">

  <Grid x:Name="LayoutRoot" Background="{StaticResource PhoneBackgroundBrush}">
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto"/>
      <RowDefinition Height="*"/>
    </Grid.RowDefinitions>

    <!--TitleGrid is the name of the application and page title-->
    <Grid x:Name="TitleGrid" Grid.Row="0">
      <TextBlock Text="SILVERLIGHT TAP HELLO #1" x:Name="textBlockPageTitle" Style="{StaticResource PhoneTextPageTitle1Style}"/>
      <TextBlock Text="main page" x:Name="textBlockListTitle" Style="{StaticResource PhoneTextPageTitle2Style}"/>
    </Grid>

    <!--ContentGrid is empty. Place new content here-->
    <Grid x:Name="ContentGrid" Grid.Row="1" MouseLeftButtonDown="ContentGrid_MouseButtonDown" Background="{StaticResource PhoneBackgroundBrush}">
      <TextBlock x:Name="TextBlock" Text="Hello, Silverlight for Windows Embedded!" HorizontalAlignment="Center" VerticalAlignment="Center" />
    </Grid>
  </Grid>
</UserControl>

If you compare it to the WP7S sample (not reported here to avoid any copyright issue) you'll notice that I had to replace the original phoneNavigation:PhoneApplicationPage with UserControl as the root node. This makes sense because there is no support for phone applications on CE 6. I also had to specify the width and height of my main page (on the WP7S device this will be adjusted by the OS), and I had to replace the multi-touch event handler with the MouseLeftButtonDown event (still no multi-touch support in Windows CE R3). I also changed the hello message, of course. I used XAML2CPP to generate the boring part of our application and then added the initialization code to WinMain:

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
{
    // Initialize the XAML runtime and get the application singleton.
    if (!XamlRuntimeInitialize())
        return -1;

    HRESULT retcode;
    IXRApplicationPtr app;
    if (FAILED(retcode = GetXRApplicationInstance(&app)))
        return -1;

    // Load the resource dictionary (our App.xaml contents) from the module's resources.
    XRXamlSource dictsrc;
    dictsrc.SetResource(hInstance, TEXT("XAML"), IDR_XAML_App);
    if (FAILED(retcode = app->LoadResourceDictionary(&dictsrc, NULL)))
        return -1;

    // Create and display the main page.
    MainPage page;
    if (FAILED(page.Init(hInstance, app)))
        return -1;

    UINT exitcode;
    if (FAILED(page.GetVisualHost()->StartDialog(&exitcode)))
        return -1;

    return exitcode;
}

You may have noticed that there is something different from the previous samples: I added the code to load a resource dictionary. Resources are an important feature of XAML that allow you to define values that can be referenced inside any XAML file loaded by the runtime. You can use resources to define custom styles for your fonts, backgrounds, controls etc., and to support internationalization by providing different strings for different languages. The rest of our WinMain isn't that different. It creates an instance of our MainPage object and displays it.
The MainPage class implements an event handler for the MouseLeftButtonDown event of the ContentGrid:

class MainPage : public TMainPage<MainPage>
{
public:
    HRESULT ContentGrid_MouseButtonDown(IXRDependencyObject* source, XRMouseButtonEventArgs* args)
    {
        HRESULT retcode;
        IXRSolidColorBrushPtr brush;
        IXRApplicationPtr app;

        if (FAILED(retcode = GetXRApplicationInstance(&app)))
            return retcode;

        // Create a new solid color brush.
        if (FAILED(retcode = app->CreateObject(IID_IXRSolidColorBrush, &brush)))
            return retcode;

        // White by default; a random color if the TextBlock itself was clicked.
        COLORREF color = RGBA(0xff, 0xff, 0xff, 0xff);
        if (args->pOriginalSource == TextBlock)
            color = RGBA(rand() & 0xFF, rand() & 0xFF, rand() & 0xFF, 0xFF);

        if (FAILED(retcode = brush->SetColor(color)))
            return retcode;

        if (FAILED(retcode = TextBlock->SetForeground(brush)))
            return retcode;

        return S_OK;
    }
};

As you can see, this event is generated when a user clicks inside the grid or inside one of the objects it contains. Since our TextBlock is inside the grid, we don't need to provide an event handler for its MouseLeftButtonDown event. We can just use the pOriginalSource member of the event arguments to check if the event was generated inside the TextBlock. If the event was generated inside the grid we create a white brush; if it's inside the TextBlock we create a randomly colored brush. Notice that we need to use the RGBA macro to create colors, also specifying a transparency value for them. If we use the RGB macro the resulting color will have its Alpha channel set to zero and will be transparent. Using the SetForeground method we can change the color of our control. You can compare this to the managed code that you can find on pages 40-41 of Petzold's preview book and you'll see that the native version isn't much more complex than the managed one. As usual you can download the full code of the sample here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/SilverlightTapHello1.zip And remember to pre-order Charles Petzold's "Programming Windows Phone 7 Series" - I bet it will be a best-seller! Technorati Tags: Silverlight for Windows Embedded,Windows CE

    Read the article

  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is the 27th episode of the Notes from the Field series. The biggest challenge for anyone is to understand human nature. We humans have so many things on our minds at any given moment. There are cases when what we say is not what we mean, and there are cases where what we mean we do not say. We say and do things according to our mood and the agenda on our mind. Sometimes our attitude creates confusion in the communication and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our career which is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times.

In this week's note from the field, I'm taking a slight departure from technical knowledge and concepts. We'll be back to it next week, I'm sure. Pinal wanted us to explain some of the issues we bump into, how we see some of our customers arrive at problem situations, and how we have helped get them back on the right track. Often it is a technical problem we are officially solving – but in a lot of cases, as consultants, we are really helping fix some communication difficulties. This is a technical blog post and not an "advice column" in a newspaper – but the longer I am a consultant and the more years I add to my experience in technology, the more I learn that the vast majority of the problems we encounter have "soft skills" included in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive, but I hope that sharing a few pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges and looking at fixing them in ourselves. Or perhaps we can begin looking at resolving them in teams that we manage. I'll share three statements that I've either heard, read or said, and talk about some of the communication or attitude challenges highlighted by each statement.

1 – "But that's the SAN Administrator's responsibility…"
I heard that early on in my consulting career when talking with a customer who had serious corruption and no good recent backups – potentially no good backups at all. The statement doesn't have to be this one exactly, but the attitude here is an attitude of "my job stops here, and I don't care about the intent or principle of why I'm here." It's also a situation of having the attitude that as long as there is someone else to blame, I'm fine… You see, in this case the DBA had a suspicion that the backups were not being handled right. They were the DBA and they knew that they had responsibility to ensure SQL backups were good to go – it's a basic requirement of a production DBA. In my "As A DBA Where Do I Start?!" presentation, I argue that it is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility, it was decided to just let it be another team's responsibility. This failed the company, the company's customers, and no one won. As technologists, we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities and we know it, we should push to get it resolved – especially as DBAs, who should act as the advocates of the data contained in the databases we are responsible for.

2 – "We've always done it this way, it's never caused a problem before!"
Complacency.
I have to say that many failures I've been paid good money to help recover from would not have happened had it not been for an attitude of complacency. If any thoughts like this have entered your mind about your situation, you may be suffering from it. If, while reading this, you get that sinking feeling in your stomach about the one thing you know should be fixed but haven't fixed yet, why don't you stop, go fix it, then come back…
"We should have better backups, but we're on a SAN so we should be fine really."
"Technically speaking that could happen, but what are the chances?"
"We'll just clean that up as a fast follow."
…and so on. In the age of tightening IT budgets and increased expectations of uptime, availability and performance, there is no room for complacency. Our customers and business units expect – no, demand – the best. Complacency says "we will give you second best or hopefully good enough, and we accept the risk and know this may hurt us later." Sometimes an organization will opt for "good enough", and I agree with the concept that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum and are not reporting them up and discussing them as an organization, that is different. That is us unilaterally choosing to do something less than the best and purposefully playing a game of chance.

3 – "This device must accept interference from other devices but not create any"
I've paraphrased this one – but it's something the Federal Communications Commission – a federal agency in the United States that regulates electronic communication – requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/) so I won't go into much detail other than to say this: if we all operated more on the premise that we should do our best not to be the cause of conflict, and to be less easily offended and less upset when we perceive offense, life would be easier in many areas! This doesn't always cause the issues we are called in to help with. Not directly. But where we see it is in unhealthy relationships between the various technology teams at a client. We'll see teams hoarding knowledge, not sharing well with others, and almost working against other teams instead of working with them. If you trace these problems back far enough, they often stem from someone or some group of people violating this principle from the FCC.

To Sum It Up
Technology problems are easy to solve. At Linchpin People we help many customers get past the toughest technological challenge – and at the end of the day it is really just a repeatable process of pattern-based troubleshooting, logical thinking, and starting at the beginning and carefully stepping through to the end. The tough part of what we do as consultants is the people skills. Being able to help get teams working together, being able to help teams take responsibility, to improve team-to-team communication? That is the difficult part, and we get to use the soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and see continuing improvement here, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach.
If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
    It's the sad truth of my life that even though I have been fascinated by airplanes and flight in general since my childhood days, my body doesn't like flying. Even the ridiculously short flights inside Germany take their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, and factor in a heavy dose of jet lag (especially on my way eastwards), and you get an idea why, after coming back home, I had this question on my mind: Was it really worth it to attend MIX10? This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague Jens Schaller to the MIX10 conference. (A note to my German readers: Comma Soft is still looking for Silverlight developers and/or UI designers for its Bonn location – informative applications please to [email protected]) To keep things short: my answer is yes. Before I go into detail, let me ask the heretical question of whether tech conferences in general still make sense. There was a time when actually being at a tech conference gave you a head start in learning about new technologies. Nowadays this is no longer true: every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you can even download the video-taped sessions shortly after. So: does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you'll neither get exclusive information nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows:

Focus
Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference.

Inspiration
With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have to put work into reviewing the information later, but it's a better starting point. And all the impressions collected at a (good) conference combined lead to a higher motivation – be it by the buzz ("this is gonna be sooo cool!") or by the fear of falling behind ("man, we'll have to work on this, or else…").

People
At a conference it's pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course; nobody will tell you about their secret formula) and what they are thinking about specific technologies.

So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the "soft" topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come.
Actually, I would have liked to catch a few technical sessions as well, but the agenda wasn't exactly in favor of people interested in both Silverlight and UI/UX/design topics. That's one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences visited in the past, I'd say that MIX10 ranks somewhere near the top spot. Here's an overview of the workshops/sessions I attended (I'll leave out the keynotes):

Day 0 (Workshops on Sunday)
Design Fundamentals for Developers: Robby Ingebretsen is the man! A great workshop in three parts with the perfect mix of examples, well-structured definition of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company, so he not only has a strong interest in design (and the skillz!) but also the technical background.
Design Tools and Techniques: Originally announced to be held by Arturo Toledo; the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting, as it showed how much thought went into the Windows Phone UI (and there's always a lot to learn when people talk about their thought process).

Day 1 (Monday)
Designing Rich Experiences for Data-Centric Applications: I wonder whether there was ever a test run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minute session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.
Great User Experiences: Seamlessly Blending Technology & Design: I switched to this session from the one above, but I guess I missed the interesting part – what I did catch looked like a "look at the cool stuff we did" without being helpful. Or maybe I was just in a bad mood after the other session.
The Art, Technology and Science of Reading: This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit of the helpful advice one could have hoped for.
10 Ways to Attack a Design Problem and Come Out Winning: Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous "wow" from Jens and me. If you watch only one session video, this should be it. Microsoft has to bring Robby back next year!

Day 2 (Tuesday)
Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences: A very interesting session by Jason Brush; a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!
Designing Bing: Heart and Science: How hard can it be to design the UI for a search engine? An input field and a list of results; that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations needed to finally get it right (up to the choice of a specific blue for the links).
And yes, I want an eye-tracking device to play around with!
The Elephant in the Room: When Nishant Kothary presented a long list of what his session was not about, I said to myself (not having the description text at hand) "Am I in the wrong talk? Should I leave?". Boy, was I wrong. A great talk about human factors in the process of designing stuff.
An Hour with Bill Buxton: Having seen Bill Buxton's presentation in the keynote, I just had to see this man again – even though I didn't know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn't provide a wealth of immediately useful information. Nevertheless Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session sometime in the evening and not in parallel with at least two other interesting talks.

Day 3 (Wednesday)
Design the Ordinary, Like the Fixie: This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used fancy presentation software other than PowerPoint that sometimes got in the way of showing things.
Developing Natural User Interfaces: Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint: a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what "multi-touch support" on various platforms (WPF, Silverlight, Windows Phone) actually means.
Treat Your Content Right: The talk by Tiffani Jones Brown wasn't even on my planned schedule, but somehow I ended up in that session – and it was great. Even for people who don't necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put text with more than a single word into your UI.
Creating Effective Info Viz in Microsoft Silverlight: The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to infographics and interactive visualizations. Then the live coding began, and while the result was interesting, too much time was spent wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources.

Conclusion
It could be felt all across MIX10: UIs will take a huge leap forward; in fact, there are enough examples that already have. People who have both the technical know-how and at least a basic understanding of design ("literacy" as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox show that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and put buttons on a form. There are extremely exciting times before us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • Social Business Forum Milano: Day 1

    - by me
    Here are my impressions of the first day of the Social Business Forum in Milano.

A dialogue on Social Business Manifesto - Emanuele Scotti, Rosario Sica
The presentation focused on theses and anti-theses around Social Business. My favorite one is this tweet:
Peter H. Reiser (@peterreiser): social business manifesto theses #2: organizations are conversations - hello Oracle Social Network #sbf12
Here are the theses (auto-translated from Italian to English).

From Stress to Success - Pragmatic pathways for Social Business - John Hagel
John Hagel talked about the challenges of deploying new social technologies. Below are some key points participants tweeted during the session:
Rhiannon Hughes (@Rhi_Hughes): Favourite quote this morning 'We need to strengthen the champions & neutralise the enemies' John Hagel. Not a hard task at all #sbf12
Elena Torresani (@ElenaTorresani): Minimize the power of the enemies of change. Maximize the power of the champions - John Hagel #sbf12
Gaetano Mazzanti (@mgaewsj): John Hagel change: minimize the power of the enemies #sbf12
Gaetano Mazzanti (@mgaewsj): John Hagel social software as band-aid for poor leadtime/waste management? mmm #sbf12
Elena Torresani (@ElenaTorresani): "information is power. We need access to information to get power" John Hagel, Deloitte & Touche #sbf12 http://instagr.am/p/LcjgFqMXrf/
Italo Marconi (@italomarconi): Information is power and Knowledge is subversive. John Hagel #sbf12
danielce (@danielce): #sbf12 john Hagel: innovation is not rational.
Gaetano Mazzanti (@mgaewsj): John Hagel: change is a political (not rational) process #sbf12

Enterprise gamification to drive engagement - Ray Wang
Ray Wang gave an excellent talk on engagement strategies and gamification. More details can be found on the Harvard Business Review blog.

Panel Discussion: Does technology matter? Understanding how software enables or prevents participation - Christian Finn, Ram Menon, Mike Gotta, moderated by Paolo Calderari
Below are the highlights of the panel discussion, as live-tweeted by Peter H.
Reiser (@peterreiser):
Q: social silos: mega trend social suites - do we create social silos + apps silos + org silos ... #sbf12
@cfinn A: Social will be less siloed - more integrated into application design. Analytics is key to make intelligent decisions #sbf12
@MikeGotta A: it's more social by design than social by layer - better work experience using social design #sbf12
Ram Menon A: Social + Mobile + consumerization are coming together #sbf12
Q: What is the evolution of social business solutions in the next 4-5 years? #sbf12
@cfinn on adoption: User experience is king - no training needed - we let you participate in a conversation via mobile and email #sbf12
@MikeGotta on adoption: how can we measure quality? Literacy - are people confident enough to talk to an invisible audience? #sbf12
Ram Menon on adoption: What should I measure? Depends on the business goal you want to achieve #sbf12
Q: How can technology facilitate adoption? #sbf12
#sbf12 @cfinn @mgotta Ram Menon at panel discussion about social technology @oraclewebcenter http://pic.twitter.com/Pquz73jO
Ram Menon: 100% of data is in a system somewhere. 100% of collective intelligence is with people. Social systems bridge both worlds
#sbf12 @MikeGotta: Adoption is specific to the culture of the company
@cfinn: driving adoption is important. @MikeGotta: activity stream + watch list are the most important features in a social system #sbf12
@MikeGotta: Why just adoption? Email has 100% adoption #sbf12
@MikeGotta: Ram Menon responds: there is only one question to ask: What is the adoption? @socialadoption you like this? #sbf12
@MikeGotta: just replacing old technology (e.g. email) with new technology does not help. We need to change the model/attitude #sbf12
Ram Menon: a CEO mandated to replace 6500 email aliases with Social Networking Software #sbf12
@MikeGotta A: How to bring interfaces together #sbf12. Going from point tools to platform; UI, architecture + ecosystem are important
Q: How is technology important in Social Business? #sbf12 A: @cfinn - technology is an enabler; user experience and ease of use are important
@cfinn participates in panel "Does technology matter? Understanding how software enables or prevents participation" #sbf #webcenter

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I'll-share-it-in-a-blog-post blog posts. I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment time they all have a default value of -1. Of course, they tend to get changed during development, so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option, but given that we have over 40 packages and we might want to carry out this reset fairly frequently, I needed a more automated method, so I turned to Visual Studio's Find/Replace… feature. Of course, we don't know what value will be in that parameter, so I can't simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I'll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>); if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

<?xml version="1.0"?>
<DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
  <DTS:PackageParameters>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="InterfaceHistory_ID: used for Lineage"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
      DTS:ObjectName="ETLIfcHist_ID">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
    </DTS:PackageParameter>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="Some other description"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
      DTS:ObjectName="SomeOtherObjectName">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
    </DTS:PackageParameter>
  </DTS:PackageParameters>
</DTS:Executable>

We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document:

{\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

I have highlighted the name of the parameter that we're looking for.
I have also highlighted two portions identified by pairs of curly braces "{…}"; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here:

<DTS:PackageParameters>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="InterfaceHistory_ID: used for Lineage"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
      DTS:ObjectName="ETLIfcHist_ID">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
    </DTS:PackageParameter>

Those sections in the curly braces are termed tag expressions and can be referred to in the replace expression using a backslash and a number identifying which tag expression you mean, according to its ordinal position. Hence, our replace expression is simply:

\1-1\2

We're saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal -1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio's Find and Replace dialog. If you set it to look in "All Open Documents" then you can open up the code-behind of all your packages and change all of them at once. That's it! I realise that not everyone will be looking to change the value of a parameter, but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y'know, on the web, I have no doubt that someone is going to find a fault with my find regex expression, and if that person is you… that's OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet
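As an aside (not part of Jamie's original post), the same reset could be scripted outside Visual Studio. Below is a minimal, hypothetical C# console sketch using .NET's Regex class, where ordinary groups (...) and the $1/$2 substitutions play the role of Visual Studio's {…} tag expressions and \1/\2. The folder path is invented and the pattern is deliberately simplified, so treat it as a starting point rather than a drop-in tool:

// Sketch: reset the ETLIfcHist_ID parameter default to -1 in every .dtsx
// file in a folder. Groups $1 and $2 capture the text either side of the
// value, mirroring the tag-expression trick described above.
using System;
using System.IO;
using System.Text.RegularExpressions;

class ResetParameterDefaults
{
    static void Main()
    {
        string folder = @"C:\MyEtlProject"; // hypothetical path to the packages

        // Group 1: up to and including the ParameterValue opening tag for the
        // ETLIfcHist_ID parameter; group 2: the closing tag. [^<]* is the value.
        var find = new Regex(
            @"(DTS:ObjectName=""ETLIfcHist_ID"">\s*<DTS:Property\s+DTS:DataType=""3""\s+DTS:Name=""ParameterValue"">)[^<]*(</DTS:Property>)",
            RegexOptions.Singleline);

        foreach (string file in Directory.GetFiles(folder, "*.dtsx"))
        {
            string xml = File.ReadAllText(file);
            string replaced = find.Replace(xml, "$1-1$2");
            if (replaced != xml)
            {
                File.WriteAllText(file, replaced);
                Console.WriteLine("Reset ETLIfcHist_ID default in " + file);
            }
        }
    }
}

The same caveat from the post applies here: parameters carrying extra attributes (such as DTS:Required) would need a looser pattern.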

    Read the article

  • VLC 2.0.3 on Lubuntu 12.04: No audio?

    - by drezabek
    I am on Lubuntu 12.04, and I have installed VLC media player version 2.0.3. When I try to play an audio file, it appears to load fine, the media position bar displays the progress, and it says it is playing, but I can't hear anything through my speakers. I can hear game audio, web audio, and audio from SMPlayer just fine, but with VLC, I can't hear anything. Below is the "Messages" output with the verbosity option set to "2 (debug)":
main debug: processing request item: The Bottom, node: Playlist, skip: 0 main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: starting playback of the new playlist item main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: creating new input thread main debug: Creating an input for 'The Bottom' main debug: TIMER input launching for 'Floex - Machinarium Soundtrack - 01 The Bottom.flac' : 23.706 ms - Total 23.706 ms / 1 intvls (Avg 23.706 ms) main debug: using timeshift granularity of 50 MiB, in path '/tmp' main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' gives access `file' demux `' path `/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access_demux module: 3 candidates main debug: no access_demux module matching "file" could be loaded main debug: TIMER module_need() : 2.332 ms - Total 2.332 ms / 1 intvls (Avg 2.332 ms) main debug: creating access 'file' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac', path='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access module: 2 candidates filesystem debug: opening file `/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: using access module "filesystem" main debug: TIMER module_need() : 0.762 ms - Total 0.762 ms / 1 intvls (Avg 0.762 ms) main debug: Using stream method for AStream* main debug: starting pre-buffering main debug: received first data after 0 ms main debug: pre-buffering done 1024 bytes in 0s - 43478 KiB/s main debug: looking for stream_filter module: 7 candidates main debug: no stream_filter module matching "any" could be loaded main debug: TIMER module_need() : 0.236 ms - Total 0.236 ms / 1 intvls (Avg 0.236 ms) main debug: looking for stream_filter module: 1 candidate main debug: using stream_filter module "stream_filter_record" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for demux module: 54 candidates flacsys debug: Picture type=3 mime=image/png description='' file length=679371 qt4 debug: IM: Setting an input main debug: looking
for packetizer module: 21 candidates
main debug: using packetizer module "packetizer_flac"
main debug: TIMER module_need() : 0.211 ms - Total 0.211 ms / 1 intvls (Avg 0.211 ms)
main debug: using demux module "flacsys"
main debug: TIMER module_need() : 4.023 ms - Total 4.023 ms / 1 intvls (Avg 4.023 ms)
main debug: looking for a subtitle file in /home/doug/Music/unsorted/Floex - Machinarium Soundtrack/
main debug: looking for meta reader module: 2 candidates
main debug: using meta reader module "taglib"
main debug: TIMER module_need() : 5.245 ms - Total 5.245 ms / 1 intvls (Avg 5.245 ms)
main debug: removing module "taglib"
main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' successfully opened
main debug: selecting program id=0
main debug: looking for decoder module: 30 candidates
main debug: using decoder module "flac"
main debug: TIMER module_need() : 0.442 ms - Total 0.442 ms / 1 intvls (Avg 0.442 ms)
main debug: Buffering 0%
flac debug: decode STREAMINFO
flac debug: channels:2 samplerate:44100 bitspersamples:16
flac debug: STREAMINFO decoded
main debug: Buffering 30%
main debug: recycling audio output
main debug: looking for audio output module: 3 candidates
main debug: Buffering 61%
pulse debug: using stereo channel map
pulse debug: using library version 1.1.0
pulse debug: (compiled with version 1.1.0, protocol 26)
main debug: Buffering 92%
main debug: Stream buffering done (371 ms in 2 ms)
pulse debug: connected locally to unix:/home/doug/.pulse/dce22254e867f905188a2ce200000003-runtime/native as client #14
pulse debug: using protocol 26, server protocol 26
pulse debug: using buffer metrics: maxlength=4194304, tlength=9880, prebuf=0, minreq=3528
pulse debug: connected to sink 0: alsa_output.pci-0000_00_14.2.analog-stereo
main debug: using audio output module "pulse"
main debug: TIMER module_need() : 4.571 ms - Total 4.571 ms / 1 intvls (Avg 4.571 ms)
main debug: output 's16l' 44100 Hz Stereo frame=1 samples/4 bytes
main debug: mixer 'f32l' 44100 Hz Stereo frame=1 samples/8 bytes
main debug: filter(s) 'f32l'->'s16l' 44100 Hz->44100 Hz Stereo->Stereo
main debug: looking for audio filter module: 14 candidates
audio_format debug: f32l->s16l, bits per sample: 32->16
main debug: using audio filter module "audio_format"
main debug: TIMER module_need() : 0.187 ms - Total 0.187 ms / 1 intvls (Avg 0.187 ms)
main debug: conversion pipeline completed
main debug: looking for audio mixer module: 2 candidates
main debug: using audio mixer module "float32_mixer"
main debug: TIMER module_need() : 0.125 ms - Total 0.125 ms / 1 intvls (Avg 0.125 ms)
main debug: input 's16l' 44100 Hz Stereo frame=1 samples/4 bytes
main debug: looking for audio filter module: 1 candidate
scaletempo debug: format: 44100 rate, 2 nch, 4 bps, fl32
scaletempo debug: params: 30 stride, 0.200 overlap, 14 search
scaletempo debug: 1.000 scale, 1323.000 stride_in, 1323 stride_out, 1059 standing, 264 overlap, 617 search, 2204 queue, fl32 mode
main debug: using audio filter module "scaletempo"
main debug: TIMER module_need() : 0.233 ms - Total 0.233 ms / 1 intvls (Avg 0.233 ms)
main debug: filter(s) 's16l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
pulse debug: listing sink alsa_output.pci-0000_00_14.2.analog-stereo (0): Built-in Audio Analog Stereo
main debug: looking for audio filter module: 14 candidates
audio_format debug: s16l->f32l, bits per sample: 16->32
main debug: using audio filter module "audio_format"
main debug: TIMER module_need() : 0.147 ms - Total 0.147 ms / 1 intvls (Avg 0.147 ms)
main debug: conversion pipeline completed
pulse debug: base volume: 65536
main debug: looking for audio filter module: 1 candidate
equalizer debug: equalizer loaded for 44100 Hz with 10 bands 2 pass
equalizer debug: 60 Hz -> factor:0.000000 alpha:0.003013 beta:0.993973 gamma:1.993901
equalizer debug: 170 Hz -> factor:0.000000 alpha:0.008490 beta:0.983019 gamma:1.982437
equalizer debug: 310 Hz -> factor:0.000000 alpha:0.015374 beta:0.969252 gamma:1.967331
equalizer debug: 600 Hz -> factor:0.000000 alpha:0.029328 beta:0.941343 gamma:1.934254
equalizer debug: 1000 Hz -> factor:0.000000 alpha:0.047918 beta:0.904163 gamma:1.884869
equalizer debug: 3000 Hz -> factor:0.000000 alpha:0.130408 beta:0.739184 gamma:1.582718
equalizer debug: 6000 Hz -> factor:0.000000 alpha:0.226555 beta:0.546889 gamma:1.015267
equalizer debug: 12000 Hz -> factor:0.000000 alpha:0.344937 beta:0.310127 gamma:-0.181410
equalizer debug: 14000 Hz -> factor:0.000000 alpha:0.366438 beta:0.267123 gamma:-0.521151
equalizer debug: 16000 Hz -> factor:0.000000 alpha:0.379009 beta:0.241981 gamma:-0.808451
main debug: using audio filter module "equalizer"
main debug: TIMER module_need() : 0.353 ms - Total 0.353 ms / 1 intvls (Avg 0.353 ms)
main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
main debug: conversion pipeline completed
main debug: looking for visualization2 module: 1 candidate
main debug: looking for text renderer module: 2 candidates
freetype debug: Building font databases.
freetype debug: Took 0 microseconds
freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf
freetype debug: using fontsize: 2
main debug: using text renderer module "freetype"
main debug: TIMER module_need() : 3.278 ms - Total 3.278 ms / 1 intvls (Avg 3.278 ms)
main debug: looking for video filter2 module: 18 candidates
swscale debug: 32x32 chroma: YUVA -> 16x16 chroma: RGBA with scaling using Bicubic (good quality)
main debug: using video filter2 module "swscale"
main debug: TIMER module_need() : 1.037 ms - Total 1.037 ms / 1 intvls (Avg 1.037 ms)
main debug: looking for video filter2 module: 18 candidates
yuvp debug: YUVP to YUVA converter
main debug: using video filter2 module "yuvp"
main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms)
main debug: Deinterlacing available
main debug: deinterlace 0, mode blend, is_needed 0
main debug: Opening vout display wrapper
main debug: looking for vout display module: 6 candidates
main debug: looking for vout window xid module: 4 candidates
qt4 debug: requesting video...
qt4 debug: Video was requested 0, 0
main debug: using vout window xid module "qt4"
main debug: TIMER module_need() : 61.671 ms - Total 61.671 ms / 1 intvls (Avg 61.671 ms)
main debug: looking for inhibit module: 2 candidates
main debug: using inhibit module "xdg_screensaver"
main debug: TIMER module_need() : 0.336 ms - Total 0.336 ms / 1 intvls (Avg 0.336 ms)
xdg_screensaver debug: started xdg-screensaver (PID = 6682)
xcb_xv debug: connected to X11.0 server
xcb_xv debug: vendor : The X.Org Foundation
xcb_xv debug: version: 11103000
xcb_xv debug: using screen 0x15a
xcb_xv debug: using XVideo extension v2.2
xcb_xv debug: using adaptor NV17 Video Texture
xcb_xv debug: using port 310
xcb_xv debug: using image format 0x30323449
xcb_xv debug: using X11 visual ID 0x21 (depth: 24)
xcb_xv debug: using X11 window 0x03400000
xcb_xv debug: using X11 graphic context 0x03400002
main debug: VoutDisplayEvent 'fullscreen' 0
main debug: VoutDisplayEvent 'resize' 800x500 window
main debug: using vout display module "xcb_xv"
main debug: TIMER module_need() : 69.890 ms - Total 69.890 ms / 1 intvls (Avg 69.890 ms)
main debug: original format sz 800x500, of (0,0), vsz 800x500, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0
main debug: removing module "freetype"
main debug: looking for text renderer module: 2 candidates
freetype debug: Building font databases.
freetype debug: Took 0 microseconds
freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf
freetype debug: using fontsize: 2
main debug: using text renderer module "freetype"
main debug: TIMER module_need() : 4.552 ms - Total 4.552 ms / 1 intvls (Avg 4.552 ms)
main debug: using visualization2 module "visual"
main debug: TIMER module_need() : 84.104 ms - Total 84.104 ms / 1 intvls (Avg 84.104 ms)
main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
main debug: conversion pipeline completed
main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo
main debug: conversion pipeline completed
main debug: filter(s) 'f32l'->'f32l' 48510 Hz->44100 Hz Stereo->Stereo
main debug: looking for audio filter module: 14 candidates
main debug: using audio filter module "samplerate"
main debug: TIMER module_need() : 0.375 ms - Total 0.375 ms / 1 intvls (Avg 0.375 ms)
main debug: conversion pipeline completed
main debug: End of audio preroll
main debug: Decoder buffering done in 91 ms
main warning: PTS is out of range (-9269), dropping buffer
pulse debug: deferring start (190703 us)
main debug: looking for video blending module: 1 candidate
main debug: using video blending module "blend"
main debug: TIMER module_need() : 0.275 ms - Total 0.275 ms / 1 intvls (Avg 0.275 ms)
main debug: Detected interlaced video
main debug: deinterlace 0, mode blend, is_needed 1
xcb_xv debug: display is visible
pulse debug: starting deferred
pulse warning: too late by 93760 us
pulse debug: changed sample rate to 44186 Hz
pulse debug: started
pulse warning: too late by 94474 us
pulse debug: changed sample rate to 44229 Hz
pulse warning: too late by 93532 us
pulse debug: changed sample rate to 44272 Hz
pulse warning: too late by 92829 us
pulse debug: changed sample rate to 44315 Hz
pulse warning: too late by 92132 us
pulse debug: changed sample rate to 44358 Hz
xcb_xv debug: display is visible
pulse warning: too late by 91534 us
pulse debug: changed sample rate to 44401 Hz
xcb_xv debug: display is visible
pulse warning: too late by 89482 us
pulse debug: changed sample rate to 44440 Hz
xcb_xv debug: display is visible
xcb_xv debug: display is visible
xcb_xv debug: display is visible
pulse warning: too late by 87529 us
pulse debug: changed sample rate to 44479 Hz
pulse warning: too late by 84577 us
pulse debug: changed sample rate to 44504 Hz
main debug: auto hiding mouse cursor
pulse warning: too late by 78562 us
pulse debug: changed sample rate to 44492 Hz
pulse warning: too late by 68015 us
pulse debug: changed sample rate to 44422 Hz
xcb_xv debug: display is visible
xcb_xv debug: display is visible
xcb_xv debug: display is visible
xcb_xv debug: display is visible
main debug: auto hiding mouse cursor
pulse debug: changed sample rate to 44336 Hz
xcb_xv debug: display is visible
xcb_xv debug: display is visible
xcb_xv debug: display is visible
main debug: auto hiding mouse cursor

I have had issues with VLC in the past: the audio quality was extremely crackly, as if the headphone jack were only plugged in halfway, and the sounds were extremely sharp and caused my speakers to make a ringing/vibrating noise. It would eventually start working after I messed around with the audio settings, but it happened after every restart. I eventually switched to SMPlayer, but now I need some of the features that VLC offers, yet I still can't use VLC. At this point, the audio cannot be heard at all, and the method I used before, messing around with the audio settings, isn't getting me anywhere. (Note: I reposted this on VideoLAN's forums, here: http://forum.videolan.org/viewtopic.php?f=13&t=104726) Please let me know if you need more information or are confused by something I posted! Thanks!
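Since the pulse warnings above ("too late by ... us", followed by repeated sample-rate changes) point at output timing trouble rather than decoding, one hedged experiment is to reset VLC's saved configuration and try a different audio output module from a shell. This is a sketch, not a confirmed fix; the flags are standard VLC command-line options, but the file name below is only an example:

# Reset VLC's saved preferences and plugin cache, which earlier
# "messing around with the audio settings" may have left inconsistent:
vlc --reset-config --reset-plugins-cache

# Force the ALSA output module instead of PulseAudio to see whether
# the "too late" resampling warnings disappear (file name illustrative):
vlc --aout alsa "01 The Bottom.flac"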

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now.  But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them.  This made me think that writing up a few articles discussing the different features would be a good idea.  Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007.  Those documents are well worth reading - the Beginners Guide especially, although based on LDoms 1.0, is still a good place to start.  However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today.  In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).

So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms.  LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware.  This virtual hardware is then used to create a number of guest systems which each behave very much like a system running on bare metal: each has its own OBP, each will install its own copy of the Solaris OS, and each will see a certain amount of CPU, memory, disk and network resources available to it.  Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set and of the overall CPU architecture to fulfill its function.

The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS.  For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket.  To the OS, this looks like 64 CPUs.  The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do.  The hypervisor only assigns CPUs and then steps aside.  It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests.

Likewise, memory is assigned exclusively to individual guests.  Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory.  Again, the hypervisor is not involved in the actual memory access; it only maintains isolation between guests.

During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain.  Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources.  If you were running the system un-virtualized, this would be what you'd be working with.  To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory.  This frees up most of the available CPU and memory resources for guest domains.  IO is a little more complex, but still straightforward.  When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services.
In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices, released in version 2.2 of Oracle VM Server for SPARC.  I will cover these more advanced features in detail later.  For now, let's have a short look at the initial way IO was virtualized in LDoms: for virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch.  You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction.  These IO services connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains.

For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest.  That guest would be assigned a virtual disk, which appears to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device.  For network, the vswitch acts very much like a real, physical Ethernet switch: you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system.  For completeness, there is another service that provides console access to guest domains, which mimics the behavior of serial terminal servers.

The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor.  It uses so-called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services.  These LDCs work very much like high-speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.

To see all this in action, let's now look at a first example.  I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems.  In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.

root@sun # ldm list
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
primary          active     -n-c--  UART    512   261632M  0.3%  2d 13h 58m
root@sun # ldm add-vconscon port-range=5000-5100 \
primary-console primary
root@sun # svcadm enable vntsd
root@sun # svcs vntsd
STATE          STIME    FMRI
online          9:53:21 svc:/ldoms/vntsd:default
root@sun # ldm set-vcpu 16 primary
root@sun # ldm set-mau 1 primary
root@sun # ldm start-reconf primary
root@sun # ldm set-memory 7680m primary
root@sun # ldm add-config initial
root@sun # shutdown -y -g0 -i6

So what have I done?  I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service.  The vntsd will later provide console connections to guest systems, very much like serial NTSs do in the physical world.  Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems.  I also assigned one MAU to this domain.  A MAU is a crypto unit in the T3 CPU.  These need to be explicitly assigned to domains, just like CPU or memory.  (This is no longer the case with T4 systems, where crypto is always available everywhere.)  Before I reassigned the memory, I started what's called a "delayed reconfiguration" session.
That avoids actually doing the change right away, which would take a considerable amount of time in this case.  Instead, I'll need to reboot once I'm all done.  I've assigned 7680MB of RAM to the primary.  That's 8GB less the 512MB which the hypervisor uses for its own private purposes.  You can, depending on your needs, work with less.  I'll spend a dedicated article on sizing, discussing the pros and cons in detail.  Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a power-cycle of the box.  (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)

Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2.  We will later use these to connect virtual disks and virtual network ports of our guest systems to real-world storage and network.

root@sun # ldm add-vds primary-vds
root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary

You are free to choose whatever names you like for the virtual disk service and the virtual switch.  I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation.  For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.

This already concludes the configuration of the control domain.  We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world.  The system is now ready to create guests, which I'll describe in the next section.

For further reading, here are some recommended links:
The LDoms 2.2 Admin Guide
The "Beginners Guide to LDoms"
The LDoms Information Center on MOS
LDoms on OTN
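To make that next step a little more concrete, here is a minimal, hypothetical sketch of how a first guest could then be created with the same ldm tool.  The guest name (guest1), the vCPU and memory sizes, and the backing LUN /dev/dsk/c2t1d0s2 are illustrative assumptions, not values from the article:

# All names and sizes below are assumptions for illustration only.
root@sun # ldm add-domain guest1
root@sun # ldm set-vcpu 8 guest1
root@sun # ldm set-memory 8g guest1
# Attach a virtual network port to the vswitch created above:
root@sun # ldm add-vnet vnet0 switch-primary guest1
# Export a LUN through the virtual disk service and hand it to the guest:
root@sun # ldm add-vdsdev /dev/dsk/c2t1d0s2 guest1-boot@primary-vds
root@sun # ldm add-vdisk vdisk0 guest1-boot@primary-vds guest1
root@sun # ldm bind guest1
root@sun # ldm start guest1

Once bound and started, the guest's console should be reachable through vntsd on one of the ports defined earlier, for example with telnet localhost 5000.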

    Read the article

  • Networking in VirtualBox

    - by Fat Bloke
Networking in VirtualBox is extremely powerful, but can also be a bit daunting, so here's a quick overview of the different ways you can set up networking in VirtualBox, with a few pointers as to which configurations should be used and when.

VirtualBox allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest VM (although only 4 are exposed in the GUI), and for each of these NICs you can configure:
Which virtualized NIC-type is exposed to the Guest.  Examples include: Intel PRO/1000 MT Server (82545EM), AMD PCNet FAST III (Am79C973, the default) or a Paravirtualized network adapter (virtio-net).
How the NIC operates with respect to your Host's physical networking.  The main modes are: Network Address Translation (NAT), Bridged networking, Internal networking, Host-only networking, and NAT with Port-forwarding.

The choice of NIC-type comes down to whether the guest has drivers for that NIC.  VirtualBox suggests a NIC based on the guest OS-type that you specify during creation of the VM, and you rarely need to modify this.  But the choice of networking mode depends on how you want to use your VM (client or server) and whether you want other machines on your network to see it.  So let's look at each mode in a bit more detail...

Network Address Translation (NAT)

This is the default mode for new VMs and works great in most situations when the Guest is a "client" type of VM (i.e. most network connections are outbound).  Here's how it works: when the guest OS boots, it typically uses DHCP to get an IP address.  VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections.  In this mode, every VM is assigned the same IP address (10.0.2.15), because each VM thinks it is on its own isolated network.  And when it sends traffic via the gateway (10.0.2.2), VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host).  This means that the Guest will work even as the Host moves from network to network (e.g. a laptop moving between locations), and from wireless to wired connections too.

However, how does another computer initiate a connection into a Guest, e.g. connecting to a web server running in the Guest?  This is not (normally) possible using NAT mode, as there is no route into the Guest OS.  So for VMs running servers we need a different networking mode...

Bridged Networking

Bridged Networking is used when you want your VM to be a full network citizen, i.e. to be an equal to your host machine on the network.  In this mode, a virtual NIC is "bridged" to a physical NIC on your host.  The effect of this is that each VM has access to the physical network in the same way as your host.  It can access any service on the network such as external DHCP services, name lookup services, and routing information, just as the host does.

The downside of this mode is that if you run many VMs you can quickly run out of IP addresses, or your network administrator gets fed up with you asking for statically assigned IP addresses.  Secondly, if your host has multiple physical NICs (e.g. wireless and wired) you must reconfigure the bridge when your host jumps networks.  Hmm, so what if you want to run servers in VMs but don't want to involve your network administrator?  Maybe one of the next 2 modes is for you...
Internal Networking

When you configure one or more VMs to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to VMs on that virtual network.  The internal network (in this example "intnet") is a totally isolated network and so is very "quiet".  This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with VMs that provide their own services to the internal network (e.g. Active Directory, DHCP, etc).  Note that not even the Host is a member of the internal network, but this mode allows VMs to function even when the Host is not connected to a network (e.g. on a plane).

Note that in this mode, VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the VMs needs to provide a DHCP/name service.  Multiple internal networks are possible, and you can configure VMs to have multiple NICs that sit across internal and other network modes, and thereby provide routes if needed.  But all this sounds tricky.  What if you want an Internal Network that the host participates on, with VirtualBox providing IP addresses to the Guests?  Ah, then for this, you might want to consider Host-only Networking...

Host-only Networking

Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case "vboxnet0".  All VMs sitting on this "vboxnet0" network will see each other, and additionally, the host can see these VMs too.  However, other external machines cannot see Guests on this network, hence the name "Host-only".  This looks very similar to Internal Networking, but the host is now on "vboxnet0" and can provide DHCP services.  To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog.

Port-Forwarding with NAT Networking

Now you may think that we've provided enough modes here to handle every eventuality, but here's just one more...  What if you cart around a mobile demo or dev environment on, say, a laptop, and you have one or more VMs that you need other machines to connect into?  And you are continually hopping onto different (customer?) networks.  In this scenario:
NAT won't work, because external machines need to connect in.
Bridged is possibly an option, but does your customer want you eating IP addresses, and can your software cope with changing networks?
Internal is no good: we need the VM(s) to be visible on the network.
Host-only has the same problem: we want external machines to connect in to the VMs.
Enter Port-forwarding to save the day!  Configure your VMs to use NAT networking, add Port Forwarding rules, and external machines connect to "host":"port number", with the connections forwarded by VirtualBox to the guest:port number specified.  For example, if your VM runs a web server on port 80, you could set up a rule that reads: "any connections on port 8080 on the Host will be forwarded onto this VM's port 80".  This provides a mobile demo system which won't need re-configuring every time you open your laptop lid.

Summary

VirtualBox has a very powerful set of options allowing you to set up almost any configuration your heart desires.  For more information, check out the VirtualBox User Manual on Virtual Networking.  -FB
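For readers who prefer the command line, here is a hedged sketch of how these modes can be selected with VBoxManage.  The VM name "WebServerVM" and the interface names (eth0, vboxnet0) are assumptions for illustration; the flags match the VBoxManage syntax of VirtualBox releases of this era, but check the User Manual for your version:

# Assumed VM name "WebServerVM"; adapter names are illustrative.
# NAT with a port-forwarding rule: host port 8080 -> guest port 80
VBoxManage modifyvm "WebServerVM" --nic1 nat
VBoxManage modifyvm "WebServerVM" --natpf1 "guestweb,tcp,,8080,,80"
# Bridged networking on the host's wired interface
VBoxManage modifyvm "WebServerVM" --nic1 bridged --bridgeadapter1 eth0
# Internal networking on a private network named "intnet"
VBoxManage modifyvm "WebServerVM" --nic1 intnet --intnet1 intnet
# Host-only networking on the default host-only interface
VBoxManage modifyvm "WebServerVM" --nic1 hostonly --hostonlyadapter1 vboxnet0

Each rule string for --natpf1 follows the pattern "name,protocol,[host IP],host port,[guest IP],guest port", so the example above forwards host port 8080 to guest port 80, matching the web-server scenario described in the article.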

    Read the article

< Previous Page | 683 684 685 686 687 688 689 690 691 692 693 694  | Next Page >