Search Results

Search found 88714 results on 3549 pages for 'data type'.


  • Goodbye jQuery Templates, Hello JsRender

    - by SGWellens
    A funny thing happened on my way to the jQuery website: I blinked and a feature was dropped. jQuery Templates have been discontinued. The new pretender to the throne is JsRender. jQuery Templates looked pretty useful when they first came out. Several articles were written about them, but I stayed away because being on the bleeding edge of technology is not a productive place to be. I wanted to wait until it stabilized…in retrospect, it was a serendipitous decision. This time, however, I threw all caution to the wind and took a close look at JsRender. Why? Maybe I'm having a midlife crisis; I'll go motorcycle shopping tomorrow. Caveat, here is a message from the site: "Warning: JsRender is not yet Beta, and there may be frequent changes to APIs and features in the coming period." Fair enough, we've been warned.
    The first thing we need is some data to render. Below is some JSON-formatted data. Typically this will come from an asynchronous call to a web service. For simplicity, I hard-coded a variable:
        var Golfers = [
            { ID: "1", "Name": "Bobby Jones", "Birthday": "1902-03-17" },
            { ID: "2", "Name": "Sam Snead", "Birthday": "1912-05-27" },
            { ID: "3", "Name": "Tiger Woods", "Birthday": "1975-12-30" }
        ];
    We also need some templates; I created two. Note: the script blocks have the id property set. They are needed so JsRender can locate them.
        <script id="GolferTemplate1" type="text/html">
            {{=ID}}: <b>{{=Name}}</b> <i>{{=Birthday}}</i> <br />
        </script>

        <script id="GolferTemplate2" type="text/html">
            <tr>
                <td>{{=ID}}</td>
                <td><b>{{=Name}}</b></td>
                <td><i>{{=Birthday}}</i></td>
            </tr>
        </script>
    Including the correct JavaScript files is trivial:
        <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>
        <script src="Scripts/jsrender.js" type="text/javascript"></script>
    Of course we need some place to render the output:
        <div id="GolferDiv"></div><br />
        <table id="GolferTable"></table>
    The code is also trivial:
        function Test()
        {
            $("#GolferDiv").html($("#GolferTemplate1").render(Golfers));
            $("#GolferTable").html($("#GolferTemplate2").render(Golfers));

            // you can inspect the rendered html if there are problems.
            // var html = $("#GolferTemplate2").render(Golfers);
        }
    And here's what it looks like with some random CSS formatting that I had lying around. Not bad; I hope JsRender lasts longer than jQuery Templates. One final warning: a lot of jQuery code is ugly, butt-ugly. If you do look inside the jQuery files, you may want to cover your keyboard with some plastic in case you get vertigo and blow chunks. I hope someone finds this useful. Steve Wellens CodeProject

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer that wanted to understand how to deploy to WLS an application with ADF Security without using JDeveloper. The main question was what steps were needed in order to set up Enterprise Roles, Security Policies and Application Credentials. In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0.
    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application that contains all the security pieces that we need. Open and migrate the project. Also make sure you adjust the database settings accordingly.
    Creating the EAR file: Review the security settings of the application by going into the Application -> Secure menu and see that there are two enterprise roles as well as the ADF Policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type; also take note of the data source name - in my case I have: java:comp/env/jdbc/HrDS. To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category, and give a meaningful name to the context root as well as to the Application Name. Go to the ADFSecurityWL Application properties -> Deployment and create a new EAR deployment profile. Uncheck the Auto Generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment option. Deploy the application as an EAR file.
    Deploying the Application to WLS using the WLS Console: On the WLS console, create a JNDI data source. This is the part that I found the trickiest of the whole exercise, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now, deploy the application manually by selecting Deployments -> Install, look for the EAR and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes contain all the security policies and application roles inserted into the WLS instance.
    Creating Roles and User Groups for the Application: To finish the after-deployment setup, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them into their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • Introducing the Oracle MDM Blog - Why All MDM Solutions Aren't Equal

    - by ken.pulverman
    Welcome to the Oracle MDM Blog. Dave Butler, Tony Ouk, and I, Ken Pulverman, will be bringing you news and information from the world of MDM at Oracle. Dave is our resident expert with more than 30 years of experience in data and information management. Tony has deep expertise in our Exadata product line, which provides a strong hardware synergy with MDM. I come from Siebel Systems, where I helped found the team that built our integration product line and then our Universal Customer Master, which is part of our MDM offering at Oracle.
    I thought I'd hit the ground running with a topic we are going to want to continue to bend your ear about. We had a recent meeting with Ford Goodman, our head of MDM commercial sales in the US, and he was very fired up about an important topic. He's irked that all MDM solutions get painted with the same brush even though they aren't the same at all. There are companies out there trying to represent frameworks and toolkits as out-of-the-box solutions. They give you the pleasure (read pain) of doing things like developing your own multi-application data model, building your own web services, or creating your own APIs. Huh? What gets sold as flexibility is in reality a barrier to ever going live.
    At Siebel Systems we obsessed over the notion of a customer. Our data model took over 10 years to perfect, as defining a customer is a very complex task indeed. There are divisions, subsidiaries, branches, acquisitions, sites, etc., etc., etc. You'll want to do your homework, but trust me - you aren't going to want to take the time or resources to build these canonical data structures yourself. And what about APIs? Again, it sounds flexible. In reality it's a lot of work.
    Our DNA at Oracle is to reduce the cost of information technology, so we pre-integrate our technology with all of our major applications and pre-build integrations and connectors for all the major systems you work with. This is tedious work that requires detailed knowledge of the interfaces of all the applications involved. It is also version specific, as the interface features and technology are always changing. We have a substantial organization to manage this complexity so you don't have to. Suffice to say, we'd like to help our customers peel back the rhetoric of companies that fly the MDM flag without a real offering that you can quickly benefit from.
    Please watch this space for more information on this storyline as well as news and information around Oracle MDM.

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html
    In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter.
    To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the Data Controls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior.
    To change the input filter so that only a defined set of input keys is allowed, you need to replace the default filter field with your own af:inputText field to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column whose filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In the next step, open the Component Palette (Ctrl+Shift+P) and drag an Input Text component onto the facet. This marks the first part of the filter customization.
    To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder, expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes.
    Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field. Point the client listener method property to a JavaScript function that you reference or add using the af:resource tag, and set the type property value to keyDown.
        <af:document id="d1">
            <af:resource type="javascript" source="/js/filterHandler.js"/>
            …
    The filter definition looks as shown below:
        <af:inputText label="Label 1" id="it1"
                      value="#{vs.filterCriteria.Employe…}">
            <af:clientListener method="suppressCharacterInput"
                               type="keyDown"/>
        </af:inputText>
    The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag.
        //Allow numbers, cursor control keys and delete keys
        function suppressCharacterInput(evt) {
            var _keyCode = evt.getKeyCode();
            var _filterField = evt.getCurrentTarget();
            var _oldValue = _filterField.getValue();
            if (!((_keyCode < 57) || (_keyCode > 96 && _keyCode < 105))) {
                _filterField.setValue(_oldValue);
                evt.cancel();
            }
        }

        //Allow characters, cursor control keys and delete keys
        function suppressNumericInput(evt) {
            var _keyCode = evt.getKeyCode();
            var _filterField = evt.getCurrentTarget();
            var _oldValue = _filterField.getValue();
            //check for numbers
            if ((_keyCode < 57 && _keyCode > 47) ||
                (_keyCode > 96 && _keyCode < 105)) {
                _filterField.setValue(_oldValue);
                evt.cancel();
            }
        }
    But what if browsers don't allow JavaScript? Don't worry about this. If browsers did not support JavaScript, then ADF Faces as a whole would not work and you would have a different problem.

    Read the article

  • What scientific plotting software is available?

    - by Helix
    I am currently doing some experimental work and I have a lot of data to trawl through. I use Gnumeric, and it's very good, but often I feel there has to be something better. Ideally I would like the maximum number of features with a minimal learning curve, but really I'd just like to know if there is something better than Gnumeric that I can use for manipulating and plotting data. What would you recommend?

    Read the article

  • Autoscaling in a modern world…. Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don't find the setup too practical. Using a local console application to host the Autoscaler and rules files is probably the (IMHO) least likely architecture. Far more common would be hosting in a service on premise (if you want to have the Autoscaler local) or, most likely, hosting it in an Azure role of its own. I chose to go the Azure route.
    First step was to get the rules.xml and the services.xml files into the cloud. I tend to be a "one step at a time" sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed to be the logical first step. Here are the steps:
    1) Create a container in the storage account you wish to use. The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.
    2) Copy the two files from where you created them to your container. I used the same files I had locally. I made the container public to eliminate security issues, but in the final application, a bit of security needs to be applied (one problem at a time). The content type was set to text/xml. I found one reference claiming the importance of this step, and it makes sense.
    3) Adjust the app.config to set the location of the files. This will let you set all the storage account and key information needed to reach into the cloud from your console application. The sections of your app.config will look like this:
        <rulesStores>
          <add name="Blob Rules Store"
               type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]"
               blobName="rules.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30"
               certificateThumbprint=""
               certificateStoreLocation="LocalMachine"
               checkCertificateValidity="false" />
        </rulesStores>
        <serviceInformationStores>
          <add name="Blob Service Information Store"
               type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               blobContainerName="[ContainerName]"
               blobName="services.xml"
               storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
               monitoringRate="00:00:30"
               certificateThumbprint=""
               certificateStoreLocation="LocalMachine"
               checkCertificateValidity="false" />
        </serviceInformationStores>
    Once I had the files up in the sky, I renamed the local copies just to make myself feel better about the application using the correct set of rules and services. Deploy the web role to the cloud. Once it is up and running, start the console application. You should find the application scales up and down in response to the buttons on the web site. Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information from diagnostics into storage, and a set of discussions about certs and how they play a role.

    Read the article

  • O'Reilly Deal of the Day 10/June/2014 - AngularJS Directives

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/10/orsquoreilly-deal-of-the-day-10june2014---angularjs-directives.aspxToday’s half-price E-Book offer from O’Reilly at http://shop.oreilly.com/product/9781783280339.do is AngularJS Directives. “AngularJS, propelled by Google, is quickly becoming one of the most popular JavaScript MVC frameworks available, working to invert the development paradigm and bring data-driven modularity to the web frontend. Directives serve as the core building blocks in AngularJS and enable you to create reusable models that mold around your data structures and breathe new life into the intersection of HTML and JavaScript.”

    Read the article

  • All the posts in LINQ series

    - by vik20000in
    In the last few weeks I have done a few LINQ series posts. Here is a list of all the posts done:
    - Filtering data in LINQ with the help of where clause
    - Using Take and Skip keywords to filter records in LINQ
    - TakeWhile and SkipWhile method in LINQ
    - LINQ and ordering of the result set
    - Grouping data in LINQ with the help of group keyword
    - Using set operation in LINQ
    - LINQ and conversion operators
    - Retrieving only the first record or record at a certain index in LINQ
    - Using Generation operator in LINQ
    - Working with Joins in LINQ
    - LINQ and Aggregate function
    Vikram
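    As a quick illustration of a few of the operators these posts cover (where, Take, ordering, grouping and an aggregate), here is a minimal, self-contained C# sketch; the Product class and sample values are invented purely for the example:
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Product
        {
            public string Name { get; set; }
            public string Category { get; set; }
            public decimal Price { get; set; }
        }

        class LinqDemo
        {
            static void Main()
            {
                var products = new List<Product>
                {
                    new Product { Name = "Pen",     Category = "Office", Price = 1.50m },
                    new Product { Name = "Stapler", Category = "Office", Price = 7.25m },
                    new Product { Name = "Apple",   Category = "Food",   Price = 0.80m },
                    new Product { Name = "Bread",   Category = "Food",   Price = 2.10m }
                };

                // Filtering (where), ordering the result set, and limiting with Take
                var cheapest = products.Where(p => p.Price < 5m)
                                       .OrderBy(p => p.Price)
                                       .Take(2);

                // Grouping data and applying an aggregate function per group
                var totalsByCategory = products.GroupBy(p => p.Category)
                                               .Select(g => new { Category = g.Key, Total = g.Sum(p => p.Price) });

                foreach (var p in cheapest)
                    Console.WriteLine("{0} costs {1}", p.Name, p.Price);
                foreach (var t in totalsByCategory)
                    Console.WriteLine("{0} total: {1}", t.Category, t.Total);
            }
        }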

    Read the article

  • Google I/O 2011: GIS with Google Earth and Google Maps

    Google I/O 2011: GIS with Google Earth and Google Maps Josh Livni, Mano Marks Building a robust interactive map with a lot of data involves more than just adding a few placemarks. We'll talk about integrating with existing GIS software, importing data from shapefiles and other formats, map projections, and techniques for managing, analyzing, and rendering large datasets. From: GoogleDevelopers Views: 3785 19 ratings Time: 52:25 More in Science & Technology

    Read the article

  • Silverlight Cream for April 17, 2010 -- #839

    - by Dave Campbell
    In this Issue: ITLackey, SilverLaw, Max Paulousky, Alex Yakhnin, Paul Sheriff, Douglas, Jeremy Likness, Tomasz Janczuk, Anoop Madhusudanan, Adam Kinney, and Ashish Shetty. Shoutout: If you haven't already seen it, CrocusGirl did a great job of summarizing Day 2 of DevConnections with her Silverlight 4 Launch Notes From SilverlightCream.com: RIA Services - IIS6 Virtual Directory Deployment ITLackey has a post up building on his previous post on Windows Authentication with RIA Services and discusses deploying to an IIS Virtual Directory. How To: Determine ChildWindow Position At Runtime - Silverlight 3 SilverLaw has a post up about determining the position of a ChildWindow at run-time, for example after the user moves it. Modularity in Silverlight Applications - An Issue With ModuleInitializeException – Part 2 Max Paulousky has part 2 of his series up on Modularity in Silverlight... he discusses using XAML as a catalog and registering modules at runtime, and compares to WPF. Creating LINQ Data Provider for WP7 (Part 1) Alex Yakhnin has a first cut at a LINQ Data Provider for WP7 ... I was expecting this to hit pretty soon, because we're all going to want it... check out the code and d/l the project. Synchronize Data between a Silverlight ListBox and a User Control Paul Sheriff demonstrates databinding in XAML between local data in a ListBox and a UserControl. The beginnings of Silverlight development with Expression Blend Douglas has a good post up on beginning your Silverlight development with Expression Blend. He covers a lot of ground in this post. Converting Silverlight 3 to Silverlight 4 Jeremy Likness has a video up demonstrating converting Silverlight 3 to Silverlight 4 with download links and also using commanding on buttons. Debugging WCF RIA Services with WCF traces Tomasz Janczuk has a post up discussing the use of WCF RIA Services traces to help diagnose and debug problems in a deployed service. Bing Maps + oData + Windows Phone 7 - Nerd Dinner Client For Windows Phone 7 Check out what Anoop Madhusudanan has provided... Nerd Dinner for WP7, including OData and BingMaps... just very cool! A few cool new features added in Expression Blend 4 RC Adam Kinney announced the availability of the new Expression Blend and highlights some of the new features... like MakeLayoutPath... FTW! Of Crashing and Sometimes Burning Ashish Shetty has a discourse posted about where the causes of errors might come from, what to expect from the platform, where to find crash dumps, and links to more reading. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Reading All Users Session

    - by imran_ku07
      Introduction :            InProc Session is the widely used state management. Storing the session state Inproc is also the fastest method and is well-suited to small amounts of volatile data. Reading and writing current user Session is very easy. But some times we need to read all users session before taking a decision or sometimes we may need to check which users are currently active with the help of Session. But unfortunately there is no class in .Net Framework (i don't found myself) to read all user InProc Session Data. In this article i will use reflection to read all user Inproc Session.   Description :              This code will work equally in both MVC and webform, but for demonstration i will use a simple webform example. So let's create a simple Website and Add two aspx pages, Default.aspx and Default2.aspx. In Default.aspx just add a link to navigate to Default2.aspx and in Default.aspx.cs just add a Session. Default.aspx: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="Default" %><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html ><head runat="server">    <title>Untitled Page</title></head><body>    <form id="form1" runat="server">    <div>        <a href="Default2.aspx">Click to navigate to next page</a>    </div>    </form></body></html>  Default.aspx.cs:  using System;using System.Data;using System.Configuration;using System.Collections;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Web.UI.HtmlControls;public partial class Default : System.Web.UI.Page{    protected void Page_Load(object sender, EventArgs e)    {        Session["User"] = "User" + DateTime.Now;    }} Now when every user click this link will navigate to Default2.aspx where all the magic appears.Default2.aspx.cs: using System;using System.Data;using System.Configuration;using System.Collections;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Web.UI.HtmlControls;using System.Reflection;using System.Web.SessionState;public partial class Default2 : System.Web.UI.Page{    protected void Page_Load(object sender, EventArgs e)    {        object obj = typeof(HttpRuntime).GetProperty("CacheInternal", BindingFlags.NonPublic | BindingFlags.Static).GetValue(null, null);        Hashtable c2 = (Hashtable)obj.GetType().GetField("_entries", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(obj);        foreach (DictionaryEntry entry in c2)        {            object o1 = entry.Value.GetType().GetProperty("Value", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(entry.Value, null);            if (o1.GetType().ToString() == "System.Web.SessionState.InProcSessionState")            {                SessionStateItemCollection sess = (SessionStateItemCollection)o1.GetType().GetField("_sessionItems", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(o1);                if (sess != null)                {                    if (sess["User"] != null)                    {                        Label1.Text += sess["User"] + " is Active.<br>";                    }                }            }        }    }}            Now just open more than one browsers or more than one browser instance and then navigate to Default.aspx and click the link, you will see all the user's Session data.    
    How this works: InProc session data is stored in the HttpRuntime's internal cache in an implementation of ISessionStateItemCollection that implements ICollection. In this code, I first got the CacheInternal static property of the HttpRuntime class, and then with the help of this object I got the _entries private member, which is of type ICollection. Then I simply enumerated this dictionary, took only objects of type System.Web.SessionState.InProcSessionState, and finally got the SessionStateItemCollection for each user.
    Summary: In this article, I showed you how you can get all current user Sessions. However, one thing you will find when executing this code is that it will not show the current user's Session which is set in the current request context, because the Session is saved only after all the Page events.

    Read the article

  • Add Firefox’s Awesome Bar Bookmark Search Function to Chrome and Iron

    - by Asian Angel
    Do you have a large number of bookmarks saved in your Chromium-based browser and need a quick way to search through them? Then see how easy it is to search through those bookmarks just like Firefox users do, with the AwesomeBar extension. To engage the bookmark search function, type "ab" in the Address Bar as seen above and press either Tab or the Space Bar. That will display the AwesomeBar prefix bar as seen below. Enter the desired text to begin your search. For our example we decided to conduct a search for bookmarks related to the Ubuntu Twitter client Hotot. The results will continue to narrow down nicely as you type… Typing just a bit more finishes narrowing our search down the rest of the way for Hotot-related items. Install the AwesomeBar Extension [Google Chrome Extensions]

    Read the article

  • Creating XML in SQL Server

    XML has become a common form of representing and exchanging data in today's information age. SQL Server introduced XML-centric capabilities in SQL Server 2000. That functionality has been expanded in later releases. One aspect of working with XML is creating XML from relational data, which is accomplished utilizing the FOR XML clause in SQL Server.
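    As a simple illustration of the FOR XML clause, here is a minimal C# sketch that runs a FOR XML query and reads the generated XML with SqlCommand.ExecuteXmlReader; the table, column names and connection string are invented for the example:
        using System;
        using System.Data.SqlClient;
        using System.Xml;

        class ForXmlDemo
        {
            static void Main()
            {
                // Hypothetical connection string and table - adjust to your own environment.
                string connectionString = "Server=.;Database=Sales;Integrated Security=true";
                string query = "SELECT CustomerId, Name FROM dbo.Customers FOR XML AUTO, ROOT('Customers')";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();

                    // ExecuteXmlReader streams the XML document produced by the FOR XML clause.
                    using (XmlReader reader = command.ExecuteXmlReader())
                    {
                        var document = new XmlDocument();
                        document.Load(reader);
                        Console.WriteLine(document.OuterXml);
                    }
                }
            }
        }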

    Read the article

  • Backup Azure Tables, schedule Azure scripts… and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it’s just the beginning?   Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following: * Backup SQL Database (and SQL Server to a limited extend) * Backup Azure Tables * Restore SQL Backups into another SQL environment * Restore Azure Tables in Azure Storage, or SQL Environment * Manage and schedule database maintenance scripts * Drop database schema containers (with preview) for SaaS environments * Receive alerts (SMTP) when operations complete or fail That’s it at a high level… but you need to see the flexibility around these features. For example you can select a specific backup strategy for Azure Tables allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time. So you can execute a backup request to both a local file, and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, backup specific schemas, and even drop all objects in a given schema. For example the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • Silverlight Cream for June 28, 2011 -- #1112

    - by Dave Campbell
    In this Issue: WindowsPhoneGeek, John Papa, Mike Taulty, Erno de Weerd, Stephen Price, Chris Rouw, Peter Kuhn, Damian Schenkelman, Michael Washington, and Manas Patnaik. Above the Fold: Silverlight: "Binding to View Model properties in Data Templates. The RootBinding Markup Extension" Damian Schenkelman WP7: "Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3" Chris Rouw LightSwitch: "Saving Files To File System With LightSwitch (Uploading Files)" Michael Washington Shoutouts: Steve Wortham announced a change to his XilverlightXAP.com site... they're now accepting XAML illustrations: Introducing XAML Illustrations, Increased Payouts to Contributors, and More Amid all the discussions that I've tried to avoid, Michael Washinton is Betting The House On LightSwitch From SilverlightCream.com: Dynamically updating a data bound LongListSelector in Windows Phone WindowsPhoneGeek's latest is on using the LongListSelector from the Toolkit and dynamically updating it with data... detailed guidelines and plenty of pictures and code as always. Silverlight TV 77: Exploring 3D with Aaron Oneal John Papa has Silverlight TV number 77 up and is chatting with Aaron Oneal, program manager of the Silverlight 3D efforts... too cool. Silverlight WebBrowser Control for Offline Apps (Part 2) Mike Taulty wrote this post in Silverlight 5 Beta, but says it should be fine in 4... a continuation of his HTML Content display using the WebBrowser control while offline Windows Phone 7: Databinding and the Pivot Control Erno de Weerd discusses the Pivot control in WP7 based on his attempts to use it in an app. Required Attribute on an Entity Stephen Price has a new post at XAML Source... first is this one on setting the required attribute and the troubles you can get into if it's not set correctly Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3 Chris Rouw has Part 3 of his series on Storing files in SQL Server using FILESTREAM Storage in SQL Server 2008 and Silverlight... this time he's viewing files stored in the FILESTREAM from the LOB app. Getting ready for the Windows Phone 7 Exam 70-599 (Part 4) Peter Kuhn has Part 4 of his series on getting ready for the WP7 exam up at SilverlightShow... the date is coming up soon... are you ready? Binding to View Model properties in Data Templates. The RootBinding Markup Extension Damian Schenkelman has a Silverlight 5 Beta post up... digging into the XAML Markup Extensions and popping out a RootBindingExtensionthat helps bind to a property in a view model from a DataTemplate. Saving Files To File System With LightSwitch (Uploading Files) Michael Washington has a cool tutorial up on his new LightSwitch Help Website... File Upload to a server file system using LightSwitch, plus a project to download... good stuff! Microsoft Media Platform (MMPPF): Player Framework for Silverlight Manas Patnaik's latest post is about the Media Player Project... some of the history of media with Silvelight and how to go about using the Media Player Project bits Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches to external sources such as Google, Flickr, YouTube, etc right from within Explorer. While we do have the Desktop Integration Suite which offers searching within Explorer, I thought it would be interesting to look into this method which would not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might looks like: http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch %3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat= Universal&SearchProviders=server& Now you want to create a new text file and start out with this information: <?xml version="1.0" encoding="UTF-8"?><OpenSearchDescription xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName></ShortName> <Description></Description> <Url type="application/rss+xml" template=""/> <Url type="text/html" template=""/> </OpenSearchDescription> Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing except you want to copy the UCM search results URL from the page of results. That URL will look something like: http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId= &ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection= &MiniSearchText=oracle Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension. 
The completed file should look like: <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName>Universal Content Management</ShortName> <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description> <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/> <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/> </OpenSearchDescription> After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try and download the web viewable for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnail of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/' to the rss definition: <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/"> Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element: </images> <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" /> <description> This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • Using the BAM Interceptor with Continuation

    - by Charles Young
    Originally posted on: http://geekswithblogs.net/cyoung/archive/2014/06/02/using-the-bam-interceptor-with-continuation.aspxI’ve recently been resurrecting some code written several years ago that makes extensive use of the BAM Interceptor provided as part of BizTalk Server’s BAM event observation library.  In doing this, I noticed an issue with continuations.  Essentially, whenever I tried to configure one or more continuations for an activity, the BAM Interceptor failed to complete the activity correctly.   Careful inspection of my code confirmed that I was initializing and invoking the BAM interceptor correctly, so I was mystified.  However, I eventually found the problem.  It is a logical error in the BAM Interceptor code itself. The BAM Interceptor provides a useful mechanism for implementing dynamic tracking.  It supports configurable ‘track points’.  These are grouped into named ‘locations’.  BAM uses the term ‘step’ as a synonym for ‘location’.   Each track point defines a BAM action such as starting an activity, extracting a data item, enabling a continuation, etc.  Each step defines a collection of track points. Understanding Steps The BAM Interceptor provides an abstract model for handling configuration of steps.  It doesn’t, however, define any specific configuration mechanism (e.g., config files, SSO, etc.)  It is up to the developer to decide how to store, manage and retrieve configuration data.  At run time, this configuration is used to register track points which then drive the BAM Interceptor. The full semantics of a step are not immediately clear from Microsoft’s documentation.  They represent a point in a business activity where BAM tracking occurs.  They are named locations in the code.  What is less obvious is that they always represent either the full tracking work for a given activity or a discrete fragment of that work which commences with the start of a new activity or the continuation of an existing activity.  The BAM Interceptor enforces this by throwing an error if no ‘start new’ or ‘continue’ track point is registered for a named location. This constraint implies that each step must marked with an ‘end activity’ track point.  One of the peculiarities of BAM semantics is that when an activity is continued under a correlated ID, you must first mark the current activity as ‘ended’ in order to ensure the right housekeeping is done in the database.  If you re-start an ended activity under the same ID, you will leave the BAM import tables in an inconsistent state.  A step, therefore, always represents an entire unit of work for a given activity or continuation ID.  For activities with continuation, each unit of work is termed a ‘fragment’. Instance and Fragment State Internally, the BAM Interceptor maintains state data at two levels.  First, it represents the overall state of the activity using a ‘trace instance’ token.  This token contains the name and ID of the activity together with a couple of state flags.  The second level of state represents a ‘trace fragment’.   As we have seen, a fragment of an activity corresponds directly to the notion of a ‘step’.  It is the unit of work done at a named location, and it must be bounded by start and end, or continue and end, actions. When handling continuations, the BAM Interceptor differentiates between ‘root’ fragments and other fragments.  Very simply, a root fragment represents the start of an activity.  Other fragments represent continuations.  This is where the logic breaks down.  
The BAM Interceptor loses state integrity for root fragments when continuations are defined. Initialization Microsoft’s BAM Interceptor code supports the initialization of BAM Interceptors from track point configuration data.  The process starts by populating an Activity Interceptor Configuration object with an array of track points.  These can belong to different steps (aka ‘locations’) and can be registered in any order.  Once it is populated with track points, the Activity Interceptor Configuration is used to initialise the BAM Interceptor.  The BAM Interceptor sets up a hash table of array lists.  Each step is represented by an array list, and each array list contains an ordered set of track points.  The BAM Interceptor represents track points as ‘executable’ components.  When the OnStep method of the BAM Interceptor is called for a given step, the corresponding list of track points is retrieved and each track point is executed in turn.  Each track point retrieves any required data using a call back mechanism and then serializes a BAM trace fragment object representing a specific action (e.g., start, update, enable continuation, stop, etc.).  The serialised trace fragment is then handed off to a BAM event stream (buffered or direct) which takes the appropriate action. The Root of the Problem The logic breaks down in the Activity Interceptor Configuration.  Each Activity Interceptor Configuration is initialised with an instance of a ‘trace instance’ token.  This provides the basic metadata for the activity as a whole.  It contains the activity name and ID together with state flags indicating if the activity ID is a root (i.e., not a continuation fragment) and if it is completed.  This single token is then shared by all trace actions for all steps registered with the Activity Interceptor Configuration. Each trace instance token is automatically initialised to represent a root fragment.  However, if you subsequently register a ‘continuation’ step with the Activity Interceptor Configuration, the ‘root’ flag is set to false at the point the ‘continue’ track point is registered for that step.   If you use a ‘reflector’ tool to inspect the code for the ActivityInterceptorConfiguration class, you can see the flag being set in one of the overloads of the RegisterContinue method.    This makes no sense.  The trace instance token is shared across all the track points registered with the Activity Interceptor Configuration.  The Activity Interceptor Configuration is designed to hold track points for multiple steps.  The ‘root’ flag is clearly meant to be initialised to ‘true’ for the preliminary root fragment and then subsequently set to false at the point that a continuation step is processed.  Instead, if the Activity Interceptor Configuration contains a continuation step, it is changed to ‘false’ before the root fragment is processed.  This is clearly an error in logic. The problem causes havoc when the BAM Interceptor is used with continuation.  Effectively the root step is no longer processed correctly, and the ultimate effect is that the continued activity never completes!   This has nothing to do with the root and the continuation being in the same process.  It is due to a fundamental mistake of setting the ‘root’ flag to false for a continuation before the root fragment is processed. The Workaround Fortunately, it is easy to work around the bug.  The trick is to ensure that you create a new Activity Interceptor Configuration object for each individual step.  
    This may mean filtering your configuration data to extract the track points for a single step, or grouping the configured track points into individual steps and then creating a separate Activity Interceptor Configuration for each group. In my case, the first approach was required. Here is what the amended code looks like:
        // Because of a logic error in Microsoft's code, a separate ActivityInterceptorConfiguration must be used
        // for each location. The following code extracts only those track points for a given step name (location).
        var trackPointGroup = from ResolutionService.TrackPoint tp in bamActivity.TrackPoints
                              where (string)tp.Location == bamStepName
                              select tp;

        var bamActivityInterceptorConfig =
            new Microsoft.BizTalk.Bam.EventObservation.ActivityInterceptorConfiguration(activityName);

        foreach (var trackPoint in trackPointGroup)
        {
            switch (trackPoint.Type)
            {
                case TrackPointType.Start:
                    bamActivityInterceptorConfig.RegisterStartNew(trackPoint.Location, trackPoint.ExtractionInfo);
                    break;
                etc…
    I'm using LINQ to filter a list of track points for those entries that correspond to a given step and then registering only those track points on a new instance of the ActivityInterceptorConfiguration class. As soon as I re-wrote the code to do this, activities with continuations started to complete correctly.

    Read the article

  • how to use gps receiver bu-353 in ubuntu 10.10

    - by Parimal
    Hi, I have a GPS receiver BU-353 with a USB interface and I want to know how I can use it under Ubuntu. I ran the following command:
        gpsd -n -N -D 2 /dev/ttyUSB0
    I got the output:
        gpsd: launching (Version 2.94)
        gpsd: listening on port gpsd
        gpsd: running with effective group ID 1000
        gpsd: running with effective user ID 1000
        gpsd: opening GPS data source type 3 at '/dev/ttyUSB0'
        gpsd: speed 38400, 8N1
        gpsd: Garmin: garmin_gps Linux USB module not active.
        gpsd: speed 9600, 8O1
        gpsd: speed 38400, 8N1
        gpsd: gpsd_activate(): opened GPS (fd 6)
        gpsd: speed 4800, 8N1
        gpsd: NTPD ntpd_link_activate: 0
        gpsd: /dev/ttyUSB0 identified as type SiRF binary (2.687608 sec @ 4800bps)
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client
        gpsd: detaching 127.0.0.1 (sub 1, fd 8) in detach_client
    After this I started tangoGPS, which said no GPS and no gpsd found.

    Read the article

  • How to deny the web access to some files?

    - by Strae
    I need to do a somewhat strange operation. First, I run Debian with apache2 (which runs as user www-data). So, I have simple text files with .txt or .ini, or whatever extension; it doesn't matter. These files are located in subfolders with a structure like this:
        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt
    Therefore, the file name is always the same, ditto for the 'hierarchy'; just the name of the folder changes:
        /folder-name-static/folder-name-dynamic/file-name-static.txt
    What I should do is (I think) relatively simple: I must be able to read that file from programs on the server (Python, PHP for example), but if I try to retrieve the file contents with a browser (typing the URL www.example.com/folder1/car/foobar.txt, or via cURL, etc.) I must get a forbidden error, or whatever, but not access the file. It would also be nice if those files were 'hidden' even when accessed via FTP, or at least couldn't be downloaded (at least with the FTP root and user data I use). How can I do this?
    I found this online, to be put in the .htaccess file:
        <Files File.txt>
            Order allow, deny
            Deny from all
        </Files>
    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the folders in the second level (www.example.com/folder1/fruit/foobar.txt) will be dynamically created. I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, that applies to all files with a given name at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just that one changes?
    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too (images, and stuff used by my users), so I'm simply trying not to have a duplicate folder system, like:
        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]
    and then, for the 'secret' files:
        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt

    Read the article

  • VS for Database Pros (GDR R2) Removes Sproc Comments (2 replies)

    I have been working with my team to implement Data Dude GDR R2 for managing ALL of the databases for our applications. So far I am very pleased by what we can do with the tool with a single exception. I want to have a header with comments as part of every stored procedure so we can track the history of a procedure. When creating a deployment script, and subsequently running it, Data Dude strips ou...

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for. MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck about how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions: From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it's just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this that would be great. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it? Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code? Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated. As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • UserAppDataPath in WPF

    - by psheriff
    In Windows Forms applications you were able to get to your user's roaming profile directory very easily using the Application.UserAppDataPath property. This folder allows you to store information for your program in a custom folder specifically for your program. The format of this directory looks like this:
        C:\Users\YOUR NAME\AppData\Roaming\COMPANY NAME\APPLICATION NAME\APPLICATION VERSION
    For example, on my Windows 7 64-bit system, this folder would look like this for a Windows Forms application:
        C:\Users\PSheriff\AppData\Roaming\PDSA, Inc.\WindowsFormsApplication1\1.0.0.0
    For some reason Microsoft did not expose this property from the Application object of WPF applications. I guess they think that we don't need this property in WPF? Well, sometimes we still do need to get at this folder. You have two choices on how to retrieve this property: add a reference to System.Windows.Forms.dll to your WPF application and use the property directly, or write your own method to build the same path. If you add a reference to System.Windows.Forms.dll, you will need to use System.Windows.Forms.Application.UserAppDataPath to access this property.
    Create a GetUserAppDataPath Method in WPF
    If you want to build this path you can do so with just a few method calls in WPF using Reflection. The code below shows this fairly simple method to retrieve the same folder as shown above.
    C#
        using System.Reflection;

        public string GetUserAppDataPath()
        {
            string path = string.Empty;
            Assembly assm;
            Type at;
            object[] r;

            // Get the .EXE assembly
            assm = Assembly.GetEntryAssembly();
            // Get a 'Type' of the AssemblyCompanyAttribute
            at = typeof(AssemblyCompanyAttribute);
            // Get a collection of custom attributes from the .EXE assembly
            r = assm.GetCustomAttributes(at, false);
            // Get the Company Attribute
            AssemblyCompanyAttribute ct = ((AssemblyCompanyAttribute)(r[0]));
            // Build the User App Data Path
            path = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
            path += @"\" + ct.Company;
            // Append the application (assembly) name so the path matches COMPANY\APPLICATION\VERSION
            path += @"\" + assm.GetName().Name;
            path += @"\" + assm.GetName().Version.ToString();

            return path;
        }
    Visual Basic
        Public Function GetUserAppDataPath() As String
            Dim path As String = String.Empty
            Dim assm As Assembly
            Dim at As Type
            Dim r As Object()

            ' Get the .EXE assembly
            assm = Assembly.GetEntryAssembly()
            ' Get a 'Type' of the AssemblyCompanyAttribute
            at = GetType(AssemblyCompanyAttribute)
            ' Get a collection of custom attributes from the .EXE assembly
            r = assm.GetCustomAttributes(at, False)
            ' Get the Company Attribute
            Dim ct As AssemblyCompanyAttribute = DirectCast(r(0), AssemblyCompanyAttribute)
            ' Build the User App Data Path
            path = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
            path &= "\" & ct.Company
            ' Append the application (assembly) name so the path matches COMPANY\APPLICATION\VERSION
            path &= "\" & assm.GetName().Name
            path &= "\" & assm.GetName().Version.ToString()

            Return path
        End Function
    Summary
    Getting the User Application Data Path folder in WPF is fairly simple with just a few method calls using Reflection. Of course, there is absolutely no reason you cannot just add a reference to System.Windows.Forms.dll to your WPF application and use that Application object. After all, System.Windows.Forms.dll is a part of the .NET Framework and can be used from WPF with no issues at all. NOTE: Visit http://www.pdsa.com/downloads to get more tips and tricks like this one. Good luck with your coding, Paul Sheriff
    ** SPECIAL OFFER FOR MY BLOG READERS **
    We frequently offer a FREE gift for readers of my blog.

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time, because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it):

    - You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports
    - You can return persistent entities from a stored procedure, and there are a couple of ways to do that
    - You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious
    - You can reuse your named query result mapping in other places (avoid duplication)
    - Caching your query results is not at all obvious
    - Testing to see if your cache is working is a pain

    NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going.

    The Stored Procedure

    NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored, as are any result sets after the first. Other than that, it's nothing special.

        CREATE PROCEDURE GetPopularProducts
            @StartDate DATETIME,
            @MaxResults INT
        AS
        BEGIN
            SELECT [ProductId], [ProductName], [ImageUrl]
            FROM SomeTableWithJoinsEtc
        END

    The Result Class - PopularProduct

    You have two options for transporting your query results to your view (or wherever the final destination is): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view.

        public class PopularProduct
        {
            public virtual int ProductId { get; set; }
            public virtual string ProductName { get; set; }
            public virtual string ImageUrl { get; set; }
        }

    The NHibernate Mappings (hbm)

    Next up, we need to let NHibernate know about the query and where the results will go.
    Below is the mapping markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element, which lets NHibernate know about our entity class.

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/>
          <resultset name="PopularProductResultSet">
            <return-scalar column="ProductId" type="System.Int32"/>
            <return-scalar column="ProductName" type="System.String"/>
            <return-scalar column="ImageUrl" type="System.String"/>
          </resultset>
        </hibernate-mapping>

    And now the PopularProductsMap:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <sql-query name="GetPopularProducts"
                     resultset-ref="PopularProductResultSet"
                     cacheable="true"
                     cache-mode="normal">
            <query-param name="StartDate" type="System.DateTime" />
            <query-param name="MaxResults" type="System.Int32" />
            exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults
          </sql-query>
        </hibernate-mapping>

    The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute.

    The Query Class – PopularProductsQuery

    So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated Query class, so here's what it looks like:

        public class PopularProductsQuery : IPopularProductsQuery
        {
            private static readonly IResultTransformer ResultTransformer;
            private readonly ISessionBuilder _sessionBuilder;

            static PopularProductsQuery()
            {
                ResultTransformer = Transformers.AliasToBean<PopularProduct>();
            }

            public PopularProductsQuery(ISessionBuilder sessionBuilder)
            {
                _sessionBuilder = sessionBuilder;
            }

            public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults)
            {
                var session = _sessionBuilder.GetSession();
                var popularProducts = session
                    .GetNamedQuery("GetPopularProducts")
                    .SetCacheable(true)
                    .SetCacheRegion("PopularProductsCacheRegion")
                    .SetCacheMode(CacheMode.Normal)
                    .SetReadOnly(true)
                    .SetResultTransformer(ResultTransformer)
                    .SetParameter("StartDate", startDate.Date)
                    .SetParameter("MaxResults", maxResults)
                    .List<PopularProduct>();

                return popularProducts;
            }
        }

    Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specifies it, but it can't hurt, right?). Then we set the cache region, which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important: it tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer (the name is obviously left over from Java/Hibernate). Finally, set your parameters and then call a result method, which will execute the query. Because the query is cacheable, you execute this statement every time you need the results, and NHibernate decides, based on your parameters, whether to serve its cached version or fetch a fresh one.
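    As a usage sketch (the consuming class, its constructor wiring and the parameter values below are illustrative assumptions, not part of the original post), the query class might be consumed like this:

        // Hypothetical consumer of the query class shown above.
        public class PopularProductsViewModelBuilder
        {
            private readonly IPopularProductsQuery _popularProductsQuery;

            public PopularProductsViewModelBuilder(IPopularProductsQuery popularProductsQuery)
            {
                _popularProductsQuery = popularProductsQuery;
            }

            public IList<PopularProduct> BuildTopTen()
            {
                // Repeated calls with identical parameters inside the cache window
                // should be served from the query cache rather than the database.
                return _popularProductsQuery.GetPopularProducts(DateTime.Today.AddDays(-30), 10);
            }
        }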
    The Configuration – hibernate.cfg.xml and Web.config

    You need to explicitly enable second-level caching in your hibernate configuration:

        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            [...]
            <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
            <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property>
            <property name="cache.use_query_cache">true</property>
            <property name="cache.use_second_level_cache">true</property>
            [...]
          </session-factory>
        </hibernate-configuration>

    Both properties, "use_query_cache" and "use_second_level_cache", are necessary. As this is for a web deployment, we're using SysCache, which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region:

        <syscache>
          <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" />
        </syscache>

    Here I've set the cache to be valid for 24 hours (86400 seconds). This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy).

    The Payoff

    That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment.

    Testing

    Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
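    One low-ceremony way to check whether the cache is being hit is NHibernate's statistics API. The sketch below assumes generate_statistics has been enabled in hibernate.cfg.xml and that a sessionFactory reference is at hand, so treat it as a starting point rather than the author's own test:

        // Sketch: requires <property name="generate_statistics">true</property> in the session-factory config.
        popularProductsQuery.GetPopularProducts(DateTime.Today.AddDays(-30), 10); // first call should hit the database
        popularProductsQuery.GetPopularProducts(DateTime.Today.AddDays(-30), 10); // second call should hit the cache

        var stats = sessionFactory.Statistics;
        Console.WriteLine("Query cache hits:   " + stats.QueryCacheHitCount);
        Console.WriteLine("Query cache misses: " + stats.QueryCacheMissCount);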

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
    Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other:

    - Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model by Marco Russo, Alberto Ferrari and Chris Webb
    - Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling by Teo Lachev

    The book I wrote with Alberto and Chris is a complete guide to creating tabular models and has good coverage of DAX, including how to use it to enrich a semantic model with calculated columns and measures and how to use it to query a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is in printing and should ship by mid-July, so it will very soon be on the shelves of all the people who have already preordered it!

    Teo Lachev's book covers the full spectrum of Tabular models provided by Microsoft: starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint and exploring data by using Power View; then the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control over the model through row-level security and partitioning, for example. Teo's book follows a step-by-step approach, describing each feature, which is very good for a beginner who is new to PowerPivot and/or to BISM Tabular. If you need to get the big picture and start using the products that are part of the new Microsoft wave of BI products, Teo's book is for you.

    After you read Teo's book, or if you already have a certain confidence with PowerPivot or BISM Tabular and you want to go deeper into the internals of BISM Tabular itself, then our book is a suggested read: it contains several chapters about DAX, includes discussions of new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries.

    It might seem strange that an author writes a review of a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers you a deal to buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stack Overflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution

    I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better, in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard-coding sample data, so now I am moving on to the GUI and preparing my MVP.

    Section A

    1. I have seen MVP using the view as the entry point; that is, in the view's constructor it creates the presenter, which in turn creates the model, wiring up events as needed.
    2. I have also seen the presenter as the entry point, where a view, model and presenter are created, and the presenter is given a view and model object in its constructor to wire up the events.
    3. As in 2, but the model is not passed to the presenter. Instead the model is a static class whose methods are called and whose responses are returned directly.

    Section B

    In terms of keeping the view and model in sync I have seen:

    1. Whenever a value in the view is changed (i.e. a TextChanged event in .NET/C#), a DataChangedEvent is fired and passed through into the model, to keep it in sync at all times. Where the model changes (i.e. a background event it listens to), the view is updated via the same idea of raising a DataChangedEvent. When a user wants to commit changes, a SaveEvent is fired and passed through into the model to make the save. In this case the model mimics the view's data and processes actions.
    2. Similar to B1, but the view does not sync with the model all the time. Instead, when the user wants to commit changes, SaveEvent is fired and the presenter grabs the latest details and passes them into the model. In this case the model does not know about the view's data until it is required to act upon it, at which point it is passed all the needed details.

    Section C

    Displaying business objects in the view, i.e. an object (MyClass) rather than primitive data (int, double):

    1. The view has property fields for all the data it will display, typed as domain/business objects. For example, the view exposes an IEnumerable<IAnimal> property (view.Animals), even though the view renders these as nodes in a TreeView. For the selected animal it would expose a SelectedAnimal property of type IAnimal.
    2. The view has no knowledge of domain objects; it exposes properties for primitive/framework (.NET/Java) types only. In this instance the presenter passes the domain object to an adapter, and the adapter translates the business object into the controls visible on the view. The adapter must have access to the actual controls on the view, not just any view, so it becomes more tightly coupled.

    Section D

    Multiple views used to create a single control, i.e. a complex view with a simple model, like saving objects of different types. You could have a menu system at the side where each click on an item shows the appropriate controls.

    1. You create one huge view that contains all of the individual controls, which are exposed via the view's interface.
    2. You have several views: one view for the menu plus a blank panel. This view creates the other views required but does not display them (Visible = false); it also implements the interface for each view it contains (i.e. the child views) so it can be exposed to one presenter. The blank panel is filled with the other views (Controls.Add(myView) and myView.Visible = true). The events raised in these "child" views are handled by the parent view, which in turn passes the event to the presenter, and vice versa for supplying events back down to the child elements.
    3. Each view, be it the main parent or the smaller child views, is wired into its own presenter and model. You can literally just drop a view control into an existing form and its functionality is ready; it just needs wiring into a presenter behind the scenes.

    Section E

    Should everything have an interface? How the MVP is done in the above examples will affect this answer, as the approaches might not be cross-compatible.

    1. Everything has an interface: the View, Presenter and Model. Each of these then obviously has a concrete implementation, even if you only have one concrete view, model and presenter.
    2. The View and Model have an interface. This allows the views and models to differ. The presenter creates/is given view and model objects, and it just serves to pass messages between them.
    3. Only the View has an interface. The Model has static methods and is not created, thus there is no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static, the Model has no link to the presenter.

    Personal thoughts

    Of all the different variations I have presented (most of which I have probably used in some form), and I am sure there are more, I prefer A3, as it keeps business logic reusable outside just MVP; B2, for less data duplication and fewer events being fired; and C1, for not adding in another class. Sure, C1 puts a small amount of non-unit-testable logic into a view (how a domain object is visualised), but this could be code reviewed, or simply viewed in the application. If the logic was complex I would agree to an adapter class, but not in all cases.

    For Section D, I feel D1 creates a view that is too big, at least for a menu example. I have used D2 and D3 before. The problem with D2 is that you end up having to write lots of code to route events to and from the presenter to the correct child view, and it's not drag/drop compatible; each new control needs more wiring in to support the single presenter. D3 is my preferred choice, but it adds yet more classes as presenters and models to deal with the view, even if the view happens to be very simple or has no need to be reused. I think a mixture of D2 and D3 is best, based on circumstances.

    As to Section E, I think everything having an interface could be overkill. I already do it for domain/business objects and often see no advantage in the "design" by doing so, but it does help in mocking objects in tests. Personally I would see E2 as a classic solution, although I have seen E3 used in two projects I have worked on previously.

    Question

    Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work, which has variations, and I remember when I first started doing MVC, I understood the concept but could not originally work out where the entry point is: everything has its own function, but what controls and creates the original set of MVC objects?
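    Not part of the original question, but a minimal sketch of the A2/E2 combination (presenter as the entry point, view and model behind interfaces) might look roughly like this; all of the type and member names are illustrative assumptions:

        using System;

        // Illustrative sketch only: the presenter wires a view interface to a model interface.
        public interface ICustomerView
        {
            string CustomerName { get; set; }
            event EventHandler SaveClicked;
        }

        public interface ICustomerModel
        {
            void Save(string customerName);
        }

        public class CustomerPresenter
        {
            private readonly ICustomerView _view;
            private readonly ICustomerModel _model;

            public CustomerPresenter(ICustomerView view, ICustomerModel model)
            {
                _view = view;
                _model = model;
                // B2 style: only read the view's data when the user commits changes.
                _view.SaveClicked += (sender, e) => _model.Save(_view.CustomerName);
            }
        }

        // Composition root (e.g. Program.Main) creates the trio and shows the form:
        //   var view = new CustomerForm();   // CustomerForm : Form, ICustomerView
        //   var presenter = new CustomerPresenter(view, new CustomerModel());
        //   Application.Run(view);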

    Read the article

< Previous Page | 990 991 992 993 994 995 996 997 998 999 1000 1001  | Next Page >