Search Results

Search found 5109 results on 205 pages for 'specify'.


  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought “Hey, this should be easy!”, because Windows Phone 7 development is based on Silverlight, and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can’t use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there’s a reference to an assembly that’s not available (System.Windows.Browser).
    My second thought was “OK, no problem!” because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you’d like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that’s running in the context of a page in a SharePoint site, because the credentials of the currently logged on user are automatically picked up and passed to the WCF service. But a Windows Phone application of course runs outside of the SharePoint site’s pages, so the application has to build credentials and pass them to SharePoint’s WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn’t support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException.
    Probably this issue will be solved in the very near future, since Silverlight 4 does support authentication, and there’s already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even better, if the OData Client Library for Windows Phone 7 gets updated after that, things should get easier still! By the way: the things I’m writing in this paragraph are just assumptions that make a lot of sense IMHO; I don’t have any confirmation that all of this will happen, but I really hope so.
    So are SharePoint developers out of the Windows Phone development game until this gets fixed? Luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
    The following code snippet gets all the announcements of an Announcements list in a SharePoint site:

        HttpWebRequest webReq =
            (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
        webReq.Credentials = new NetworkCredential("username", "password");

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

                XDocument xdoc = XDocument.Load(
                    ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

                XNamespace ns = "http://www.w3.org/2005/Atom";
                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                        MessageBox.Show(item.Title);
                });
            }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is then displayed in a MessageBox. Check out my previous post if you’d like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it’s of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method, which gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

        delegate void GetAtomFeedCallback(Stream responseStream);

        public MainPage()
        {
            InitializeComponent();

            SupportedOrientations = SupportedPageOrientation.Portrait |
                SupportedPageOrientation.Landscape;

            string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
            string username = "username";
            string password = "password";
            string domain = "";

            GetAtomFeed(url, username, password, domain, (s) =>
            {
                XNamespace ns = "http://www.w3.org/2005/Atom";
                XDocument xdoc = XDocument.Load(s);

                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                    {
                        MessageBox.Show(item.Title);
                    }
                });
            });
        }

        private static void GetAtomFeed(string url, string username,
            string password, string domain, GetAtomFeedCallback cb)
        {
            HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
            webReq.Credentials = new NetworkCredential(username, password, domain);

            webReq.BeginGetResponse(
                (result) =>
                {
                    HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                    cb(resp.GetResponseStream());
                }, webReq);
        }
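    As a quick aside, purely for illustration: because listdata.svc is a WCF Data Service, you can also append standard OData query options to the URL to trim what comes back. The sketch below reuses the GetAtomFeed helper above; the "Tasks" list name and the query values are just example assumptions, not anything from a real site.

        // Hypothetical usage sketch: same helper, different list, plus OData options
        // ($orderby and $top) to request only the five most recently modified entries.
        string tasksUrl = "http://yoursite/_vti_bin/listdata.svc/Tasks" +
                          "?$orderby=Modified desc&$top=5";

        GetAtomFeed(tasksUrl, "username", "password", "", (s) =>
        {
            XNamespace ns = "http://www.w3.org/2005/Atom";
            var titles = from entry in XDocument.Load(s).Root.Elements(ns + "entry")
                         select entry.Element(ns + "title").Value;

            this.Dispatcher.BeginInvoke(() =>
            {
                foreach (string title in titles)
                    MessageBox.Show(title);
            });
        });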

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and required the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems.
    Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently…
    In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

        public bool CommercialPrices
        {
            get { return this.Retrieve(p => p.CommercialPrices); }
            set { this.Store(p => p.CommercialPrices, value); }
        }

    This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need. This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

        public double Price
        {
            get { return Retrieve(r => r.Price); }
            set { Store(r => r.Price, value); }
        }

    This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data to both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required.
    This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

        public double Price
        {
            get { return Record.Price; }
            set { Record.Price = value; }
        }

    As your users browse the site, it will get faster and faster as Select N+1 issues will optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you do already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system, you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries. There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example:

        public string[] SearchedFields
        {
            get
            {
                return (Retrieve<string>("SearchedFields") ?? "")
                    .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
            }
            set { Store("SearchedFields", String.Join(", ", value)); }
        }

    The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store. You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up-to-date. It’s not very complicated, but definitely something to keep in mind. Here is what a product record looks like in Nwazet.Commerce for example: And here is the same data in the infoset: The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id. Here is the detailed XML document for this product:

        <Data>
          <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage=""
                       AllowBackOrder="false" Weight="0.2" Size="" ShippingCost="null"
                       IsDigital="false" />
          <ProductAttributesPart Attributes="" />
          <AutoroutePart DisplayAlias="camera-box" />
          <TitlePart Title="Nwazet Pi Camera Box" />
          <BodyPart Text="[...]" />
          <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" />
        </Data>

    The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record. In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably. So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources.
Expect this code to make its way into the 1.8 version of Orchard when that’s available.
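    To make the mixed model described above concrete, here is a minimal sketch for illustration only (not code taken from the Orchard source); the part and record names are hypothetical, and the Retrieve/Store calls are the infoset APIs shown earlier in this post.

        // Hypothetical part mixing the two storage styles.
        public class ProductPart : ContentPart<ProductPartRecord>
        {
            // Record-backed and mirrored into the infoset, so it stays filterable/sortable.
            public double Price
            {
                get { return Retrieve(r => r.Price); }
                set { Store(r => r.Price, value); }
            }

            // Infoset-only: no record column needed because we never query on it.
            public string OutOfStockMessage
            {
                get { return this.Retrieve(p => p.OutOfStockMessage); }
                set { this.Store(p => p.OutOfStockMessage, value); }
            }
        }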

    Read the article

  • FTP Publishing with the new Windows Azure Release

    - by Harish Ranganathan
    There is a good chance you might have stumbled upon the new Windows Azure Release that we made on June 6th.  Scott Guthrie’s Post quite summarizes the overall new features. One of my favorite features is the Windows Azure Websites and the ability to do publish files to Azure using your FTP Client. Windows Azure Websites offers low cost (free upto 10 websites) web hosting where you can deploy any website that can run on IIS 7.0, quickly. The earlier releases of Azure SDKs and the Azure platform support .NET 3.5 & above for running your applications.  This was a constraint for many since there are/were a lot of ASP.NET 2.0 applications built over time and simply to put it on Azure, many of you were skeptical to migrate it to .NET 4. Windows Azure Websites offer the flexibility of running IIS 7.0 supported .NET Versions which means you can run .NET 1.1, 2.0, 3.5 and .NET 4.  Not just that! You can also run classic ASP Applications. Windows Azure Websites don’t need you to go through the complexity of adding the Cloud Project Template and then publishing the Configuration Files.  Lets take a step by step understanding of Websites and publishing using FTP. I downloaded the Club Website Starter Kit from http://www.asp.net/downloads/starter-kits/club It also requires a database and I downloaded the SQL Scripts and created a SQL Server Database called Club. This installs a Web Site Project Template.  Note that I am running Windows 8 Release Preview and Visual Studio 2012 RC.  After installing the template, select File – New – Website and don’t forget to choose the Framework version as .NET 2.0 You can see the “Club Website Starter Kit” .  Once you select the Website gets created.  You would encounter a warning indicating that the Club Website Starter Kit uses SQL Express and the recommended database is LocalDB Express.  Click ok to continue.  Once the Website is created open up the Web.config and locate the “ClubSiteDB” connection string.  By default, it points to a SQL Express Database.  Instead configure it to use your local SQL Server. Also, open up Global.asax and comment out the following line if (!Roles.RoleExists("Administrators")) Roles.CreateRole("Administrators"); There seems to be an issue in the code that doesn’t create the role.  Post that, hit CTRL+F5 and you should be able to see the Website Running, as below So, now we have the Club Starter Kit site up running locally.  Moving to Azure Visit http://manage.windowsazure.com/ and sign up for a trial account.  This allows you to host up to 10 websites for free and a host of other benefits.  The free Websites can be extended to an year without any charge.  Once you have signed up, sign in to the portal using the Live ID used for sign up. After signing in, you would be presented with the “All Items” listing page which lists, Websites, Cloud Services, Databases etc.,  If this is the first time, you wouldn’t find anything. Click on the “Websites” link from the left menu.  Click on “New” in the bottom and it should show up a dialog.  In the same, select Website and click on “Quick Create” and in the URL Textbox, specify “MyFirstDemo” and click the “Create Web Site” link below. It should take a few seconds to create the Website.  Once the Website is created, click on the listing and it should open up the Dashboard.  Since we haven’t done anything yet, there shouldn’t be any statistics Click on the “Download publish profile” link in the right bottom.  This file has the FTP publishing settings. 
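    If you would rather pull those settings out of the downloaded file programmatically than copy them by hand, here is a small illustrative sketch (not part of the original walkthrough); it assumes the publishProfile element and the publishMethod/publishUrl/userName/userPWD attributes that are called out just below, and the local file path is made up.

        // Illustrative only: read the FTP publishing settings from the publish profile
        // (requires System.Linq and System.Xml.Linq).
        var profileDoc = XDocument.Load(@"C:\temp\MyFirstDemo.PublishSettings");
        var ftpProfile = profileDoc.Descendants("publishProfile")
                                   .First(p => (string)p.Attribute("publishMethod") == "FTP");

        string ftpUrl   = (string)ftpProfile.Attribute("publishUrl");
        string userName = (string)ftpProfile.Attribute("userName");
        string password = (string)ftpProfile.Attribute("userPWD");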
Also, if you scroll down you can see the FTP URL for this site.  It should typically start ftp://waws-xxxx-xxx-xxxx In the downloaded publish profile file, you can also find the ftp URL.  Pick the following from this file publishUrl (the 2nd one, the one that features after publishMethod =”FTP”) and the userName and userPWD that follows. Note that we have everything required to publish the files.  But since the Club Starter Kit uses Databases, we need to have the Database running on SQL Azure.  Go back to the Main Menu and click on “New” in the bottom but this time select “SQL Database” and provide “Club” as Database name for “Quick Create” If this is the first time a Server would be created.  Otherwise, it would pickup the existing server name. Once the database is created, you can use the SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/ and provide the credentials to connect to local database and then the SQL Azure database for migrating the “Club” database.  The migration wizard UI hasn’t changed much and is the same as explained by me in one my posts earlier http://geekswithblogs.net/ranganh/archive/2009/09/29/taking-your-northwind-database-to-sql-azure-and-binding-it.aspx Once the database is migrated, come back to the main screen and click on the Database base in the Azure Management Portal.  It opens up the dashboard of the database.  Click on “Show connection Strings” and it would popup a list of connection string formats.  Choose the ADO.NET connection string and after editing the password with the password that you provided when creating the database server in the Azure Portal, paste it into the config file of the Club Starter Kit Website.  Just to reiterate, the connection string key is ClubSiteDB. Try running the Website once to ensure that the application though running locally could connect to the SQL Database running on Azure. Once you are able to run the website successfully, we are all set to do the FTP Publishing. Download your favorite FTP tool.  I use http://filezilla-project.org/ In the Host Textbox, paste the FTP URL that you picked up from the publish profile file and also paste the username and password.  Click on “QuickConnect”.  If everything is fine, you should be able to connect to the remote server.  If it is successfully connected, you can see the wwwroot folder of the Website, running in Azure Make sure on the “Local Site” in the left, you choose the path to the folder of your Website.  Open up the Website folder on the left such that it lists all the files and folders inside.  Select all of them and click select “Upload” or simply drag and drop all the files to the root folder that is listed above.  Once the publishing is done, you should be able to hit the SiteURL that you can find the dashboard page of the website.  In our case, it would be http://MyFirstDemo.azurewebsites.net That’s it, we have now done FTP publishing in Azure and that too we are running a .NET 2.0 Website on Azure. Cheers !!!

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal.  These new capabilities include: Service Bus Management and Monitoring Support for Managing Co-administrators Import/Export support for SQL Databases Virtual Machine Experience Enhancements Improved Cloud Service Status Notifications Media Services Monitoring Support Storage Container Creation and Access Control Support All of these improvements are now live in production and available to start using immediately.  Below are more details on them: Service Bus Management and Monitoring The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the “Service Bus” tab in the Windows Azure portal and then simply select the CREATE command: Doing so will bring up a new “Create a Namespace” dialog that allows you to name and create a new Service Bus Namespace: Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials: It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer.  Simply click the NEW command and navigate to the “App Services” category to create a new Service Bus entity: Once you provision a new Queue or Topic it can be managed in the portal.  Clicking on a namespace will display all queues and topics within it: Clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, below is a view of an “orders” queue – note how we now surface both the incoming and outgoing message flow rate, as well as the total queue length and queue size: To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor. Support for Managing Co-Administrators You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that service administrator have - except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription: To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to: You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then select or deselect the subscriptions to which they belong. 
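    As a side note on the Service Bus section above (an illustrative sketch, not part of the announcement): the connection string surfaced by the ACCESS KEY command can be dropped straight into the Microsoft.ServiceBus.Messaging client library. The "orders" queue name mirrors the example above, and the connection string value here is only a placeholder.

        // Illustrative sketch: send a message to the "orders" queue using the
        // connection string copied from the portal.
        string connectionString =
            "Endpoint=sb://yournamespace.servicebus.windows.net/;" +
            "SharedSecretIssuer=owner;SharedSecretValue=<key from the portal>";

        var client = QueueClient.CreateFromConnectionString(connectionString, "orders");
        client.Send(new BrokeredMessage("Hello from the new portal"));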
Import/Export Support for SQL Databases The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage.  Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012.  Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that when pressed will prompt you to save your database to a Windows Azure storage container: The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage: You can also now import and create a new SQL Database by using the NEW command.  This will prompt you to select the storage container and file to import the database from: The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each: Enhancements to the Virtual Machine Experience One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today’s release the VHDs would continue to be in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command: When you click the DELETE DISK button you have the option to delete the disk + associated .VHD file (completely clearing it from storage).  Alternatively you can delete the disk but still retain a .VHD copy of it in storage. Improved Cloud Service Status Notifications The Windows Azure portal now exposes more information of the health status of role instances.  If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes): Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances.  The portal has been updated to provide more specific status information within this detailed view – giving you better visibility into the health of your app: Monitoring Support for Media Services Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for last 6 hours, 24 hours or 7 days. Storage Container Creation and Access Control Support You can now create Windows Azure Storage storage containers from within the Windows Azure Portal.  
After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command: This will display a dialog that lets you name the new container and control access to it: You can also update the access setting as well as container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command: This will then bring up the edit container dialog that allows you to change and save its settings: In addition to creating and editing containers, you can click on them within the portal to drill-in and view blobs within them.  Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.
    Connecting to Your Data?
    When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town’s neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports.
    Data Sources and Data Sets
    A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… “what is your data source?” The connection you need to make to get your report’s data is called a data source. If you connect to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a Data Set.
    Creating a local Data Source
    You can embed a connection from your report directly to your JProCo database, which (let’s say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the data source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It’s possible to make a project level Data Source and have each report use that. This means one change can fix all 10 reports at once. This would be called a Shared Data Source.
    Creating a Shared Data Source
    The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project, as shown in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a Shared Data Source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1.
The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the Server name. Select the JProCo database from the database list. At this point, the connection properties should look like. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing the InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer. What’s next I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5 day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, Build reports on your own (without the wizards),  Add report headers and footers, Accept user input,  create levels, charts, or even maps for visual appeal. You might be surprise to know a small 230 page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small easy to follow book so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
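    As a footnote to the server-name discussion above (an illustrative sketch, not part of the original series): the choices made in the Connection Properties dialog boil down to a standard SQL Server connection string, and the named-instance case simply changes the Data Source value.

        // Sketch of the connection strings the dialog builds, assuming the JProCo
        // sample database and Windows Authentication; "SQL2012" is a made-up instance name.
        var defaultInstance = new SqlConnectionStringBuilder
        {
            DataSource = "Localhost",
            InitialCatalog = "JProCo",
            IntegratedSecurity = true
        };

        var namedInstance = new SqlConnectionStringBuilder
        {
            DataSource = @"Localhost\SQL2012",
            InitialCatalog = "JProCo",
            IntegratedSecurity = true
        };

        // defaultInstance.ConnectionString ->
        // "Data Source=Localhost;Initial Catalog=JProCo;Integrated Security=True"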

    Read the article

  • HTML5 Input type=date Formatting Issues

    - by Rick Strahl
    One of the nice features in HTML5 is the ability to specify a specific input type for HTML text input boxes. There are a host of very useful input types available, including email, number, date, datetime, month, range, search, tel, time, url and week. For a more complete list you can check out the MDN reference. Date input types also support automatic validation, which can be useful in some scenarios but maybe can get in the way at other times. One of the more common input types, and one that can most benefit from a custom UI for selection, is of course date input. Almost every application could use a decent date representation and HTML5's date input type seems to push into the right direction. It'd be nice if you could just say:

        <form action="DateTest.html">
            <label for="FromDate">Enter a Date:</label>
            <input type="date" id="FromDate" name="FromDate" value="11/08/2012" class="date" />
            <hr />
            <input type="submit" id="btnSubmit" name="btnSubmit" value="Save Date" class="smallbutton" />
        </form>

    but if you expect that to just work, you're likely to be pretty disappointed.
    Problem #1: Browser Support
    For starters there's browser support. Out of the major browsers only the latest versions of WebKit and Opera based browsers seem to support date input. Neither FireFox, nor any version of Internet Explorer (including the new touch enabled IE10 in Windows RT) support input type=date. Browser support is an issue, but it would be OK if it wasn't for problem #2.
    Problem #2: Date Formatting
    If you look at my date input from before:

        <input type="date" id="FromDate" name="FromDate" value="11/08/2012" class="date" />

    You can see that my date is formatted in the local date format (ie. en-US). Now when I run this, sadly, the form that comes up in Chrome (and also in iOS mobile browsers) shows the problem: Chrome isn't recognizing my local date string. Instead it's expecting my date format to be provided in ISO 8601 format, which is: 2012-11-08. So if I change the date input field to:

        <input type="date" id="FromDate" name="FromDate" value="2012-10-08" class="date" />

    I correctly get the date field filled in. Also, when I pick a date with the DatePicker, the returned date value is set to the ISO date format. Yet notice how the date is still formatted to the local date time format (ie. en-US format). So if I pick a new date and then save, the value field is set back to 2012-11-15 using the ISO format. The same is true for Opera and iOS browsers and I suspect any other WebKit style browser and their date pickers. So to summarize, input type=date:
      - Expects ISO 8601 format dates to display initial values
      - Sets selected date values to ISO 8601
    Now what?
    This would sort of make sense, if all browsers supported input type=date. It'd be easy because you could just format dates appropriately when you set the date value into the control by applying the appropriate culture formatting (ie. .ToString("yyyy-MM-dd")). .NET is actually smart enough to pick up the date on the other end for modelbinding when ISO 8601 is used. For other environments this might be a bit more tricky. input type=date is clearly the way to go forward. Date controls implemented in HTML are going the way of the dodo, given the intricacies of mobile platforms and scaling for both desktop and mobile. I've been using the jQuery UI Datepicker for ages, but once going to mobile that's no longer an option, as the control doesn't scale down well for mobile apps (at least not without major re-styling).
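    Before moving on, here is what that server-side ISO 8601 handling looks like in practice; a minimal sketch, assuming .NET on the server (it needs System.Globalization), rather than anything framework-specific:

        // Format a DateTime for the value attribute of an <input type="date">.
        DateTime fromDate = new DateTime(2012, 11, 8);
        string attributeValue = fromDate.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);
        // -> "2012-11-08"

        // Parse the ISO 8601 string that a supporting browser posts back.
        DateTime postedDate;
        bool ok = DateTime.TryParseExact("2012-11-15", "yyyy-MM-dd",
                                         CultureInfo.InvariantCulture,
                                         DateTimeStyles.None, out postedDate);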
    It also makes a lot of sense for the browser to provide this functionality - creating a consistent date input experience across apps only makes sense, which is why I find it baffling that neither FireFox nor IE 10 deign it necessary to support date input natively. The problem is that a large number of even the latest and greatest browsers don't support this. So now you're stuck with not knowing what date format you have to serve, since neither the local format nor the ISO format works in all cases. For my current app I just broke down and used the ISO format, so I'll live with the non-local date format.

        <input type="date" id="ToDate" name="ToDate" value="2012-11-08" class="date"/>

    Here's what this looks like on Chrome, and here's what it looks like on my iPhone: both Chrome and the phone do this the way it should be done. For the phone especially this demonstrates why we'd want this - the built-in date picker there certainly beats manually trying to edit the date using finger gymnastics, and it's one of the easiest ways to pick a date I can think of (ie. easier to use than your typical date picker). Finally, here's what the date looks like in FireFox: certainly this is not the ideal date format, but it's clear enough I suppose. If users enter a date in local US format, that works as well (but won't work for other locales). It'll have to do. Over time one can only hope that other browsers will finally decide to implement this functionality natively to provide a unique experience. Until then, incomplete solutions it is.
    Related Posts: Html 5 Input Types - How useful is this really going to be?
    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in HTML5  HTML

    Read the article

  • How to free up space on /boot? [closed]

    - by Phrogz
    Possible Duplicate: Free up more space on /boot
    I logged onto my server today to find the message:

        => /boot is using 98.9% of 91MB

    When I look at /boot I see that it is indeed very low on space, and has old-kernel files in it:

        phrogz@planar:~$ df -h /boot
        Filesystem Size Used Avail Use% Mounted on
        /dev/sda1 92M 54M 33M 63% /boot
        phrogz@planar:~$ la /boot
        total 81880
        drwxr-xr-x 4 root root 3072 2011-12-02 06:26 ./
        drwxr-xr-x 22 root root 4096 2011-09-29 06:37 ../
        -rw-r--r-- 1 root root 646419 2011-03-01 19:02 abi-2.6.32-30-server
        -rw-r--r-- 1 root root 646419 2011-04-08 17:07 abi-2.6.32-31-server
        -rw-r--r-- 1 root root 646454 2011-04-20 16:53 abi-2.6.32-32-server
        -rw-r--r-- 1 root root 646454 2011-07-29 16:07 abi-2.6.32-33-server
        -rw-r--r-- 1 root root 646710 2011-09-13 18:00 abi-2.6.32-34-server
        -rw-r--r-- 1 root root 646820 2011-10-11 11:10 abi-2.6.32-35-server
        -rw-r--r-- 1 root root 110687 2011-03-01 19:02 config-2.6.32-30-server
        -rw-r--r-- 1 root root 110676 2011-04-08 17:07 config-2.6.32-31-server
        -rw-r--r-- 1 root root 110687 2011-04-20 16:53 config-2.6.32-32-server
        -rw-r--r-- 1 root root 110687 2011-07-29 16:07 config-2.6.32-33-server
        -rw-r--r-- 1 root root 110687 2011-09-13 18:00 config-2.6.32-34-server
        -rw-r--r-- 1 root root 110687 2011-10-11 11:10 config-2.6.32-35-server
        drwxr-xr-x 3 root root 6144 2011-12-02 06:26 grub/
        -rw-r--r-- 1 root root 8258196 2011-05-18 11:58 initrd.img-2.6.32-30-server
        -rw-r--r-- 1 root root 8259568 2011-05-23 20:24 initrd.img-2.6.32-31-server
        -rw-r--r-- 1 root root 8257374 2011-05-30 07:47 initrd.img-2.6.32-32-server
        -rw-r--r-- 1 root root 8287489 2011-08-10 06:37 initrd.img-2.6.32-33-server
        -rw-r--r-- 1 root root 8288075 2011-09-29 06:37 initrd.img-2.6.32-34-server
        drwx------ 2 root root 12288 2011-05-18 11:46 lost+found/
        -rw-r--r-- 1 root root 160280 2010-03-23 03:40 memtest86+.bin
        -rw-r--r-- 1 root root 2179117 2011-03-01 19:02 System.map-2.6.32-30-server
        -rw-r--r-- 1 root root 2179628 2011-04-08 17:07 System.map-2.6.32-31-server
        -rw-r--r-- 1 root root 2178240 2011-04-20 16:53 System.map-2.6.32-32-server
        -rw-r--r-- 1 root root 2178382 2011-07-29 16:07 System.map-2.6.32-33-server
        -rw-r--r-- 1 root root 2178952 2011-09-13 18:00 System.map-2.6.32-34-server
        -rw-r--r-- 1 root root 2179333 2011-10-11 11:10 System.map-2.6.32-35-server
        -rw-r--r-- 1 root root 1336 2011-03-01 19:08 vmcoreinfo-2.6.32-30-server
        -rw-r--r-- 1 root root 1336 2011-04-08 17:13 vmcoreinfo-2.6.32-31-server
        -rw-r--r-- 1 root root 1336 2011-04-20 16:54 vmcoreinfo-2.6.32-32-server
        -rw-r--r-- 1 root root 1336 2011-07-29 16:08 vmcoreinfo-2.6.32-33-server
        -rw-r--r-- 1 root root 1336 2011-09-13 18:03 vmcoreinfo-2.6.32-34-server
        -rw-r--r-- 1 root root 1336 2011-10-11 11:11 vmcoreinfo-2.6.32-35-server
        -rw-r--r-- 1 root root 4111552 2011-03-01 19:02 vmlinuz-2.6.32-30-server
        -rw-r--r-- 1 root root 4113344 2011-04-08 17:07 vmlinuz-2.6.32-31-server
        -rw-r--r-- 1 root root 4106528 2011-04-20 16:53 vmlinuz-2.6.32-32-server
        -rw-r--r-- 1 root root 4107648 2011-07-29 16:07 vmlinuz-2.6.32-33-server
        -rw-r--r-- 1 root root 4108960 2011-09-13 18:00 vmlinuz-2.6.32-34-server
        -rw-r--r-- 1 root root 4111040 2011-10-11 11:10 vmlinuz-2.6.32-35-server

    I was able to find the old kernel packages like so:

        phrogz@planar:/boot$ dpkg -l | grep linux-image
        ii linux-image-2.6.32-30-server 2.6.32-30.59 Linux kernel image for version 2.6.32 on x86
        ii linux-image-2.6.32-31-server 2.6.32-31.61 Linux kernel image for version 2.6.32 on x86
        ii linux-image-2.6.32-32-server 2.6.32-32.62 Linux kernel image for version 2.6.32 on x86
        ii linux-image-2.6.32-33-server 2.6.32-33.72 Linux kernel image for version 2.6.32 on x86
        ii linux-image-2.6.32-34-server 2.6.32-34.77 Linux kernel image for version 2.6.32 on x86
        iF linux-image-2.6.32-35-server 2.6.32-35.78 Linux kernel image for version 2.6.32 on x86
        iU linux-image-server 2.6.32.36.42 Linux kernel image on Server Equipment.

    …and I can see that many of them are older than my current image:

        phrogz@planar:/boot$ uname -a
        Linux planar 2.6.32-34-server #77-Ubuntu SMP Tue Sep 13 20:54:38 UTC 2011 x86_64 GNU/Linux

    However, I can't actually remove them due to an unmet dependency:

        phrogz@planar:/boot$ sudo apt-get --purge remove linux-image-2.6.32-30-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          linux-image-server: Depends: linux-image-2.6.32-36-server but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    But I can't fix the dependency (presumably due to low disk space):

        phrogz@planar:/boot$ sudo apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          liblcms1 linux-headers-2.6.32-32-server libnspr4-0d linux-headers-2.6.32-33-server
          linux-headers-2.6.32-32 linux-headers-2.6.32-33 linux-headers-2.6.32-34 libcups2
          tzdata-java libjpeg62 linux-headers-2.6.32-34-server libavahi-client3
          ca-certificates-java libnss3-1d
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          linux-image-2.6.32-36-server
        Suggested packages:
          fdutils linux-doc-2.6.32 linux-source-2.6.32 linux-tools
        The following NEW packages will be installed:
          linux-image-2.6.32-36-server
        0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
        3 not fully installed or removed.
        Need to get 0B/31.8MB of archives.
        After this operation, 128MB of additional disk space will be used.
        Do you want to continue [Y/n]?
        (Reading database ... 145200 files and directories currently installed.)
        Unpacking linux-image-2.6.32-36-server (from .../linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb) ...
        Done.
        dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb (--unpack):
          failed in buffer_write(fd) (10, ret=-1): backend dpkg-deb during `./boot/vmlinuz-2.6.32-36-server': No space left on device
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Running postrm hook script /usr/sbin/update-grub.
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.32-35-server
        Found linux image: /boot/vmlinuz-2.6.32-34-server
        Found initrd image: /boot/initrd.img-2.6.32-34-server
        Found linux image: /boot/vmlinuz-2.6.32-33-server
        Found initrd image: /boot/initrd.img-2.6.32-33-server
        Found linux image: /boot/vmlinuz-2.6.32-32-server
        Found initrd image: /boot/initrd.img-2.6.32-32-server
        Found linux image: /boot/vmlinuz-2.6.32-31-server
        Found initrd image: /boot/initrd.img-2.6.32-31-server
        Found linux image: /boot/vmlinuz-2.6.32-30-server
        Found initrd image: /boot/initrd.img-2.6.32-30-server
        Found memtest86+ image: /memtest86+.bin
        done
        Errors were encountered while processing:
          /var/cache/apt/archives/linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How do I free up space on /boot so that I can fix my dependencies?
Should I just delete the files manually? And then, should I resize my /boot to be larger, so this doesn't happen again? If so, how? If not, what maintenance should I be running regularly to prevent the accumulation of this cruft?

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for general Essbase backup and restore in the first part of this article, this second part will deal with the rather new feature of Transaction Logging and Replay, which was released in version 11.1, enhancing existing restore options. Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1). Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging could fill that gap and provide you with an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled. In order to activate its usage, corresponding statements could be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Where appname and dbname are optional parameters giving you the chance, in combination with the ENABLE or DISABLE command, to set Transaction Logging for certain applications or databases or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If appname and dbname are not defined, all applications and databases would be covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before using it for log information being written to it. NATIVE is a reserved keyword that shouldn't be changed. The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in application Sample, hence excluding it from being logged.

        TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings. Any required replay of logged transactions after restoring a database can be done only by administrators. The following options are available: In Administration Services, selecting Replay Transactions on the right-click menu on the database: Here you can select to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Or you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database: These sequence IDs (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transaction was performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So for example a calculation originally run after a data load cannot be replayed before having replayed the data load first. After a transaction is replayed, you can replay only transactions with a greater sequence ID.
For example, replaying the transaction with sequence ID of 4 includes all preceding transactions, while afterwards you can only replay transactions with a sequence ID of 5 or greater. Tip: After restoring a database from a backup you should always completely replay all logged transactions, which were executed after the backup, before executing new transactions. But not only the transaction information itself needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory: ARBORPATH/app/appname/dbname/Replay These files are then used during the replay of a logged transaction. By default Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is: TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION] While to the [appname [dbname]] argument the same applies like before for TRANSACTIONLOGLOCATION, the valid values for the OPTION argument are the following: Make the respective setting for which files copies should be logged, considering from which location transactions are usually taking place. Selecting the NONE option prevents Essbase from saving the respective files and the data load cannot be replayed. In this case you must first manually load the data before you can replay the transactions. Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded. You can find more detailed information in the following documents: Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1) Oracle Essbase Online Documentation (rel. 11.1.2.1)) Enterprise Performance Management System Documentation (including previous releases) Or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here; (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected]. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis. Disclaimer: All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. 
As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

    Read the article

  • Fun with Declarative Components

    - by [email protected]
    Use case background
    I have been asked on a number of occasions if our selectOneChoice component could allow random text to be entered, as well as having a list of selections available. Unfortunately, the selectOneChoice component only allows entry via the dropdown selection list and doesn't allow text entry. I was thinking of possible solutions and thought that this might make a good example for using a declarative component.
    My initial idea
    My first thought was to use an af:inputText to allow the text entry, and an af:selectOneChoice with mode="compact" for the selections. To get it to lay out horizontally, we would want to use an af:panelGroupLayout with layout="horizontal". To get the label for this to line up correctly, we'll need to wrap the af:panelGroupLayout with an af:panelLabelAndMessage. This is the basic structure:

        <af:panelLabelAndMessage>
          <af:panelGroupLayout layout="horizontal">
            <af:inputText/>
            <af:selectOneChoice mode="compact"/>
          </af:panelGroupLayout>
        </af:panelLabelAndMessage>

    Make it into a declarative component
    One of the steps to making a declarative component is deciding what attributes we want to be able to specify. To keep this example simple, let's just have:
      - 'label' (the label of our declarative component)
      - 'value' (what we want to bind to the value of the input text)
      - 'items' (the select items in our dropdown)
    Here is the initial declarative component code (saved as file "inputTextWithChoice.jsff"):

        <?xml version='1.0' encoding='UTF-8'?>
        <!-- Copyright (c) 2008, Oracle and/or its affiliates. All rights reserved. -->
        <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
                  xmlns:f="http://java.sun.com/jsf/core"
                  xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
          <jsp:directive.page contentType="text/html;charset=utf-8"/>
          <af:componentDef var="attrs" componentVar="comp">
            <af:xmlContent>
              <component xmlns="http://xmlns.oracle.com/adf/faces/rich/component">
                <description>Input text with choice component.</description>
                <attribute>
                  <description>Label</description>
                  <attribute-name>label</attribute-name>
                  <attribute-class>java.lang.String</attribute-class>
                </attribute>
                <attribute>
                  <description>Value</description>
                  <attribute-name>value</attribute-name>
                  <attribute-class>java.lang.Object</attribute-class>
                </attribute>
                <attribute>
                  <description>Choice Select Items Value</description>
                  <attribute-name>items</attribute-name>
                  <attribute-class>[[Ljavax.faces.model.SelectItem;</attribute-class>
                </attribute>
              </component>
            </af:xmlContent>
            <af:panelLabelAndMessage id="myPlm" label="#{attrs.label}" for="myIt">
              <af:panelGroupLayout id="myPgl" layout="horizontal">
                <af:inputText id="myIt" value="#{attrs.value}" partialTriggers="mySoc"
                              label="myIt" simple="true" />
                <af:selectOneChoice id="mySoc" label="mySoc" simple="true" mode="compact"
                                    value="#{attrs.value}" autoSubmit="true">
                  <f:selectItems id="mySIs" value="#{attrs.items}" />
                </af:selectOneChoice>
              </af:panelGroupLayout>
            </af:panelLabelAndMessage>
          </af:componentDef>
        </jsp:root>

    By having the af:inputText and af:selectOneChoice share the same value (assuming that this is passed in as an EL expression), selecting something in the selectOneChoice will update the value in the af:inputText. To use this declarative component in a jspx page:

        <af:declarativeComponent id="myItwc" viewId="inputTextWithChoice.jsff"
                                 label="InputText with Choice"
                                 value="#{demoInput.choiceValue}"
                                 items="#{demoInput.selectItems}" />

    Some problems arise
    At first glance, this seems to be functioning like we want it to.
However, there is a side effect to having the af:inputText and af:selectOneChoice share a value: if one changes, so does the other. The problem here is that when we update the af:inputText to something that doesn't match one of the selections in the af:selectOneChoice, the af:selectOneChoice will set itself to null (since the value doesn't match one of the selections) and the next time the page is submitted, it will submit the null value and the af:inputText will be empty. Oops, we don't want that. Hmm, what to do? Okay, how about we make sure that the current value is always available in the selection list, but let's not render it if the value is empty. We also need to add a partialTriggers attribute so that this gets updated when the af:inputText is changed. Plus, we really don't want to select this item, so let's disable it. <af:selectOneChoice id="mySoc" partialTriggers="myIt" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <af:selectItem id="mySI" label="Selected:#{attrs.value}" value="#{attrs.value}" disabled="true" rendered="#{!empty attrs.value}"/> <af:separator id="mySp" /> <f:selectItems id="mySIs" value="#{attrs.items}" /></af:selectOneChoice> That seems to be working pretty well. One minor issue that we probably can't do anything about is that when you enter something in the inputText and then click on the selectOneChoice, the popup is displayed but then goes away, because it has been replaced via PPR as we told it to with partialTriggers="myIt". This is not that big a deal, since if you are entering something manually, you probably don't want to select something from the list right afterwards. Making it look like a single component. Now, let's play around a bit with the contentStyle of the af:inputText and the af:selectOneChoice so that the compact icon will lay out inside the af:inputText, making it look more like an af:selectManyChoice. We need to add some padding-right to the af:inputText so there is space for the icon. These adjustments were for the Fusion FX skin. <af:inputText id="myIt" partialTriggers="mySoc" autoSubmit="true" contentStyle="padding-right: 15px;" value="#{attrs.value}" label="myIt" simple="true" /><af:selectOneChoice id="mySoc" partialTriggers="myIt" contentStyle="position: relative; top: -2px; left: -19px;" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <af:selectItem id="mySI" label="Selected:#{attrs.value}" value="#{attrs.value}" disabled="true" rendered="#{!empty attrs.value}"/> <af:separator id="mySp" /> <f:selectItems id="mySIs" value="#{attrs.items}" /></af:selectOneChoice> There you have it: a declarative component that allows for suggested selections, but also allows arbitrary text to be entered. This could be used for a search field, where the 'items' attribute could be populated with popular searches. Lines of Java code written: 0

    Read the article

  • PowerShell Script to Enumerate SharePoint 2010 or 2013 Permissions and Active Directory Group Membership

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/07/01/powershell-script-to-enumerate-sharepoint-2010-or-2013-permissions-and.aspx   In this post I will present a script to enumerate SharePoint 2010 or 2013 permissions across the entire farm down to the site (SPWeb) level.  As a bonus this script also recursively expands the membership of any Active Directory (AD) group, including nested groups, which you wouldn’t be able to find through the SharePoint UI.   History     Back in 2009 (over 4 years ago now) I published one of my most read blog posts about enumerating SharePoint 2007 permissions.  I finally got around to updating that script to remove deprecated APIs, support the SharePoint 2010 cmdlets, and fix a few bugs.  There are 2 things that script did that I had to remove due to major architectural or procedural changes in the script: indenting the XML output, and the ability to search for a specific user.    I plan to add back the ability to search for a specific user but wanted to get this version published first.  As for indenting the XML, that could be added but would take some effort.  If there is user demand for it (let me know in the comments or email me using the contact button at top of blog) I’ll move it up in priorities.    As a side note you may also notice that I’m not using the Active Directory cmdlets.  This was a conscious decision since not all environments have them available.  Instead I’m relying on the older [ADSI] type accelerator and APIs.  It does add a significant amount of code to the script but it is necessary for compatibility.  Hopefully in a few years if I need to update again I can remove that legacy code.   Solution    Below is the script to enumerate SharePoint 2010 and 2013 permissions down to site level.  You can also download it from my SkyDrive account or my posting on the TechNet Script Center Repository. SkyDrive TechNet Script Center Repository http://gallery.technet.microsoft.com/scriptcenter/Enumerate-SharePoint-2010-35976bdb   ########################################################### #DisplaySPWebApp8.ps1 # #Author: Brian T. Jackett #Last Modified Date: 2013-07-01 # #Traverse the entire web app site by site to display # hierarchy and users with permissions to site.
########################################################### function Expand-ADGroupMembership {     Param     (         [Parameter(Mandatory=$true,                    Position=0)]         [string]         $ADGroupName,         [Parameter(Position=1)]         [string]         $RoleBinding     )     Process     {         $roleBindingText = ""         if(-not [string]::IsNullOrEmpty($RoleBinding))         {             $roleBindingText = " RoleBindings=`"$roleBindings`""         }         Write-Output "<ADGroup Name=`"$($ADGroupName)`"$roleBindingText>"         $domain = $ADGroupName.substring(0, $ADGroupName.IndexOf("\") + 1)         $groupName = $ADGroupName.Remove(0, $ADGroupName.IndexOf("\") + 1)                                     #BEGIN - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY         #http://www.microsoft.com/technet/scriptcenter/scripts/powershell/search/users/srch106.mspx         #GET AD GROUP FROM DIRECTORY SERVICES SEARCH         $strFilter = "(&(objectCategory=Group)(name="+($groupName)+"))"         $objDomain = New-Object System.DirectoryServices.DirectoryEntry         $objSearcher = New-Object System.DirectoryServices.DirectorySearcher         $objSearcher.SearchRoot = $objDomain         $objSearcher.Filter = $strFilter         # specify properties to be returned         $colProplist = ("name","member","objectclass")         foreach ($i in $colPropList)         {             $catcher = $objSearcher.PropertiesToLoad.Add($i)         }         $colResults = $objSearcher.FindAll()         #END - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY         foreach ($objResult in $colResults)         {             if($objResult.Properties["Member"] -ne $null)             {                 foreach ($member in $objResult.Properties["Member"])                 {                     $indMember = [adsi] "LDAP://$member"                     $fullMemberName = $domain + ($indMember.Name)                                         #if($indMember["objectclass"]                         # if child AD group continue down chain                         if(($indMember | Select-Object -ExpandProperty objectclass) -contains "group")                         {                             Expand-ADGroupMembership -ADGroupName $fullMemberName                         }                         elseif(($indMember | Select-Object -ExpandProperty objectclass) -contains "user")                         {                             Write-Output "<ADUser>$fullMemberName</ADUser>"                         }                 }             }         }                 Write-Output "</ADGroup>"     } } #end Expand-ADGroupMembership # main portion of script if((Get-PSSnapin -Name microsoft.sharepoint.powershell) -eq $null) {     Add-PSSnapin Microsoft.SharePoint.PowerShell } $farm = Get-SPFarm Write-Output "<Farm Guid=`"$($farm.Id)`">" $webApps = Get-SPWebApplication foreach($webApp in $webApps) {     Write-Output "<WebApplication URL=`"$($webApp.URL)`" Name=`"$($webApp.Name)`">"     foreach($site in $webApp.Sites)     {         Write-Output "<SiteCollection URL=`"$($site.URL)`">"                 foreach($web in $site.AllWebs)         {             Write-Output "<Site URL=`"$($web.URL)`">"             # if site inherits permissions from parent then stop processing             if($web.HasUniqueRoleAssignments -eq $false)             {                 Write-Output "<!-- Inherits role assignments from parent -->"             }             # else site has unique permissions             else             {         
        foreach($assignment in $web.RoleAssignments)                 {                     if(-not [string]::IsNullOrEmpty($assignment.Member.Xml))                     {                         $roleBindings = ($assignment.RoleDefinitionBindings | Select-Object -ExpandProperty name) -join ","                         # check if assignment is SharePoint Group                         if($assignment.Member.XML.StartsWith('<Group') -eq "True")                         {                             Write-Output "<SPGroup Name=`"$($assignment.Member.Name)`" RoleBindings=`"$roleBindings`">"                             foreach($SPGroupMember in $assignment.Member.Users)                             {                                 # if SharePoint group member is an AD Group                                 if($SPGroupMember.IsDomainGroup)                                 {                                     Expand-ADGroupMembership -ADGroupName $SPGroupMember.Name                                 }                                 # else SharePoint group member is an AD User                                 else                                 {                                     # remove claim portion of user login                                     #Write-Output "<ADUser>$($SPGroupMember.UserLogin.Remove(0,$SPGroupMember.UserLogin.IndexOf("|") + 1))</ADUser>"                                     Write-Output "<ADUser>$($SPGroupMember.UserLogin)</ADUser>"                                 }                             }                             Write-Output "</SPGroup>"                         }                         # else an indivdually listed AD group or user                         else                         {                             if($assignment.Member.IsDomainGroup)                             {                                 Expand-ADGroupMembership -ADGroupName $assignment.Member.Name -RoleBinding $roleBindings                             }                             else                             {                                 # remove claim portion of user login                                 #Write-Output "<ADUser>$($assignment.Member.UserLogin.Remove(0,$assignment.Member.UserLogin.IndexOf("|") + 1))</ADUser>"                                                                 Write-Output "<ADUser RoleBindings=`"$roleBindings`">$($assignment.Member.UserLogin)</ADUser>"                             }                         }                     }                 }             }             Write-Output "</Site>"             $web.Dispose()         }         Write-Output "</SiteCollection>"         $site.Dispose()     }     Write-Output "</WebApplication>" } Write-Output "</Farm>"      The output from the script can be sent to an XML which you can then explore using the [XML] type accelerator.  This lets you explore the XML structure however you see fit.  See the screenshot below for an example.      If you do view the XML output through a text editor (Notepad++ for me) notice the format.  Below we see a SharePoint site that has a SharePoint group Demo Members with Edit permissions assigned.  Demo Members has an AD group corp\developers as a member.  corp\developers has a child AD group called corp\DevelopersSub with 1 AD user in that sub group.  As you can see the script recursively expands the AD hierarchy.   Conclusion    It took me 4 years to finally update this script but I‘m happy to get this published.  
I was able to fix a number of errors and smooth out some rough edges.  I plan to develop this into a more full-fledged tool over the next year with more features and flexibility (copy permissions, search for an individual user or group, optionally enumerate lists / items, etc.).  If you have any feedback, feature requests, or issues running it, please let me know.  Enjoy the script!         -Frog Out
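As a rough companion sketch (not part of the script download), the XML the script emits can also be explored from C# with LINQ to XML; the example below assumes the script output was redirected to a file named permissions.xml and simply lists the AD users found under each Site element the script writes out.

    // Rough sketch only: load the XML produced by the permissions script
    // (assumed to have been redirected to permissions.xml) and list AD users per site.
    using System;
    using System.Xml.Linq;

    class PermissionReport
    {
        static void Main()
        {
            XDocument doc = XDocument.Load("permissions.xml");

            foreach (XElement site in doc.Descendants("Site"))
            {
                Console.WriteLine("Site: " + (string)site.Attribute("URL"));

                foreach (XElement user in site.Descendants("ADUser"))
                {
                    Console.WriteLine("  " + user.Value);
                }
            }
        }
    }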

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
    I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result, in fact, it is consistent with other results that are available on the Internet. See the "references", below, for some examples. Nonetheless, here is a summary of my configuration and results. (1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have a made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made: (1.1) Which virtualization option? There are several virtualization options and isolation levels that are available. Options include: Hard partitions:  Dynamic Domains on Sun SPARC Enterprise M-Series Servers Hypervisor based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers OS Virtualization using Oracle Solaris Containers Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth. Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones"  (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association (b) dedicated-cpu resources and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.  (1.3) Cluster Topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution. (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like go back a test many 32-bit application servers, but that is left for another day. (2.0) Configuration tested. (2.1) I was using a SPARC T4-2 Server which has 2 CPU and 128 virtual processors. 
To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl: test# ./psrinfo.pl -pv The physical processor has 8 cores and 64 virtual processors (0-63) The core has 8 virtual processors (0-7)   The core has 8 virtual processors (8-15)   The core has 8 virtual processors (16-23)   The core has 8 virtual processors (24-31)   The core has 8 virtual processors (32-39)   The core has 8 virtual processors (40-47)   The core has 8 virtual processors (48-55)   The core has 8 virtual processors (56-63)     SPARC-T4 (chipid 0, clock 2848 MHz) The physical processor has 8 cores and 64 virtual processors (64-127)   The core has 8 virtual processors (64-71)   The core has 8 virtual processors (72-79)   The core has 8 virtual processors (80-87)   The core has 8 virtual processors (88-95)   The core has 8 virtual processors (96-103)   The core has 8 virtual processors (104-111)   The core has 8 virtual processors (112-119)   The core has 8 virtual processors (120-127)     SPARC-T4 (chipid 1, clock 2848 MHz) (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic. (2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ):  (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers. (3.1) Stop AppServers from the BUI. (3.2) Stop the NGZ. test# ssh test-z2 init 5 (3.3) Enable resource pools: test# svcadm enable pools (3.4) Create the resource pool: test# poolcfg -dc 'create pool pool-test-z2' (3.5) Create the processor set: test# poolcfg -dc 'create pset pset-test-z2' (3.6) Specify the maximum number of CPU's that may be addd to the processor set: test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)' (3.7) bash syntax to add Virtual CPUs to the processor set: test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done (3.8) Associate the resource pool with the processor set: test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)' (3.9) Tell the zone to use the resource pool that has been created: test# zonecfg -z test-z1 set pool=pool-test-z2 (3.10) Boot the Oracle Solaris Container test# zoneadm -z test-z2 boot (3.11) Save the configuration to /etc/pooladm.conf test# pooladm -s (4.0) Results. Using the resource pools improves both throughput and response time: (5.0) References: System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones Capitalizing on large numbers of processors with WebSphere Portal on Solaris WebSphere Application Server and T5440 (Dileep Kumar's Weblog)  http://www.brendangregg.com/zones.html Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.

    Read the article

  • Nashorn, the rhino in the room

    - by costlow
    Nashorn is a new runtime within JDK 8 that allows developers to run code written in JavaScript and call back and forth with Java. One advantage to the Nashorn scripting engine is that is allows for quick prototyping of functionality or basic shell scripts that use Java libraries. The previous JavaScript runtime, named Rhino, was introduced in JDK 6 (released 2006, end of public updates Feb 2013). Keeping tradition amongst the global developer community, "Nashorn" is the German word for rhino. The Java platform and runtime is an intentional home to many languages beyond the Java language itself. OpenJDK’s Da Vinci Machine helps coordinate work amongst language developers and tool designers and has helped different languages by introducing the Invoke Dynamic instruction in Java 7 (2011), which resulted in two major benefits: speeding up execution of dynamic code, and providing the groundwork for Java 8’s lambda executions. Many of these improvements are discussed at the JVM Language Summit, where language and tool designers get together to discuss experiences and issues related to building these complex components. There are a number of benefits to running JavaScript applications on JDK 8’s Nashorn technology beyond writing scripts quickly: Interoperability with Java and JavaScript libraries. Scripts do not need to be compiled. Fast execution and multi-threading of JavaScript running in Java’s JRE. The ability to remotely debug applications using an IDE like NetBeans, Eclipse, or IntelliJ (instructions on the Nashorn blog). Automatic integration with Java monitoring tools, such as performance, health, and SIEM. In the remainder of this blog post, I will explain how to use Nashorn and the benefit from those features. Nashorn execution environment The Nashorn scripting engine is included in all versions of Java SE 8, both the JDK and the JRE. Unlike Java code, scripts written in nashorn are interpreted and do not need to be compiled before execution. Developers and users can access it in two ways: Users running JavaScript applications can call the binary directly:jre8/bin/jjs This mechanism can also be used in shell scripts by specifying a shebang like #!/usr/bin/jjs Developers can use the API and obtain a ScriptEngine through:ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn"); When using a ScriptEngine, please understand that they execute code. Avoid running untrusted scripts or passing in untrusted/unvalidated inputs. During compilation, consider isolating access to the ScriptEngine and using Type Annotations to only allow @Untainted String arguments. One noteworthy difference between JavaScript executed in or outside of a web browser is that certain objects will not be available. For example when run outside a browser, there is no access to a document object or DOM tree. Other than that, all syntax, semantics, and capabilities are present. Examples of Java and JavaScript The Nashorn script engine allows developers of all experience levels the ability to write and run code that takes advantage of both languages. The specific dialect is ECMAScript 5.1 as identified by the User Guide and its standards definition through ECMA international. In addition to the example below, Benjamin Winterberg has a very well written Java 8 Nashorn Tutorial that provides a large number of code samples in both languages. 
Basic Operations A basic Hello World application written to run on Nashorn would look like this: #!/usr/bin/jjs print("Hello World"); The first line is a standard script indication, so that Linux or Unix systems can run the script through Nashorn. On Windows where scripts are not as common, you would run the script like: jjs helloWorld.js. Receiving Arguments In order to receive program arguments your jjs invocation needs to use the -scripting flag and a double-dash to separate which arguments are for jjs and which are for the script itself:jjs -scripting print.js -- "This will print" #!/usr/bin/jjs var whatYouSaid = $ARG.length==0 ? "You did not say anything" : $ARG[0] print(whatYouSaid); Interoperability with Java libraries (including 3rd party dependencies) Another goal of Nashorn was to allow for quick scriptable prototypes, allowing access into Java types and any libraries. Resources operate in the context of the script (either in-line with the script or as separate threads) so if you open network sockets and your script terminates, those sockets will be released and available for your next run. Your code can access Java types the same as regular Java classes. The “import statements” are written somewhat differently to accommodate for language. There is a choice of two styles: For standard classes, just name the class: var ServerSocket = java.net.ServerSocket For arrays or other items, use Java.type: var ByteArray = Java.type("byte[]")You could technically do this for all. The same technique will allow your script to use Java types from any library or 3rd party component and quickly prototype items. Building a user interface One major difference between JavaScript inside and outside of a web browser is the availability of a DOM object for rendering views. When run outside of the browser, JavaScript has full control to construct the entire user interface with pre-fabricated UI controls, charts, or components. The example below is a variation from the Nashorn and JavaFX guide to show how items work together. Nashorn has a -fx flag to make the user interface components available. With the example script below, just specify: jjs -fx -scripting fx.js -- "My title" #!/usr/bin/jjs -fx var Button = javafx.scene.control.Button; var StackPane = javafx.scene.layout.StackPane; var Scene = javafx.scene.Scene; var clickCounter=0; $STAGE.title = $ARG.length>0 ? $ARG[0] : "You didn't provide a title"; var button = new Button(); button.text = "Say 'Hello World'"; button.onAction = myFunctionForButtonClicking; var root = new StackPane(); root.children.add(button); $STAGE.scene = new Scene(root, 300, 250); $STAGE.show(); function myFunctionForButtonClicking(){   var text = "Click Counter: " + clickCounter;   button.setText(text);   clickCounter++;   print(text); } For a more advanced post on using Nashorn to build a high-performing UI, see JavaFX with Nashorn Canvas example. Interoperable with frameworks like Node, Backbone, or Facebook React The major benefit of any language is the interoperability gained by people and systems that can read, write, and use it for interactions. Because Nashorn is built for the ECMAScript specification, developers familiar with JavaScript frameworks can write their code and then have system administrators deploy and monitor the applications the same as any other Java application. A number of projects are also running Node applications on Nashorn through Project Avatar and the supported modules. 
In addition to the previously mentioned Nashorn tutorial, Benjamin has also written a post about Using Backbone.js with Nashorn. To show the multi-language power of the Java Runtime, there is another interesting example that unites Facebook React and Clojure on JDK 8’s Nashorn. Summary Nashorn provides a simple and fast way of executing JavaScript applications and bridging between the best of each language. By making the full range of Java libraries available to JavaScript applications, and the quick prototyping style of JavaScript available to Java applications, developers are free to work as they see fit. Software Architects and System Administrators can take advantage of one runtime and leverage any work that they have done to tune, monitor, and certify their systems. Additional information is available in: The Nashorn Users’ Guide Java Magazine’s article "Next Generation JavaScript Engine for the JVM." The Nashorn team’s primary blog or a very helpful collection of Nashorn links.

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 1

    - by Simon Cooper
    Before we look at the bytes comprising the CLR-specific data inside an assembly, we first need to understand the logical format of the metadata (For this post I only be looking at simple pure-IL assemblies; mixed-mode assemblies & other things complicates things quite a bit). Metadata streams Most of the CLR-specific data inside an assembly is inside one of 5 streams, which are analogous to the sections in a PE file. The name of each section in a PE file starts with a ., and the name of each stream in the CLR metadata starts with a #. All but one of the streams are heaps, which store unstructured binary data. The predefined streams are: #~ Also called the metadata stream, this stream stores all the information on the types, methods, fields, properties and events in the assembly. Unlike the other streams, the metadata stream has predefined contents & structure. #Strings This heap is where all the namespace, type & member names are stored. It is referenced extensively from the #~ stream, as we'll be looking at later. #US Also known as the user string heap, this stream stores all the strings used in code directly. All the strings you embed in your source code end up in here. This stream is only referenced from method bodies. #GUID This heap exclusively stores GUIDs used throughout the assembly. #Blob This heap is for storing pure binary data - method signatures, generic instantiations, that sort of thing. Items inside the heaps (#Strings, #US, #GUID and #Blob) are indexed using a simple binary offset from the start of the heap. At that offset is a coded integer giving the length of that item, then the item's bytes immediately follow. The #GUID stream is slightly different, in that GUIDs are all 16 bytes long, so a length isn't required. Metadata tables The #~ stream contains all the assembly metadata. The metadata is organised into 45 tables, which are binary arrays of predefined structures containing information on various aspects of the metadata. Each entry in a table is called a row, and the rows are simply concatentated together in the file on disk. For example, each row in the TypeRef table contains: A reference to where the type is defined (most of the time, a row in the AssemblyRef table). An offset into the #Strings heap with the name of the type An offset into the #Strings heap with the namespace of the type. in that order. The important tables are (with their table number in hex): 0x2: TypeDef 0x4: FieldDef 0x6: MethodDef 0x14: EventDef 0x17: PropertyDef Contains basic information on all the types, fields, methods, events and properties defined in the assembly. 0x1: TypeRef The details of all the referenced types defined in other assemblies. 0xa: MemberRef The details of all the referenced members of types defined in other assemblies. 0x9: InterfaceImpl Links the types defined in the assembly with the interfaces that type implements. 0xc: CustomAttribute Contains information on all the attributes applied to elements in this assembly, from method parameters to the assembly itself. 0x18: MethodSemantics Links properties and events with the methods that comprise the get/set or add/remove methods of the property or method. 0x1b: TypeSpec 0x2b: MethodSpec These tables provide instantiations of generic types and methods for each usage within the assembly. There are several ways to reference a single row within a table. The simplest is to simply specify the 1-based row index (RID). The indexes are 1-based so a value of 0 can represent 'null'. 
In this case, which table the row index refers to is inferred from the context. If the table can't be determined from the context, then a particular row is specified using a token. This is a 4-byte value with the most significant byte specifying the table, and the other 3 specifying the 1-based RID within that table. This is generally how a metadata table row is referenced from the instruction stream in method bodies. The third way is to use a coded token, which we will look at in the next post. So, back to the bytes Now we've got a rough idea of how the metadata is logically arranged, we can now look at the bytes comprising the start of the CLR data within an assembly: The first 8 bytes of the .text section are used by the CLR loader stub. After that, the CLR-specific data starts with the CLI header. I've highlighted the important bytes in the diagram. In order, they are: The size of the header. As the header is a fixed size, this is always 0x48. The CLR major version. This is always 2, even for .NET 4 assemblies. The CLR minor version. This is always 5, even for .NET 4 assemblies, and seems to be ignored by the runtime. The RVA and size of the metadata header. In the diagram, the RVA 0x20e4 corresponds to the file offset 0x2e4 Various flags specifying if this assembly is pure-IL, whether it is strong name signed, and whether it should be run as 32-bit (this is how the CLR differentiates between x86 and AnyCPU assemblies). A token pointing to the entrypoint of the assembly. In this case, 06 (the last byte) refers to the MethodDef table, and 01 00 00 refers to to the first row in that table. (after a gap) RVA of the strong name signature hash, which comes straight after the CLI header. The RVA 0x2050 corresponds to file offset 0x250. The rest of the CLI header is mainly used in mixed-mode assemblies, and so is zeroed in this pure-IL assembly. After the CLI header comes the strong name hash, which is a SHA-1 hash of the assembly using the strong name key. After that comes the bodies of all the methods in the assembly concatentated together. Each method body starts off with a header, which I'll be looking at later. As you can see, this is a very small assembly with only 2 methods (an instance constructor and a Main method). After that, near the end of the .text section, comes the metadata, containing a metadata header and the 5 streams discussed above. We'll be looking at this in the next post. Conclusion The CLI header data doesn't have much to it, but we've covered some concepts that will be important in later posts - the logical structure of the CLR metadata and the overall layout of CLR data within the .text section. Next, I'll have a look at the contents of the #~ stream, and how the table data is arranged on disk.
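To make the token layout described above concrete, here is a small standalone sketch (not taken from the post) that splits the example entrypoint token 0x06000001 into the table byte and the 1-based RID:

    // Minimal sketch (not from the original post): decode a metadata token into
    // its table number and 1-based row index (RID), as described above.
    // 0x06000001 is the entrypoint token from the example: MethodDef (0x06), row 1.
    using System;

    class TokenDemo
    {
        static void Main()
        {
            uint token = 0x06000001;

            uint table = token >> 24;         // most significant byte selects the table
            uint rid   = token & 0x00FFFFFF;  // remaining three bytes are the 1-based RID

            Console.WriteLine("Table: 0x{0:X2}, RID: {1}", table, rid);   // Table: 0x06, RID: 1
        }
    }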

    Read the article

  • How can I remove old kernels/install new ones when /boot is full?

    - by Marcel
    I know this question is asked many times before, however with me it is just a bit different I guess. # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 224G 5.2G 208G 3% / udev 1.9G 4.0K 1.9G 1% /dev tmpfs 777M 260K 777M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/sda2 90M 88M 0 100% /boot /dev/sda6 1.9G 514M 1.3G 29% /tmp My boot partition is full. Current Kernel: # uname -r 3.2.0-35-generic All Kernels: # dpkg --list | grep linux-image ii linux-image-3.2.0-32-generic 3.2.0-32.51 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-34-generic 3.2.0-34.53 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-35-generic 3.2.0-35.55 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-37-generic 3.2.0-37.58 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-38-generic 3.2.0-38.60 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iU linux-image-generic 3.2.0.37.45 Generic Linux kernel image So I thought of removing the 3.2.0.32-generic kernel with: # sudo apt-get purge linux-image-3.2.0-32-generic Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.37.45) but 3.2.0.38.46 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). No success. When I try apt-get -f install it also fails: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-34 linux-headers-3.2.0-35 linux-headers-3.2.0-34-generic linux-headers-3.2.0-35-generic Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-generic linux-image-generic The following packages will be upgraded: linux-generic linux-image-generic 2 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 5 not fully installed or removed. Need to get 0 B/4,334 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initramfs-tools (0.99ubuntu13.1) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-37-generic (3.2.0-37.58) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-38-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic /boot/vmlinuz-3.2.0-37-generic update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-38-generic (3.2.0-38.60) ... Running depmod. 
update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-37-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-38-generic /boot/vmlinuz-3.2.0-38-generic update-initramfs: Generating /boot/initrd.img-3.2.0-38-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-38-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-38-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-38-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-37-generic; however: Package linux-image-3.2.0-37-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Package linux-image-generic is not configured yet. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already update-initramfs: Generating /boot/initrd.img-3.2.0-35-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-35-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-37-generic linux-image-3.2.0-38-generic linux-image-generic linux-generic initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Any help would really be appreciated. Update: I did: sudo rm /boot/*-3.2.0-32-generic /boot/*-3.2.0-34-generic After that the following problem with apt-get -f install: root@localhost:/# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic The following packages will be upgraded: linux-generic 1 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0 B/1,722 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Version of linux-image-generic on system is 3.2.0.38.46. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. 
dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly, and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved. What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube, which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools. Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel. Extension Attributes: The Challenge One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry. What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP Data Mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart which is a relational database and the OLAP Cube which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements.  So, in broad terms the steps required to fulfil a client order are as follows: Step 1: Prepare metadata Create a set of database names unique to the client's order Modify all package connection strings to be used by SSIS to point to new databases and file locations. Step 2: Create relational databases Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE as we do not need the overhead of logging anything Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases Step 3: Load staging database Use SSIS to load all data files into the staging database in a parallel operation Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube. Step 4: Load data mart relational database Load the data from staging into the data mart relational database, again in parallel where possible Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables Step 5: Load extension tables & attributes Pivot the extension attributes from their native name/value pairs into proper relational tables Add the extension attributes to the views used by OLAP cube Step 6: Deploy & Process OLAP cube Deploy the OLAP database directly to the server using a C# script task in SSIS Modify the connection string used by the OLAP cube to point to the data mart relational database Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions Remove any standard attributes that not required Process the OLAP cube Step 7: Backup and drop databases Drop staging database as it is no longer required Backup data mart relational and OLAP database and ship these to the client's infrastructure Drop data mart relational and OLAP database from the build server Mark order complete Start processing the next order, ad infinitum. So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
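As a small illustration of Step 2 above, the sketch below (with placeholder names, not production code from the session) creates a client-specific database with dynamic SQL and switches it to SIMPLE recovery from C#; in the real solution the connection string and database name would be driven from the metadata database.

    // Illustrative sketch only (names are placeholders): create a build-to-order
    // database with dynamic SQL and set its recovery model to SIMPLE so the load
    // is not slowed down by logging overhead, as described in Step 2.
    using System.Data.SqlClient;

    class DataMartBuilder
    {
        static void CreateDatabase(string connectionString, string databaseName)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                using (var create = new SqlCommand(
                    "CREATE DATABASE [" + databaseName + "]", connection))
                {
                    create.ExecuteNonQuery();
                }

                using (var alter = new SqlCommand(
                    "ALTER DATABASE [" + databaseName + "] SET RECOVERY SIMPLE", connection))
                {
                    alter.ExecuteNonQuery();
                }
            }
        }
    }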

    Read the article

  • Changing CSS with jQuery syntax in Silverlight using jLight

    - by Timmy Kokke
    Lately I’ve ran into situations where I had to change elements or had to request a value in the DOM from Silverlight. jLight, which was introduced in an earlier article, can help with that. jQuery offers great ways to change CSS during runtime. Silverlight can access the DOM, but it isn’t as easy as jQuery. All examples shown in this article can be looked at in this online demo. The code can be downloaded here.   Part 1: The easy stuff Selecting and changing properties is pretty straight forward. Setting the text color in all <B> </B> elements can be done using the following code:   jQuery.Select("b").Css("color", "red");   The Css() method is an extension method on jQueryObject which is return by the jQuery.Select() method. The Css() method takes to parameters. The first is the Css style property. All properties used in Css can be entered in this string. The second parameter is the value you want to give the property. In this case the property is “color” and it is changed to “red”. To specify which element you want to select you can add a :selector parameter to the Select() method as shown in the next example.   jQuery.Select("b:first").Css("font-family", "sans-serif");   The “:first” pseudo-class selector selects only the first element. This example changes the “font-family” property of the first <B></B> element to “sans-serif”. To make use of intellisense in Visual Studio I’ve added a extension methods to help with the pseudo-classes. In the example below the “font-weight” of every “Even” <LI></LI> is set to “bold”.   jQuery.Select("li".Even()).Css("font-weight", "bold");   Because the Css() extension method returns a jQueryObject it is possible to chain calls to Css(). The following example show setting the “color”, “background-color” and the “font-size” of all headers in one go.   jQuery.Select(":header").Css("color", "#12FF70") .Css("background-color", "yellow") .Css("font-size", "25px");   Part 2: More complex stuff In only a few cases you need to change only one style property. More often you want to change an entire set op style properties all in one go.  You could chain a lot of Css() methods together. A better way is to add a class to a stylesheet and define all properties in there. With the AddClass() method you can set a style class to a set of elements. This example shows how to add the “demostyle” class to all <B></B> in the document.   jQuery.Select("b").AddClass("demostyle");   Removing the class works in the same way:   jQuery.Select("b").RemoveClass("demostyle");   jLight is build for interacting with to the DOM from Silverlight using jQuery. A jQueryObjectCss object can be used to define different sets of style properties in Silverlight. The over 60 most common Css style properties are defined in the jQueryObjectCss class. A string indexer can be used to access all style properties ( CssObject1[“background-color”] equals CssObject1.BackgroundColor). In the code below, two jQueryObjectCss objects are defined and instantiated.   
private jQueryObjectCss CssObject1; private jQueryObjectCss CssObject2;   public Demo2() { CssObject1 = new jQueryObjectCss { BackgroundColor = "Lime", Color="Black", FontSize = "12pt", FontFamily = "sans-serif", FontWeight = "bold", MarginLeft = 150, LineHeight = "28px", Border = "Solid 1px #880000" }; CssObject2 = new jQueryObjectCss { FontStyle = "Italic", FontSize = "48", Color = "#225522" }; InitializeComponent(); }   Now instead of chaining to set all different properties you can just pass one of the jQueryObjectCss objects to the Css() method. In this case all <LI></LI> elements are set to match this object.   jQuery.Select("li").Css(CssObject1); When using the jQueryObjectCss objects chaining is still possible. In the following example all headers are given a blue backgroundcolor and the last is set to match CssObject2.   jQuery.Select(":header").Css(new jQueryObjectCss{BackgroundColor = "Blue"}) .Eq(-1).Css(CssObject2);   Part 3: The fun stuff Having Silverlight call JavaScript and than having JavaScript to call Silverlight requires a lot of plumbing code. Everything has to be registered and strings are passed back and forth to execute the JavaScript. jLight makes this kind of stuff so easy, it becomes fun to use. In a lot of situations jQuery can call a function to decide what to do, setting a style class based on complex expressions for example. jLight can do the same, but the callback methods are defined in Silverlight. This example calls the function() method for each <LI></LI> element. The callback method has to take a jQueryObject, an integer and a string as parameters. In this case jLight differs a bit from the actual jQuery implementation. jQuery uses only the index and the className parameters. A jQueryObject is added to make it simpler to access the attributes and properties of the element. If the text of the listitem starts with a ‘D’ or an ‘M’ the class is set. Otherwise null is returned and nothing happens.   private void button1_Click(object sender, RoutedEventArgs e) { jQuery.Select("li").AddClass(function); }   private string function(jQueryObject obj, int index, string className) { if (obj.Text[0] == 'D' || obj.Text[0] == 'M') return "demostyle"; return null; }   The last thing I would like to demonstrate uses even more Silverlight and less jLight, but demonstrates the power of the combination. Animating a style property using a Storyboard with easing functions. First a dependency property is defined. In this case it is a double named Intensity. By handling the changed event the color is set using jQuery.   public double Intensity { get { return (double)GetValue(IntensityProperty); } set { SetValue(IntensityProperty, value); } }   public static readonly DependencyProperty IntensityProperty = DependencyProperty.Register("Intensity", typeof(double), typeof(Demo3), new PropertyMetadata(0.0, IntensityChanged));   private static void IntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { var i = (byte)(double)e.NewValue; jQuery.Select("span").Css("color", string.Format("#{0:X2}{0:X2}{0:X2}", i)); }   An animation has to be created. This code defines a Storyboard with one keyframe that uses a bounce ease as an easing function. The animation is set to target the Intensity dependency property defined earlier.   
private Storyboard CreateAnimation(double value) { Storyboard storyboard = new Storyboard(); var da = new DoubleAnimationUsingKeyFrames(); var d = new EasingDoubleKeyFrame { EasingFunction = new BounceEase(), KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1.0)), Value = value }; da.KeyFrames.Add(d); Storyboard.SetTarget(da, this); Storyboard.SetTargetProperty(da, new PropertyPath(Demo3.IntensityProperty)); storyboard.Children.Add(da); return storyboard; }   Initially the Intensity is set to 128, which results in a gray color. When one of the buttons is pressed, a new animation is created and played: one to animate to black, and one to animate to white.   public Demo3() { InitializeComponent(); Intensity = 128; }   private void button2_Click(object sender, RoutedEventArgs e) { CreateAnimation(255).Begin(); }   private void button3_Click(object sender, RoutedEventArgs e) { CreateAnimation(0).Begin(); }   Conclusion As you can see, jLight can make the life of a Silverlight developer a lot easier when accessing the DOM. Almost all jQuery functions that are defined in jLight use the same constructions as described above. I've tried to stay as close as possible to the real jQuery. Having JavaScript perform callbacks to Silverlight using jLight will be described in more detail in a future tutorial about AJAX or eventing.
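As a closing recap, the fragment below combines the jLight calls introduced above (it is not taken from the demo download, and assumes a jQueryObjectCss instance like the ones defined earlier); selections, jQueryObjectCss objects, the string indexer and chaining can all be mixed freely.

    // Recap sketch only, reusing the jLight calls shown in this post.
    var highlight = new jQueryObjectCss { BackgroundColor = "Yellow", Color = "#333333", FontWeight = "bold" };

    // per the post, the string indexer maps CSS property names to the typed properties
    string sameColor = highlight["color"];            // equals highlight.Color

    jQuery.Select("li".Even())
          .Css(highlight)                             // apply the whole set at once
          .Css("text-decoration", "underline");       // then chain a single property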

    Read the article

  • ASP.NET WebAPI Security 3: Extensible Authentication Framework

    - by Your DisplayName here!
    In my last post, I described the identity architecture of ASP.NET Web API. The short version was, that Web API (beta 1) does not really have an authentication system on its own, but inherits the client security context from its host. This is fine in many situations (e.g. AJAX style callbacks with an already established logon session). But there are many cases where you don’t use the containing web application for authentication, but need to do it yourself. Examples of that would be token based authentication and clients that don’t run in the context of the web application (e.g. desktop clients / mobile). Since Web API provides a nice extensibility model, it is easy to implement whatever security framework you want on top of it. My design goals were: Easy to use. Extensible. Claims-based. ..and of course, this should always behave the same, regardless of the hosting environment. In the rest of the post I am outlining some of the bits and pieces, So you know what you are dealing with, in case you want to try the code. At the very heart… is a so called message handler. This is a Web API extensibility point that gets to see (and modify if needed) all incoming and outgoing requests. Handlers run after the conversion from host to Web API, which means that handler code deals with HttpRequestMessage and HttpResponseMessage. See Pedro’s post for more information on the processing pipeline. This handler requires a configuration object for initialization. Currently this is very simple, it contains: Settings for the various authentication and credential types Settings for claims transformation Ability to block identity inheritance from host The most important part here is the credential type support, but I will come back to that later. The logic of the message handler is simple: Look at the incoming request. If the request contains an authorization header, try to authenticate the client. If this is successful, create a claims principal and populate the usual places. If not, return a 401 status code and set the Www-Authenticate header. Look at outgoing response, if the status code is 401, set the Www-Authenticate header. Credential type support Under the covers I use the WIF security token handler infrastructure to validate credentials and to turn security tokens into claims. The idea is simple: an authorization header consists of two pieces: the schema and the actual “token”. My configuration object allows to associate a security token handler with a scheme. This way you only need to implement support for a specific credential type, and map that to the incoming scheme value. The current version supports HTTP Basic Authentication as well as SAML and SWT tokens. (I needed to do some surgery on the standard security token handlers, since WIF does not directly support string-ified tokens. The next version of .NET will fix that, and the code should become simpler then). You can e.g. use this code to hook up a username/password handler to the Basic scheme (the default scheme name for Basic Authentication). config.Handler.AddBasicAuthenticationHandler( (username, password) => username == password); You simply have to provide a password validation function which could of course point back to your existing password library or e.g. membership. The following code maps a token handler for Simple Web Tokens (SWT) to the Bearer scheme (the currently favoured scheme name for OAuth2). 
You simply have to specify the issuer name, realm and shared signature key:

config.Handler.AddSimpleWebTokenHandler(
    "Bearer",
    "http://identity.thinktecture.com/trust",
    Constants.Realm,
    "Dc9Mpi3jaaaUpBQpa/4R7XtUsa3D/ALSjTVvK8IUZbg=");

For certain integration scenarios it is very useful if your Web API can consume SAML tokens. This is also easily accomplished. The following code uses the standard WIF API to configure the usual SAMLisms like issuer, audience, service certificate and certificate validation. Both SAML 1.1 and 2.0 are supported.

var registry = new ConfigurationBasedIssuerNameRegistry();
registry.AddTrustedIssuer(
    "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db",
    "ADFS");

var adfsConfig = new SecurityTokenHandlerConfiguration();
adfsConfig.AudienceRestriction.AllowedAudienceUris.Add(
    new Uri(Constants.Realm));
adfsConfig.IssuerNameRegistry = registry;
adfsConfig.CertificateValidator = X509CertificateValidator.None;

// token decryption (read from configuration section)
adfsConfig.ServiceTokenResolver =
    FederatedAuthentication.ServiceConfiguration.CreateAggregateTokenResolver();

config.Handler.AddSaml11SecurityTokenHandler("SAML", adfsConfig);

Claims Transformation
After successful authentication, if configured, the standard WIF ClaimsAuthenticationManager is called to run claims transformation and validation logic. This stage is used to transform the “technical” claims from the security token into application claims. You can either have a separate transformation logic, or share one, e.g. with the containing web application. That’s just a matter of configuration.

Adding the authentication handler to a Web API application
In the spirit of Web API this is done in code, e.g. global.asax for web hosting:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    ConfigureApis(GlobalConfiguration.Configuration);
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
    BundleTable.Bundles.RegisterTemplateBundles();
}

private void ConfigureApis(HttpConfiguration configuration)
{
    configuration.MessageHandlers.Add(
        new AuthenticationHandler(ConfigureAuthentication()));
}

private AuthenticationConfiguration ConfigureAuthentication()
{
    var config = new AuthenticationConfiguration
    {
        // sample claims transformation for consultants sample, comment out to see raw claims
        ClaimsAuthenticationManager = new ApiClaimsTransformer(),

        // value of the www-authenticate header,
        // if not set, the first scheme added to the handler collection is used
        DefaultAuthenticationScheme = "Basic"
    };

    // add token handlers - see above
    return config;
}

You can find the full source code and some samples here. In the next post I will describe some of the samples in the download, and then move on to authorization. HTH
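To make the handler pipeline described above concrete, here is a minimal sketch of the same pattern reduced to Basic authentication with a caller-supplied validation delegate. It is only an illustration of the message-handler approach, not the actual implementation from the download:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Principal;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public class BasicAuthenticationSketchHandler : DelegatingHandler
{
    private readonly Func<string, string, bool> _validator;

    public BasicAuthenticationSketchHandler(Func<string, string, bool> validator)
    {
        _validator = validator;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var header = request.Headers.Authorization;
        if (header != null && header.Scheme == "Basic" && header.Parameter != null)
        {
            // Decode "username:password" and validate it.
            var raw = Encoding.UTF8.GetString(Convert.FromBase64String(header.Parameter));
            var parts = raw.Split(new[] { ':' }, 2);
            if (parts.Length == 2 && _validator(parts[0], parts[1]))
            {
                // Populate the "usual places" (a full implementation would use a claims
                // principal and also set HttpContext.Current.User when web-hosted).
                Thread.CurrentPrincipal = new GenericPrincipal(
                    new GenericIdentity(parts[0], "Basic"), new string[0]);
            }
        }

        return base.SendAsync(request, cancellationToken).ContinueWith(task =>
        {
            var response = task.Result;
            if (response.StatusCode == HttpStatusCode.Unauthorized)
            {
                // Challenge the client with the expected scheme.
                response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic"));
            }
            return response;
        }, cancellationToken);
    }
}

Such a handler would be registered the same way as the AuthenticationHandler shown above, e.g. configuration.MessageHandlers.Add(new BasicAuthenticationSketchHandler((u, p) => u == p));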

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspxThis week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts the local desktops they work on are not ideal for running most of the labs as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area: MSDN software can now be used on Azure VM You don't pay for Azure VM's when they are no longer used  Since the support team only have limited experience of Windows Azure and the organisation also have an Enterprise Agreement we decided it would be best value for money to spin up a training lab in a subscription on the EA and then we can turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a powershell script which would setup my training lab. The aim was to create 18 VM's which would be based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below The Start & Variables The below text will setup the powershell environment and some variables which I will use elsewhere in the script. It will also import the Azure Powershell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & Powershell so already know how to do this. Set-ExecutionPolicy Unrestrictedcls $startTime = get-dateImport-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"# Azure Publisher Settings $azurePublisherSettings = '<Your settings file>.publishsettings'  # Subscription Details $subscriptionName = "<Your subscription name>" $defaultStorageAccount = "<Your default storage account>"  # Affinity Group Details $affinityGroup = '<Your affinity group>' $dataCenter = 'West Europe' # From Get-AzureLocation  # VM Details $baseVMName = 'TRN' $adminUserName = '<Your admin username>' $password = '<Your admin password>' $size = 'Medium' $vmTemplate = '<The name of your VM template image>' $rdpFilePath = '<File path to save RDP files to>' $machineSettingsPath = '<File path to save machine info to>'    Functions In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. 
This will do the following actions: If the VM already exists it will be deleted Create the cloud service Create the VM from the template I have created Add an endpoint so we can RDP to them all over the same port Download the RDP file so there is a short cut the trainees can easily access the machine via Write settings for the machine to a log file  function CreateVM($machineNo) { # Specify a name for the new VM $machineName = "$baseVMName-$machineNo" Write-Host "Creating VM: $machineName"       # Get the Azure VM Image      $myImage = Get-AzureVMImage $vmTemplate   #If the VM already exists delete and re-create it $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName if($existingVm -ne $null) { Write-Host "VM already exists so deleting it" Remove-AzureVM -Name $machineName -ServiceName $serviceName }   "Creating Service" $serviceName = "bupa-azure-train-$machineName" Remove-AzureService -Force -ServiceName $serviceName New-AzureService -Location $dataCenter -ServiceName $serviceName   Write-Host "Creating VM: $machineName" New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password  Write-Host "Updating the RDP endpoint for $machineName" Get-AzureVM -name $machineName -ServiceName $serviceName ` | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 ` | Update-AzureVM    Write-Host "Get the RDP File for machine $machineName" $machineRDPFilePath = "$rdpFilePath\$machineName.rdp" Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"   WriteMachineSettings "$machineName" "$serviceName" }    The delete machine settings function is used to delete the log file before we start re-running the process.  function DeleteMachineSettings() { Write-Host "Deleting the machine settings output file" [System.IO.File]::Delete("$machineSettingsPath"); }    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VM's to our infrastructure team to be able to configure access to all of the VM's    function WriteMachineSettings([string]$vmName, [string]$vmServiceName) { Write-Host "Writing to the machine settings output file"   $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP   $sb = new-object System.Text.StringBuilder $sb.Append("Service Name: "); $sb.Append($vm.ServiceName); $sb.Append(", "); $sb.Append("VM: "); $sb.Append($vm.Name); $sb.Append(", "); $sb.Append("RDP Public Port: "); $sb.Append($vmEndpoint.Port); $sb.Append(", "); $sb.Append("Public DNS: "); $sb.Append($vmEndpoint.Vip); $sb.AppendLine(""); [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());  } # end functions    Rest of Script In the rest of the script it is really just the bit that orchestrates the actions we want to happen. 
It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VM's  Import-AzurePublishSettingsFile $azurePublisherSettings Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount Select-AzureSubscription -SubscriptionName $subscriptionName  DeleteMachineSettings    "Starting creating Bupa International Azure Training Lab" $numberOfVMs = 16  for ($index=1; $index -le $numberOfVMs; $index++) { $vmNo = "$index" CreateVM($vmNo); }    "Finished creating Bupa International Azure Training Lab" # Give it a Minute Start-Sleep -s 60  $endTime = get-date "Script run time " + ($endTime - $startTime)    Conclusion As you can see there is nothing too fancy about this script but in our case of creating a small isolated training lab which is not connected to our corporate network then we can easily use this to provision the lab. Im sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure about the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the costing of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.

    Read the article

  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks began at 'Quiz.rb', and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to using Rails Console to have the @quiz instance variable via self.some_method do all the heavy lifting to retrieve every data value for the game's business logic; resulting in large extended join queries that "dance" all around the Database schema. The Quiz.rb Model that Smells: class Quiz < ActiveRecord::Base belongs_to :user has_many :attempts, dependent: :destroy before_save :check_answer before_save :update_user_mission_and_stage accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true #Checks every answer within each quiz, adding +1 for each correct answer #within a stage quiz, and -1 for each incorrect answer def check_answer stage_score = 0 self.attempts.each do |attempt| if attempt.answer.correct? == true stage_score += 1 elsif attempt.answer.correct == false stage_score - 1 end end stage_score end def winner return true end def update_user_mission_and_stage ####### #Step 1: Checks if UserMission exists, finds or creates one. #if no UserMission for the current mission exists, creates a new UserMission if self.user_has_mission? == false @user_mission = UserMission.new(user_id: self.user.id, mission_id: self.current_stage.mission_id, available: true) @user_mission.save else @user_mission = self.find_user_mission end ####### #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry if self.user_has_stage? @user_mission.save return true else ####### ##Step 3: if step 2 returns false: ##Initiates UserStage creation instructions #checks for winner (winner actions need to be defined) if they complete last stage of last mission for a given orientation if self.passed? && self.is_last_stage? && self.is_last_mission? create_user_stage_and_update_user_mission self.winner #NOTE: The rest are the same, but specify conditions that are available to add badges or other actions upon those conditions occurring: ##if user completes first stage of a mission elsif self.passed? && self.is_first_stage? && self.is_first_mission? create_user_stage_and_update_user_mission #creates user badge for finishing first stage of first mission self.user.add_badge(5) self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage") #If user completes last stage of a given mission, creates a new UserMission elsif self.passed? && self.is_last_stage? && self.is_first_mission? 
create_user_stage_and_update_user_mission #creates user badge for finishing first mission self.user.add_badge(6) self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission") elsif self.passed? create_user_stage_and_update_user_mission else self.passed? == false return true end end end #Creates a new UserStage record in the database for a successful Quiz question passing def create_user_stage_and_update_user_mission @nu_stage = @user_mission.user_stages.new(user_id: self.user.id, stage_id: self.current_stage.id) @nu_stage.save @user_mission.save self.user.add_points(50) end #Boolean that defines passing a stage as answering every question in that stage correct def passed? self.check_answer >= self.number_of_questions end #Returns the number of questions asked for that stage's quiz def number_of_questions self.attempts.first.answer.question.stage.questions.count end #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz def current_stage self.attempts.first.answer.question.stage end #Gives back the position of the stage relative to its mission. def stage_position self.attempts.first.answer.question.stage.position end #will find the user_mission for the current user and stage if it exists def find_user_mission self.user.user_missions.find_by_mission_id(self.current_stage.mission_id) end #Returns true if quiz was for the last stage within that mission #helpful for triggering actions related to a user completing a mission def is_last_stage? self.stage_position == self.current_stage.mission.stages.last.position end #Returns true if quiz was for the first stage within that mission #helpful for triggering actions related to a user completing a mission def is_first_stage? self.stage_position == self.current_stage.mission.stages_ordered.first.position end #Returns true if current user has a UserMission for the current stage def user_has_mission? self.user.missions.ids.include?(self.current_stage.mission.id) end #Returns true if current user has a UserStage for the current stage def user_has_stage? self.user.stages.include?(self.current_stage) end #Returns true if current user is on the last mission based on position within a given orientation def is_first_mission? self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position end #Returns true if current user is on the first stage & mission of a given orientation def is_last_mission? self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position end end My Question Currently my Rails server takes roughly 500ms to 1 sec to process single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad Database ERD design. What does a better solution look like? And specifically: Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution? How should update_user_mission_and_stage be refactored to follow best practices? Relevant Code for Reference: quizzes_controller.rb w/ Controller Route Initiating Callback: class QuizzesController < ApplicationController before_action :find_stage_and_mission before_action :find_orientation before_action :find_question def show end def create @user = current_user @quiz = current_user.quizzes.new(quiz_params) if @quiz.save if @quiz.passed? if @mission.next_mission.nil? && @stage.next_stage.nil? 
redirect_to root_path, notice: "Congratulations, you have finished the last mission!" elsif @stage.next_stage.nil? redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission" else redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!" end else redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again." end else redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the admministrator." end @questions = @stage.questions.all end private def find_stage_and_mission @stage = Stage.find(params[:stage_id]) @mission = @stage.mission end def find_question @question = @stage.questions.find_by_id params[:id] end def quiz_params params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]}) end def find_orientation @orientation = @mission.orientation @missions = @orientation.missions.by_position end end Overview of Relevant ERD Database Relationships: Mission - Stage - Question - Answer - Attempt <- Quiz <- User Mission - UserMission <- User Stage - UserStage <- User Other Models: Mission.rb class Mission < ActiveRecord::Base belongs_to :orientation has_many :stages has_many :user_missions, dependent: :destroy has_many :users, through: :user_missions #SCOPES scope :by_position, -> {order(position: :asc)} def stages_ordered stages.order(:position) end def next_mission self.orientation.missions.find_by_position(self.position.next) end def first_stage next_mission.stages_ordered.first end end Stage.rb: class Stage < ActiveRecord::Base belongs_to :mission has_many :questions, dependent: :destroy has_many :user_stages, dependent: :destroy has_many :users, through: :user_stages accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true def next_stage self.mission.stages.find_by_position(self.position.next) end end Question.rb class Question < ActiveRecord::Base belongs_to :stage has_many :answers, dependent: :destroy accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true end Answer.rb: class Answer < ActiveRecord::Base belongs_to :question has_many :attempts, dependent: :destroy end Attempt.rb: class Attempt < ActiveRecord::Base belongs_to :answer belongs_to :quiz end User.rb: class User < ActiveRecord::Base belongs_to :school has_many :activity_logs has_many :user_missions, dependent: :destroy has_many :missions, through: :user_missions has_many :user_stages, dependent: :destroy has_many :stages, through: :user_stages has_many :orientations, through: :school has_many :quizzes, dependent: :destroy has_many :attempts, through: :quizzes def latest_stage_position self.user_missions.last.user_stages.last.stage.position end end UserMission.rb class UserMission < ActiveRecord::Base belongs_to :user belongs_to :mission has_many :user_stages, dependent: :destroy end UserStage.rb class UserStage < ActiveRecord::Base belongs_to :user belongs_to :stage belongs_to :user_mission end

    Read the article

  • Using Apache FOP from .NET level

    - by Lukasz Kurylo
In one of my previous posts I was talking about FO.NET, which I was using to generate pdf documents from XSL-FO. FO.NET is one of the .NET ports of Apache FOP. Unfortunately it is no longer maintained. I knew that when I decided to use it, because there is a lack of available (free) choices for .NET to render a pdf from XSL-FO. I hoped that in this implementation I would find everything I needed to create a pdf file with my really simple requirements. FO.NET is a port of some old version of Apache FOP, and I found really quickly that it lacks some features that I needed, like dotted borders, double borders or support for margins. So I started looking for some alternatives. I didn’t try NFOP, another port of Apache FOP, because I found something I think is much better: the IKVM.NET project.   IKVM.NET is not a pdf renderer. So what is it? From the project site:   IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. It includes the following components: a Java Virtual Machine implemented in .NET, a .NET implementation of the Java class libraries, and tools that enable Java and .NET interoperability.   In the simplest form IKVM.NET allows you to use a Java code library in C# code and vice versa.   I tried to use Apache FOP, in my opinion the best open source XSL-FO –> pdf renderer written in Java, from my C# project using IKVM.NET, and it worked like a charm. In the rest of the post I want to show how to prepare a .NET *.dll class library from the Apache FOP *.jar’s with IKVM.NET and generate a simple Hello world pdf document.   To start playing with IKVM.NET and Apache FOP we need to download their packages: IKVM.NET Apache FOP and then unpack them.   From the FOP directory copy all the *.jar files from the lib and build directories to some location, e.g. d:\fop. The second step is to build the *.dll library from these files. On the console execute the following command:   ikvmc -target:library -out:d:\fop\fop.dll -recurse:d:\fop   The ikvmc tool is located in the bin subdirectory where you unpacked IKVM.NET. You must execute this command from that directory, add that path to the global PATH variable, or specify the full path to the bin subdirectory.   If no errors occurred during this process, the fop.dll library should be created. Right now we can create a simple project to test if we can create a pdf file.   So let’s create a simple console application and add references to fop.dll and the IKVM dll’s: IKVM.OpenJDK.Core and IKVM.OpenJDK.XML.API.
Full code to generate a pdf file from an XSL-FO template:

static void Main(string[] args)
{
    // initialize the Apache FOP
    FopFactory fopFactory = FopFactory.newInstance();

    // in this stream we will get the generated pdf file
    OutputStream o = new DotNetOutputMemoryStream();
    try
    {
        Fop fop = fopFactory.newFop("application/pdf", o);
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer();

        // read the template from disc
        Source src = new StreamSource(new File("HelloWorld.fo"));
        Result res = new SAXResult(fop.getDefaultHandler());
        transformer.transform(src, res);
    }
    finally
    {
        o.close();
    }
    using (System.IO.FileStream fs = System.IO.File.Create("HelloWorld.pdf"))
    {
        // write the generated pdf file from the .NET MemoryStream to disc
        // (ToArray is used rather than GetBuffer so unused buffer capacity is not written)
        var data = ((DotNetOutputMemoryStream)o).Stream.ToArray();
        fs.Write(data, 0, data.Length);
    }
    Process.Start("HelloWorld.pdf");
    System.Console.ReadLine();
}

Apache FOP by default uses Java’s Xalan to work with XML files. I didn’t find a way to replace this piece of code with an equivalent from the .NET standard library. If any errors or warnings occur while generating the pdf file, they will be shown on the console; that’s why I inserted the last line in the sample above. The DotNetOutputMemoryStream is my wrapper for the Java OutputStream. I created it to be able to exchange data between the .NET and Java objects. Its implementation:

class DotNetOutputMemoryStream : OutputStream
{
    private System.IO.MemoryStream ms = new System.IO.MemoryStream();
    public System.IO.MemoryStream Stream
    {
        get
        {
            return ms;
        }
    }
    public override void write(int i)
    {
        ms.WriteByte((byte)i);
    }
    public override void write(byte[] b, int off, int len)
    {
        ms.Write(b, off, len);
    }
    public override void write(byte[] b)
    {
        ms.Write(b, 0, b.Length);
    }
    public override void close()
    {
        ms.Close();
    }
    public override void flush()
    {
        ms.Flush();
    }
}

The last thing we need is the HelloWorld.fo template.

<?xml version="1.0" encoding="utf-8"?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="simple"
                  page-height="29.7cm"
                  page-width="21cm"
                  margin-top="1.8cm"
                  margin-bottom="0.8cm"
                  margin-left="1.6cm"
                  margin-right="1.2cm">
      <fo:region-body margin-top="3cm"/>
      <fo:region-before extent="3cm"/>
      <fo:region-after extent="1.5cm"/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="simple">
    <fo:flow flow-name="xsl-region-body">
      <fo:block font-size="18pt" color="black" text-align="center">
        Hello, World!
      </fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>

I’m not going to explain how this template is created, because that will be covered in upcoming posts. The generated pdf file should simply show the centered "Hello, World!" text.
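One small design note on the sample above: because fop.dll is accompanied by the full IKVM class library, you can also let FOP write straight to disc through a java.io stream and drop both the MemoryStream buffering and the wrapper class. A minimal sketch of that variation, using the same references as above (treat it as an untested outline, not part of the original post):

using java.io;
using javax.xml.transform;
using javax.xml.transform.sax;
using javax.xml.transform.stream;
using org.apache.fop.apps;

class DirectToFileSample
{
    static void Main(string[] args)
    {
        FopFactory fopFactory = FopFactory.newInstance();

        // Let FOP write directly to the target file through a Java stream.
        OutputStream o = new BufferedOutputStream(new FileOutputStream("HelloWorld.pdf"));
        try
        {
            Fop fop = fopFactory.newFop("application/pdf", o);
            Transformer transformer = TransformerFactory.newInstance().newTransformer();

            Source src = new StreamSource(new File("HelloWorld.fo"));
            Result res = new SAXResult(fop.getDefaultHandler());
            transformer.transform(src, res);
        }
        finally
        {
            o.close(); // flushes the buffer and releases the file handle
        }

        System.Diagnostics.Process.Start("HelloWorld.pdf");
    }
}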

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can: View the menu items that the users can access through the Oracle Identity Manager Administration Web interface. Assign users to roles. Assign a role to a parent role. Designate status to the users so that they can specify defined responses for process tasks. Modify permissions on data objects. Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoke members from the current role and so on. Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role. Assign or remove membership rules to or from the role. These rules determine which users can be assigned/removed as direct membership to/from the role.

In this post, I will share some examples of role management with the Oracle Identity Manager API. To perform role operations you can use the Thor.API.Operations.tcGroupOperationsIntf interface:

tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

Assign a user to a role:

public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
    Map<String, String> filter = new HashMap<String, String>();
    filter.put("Groups.Role Name", roleName);
    tcResultSet role = service.findGroups(filter);
    String groupKey = role.getStringValue("Groups.Key");
    service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
}

Revoke a user from a role:

public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
    Map<String, String> filter = new HashMap<String, String>();
    filter.put("Groups.Role Name", roleName);
    tcResultSet role = service.findGroups(filter);
    String groupKey = role.getStringValue("Groups.Key");
    service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
}

Get all members of a role:

public List<User> getRoleMembers(String roleName) throws Exception {
    List<User> userList = new ArrayList<User>();
    Map<String, String> filter = new HashMap<String, String>();
    filter.put("Groups.Role Name", roleName);
    tcResultSet role = service.findGroups(filter);
    String groupKey = role.getStringValue("Groups.Key");
    tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
    for (int i = 0; i < members.getRowCount(); i++) {
        members.goToRow(i);
        long userKey = members.getLongValue("Users.Key");
        User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
        userList.add(member);
    }
    return userList;
}

About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes: 100 - Continue. 101 - Switching protocols. 2xx - Success These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes: 200 - OK. The client request has succeeded. 201 - Created. 202 - Accepted. 203 - Nonauthoritative information. 204 - No content. 205 - Reset content. 206 - Partial content. 3xx - Redirection These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes: 301 - Moved permanently. 302 - Object moved. 304 - Not modified. 307 - Temporary redirect. 4xx - Client error These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes: 400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error: 400.1 - Invalid Destination Header. 400.2 - Invalid Depth Header. 400.3 - Invalid If Header. 400.4 - Invalid Overwrite Header. 400.5 - Invalid Translate Header. 400.6 - Invalid Request Body. 400.7 - Invalid Content Length. 400.8 - Invalid Timeout. 400.9 - Invalid Lock Token. 401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log: 401.1 - Logon failed. 401.2 - Logon failed due to server configuration. 401.3 - Unauthorized due to ACL on resource. 401.4 - Authorization failed by filter. 401.5 - Authorization failed by ISAPI/CGI application. 403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error: 403.1 - Execute access forbidden. 403.2 - Read access forbidden. 403.3 - Write access forbidden. 403.4 - SSL required. 403.5 - SSL 128 required. 403.6 - IP address rejected. 403.7 - Client certificate required. 403.8 - Site access denied. 403.9 - Forbidden: Too many clients are trying to connect to the Web server. 403.10 - Forbidden: Web server is configured to deny Execute access. 403.11 - Forbidden: Password has been changed. 403.12 - Mapper denied access. 403.13 - Client certificate revoked. 403.14 - Directory listing denied. 403.15 - Forbidden: Client access licenses have exceeded limits on the Web server. 403.16 - Client certificate is untrusted or invalid. 403.17 - Client certificate has expired or is not yet valid. 403.18 - Cannot execute requested URL in the current application pool. 403.19 - Cannot execute CGI applications for the client in this application pool. 403.20 - Forbidden: Passport logon failed. 403.21 - Forbidden: Source access denied. 403.22 - Forbidden: Infinite depth is denied. 404 - Not found. 
IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error: 404.0 - Not found. 404.1 - Site Not Found. 404.2 - ISAPI or CGI restriction. 404.3 - MIME type restriction. 404.4 - No handler configured. 404.5 - Denied by request filtering configuration. 404.6 - Verb denied. 404.7 - File extension denied. 404.8 - Hidden namespace. 404.9 - File attribute hidden. 404.10 - Request header too long. 404.11 - Request contains double escape sequence. 404.12 - Request contains high-bit characters. 404.13 - Content length too large. 404.14 - Request URL too long. 404.15 - Query string too long. 404.16 - DAV request sent to the static file handler. 404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping. 404.18 - Querystring sequence denied. 404.19 - Denied by filtering rule. 405 - Method Not Allowed. 406 - Client browser does not accept the MIME type of the requested page. 408 - Request timed out. 412 - Precondition failed. 5xx - Server error These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes: 500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error: 500.0 - Module or ISAPI error occurred. 500.11 - Application is shutting down on the Web server. 500.12 - Application is busy restarting on the Web server. 500.13 - Web server is too busy. 500.15 - Direct requests for Global.asax are not allowed. 500.19 - Configuration data is invalid. 500.21 - Module not recognized. 500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode. 500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode. 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. 500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. Note Here is where the distributed rules configuration is read for both inbound and outbound rules. 500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. Note Here is where the global rules configuration is read. 500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred. 500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated. 500.100 - Internal ASP error. 501 - Header values specify a configuration that is not implemented. 502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error: 502.1 - CGI application timeout. 502.2 - Bad gateway. 503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error: 503.0 - Application pool unavailable. 503.2 - Concurrent request limit exceeded.
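If you need to surface one of these specific substatus codes from your own code, ASP.NET running in the IIS 7.0 integrated pipeline exposes them directly through HttpResponse.SubStatusCode. A minimal sketch (the hard-coded IP check is just a made-up example, not an IIS feature) that answers with 403.6 - IP address rejected:

using System.Web;

public class IpFilterHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.UserHostAddress != "10.0.0.1")
        {
            context.Response.StatusCode = 403;
            context.Response.SubStatusCode = 6;             // logged by IIS as 403.6
            context.Response.TrySkipIisCustomErrors = true; // keep this response body
            context.Response.Write("403.6 - IP address rejected.");
            return;
        }

        context.Response.Write("OK");
    }
}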

    Read the article

  • MVC Portable Area Modules *Without* MasterPages

    - by Steve Michelotti
    Portable Areas from MvcContrib provide a great way to build modular and composite applications on top of MVC. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. I’ve blogged about Portable Areas in the past including this post here which talks about embedding resources and you can read more of an intro to Portable Areas here. As great as Portable Areas are, the question that seems to come up the most is: what about MasterPages? MasterPages seems to be the one thing that doesn’t work elegantly with portable areas because you specify the MasterPage in the @Page directive and it won’t use the same mechanism of the view engine so you can’t just embed them as resources. This means that you end up referencing a MasterPage that exists in the host application but not in your portable area. If you name the ContentPlaceHolderId’s correctly, it will work – but it all seems a little fragile. Ultimately, what I want is to be able to build a portable area as a module which has no knowledge of the host application. I want to be able to invoke the module by a full route on the user’s browser and it gets invoked and “automatically appears” inside the application’s visual chrome just like a MasterPage. So how could we accomplish this with portable areas? With this question in mind, I looked around at what other people are doing to address similar problems. Specifically, I immediately looked at how the Orchard team is handling this and I found it very compelling. Basically Orchard has its own custom layout/theme framework (utilizing a custom view engine) that allows you to build your module without any regard to the host. You simply decorate your controller with the [Themed] attribute and it will render with the outer chrome around it: 1: [Themed] 2: public class HomeController : Controller Here is the slide from the Orchard talk at this year MIX conference which shows how it conceptually works:   It’s pretty cool stuff.  So I figure, it must not be too difficult to incorporate this into the portable areas view engine as an optional piece of functionality. In fact, I’ll even simplify it a little – rather than have 1) Document.aspx, 2) Layout.ascx, and 3) <view>.ascx (as shown in the picture above); I’ll just have the outer page be “Chrome.aspx” and then the specific view in question. The Chrome.aspx not only takes the place of the MasterPage, but now since we’re no longer constrained by the MasterPage infrastructure, we have the choice of the Chrome.aspx living in the host or inside the portable areas as another embedded resource! Disclaimer: credit where credit is due – much of the code from this post is me re-purposing the Orchard code to suit my needs. To avoid confusion with Orchard, I’m going to refer to my implementation (which will be based on theirs) as a Chrome rather than a Theme. 
The first step I’ll take is to create a ChromedAttribute which adds a flag to the current HttpContext to indicate that the controller designated Chromed like this: 1: [Chromed] 2: public class HomeController : Controller The attribute itself is an MVC ActionFilter attribute: 1: public class ChromedAttribute : ActionFilterAttribute 2: { 3: public override void OnActionExecuting(ActionExecutingContext filterContext) 4: { 5: var chromedAttribute = GetChromedAttribute(filterContext.ActionDescriptor); 6: if (chromedAttribute != null) 7: { 8: filterContext.HttpContext.Items[typeof(ChromedAttribute)] = null; 9: } 10: } 11:   12: public static bool IsApplied(RequestContext context) 13: { 14: return context.HttpContext.Items.Contains(typeof(ChromedAttribute)); 15: } 16:   17: private static ChromedAttribute GetChromedAttribute(ActionDescriptor descriptor) 18: { 19: return descriptor.GetCustomAttributes(typeof(ChromedAttribute), true) 20: .Concat(descriptor.ControllerDescriptor.GetCustomAttributes(typeof(ChromedAttribute), true)) 21: .OfType<ChromedAttribute>() 22: .FirstOrDefault(); 23: } 24: } With that in place, we only have to override the FindView() method of the custom view engine with these 6 lines of code: 1: public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache) 2: { 3: if (ChromedAttribute.IsApplied(controllerContext.RequestContext)) 4: { 5: var bodyView = ViewEngines.Engines.FindPartialView(controllerContext, viewName); 6: var documentView = ViewEngines.Engines.FindPartialView(controllerContext, "Chrome"); 7: var chromeView = new ChromeView(bodyView, documentView); 8: return new ViewEngineResult(chromeView, this); 9: } 10:   11: // Just execute normally without applying Chromed View Engine 12: return base.FindView(controllerContext, viewName, masterName, useCache); 13: } If the view engine finds the [Chromed] attribute, it will invoke it’s own process – otherwise, it’ll just defer to the normal web forms view engine (with masterpages). The ChromeView’s primary job is to independently set the BodyContent on the view context so that it can be rendered at the appropriate place: 1: public class ChromeView : IView 2: { 3: private ViewEngineResult bodyView; 4: private ViewEngineResult documentView; 5:   6: public ChromeView(ViewEngineResult bodyView, ViewEngineResult documentView) 7: { 8: this.bodyView = bodyView; 9: this.documentView = documentView; 10: } 11:   12: public void Render(ViewContext viewContext, System.IO.TextWriter writer) 13: { 14: ChromeViewContext chromeViewContext = ChromeViewContext.From(viewContext); 15:   16: // First render the Body view to the BodyContent 17: using (var bodyViewWriter = new StringWriter()) 18: { 19: var bodyViewContext = new ViewContext(viewContext, bodyView.View, viewContext.ViewData, viewContext.TempData, bodyViewWriter); 20: this.bodyView.View.Render(bodyViewContext, bodyViewWriter); 21: chromeViewContext.BodyContent = bodyViewWriter.ToString(); 22: } 23: // Now render the Document view 24: this.documentView.View.Render(viewContext, writer); 25: } 26: } The ChromeViewContext (code excluded here) mainly just has a string property for the “BodyContent” – but it also makes sure to put itself in the HttpContext so it’s available. 
Finally, we created a little extension method so the module’s view can be rendered in the appropriate place: 1: public static void RenderBody(this HtmlHelper htmlHelper) 2: { 3: ChromeViewContext chromeViewContext = ChromeViewContext.From(htmlHelper.ViewContext); 4: htmlHelper.ViewContext.Writer.Write(chromeViewContext.BodyContent); 5: } At this point, the other thing left is to decide how we want to implement the Chrome.aspx page. One approach is the copy/paste the HTML from the typical Site.Master and change the main content placeholder to use the HTML helper above – this way, there are no MasterPages anywhere. Alternatively, we could even have Chrome.aspx utilize the MasterPage if we wanted (e.g., in the case where some pages are Chromed and some pages want to use traditional MasterPage): 1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %> 2: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 3: <% Html.RenderBody(); %> 4: </asp:Content> At this point, it’s all academic. I can create a controller like this: 1: [Chromed] 2: public class WidgetController : Controller 3: { 4: public ActionResult Index() 5: { 6: return View(); 7: } 8: } Then I’ll just create Index.ascx (a partial view) and put in the text “Inside my widget”. Now when I run the app, I can request the full route (notice the controller name of “widget” in the address bar below) and the HTML from my Index.ascx will just appear where it is supposed to.   This means no more warnings for missing MasterPages and no more need for your module to have knowledge of the host’s MasterPage placeholders. You have the option of using the Chrome.aspx in the host or providing your own while embedding it as an embedded resource itself. I’m curious to know what people think of this approach. The code above was done with my own local copy of MvcContrib so it’s not currently something you can download. At this point, these are just my initial thoughts – just incorporating some ideas for Orchard into non-Orchard apps to enable building modular/composite apps more easily. Additionally, on the flip side, I still believe that Portable Areas have potential as the module packaging story for Orchard itself.   What do you think?
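For completeness, the ChromeViewContext class whose code is excluded above is described as holding the BodyContent string and registering itself in the HttpContext. A minimal sketch consistent with that description (an assumption, not the author's actual implementation):

using System.Web.Mvc;

public class ChromeViewContext
{
    public string BodyContent { get; set; }

    // Returns the instance for the current request, creating and storing one in
    // HttpContext.Items on first use so ChromeView and RenderBody() share it.
    public static ChromeViewContext From(ViewContext viewContext)
    {
        var items = viewContext.HttpContext.Items;
        var current = items[typeof(ChromeViewContext)] as ChromeViewContext;
        if (current == null)
        {
            current = new ChromeViewContext();
            items[typeof(ChromeViewContext)] = current;
        }
        return current;
    }
}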

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
    In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design since it results in denormalized schemas that can become a major burden in the long run. In today’s blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy and we'll see that TPT doesn’t expose us to this problem. Table per Type (TPT)Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties—including abstract classes—has its own table. The table for subclasses contains columns only for each noninherited property (each property declared by the subclass itself) along with a primary key that is also a foreign key of the base class table. This approach is shown in the following figure: For example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table. TPT Advantages The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modify/add one table). Integrity constraint definition are also straightforward (note how CardType in CreditCards table is now a non-nullable column). Another much more important advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass. 
Implement TPT in EF Code First We can create a TPT mapping simply by placing Table attribute on the subclasses to specify the mapped table name (Table attribute is a new data annotation and has been added to System.ComponentModel.DataAnnotations namespace in CTP5): public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } [Table("BankAccounts")] public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } [Table("CreditCards")] public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } If you prefer fluent API, then you can create a TPT mapping by using ToTable() method: protected override void OnModelCreating(ModelBuilder modelBuilder) {     modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");     modelBuilder.Entity<CreditCard>().ToTable("CreditCards"); } Generated SQL For QueriesLet’s take an example of a simple non-polymorphic query that returns a list of all the BankAccounts: var query = from b in context.BillingDetails.OfType<BankAccount>() select b; Executing this query (by invoking ToList() method) results in the following SQL statements being sent to the database (on the bottom, you can also see the result of executing the generated query in SQL Server Management Studio): Now, let’s take an example of a very simple polymorphic query that requests all the BillingDetails which includes both BankAccount and CreditCard types: projects some properties out of the base class BillingDetail, without querying for anything from any of the subclasses: var query = from b in context.BillingDetails             select new { b.BillingDetailId, b.Number, b.Owner }; -- var query = from b in context.BillingDetails select b; This LINQ query seems even more simple than the previous one but the resulting SQL query is not as simple as you might expect: -- As you can see, EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts so it can determine the concrete subclass for a particular row of the BillingDetails table. Also the SQL CASE statements that you see in the beginning of the query is just to ensure columns that are irrelevant for a particular row have NULL values in the returning flattened table. (e.g. BankName for a row that represents a CreditCard type) TPT ConsiderationsEven though this mapping strategy is deceptively simple, the experience shows that performance can be unacceptable for complex class hierarchies because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand— even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application (For ad hoc reporting, database views provide a way to offset the complexity of the TPT strategy. A view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model.) SummaryIn this post we learned about Table per Type as the second inheritance mapping in our series. So far, the strategies we’ve discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed). 
This situation changes with the Table per Concrete Type (TPC) that we will discuss in the next post. References ADO.NET team blog Java Persistence with Hibernate book
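As a closing usage sketch for the TPT mapping above (names and values come from the running example; a configured connection string is assumed), persisting a single CreditCard shows the two-table split in action: EF issues one INSERT into BillingDetails for the inherited properties and one INSERT into CreditCards that shares the same primary key.

using (var context = new InheritanceMappingContext())
{
    context.BillingDetails.Add(new CreditCard
    {
        Owner = "John Doe",
        Number = "1234567890123456",
        CardType = 1,
        ExpiryMonth = "01",
        ExpiryYear = "2013"
    });

    // One SaveChanges, two INSERTs under TPT: a BillingDetails row and a
    // CreditCards row sharing the same BillingDetailId.
    context.SaveChanges();
}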

    Read the article

  • RIF PRD: Presentation syntax issues

    - by Charles Young
    Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammer) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard. The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows: ·      Parenthesis mismatch – the wrong number of parentheses are used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (‘the If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parenthesis, this leads to orphanage of UnknownStatusRule from the entire Document. ·      Invalid use of parenthesis in ‘Forall’ constructs. Parenthesis should not be used to enclose formulae. Removal of the invalid parenthesis gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parenthesis is not actually inconsistent in these two context, but in an If construct it ‘feels’ as if you are enclosing formulae in parenthesis in a LISP-like fashion. In reality, the parenthesis is simply being used to group subordinate syntax elements. The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency. ·      Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions: CURIE          ::= PNAME_LN                  | PNAME_NS PNAME_LN       ::= PNAME_NS PN_LOCAL PNAME_NS       ::= PN_PREFIX? ':' PN_LOCAL       ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)? PN_CHARS       ::= PN_CHARS_U                  | '-' | [0-9] | #x00B7                  | [#x0300-#x036F] | [#x203F-#x2040] PN_CHARS_U     ::= PN_CHARS_BASE                  | '_' PN_CHARS_BASE ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6]                  | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x037F-#x1FFF]                  | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF]                  | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD]                  | [#x10000-#xEFFFF] PN_PREFIX      ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)? The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! However, on (much) deeper inspection, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch! 
This is so completely ambiguous that it surely calls into question the whole CURIE specification.   In any case, RIF does not allow local names without a prefix. ·      Missing ‘External’ specifiers for built-in functions and predicates.  The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once on the predicate in UnknownStatusRule. External() is required in several other places. ·      The List used on the LHS of UnknownStatusRule is comma-delimited. This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited. ·      Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant.   All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kiefer stated in his paper “Rule Interchange Format: The Framework” that: “The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.” ·      The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar.   ·      The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’).   One way to fix this is to amend the PN_LOCAL production as follows: PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))? However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable. ·      I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive. ·      The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary. It seems inconsistent with the rest of the specification and can cause parsing issues. In addition, the Const production exhibits whitespaces issues. The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE. Of course, I am being a little nit-picking about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars.   I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc. A general observation, which repeats a point made above, is that the use of parenthesis in the presentation syntax can feel inconsistent and un-intuitive.   It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parenthesis, to delimit subordinate syntax elements in a similar way to so many programming languages. 
The familiarity of braces would communicate the structure of the syntax more clearly to people like me.  If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’) constructs in action blocks. This use of parenthesis feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax: Document(    Prefix( ex1 <http://example.com/2009/prd2> )    (* ex1:CheckoutRuleset *)  Group rif:forwardChaining (     (* ex1:GoldRule *)    Group 10 (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"Silver"])        (Forall ?shoppingCart such that ?customer[ex1:shoppingCart->?shoppingCart]           (If Exists ?value (And(?shoppingCart[ex1:value->?value]                                  External(pred:numeric-greater-than-or-equal(?value 2000))))            Then Do(Modify(?customer[ex1:status->"Gold"])))))      (* ex1:DiscountRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Or( ?customer[ex1:status->"Silver"]                ?customer[ex1:status->"Gold"])         Then Do ((?s ?customer[ex1:shoppingCart-> ?s])                  (?v ?s[ex1:value->?v])                  Modify(?s [ex1:value->External(func:numeric-multiply (?v 0.95))]))))      (* ex1:NewCustomerAndWidgetRule *)    Group (      Forall ?customer such that And(?customer # ex1:Customer                                     ?customer[ex1:status->"New"] )        (If Exists ?shoppingCart ?item                   (And(?customer[ex1:shoppingCart->?shoppingCart]                        ?shoppingCart[ex1:containsItem->?item]                        ?item # ex1:Widget ) )         Then Do( (?s ?customer[ex1:shoppingCart->?s])                  (?val ?s[ex1:value->?val])                  (?voucher ?customer[ex1:voucher->?voucher])                  Retract(?customer[ex1:voucher->?voucher])                  Retract(?voucher)                  Modify(?s[ex1:value->External(func:numeric-multiply(?val 0.90))]))))      (* ex1:UnknownStatusRule *)    Group (      Forall ?customer such that ?customer # ex1:Customer        (If Not(Exists ?status                       (And(?customer[ex1:status->?status]                            External(pred:list-contains(List("New" "Bronze" "Silver" "Gold") ?status)) )))         Then Do( Execute(act:print(External(func:concat("New customer: " ?customer))))                  Assert(?customer[ex1:status->"New"]))))  ) )   I hope that helps someone out there :-)

    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >