Search Results



  • Silverlight 2.0 - Can't get the text wrapping behaviour that I want

    - by Anthony
    I am having trouble getting Silverlight 2.0 to lay out text exactly how I want: text with line breaks and embedded links, with wrapping, like HTML text in a web page. Here's the closest that I have come:

        <UserControl x:Class="FlowPanelTest.Page"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=Microsoft.Windows.Controls"
            Width="250" Height="300">
          <Border BorderBrush="Black" BorderThickness="2">
            <Controls:WrapPanel>
              <TextBlock x:Name="tb1" TextWrapping="Wrap">Short text. </TextBlock>
              <TextBlock x:Name="tb2" TextWrapping="Wrap">A bit of text. </TextBlock>
              <TextBlock x:Name="tb3" TextWrapping="Wrap">About half of a line of text.</TextBlock>
              <TextBlock x:Name="tb4" TextWrapping="Wrap">More than half a line of longer text.</TextBlock>
              <TextBlock x:Name="tb5" TextWrapping="Wrap">More than one line of text, so it will wrap onto the following line.</TextBlock>
            </Controls:WrapPanel>
          </Border>
        </UserControl>

    The issue is that although tb1 and tb2 go onto the same line, because there is room for both of them in full, tb3 onwards will not start on the same line as the previous block, even though it will wrap onto following lines. I want each text block to start where the previous one ends, on the same line. I want to put click event handlers on some of the text, and I also want paragraph breaks. Essentially I'm trying to work around the lack of FlowDocument and Hyperlink controls in Silverlight 2.0's subset of XAML.

    To answer the questions posed in the answers:

    Why not use runs for the non-clickable text? If I use individual TextBlocks only for the clickable text, then those bits of text still suffer from the wrapping problem illustrated above - and so do the TextBlock just before the link and the TextBlock just after. Essentially all of it. It doesn't look like I have many opportunities for putting multiple runs in the same TextBlock. Dividing the links from the other text with regexes and loops is not the issue at all; the issue is display layout.

    Why not put each word in an individual TextBlock in a WrapPanel? Aside from being an ugly hack, this does not play at all well with line breaks - the layout is incorrect. It would also turn the underline style of linked text into a broken line. Here's an example with each word in its own TextBlock. Try running it, and note that the line break isn't shown in the right place at all:

        <UserControl x:Class="SilverlightApplication2.Page"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=Microsoft.Windows.Controls"
            Width="300" Height="300">
          <Controls:WrapPanel>
            <TextBlock TextWrapping="Wrap">Short1 </TextBlock>
            <TextBlock TextWrapping="Wrap">Longer1 </TextBlock>
            <TextBlock TextWrapping="Wrap">Longerest1 </TextBlock>
            <TextBlock TextWrapping="Wrap">
              <Run>Break</Run>
              <LineBreak></LineBreak>
            </TextBlock>
            <TextBlock TextWrapping="Wrap">Short2</TextBlock>
            <TextBlock TextWrapping="Wrap">Longer2</TextBlock>
            <TextBlock TextWrapping="Wrap">Longerest2</TextBlock>
            <TextBlock TextWrapping="Wrap">Short3</TextBlock>
            <TextBlock TextWrapping="Wrap">Longer3</TextBlock>
            <TextBlock TextWrapping="Wrap">Longerest3</TextBlock>
          </Controls:WrapPanel>
        </UserControl>

    What about the LinkLabelControl, as here and here? It has the same problems as the approach above, since it's much the same.
    Try running the sample, and make the link text longer and longer until it wraps. Note that the link starts on a new line, which it shouldn't. Make the link text even longer, so that it is longer than a line: note that it doesn't wrap at all, it is cut off. This control doesn't handle line breaks or paragraph breaks either.

    Why not put the text all in runs, detect clicks on the containing TextBlock, and work out which run was clicked? Runs do not have mouse events, but the containing TextBlock does. I can't find a way to check whether a run is under the mouse (IsMouseOver is not present in Silverlight) or to find the bounding geometry of the run (no Clip property).

    There is VisualTreeHelper.FindElementsInHostCoordinates(). The code below uses it to get the controls under the click. The output lists the TextBlock but not the Run, since a Run is not a UIElement.

        private void theText_MouseLeftButtonDown(object sender, System.Windows.Input.MouseButtonEventArgs e)
        {
            // get the elements under the click
            UIElement uiElementSender = sender as UIElement;
            Point clickPos = e.GetPosition(uiElementSender);
            var uiElementsUnderClick = VisualTreeHelper.FindElementsInHostCoordinates(clickPos, uiElementSender);

            // show the controls
            string outputText = "";
            foreach (var uiElement in uiElementsUnderClick)
            {
                outputText += uiElement.GetType().ToString() + "\n";
            }
            this.outText.Text = outputText;
        }

    Use an empty text block with a margin to space following content onto a following line? I'm still thinking about this one. How do you calculate the right width for a line-breaking block to force the following content onto the next line? Too short, and the following content will still be on the same line, at the right. Too long, and the "linebreak" will be on the following line, with content after it. You would also have to resize the breaks when the control is resized. Some of the code for this is:

        TextBlock lineBreak = new TextBlock();
        lineBreak.TextWrapping = TextWrapping.Wrap;
        lineBreak.Text = " ";
        // need adaptive width
        lineBreak.Margin = new Thickness(0, 0, 200, 0);
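    One direction for making that spacer width adaptive - my sketch, not from the original question; the names wrapPanel, previousBlock and lineBreak are illustrative - is to recompute the width in a SizeChanged handler, using the rendered position of the element just before the break:

        // Sketch: size the "line break" TextBlock to fill whatever is left of the
        // current line, so the next block is pushed onto a new line.
        private void wrapPanel_SizeChanged(object sender, SizeChangedEventArgs e)
        {
            // Where does the preceding block start, in panel coordinates?
            GeneralTransform transform = previousBlock.TransformToVisual(wrapPanel);
            Point topLeft = transform.Transform(new Point(0, 0));

            // Fill the remainder of the line (never a negative width).
            double lineEnd = topLeft.X + previousBlock.ActualWidth;
            lineBreak.Width = Math.Max(0, wrapPanel.ActualWidth - lineEnd);
        }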


  • How can I use Perl regular expressions to parse XML data?

    - by Luke
    I have a pretty long piece of XML that I want to parse. I want to remove everything except for the subclass-code and city, so that I am left with something like this:

        TEST SUBCLASS|MIAMI

    Here is the XML:

        <?xml version="1.0" standalone="no"?>
        <web-export>
        <run-date>06/01/2010
        <pub-code>TEST
        <ad-type>TEST
        <cat-code>Real Estate</cat-code>
        <class-code>TEST</class-code>
        <subclass-code>TEST SUBCLASS</subclass-code>
        <placement-description></placement-description>
        <position-description>Town House</position-description>
        <subclass3-code></subclass3-code>
        <subclass4-code></subclass4-code>
        <ad-number>0000284708-01</ad-number>
        <start-date>05/28/2010</start-date>
        <end-date>06/09/2010</end-date>
        <line-count>6</line-count>
        <run-count>13</run-count>
        <customer-type>Private Party</customer-type>
        <account-number>100099237</account-number>
        <account-name>DOE, JOHN</account-name>
        <addr-1>207 CLARENCE STREET</addr-1>
        <addr-2> </addr-2>
        <city>MIAMI</city>
        <state>FL</state>
        <postal-code>02910</postal-code>
        <country>USA</country>
        <phone-number>4014612880</phone-number>
        <fax-number></fax-number>
        <url-addr> </url-addr>
        <email-addr>[email protected]</email-addr>
        <pay-flag>N</pay-flag>
        <ad-description>DEANESTATES2BEDS2BATHSAPPLIANCED</ad-description>
        <order-source>Import</order-source>
        <order-status>Live</order-status>
        <payor-acct>100099237</payor-acct>
        <agency-flag>N</agency-flag>
        <rate-note></rate-note>
        <ad-content> MIAMI&#47;Dean Estates&#58; 2 beds&#44; 2 baths&#46; Applianced&#46; Central air&#46; Carpets&#46; Laundry&#46; 2 decks&#46; Pool&#46; Parking&#46; Close to everything&#46;No smoking&#46; No utilities&#46; &#36;1275 mo&#46; 401&#45;578&#45;1501&#46; </ad-content>
        </ad-type>
        </pub-code>
        </run-date>
        </web-export>

    So I want to open an existing file, read its contents, and then use regular expressions to eliminate the unnecessary XML tags:
        open(READFILE, "FILENAME") or die "Cannot open file: $!";
        while (<READFILE>) {
            $_ =~ s/<\?xml version="(.*)" standalone="(.*)"\?>\n.*//g;
            $_ =~ s/<subclass-code>//g;
            $_ =~ s/<\/subclass-code>\n.*/|/g;
            $_ =~ s/(.*)PJ RER Houses /PJ RER Houses/g;
            $_ =~ s/\G //g;
            $_ =~ s/<city>//g;
            $_ =~ s/<\/city>\n.*//g;
            $_ =~ s/<(\/?)web-export>(.*)\n.*//g;
            $_ =~ s/<(\/?)run-date>(.*)\n.*//g;
            $_ =~ s/<(\/?)pub-code>(.*)\n.*//g;
            $_ =~ s/<(\/?)ad-type>(.*)\n.*//g;
            $_ =~ s/<(\/?)cat-code>(.*)<(\/?)cat-code>\n.*//g;
            $_ =~ s/<(\/?)class-code>(.*)<(\/?)class-code>\n.*//g;
            $_ =~ s/<(\/?)placement-description>(.*)<(\/?)placement-description>\n.*//g;
            $_ =~ s/<(\/?)position-description>(.*)<(\/?)position-description>\n.*//g;
            $_ =~ s/<(\/?)subclass3-code>(.*)<(\/?)subclass3-code>\n.*//g;
            $_ =~ s/<(\/?)subclass4-code>(.*)<(\/?)subclass4-code>\n.*//g;
            $_ =~ s/<(\/?)ad-number>(.*)<(\/?)ad-number>\n.*//g;
            $_ =~ s/<(\/?)start-date>(.*)<(\/?)start-date>\n.*//g;
            $_ =~ s/<(\/?)end-date>(.*)<(\/?)end-date>\n.*//g;
            $_ =~ s/<(\/?)line-count>(.*)<(\/?)line-count>\n.*//g;
            $_ =~ s/<(\/?)run-count>(.*)<(\/?)run-count>\n.*//g;
            $_ =~ s/<(\/?)customer-type>(.*)<(\/?)customer-type>\n.*//g;
            $_ =~ s/<(\/?)account-number>(.*)<(\/?)account-number>\n.*//g;
            $_ =~ s/<(\/?)account-name>(.*)<(\/?)account-name>\n.*//g;
            $_ =~ s/<(\/?)addr-1>(.*)<(\/?)addr-1>\n.*//g;
            $_ =~ s/<(\/?)addr-2>(.*)<(\/?)addr-2>\n.*//g;
            $_ =~ s/<(\/?)state>(.*)<(\/?)state>\n.*//g;
            $_ =~ s/<(\/?)postal-code>(.*)<(\/?)postal-code>\n.*//g;
            $_ =~ s/<(\/?)country>(.*)<(\/?)country>\n.*//g;
            $_ =~ s/<(\/?)phone-number>(.*)<(\/?)phone-number>\n.*//g;
            $_ =~ s/<(\/?)fax-number>(.*)<(\/?)fax-number>\n.*//g;
            $_ =~ s/<(\/?)url-addr>(.*)<(\/?)url-addr>\n.*//g;
            $_ =~ s/<(\/?)email-addr>(.*)<(\/?)email-addr>\n.*//g;
            $_ =~ s/<(\/?)pay-flag>(.*)<(\/?)pay-flag>\n.*//g;
            $_ =~ s/<(\/?)ad-description>(.*)<(\/?)ad-description>\n.*//g;
            $_ =~ s/<(\/?)order-source>(.*)<(\/?)order-source>\n.*//g;
            $_ =~ s/<(\/?)order-status>(.*)<(\/?)order-status>\n.*//g;
            $_ =~ s/<(\/?)payor-acct>(.*)<(\/?)payor-acct>\n.*//g;
            $_ =~ s/<(\/?)agency-flag>(.*)<(\/?)agency-flag>\n.*//g;
            $_ =~ s/<(\/?)rate-note>(.*)<(\/?)rate-note>\n.*//g;
            $_ =~ s/<ad-content>(.*)\n.*//g;
            $_ =~ s/\t(.*)\n.*//g;
            $_ =~ s/<\/ad-content>(.*)\n.*//g;
        }
        close(READFILE);

    Is there an easier way of doing this? I don't want to use any modules. I know a module might make this easier, but the file I am reading has a lot of data in it.
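    For what it's worth, a simpler module-free approach - a sketch I'm adding, not from the original question, assuming the file fits in memory and each element appears once - is to capture the two wanted fields instead of deleting everything else:

        # Sketch: pull out just subclass-code and city rather than stripping every other tag.
        open(READFILE, "FILENAME") or die "Cannot open file: $!";
        my $xml = do { local $/; <READFILE> };   # slurp the whole file
        close(READFILE);

        # Non-greedy captures; /s lets . match newlines inside the element text.
        my ($subclass) = $xml =~ m{<subclass-code>\s*(.*?)\s*</subclass-code>}s;
        my ($city)     = $xml =~ m{<city>\s*(.*?)\s*</city>}s;

        print "$subclass|$city\n";   # e.g. TEST SUBCLASS|MIAMI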


  • Edit Contact code worked in 1.6 but doesn't work on Droid 2.1?

    - by user225405
    Hi All, I had some fairly simple code in my app to invoke the Edit Contact activity on a known-good contact index. It worked in Android 1.6 but is broken for me now in Android 2.1 on the Droid. I built a sample activity/app 'EdCon' to show this:

        package com.jbh;

        import android.app.Activity;
        import android.content.Intent;
        import android.net.Uri;
        import android.os.Bundle;

        public class EdCon extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // Build an intent to edit a known good contact index
                Intent i;
                i = new Intent(Intent.ACTION_EDIT);
                i.setData(Uri.parse("content://contacts/people/10"));
                startActivity(i);
            }
        }

    When I run this on my G1 running 1.6 it works as expected, i.e. it brings up the Edit Contact screen for the known index, and I can then hit BACK to return to "Hello World, EdCon". When I run this on the Droid under 2.1 I get the following:

        05-07 15:35:57.787: INFO/ActivityManager(1013): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.jbh/.EdCon }
        05-07 15:35:57.826: DEBUG/AndroidRuntime(13780): Shutting down VM
        05-07 15:35:57.826: DEBUG/dalvikvm(13780): DestroyJavaVM waiting for non-daemon threads to exit
        05-07 15:35:57.928: DEBUG/dalvikvm(13780): DestroyJavaVM shutting VM down
        05-07 15:35:57.928: DEBUG/dalvikvm(13780): HeapWorker thread shutting down
        05-07 15:35:57.928: DEBUG/dalvikvm(13780): HeapWorker thread has shut down
        05-07 15:35:57.928: DEBUG/jdwp(13780): JDWP shutting down net...
        05-07 15:35:57.928: DEBUG/jdwp(13780): Got wake-up signal, bailing out of select
        05-07 15:35:57.928: INFO/dalvikvm(13780): Debugger has detached; object registry had 1 entries
        05-07 15:35:57.928: DEBUG/dalvikvm(13780): VM cleaning up
        05-07 15:35:57.935: INFO/ActivityManager(1013): Start proc com.jbh for activity com.jbh/.EdCon: pid=13802 uid=10052 gids={1015}
        05-07 15:35:57.967: ERROR/AndroidRuntime(13780): ERROR: thread attach failed
        05-07 15:35:58.053: INFO/ActivityThread(13792): Publishing provider com.android.vending.SuggestionsProvider: com.android.vending.SuggestionsProvider
        05-07 15:35:58.154: INFO/dalvikvm(13802): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38)
        05-07 15:35:58.209: DEBUG/dalvikvm(13780): LinearAlloc 0x0 used 639500 of 5242880 (12%)
        05-07 15:35:58.365: INFO/dalvikvm(13802): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=18)
        05-07 15:35:58.639: INFO/ActivityManager(1013): Starting activity: Intent { act=android.intent.action.EDIT dat=content://contacts/people/10 cmp=com.android.contacts/.ui.EditContactActivity }
        05-07 15:35:58.975: DEBUG/dalvikvm(13137): GC freed 2902 objects / 166768 bytes in 61ms
        05-07 15:35:59.100: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.run(): Syncing local DB with package manager...
        05-07 15:35:59.100: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.syncLocalDbWithPackageManager(): No INSTALLING or UNINSTALLING assets.
        05-07 15:35:59.115: INFO/ActivityManager(1013): Displayed activity com.android.contacts/.ui.EditContactActivity: 387 ms (total 1296 ms)
        05-07 15:35:59.185: DEBUG/Sources(13137): Creating external source for type=com.facebook.auth.login, packageName=com.facebook.katana
        05-07 15:35:59.225: DEBUG/vending(13792): com.android.vending.LocalDbSyncService.run(): Syncing done.
        05-07 15:35:59.232: WARN/dalvikvm(13137): threadid=27: thread exiting with uncaught exception (group=0x4001b180)
        05-07 15:35:59.232: ERROR/AndroidRuntime(13137): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137): java.lang.RuntimeException: An error occured while executing doInBackground()
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.os.AsyncTask$3.done(AsyncTask.java:200)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.FutureTask.run(FutureTask.java:137)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.lang.Thread.run(Thread.java:1096)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137): Caused by: android.database.sqlite.SQLiteException: no such column: raw_contact_id: , while compiling: SELECT account_name, account_type, sourceid, version, dirty, data_id, res_package, mimetype, data1, data2, data3, data4, data5, data6, data7, data8, data9, data10, data11, data12, data13, data14, data15, data_sync1, data_sync2, data_sync3, data_sync4, _id, is_primary, is_super_primary, data_version, group_sourceid, sync1, sync2, sync3, sync4, deleted, contact_id, starred, is_restricted FROM contact_entities_view WHERE (1) AND (raw_contact_id=10)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteProgram.native_compile(Native Method)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteProgram.compile(SQLiteProgram.java:110)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteProgram.<init>(SQLiteProgram.java:59)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteQuery.<init>(SQLiteQuery.java:49)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteDirectCursorDriver.query(SQLiteDirectCursorDriver.java:49)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteDatabase.rawQueryWithFactory(SQLiteDatabase.java:1221)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.database.sqlite.SQLiteQueryBuilder.query(SQLiteQueryBuilder.java:316)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.providers.contacts.ContactsProvider2.query(ContactsProvider2.java:3850)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.providers.contacts.ContactsProvider2.query(ContactsProvider2.java:3840)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.providers.contacts.ContactsProvider2$RawContactsEntityIterator.<init>(ContactsProvider2.java:4498)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.providers.contacts.ContactsProvider2.queryEntities(ContactsProvider2.java:4751)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.content.ContentProvider$Transport.queryEntities(ContentProvider.java:140)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.content.ContentProviderClient.queryEntities(ContentProviderClient.java:98)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.content.ContentResolver.queryEntities(ContentResolver.java:296)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.contacts.model.EntitySet.fromQuery(EntitySet.java:72)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.contacts.ui.EditContactActivity$QueryEntitiesTask.doInBackground(EditContactActivity.java:191)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.contacts.ui.EditContactActivity$QueryEntitiesTask.doInBackground(EditContactActivity.java:154)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at com.android.contacts.util.WeakAsyncTask.doInBackground(WeakAsyncTask.java:45)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at android.os.AsyncTask$2.call(AsyncTask.java:185)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
        05-07 15:35:59.295: ERROR/AndroidRuntime(13137):   ... 4 more
        05-07 15:35:59.303: INFO/Process(1013): Sending signal. PID: 13137 SIG: 3
        05-07 15:35:59.303: INFO/dalvikvm(13137): threadid=7: reacting to signal 3
        05-07 15:35:59.303: ERROR/dalvikvm(13137): Unable to open stack trace file '/data/anr/traces.txt': Permission denied
        05-07 15:35:59.506: INFO/DumpStateReceiver(1013): Added state dump to 1 crashes
        05-07 15:36:07.053: DEBUG/dalvikvm(12901): GC freed 389 objects / 25056 bytes in 145ms
        05-07 15:36:17.287: DEBUG/dalvikvm(11649): GC freed 154 objects / 6816 bytes in 136ms
        05-07 15:36:22.365: DEBUG/dalvikvm(13574): GC freed 348 objects / 67848 bytes in 112ms
        05-07 15:36:27.451: DEBUG/dalvikvm(11836): GC freed 267 objects / 17432 bytes in 65ms
        05-07 15:36:32.553: DEBUG/dalvikvm(12757): GC freed 1888 objects / 92440 bytes in 67ms
        05-07 15:36:38.803: INFO/power(1013): * set_screen_state 0
        05-07 15:36:38.813: DEBUG/SurfaceFlinger(1013): About to give-up screen, flinger = 0x114c30
        05-07 15:36:38.826: DEBUG/Sensors(1013): using accelerometer (name=accelerometer)
        05-07 15:36:38.834: DEBUG/PhoneWindow(13137): couldn't save which view has focus because the focused view android.widget.ScrollView@44883558 has no id.
        05-07 15:36:38.865: DEBUG/WifiService(1013): ACTION_SCREEN_OFF
        05-07 15:36:38.889: DEBUG/WifiService(1013): setting ACTION_DEVICE_IDLE timer for 900000ms
        05-07 15:36:44.107: DEBUG/dalvikvm(1013): GC freed 7351 objects / 521440 bytes in 130ms
        05-07 15:36:49.373: DEBUG/dalvikvm(13553): GC freed 321 objects / 12056 bytes in 102ms

    The "no such column: raw_contact_id" looks like the issue, but I'm not sure how or why that would happen or what it means. Any help appreciated! [email protected]
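    Not part of the original post, but the stack trace suggests the legacy content://contacts/people/ URI is being mapped onto the 2.x contacts schema badly. A sketch of one workaround worth trying - building the URI from ContactsContract instead; the id 10 is illustrative and on 2.x must be a ContactsContract.Contacts _ID, not a legacy people id:

        import android.content.ContentUris;
        import android.content.Intent;
        import android.net.Uri;
        import android.provider.ContactsContract;

        // Inside onCreate(), replacing the legacy intent: edit the contact via the
        // 2.x ContactsContract URI rather than the content://contacts/people/ path
        // that triggers the raw_contact_id query.
        Uri contactUri = ContentUris.withAppendedId(ContactsContract.Contacts.CONTENT_URI, 10);
        Intent edit = new Intent(Intent.ACTION_EDIT, contactUri);
        startActivity(edit);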


  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScriptSerializer. The DataContractJsonSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScriptSerializer that came before it does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (JavaScriptSerializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high-level and low-level components, supports binary JSON, JSON contracts, XML-to-JSON conversion, LINQ to JSON, and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is becoming ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the captured JSON directly into a .NET type, as DataContractJsonSerializer or JavaScriptSerializer do. Sometimes it isn't possible to map types due to differences between the languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place, or don't want to create them, just to import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API, when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third-party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull only a small amount of data out of a large JSON document received from a service, because the third-party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's location). The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and store them directly on the business entities that needed to receive the data - no need to map the entire Maps API structure.
    Getting JSON.NET

    The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project from the Package Manager Console with:

        PM> Install-Package Newtonsoft.Json

    or by using Manage NuGet Packages in your project's References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version, including source code: http://json.codeplex.com/

    Creating JSON on the fly with JObject and JArray

    Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so supports the dynamic keyword, which makes it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and the songs, and JArray for the actual collection of songs:

        [TestMethod]
        public void JObjectOutputTest()
        {
            // strongly typed instance
            var jsonObject = new JObject();

            // you can explicitly add values here using the class interface
            jsonObject.Add("Entered", DateTime.Now);

            // or cast to dynamic to dynamically add/read properties
            dynamic album = jsonObject;
            album.AlbumName = "Dirty Deeds Done Dirt Cheap";
            album.Artist = "AC/DC";
            album.YearReleased = 1976;

            album.Songs = new JArray() as dynamic;

            dynamic song = new JObject();
            song.SongName = "Dirty Deeds Done Dirt Cheap";
            song.SongLength = "4:11";
            album.Songs.Add(song);

            song = new JObject();
            song.SongName = "Love at First Feel";
            song.SongLength = "3:10";
            album.Songs.Add(song);

            Console.WriteLine(album.ToString());
        }

    This produces a complete JSON structure:

        {
          "Entered": "2012-08-18T13:26:37.7137482-10:00",
          "AlbumName": "Dirty Deeds Done Dirt Cheap",
          "Artist": "AC/DC",
          "YearReleased": 1976,
          "Songs": [
            {
              "SongName": "Dirty Deeds Done Dirt Cheap",
              "SongLength": "4:11"
            },
            {
              "SongName": "Love at First Feel",
              "SongLength": "3:10"
            }
          ]
        }

    Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be driven entirely at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject is similar in behavior to ExpandoObject in that it allows you to add properties simply by assigning to them. Internally, JObject values are stored in pseudo-collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface implemented in JSON.NET's JToken base class. For simple values the syntax is very clean - you just add typed values as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast it to dynamic, and then add properties and items to it.
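    As a quick illustration of the configuration options mentioned above (my sketch, not from the post; NullValueHandling is just one example setting):

        // Sketch: overriding serializer defaults with JsonSerializerSettings.
        var settings = new JsonSerializerSettings
        {
            Formatting = Formatting.Indented,             // pretty-print the output
            NullValueHandling = NullValueHandling.Ignore  // drop null-valued properties
        };
        string json = JsonConvert.SerializeObject(album, settings);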
    Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

        // strongly typed instance
        var jsonObject = new JObject();

        // you can explicitly add values here
        jsonObject.Add("Entered", DateTime.Now);

        // expando-style instance: you can just 'use' properties
        dynamic album = jsonObject;
        album.AlbumName = "Dirty Deeds Done Dirt Cheap";

    JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

        foreach (var item in jsonObject)
        {
            Console.WriteLine(item.Key + " " + item.Value.ToString());
        }

    The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

    Importing JSON with JObject.Parse() and JArray.Parse()

    The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or from various streams, respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using the familiar dynamic object syntax. Here's a simple example:

        public void JValueParsingTest()
        {
            var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                                ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

            dynamic json = JValue.Parse(jsonString);

            // values require casting
            string name = json.Name;
            string company = json.Company;
            DateTime entered = json.Entered;

            Assert.AreEqual(name, "Rick");
            Assert.AreEqual(company, "West Wind");
        }

    The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic, I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic typing works: the type can't be determined from the method signature of the Assert.AreEqual(object, object) method. I have to either assign the dynamic value to a typed variable, as I did above, or cast explicitly ((string)json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
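    A third option (my addition, not from the post) is to skip dynamic for individual reads and use JToken's typed accessors directly:

        // Sketch: reading typed values via JObject indexing and Value<T>(),
        // instead of casting dynamic members.
        JObject obj = JObject.Parse(jsonString);
        string name = obj.Value<string>("Name");       // generic accessor
        DateTime entered = (DateTime)obj["Entered"];   // explicit conversion operator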
    Here's another example: an array of albums serialized to JSON and then parsed with JArray.Parse():

        [TestMethod]
        public void JsonArrayParsingTest()
        {
            var jsonString = @"[
            {
                ""Id"": ""b3ec4e5c"",
                ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
                ""Artist"": ""AC/DC"",
                ""YearReleased"": 1976,
                ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
                ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
                ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
                ""Songs"": [
                    { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
                    { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
                    { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
                ]
            },
            {
                ""Id"": ""7b919432"",
                ""AlbumName"": ""End of the Silence"",
                ""Artist"": ""Henry Rollins Band"",
                ""YearReleased"": 1992,
                ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
                ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
                ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
                ""Songs"": [
                    { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
                    { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
                ]
            }
            ]";

            JArray jsonVal = JArray.Parse(jsonString);
            dynamic albums = jsonVal;

            foreach (dynamic album in albums)
            {
                Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
                foreach (dynamic song in album.Songs)
                {
                    Console.WriteLine("\t" + song.SongName);
                }
            }

            Console.WriteLine(albums[0].AlbumName);
            Console.WriteLine(albums[0].Songs[1].SongName);
        }

    JObject and JArray in ASP.NET Web API

    Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters of these types or return them back to the client. The following contrived example receives dynamic JSON input, then creates a new dynamic JSON object and returns it, based on data from the first:

        [HttpPost]
        public JObject PostAlbumJObject(JObject jAlbum)
        {
            // dynamic input from inbound JSON
            dynamic album = jAlbum;

            // create a new JSON object to write out
            dynamic newAlbum = new JObject();

            // Create properties on the new instance
            // with values from the first
            newAlbum.AlbumName = album.AlbumName + " New";
            newAlbum.NewProperty = "something new";
            newAlbum.Songs = new JArray();
            foreach (dynamic song in album.Songs)
            {
                song.SongName = song.SongName + " New";
                newAlbum.Songs.Add(song);
            }

            return newAlbum;
        }

    The raw POST request to the server looks something like this:

        POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
        User-Agent: Fiddler
        Content-type: application/json
        Host: localhost
        Content-Length: 88

        {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

    and the output that comes back looks like this:

        {
          "AlbumName": "Dirty Deeds New",
          "NewProperty": "something new",
          "Songs": [
            {
              "SongName": "Problem Child New"
            },
            {
              "SongName": "Squealer New"
            }
          ]
        }

    The original values are echoed back with something extra appended, to demonstrate that we're working with a new object.
    When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON - so effectively the parameter and result type explicitly determine the input and output format, which is nice.

    Dynamic to Strong Type Mapping

    You can also map JObject and JArray instances to strongly typed objects, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album, and casts that album to a static Album instance:

        [TestMethod]
        public void JsonParseToStrongTypeTest()
        {
            JArray albums = JArray.Parse(jsonString);

            // pick out one album
            JObject jalbum = albums[0] as JObject;

            // Copy to a static Album instance
            Album album = jalbum.ToObject<Album>();

            Assert.IsNotNull(album);
            Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
            Assert.IsTrue(album.Songs.Count > 0);
        }

    This is pretty damn useful for the scenario I mentioned earlier: you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

    Strongly typed JSON Parsing

    With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop-dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

        [TestMethod]
        public void StronglyTypedSerializationTest()
        {
            // Demonstrate serialization and deserialization round-tripping
            var album = new Album()
            {
                AlbumName = "Dirty Deeds Done Dirt Cheap",
                Artist = "AC/DC",
                Entered = DateTime.Now,
                YearReleased = 1976,
                Songs = new List<Song>()
                {
                    new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
                    new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
                }
            };

            // serialize to string
            string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
            Console.WriteLine(json2);

            // make sure we can deserialize back
            var album2 = JsonConvert.DeserializeObject<Album>(json2);

            Assert.IsNotNull(album2);
            Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
            Assert.IsTrue(album2.Songs.Count == 2);
        }

    JsonConvert is a high-level static class that wraps lower-level functionality, but you can also use the JsonSerializer class, which allows you to serialize and parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good, because there's not a lot in the way of documentation that's actually useful.

    Summary

    JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization to complex querying of JSON objects using LINQ. It's good to see this open-source library getting integrated into .NET, pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
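    The tests above assume Album and Song types that the post itself never shows; here is a minimal sketch of what they might look like, inferred from the sample JSON used in the examples (not the author's actual classes):

        // Sketch: POCO shapes inferred from the sample JSON.
        public class Song
        {
            public string AlbumId { get; set; }
            public string SongName { get; set; }
            public string SongLength { get; set; }
        }

        public class Album
        {
            public string Id { get; set; }
            public string AlbumName { get; set; }
            public string Artist { get; set; }
            public int YearReleased { get; set; }
            public DateTime Entered { get; set; }
            public string AlbumImageUrl { get; set; }
            public string AmazonUrl { get; set; }
            public List<Song> Songs { get; set; }
        }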
    Resources

    Sample Test Project
    http://json.codeplex.com/

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in .NET  Web Api  AJAX


  • Error about ACPI _OSC request failed (AE_NOT_FOUND)

    - by Yavuz Maslak
    I have an Ubuntu Server 11.10 64-bit machine, and I see an error in kern.log. The error comes out every time the server reboots. Part of the grep for ACPI in kern.log:

        Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d)
        Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d
        Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM

    Which hardware might be causing this error?

        root@www:# grep -r ACPI /var/log/kern.log
        Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf780000 - 00000000bf798000 (ACPI data)
        Dec 5 09:08:51 www kernel: [ 0.000000] BIOS-e820: 00000000bf798000 - 00000000bf7dc000 (ACPI NVS)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDP 00000000000fb1a0 00014 (v00 ACPIAM)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: RSDT 00000000bf780000 00040 (v01 022410 RSDT1405 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACP 00000000bf780200 00084 (v01 022410 FACP1405 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: DSDT 00000000bf7804b0 0C359 (v01 A1279 A1279001 00000001 INTL 20060113)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: FACS 00000000bf798000 00040
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: APIC 00000000bf780390 000D8 (v01 022410 APIC1405 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: MCFG 00000000bf780470 0003C (v01 022410 OEMMCFG 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OEMB 00000000bf798040 00072 (v01 022410 OEMB1405 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET 00000000bf78f4b0 00038 (v01 022410 OEMHPET 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: OSFR 00000000bf78f4f0 000B0 (v01 022410 OEMOSFR 20100224 MSFT 00000097)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: SSDT 00000000bf798fe0 00363 (v01 DpgPmm CpuPm 00000012 INTL 20060113)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x808
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x84] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x85] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x86] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x87] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x88] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x89] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x8a] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x8b] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x8c] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x8d] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x8e] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x8f] disabled)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0])
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IOAPIC (id[0x03] address[0xfec8a000] gsi_base[24])
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ0 used by override.
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ2 used by override.
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: IRQ9 used by override.
        Dec 5 09:08:51 www kernel: [ 0.000000] Using ACPI (MADT) for SMP configuration information
        Dec 5 09:08:51 www kernel: [ 0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000
        Dec 5 09:08:51 www kernel: [ 0.009507] ACPI: Core revision 20110413
        Dec 5 09:08:51 www kernel: [ 0.499129] PM: Registering ACPI NVS region at bf798000 (278528 bytes)
        Dec 5 09:08:51 www kernel: [ 0.500749] ACPI: bus type pci registered
        Dec 5 09:08:51 www kernel: [ 0.502747] ACPI: EC: Look up EC in DSDT
        Dec 5 09:08:51 www kernel: [ 0.503788] ACPI: Executed 1 blocks of module-level executable AML code
        Dec 5 09:08:51 www kernel: [ 0.520435] ACPI: SSDT 00000000bf7980c0 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113)
        Dec 5 09:08:51 www kernel: [ 0.520863] ACPI: Dynamic OEM Table Load:
        Dec 5 09:08:51 www kernel: [ 0.520990] ACPI: SSDT (null) 00F20 (v01 DpgPmm P001Ist 00000011 INTL 20060113)
        Dec 5 09:08:51 www kernel: [ 0.521308] ACPI: Interpreter enabled
        Dec 5 09:08:51 www kernel: [ 0.521366] ACPI: (supports S0 S1 S3 S4 S5)
        Dec 5 09:08:51 www kernel: [ 0.521611] ACPI: Using IOAPIC for interrupt routing
        Dec 5 09:08:51 www kernel: [ 0.522622] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources
        Dec 5 09:08:51 www kernel: [ 0.554150] ACPI: No dock devices found.
        Dec 5 09:08:51 www kernel: [ 0.554267] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
        Dec 5 09:08:51 www kernel: [ 0.555231] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
        Dec 5 09:08:51 www kernel: [ 0.588224] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588398] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588451] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588473] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P6._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588492] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P7._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588512] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P8._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588540] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE1._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588559] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE3._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588579] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.NPE7._PRT]
        Dec 5 09:08:51 www kernel: [ 0.588605] pci0000:00: Requesting ACPI _OSC control (0x1d)
        Dec 5 09:08:51 www kernel: [ 0.588667] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d
        Dec 5 09:08:51 www kernel: [ 0.588746] ACPI _OSC control for PCIe not granted, disabling ASPM
        Dec 5 09:08:51 www kernel: [ 0.597666] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 6 7 10 11 12 14 *15)
        Dec 5 09:08:51 www kernel: [ 0.598142] ACPI: PCI Interrupt Link [LNKB] (IRQs *5)
        Dec 5 09:08:51 www kernel: [ 0.598336] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 6 7 10 *11 12 14 15)
        Dec 5 09:08:51 www kernel: [ 0.598810] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 6 7 *10 11 12 14 15)
        Dec 5 09:08:51 www kernel: [ 0.599284] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 6 7 10 11 12 *14 15)
        Dec 5 09:08:51 www kernel: [ 0.599762] ACPI: PCI Interrupt Link [LNKF] (IRQs *3 4 6 7 10 11 12 14 15)
        Dec 5 09:08:51 www kernel: [ 0.600236] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 6 *7 10 11 12 14 15)
        Dec 5 09:08:51 www kernel: [ 0.600709] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 *4 6 7 10 11 12 14 15)
        Dec 5 09:08:51 www kernel: [ 0.601931] PCI: Using ACPI for IRQ routing
        Dec 5 09:08:51 www kernel: [ 0.628146] pnp: PnP ACPI init
        Dec 5 09:08:51 www kernel: [ 0.628211] ACPI: bus type pnp registered
        Dec 5 09:08:51 www kernel: [ 0.628417] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active)
        Dec 5 09:08:51 www kernel: [ 0.628859] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active)
        Dec 5 09:08:51 www kernel: [ 0.628915] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active)
        Dec 5 09:08:51 www kernel: [ 0.628951] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active)
        Dec 5 09:08:51 www kernel: [ 0.628975] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active)
        Dec 5 09:08:51 www kernel: [ 0.629004] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active)
        Dec 5 09:08:51 www kernel: [ 0.629229] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active)
        Dec 5 09:08:51 www kernel: [ 0.629779] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
        Dec 5 09:08:51 www kernel: [ 0.629849] pnp 00:08: Plug and Play ACPI device, IDs PNP0103 (active)
        Dec 5 09:08:51 www kernel: [ 0.629901] pnp 00:09: Plug and Play ACPI device, IDs INT0800 (active)
        Dec 5 09:08:51 www kernel: [ 0.630030] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
        Dec 5 09:08:51 www kernel: [ 0.630254] system 00:0b: Plug and Play ACPI device, IDs PNP0c02 (active)
        Dec 5 09:08:51 www kernel: [ 0.630304] pnp 00:0c: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
        Dec 5 09:08:51 www kernel: [ 0.630359] pnp 00:0d: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active)
        Dec 5 09:08:51 www kernel: [ 0.630492] system 00:0e: Plug and Play ACPI device, IDs PNP0c02 (active)
        Dec 5 09:08:51 www kernel: [ 0.630986] system 00:0f: Plug and Play ACPI device, IDs PNP0c01 (active)
        Dec 5 09:08:51 www kernel: [ 0.631078] pnp: PnP ACPI: found 16 devices
        Dec 5 09:08:51 www kernel: [ 0.631135] ACPI: ACPI bus type pnp unregistered
        Dec 5 09:08:51 www kernel: [ 0.726291] ACPI: Power Button [PWRB]
        Dec 5 09:08:51 www kernel: [ 0.726452] ACPI: Power Button [PWRF]
        Dec 5 09:08:51 www kernel: [ 0.726527] ACPI: acpi_idle yielding to intel_idle

    (The grep output for the next boot, Dec 7 21:45:22, repeats the same sequence, including the same _OSC failure.)
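    Not from the original question, but for context: the message means the BIOS refused the kernel's _OSC request for PCIe control, so the kernel disables ASPM itself; it points at the motherboard firmware rather than a specific peripheral, and a BIOS update is the usual first thing to check. If the message is benign but noisy, the ASPM choice can also be made explicit with a boot parameter; a sketch assuming a stock Ubuntu GRUB 2 setup:

        # Sketch: disable ASPM explicitly at boot (pcie_aspm=force exists too,
        # but overriding firmware that refused control carries some risk).
        # Edit /etc/default/grub:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"

        # Then regenerate the GRUB config and reboot:
        sudo update-grub
        sudo reboot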


  • Configuring Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    In this article, I will provide examples of how to configure OIF/IdP to map OAM Authentication Schemes to Federation Authentication Methods, based on the concepts introduced in my previous entry. I will show examples for the three protocols supported by OIF:
    - SAML 2.0 SSO
    - SAML 1.1 SSO
    - OpenID 2.0
    Enjoy the reading!

    Configuration

    As I mentioned in my previous article, mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined by the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). As such, the WLST commands that set those mappings involve:
    - Either the SP Partner Profile, which affects all partners referencing that profile that do not override the Federation Authentication Method to OAM Authentication Scheme mappings
    - Or the SP Partner entry, which affects only that SP Partner
    It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile are ignored.

    WLST Commands

    The two OIF WLST commands that define mappings from Federation Authentication Methods to OAM Authentication Schemes are:
    - addSPPartnerProfileAuthnMethod(), which defines a mapping on an SP Partner Profile and takes as parameters the name of the SP Partner Profile, the Federation Authentication Method, and the OAM Authentication Scheme name
    - addSPPartnerAuthnMethod(), which defines a mapping on an SP Partner and takes as parameters the name of the SP Partner, the Federation Authentication Method, and the OAM Authentication Scheme name
    Note: I will discuss the other parameters of those commands in a subsequent article.

    In the next sections, I will show how to use those commands:
    - For SAML 2.0, I will configure the SP Partner Profile, which applies the mappings to all SP Partners referencing the profile, unless they override the mapping definitions
    - For SAML 1.1, I will configure the SP Partner
    - For OpenID 2.0, I will configure the SP/RP Partner

    SAML 2.0

    Test Setup

    In this setup, OIF is acting as an IdP and is integrated with a remote SAML 2.0 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to:
    1. Use LDAPScheme as the Authentication Scheme
    2. Use BasicScheme as the Authentication Scheme
    3. Map BasicScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:Password Federation Authentication Method
    4. Use OAMLDAPPluginAuthnScheme as the Authentication Scheme
    5. Map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method

    LDAPScheme as Authentication Scheme

    Using the OOTB settings for user authentication in OAM, the user will be challenged via a FORM based login page defined by the LDAPScheme. The default Federation Authentication Method mapping configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport method, to the LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to:

      <samlp:Response ...>
        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
        <samlp:Status>
          <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
        </samlp:Status>
        <saml:Assertion ...>
          <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
          <dsig:Signature>...</dsig:Signature>
          <saml:Subject>
            <saml:NameID ...>[email protected]</saml:NameID>
            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
              <saml:SubjectConfirmationData .../>
            </saml:SubjectConfirmation>
          </saml:Subject>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
            <saml:AuthnContext>
              <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
            </saml:AuthnContext>
          </saml:AuthnStatement>
        </saml:Assertion>
      </samlp:Response>

    BasicScheme as Authentication Scheme

    For this test, I will switch the default Authentication Scheme for the SP Partner Profile to BasicScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify the scheme to use as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp")):
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the setSPPartnerProfileDefaultScheme() command: setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "BasicScheme")
    5. Exit the WLST environment: exit()

    The user will now be challenged via the HTTP Basic Authentication defined by the BasicScheme for AcmeSP. Also, as noted earlier, the default Federation Authentication Method mapping configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport method, to the LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via HTTP Basic Authentication, OIF/IdP would issue an Assertion similar to:

      <samlp:Response ...>
        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
        <samlp:Status>
          <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
        </samlp:Status>
        <saml:Assertion ...>
          <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
          <dsig:Signature>...</dsig:Signature>
          <saml:Subject>
            <saml:NameID ...>[email protected]</saml:NameID>
            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
              <saml:SubjectConfirmationData .../>
            </saml:SubjectConfirmation>
          </saml:Subject>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
            <saml:AuthnContext>
              <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
            </saml:AuthnContext>
          </saml:AuthnStatement>
        </saml:Assertion>
      </samlp:Response>

    Mapping BasicScheme

    To change the Federation Authentication Method mapping for the BasicScheme to urn:oasis:names:tc:SAML:2.0:ac:classes:Password instead of urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport, for the saml20-sp-partner-profile SAML 2.0 SP Partner Profile (the profile to which my AcmeSP partner is bound), I will execute the addSPPartnerProfileAuthnMethod() command:
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the addSPPartnerProfileAuthnMethod() command: addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:Password", "BasicScheme")
    5. Exit the WLST environment: exit()

    After authentication via HTTP Basic Authentication, OIF/IdP would now issue an Assertion similar to the following (note that the AuthnContextClassRef changed from PasswordProtectedTransport to Password):

      <samlp:Response ...>
        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
        <samlp:Status>
          <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
        </samlp:Status>
        <saml:Assertion ...>
          <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
          <dsig:Signature>...</dsig:Signature>
          <saml:Subject>
            <saml:NameID ...>[email protected]</saml:NameID>
            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
              <saml:SubjectConfirmationData .../>
            </saml:SubjectConfirmation>
          </saml:Subject>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
            <saml:AuthnContext>
              <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef>
            </saml:AuthnContext>
          </saml:AuthnStatement>
        </saml:Assertion>
      </samlp:Response>

    OAMLDAPPluginAuthnScheme as Authentication Scheme

    For this test, I will switch the default Authentication Scheme for the SP Partner Profile to OAMLDAPPluginAuthnScheme instead of BasicScheme.
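    Scripted non-interactively, that switch is a one-liner; the sketch below is my own illustration (the Admin server URL and credentials are placeholders, and only the command name and arguments come from this walkthrough). The equivalent interactive steps follow:

      # Minimal WLST sketch; run via $IAM_ORACLE_HOME/common/bin/wlst.sh script.py
      # The URL and credentials are placeholders for your own Admin server.
      connect('weblogic', '<admin-password>', 't3://adminhost:7001')
      domainRuntime()
      # Change the default OAM Authentication Scheme for every SP Partner bound
      # to this SP Partner Profile (unless a partner overrides it):
      setSPPartnerProfileDefaultScheme('saml20-sp-partner-profile',
                                       'OAMLDAPPluginAuthnScheme')
      exit()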
    I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify the scheme to use as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp")):
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the setSPPartnerProfileDefaultScheme() command: setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "OAMLDAPPluginAuthnScheme")
    5. Exit the WLST environment: exit()

    The user will now be challenged via the FORM defined by the OAMLDAPPluginAuthnScheme for AcmeSP. Unlike the LDAPScheme and BasicScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Method. As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to the following (note the AuthnContextClassRef set to OAMLDAPPluginAuthnScheme):

      <samlp:Response ...>
        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
        <samlp:Status>
          <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
        </samlp:Status>
        <saml:Assertion ...>
          <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
          <dsig:Signature>...</dsig:Signature>
          <saml:Subject>
            <saml:NameID ...>[email protected]</saml:NameID>
            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
              <saml:SubjectConfirmationData .../>
            </saml:SubjectConfirmation>
          </saml:Subject>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
            <saml:AuthnContext>
              <saml:AuthnContextClassRef>OAMLDAPPluginAuthnScheme</saml:AuthnContextClassRef>
            </saml:AuthnContext>
          </saml:AuthnStatement>
        </saml:Assertion>
      </samlp:Response>

    Mapping OAMLDAPPluginAuthnScheme

    To add the OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method mapping, I will execute the addSPPartnerProfileAuthnMethod() command:
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the addSPPartnerProfileAuthnMethod() command: addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport", "OAMLDAPPluginAuthnScheme")
    5. Exit the WLST environment: exit()

    After authentication via FORM, OIF/IdP would now issue an Assertion similar to the following (note that the method changed from OAMLDAPPluginAuthnScheme to PasswordProtectedTransport):

      <samlp:Response ...>
        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
        <samlp:Status>
          <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
        </samlp:Status>
        <saml:Assertion ...>
          <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>
          <dsig:Signature>...</dsig:Signature>
          <saml:Subject>
            <saml:NameID ...>[email protected]</saml:NameID>
            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
              <saml:SubjectConfirmationData .../>
            </saml:SubjectConfirmation>
          </saml:Subject>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">
            <saml:AuthnContext>
              <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
            </saml:AuthnContext>
          </saml:AuthnStatement>
        </saml:Assertion>
      </samlp:Response>

    SAML 1.1

    Test Setup

    In this setup, OIF is acting as an IdP and is integrated with a remote SAML 1.1 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to:
    1. Use LDAPScheme as the Authentication Scheme
    2. Use OAMLDAPPluginAuthnScheme as the Authentication Scheme
    3. Map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method
    4. Use LDAPScheme as the Authentication Scheme again
    5. Map LDAPScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method

    LDAPScheme as Authentication Scheme

    Using the OOTB settings for user authentication in OAM, the user will be challenged via a FORM based login page defined by the LDAPScheme. The default Federation Authentication Method mapping configuration maps only the urn:oasis:names:tc:SAML:1.0:am:password method, to the LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to:

      <samlp:Response ...>
        <samlp:Status>
          <samlp:StatusCode Value="samlp:Success"/>
        </samlp:Status>
        <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
            <saml:Subject>
              <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
              <saml:SubjectConfirmation>
                <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
              </saml:SubjectConfirmation>
            </saml:Subject>
          </saml:AuthnStatement>
          <dsig:Signature>...</dsig:Signature>
        </saml:Assertion>
      </samlp:Response>

    OAMLDAPPluginAuthnScheme as Authentication Scheme

    For this test, I will switch the default Authentication Scheme for the SP Partner to OAMLDAPPluginAuthnScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerDefaultScheme() command and specify the scheme to use as the default for the SP Partner:
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the setSPPartnerDefaultScheme() command: setSPPartnerDefaultScheme("AcmeSP", "OAMLDAPPluginAuthnScheme")
    5. Exit the WLST environment: exit()

    The user will be challenged via the FORM defined by the OAMLDAPPluginAuthnScheme for AcmeSP. Unlike the LDAPScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Method (in the SP Partner Profile). As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to the following (note the AuthenticationMethod set to OAMLDAPPluginAuthnScheme):

      <samlp:Response ...>
        <samlp:Status>
          <samlp:StatusCode Value="samlp:Success"/>
        </samlp:Status>
        <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="OAMLDAPPluginAuthnScheme">
            <saml:Subject>
              <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
              <saml:SubjectConfirmation>
                <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
              </saml:SubjectConfirmation>
            </saml:Subject>
          </saml:AuthnStatement>
          <dsig:Signature>...</dsig:Signature>
        </saml:Assertion>
      </samlp:Response>

    Mapping OAMLDAPPluginAuthnScheme

    To map the OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method for this SP Partner only, I will execute the addSPPartnerAuthnMethod() command:
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "OAMLDAPPluginAuthnScheme")
    5. Exit the WLST environment: exit()

    After authentication via FORM, OIF/IdP would now issue an Assertion similar to the following (note that the method changed from OAMLDAPPluginAuthnScheme to password):

      <samlp:Response ...>
        <samlp:Status>
          <samlp:StatusCode Value="samlp:Success"/>
        </samlp:Status>
        <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
            <saml:Subject>
              <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
              <saml:SubjectConfirmation>
                <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
              </saml:SubjectConfirmation>
            </saml:Subject>
          </saml:AuthnStatement>
          <dsig:Signature>...</dsig:Signature>
        </saml:Assertion>
      </samlp:Response>

    LDAPScheme as Authentication Scheme

    I will now show that defining a Federation Authentication Method mapping at the Partner level causes all mappings defined at the SP Partner Profile level to be ignored.
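    In WLST terms, the partner-level state that triggers this behaviour can be sketched as a single session. This is an illustration of my own (placeholder connection details; the command calls mirror the steps in this article):

      # URL and credentials are placeholders.
      connect('weblogic', '<admin-password>', 't3://adminhost:7001')
      domainRuntime()
      # AcmeSP now defines its own mapping, so ALL profile-level mappings
      # are ignored for this partner:
      addSPPartnerAuthnMethod('AcmeSP',
          'urn:oasis:names:tc:SAML:1.0:am:password', 'OAMLDAPPluginAuthnScheme')
      # Switching the default scheme to LDAPScheme, which the partner-level
      # mapping does not list, leaves the method unmapped:
      setSPPartnerDefaultScheme('AcmeSP', 'LDAPScheme')
      exit()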
    For this test, I will switch the default Authentication Scheme for this SP Partner back to LDAPScheme. The Assertion issued by OIF/IdP will no longer map the LDAPScheme to a Federation Authentication Method, since:
    - A Federation Authentication Method mapping is defined at the SP Partner level, so the mappings defined at the SP Partner Profile are ignored
    - The LDAPScheme is not listed in the mapping at the Partner level

    I will use the OIF WLST setSPPartnerDefaultScheme() command and specify the scheme to use as the default for this SP Partner:
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the setSPPartnerDefaultScheme() command: setSPPartnerDefaultScheme("AcmeSP", "LDAPScheme")
    5. Exit the WLST environment: exit()

    After authentication via FORM, OIF/IdP would issue an Assertion similar to the following (note the AuthenticationMethod set to LDAPScheme):

      <samlp:Response ...>
        <samlp:Status>
          <samlp:StatusCode Value="samlp:Success"/>
        </samlp:Status>
        <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="LDAPScheme">
            <saml:Subject>
              <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
              <saml:SubjectConfirmation>
                <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
              </saml:SubjectConfirmation>
            </saml:Subject>
          </saml:AuthnStatement>
          <dsig:Signature>...</dsig:Signature>
        </saml:Assertion>
      </samlp:Response>

    Mapping LDAPScheme at Partner Level

    To fix this issue, we need to add the LDAPScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method mapping for this SP Partner only. I will execute the addSPPartnerAuthnMethod() command:
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "LDAPScheme")
    5. Exit the WLST environment: exit()

    After authentication via FORM, OIF/IdP would now issue an Assertion similar to the following (note that the method changed from LDAPScheme to password):

      <samlp:Response ...>
        <samlp:Status>
          <samlp:StatusCode Value="samlp:Success"/>
        </samlp:Status>
        <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
          <saml:Conditions ...>
            <saml:AudienceRestriction>
              <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
            </saml:AudienceRestriction>
          </saml:Conditions>
          <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
            <saml:Subject>
              <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
              <saml:SubjectConfirmation>
                <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
              </saml:SubjectConfirmation>
            </saml:Subject>
          </saml:AuthnStatement>
          <dsig:Signature>...</dsig:Signature>
        </saml:Assertion>
      </samlp:Response>

    OpenID 2.0

    In the OpenID 2.0 flows, the RP must request the use of PAPE in order for OIF/IdP/OP to include PAPE information. For OpenID 2.0, the configuration involves mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. The WLST command takes a list of policies delimited by the ',' character, unlike SAML 2.0 and SAML 1.1, where a single Federation Authentication Method has to be specified.

    Test Setup

    In this setup, OIF is acting as an IdP/OP and is integrated with a remote OpenID 2.0 SP/RP partner identified by AcmeRP. In this test, I will perform Federation SSO with OIF/IdP configured to:
    1. Use LDAPScheme as the Authentication Scheme
    2. Map LDAPScheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies as Federation Authentication Methods (the second one is custom to this use case)

    LDAPScheme as Authentication Scheme

    Using the OOTB settings for user authentication in OAM, the user will be challenged via a FORM based login page defined by the LDAPScheme. No Federation Authentication Method is defined OOTB for OpenID 2.0, so when OIF/IdP/OP issues an SSO response with a PAPE Response element, it specifies the scheme name instead of a Federation Authentication Method. After authentication via FORM, OIF/IdP would issue an SSO Response similar to:

      https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=LDAPScheme&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D

    Mapping LDAPScheme

    To map the LDAPScheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies as Federation Authentication Methods, I will execute the addSPPartnerAuthnMethod() command (the policies are comma separated):
    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeRP", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,http://openid-policies/password-protected", "LDAPScheme")
    5. Exit the WLST environment: exit()

    After authentication via FORM, OIF/IdP would now issue an SSO Response similar to the following (note that the method changed from LDAPScheme to the two policies):

      https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant+http%3A%2F%2Fopenid-policies%2Fpassword-protected&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D

    In the next article, I will cover how OIF/IdP can be configured so that an SP can request a specific Federation Authentication Method to challenge the user during Federation SSO.

    Cheers,
    Damien Carru

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    "Is code review important and effective?" There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - Developers are forced to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." (Wikipedia)

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code review before check-in

    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, whether it will result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    Image 1 – code review before check-in

    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.
    Cons:
    - Development code freeze: since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent, and cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code review after check-in

    The developer checks in, and random code reviews are performed on the checked-in code.

    Image 2 – Code review after check-in

    Pros:
    - The code has already passed the CI build and been run through any code analysis plug-ins you may have running on the build server. Instruct developers to ensure zero FxCop, StyleCop and static code analysis issues before check-in, and the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.
    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid approach

    The community advocates a more hybrid approach: a blend of tooling and the human accountability quotient.

    Image 3 – Hybrid approach

    1. Code review high-impact check-ins. It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code you are reviewing before check-in hasn't even been through a green CI build.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified by human reviews. Configure the tooling to report back the top 10 issues every day. Mandate manual code review of the individuals who make it onto this list of shame most often.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a Scrum project this is easier because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top-10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.

    What do the experts say? I asked a few experts what they thought of a "code review quality gate before checking in code":

    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces (4-8 hours of work max, and AT LEAST daily check-ins), a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, and static code analysis. Unfortunately it takes some time (it would be great to run it on CIs, but...), so it is scheduled every night. Based on this we get, among other things, top-10 lists of suspicious code, which are then subjected to reviews. If a person seems to be very popular on these top-10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
    - The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
    - The "post-checkin" review may very well not be at the changeset level, but correlates to a review at the "user story" level.
    Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP
    I do not think there is one right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with review after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe before check-in, maybe soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger
    It depends on the scenario. We recommend post-check-in reviews when:
    1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
    2. We need to trace all changes and track history.
    3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or rejected.
    Pre-check-in reviews are used when:
    1. There is a high risk factor associated.
    2. Reviewers generally (most of the time) have immediate availability.
    3. The team does not have strict tracking needs.
    Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger
    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out.
    1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
    2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code that was changed by a later check-in where the dev has already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes belonging to other PBIs.

    Jakob Leander | Sr. Director, Avanade
    In my experience, manual code review:
    1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
    2. When a project actually does it, it often is not done right away = errors pile up.
    3. Requires a lot of time discussing/defining the standard and for the team to learn it.
    However, code review is very important, since e.g. even small memory leaks in a high-volume web solution have big consequences. In recent years I have advocated the following approach to code review:
    - Architects up front produce "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
    - The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
    This has HUUGE benefits:
    - You can easily tweak it to reach the level you desire together with the customer.
    - It is easy to measure for both developers and management.
    - It is 100% consistent across the code base.
    - It gets validated all the time, so you never end up getting hammered by a customer review in the end.
    - It is easy to tell a developer that you do not want code back unless it has zero errors = minimized communication.
    You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run, I require code analysis to have run on code before check-in (a check-in rule). This means:
    - You have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds.
    - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
    Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade
    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire code base. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code; it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP
    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade
    Peer review is the only way to scale, and it's a great practice for everyone on the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/), for static code analysis of the current state of the code base. You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
    I'd like to add a few more points:
    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. Peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues are created or pile up.
    - Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.

    I would love to hear what you think!

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that the monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days by applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x, with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the programs' execution character as one would do when tuning ISV or community codes. Using only a profiler and source-line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should, and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices, but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

      #    Cache performance    Redundant operations    Loop structures    Performance improvement
      1           x                      x                                         15.5
      2           x                                                                 2.8
      3           x                      x                                          2.5
      4                                  x                                          2.1
      5           x                      x                                          2.0
      6                                  x                                          5.0
      7           x                                                                 5.8
      8                                  x                                          6.3
      9                                                                             2.2
      10          x                                            x                    3.3

      Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiency in the codes studied: cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use the cache effectively run many times slower than programs that do.

    When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
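    The order-of-access point is easy to demonstrate in any language with multi-dimensional arrays. The short Python/NumPy sketch below is my illustration, not part of the original study (which used Fortran and C); it typically shows a large gap between layout-order and strided traversal:

      # Illustrative sketch: traversing a C-ordered (row-major) array along rows
      # hits memory in layout order; traversing it along columns makes strided
      # accesses that use only one element per cache line fetched.
      import time
      import numpy as np

      a = np.zeros((4000, 4000))          # row-major (C order) by default

      def row_order_sum(m):
          total = 0.0
          for i in range(m.shape[0]):
              total += m[i, :].sum()      # contiguous accesses
          return total

      def col_order_sum(m):
          total = 0.0
          for j in range(m.shape[1]):
              total += m[:, j].sum()      # strided accesses
          return total

      for f in (row_order_sum, col_order_sum):
          t0 = time.perf_counter()
          f(a)
          print(f.__name__, time.perf_counter() - t0)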
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
Failure to hoist invariant memory operations may also be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

#define MAT4D(a,q,i,j,k) (double *)((a)->data \
    + (q)*(a)->strides[0] + (i)*(a)->strides[3] \
    + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to analyze, so it identifies no subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

A similar problem appears in code 6, a Fortran program. The key loop in this program is

do n1 = 1, nh
  nx1 = (n1 - 1) / nz + 1
  nz1 = n1 - nz * (nx1 - 1)
  do n2 = 1, nh
    nx2 = (n2 - 1) / nz + 1
    nz2 = n2 - nz * (nx2 - 1)
    ndx = nx2 - nx1
    ndy = nz2 - nz1
    gxx = grn(1,ndx,ndy)
    gyy = grn(2,ndx,ndy)
    gxy = grn(3,ndx,ndy)
    balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
    balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
  end do
end do

The programmer has written this loop well—there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are invariant with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.
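In C, the same space-for-time trade might look like the sketch below; the table names and bounds are hypothetical stand-ins for those in code 6.

#define NH 4096   /* hypothetical problem sizes */
#define NZ 64

static int nx[NH], nzz[NH];   /* precomputed index tables */

/* Run once, before the time loop. Previously the divisions and
 * multiplications below were recomputed on every time step. */
void precompute_indices(void)
{
    for (int n = 0; n < NH; n++) {
        nx[n]  = n / NZ;
        nzz[n] = n - NZ * nx[n];
    }
}
/* Inside the time loop, the nested n1/n2 loops now just read
 * nx[] and nzz[] instead of redoing the index arithmetic. */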
Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N*M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
    }
}

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
              low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
    }
    else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
    }
    else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
    }
    else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures, as in the sketch below.
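A minimal sketch of that pointer-swap idiom, with hypothetical buffer names and sizes:

#define N 512

static double buf_a[N][N], buf_b[N][N];

void iterate(int nsteps)
{
    double (*oldv)[N] = buf_a;   /* values from the previous iteration */
    double (*newv)[N] = buf_b;   /* values being computed              */

    for (int step = 0; step < nsteps; step++) {
        /* ... compute newv[i][j] from neighboring oldv values ... */
        double (*tmp)[N] = oldv; /* O(1) swap replaces an O(N*N) copy  */
        oldv = newv;
        newv = tmp;
    }
}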
Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.
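The same transformation might look like this in C (a sketch with hypothetical array names); the inner loop accumulates a flag instead of branching:

#include <math.h>

/* Returns nonzero if any element pair differs by more than the
 * tolerance. The comparison result is OR-ed into a flag, so the
 * loop body contains no conditional jump. */
int exceeds_tolerance(const double *oldv, const double *newv, int n)
{
    int flag = 0;
    for (int i = 0; i < n; i++) {
        double delta = fabs(oldv[i] - newv[i]);
        flag |= (delta > 0.001);
    }
    return flag;
}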
A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating-point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since programmers generally tend to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10, where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single-processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x.

Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming.

I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as for the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.


  • CodePlex Daily Summary for Monday, November 19, 2012

CodePlex Daily Summary for Monday, November 19, 2012

Popular Releases

mojoPortal: 2.3.9.4: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4; we will probably drop support for .NET 3.5 once .NET 4.5 is available. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...

VidCoder: 1.4.6 Beta: Brought back the x264 advanced options panel due to popular demand. Thank you for all the feedback. x264 Preset/Profile/Tune/Level has been moved back to the Video tab, along with a copy of the "extra options" string. Added Fast Decode and Zero Latency checkboxes to support multiple Tunes. Added cropping option "None". Audio bitrates that are incompatible with the encoder (such as MP3 > 320 kbps) are no longer preset on the list. Fixed crash on opening VidCoder after de-selecting "re...

Metodología General Ajustada - MGA: 03.05.02: Parmenio's changes: corrections to the F03 programming format; the validation that the activity's unit equals the indicator's unit is left commented out. John's changes: integration of the code changes sent by Parmenio Bonilla; generation of installers. Technical support by email, telephone, and on site.

SPListViewFilter: Version 1.8: Fixed some bugs.

DotNetNuke® Store: 03.01.07: What's New in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER, INSTEAD USE THE DotNetNuke Store Forum! Bugs corrected: replaced some hard-coded references to the default address provider classes by the corresponding interfaces to allow the creation of another address provider with a different name. New features: added the 'pickup' delivery option at checkout; added the 'no delivery' option in the Store Admin ...

Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework: Bundle Transformer 1.6.10: Version: 1.6.10. Published: 11/18/2012. Now almost all of the Bundle Transformer's assemblies are signed (except BundleTransformer.Yui.dll). In BundleTransformer.SassAndScss the SassAndCoffee.Ruby library was replaced by my own implementation of the Sass and SCSS compiler (based on code of the SassAndCoffee.Ruby library version 2.0.2.0). In BundleTransformer.CoffeeScript added support of CoffeeScript version 1.4.0-3. In BundleTransformer.TypeScript added support of TypeScript version 0....

ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.0: +2012-11-18 v3.2.0

BugNET Issue Tracker: BugNET 1.2: Please read our release notes for BugNET 1.2: http://blog.bugnetproject.com/bugnet-1-2-has-been-released Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you will pollute the rating, and you won't get an answer.

Paint.NET PSD Plugin: 2.2.0: Changes: Layer group visibility is now applied to all layers within the group. This greatly improves the visual fidelity of complex PSD files that have hidden layer groups. Layer group names are prefixed so that users can get an indication of the layer group hierarchy. (Paint.NET has a flat list of layers, so the hierarchy is flattened out on load.) The progress bar now reports status when saving PSD files, instead of showing an indeterminate rolling bar. Performance improvement of 1...

CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.7): [IMPROVED] Detailed error message descriptions for FaultException. [FIX] Fixed bug in rule CrmOfflineAccessStateRule, which had an incorrect State attribute name. [FIX] Fixed bug in rule EntityPropertyRule, which was missing the PropertyValue attribute. [FIX] Current connection information was not displayed in the status bar while refreshing the list of entities.

Super Metroid Randomizer: Super Metroid Randomizer v5: v5 - Added command-line functionality for automation purposes; implemented Krankdud's change to randomize the Etecoons' item. NOTE: this version will not accept seeds from a previous version; the seed format has changed by necessity. v4 - Started putting version numbers at the top of the form; added a warning when suitless Maridia is required in a parsed seed. v3 - Changed seed to only generate filename-legal characters. Using old seeds will still work exactly the same. Files can now be saved...

Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.4: Changes: This version includes many bug fixes across all platforms, improvements to NuGet support and...the biggest news of all...full support for both WinRT and WP8. Download contents: Debug and Release assemblies, samples, Readme.txt, License.txt. Packages available on NuGet: Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro invers...

DirectX Tool Kit: November 15, 2012: Added support for WIC2 when available on Windows 8 and Windows 7 with KB 2670838. Cleaned up warning level 4 warnings.

DotNetNuke® Community Edition CMS: 06.02.05: Major highlights: updated the system so that it supports nested folders in the App_Code folder; updated the Global Error Handling so that when errors within the global.asax handler happen, they are caught and shown in a page displaying the original HTTP error code; fixed issue that stopped users from specifying Link URLs that open in a new window. Security fixes: fixed issue in the Member Directory module that could show members to non-authenticated users; fixed issue in the Lists modul...

fastJSON: v2.0.10: added MonoDroid project.

xUnit.net Contrib: xunitcontrib-resharper 0.7 (RS 7.1, 6.1.1): xunitcontrib release 0.6.1 (ReSharper runner). This release provides a test runner plugin for ReSharper 7.1 RTM and 6.1.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release drops 7.0 support and targets the latest revisions of the last two major versions of ReSharper (namely 7.0 and 6.1.1). Copies of the plugin that support previous versions of ReSharper can be downloaded from this release. Also note that all builds work against ALL ...

OnTopReplica: Release 3.4: Update to the 3.x version with major fixes and improvements. Compatible with Windows 8. Now runs (and requires) .NET Framework v4.0. Added relative mode for region selection (allows the user to select regions as margins from the borders of the thumbnail, useful for windows which have a variable size but fixed-size controls, like video players). Improved window seeking when restoring a cloned thumbnail or cloning a window by title or by class. Improved settings persistence. Improved co...

DotSpatial: DotSpatial 1.4: This is a minor release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Tutorials are available. Just want to run the software? An end-user (non-programmer) version is available, branded as MapWindow. Want to add your own feature? Develop a plugin, using the template, and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...

WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.5: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For a compiled version use NuGet: you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: attachable behaviors, AwaitableUI extensions, controls, converters, debugging helpers, extension methods, imaging helpers, IO helpers, VisualTree helpers, samples. Recent changes: Docum...

AcDown?????: AcDown????? v4.3

New Projects

1119P1: So far, I haven't found any bug
coolow: a simple project
Database Tools: Windows application for managing SQL Server databases.
Editable WILEz Books: Sorry for my bad English. I'm Italian. With this project you can write a simple book with images; you can customize text font, color, background image, etc., simply with an editable txt file.
EstimateTracker: Program to track estimated time using XAML, MVVM, WPF, Ninject IoC, NHibernate and Microsoft Prism
ExtJS based ASP.NET 2.0 Controls: About FineUI: ExtJS based professional ASP.NET 2.0 controls. FineUI mission: create No JavaScript, No CSS, No UpdatePanel, No ViewState and No WebServices web ap...
GCU: This project supports the Gedcom Utility, which allows users to review many Gedcom files for certain information.
Heng.Elements: Entity Relationship Modeling
iRoboticsPrototype1: iRobotics Prototype 1. Under development
Jamaican kitchen: This is a website which will display various Jamaican food, with dishes ranging from mild to spicy.
JarvisProject1
Java 2D Game Developing Setup: Set of classes as "interface" between game-powering code and creative game development in Java.
KZ.Express: Project is built to resolve bill management
lbpWGaeBlog: my blog on GAE
Logistic Management System: A logistics freight-forwarding project.
Managed3D: Managed3D is a scene graph API that allows developers to have both high-level and low-level access to objects in a 3-dimensional scene.
MicroTao: MicroTao is for the future.
MVC4 ASPX TourBooking: Website for booking tours
MyDay: Simple little spike, using a todo list. Spiking MVC 4, Twitter Bootstrap, code-first migrations in EF 5, and AppHarbor deployments.
mytestmusicstoremvc: my MVC study project
NDateTime: NDateTime is a JavaScript library that wraps the most common properties and methods of the .NET DateTime object.
onexin: This is a test.
PersiaCaptchaHandler: A Persian captcha that uses numbers in letters
PhoneGap/Cordova Libs, PhoneGap Demos, PhoneGap Solutions, PhoneGap Practices: Here you can find PhoneGap/Cordova libs, demos, solutions, best practices, architectures, etc. Most important, all should be the best and free.
PhotOrganizer: Windows application to organize your pictures. Scans a folder for number and size of pictures. Moves them to a destination by year-month and removes duplicate files.
PoNCE: PoNCE Engine helps the creation of point-and-click quest games
PragTest: List of my projects
Proof of Concept Code: This is pretty much throw-away PoC code. I intend to have a folder for each PoC and the solution file for the PoC under the same folder.
Security Center: Security Center is a handy tool to secure your secret notes. A 512-bit AES algorithm with your private master password is used to protect your data.
SharePoint Metro Sliders: SharePoint 2010 feature that includes two Metro-style image slider web parts: an image slider with just one image, and an image slider with four smaller images in it.
SkilledRES_Portal: SkilledRES Portal consists of organization information & activity profiling of SkilledRES
Super BASE32: This awesome app lets you convert all your music, pictures and video to the brand new BASE32 encoding!
System.Data.Entity.Repository.Filters: System.Data.Entity.Repository.Filters
The Media Store: The Media Store
User Group: Maintain support data for user groups
Vivitap: Vivitap samples and SDK support.
Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2 - Prototype: Web Scripting and Content Creation - DIY Wedding Cake - Assignment 2
webass2: prototype project for the final project for WSCC
WebTechCoursework2.HRSystem: Human Resource System for Web Technology Coursework
Windows Store Application Library: Windows Store Application Library provides a collection of UI controls and utilities for Windows 8 store application developers.
WriteMyName: Code to write the author's name at the top of a source file.
XMPP Chat for Windows 8 Apps Store: XMPP sample for the Windows App Store


  • ASPNET WebAPI REST Guidance

    - by JoshReuben
ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.

The Advantages of REST Methodology over SOAP
· Simpler API for CRUD ops
· Standardized development methodology - consistent and intuitive
· Standards based -> client interop
· Wide industry adoption, ease of use -> easy to add new devs
· Avoid service method signature blowout
· Smaller payloads than SOAP
· Stateless -> no session data means multi-tenant scalability
· Cache-ability
· Testability

General RESTful API Design Overview
· Utilize the HTTP protocol - usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and MIME types.
· Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers.
· Keep the API semantic and resource-centric – a RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data.
· Utilize the URI to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters}
· Entity CRUD operations are matched to HTTP methods:
  · Create - POST / PUT
  · Read – GET - cacheable
  · Update – PUT
  · Delete - DELETE
· Use URIs to represent hierarchies - resources in RESTful URLs are often chained.
· Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent; the rest are idempotent (if DELETE flags records instead of deleting them).
· Cache indication - leverage HTTP headers to label cacheable content and indicate the permitted duration of cache.
· PUT vs POST - The client uses PUT when it determines which URI (Id key) the new resource should have. The client uses POST when the server determines the key. PUT takes a second param – the id. POST creates a new resource: the server assigns the URI for the new object and returns this URI as part of the response message. Note: the PUT method replaces the entire entity; that is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred. DELETE deletes a resource at a specified URI – it typically takes an id param.
· Leverage common HTTP response codes in response headers:
  200 OK: Success
  201 Created - used on a POST request when creating a new resource.
  304 Not Modified: no new data to return.
  400 Bad Request: invalid request.
  401 Unauthorized: authentication.
  403 Forbidden: authorization.
  404 Not Found – entity does not exist.
  406 Not Acceptable – bad params.
  409 Conflict - for POST / PUT requests if the resource already exists.
  500 Internal Server Error
  503 Service Unavailable
· Leverage uncommon HTTP verbs to reduce payload sizes:
  HEAD - retrieves just the resource meta-information.
  OPTIONS - returns the actions supported for the specified resource.
  PATCH - partial modification of a resource.
· When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state.
· Utilize headers for content negotiation, caching, authorization, throttling:
  o Content negotiation – choose representation (e.g. JSON or XML and version), language & compression. Signal via RequestHeader.Accept & ResponseHeader.Content-Type:
    Accept: application/json;version=1.0
    Accept-Language: en-US
    Accept-Charset: UTF-8
    Accept-Encoding: gzip
  o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time).
  o Authorization - basic HTTP authentication uses the RequestHeader.Authorization to specify a base64 encoded string "username:password". Can be used in combination with SSL/TLS (HTTPS) and leverage OAuth2 3rd party token-claims authorization.
    Authorization: Basic sQJlaTp5ZWFslylnaNZ=
  o Rate limiting - not currently part of HTTP, so specify non-standard headers prefixed with X- in the ResponseHeader:
    X-RateLimit-Limit: 10000
    X-RateLimit-Remaining: 9990
· HATEOAS methodology - Hypermedia As The Engine Of Application State – leverage the API as a state machine, where resources are states and the transitions between states are links between resources, included in their representation (hypermedia) – get API metadata signatures from the response Link header. In a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client.
· Error responses - do not just send back a 200 OK with every response. A response should consist of an HTTP error status code (JQuery has automated support for this), a human readable message, a link to a meaningful state transition, and the original data payload that was problematic.
· The URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuffing all your calls into just four methods is not as crazy as it sounds.
· Scoping - path variables look like you’re traversing a hierarchy, and query variables look like you’re passing arguments into an algorithm.
· Mapping URIs to controllers - having one controller for each resource is not a rule – you can consolidate, routing requests to the appropriate controller and action method.
· Keep URLs consistent - sometimes it’s tempting to just shorten our URIs. I do not recommend this, as it can cause confusion.
· Join naming – for m-m entity relations there may be multiple hierarchy traversal paths.
· Routing – a useful level of indirection for versioning and server backend mocking in development.

ASP.NET WebAPI Considerations

ASP.NET WebAPI implements a lot (but not all) of these RESTful API design considerations as part of its infrastructure and via its coding conventions.

Overview

When developing an API there are basically three main steps:
1. Plan out your URIs.
2. Set up return values and response codes for your URIs.
3. Implement a framework for your API.

Design
· Leverage the Models MVC folder.
· Repositories – support IoC for tests, abstraction.
· Create DTO classes – a level of indirection decouples & allows swap out.
· Self links can be generated using the UrlHelper.
· Use IQueryable to support projections across the wire.
· Models can support RESTful navigation properties – ICollection<T>.
· Async mechanism for long-running ops - return a response with a ticket; the client can then poll or be pushed the final result later.
· Design for testability - test using HttpClient, JQuery ($.getJSON, $.each), Fiddler, browser debug. Leverage IDependencyResolver – an IoC wrapper for mocking.
· Easy debugging - IE F12 developer tools: Network tab, Request Headers tab.

Routing
· HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.)
· {id}, if present, is matched to a method parameter named id.
· Query parameters are matched to parameter names when possible.
· Done in config via Routes.MapHttpRoute – similar to MVC routing.
· Can alternatively:
  o decorate controller action methods with HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut, + the ActionAttribute
  o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD
  o use NonActionAttribute to prevent a method from getting invoked as an action
· Route table URIs can support placeholders (via curly braces {}) – these can support default values, constraints, and optional values.
· The framework selects the first route in the route table that matches the URI.

Response customization
· Response code: by default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non-GET methods should return HttpResponseMessage.
· Location: when the server creates a resource, it should include the URI of the new resource in the Location header of the response.

public HttpResponseMessage PostProduct(Product item)
{
    item = repository.Add(item);
    var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);
    string uri = Url.Link("DefaultApi", new { id = item.Id });
    response.Headers.Location = new Uri(uri);
    return response;
}

Validation
· Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute.
· Check payloads using ModelState.IsValid.
· Under-posting – leaving out values in the JSON payload -> the JSON formatter assigns a default value. Use with RequiredAttribute.
· Over-posting - if the model has read-only properties -> use a DTO instead of the model.
· Can hook into the pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting.

Config
· Done in the App_Start folder > WebApiConfig.cs – static Register method with an HttpConfiguration param. The HttpConfiguration object contains the following members:
  DependencyResolver - Enables dependency injection for controllers.
  Filters - Action filters – e.g. exception filters.
  Formatters - Media-type formatters; by default contains JsonFormatter, XmlFormatter.
  IncludeErrorDetailPolicy - Specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages.
  Initializer - A function that performs final initialization of the HttpConfiguration.
  MessageHandlers - HTTP message handlers - plug into the pipeline.
  ParameterBindingRules - A collection of rules for binding parameters on controller actions.
  Properties - A generic property bag.
  Routes - The collection of routes.
  Services - The collection of services.
· Configure JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects.

Documentation generation
· Create a help page for a web API by using the ApiExplorer class.
· The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection.
· Create the help page as an MVC view:

public ILookup<string, ApiDescription> GetApis()
{
    return _explorer.ApiDescriptions.ToLookup(
        api => api.ActionDescriptor.ControllerDescriptor.ControllerName);
}

· Provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g. extract XML comments or define custom attributes to apply to the controller:

[ApiDoc("Gets a product by ID.")]
[ApiParameterDoc("id", "The ID of the product.")]
public HttpResponseMessage Get(int id)

· GlobalConfiguration.Configuration.Services – add the documentation provider.
· To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute.

Plugging into the Message Handler pipeline
· Plug into the request / response pipeline – derive from DelegatingHandler and override the SendAsync method – e.g. for logging error codes, adding a custom response header.
· Can be applied globally or to a specific route.

Exception Handling
· Throw HttpResponseException on method failures – specify an HttpStatusCode enum value – examine this enum, as its values map well to typical op problems.
· Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on a Controller or its action methods, or add to the global HttpConfiguration.Filters collection.
· The HttpError object provides a consistent way to return error information in the HttpResponseException response body.
· For model validation, you can pass the model state to CreateErrorResponse, to include the validation errors in the response:

public HttpResponseMessage PostProduct(Product item)
{
    if (!ModelState.IsValid)
    {
        return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);

Cookie Management
· Cookie header in a request and Set-Cookie headers in a response - a collection of CookieState objects.
· Specify Expiry, max-age: resp.Headers.AddCookies(new CookieHeaderValue[] { cookie });

Internet Media Types, formatters and serialization
· Defaults to application/json.
· Request Accept header and response Content-Type header.
· Determines how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data.
· Customizable formatters can be inserted into the pipeline.
· POCO serialization is opt-out via JsonIgnoreAttribute, or use DataMemberAttribute for opt-in.
· The JSON serializer leverages NewtonSoft Json.NET.
· Loosely structured JSON objects are serialized as JObject, which derives from Dynamic.
· To handle circular references in JSON: json.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.All -> {"$ref":"1"}.
· To preserve object references in XML: [DataContract(IsReference=true)].
· Content negotiation:
  Accept: which media types are acceptable for the response, such as "application/json", "application/xml", or a custom media type such as "application/vnd.example+xml"
  Accept-Charset: which character sets are acceptable, such as UTF-8 or ISO 8859-1.
  Accept-Encoding: which content encodings are acceptable, such as gzip.
  Accept-Language: the preferred natural language, such as "en-us".
  o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.)
· Controller methods can take JSON representations of DTOs as params – auto-deserialization.
· Typical JQuery GET request:

function find() {
    var id = $('#prodId').val();
    $.getJSON("api/products/" + id,
        function (data) {
            var str = data.Name + ': $' + data.Price;
            $('#product').text(str);
        })
    .fail(
        function (jqXHR, textStatus, err) {
            $('#product').text('Error: ' + err);
        });
}

· Typical GET response:

HTTP/1.1 200 OK
Server: ASP.NET Development Server/10.0.0.0
Date: Mon, 18 Jun 2012 04:30:33 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 175
Connection: Close

[{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer","Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost":2.05}]

True OData support
· Leverage the query options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable] attribute:

[Queryable]
public IQueryable<Supplier> GetSuppliers()

· Query: ~/Suppliers?$filter=Name eq ‘Microsoft’
· Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == “Microsoft”)
· Will pass the result to the formatter.
· True support for the OData format is still limited - no support for creates, updates, deletes, $metadata and code generation etc.
· vnext: the ability to configure how EditLinks, SelfLinks and Ids are generated.

Self Hosting

No dependency on ASP.NET or IIS:

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();

Tracing
· Traceability tools, metrics – e.g. send to Nagios.
· Use your choice of tracing/logging library, whether that is ETW, NLog, log4net, or simply System.Diagnostics.Trace.
· To collect traces, implement the ITraceWriter interface:

public class SimpleTracer : ITraceWriter
{
    public void Trace(HttpRequestMessage request, string category, TraceLevel level,
        Action<TraceRecord> traceAction)
    {
        TraceRecord rec = new TraceRecord(request, category, level);
        traceAction(rec);
        WriteTrace(rec);

· Register the service with config.
· Programmatically trace – there are helper extension methods: Configuration.Services.GetTraceWriter().Info(
· Performance tracing - the pipeline writes traces at the beginning and end of an operation - the TraceRecord class includes a TimeStamp property and a Kind property set to TraceKind.Begin / End.

Security
· Roles class methods: RoleExists, AddUserToRole.
· WebSecurity class methods: UserExists, CreateUserAndAccount.
· Request.IsAuthenticated.
· Leverage the HTTP 401 (Unauthorized) response.
· [AuthorizeAttribute(Roles="Administrator")] – can be applied to a Controller or its action methods.
· See the section in the WebApi document on "Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS. --> The Web API host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer. http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/
· Use the MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions.
· Then use AuthorizeAttribute on controllers and methods for role mapping - http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/
· An alternate option here is to rely on an MVC app: http://forums.asp.net/t/1831767.aspx/1


  • Suspend not working after kernel update on an HP Envy14 1050

    - by leoxweb
I just updated Ubuntu 12.04 to kernel 3.2.0-30 and, together with it, a lot of packages in the system. I am running it on an HP Envy14 1050. When I first made a fresh install of clean Ubuntu 12.04, I had the problem that when restoring from suspend the screen was black, although some backlight was there. I could fix it following this: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/989674/comments/20. Now, after the update, the black screen after suspend has reappeared, but with no backlight at all and the LED in the Caps Lock key blinking. The laptop is using an ATI Radeon 5600 with the proprietary drivers. During the update process there was an error with depmod. You can see the /var/log/dist-upgrade/apt-term.log file at the end.

UPDATE: suspend is not working at all. The problem is not when restoring from suspend; when I try to suspend, the system gets blocked with the fan running. The only option is to press the power button.

Log started: 2012-09-12 00:46:46
(Reading database ... 198909 files and directories currently installed.)
Removing icedtea-7-jre-cacao ...
Selecting previously unselected package linux-image-3.2.0-30-generic.
(Reading database ... 198901 files and directories currently installed.)
Unpacking linux-image-3.2.0-30-generic (from .../linux-image-3.2.0-30-generic_3.2.0-30.48_amd64.deb) ...
Done.
Preparing to replace icedtea-7-jre-jamvm 7~u3-2.1.1~pre1-1ubuntu3 (using .../icedtea-7-jre-jamvm_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement icedtea-7-jre-jamvm ...
Preparing to replace openjdk-7-jre-lib 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jre-lib_7u7-2.3.2-1ubuntu0.12.04.1_all.deb) ...
Unpacking replacement openjdk-7-jre-lib ...
Preparing to replace openjdk-7-jre-headless 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jre-headless_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement openjdk-7-jre-headless ...
Preparing to replace python-problem-report 2.0.1-0ubuntu12 (using .../python-problem-report_2.0.1-0ubuntu13_all.deb) ...
Unpacking replacement python-problem-report ...
Preparing to replace python-apport 2.0.1-0ubuntu12 (using .../python-apport_2.0.1-0ubuntu13_all.deb) ...
Unpacking replacement python-apport ...
Preparing to replace apport 2.0.1-0ubuntu12 (using .../apport_2.0.1-0ubuntu13_all.deb) ...
apport stop/waiting
Unpacking replacement apport ...
Preparing to replace apport-gtk 2.0.1-0ubuntu12 (using .../apport-gtk_2.0.1-0ubuntu13_all.deb) ...
Unpacking replacement apport-gtk ...
Preparing to replace firefox-globalmenu 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-globalmenu_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement firefox-globalmenu ...
Preparing to replace firefox 15.0+build1-0ubuntu0.12.04.1 (using .../firefox_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement firefox ...
Preparing to replace firefox-gnome-support 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-gnome-support_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement firefox-gnome-support ...
Preparing to replace firefox-locale-en 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-locale-en_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement firefox-locale-en ...
Preparing to replace firefox-locale-es 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-locale-es_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement firefox-locale-es ...
Preparing to replace firefox-locale-zh-hans 15.0+build1-0ubuntu0.12.04.1 (using .../firefox-locale-zh-hans_15.0.1+build1-0ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement firefox-locale-zh-hans ...
Preparing to replace totem-mozilla 3.0.1-0ubuntu21 (using .../totem-mozilla_3.0.1-0ubuntu21.1_amd64.deb) ...
Unpacking replacement totem-mozilla ...
Preparing to replace libtotem0 3.0.1-0ubuntu21 (using .../libtotem0_3.0.1-0ubuntu21.1_amd64.deb) ...
Unpacking replacement libtotem0 ...
Preparing to replace totem-plugins 3.0.1-0ubuntu21 (using .../totem-plugins_3.0.1-0ubuntu21.1_amd64.deb) ...
Unpacking replacement totem-plugins ...
Preparing to replace totem 3.0.1-0ubuntu21 (using .../totem_3.0.1-0ubuntu21.1_amd64.deb) ...
Unpacking replacement totem ...
Preparing to replace totem-common 3.0.1-0ubuntu21 (using .../totem-common_3.0.1-0ubuntu21.1_all.deb) ...
Unpacking replacement totem-common ...
Preparing to replace gir1.2-totem-1.0 3.0.1-0ubuntu21 (using .../gir1.2-totem-1.0_3.0.1-0ubuntu21.1_amd64.deb) ...
Unpacking replacement gir1.2-totem-1.0 ...
Preparing to replace glib-networking-common 2.32.1-1ubuntu1 (using .../glib-networking-common_2.32.1-1ubuntu2_all.deb) ...
Unpacking replacement glib-networking-common ...
Preparing to replace glib-networking 2.32.1-1ubuntu1 (using .../glib-networking_2.32.1-1ubuntu2_amd64.deb) ...
Unpacking replacement glib-networking ...
Preparing to replace glib-networking-services 2.32.1-1ubuntu1 (using .../glib-networking-services_2.32.1-1ubuntu2_amd64.deb) ...
Unpacking replacement glib-networking-services ...
Preparing to replace linux-firmware 1.79 (using .../linux-firmware_1.79.1_all.deb) ...
Unpacking replacement linux-firmware ...
Preparing to replace linux-generic 3.2.0.29.31 (using .../linux-generic_3.2.0.30.32_amd64.deb) ...
Unpacking replacement linux-generic ...
Preparing to replace linux-image-generic 3.2.0.29.31 (using .../linux-image-generic_3.2.0.30.32_amd64.deb) ...
Unpacking replacement linux-image-generic ...
Selecting previously unselected package linux-headers-3.2.0-30.
Unpacking linux-headers-3.2.0-30 (from .../linux-headers-3.2.0-30_3.2.0-30.48_all.deb) ...
Selecting previously unselected package linux-headers-3.2.0-30-generic.
Unpacking linux-headers-3.2.0-30-generic (from .../linux-headers-3.2.0-30-generic_3.2.0-30.48_amd64.deb) ...
Preparing to replace linux-headers-generic 3.2.0.29.31 (using .../linux-headers-generic_3.2.0.30.32_amd64.deb) ...
Unpacking replacement linux-headers-generic ...
Preparing to replace linux-libc-dev 3.2.0-29.46 (using .../linux-libc-dev_3.2.0-30.48_amd64.deb) ...
Unpacking replacement linux-libc-dev ...
Preparing to replace openjdk-7-jre 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jre_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement openjdk-7-jre ...
Preparing to replace openjdk-7-jdk 7~u3-2.1.1~pre1-1ubuntu3 (using .../openjdk-7-jdk_7u7-2.3.2-1ubuntu0.12.04.1_amd64.deb) ...
Unpacking replacement openjdk-7-jdk ...
Preparing to replace policykit-1-gnome 0.105-1ubuntu3 (using .../policykit-1-gnome_0.105-1ubuntu3.1_amd64.deb) ...
Unpacking replacement policykit-1-gnome ...
Preparing to replace xserver-xorg-input-synaptics 1.6.2-1ubuntu1~precise1 (using .../xserver-xorg-input-synaptics_1.6.2-1ubuntu1~precise2_amd64.deb) ...
Unpacking replacement xserver-xorg-input-synaptics ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Processing triggers for hicolor-icon-theme ...
Processing triggers for shared-mime-info ...
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Unknown media type in type 'uri/mms'
Unknown media type in type 'uri/mmst'
Unknown media type in type 'uri/mmsu'
Unknown media type in type 'uri/pnm'
Unknown media type in type 'uri/rtspt'
Unknown media type in type 'uri/rtspu'
Processing triggers for man-db ...
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for gconf2 ...
Processing triggers for libglib2.0-0:i386 ...
Processing triggers for libglib2.0-0 ...
Setting up linux-image-3.2.0-30-generic (3.2.0-30.48) ...
Running depmod.
update-initramfs: deferring update (hook will be called later)
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
Error! Problems with depmod detected. Automatically uninstalling this module.
DKMS: Install Failed (depmod problems). Module rolled back to built state.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
update-initramfs: Generating /boot/initrd.img-3.2.0-30-generic
run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-30-generic
Found initrd image: /boot/initrd.img-3.2.0-30-generic
Found linux image: /boot/vmlinuz-3.2.0-29-generic
Found initrd image: /boot/initrd.img-3.2.0-29-generic
Found linux image: /boot/vmlinuz-3.2.0-23-generic
Found initrd image: /boot/initrd.img-3.2.0-23-generic
Found memtest86+ image: /boot/memtest86+.bin
Found Windows 7 (loader) on /dev/sda1
Found Windows Recovery Environment (loader) on /dev/sda2
Found Windows Recovery Environment (loader) on /dev/sda3
done
Setting up python-problem-report (2.0.1-0ubuntu13) ...
Setting up python-apport (2.0.1-0ubuntu13) ...
Setting up apport (2.0.1-0ubuntu13) ...
apport start/running
Setting up apport-gtk (2.0.1-0ubuntu13) ...
Setting up firefox (15.0.1+build1-0ubuntu0.12.04.1) ...
Please restart all running instances of firefox, or you will experience problems.
Setting up firefox-globalmenu (15.0.1+build1-0ubuntu0.12.04.1) ...
Setting up firefox-gnome-support (15.0.1+build1-0ubuntu0.12.04.1) ...
Setting up firefox-locale-en (15.0.1+build1-0ubuntu0.12.04.1) ...
Setting up firefox-locale-es (15.0.1+build1-0ubuntu0.12.04.1) ...
Setting up firefox-locale-zh-hans (15.0.1+build1-0ubuntu0.12.04.1) ...
Setting up libtotem0 (3.0.1-0ubuntu21.1) ...
Setting up totem-common (3.0.1-0ubuntu21.1) ...
Setting up totem (3.0.1-0ubuntu21.1) ...
Setting up totem-mozilla (3.0.1-0ubuntu21.1) ...
Setting up gir1.2-totem-1.0 (3.0.1-0ubuntu21.1) ...
Setting up totem-plugins (3.0.1-0ubuntu21.1) ...
Setting up glib-networking-common (2.32.1-1ubuntu2) ...
Setting up glib-networking-services (2.32.1-1ubuntu2) ...
Setting up glib-networking (2.32.1-1ubuntu2) ...
Setting up linux-firmware (1.79.1) ...
Setting up linux-image-generic (3.2.0.30.32) ...
Setting up linux-generic (3.2.0.30.32) ...
Setting up linux-headers-3.2.0-30 (3.2.0-30.48) ...
Setting up linux-headers-3.2.0-30-generic (3.2.0-30.48) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.2.0-30-generic /boot/vmlinuz-3.2.0-30-generic
Setting up linux-headers-generic (3.2.0.30.32) ...
Setting up linux-libc-dev (3.2.0-30.48) ...
Setting up policykit-1-gnome (0.105-1ubuntu3.1) ...
Setting up xserver-xorg-input-synaptics (1.6.2-1ubuntu1~precise2) ...
Setting up openjdk-7-jre-headless (7u7-2.3.2-1ubuntu0.12.04.1) ...
Installing new version of config file /etc/java-7-openjdk/security/java.security ...
Installing new version of config file /etc/java-7-openjdk/jvm-amd64.cfg ...
Setting up openjdk-7-jre-lib (7u7-2.3.2-1ubuntu0.12.04.1) ...
Setting up icedtea-7-jre-jamvm (7u7-2.3.2-1ubuntu0.12.04.1) ...
Setting up openjdk-7-jre (7u7-2.3.2-1ubuntu0.12.04.1) ...
Setting up openjdk-7-jdk (7u7-2.3.2-1ubuntu0.12.04.1) ...
update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/bin/jcmd to provide /usr/bin/jcmd (jcmd) in auto mode.
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Log ended: 2012-09-12 00:49:16


  • error: The specified device is claimed by another driver (uvcvideo) on the host operating system

    - by kamal
I am trying to run an Ubuntu 11 guest in VMware Player 5 on an Ubuntu 12.04 host, and when trying to run the guest I get this error and no display:

The specified device is claimed by another driver (uvcvideo) on the host operating system. The device might be in use. To continue, the device will first be disconnected from its current driver.

I also found a sort of related comment (RESOLVED) on a similar issue on the Ubuntu forums.

  • PetStore 2.0 - java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/Pet

    - by Harry Pham
    I downloaded PetStore 2.0 from https://blueprints.dev.java.net/servlets/ProjectDocumentList?folderID=5315&expandFolder=5315&folderID=0. I tried to build it in NetBeans 6.8 and got this error:
    SEVERE: Exception while preparing the app java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/PetstoreDB' in SerialContext [Root exception is javax.naming.NameNotFoundException: PetstoreDB not found]
    It seems it can't find 'jdbc/PetstoreDB'. So I went back to the website, and it looks like I have to build the database via an Ant script. At the PetStore home directory, I ran these commands:
    ant setup
    ant run
    And I got the same error when I try ant run:
    [exec] Deprecated syntax, instead use:
    [exec] asadmin --port 4848 --host localhost --passwordfile /Users/KingdomHeart/.asadminpass --user admin deploy [options] ...
    [exec] com.sun.enterprise.admin.cli.CommandException: remote failure: Exception while preparing the app : java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'jdbc/PetstoreDB' in SerialContext [Root exception is javax.naming.NameNotFoundException: PetstoreDB not found]
    [exec]
    [exec]
    [exec] Command deploy failed.
    Any idea what is happening?
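    The NameNotFoundException means the application server has no JDBC resource registered under the name the app looks up, so the database/resource creation step has to succeed before deployment can. A minimal Java sketch of what the failing lookup amounts to (the class and method here are illustrative, not from PetStore's source; only the JNDI name is taken from the error):

        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class PetstoreDataSourceCheck {
            // Fails with javax.naming.NameNotFoundException when no JDBC
            // resource named "jdbc/PetstoreDB" exists on the server, which is
            // the same failure reported during deployment above.
            public static DataSource lookup() throws NamingException {
                return (DataSource) new InitialContext().lookup("jdbc/PetstoreDB");
            }
        }

    If the lookup still fails after ant setup, verifying that the GlassFish connection pool and the jdbc/PetstoreDB resource were actually created (for example with asadmin list-jdbc-resources) would be a reasonable next step.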

  • Using telerik radGrid - how to set the Date format for autogenerated column in edit mode

    - by Mark Breen
    Hello all, I am using VS2008 and Telerik RadGrid version 2010.1.519.35. I have about 50 DNN modules using Telerik RadGrid, and I need to display my dates in dd/mm/yy format. This is easy to do in view mode, but when I switch to edit mode it is more of a struggle. I can write a snippet of code to reformat the displayed date values to dd/mm/yy, but for inserts the user must enter mm/dd/yy. In other words, I need to change the culture of the form to en-GB. In my DotNetNuke app I have made a change to web.config, but it still assumes the en-US format. I am not sure whether I need to set this at the web.config level, the page level, or on the column within the control. I have been struggling with this for a month or more, and any help would be appreciated. Thanks, Mark Breen, Ireland, BMW R80GS 1987
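    One hedged possibility, assuming a standard ASP.NET globalization section is what the web.config change was aiming at (whether DotNetNuke or a page-level Culture/UICulture directive overrides it in this setup is an open question):

        <configuration>
          <system.web>
            <!-- Forces en-GB date parsing/formatting application-wide,
                 which RadGrid's edit-mode date inputs should pick up. -->
            <globalization culture="en-GB" uiCulture="en-GB" />
          </system.web>
        </configuration>

    If DotNetNuke sets the thread culture per user profile, the setting may need to be applied at the page or control level instead.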

  • Maven 3 - duplicate declaration of version

    - by Taylor Leese
    I just upgraded to m2eclipse version 0.10, which includes an embedded Maven 3, and I'm now getting the following error when trying to run a build. I've already set Maven > Installations to my Maven 2 installation, but it had no effect. How do I resolve this?
    [INFO] Scanning for projects...
    [ERROR] The build could not read 1 project -> [Help 1]
    [ERROR] The project com.stuff:sutff-web:0.0.1-SNAPSHOT (C:\development\taylor\stuff\pom.xml) has 1 error
    [ERROR] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.google.appengine:appengine-api-labs:jar -> duplicate declaration of version ${gae.version}
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
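    The message itself names the fix: Maven 3 refuses to build a pom.xml that declares the same groupId:artifactId twice, something Maven 2 silently tolerated, which is presumably why pointing m2eclipse at a Maven 2 installation had no effect. A sketch of the one declaration that should survive (coordinates are taken from the error output; the second <dependency> element with the same groupId/artifactId should be deleted):

        <dependency>
          <groupId>com.google.appengine</groupId>
          <artifactId>appengine-api-labs</artifactId>
          <version>${gae.version}</version>
        </dependency>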

  • How to include an external jar in gwt client side?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to explicitly declare that I intend to use this external library, so I thought adding the following line to my App.gwt.xml would work:
    <inherits name='org.apache.commons.validator.GenericValidator'/>
    Instead I get this error:
    Loading inherited module 'org.apache.commons.validator.GenericValidator'
    [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source?
    [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159)
    [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159)
    [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159)
    Does anyone know how to make this work?

  • How to include an external jar in a GWT module?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to explicitly declare that I intend to use this external library, so I thought adding the following line to my App.gwt.xml would work:
    <inherits name='org.apache.commons.validator.GenericValidator'/>
    Instead I get the same error as in the previous post:
    Loading inherited module 'org.apache.commons.validator.GenericValidator'
    [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source?
    [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159)
    [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159)
    [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159)
    I have commons.validator-1.3.1.jar in war/WEB-INF/lib, and I am using Eclipse with the Google Plugin. Does anyone know how to make this work?
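    Worth noting for both of these posts: <inherits> can only pull in GWT modules, that is, jars that ship Java source plus a .gwt.xml module file, and commons-validator is a plain bytecode jar, so the GWT compiler cannot translate GenericValidator into browser JavaScript no matter what is inherited. A hedged sketch of the usual workaround, keeping the jar on the server and exposing it over GWT-RPC (the service name, path, and method are illustrative, not from the original posts):

        import com.google.gwt.user.client.rpc.RemoteService;
        import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

        // Client-side RPC interface. The matching server-side servlet is free
        // to call org.apache.commons.validator.GenericValidator.isEmail(value),
        // because jars in war/WEB-INF/lib are visible to server code even
        // though the GWT compiler cannot compile them to JavaScript.
        @RemoteServiceRelativePath("validate")
        public interface ValidationService extends RemoteService {
            boolean isEmail(String value);
        }

    The async counterpart (ValidationServiceAsync) and the servlet implementation would follow the standard GWT-RPC pattern.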

  • Having trouble running code analysis from command prompt with msbuild

    - by devlife
    I'm using VS2010 RC while targeting .NET 3.5. I can run code analysis via Visual Studio without a problem. However, when I try to run code analysis on our CI server, it doesn't get executed. When I attempt to build using MSBuild 4.0, I get the following exception:
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\CodeAnalysis\Microsoft.CodeAnalysis.targets(129,9): error MSB4018: The "CodeAnalysis" task failed unexpectedly.
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\CodeAnalysis\Microsoft.CodeAnalysis.targets(129,9): error MSB4018: System.TypeLoadException: Could not load type 'System.Runtime.Versioning.TargetFrameworkAttribute' from assembly 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
    Like I said, it works fine when I run it through VS.

  • I need help debugging apache with gdb (no debugging symbols found)

    - by blockhead
    I'm trying to debug a segfault in PHP/Apache running on RHEL4. This is a production server, so I'm installing a separate copy of Apache and PHP and running Apache through gdb. When I load httpd in gdb and then do run -X, I get the error:
    (no debugging symbols found)...Error while reading shared library symbols: Dwarf Error: Cannot handle DW_FORM_strp in DWARF reader.
    The Apache manual says something about setting EXTRA_CFLAGS to "-g", and I've tried configuring with --enable-maintainer-mode, but I don't seem to be getting far.

  • Mailman Error / Cpanel

    - by Faith In Unseen Things
    Mailman is giving off this error when any changes are made to the list: ======================= Bug in Mailman version 2.1.14 We're sorry, we hit a bug! Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs. Ran: /usr/local/cpanel/bin/mailman-install --force Then it says at the end: Updating Usenet watermarks - nothing to update here Nothing to do. updating old qfiles cp: cannot remove `/usr/local/cpanel/img-sys/gnu-head-tiny.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mailman-large.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mailman.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mm-icon.png': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/PythonPowered.png': Permission denied directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ca (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/uk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/it (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/es (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/en (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/da (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/eu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/no (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sv (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/tr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_CN (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/nl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ast (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_TW (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ko (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ro (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ja (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ru (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ia (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/gl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/vi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/lt (fixing) directory permissions must be 02775: 
/usr/local/cpanel/3rdparty/mailman/templates/cs (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt_BR (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/he (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ar (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/et (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/de (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches/config (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ca (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/uk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/it (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/es (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/da (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/eu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/no (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sv (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/tr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_CN (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/nl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ast (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_TW (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ko (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ro (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ja (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ru (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ia (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/gl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/vi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/lt (fixing) directory permissions 
must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/cs (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt_BR (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/he (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ar (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/et (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/de (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sr (fixing) Warning: Private archive directory is other-executable (o+x). This could allow other users on your system to read private archives. If you're on a shared multiuser system, you should consult the installation manual on how to fix this. Problems found: 76 Re-run as mailman (or root) with -f flag to fix must be privileged to use -u must be privileged to use -u Unable to touch file /var/cpanel/mailman2: Permission denied at /usr/local/cpanel/bin/mailman-install line 275. [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity 2254: Unable to set RLIMIT_AS to infinity at /usr/local/cpanel/Cpanel/Rlimit.pm line 113 Cpanel::Rlimit::set_rlimit_to_infinity() called at /usr/local/cpanel/scripts/restartsrv_mailman line 18 eval {...} called at /usr/local/cpanel/scripts/restartsrv_mailman line 15 warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity 2254: Unable to set RLIMIT_AS to infinity [2012-04-13 19:33:55 -0500] warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied at /usr/local/cpanel/Cpanel/SafeDir/MK.pm line 153 Cpanel::SafeDir::MK::safemkdir('/var/run/restartsrv/startup', 0700) called at /usr/local/cpanel/Cpanel/RestartSrv.pm line 756 Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47 warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup at /usr/local/cpanel/Cpanel/RestartSrv.pm line 759 Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47 warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup Ran this: /usr/local/cpanel/bin/mailman-install --force -f Same message as above. root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/logs/error Apr 13 19:33:55 2012 mailmanctl(2255): PID unreadable in: /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid Apr 13 19:33:55 2012 mailmanctl(2255): [Errno 2] No such file or directory: '/usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid' Apr 13 19:33:55 2012 mailmanctl(2255): Is qrunner even running? 
root@server2 [~]# /scripts/restartsrv_mailman --status
mailmanctl (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start) running as mailman with PID 24070
3rdparty/mailman/bin/qrunner (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s) running as mailman with PID 24078
root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid
24070
root@server2 [~]# ls -lah /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid
-rw-rw---- 1 mailman mailman 6 Apr 13 19:47 /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid
root@server2 [~]# ps aux | grep python
mailman 4557 0.0 0.1 10484 6044 ? S 19:40 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s
mailman 24070 0.0 0.1 14268 4480 ? Ss 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start
mailman 24071 1.1 0.1 14052 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=ArchRunner:0:1 -s
mailman 24072 1.0 0.1 13444 6112 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=BounceRunner:0:1 -s
mailman 24073 1.0 0.1 13040 6108 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=CommandRunner:0:1 -s
mailman 24074 1.0 0.1 13484 6104 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=IncomingRunner:0:1 -s
mailman 24075 1.0 0.1 12940 6136 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=NewsRunner:0:1 -s
mailman 24076 1.0 0.1 13700 6172 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=OutgoingRunner:0:1 -s
mailman 24077 1.0 0.1 13416 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=VirginRunner:0:1 -s
mailman 24078 0.9 0.1 13944 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s
root 24177 0.0 0.0 5428 756 pts/0 S+ 19:48 0:00 grep python

  • How to burn WIM image to DVD or to convert it to ISO?

    - by dv9000
    I need to run Vista recovery, but its boot loader is destroyed. The only way to run recovery seems to be burning a temporary DVD from base.wim and boot.wim taken from the Vista recovery partition. How can I do this without having any Vista installation or DVD? I can download anything, but only from legitimate sources (e.g. not torrents). Thank you in advance.

  • Canon EDSDK 2.8 (Xcode 3.2.2 - Snow Leopard 10.6.3)

    - by Scott Langendyk
    I'm trying to build an application using the Canon EDSDK 2.8. I created a new Cocoa Application project in Xcode and imported the headers and framework files. When I try to build and run (without writing any code), I get two warnings saying the frameworks are missing x86_64 architecture files. If I try to import the "EDSDK.h" header file, I end up with about 100 miscellaneous errors. I've tried changing the architecture to i386, but when I build and run I get a debugger error that says "Cannot access memory at address 0x0". The odd thing is that the example applications bundled with the SDK compile and run without issues. Anyone have any ideas as to why this is happening?

  • Fluent NHibernate Map Enum as Lookup Table

    - by Jaimal Chohan
    I have the following (simplified):

    public enum Level { Bronze, Silver, Gold }

    public class Member
    {
        public virtual Level MembershipLevel { get; set; }
    }

    public class MemberMap : ClassMap<Member>
    {
        Map(x => x.MembershipLevel);
    }

    This creates a table with a column called MembershipLevel whose value is the enum's string value. What I want is for the entire enum to be created as a lookup table, with the Member table referencing it by the integer value as the FK. Also, I want to do this without bas***izing my model. Any ideas? [And I want a time machine] [With 2 cup holders]

  • soapfault: Couldn't create SOAP message

    - by polarw
    11-23 16:19:30.085: SoapFault - faultcode: 'S:Client' faultstring: 'Couldn't create SOAP message due to exception: Unable to create StAX reader or writer' faultactor: 'null' detail: null
    11-23 16:19:30.085: at org.ksoap2.serialization.SoapSerializationEnvelope.parseBody(SoapSerializationEnvelope.java:121)
    11-23 16:19:30.085: at org.ksoap2.SoapEnvelope.parse(SoapEnvelope.java:137)
    11-23 16:19:30.085: at org.ksoap2.transport.Transport.parseResponse(Transport.java:63)
    11-23 16:19:30.085: at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:104)
    11-23 16:19:30.085: at com.mobilebox.webservice.CommonWSClient.callWS(CommonWSClient.java:247)
    11-23 16:19:30.085: at com.mobilebox.webservice.CommonWSClient.access$1(CommonWSClient.java:217)
    11-23 16:19:30.085: at com.mobilebox.webservice.CommonWSClient$WSHandle.run(CommonWSClient.java:201)
    11-23 16:19:30.085: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1088)
    11-23 16:19:30.085: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:581)
    11-23 16:19:30.085: at java.lang.Thread.run(Thread.java:1019)
    My Android application uses a SOAP web service client (ksoap2) to call a remote method. Sometimes it returns the exception above. When I call the same service with SoapUI, the error never occurs.
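    For context, a minimal sketch of the kind of ksoap2 call that produces this trace (the namespace, method name, and URL are placeholders, not taken from the post). The fault string itself is generated server-side, so the client can mostly just detect the SoapFault, log it, and retry or report it:

        import org.ksoap2.SoapEnvelope;
        import org.ksoap2.SoapFault;
        import org.ksoap2.serialization.SoapObject;
        import org.ksoap2.serialization.SoapSerializationEnvelope;
        import org.ksoap2.transport.HttpTransportSE;

        public class SoapCallSketch {
            public static Object call() throws Exception {
                SoapObject request = new SoapObject("http://example.com/ns", "someMethod");
                SoapSerializationEnvelope envelope =
                        new SoapSerializationEnvelope(SoapEnvelope.VER11);
                envelope.setOutputSoapObject(request);
                HttpTransportSE transport = new HttpTransportSE("http://example.com/ws");
                try {
                    transport.call("http://example.com/ns/someMethod", envelope);
                    return envelope.getResponse();
                } catch (SoapFault fault) {
                    // This is where the fault above surfaces: parseBody() turns
                    // the server's <soap:Fault> into a SoapFault, and faultstring
                    // carries the server-side reason.
                    System.err.println("SOAP fault: " + fault.faultstring);
                    throw fault;
                }
            }
        }

    Intermittent faults like this one, which never appear from SoapUI, tend to point at the server's load or request handling rather than the client-side serialization, so logging and retrying is a reasonable first mitigation.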

  • Linux VirtualBox inside Windows VirtualBox doesn't boot

    - by Tobbe
    I'm trying to run a Linux VirtualBox instance inside a Windows 7 VirtualBox instance, but Linux (I tried both Puppy and Mint) doesn't boot. My research says this should be possible (see, for example, "Can you run one virtual machine inside another?"), but I can't get it to work. Here are a couple of screenshots showing where the boot process stops for Linux Mint and Puppy Linux. What am I doing wrong?
