Search Results

Search found 18139 results on 726 pages for 'private cloud'.


  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, but some of the links in the docs are just plain wrong, and some of the account pages you need in order to access your actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, introducing a completely new authentication scheme that's described in a documentation topic that isn't directly linked anywhere, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to sign up for and use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…

    Signing Up
    The first step required is to create a Windows Azure MarketPlace account:

    - Go to: https://datamarket.azure.com/
    - Sign in with your Windows Live Id
    - If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration.
    - Once you're signed in you can start adding services: click on the Data link on the main page and select Microsoft Translator from the list

    This adds the Microsoft Bing Translator to your services.

    Pricing
    The page shows the pricing matrix, including the free tier which provides 2 megabytes' worth of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters each, so at minimum this means you get around 2,000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site.

    Register your Application
    Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account:

    - Go to https://datamarket.azure.com/developer/applications/register
    - Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!)
    - Provide your name
    - The client secret is auto-created and becomes your 'password'
    - For the redirect url provide any https url: https://microsoft.com works
    - Give this application a description of your choice so you can identify it in the list of apps

    Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI - I couldn't find a direct link from anywhere on the site back to the page where I could examine my developer application keys.
    To find them you can go to: https://datamarket.azure.com/developer/applications

    You can come back here to look at your registered applications and pick up the ClientId and ClientSecret. Fun, eh? But we're now ready to actually call the API and do some translating.

    Using the Bing Translate API
    The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to call two services: an authentication API first, and then the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency.

    Authentication
    The first step is authentication. The service uses OAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the OAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you'll typically have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the authentication API use code like this:

        /// <summary>
        /// Retrieves an OAuth authentication token to be used on the translate
        /// API request. The result string needs to be passed as a bearer token
        /// to the translate API.
        ///
        /// You can find the client ID and secret (or register a new one) at:
        /// https://datamarket.azure.com/developer/applications/
        /// </summary>
        /// <param name="clientId">The client ID of your application</param>
        /// <param name="clientSecret">The client secret or password</param>
        /// <returns>The access token, or null on failure</returns>
        public string GetBingAuthToken(string clientId = null, string clientSecret = null)
        {
            string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

            if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
            {
                ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
                return null;
            }

            var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                         "&client_secret={1}" +
                                         "&scope=http://api.microsofttranslator.com",
                                         HttpUtility.UrlEncode(clientId),
                                         HttpUtility.UrlEncode(clientSecret));

            // POST the auth data to the OAuth API
            string res, token;
            try
            {
                var web = new WebClient();
                web.Encoding = Encoding.UTF8;
                res = web.UploadString(authBaseUrl, postData);
            }
            catch (Exception ex)
            {
                ErrorMessage = ex.GetBaseException().Message;
                return null;
            }

            var ser = new JavaScriptSerializer();
            var auth = ser.Deserialize<BingAuth>(res);
            if (auth == null)
                return null;

            token = auth.access_token;
            return token;
        }

        private class BingAuth
        {
            public string token_type { get; set; }
            public string access_token { get; set; }
        }

    This code basically takes the client id and secret and POSTs them at the OAuth endpoint, which returns a JSON string. Here I use the JavaScriptSerializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In the library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
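    If you do want to avoid re-authenticating on every request, a simple approach is to cache the token for slightly less than its lifetime. Here's a minimal sketch of that idea - note this is my own addition, not part of the original library: it uses .NET's MemoryCache rather than the ASP.NET Cache object mentioned above, and the CachedBingAuthToken helper and the 9 minute window are assumptions chosen for illustration:

        using System;
        using System.Runtime.Caching;

        public static class BingTokenCache
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            // Returns a cached token if one is present; otherwise fetches a new
            // one via the GetBingAuthToken() method shown above and caches it
            // for 9 minutes - just under the 10 minute token lifetime.
            public static string CachedBingAuthToken(TranslationServices translate,
                                                     string clientId, string clientSecret)
            {
                var token = Cache.Get("BingAuthToken") as string;
                if (token != null)
                    return token;

                token = translate.GetBingAuthToken(clientId, clientSecret);
                if (token != null)
                    Cache.Set("BingAuthToken", token,
                              DateTimeOffset.UtcNow.AddMinutes(9));
                return token;
            }
        }

    The same idea works with the ASP.NET Cache object; the only important detail is that the cache expiration stays shorter than the token lifetime so you never hand a stale token to the translate call.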
    Translation
    Once you have the authentication token you can pass it to the translate API. The auth token is passed as an Authorization header, and the value is prefixed with a 'Bearer ' prefix. Here's what the simple Translate API call looks like:

        /// <summary>
        /// Uses the Bing API service to perform translation.
        /// Bing can translate up to 1000 characters.
        ///
        /// Requires that you provide a ClientId and ClientSecret
        /// or set the configuration values for these two.
        ///
        /// More info on setup:
        /// http://www.west-wind.com/weblog/
        /// </summary>
        /// <param name="text">Text to translate</param>
        /// <param name="fromCulture">Two letter culture name</param>
        /// <param name="toCulture">Two letter culture name</param>
        /// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
        /// If not passed the default keys from the .config file are used, if any</param>
        /// <returns>The translated string, or null on failure</returns>
        public string TranslateBing(string text, string fromCulture, string toCulture,
                                    string accessToken = null)
        {
            string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

            if (accessToken == null)
            {
                accessToken = GetBingAuthToken();
                if (accessToken == null)
                    return null;
            }

            string res;
            try
            {
                var web = new WebClient();
                web.Headers.Add("Authorization", "Bearer " + accessToken);

                string ct = "text/plain";
                string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                                HttpUtility.UrlEncode(text),
                                                fromCulture, toCulture,
                                                HttpUtility.UrlEncode(ct));

                web.Encoding = Encoding.UTF8;
                res = web.DownloadString(serviceUrl + postData);
            }
            catch (Exception e)
            {
                ErrorMessage = e.GetBaseException().Message;
                return null;
            }

            // result is a single XML element fragment
            var doc = new XmlDocument();
            doc.LoadXml(res);
            return doc.DocumentElement.InnerText;
        }

    The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth token on its own. In my case I'm using this inside of a Web application, and in that situation I simply re-authenticate every time as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes, and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string.

    Using the Wrapper Methods
    It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

        [TestMethod]
        public void TranslateBingWithAuthTest()
        {
            var translate = new TranslationServices();

            string clientId = DbResourceConfiguration.Current.BingClientId;
            string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

            string auth = translate.GetBingAuthToken(clientId, clientSecret);
            Assert.IsNotNull(auth);

            string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
            Assert.IsNotNull(text, translate.ErrorMessage);
            Console.WriteLine(text);
        }

        [TestMethod]
        public void TranslateBingIntegratedTest()
        {
            var translate = new TranslationServices();
            string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
            Assert.IsNotNull(text, translate.ErrorMessage);
            Console.WriteLine(text);
        }

    Other API Methods
    The Translate API has a number of methods available; this one is the simplest, but it's probably also the most common one, translating a single string.
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx

    SOAP and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx

    These links will be your starting points for calling other methods in this API.

    Dual Interface
    I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs:

    [Screenshot: localization administration form showing Google and Bing translations side by side]

    As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field.

    Lost in Translation
    There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…

    Related Posts:
    - Translation method source on GitHub
    - Translating with Google Translate without Google API Keys
    - Creating a data-driven ASP.NET Resource Provider

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • Getting Started Building Windows 8 Store Apps with XAML/C#

    - by dwahlin
    Technology is fun isn't it? As soon as you think you've figured out where things are heading, a new technology comes onto the scene, changes things up, and offers new opportunities. One of the new technologies I've been spending quite a bit of time with lately is Windows 8 store applications. I posted my thoughts about Windows 8 during the BUILD conference in 2011 and still feel excited about the opportunity there. Time will tell how well it ends up being accepted by consumers, but I'm hopeful that it'll take off. I currently have two Windows 8 store application concepts I'm working on, with one being built in XAML/C# and another in HTML/JavaScript. I really like that Microsoft supports both options since it caters to a variety of developers and makes it easy to get started regardless of whether you're a desktop developer or a Web developer.

    [Diagram: how the technologies are organized in Windows 8]

    In this post I'll focus on the basics of Windows 8 store XAML/C# apps by looking at the features, files, and code provided by the Visual Studio projects. To get started building these types of apps you'll definitely need to have some knowledge of XAML and C#. Let's get started by looking at the Windows 8 store project types available in Visual Studio 2012.

    Windows 8 Store XAML/C# Project Types
    When you open Visual Studio 2012 you'll see a new entry under C# named Windows Store. It includes 6 different project types. The Blank App project provides initial starter code and a single page, whereas the Grid App and Split App templates provide quite a bit more code as well as multiple pages for your application. The other projects available can be used to create a class library project that runs in Windows 8 store apps, a WinRT component such as a custom control, and a unit test library project, respectively.

    If you're building an application that displays data in groups using the "tile" concept, then the Grid App or Split App project templates are a good place to start.

    [Screenshots: initial screens generated by the Grid App and Split View App projects]

    When a user clicks a tile in a Grid App they can view details about the tile data. With a Split App, groups/categories are shown, and when the user clicks on a group they can see a list of all the different items and then drill down into them.

    For the remainder of this post I'll focus on functionality provided by the Blank App project since it provides a simple way to get started learning the fundamentals of building Windows 8 store apps.

    Blank App Project Walkthrough
    The Blank App project is a great place to start since it's simple and lets you focus on the basics. In this post I'll focus on what it provides you out of the box and cover additional details in future posts. Once you have the basics down you can move to the other project types if you need the functionality they provide. The Blank App project template does exactly what it says - you get an empty project with a few starter files added to help get you going. This is a good option if you'll be building an app that doesn't fit into the grid layout view that you see a lot of Windows 8 store apps following (such as on the Windows 8 start screen). I ended up starting with the Blank App project template for the app I'm currently working on since I'm not displaying data/image tiles (something the Grid App project does well) or drilling down into lists of data (functionality that the Split App project provides).
    The Blank App project provides images for the tiles and splash screen (you'll definitely want to change these), a StandardStyles.xaml resource dictionary that includes a lot of helpful styles such as buttons for the AppBar (a special type of menu in Windows 8 store apps), an App.xaml file, and the app's main page, which is named MainPage.xaml. It also adds a Package.appxmanifest that is used to define functionality that your app requires, app information used in the store, plus more.

    The App.xaml, App.xaml.cs and StandardStyles.xaml Files
    The App.xaml file handles loading a resource dictionary named StandardStyles.xaml which has several key styles used throughout the application:

        <Application
            x:Class="BlankApp.App"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="using:BlankApp">

            <Application.Resources>
                <ResourceDictionary>
                    <ResourceDictionary.MergedDictionaries>
                        <!--
                            Styles that define common aspects of the platform look and feel
                            Required by Visual Studio project and item templates
                        -->
                        <ResourceDictionary Source="Common/StandardStyles.xaml"/>
                    </ResourceDictionary.MergedDictionaries>
                </ResourceDictionary>
            </Application.Resources>
        </Application>

    StandardStyles.xaml has style definitions for different text styles and AppBar buttons. If you scroll down toward the middle of the file you'll see that many AppBar button styles are included, such as one for an edit icon. Button styles like this can be used to quickly and easily add icons/buttons into your application without having to be an expert in design.

        <Style x:Key="EditAppBarButtonStyle" TargetType="ButtonBase"
               BasedOn="{StaticResource AppBarButtonStyle}">
            <Setter Property="AutomationProperties.AutomationId" Value="EditAppBarButton"/>
            <Setter Property="AutomationProperties.Name" Value="Edit"/>
            <Setter Property="Content" Value="&#xE104;"/>
        </Style>

    Switching over to App.xaml.cs, it includes some code to help get you started. An OnLaunched() method is added to handle creating a Frame that child pages such as MainPage.xaml can be loaded into. The Frame has the same overall purpose as the one found in WPF and Silverlight applications - it's used to navigate between pages in an application.
        /// <summary>
        /// Invoked when the application is launched normally by the end user. Other entry points
        /// will be used when the application is launched to open a specific file, to display
        /// search results, and so forth.
        /// </summary>
        /// <param name="args">Details about the launch request and process.</param>
        protected override void OnLaunched(LaunchActivatedEventArgs args)
        {
            Frame rootFrame = Window.Current.Content as Frame;

            // Do not repeat app initialization when the Window already has content,
            // just ensure that the window is active
            if (rootFrame == null)
            {
                // Create a Frame to act as the navigation context and navigate to the first page
                rootFrame = new Frame();

                if (args.PreviousExecutionState == ApplicationExecutionState.Terminated)
                {
                    //TODO: Load state from previously suspended application
                }

                // Place the frame in the current Window
                Window.Current.Content = rootFrame;
            }

            if (rootFrame.Content == null)
            {
                // When the navigation stack isn't restored navigate to the first page,
                // configuring the new page by passing required information as a navigation
                // parameter
                if (!rootFrame.Navigate(typeof(MainPage), args.Arguments))
                {
                    throw new Exception("Failed to create initial page");
                }
            }

            // Ensure the current window is active
            Window.Current.Activate();
        }

    Notice that in addition to creating a Frame, the code also checks to see if the app was previously terminated so that you can load any state/data that the user may need when the app is launched again.

    [Diagram: Windows 8 store app lifecycle - running, suspended, terminated]

    If you're new to the lifecycle of Windows 8 store apps, an app can be running, suspended, or terminated. If the user switches away from a running app, the app is suspended in memory. The app may stay suspended or may be terminated depending on how much memory the OS thinks it needs, so it's important to save state in case the application is ultimately terminated and has to be started fresh. Although I won't cover saving application state here, additional information can be found at http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465099.aspx. Another method named OnSuspending() is also included in App.xaml.cs; it can be used to store state as the user switches to another application:

        /// <summary>
        /// Invoked when application execution is being suspended. Application state is saved
        /// without knowing whether the application will be terminated or resumed with the contents
        /// of memory still intact.
        /// </summary>
        /// <param name="sender">The source of the suspend request.</param>
        /// <param name="e">Details about the suspend request.</param>
        private void OnSuspending(object sender, SuspendingEventArgs e)
        {
            var deferral = e.SuspendingOperation.GetDeferral();
            //TODO: Save application state and stop any background activity
            deferral.Complete();
        }
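    To make that TODO a little more concrete, here's a minimal sketch of saving a simple value during suspension and reading it back later using the WinRT ApplicationData.Current.LocalSettings store. The "LastVisitedPage" key and the LoadLastVisitedPage() helper are hypothetical, invented just for this example - real apps typically save richer session state (the SuspensionManager class included in the Grid App and Split App templates is one example of a fuller approach):

        using Windows.ApplicationModel;
        using Windows.Storage;

        private void OnSuspending(object sender, SuspendingEventArgs e)
        {
            var deferral = e.SuspendingOperation.GetDeferral();

            // Persist a simple piece of session state. "LastVisitedPage" is a
            // hypothetical key used only for this example.
            ApplicationData.Current.LocalSettings.Values["LastVisitedPage"] = "MainPage";

            deferral.Complete();
        }

        // Hypothetical helper to read the value back, e.g. from OnLaunched when
        // args.PreviousExecutionState == ApplicationExecutionState.Terminated
        private string LoadLastVisitedPage()
        {
            object value;
            if (ApplicationData.Current.LocalSettings.Values.TryGetValue("LastVisitedPage", out value))
                return (string)value;
            return null; // nothing saved yet
        }

    LocalSettings works well for small key/value pairs; larger state would typically go into a file in ApplicationData.Current.LocalFolder instead.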
    The MainPage.xaml and MainPage.xaml.cs Files
    The Blank App project adds a file named MainPage.xaml that acts as the initial screen for the application. It doesn't include anything aside from an empty <Grid> XAML element. The code-behind class named MainPage.xaml.cs includes a constructor as well as a method named OnNavigatedTo() that is called once the page is displayed in the frame.

        /// <summary>
        /// An empty page that can be used on its own or navigated to within a Frame.
        /// </summary>
        public sealed partial class MainPage : Page
        {
            public MainPage()
            {
                this.InitializeComponent();
            }

            /// <summary>
            /// Invoked when this page is about to be displayed in a Frame.
            /// </summary>
            /// <param name="e">Event data that describes how this page was reached. The Parameter
            /// property is typically used to configure the page.</param>
            protected override void OnNavigatedTo(NavigationEventArgs e)
            {
            }
        }

    If you're experienced with XAML you can switch to Design mode and start dragging and dropping XAML controls from the Toolbox in Visual Studio. If you prefer to type XAML you can do that as well in the XAML editor or while in split mode. Many of the controls available in WPF and Silverlight are included, such as Canvas, Grid, StackPanel, and Border for layout. Standard input controls are also included, such as TextBox, CheckBox, PasswordBox, RadioButton, ComboBox, ListBox, and more. MediaElement is available for rendering video or playing audio files.

    [Screenshot: some of the "common" XAML controls included out of the box]

    Although XAML/C# Windows 8 store apps don't include all of the functionality available in Silverlight 5, the core functionality required to build store apps is there, with additional functionality available in open source projects such as Callisto (started by Microsoft's Tim Heuer), Q42.WinRT, and others. Standard XAML data binding can be used to bind C# objects to controls, converters can be used to manipulate data during the data binding process, and custom styles and templates can be applied to controls to modify them. Although Visual Studio 2012 doesn't support visually creating styles or templates, Expression Blend 5 handles that very well.

    To get started building the initial screen of a Windows 8 app you can start adding controls as mentioned earlier. Simply place them inside of the <Grid> element that's included. You can arrange controls in a stacked manner using the StackPanel control, add a border around controls using the Border control, arrange controls in columns and rows using the Grid control, or absolutely position controls using the Canvas control.

    One of the controls that may be new to you is the AppBar. It can be used to add menu/toolbar functionality into a store app and keep the app clean and focused. You can place an AppBar at the top or bottom of the screen. A user on a touch device can swipe up to display the bottom AppBar, or right-click when using a mouse. An example of defining an AppBar that contains an Edit button is shown next. The EditAppBarButtonStyle is available in the StandardStyles.xaml file mentioned earlier.

        <Page.BottomAppBar>
            <AppBar x:Name="ApplicationAppBar" Padding="10,0,10,0"
                    AutomationProperties.Name="Bottom App Bar">
                <Grid>
                    <StackPanel x:Name="RightPanel" Orientation="Horizontal"
                                Grid.Column="1" HorizontalAlignment="Right">
                        <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}" Tag="Edit" />
                    </StackPanel>
                </Grid>
            </AppBar>
        </Page.BottomAppBar>

    Like standard XAML controls, the <Button> control in the AppBar can be wired to an event handler method in the MainPage.xaml.cs file, or even bound to a ViewModel object using "commanding" if your app follows the Model-View-ViewModel (MVVM) pattern (check out the MVVM Light package available through NuGet if you're using MVVM with Windows 8 store apps). The AppBar can be used to navigate to different screens, show and hide controls, display dialogs, show settings screens, and more; a quick sketch of wiring the button follows.
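    As a hedged illustration of the event-handler approach (the Edit_Click handler name and the NotesTextBox control are my own inventions for this example, not part of the template):

        <!-- In MainPage.xaml, add a Click attribute to the AppBar button: -->
        <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}"
                Tag="Edit" Click="Edit_Click" />

        // In MainPage.xaml.cs - toggles a hypothetical TextBox between
        // read-only and editable when the Edit button is tapped
        private void Edit_Click(object sender, RoutedEventArgs e)
        {
            NotesTextBox.IsReadOnly = !NotesTextBox.IsReadOnly;
        }

    The commanding alternative would replace the Click attribute with a Command binding to an ICommand property on a ViewModel, which keeps the code-behind empty.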
    The Package.appxmanifest File
    The Package.appxmanifest file contains configuration details about your Windows 8 store app. By double-clicking it in Visual Studio you can define the splash screen image, small and wide logo images used for tiles on the start screen, orientation information, and more. You can also define what capabilities the app has, such as whether it uses the Internet, supports geolocation functionality, requires a microphone or webcam, etc. App declarations such as background processes, file picker functionality, and sharing can also be defined. Finally, information about how the app is packaged for deployment to the store can also be defined.

    Summary
    If you already have some experience working with XAML technologies you'll find that getting started building Windows 8 applications is pretty straightforward. Many of the controls available in Silverlight and WPF are available, making it easy to get started without having to relearn a lot of new technologies. In the next post in this series I'll discuss additional features that can be used in your Windows 8 store apps.

    Read the article

  • How to handle screen orientation change when progress dialog and background thread active?

    - by Heikki Toivonen
    My program does some network activity in a background thread. Before starting, it pops up a progress dialog. The dialog is dismissed on the handler. This all works fine, except when the screen orientation changes while the dialog is up (and the background thread is going). At this point the app either crashes, or deadlocks, or gets into a weird state where the app does not work at all until all the threads have been killed. How can I handle the screen orientation change gracefully?

    The sample code below matches roughly what my real program does:

        public class MyAct extends Activity implements Runnable {
            public ProgressDialog mProgress;

            // UI has a button that when pressed calls send()
            public void send() {
                mProgress = ProgressDialog.show(this, "Please wait", "Please wait", true, true);
                Thread thread = new Thread(this);
                thread.start();
            }

            public void run() {
                try {
                    Thread.sleep(10000); // stands in for the network activity
                } catch (InterruptedException e) {
                    // ignored in this sample
                }
                Message msg = new Message();
                mHandler.sendMessage(msg);
            }

            private final Handler mHandler = new Handler() {
                @Override
                public void handleMessage(Message msg) {
                    mProgress.dismiss();
                }
            };
        }

    Stack:

        E/WindowManager( 244): Activity MyAct has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@433b7150 that was originally added here
        E/WindowManager( 244): android.view.WindowLeaked: Activity MyAct has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@433b7150 that was originally added here
        E/WindowManager( 244):   at android.view.ViewRoot.<init>(ViewRoot.java:178)
        E/WindowManager( 244):   at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:147)
        E/WindowManager( 244):   at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:90)
        E/WindowManager( 244):   at android.view.Window$LocalWindowManager.addView(Window.java:393)
        E/WindowManager( 244):   at android.app.Dialog.show(Dialog.java:212)
        E/WindowManager( 244):   at android.app.ProgressDialog.show(ProgressDialog.java:103)
        E/WindowManager( 244):   at android.app.ProgressDialog.show(ProgressDialog.java:91)
        E/WindowManager( 244):   at MyAct.send(MyAct.java:294)
        E/WindowManager( 244):   at MyAct$4.onClick(MyAct.java:174)
        E/WindowManager( 244):   at android.view.View.performClick(View.java:2129)
        E/WindowManager( 244):   at android.view.View.onTouchEvent(View.java:3543)
        E/WindowManager( 244):   at android.widget.TextView.onTouchEvent(TextView.java:4664)
        E/WindowManager( 244):   at android.view.View.dispatchTouchEvent(View.java:3198)
        E/WindowManager( 244):   at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857)
        E/WindowManager( 244):   at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857)
        E/WindowManager( 244):   at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857)
        E/WindowManager( 244):   at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857)
        E/WindowManager( 244):   at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:857)
        E/WindowManager( 244):   at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1593)
        E/WindowManager( 244):   at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1089)
        E/WindowManager( 244):   at android.app.Activity.dispatchTouchEvent(Activity.java:1871)
        E/WindowManager( 244):   at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:1577)
        E/WindowManager( 244):   at android.view.ViewRoot.handleMessage(ViewRoot.java:1140)
        E/WindowManager( 244):   at android.os.Handler.dispatchMessage(Handler.java:88)
        E/WindowManager( 244):   at android.os.Looper.loop(Looper.java:123)
        E/WindowManager( 244):   at android.app.ActivityThread.main(ActivityThread.java:3739)
        E/WindowManager( 244):   at java.lang.reflect.Method.invokeNative(Native Method)
        E/WindowManager( 244):   at java.lang.reflect.Method.invoke(Method.java:515)
        E/WindowManager( 244):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739)
        E/WindowManager( 244):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497)
        E/WindowManager( 244):   at dalvik.system.NativeStart.main(Native Method)

    and:

        W/dalvikvm( 244): threadid=3: thread exiting with uncaught exception (group=0x4000fe68)
        E/AndroidRuntime( 244): Uncaught handler: thread main exiting due to uncaught exception
        E/AndroidRuntime( 244): java.lang.IllegalArgumentException: View not attached to window manager
        E/AndroidRuntime( 244):   at android.view.WindowManagerImpl.findViewLocked(WindowManagerImpl.java:331)
        E/AndroidRuntime( 244):   at android.view.WindowManagerImpl.removeView(WindowManagerImpl.java:200)
        E/AndroidRuntime( 244):   at android.view.Window$LocalWindowManager.removeView(Window.java:401)
        E/AndroidRuntime( 244):   at android.app.Dialog.dismissDialog(Dialog.java:249)
        E/AndroidRuntime( 244):   at android.app.Dialog.access$000(Dialog.java:59)
        E/AndroidRuntime( 244):   at android.app.Dialog$1.run(Dialog.java:93)
        E/AndroidRuntime( 244):   at android.app.Dialog.dismiss(Dialog.java:233)
        E/AndroidRuntime( 244):   at MyAct$1.handleMessage(MyAct.java:321)
        E/AndroidRuntime( 244):   at android.os.Handler.dispatchMessage(Handler.java:88)
        E/AndroidRuntime( 244):   at android.os.Looper.loop(Looper.java:123)
        E/AndroidRuntime( 244):   at android.app.ActivityThread.main(ActivityThread.java:3739)
        E/AndroidRuntime( 244):   at java.lang.reflect.Method.invokeNative(Native Method)
        E/AndroidRuntime( 244):   at java.lang.reflect.Method.invoke(Method.java:515)
        E/AndroidRuntime( 244):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739)
        E/AndroidRuntime( 244):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497)
        E/AndroidRuntime( 244):   at dalvik.system.NativeStart.main(Native Method)
        I/Process ( 46): Sending signal. PID: 244 SIG: 3
        I/dalvikvm( 244): threadid=7: reacting to signal 3
        I/dalvikvm( 244): Wrote stack trace to '/data/anr/traces.txt'
        I/Process ( 244): Sending signal. PID: 244 SIG: 9
        I/ActivityManager( 46): Process MyAct (pid 244) has died.

    I have tried to dismiss the progress dialog in onSaveInstanceState(), but that just prevents an immediate crash. The background thread is still going, and the UI is in a partially drawn state. I need to kill the whole app before it starts working again.

    Read the article

  • OIM 11g notification framework

    - by Rajesh G Kumar
    OIM 11g has introduced an improved, template based notifications framework. The new release removes the limitation of sending only text based (out-of-the-box) emails and adds support for HTML. It provides built-in out-of-the-box templates for events like 'Reset Password', 'Create User Self Service', 'User Deleted', etc. It also provides new APIs to support custom templates for sending notifications out of OIM. The OIM notification framework is based on events, notification templates and template resolvers. They are defined as follows:

    - Events are defined as XML files and imported into the MDS database in order to make a notification event available for use.
    - Notification templates are created using the OIM advanced administration console. A template contains the text and the substitution 'variables' which will be replaced with the data provided by the template resolver. Templates support internationalization and can be defined as HTML or as simple text.
    - A template resolver is a Java class that is responsible for providing attributes and data to be used at design time and at runtime. It must be deployed following the OIM plug-in framework. Resolver data provided at design time is used by the end user to design the notification template with the available entity variables; at runtime the resolver provides the data that replaces the designed variables with the values displayed to recipients.

    The steps to define custom notifications in OIM 11g are:
    1. Define the notification event
    2. Create the custom template resolver class
    3. Create the template with the notification contents to be sent to recipients
    4. Create event triggering spots in OIM

    1. Notification event metadata
    The notification event is defined as an XML file which needs to be imported into the MDS database. An event file must be compliant with the schema defined by the notification engine, which is NotificationEvent.xsd. The event file contains basic information about the event.

    XSD location in the MDS database: /metadata/iam-features-notification/NotificationEvent.xsd
    The schema file can be viewed by exporting the file from MDS using the weblogicExportMetadata.sh script.

    Sample notification event metadata definition (line numbers added for reference):

         1: <?xml version="1.0" encoding="UTF-8"?>
         2: <Events xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../../../metadata/NotificationEvent.xsd">
         3:   <EventType name="Sample Notification">
         4:     <StaticData>
         5:       <Attribute DataType="X2-Entity" EntityName="User" Name="Granted User"/>
         6:     </StaticData>
         7:     <Resolver class="com.iam.oim.demo.notification.DemoNotificationResolver">
         8:       <Param DataType="91-Entity" EntityName="Resource" Name="ResourceInfo"/>
         9:     </Resolver>
        10:   </EventType>
        11: </Events>

    Line 1: XML declaration.
    Line 2: Events is the root tag.
    Line 3: The EventType tag declares a unique event name which will be available for template design.
    Line 4: The StaticData element lists a set of parameters that are not data dependent. In other words, this element defines the static data to be displayed when the notification is configured. An example of static data is the User entity, which is not dependent on any other data and has the same set of attributes for all event instances and notification templates. The available attributes are defined as substitution tokens in the template.
    Line 5: The Attribute tag is a child tag of StaticData that declares the entity and its data type with a unique reference name. The User entity is the most commonly used entity as StaticData.
    Line 6: StaticData closing tag.
    Line 7: The Resolver tag defines the resolver class. The resolver class must be defined for each notification. It defines what parameters are available on the notification creation screen and how those parameters are replaced when the notification is sent. The resolver class resolves the data dynamically at run time and displays the attributes in the UI.
    Line 8: The Param element lists a set of parameters that are data dependent. An example of a data dependent, or dynamic, entity is a resource object which the user can select at run time. A notification template is configured for the resource object, and a lookup is displayed on the UI for the resource object field. When a user selects the event, a call goes to the resolver class to fetch the fields that are displayed in the Available Data list, from which the user can select the attributes to be used on the template. The Param tag is a child tag that declares the entity and its data type with a unique reference name.
    Line 9: Resolver closing tag.
    Line 10: EventType closing tag.
    Line 11: Events closing tag.

    Note: DataType needs to be declared as "X2-Entity" for the User entity and "91-Entity" for Resource or Organization entities. The dynamic entities supported for lookup are user, resource, and organization. Once the notification event metadata is defined, it needs to be imported into the MDS database. The fully qualified resolver class name needs to be defined in the XML, but the class does not need to be loaded into OIM yet (it can be loaded later).

    2. Coding the notification resolver
    All event owners have to provide a resolver class which resolves the data dynamically at run time. The custom resolver class must implement the interface oracle.iam.notification.impl.NotificationEventResolver and override the two methods below with the actual implementation.

    Method 1: public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params);
    This API returns the list of available data variables. These variables are shown on the UI while creating/modifying templates and let the user select variables so they can be embedded as tokens in the messages on the template. These tokens are replaced by the values provided by the resolver class at run time. Available data is displayed in a list. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is a map which has the entity name and the corresponding value for which available data is to be fetched.

    Sample code snippet:

        List<NotificationAttribute> list = new ArrayList<NotificationAttribute>();
        long objKey = (Long) params.get("resource");

        // Form field details based on the resource object key
        HashMap<String, Object> formFieldDetail = getObjectFormName(objKey);
        for (Iterator<?> itrd = formFieldDetail.entrySet().iterator(); itrd.hasNext(); ) {
            NotificationAttribute availableData = new NotificationAttribute();
            Map.Entry formDetailEntrySet = (Entry<?, ?>) itrd.next();
            String fieldLabel = (String) formDetailEntrySet.getValue();
            availableData.setName(fieldLabel);
            list.add(availableData);
        }
        return list;

    Method 2: public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params);
    This API returns the resolved values of the variables present on the template at the time the notification is sent. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is a map which has the base values such as usr_key, obj_key, etc. required by the resolver implementation to resolve the rest of the variables in the template.

    Sample code snippet:

        HashMap<String, Object> resolvedData = new HashMap<String, Object>();

        String firstName = getUserFirstname(params.get("usr_key"));
        resolvedData.put("fname", firstName);

        String lastName = getUserLastName(params.get("usr_key"));
        resolvedData.put("lname", lastName);

        resolvedData.put("count", "1 million");
        return resolvedData;

    This code must be deployed as per the OIM 11g plug-in framework. The XML file defining the plug-in is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <plugins pluginpoint="oracle.iam.notification.impl.NotificationEventResolver">
                <plugin pluginclass="com.iam.oim.demo.notification.DemoNotificationResolver"
                        version="1.0" name="Sample Notification Resolver"/>
            </plugins>
        </oimplugins>

    3. Defining the template
    To create a notification template:

    - Log in to the Oracle Identity Administration console.
    - Click the System Management tab and then click the Notification tab.
    - From the Actions list on the left pane, select Create.
    - On the Create page, enter values for the following fields under the Template Information section: Template Name: Demo template; Description Text: Demo template.
    - Under the Event Details section, from the Available Event list select the event for which the notification template is to be created. Depending on your selection, other fields are displayed in the Event Details section. Note that the Sample Notification event created in the previous step is used as the notification event. The contents of the Available Data drop-down are based on the event XML StaticData tag; the drop-down lists all the attributes of the entities defined in that tag. Once you select an element in the drop-down, it shows up in the Selected Data text field; you can then copy it and paste it into either the message subject or the message body fields, prefixed with a $ symbol. For example, if the list has an attribute First_Name, then the message body will contain $First_Name, which the resolver will parse and replace with the actual value at runtime.
    - In the Resource field, select a resource from the lookup. This is the dynamic data defined by the Param element in the XML definition. Based on the selected resource, the getAvailableData method of the resolver is called to fetch the resource object attribute details, provided the method is overridden with the required implementation. For the current scenario, the Map<String, Object> params is populated with the object key as the value and "resource" as the key; this is the only input provided to the resolver at design time. You need to implement further logic to fetch the object attribute details that populate the Available Data list. List strings must not contain spaces; if an object attribute name has a space, implement logic to replace the space with '_' before populating the list. For example, if the attribute name is "First Name", make it "First_Name" and populate the list. Spaces are not supported when the token is parsed and replaced with the real value at run time.
    - Note that Available Data and Selected Data are used for the substitution token definitions only; they do not define the final data that will be sent in the notification. OIM invokes the resolver class to get the data and make the substitutions.
    - Under the Locale Information section, enter values in the following fields: to specify a form of encoding, select either UTF-8 or ASCII; in the Message Subject field, enter a subject for the notification; from the Type options, select the data type in which you want to send the message (HTML or Text/Plain); in the Short Message field, enter a gist of the message in very few words; in the Long Message field, enter the message that will be sent as the notification, with the available data tokens that the resolver will replace at runtime.
    - After you have entered the required values in all the fields, click Save. A message is displayed confirming the creation of the notification template. Click OK.

    4. Triggering the event
    A notification event can be triggered from different places in OIM. The logic behind the triggering must be coded and plugged into OIM. Examples of triggering points for notifications:

    - Event handlers: post-process notifications for specific data updates on OIM users
    - Process tasks: to notify users that a provisioning task was executed by OIM
    - Scheduled tasks: to notify something related to the task

    The scheduled job in this example has two parameters:
    - Template Name: defines the notification template to be sent
    - User Login: defines the user record that will provide the data to be sent in the notification

    Sample code snippet:

        public void execute(String templateName, String userId) {
            try {
                NotificationService notService = Platform.getService(NotificationService.class);
                NotificationEvent eventToSend = this.createNotificationEvent(templateName, userId);
                notService.notify(eventToSend);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private NotificationEvent createNotificationEvent(String poTemplateName, String poUserId) {
            NotificationEvent event = new NotificationEvent();

            String[] receiverUserIds = { poUserId };
            event.setUserIds(receiverUserIds);
            event.setTemplateName(poTemplateName);
            event.setSender(null);

            HashMap<String, Object> templateParams = new HashMap<String, Object>();
            templateParams.put("USER_LOGIN", poUserId);
            event.setParams(templateParams);

            return event;
        }

        public HashMap getAttributes() {
            return null;
        }

        public void setAttributes() {
        }

    Read the article

  • ASP.NET AJAX Problem

    - by Rich Andrews
    I've updated some code to use the AJAX Control Toolkit 0911 beta, and for some reason code that dynamically added collapsible panel extenders in the code-behind now causes the following error in the client side JScript...

    Microsoft JScript runtime error: Sys.ArgumentException: Value must not be null for Controls and Behaviors. Parameter name: element

    in...

        $create = Sys.Component.create = function Sys$Component$create(type, properties, events, references, element) {
            /// <summary locid="M:J#Sys.Component.create" />
            /// <param name="type" type="Type"></param>
            /// <param name="properties" optional="true" mayBeNull="true"></param>
            /// <param name="events" optional="true" mayBeNull="true"></param>
            /// <param name="references" optional="true" mayBeNull="true"></param>
            /// <param name="element" domElement="true" optional="true" mayBeNull="true"></param>
            /// <returns type="Object"></returns>
            var e = Function._validateParams(arguments, [
                {name: "type", type: Type},
                {name: "properties", mayBeNull: true, optional: true},
                {name: "events", mayBeNull: true, optional: true},
                {name: "references", mayBeNull: true, optional: true},
                {name: "element", mayBeNull: true, domElement: true, optional: true}
            ]);
            if (e) throw e;
            if (type.inheritsFrom(Sys.UI.Behavior) || type.inheritsFrom(Sys.UI.Control)) {
                if (!element) throw Error.argument('element', Sys.Res.createNoDom);
            }

    I accept that this is only a beta, but I'm unable to either find a workaround or even understand the reason why this pretty simple code no longer works.

    Code

        private Panel GetReportPanel(DataRow dr, ReportParameter[] Params)
        {
            Panel pnlReport = new Panel();
            pnlReport.ID = Uri.EscapeDataString(dr["ReportName"].ToString()) + "_MainReportContainer";

            // Report Title Section
            var pnlReportTitle = new Panel();
            pnlReportTitle.CssClass = "ReportSectionTitle";
            var tblReportTitle = new Table();
            var trowReportTitle = new TableRow();
            var tcellReportTitle = new TableCell();
            var imgReportTitleExpand = new Image();
            imgReportTitleExpand.ID = Uri.EscapeDataString("img" + dr["ReportName"].ToString() + "DataExpand");
            tcellReportTitle.Controls.Add(imgReportTitleExpand);
            trowReportTitle.Controls.Add(tcellReportTitle);
            tcellReportTitle = new TableCell();
            var lblReportTitle = new Label();
            lblReportTitle.ID = Uri.EscapeDataString("lnk" + dr["ReportName"].ToString());
            lblReportTitle.Text = "Functional " + dr["ReportName"].ToString();
            tcellReportTitle.Controls.Add(lblReportTitle);
            trowReportTitle.Controls.Add(tcellReportTitle);
            tblReportTitle.Controls.Add(trowReportTitle);
            pnlReportTitle.Controls.Add(tblReportTitle);
            pnlReport.Controls.Add(pnlReportTitle);

            // Report Section
            var pnlReportSection = new Panel();
            pnlReportSection.ID = Uri.EscapeDataString("pnlReportSection" + dr["ReportName"].ToString());
            pnlReportSection.CssClass = "ReportSection";
            pnlReportSection.ScrollBars = ScrollBars.None;
            var pnlInnerReportSection = new Panel();
            pnlInnerReportSection.CssClass = "ReportSection";
            var rptControl = new ReportViewer();
            rptControl.ID = "rpt" + dr["ReportName"].ToString().Replace(' ', '_');
            rptControl.ProcessingMode = ProcessingMode.Remote;
            rptControl.Width = new Unit("100%");
            rptControl.ShowDocumentMapButton = false;
            rptControl.ShowParameterPrompts = false;
            rptControl.Visible = true;
            rptControl.Height = new Unit("500px");
            rptControl.AsyncRendering = (bool)dr["ASyncRenderingEnabled"];
            rptControl.ServerReport.ReportPath = dr["SSRSReportPath"].ToString();
            rptControl.ServerReport.ReportServerUrl = new Uri("http://horoap336/reportserver");
            rptControl.ServerReport.SetParameters(Params);
            pnlInnerReportSection.Controls.Add(rptControl);
            pnlReportSection.Controls.Add(pnlInnerReportSection);
            pnlReport.Controls.Add(pnlReportSection);

            // Collapsible Panel Extender
            var Extender = new AjaxControlToolkit.CollapsiblePanelExtender();
            Extender.TargetControlID = pnlReportSection.ID;
            Extender.ID = Uri.EscapeDataString(dr["ReportName"].ToString()) + "_Extender";
            Extender.CollapsedSize = 0;
            Extender.Collapsed = true;
            Extender.ExpandControlID = lblReportTitle.ID;
            Extender.CollapseControlID = lblReportTitle.ID;
            Extender.AutoCollapse = false;
            Extender.AutoExpand = false;
            Extender.ScrollContents = false;
            Extender.TextLabelID = lblReportTitle.ID;
            Extender.CollapsedText = "Functional " + dr["ReportName"].ToString() + " (Click To Show Details...)";
            Extender.ExpandedText = "Functional " + dr["ReportName"].ToString() + " (Click To Hide Details...)";
            Extender.ImageControlID = imgReportTitleExpand.ID;
            Extender.ExpandedImage = "~/images/collapse.jpg";
            Extender.CollapsedImage = "~/images/expand.jpg";
            Extender.ExpandDirection = AjaxControlToolkit.CollapsiblePanelExpandDirection.Vertical;
            pnlReport.Controls.Add(Extender);

            return pnlReport;
        }

    This panel is then added to a panel in the aspx file using...

        pnlContainer.Controls.Add(GetReportPanel(dr, Params));

    Aspx file...

        <%@ Page Title="Operations MI Dashboard - Functional Reporting" Language="C#"
            MasterPageFile="~/MasterPage.master" AutoEventWireup="true"
            CodeFile="FunctionalReport.aspx.cs" Inherits="TelephonyReport" %>
        <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
            Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
            <asp:Panel ID="pnlContainer" runat="server">
            </asp:Panel>
        </asp:Content>

    So, my questions are:
    1. Is there a problem with my code that is only evident in the later version of the toolkit?
    2. Does anyone know of a workaround that I can try?
    3. Can anyone explain why this problem happens only in the latest version?

    Read the article

  • CodePlex Daily Summary for Thursday, November 18, 2010

    Popular Releases

    - Sitefinity Migration Tool: Sitefinity Migration Tool 0.2 Alpha: Improvements for the Sitefinity RC release.
    - MiniTwitter: 1.57: Japanese release notes (garbled in the original; they mention User Streams fixes).
    - VFPX: VFP2C32 2.0.0.7: Fixed a bug in AAverage (NULL values in the array corrupted the result). Removed a limitation in ASum, AMin, AMax and AAverage - the functions were limited to 65000 elements, now they're limited to 65000 rows. ASplitStr now returns a 1 element array with an empty string when an empty string is passed (behaves more like ALINES). Internal code cleanup and optimization: optimized the FoxArray class, resulting in a speedup of 10-20% in many functions which return the result in an array, like AProcesses...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample databases for Microsoft SQL Server 2008R2 (SR1). This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. See Database Prerequisites for SQL Server 2008R2 for feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step by step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases them...
    - VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.
    - Razor Templating Engine: Razor Template Engine v1.1: Release 1.1 changes: ADDED: Signed assemblies with a strong name to allow assemblies to be referenced by other strongly-named assemblies. FIX: Filter out dynamic assemblies which caused failures in template compilation. FIX: Changed ASCII to UTF8 encoding to support UTF-8 encoded string templates. FIX: Corrected implementation of TemplateBase adding the ITemplate interface.
    - Prism Training Kit: Prism Training Kit - 1.1: This is an updated version of the Prism Training Kit that targets Prism 4.0 and fixes the bugs reported in version 1.0. This release consists of a training kit with labs on the following topics: Modularity, Dependency Injection, Bootstrapper, UI Composition, Communication. Note: take into account that this is a beta version. If you find any bugs please report them in the Issue Tracker. Prerequisites: Visual Studio 2010, Microsoft Word 2007/2010, Microsoft Silverlight 4, Microsoft S...
    - Craig's Utility Library: Craig's Utility Library Code 2.0: This update contains a number of changes, added functionality, and bug fixes: Added transaction support to SQLHelper. Added linked/embedded resource ability to EmailSender. Updated List to take into account new functions. Added better support for MAC address in WMI classes. Fixed parsing in the Reflection class when dealing with sub classes. Fixed a bug in SQLHelper when replacing a Command that is a select after doing a select. Fixed an issue in the SQL Server helper with regard to generati...
    - MFCMAPI: November 2010 Release: Build: 6.0.0.1023. Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system.
    - Facebook Badge
    - DotNetNuke® Community Edition: 05.06.00: Major highlights: Added automatic portal alias creation for single portal installs. Updated the file manager upload page to allow the user to upload multiple files without returning to the file manager page. Fixed issue with Event Log Email Notifications. Fixed issue where the Telerik HTML Editor was unable to upload files to a secure or database folder. Fixed issue where the registration page is not set correctly during an upgrade. Fixed issue where Sendmail stripped HTML and links from emails...
    - mVu Mobile Viewer: mVu Mobile Viewer 0.7.10.0: Tube8 fix.
    - EPPlus - Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.8.0.1: New features: improved chart support (different chart-type series on the same chart, support for a secondary axis and a lot of new properties), better styling, encryption and workbook protection, table support, import of csv files, array formulas ... and a lot of bug fixes.
    - AutoLoL: AutoLoL v1.4.2: Added support for more clients (French and Russian). Settings are now stored separately for each user on a computer. Auto Login is much faster now. Auto Login detects and handles caps lock state properly now.
    - TailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.9: Contains a number of bug fixes and additional tutorial steps as well as complete database implementation details.
    - ASP.NET MVC Project Awesome (rich jQuery AJAX helpers): 1.3 and demos: A library with MVC helpers and a demo project that demonstrates an awesome way of doing ASP.NET MVC. Tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6. New stuff in 1.3: Autocomplete helper; Autocomplete and AjaxDropdown can have a parentId and be filled with data depending on the value of the parent; PopupForm, besides Content("ok"), can on success also return Json(data) and use 'data' in a client side function; Awesome demo improved (cruder, builder, added service layer).
    - Nearforums - ASP.NET MVC forum engine: Nearforums v4.1: Version 4.1 of the ASP.NET MVC forum engine, with great improvements: TinyMCE added as visual editor for messages (removed CKEditor). Integrated AntiSamy for cleaner HTML in user posts and more prevention of potential injections. Admin status page: a page for the site admin to check the current status of the configuration / db / etc. View the roadmap for more details.
    - UltimateJB: UltimateJB 2.01 PL3 KakaRoto + PSNYes by EvilSperm: (translated from French) Here is a release many have been waiting for: the PSNYes version, to be able to play on PSN with a jailbroken PS3. For now PSNYes is only available for PS3s on firmware 3.41!!! The PL3 KAKAROTO version integrates his latest modifications and prepares for the integration of firmware 3.30!!! Conclusion: UltimateJB PSNYes => enables use of the PSN, only compatible with 3.41; UltimateJB DEFAULT => no PSN, but available for PS3s ...
    - Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.0 (supports .NET 4.0 RTM and .NET 3.5): Includes Fluent.dll (with .pdb and .xml), Showcase Application, Samples (only for .NET 4.0), Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage), Resizing (ribbon reducing & enlarging principles), Galleries (Gallery in ContextMenu, InRibbonGallery), MVVM (shows how to use this library with the Model-View-ViewModel pattern), KeyTips, ScreenTips, Toolbars, ColorGallery, NEW! *Walkthrough (documenta...
    - patterns & practices: Prism: Prism 4 Documentation: This release contains the Prism 4 documentation in Help 1.0 (CHM) format and PDF format. The documentation is also included with the full download of the guidance. Note: if you cannot view the content of the CHM, in Windows Explorer select the properties for the file and then click Unblock on the General tab. Note: the PDF version of the guidance is provided for printing and reading in book format. The online version of the Prism 4 documentation can be read here.
    - Farseer Physics Engine: Farseer Physics Engine 3.1: Donations: if you like this release and would like to keep Farseer Physics Engine running, please consider a small donation. What's new? We bring a lot of new features in Farseer Physics Engine 3.1. Just to name a few: new Box2D core, rope joint added, more stable CCD algorithm, YuPeng clipper, explosives logic, new Constrained Delaunay Triangulation algorithm from the Poly2Tri project, new Flipcode triangulation algorithm, Silverlight 4 samples, Silverlight 4 debug view, XNA 4.0 relea...

    New Projects

    - bizicosoft crm: crm
    - Blog Migrator: The Blog Migrator tool is an all purpose utility designed to help transition a blog from one platform to another. It leverages XML-RPC, BlogML, and WordPress WXR formats. It also provides the ability to "rewrite" your posts on your old blog to point to the new location.
    - bzr-tfs integration tests: Used to test bzr-tfs integration.
    - C++ Open Source Advanced Operating System: A project which allows starter developers to create their own OS. For now it is at a really initial stage.
    - Chavah - internet radio for Yeshua's disciples: Chavah (pronounced "ha-vah") is internet radio for Yeshua's disciples. Inspired by Pandora, Chavah is a Silverlight application that brings community-driven Messianic Jewish tunes for the Lord over the web to your eager ears.
    - CodePoster: An add-in for Visual Studio which allows you to post code directly from Visual Studio to your blog.
    - CRM 2011 Plugin Testing Tools: This solution is meant to make unit testing of plugins in CRM 2011 a simpler and more efficient process. It serializes the objects that the CRM server passes to a plugin on execution and then offers a library that allows you to deserialize them in a unit test.
    - Edinamarry Free Tarot Software for Windows: A freeware yet advanced Tarot reading software for psychics and for all those who practice divinity and spirituality. Includes a Tarot Spread Designer, Tarot Deck Designer, Tarot Cards Gallery, Client & Customer Profile, Word Editor, Tarot Reader, etc.
    - EPiSocial: Social addons for EPiServer.
    - first team foundation project: This is my first project for students, to teach them about Microsoft Visual Studio 2010 and Team Foundation Server.
    - FKTdev: (translated from Spanish) A project where we will upload tests, sample code, and other resources from our XNA learning, until we begin a stable development.
    - Gardens Point Component Pascal: An implementation for .NET of the Component Pascal language (CP). CP is an object oriented version of Pascal, and shares many design features with Oberon-2.
    - Geoinformatics: geoinformatics
    - GREENHOUSEMANAGER: (translated from Spanish) GREENHOUSE is a university project to manage the various aspects of a greenhouse. The system is developed in C# with a WPF graphical interface.
    - Housing: This project is only for ASP.NET learning.
    - HR-XML.NET: A .NET HR-XML serialization library. Also supports the Dutch SETU standard and some proprietary extensions used in the Netherlands. The project is currently targeting HR-XML version 2.5 and SETU standard 2008-01.
    - InternetShop2: ShopLesson4: Lesson4 for M.
    - Logical Synchronous Circuit Simulator: As part of a student project, we are trying to make a logical synchronous circuit simulator, with the ultimate goal of simulating a processor and a digital clock running on it.
    - MediaOwl: MediaOwl is a music (albums, artists, tracks, tags) and movie (movies, series, actors, directors, genres) search engine, but above all it is a Microsoft Silverlight 4 application (C#) that shows how to use Caliburn Micro.
    - N2F Yverdon Solar Flare Reflector: The solar flare reflector provides minimal base-range protection for your N2F Yverdon installation against solar flare interference.
    - Netduino Plus Home Automation Toolkit: The Netduino Plus Home Automation project is designed to provide a communication platform for various consumer based home automation products that offer a common web service endpoint. This will hopefully create a low cost DIY alternative to the expensive ethernet interfaces.
    - NRapid: NRapid
    - OfficeHelper: Wrapper around the Open XML office package. You can easily create xlsx documents based on a template xlsx document and reuse parts from that document if you mark them as named ranges (i.e. "names").
    - OffProjects: This is a private project for my dev investigations.
    - Paris Velib Stations for Windows Mobile: Allows you to find the closest Velib bike station in Paris, along with its information, on a Windows Mobile (6.5) phone.
    - PolarConverter: Adjusts the measured distance of HRM files created by Polar heart rate monitors.
    - Sexy Select: A jQuery plugin that allows for easy manipulation of select options. Allows for adding, removing, sorting, validation and custom skinning.
    - Silverlight Progress Feedback: Demonstrates how to get progress feedback from slow running WPF processes in Silverlight.
    - Silverlight Tabbed Panel: A tabbed panel based on Silverlight, targeted at both developers and designers. A tabbed control is used in this project. This is a basic application; more features will be added in further releases. XAML has been used to design this panel.
    - slabhid: SLABHIDDevice.dll is used by the SLAB MCU example code on the PC; the original source code is written in C++. This wrapper class brings SLABHIDDevice.dll to the .NET world, so it will be possible to make quick solutions for firmware testing purposes.
    - SuperWebSocket: A .NET server-side implementation of the WebSocket protocol.
    - test1-jjoiner: Just a test project.
    - Totem Alpha Developer Framework For .Net: (Chinese description garbled in the original; it mentions the tadf framework for VS.NET, J2EE-style development in .NET, C#/Java code generation, rich web UI with Ajax, and XML-based configuration.)
    - Ukázkové projekty: (translated from Czech) Contains sample projects by user TenCoKaciStromy.
    - WPFDemo: This project is only for WPF learning.
    - Xinx TimeIt!: TinyAlarm is a small utility that allows you to configure an alarm so that you can opt for: 1. Shutdown computer 2. Play a sound 3. Show a note with sound 4.
Disconnect a dial-up connection 5. Connect via dial-up connection

    Read the article

  • Getting started with Exchange Web Services 2010

    - by Adam Tuttle
    I've been tasked with writing a SOAP web-service in .Net to be middleware between EWS2010 and an application server that previously used WebDAV to connect to Exchange. (As I understand it, WebDAV is going away with EWS2010, so the application server will no longer be able to connect as it previously did, and it is exponentially harder to connect to EWS without WebDAV. The theory is that doing it in .Net should be easier than anything else... Right?!) My end goal is to be able to get and create/update email, calendar items, contacts, and to-do list items for a specified Exchange account. (Deleting is not currently necessary, but I may build it in for future consideration, if it's easy enough). I was originally given some sample code, which did in fact work, but I quickly realized that it was outdated. The types and classes used appear nowhere in the current documentation. For example, the method used to create a connection to the Exchange server was: ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); For what it's worth, this was using an assembly that came with the sample code: Microsoft.Exchange.WebServices.dll ("MEWS"). Before I realized that this wasn't the current standard way to accomplish the connection, and it worked, I tried to build on it and add a method to create calendar items, which I copied from here: static void CreateAppointment(ExchangeServiceBinding esb) { // Create the appointment. CalendarItemType appointment = new CalendarItemType(); ... } Right away, I'm confronted with the difference between ExchangeService and ExchangeServiceBinding ("ESB"); so I started Googling to try and figure out how to get an ESB definition so that the CreateAppointment method will compile. I found this blog post that explains how to generate a proxy class from a WSDL, which I did. Unfortunately, this caused some conflicts where types that were defined in the original Assembly, Microsoft.Exchange.WebServices.dll (that came with the sample code) overlapped with Types in my new EWS.dll assembly (which I compiled from the code generated from the services.wsdl provided by the Exchange server). I excluded the MEWS assembly, which only made things worse. I went from a handful of errors and warnings to 25 errors and 2,510 warnings. All kinds of types and methods were not found. Something is clearly wrong, here. So I went back on the hunt. I found instructions on adding service references and web references (i.e. the extra steps it takes in VS2008), and I think I'm back on the right track. I removed (actually, for now, just excluded) all previous assemblies I had been trying; and I added a service reference for https://my.exchange-server.com/ews/services.wsdl Now I'm down to just 1 error and 1 warning. Warning: The element 'transport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty. This is in reference to a change that was made to web.config when I added the service reference; and I just found a fix for that here on SO. I've commented that section out as indicated, and it did make the warning go away, so woot for that. The error hasn't been so easy to get around, though: Error: The type or namespace name 'ExchangeService' could not be found (are you missing a using directive or an assembly reference?) 
    This is in reference to the function I was using to create the EWS connection, called by each of the web methods:

    private ExchangeService getService(String AutoDiscoverEmailAddress, String AuthEmailAddress, String AuthEmailPassword)
    {
        ExchangeService svc = new ExchangeService();
        svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword);
        svc.AutodiscoverUrl(AutoDiscoverEmailAddress);
        return svc;
    }

    This function worked perfectly with the MEWS assembly from the sample code, but the ExchangeService type is no longer available. (Nor is ExchangeServiceBinding; that was the first thing I checked.) At this point, since I'm not following any directions from the documentation (I couldn't find anywhere in the documentation that said to add a service reference to your Exchange server's services.wsdl -- but that does seem to be the best/farthest I've gotten so far), I feel like I'm flying blind. I know I need to figure out whatever it is that should replace ExchangeService / ExchangeServiceBinding, implement that, and then work through whatever errors crop up as a result of that switch... But I have no idea how to do that, or where to look for how to do it. Googling "ExchangeService" and "ExchangeServiceBinding" only seems to lead back to outdated blog posts and MSDN, neither of which has proven terribly helpful thus far. Help me, Obi-Wan, you're my only hope!
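    For what it's worth, the ExchangeService type in the snippet above comes from the EWS Managed API (Microsoft.Exchange.WebServices.dll), not from a WSDL-generated proxy, so referencing that assembly rather than adding a service reference to services.wsdl is the usual way forward. Below is a minimal sketch of a connection plus appointment creation with that API; the credential and mailbox values are placeholders, and it assumes the EWS Managed API assembly is installed and referenced.

    using System;
    using Microsoft.Exchange.WebServices.Data;

    static class EwsSketch
    {
        static ExchangeService Connect(string autodiscoverEmail, string user, string password)
        {
            // Target the Exchange 2010 schema explicitly.
            ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010);
            service.Credentials = new WebCredentials(user, password);
            // Autodiscover locates the EWS endpoint; the callback accepts HTTPS redirects.
            service.AutodiscoverUrl(autodiscoverEmail, url => url.StartsWith("https://"));
            return service;
        }

        static void CreateAppointment(ExchangeService service)
        {
            Appointment appointment = new Appointment(service);
            appointment.Subject = "Test appointment";
            appointment.Start = DateTime.Now.AddHours(1);
            appointment.End = DateTime.Now.AddHours(2);
            appointment.Save(SendInvitationsMode.SendToNone);
        }
    }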

    Read the article

  • WPF Check/Uncheck all checkboxes located in a gridview

    - by toni
    Hi! I have a gridview with some columns. One of these columns is checkbox type. Then I have two buttons in my UI, one to check all and another to uncheck all. I would like to check all checkboxes in the column when I press one button and uncheck them all when I press the other one. How can I do this? Some snippet code: <... <Classes:SortableListView x:Name="lstViewRutas" ItemsSource="{Binding Source={StaticResource RutasCollectionData}}" ... > <...> <GridViewColumn Header="Activa" Width="50"> <GridViewColumn.CellTemplate> <DataTemplate> <CheckBox x:Name="chkBxF" Click="chkBx_Click" IsChecked="{Binding Path=Activa}" HorizontalContentAlignment="Stretch" HorizontalAlignment="Stretch"/> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> <...> </Classes:SortableListView> <...> </Page> My data object bound to the gridview is:

    namespace GParts.Classes
    {
        public class RutasCollection
        {
            /// <summary>
            /// Data collection for the table
            /// </summary>
            ObservableCollection<RutasData> _RutasCollection;

            /// <summary>
            /// Constructor. Creates a new ObservableCollection instance
            /// of type RutasData
            /// </summary>
            public RutasCollection()
            {
                _RutasCollection = new ObservableCollection<RutasData>();
            }

            /// <summary>
            /// Returns the entire set of routes in the collection
            /// </summary>
            public ObservableCollection<RutasData> Get
            {
                get { return _RutasCollection; }
            }

            /// <summary>
            /// Returns the entire set of routes in the collection
            /// </summary>
            /// <returns></returns>
            public ObservableCollection<RutasData> GetCollection()
            {
                return _RutasCollection;
            }

            /// <summary>
            /// Adds a RutasData element to the collection
            /// </summary>
            /// <param name="ruta"></param>
            public void Add(RutasData ruta)
            {
                _RutasCollection.Add(ruta);
            }

            /// <summary>
            /// Removes a RutasData element from the collection
            /// </summary>
            /// <param name="ruta"></param>
            public void Remove(RutasData ruta)
            {
                _RutasCollection.Remove(ruta);
            }

            /// <summary>
            /// Removes all records from the collection
            /// </summary>
            public void RemoveAll()
            {
                _RutasCollection.Clear();
            }

            /// <summary>
            /// Inserts a RutasData element into the collection
            /// at the given rowId position
            /// </summary>
            /// <param name="rowId"></param>
            /// <param name="ruta"></param>
            public void Insert(int rowId, RutasData ruta)
            {
                _RutasCollection.Insert(rowId, ruta);
            }
        }

        /// <summary>
        /// RutasData class
        /// </summary>
        // Table row record for the screen interface
        public class RutasData
        {
            public int Id { get; set; }
            public bool Activa { get; set; }
            public string Ruta { get; set; }
        }
    }

    and in my page loaded event I do this to populate the gridview:

    // Gets the data from the Rutas table
    var tbl_Rutas = Accessor.GetRutasTable(); // This method returns the entire table
    foreach (var ruta in tbl_Rutas)
    {
        _RutasCollection.Add(new RutasData { Id = (int)ruta.Id, Ruta = ruta.Ruta, Activa = (bool)ruta.Activa });
    }
    // Binds the data to the RutasCollection provider object
    lstViewRutas.ItemsSource = _RutasCollection.GetCollection();

    Everything is OK, but now I would like to check/uncheck all checkboxes in the gridview column when I press one button or the other. How can I do this? Something like this? I receive an error that says I cannot modify the ItemsSource property.

    private void btnCheckAll_Click(object sender, RoutedEventArgs e)
    {
        // Update the data object bound to the gridview
        ObservableCollection<RutasData> listas = _RutasCollection.GetCollection();
        foreach (var lst in listas)
        {
            ((RutasData)lst).Activa = true;
        }
        // Update the UI with the new values
        lstViewRutas.ItemsSource = _RutasCollection.GetCollection();
    }

    Thanks!
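    The two pieces missing from the snippet above are change notification and leaving the binding alone: if RutasData raises PropertyChanged when Activa changes, the already-bound checkboxes update by themselves, and the ItemsSource never needs to be reassigned (reassigning it while it is bound is what triggers the error). A minimal sketch of that approach, assuming the rest of the class stays as posted:

    using System.ComponentModel;

    public class RutasData : INotifyPropertyChanged
    {
        private bool _activa;

        public int Id { get; set; }
        public string Ruta { get; set; }

        public bool Activa
        {
            get { return _activa; }
            set
            {
                if (_activa == value) return;
                _activa = value;
                OnPropertyChanged("Activa");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }

    With that in place the click handlers only need to touch the data:

    private void btnCheckAll_Click(object sender, RoutedEventArgs e)
    {
        // Toggling the model raises PropertyChanged, which updates each CheckBox;
        // no ItemsSource reassignment is required.
        foreach (RutasData ruta in _RutasCollection.GetCollection())
            ruta.Activa = true;
    }

    CheckBox.IsChecked binds two-way by default, so the existing {Binding Path=Activa} keeps working in the other direction as well.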

    Read the article

  • Using AES encryption in .NET - CryptographicException saying the padding is invalid and cannot be removed

    - by Jake Petroules
    I wrote some AES encryption code in C# and I am having trouble getting it to encrypt and decrypt properly. If I enter "test" as the passphrase and "This data must be kept secret from everyone!" I receive the following exception: System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed. at System.Security.Cryptography.RijndaelManagedTransform.DecryptData(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount, Byte[]& outputBuffer, Int32 outputOffset, PaddingMode paddingMode, Boolean fLast) at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) at System.Security.Cryptography.CryptoStream.FlushFinalBlock() at System.Security.Cryptography.CryptoStream.Dispose(Boolean disposing) at System.IO.Stream.Close() at System.IO.Stream.Dispose() ... And if I enter something less than 16 characters I get no output. I believe I need some special handling in the encryption since AES is a block cipher, but I'm not sure exactly what that is, and I wasn't able to find any examples on the web showing how. Here is my code: using System; using System.IO; using System.Security.Cryptography; using System.Text; public static class DatabaseCrypto { public static EncryptedData Encrypt(string password, string data) { return DatabaseCrypto.Transform(true, password, data, null, null) as EncryptedData; } public static string Decrypt(string password, EncryptedData data) { return DatabaseCrypto.Transform(false, password, data.DataString, data.SaltString, data.MACString) as string; } private static object Transform(bool encrypt, string password, string data, string saltString, string macString) { using (AesManaged aes = new AesManaged()) { aes.Mode = CipherMode.CBC; aes.Padding = PaddingMode.PKCS7; int key_len = aes.KeySize / 8; int iv_len = aes.BlockSize / 8; const int salt_size = 8; const int iterations = 8192; byte[] salt = encrypt ? new Rfc2898DeriveBytes(string.Empty, salt_size).Salt : Convert.FromBase64String(saltString); byte[] bc_key = new Rfc2898DeriveBytes("BLK" + password, salt, iterations).GetBytes(key_len); byte[] iv = new Rfc2898DeriveBytes("IV" + password, salt, iterations).GetBytes(iv_len); byte[] mac_key = new Rfc2898DeriveBytes("MAC" + password, salt, iterations).GetBytes(16); aes.Key = bc_key; aes.IV = iv; byte[] rawData = encrypt ? Encoding.UTF8.GetBytes(data) : Convert.FromBase64String(data); using (ICryptoTransform transform = encrypt ? aes.CreateEncryptor() : aes.CreateDecryptor()) using (MemoryStream memoryStream = encrypt ? new MemoryStream() : new MemoryStream(rawData)) using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, encrypt ? 
CryptoStreamMode.Write : CryptoStreamMode.Read)) { if (encrypt) { cryptoStream.Write(rawData, 0, rawData.Length); return new EncryptedData(salt, mac_key, memoryStream.ToArray()); } else { byte[] originalData = new byte[rawData.Length]; int count = cryptoStream.Read(originalData, 0, originalData.Length); return Encoding.UTF8.GetString(originalData, 0, count); } } } } } public class EncryptedData { public EncryptedData() { } public EncryptedData(byte[] salt, byte[] mac, byte[] data) { this.Salt = salt; this.MAC = mac; this.Data = data; } public EncryptedData(string salt, string mac, string data) { this.SaltString = salt; this.MACString = mac; this.DataString = data; } public byte[] Salt { get; set; } public string SaltString { get { return Convert.ToBase64String(this.Salt); } set { this.Salt = Convert.FromBase64String(value); } } public byte[] MAC { get; set; } public string MACString { get { return Convert.ToBase64String(this.MAC); } set { this.MAC = Convert.FromBase64String(value); } } public byte[] Data { get; set; } public string DataString { get { return Convert.ToBase64String(this.Data); } set { this.Data = Convert.FromBase64String(value); } } } static void ReadTest() { Console.WriteLine("Enter password: "); string password = Console.ReadLine(); using (StreamReader reader = new StreamReader("aes.cs.txt")) { EncryptedData enc = new EncryptedData(); enc.SaltString = reader.ReadLine(); enc.MACString = reader.ReadLine(); enc.DataString = reader.ReadLine(); Console.WriteLine("The decrypted data was: " + DatabaseCrypto.Decrypt(password, enc)); } } static void WriteTest() { Console.WriteLine("Enter data: "); string data = Console.ReadLine(); Console.WriteLine("Enter password: "); string password = Console.ReadLine(); EncryptedData enc = DatabaseCrypto.Encrypt(password, data); using (StreamWriter stream = new StreamWriter("aes.cs.txt")) { stream.WriteLine(enc.SaltString); stream.WriteLine(enc.MACString); stream.WriteLine(enc.DataString); Console.WriteLine("The encrypted data was: " + enc.DataString); } }
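    One likely culprit, judging from the code as posted: on the encrypt path, memoryStream.ToArray() is called while the CryptoStream is still open, so the final padded block has not been written yet. Decrypting that truncated ciphertext is exactly what produces "Padding is invalid and cannot be removed", and inputs shorter than one 16-byte AES block encrypt to nothing at all. A sketch of the encrypt branch with the final block flushed before the buffer is read back, using the same names as the code above:

    using (ICryptoTransform transform = aes.CreateEncryptor())
    using (MemoryStream memoryStream = new MemoryStream())
    using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
    {
        cryptoStream.Write(rawData, 0, rawData.Length);
        // Pad and write the last block before reading the buffer.
        cryptoStream.FlushFinalBlock();
        return new EncryptedData(salt, mac_key, memoryStream.ToArray());
    }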

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies the features of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a more consistent whole. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But RPC style calls, which are common with AJAX callbacks in Web applications, have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks.

    RPC vs. 'Proper' REST

    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls tend to simply map a server side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both the RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

    Action Routing for RPC Style Calls

    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
    Thankfully Web API also supports action based routing, which allows you to create RPC style endpoints fairly easily:

    RouteTable.Routes.MapHttpRoute(
        name: "AlbumRpcApiAction",
        routeTemplate: "albums/{action}/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumApi",
            action = "GetAlbums"
        }
    );

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

    [HttpPost]
    public string PostAlbum(Album album)
    {
        return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
    }

    There are actually two ways to call this endpoint: albums/PostAlbum

    Using the Model Binder with plain POST values

    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

    $.ajax({
        url: "albums/PostAlbum",
        type: "POST",
        data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
        success: function (result) { alert(result); },
        error: function (xhr, status, p3, p4) {
            var err = "Error " + " " + status + " " + p3;
            if (xhr.responseText && xhr.responseText[0] == "{")
                err = JSON.parse(xhr.responseText).message;
            alert(err);
        }
    });

    Here's what the POST data looks like for this request: The model binder and its straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter

    The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first.

    album = { AlbumName: "PowerAge", Entered: new Date(1977, 0, 1) }
    $.ajax({
        url: "albums/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify(album),
        success: function (result) { alert(result); }
    });

    Here the data is sent using a JSON object rather than form data, and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date. I provided the date as a plain string rather than a JavaScript date value, and the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller

    Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model.
    But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:

    [HttpPost]
    public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward either with WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work:

    Use both POST *and* QueryString Parameters in Conjunction

    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with: /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types.

    Use a single Object that wraps the two Parameters

    If you go by service based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have a xxxRequest and a xxxResponse class that wraps the inputs and outputs. Here's what this method might look like:

    public PostAlbumResponse PostAlbum(PostAlbumRequest request)
    {
        var album = request.Album;
        var userToken = request.UserToken;
        return new PostAlbumResponse()
        {
            IsSuccess = true,
            Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
        };
    }

    with these support types:

    public class PostAlbumRequest
    {
        public Album Album { get; set; }
        public User User { get; set; }
        public string UserToken { get; set; }
    }

    public class PostAlbumResponse
    {
        public string Result { get; set; }
        public bool IsSuccess { get; set; }
        public string ErrorMessage { get; set; }
    }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

    var album = { AlbumName: "PowerAge", Entered: "1/1/1977" }
    var user = { Name: "Rick" }
    var userToken = "sekkritt";
    $.ajax({
        url: "samples/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
        success: function (result) { alert(result.Result); }
    });

    I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually can use.

    Use JObject to parse multiple Property Values out of an Object

    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service you always passed an object which contained properties for each parameter: { parm1: Value, parm2: Value2 }. WCF REST/ASP.NET AJAX would then parse the top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:

    [HttpPost]
    public string PostAlbum(JObject jsonData)
    {
        dynamic json = jsonData;
        JObject jalbum = json.Album;
        JObject juser = json.User;
        string token = json.UserToken;
        var album = jalbum.ToObject<Album>();
        var user = juser.ToObject<User>();
        return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
    }

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

    var album = { AlbumName: "PowerAge", Entered: "1/1/1977" }
    var user = { Name: "Rick" }
    var userToken = "sekkritt";
    $.ajax({
        url: "samples/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
        success: function (result) { alert(result); }
    });

    Summary

    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but just knowing what options are available to simulate this behavior is good to know. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters or a single content parameter at least, along with some identifier parameters that can be passed on the querystring. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.

    Read the article

  • ASP.NET MVC Checkbox Group

    - by Greg Ogle
    I am trying to formulate a work-around for the lack of a "checkbox group" in ASP.NET MVC. The typical way to implement this is to have checkboxes of the same name, each with the value it represents.

    <input type="checkbox" name="n" value=1 />
    <input type="checkbox" name="n" value=2 />
    <input type="checkbox" name="n" value=3 />

    When submitted, the form comma-delimits all checked values under the request item "n", so Request["n"] == "1,2,3" if all three are checked. In ASP.NET MVC, you can have a parameter of n as an array to accept this post:

    public ActionResult ActionName( int[] n ) { ... }

    All of the above works fine. The problem I have is that when validation fails, the checkboxes are not restored to their checked state. Any suggestions?

    Problem Code: (I started with the default ASP.NET MVC project)

    Controller

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            var t = getTestModel("First");
            return View(t);
        }

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Index(TestModelView t)
        {
            if (String.IsNullOrEmpty(t.TextBoxValue))
                ModelState.AddModelError("TextBoxValue", "TextBoxValue required.");
            var newView = getTestModel("Next");
            return View(newView);
        }

        private TestModelView getTestModel(string prefix)
        {
            var t = new TestModelView();
            t.Checkboxes = new List<CheckboxInfo>()
            {
                new CheckboxInfo(){Text = prefix + "1", Value="1", IsChecked=false},
                new CheckboxInfo(){Text = prefix + "2", Value="2", IsChecked=false}
            };
            return t;
        }
    }

    public class TestModelView
    {
        public string TextBoxValue { get; set; }
        public List<CheckboxInfo> Checkboxes { get; set; }
    }

    public class CheckboxInfo
    {
        public string Text { get; set; }
        public string Value { get; set; }
        public bool IsChecked { get; set; }
    }

    ASPX

    <% using( Html.BeginForm() ){ %>
    <p><%= Html.ValidationSummary() %></p>
    <p><%= Html.TextBox("TextBoxValue")%></p>
    <p><% int i = 0; foreach (var cb in Model.Checkboxes) { %>
        <input type="checkbox" name="Checkboxes[<%=i%>]" value="<%= Html.Encode(cb.Value) %>" <%=cb.IsChecked ? "checked=\"checked\"" : String.Empty %> /><%= Html.Encode(cb.Text)%><br />
    <% i++; } %></p>
    <p><input type="submit" value="submit" /></p>
    <% } %>

    Working Code

    Controller

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Index(TestModelView t)
    {
        if (String.IsNullOrEmpty(t.TextBoxValue))
        {
            ModelState.AddModelError("TextBoxValue", "TextBoxValue required.");
            return View(t);
        }
        var newView = getTestModel("Next");
        return View(newView);
    }

    ASPX

    int i = 0;
    foreach (var cb in Model.Checkboxes) { %>
        <input type="checkbox" name="Checkboxes[<%=i%>].IsChecked" <%=cb.IsChecked ? "checked=\"checked\"" : String.Empty %> value="true" />
        <input type="hidden" name="Checkboxes[<%=i%>].IsChecked" value="false" />
        <input type="hidden" name="Checkboxes[<%=i%>].Value" value="<%= cb.Value %>" />
        <input type="hidden" name="Checkboxes[<%=i%>].Text" value="<%= cb.Text %>" />
        <%= Html.Encode(cb.Text)%><br />
    <% i++; } %></p>
    <p><input type="submit" value="submit" /></p>

    Of course something similar could be done with Html Helpers, but this works.
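    For the Html Helper variant mentioned at the end, here is a sketch of what wrapping the checkbox-plus-hidden-fields pattern in an extension method could look like, assuming MVC 2 or later (for MvcHtmlString); the method name and signature are illustrative, not an existing API:

    using System.Text;
    using System.Web.Mvc;

    public static class CheckboxGroupExtensions
    {
        // Renders one item of the "Checkboxes[i].*" pattern from the working code above.
        public static MvcHtmlString CheckboxGroupItem(this HtmlHelper html, string name, int index, CheckboxInfo cb)
        {
            string prefix = string.Format("{0}[{1}]", name, index);
            var sb = new StringBuilder();
            sb.AppendFormat("<input type=\"checkbox\" name=\"{0}.IsChecked\" value=\"true\" {1}/>",
                prefix, cb.IsChecked ? "checked=\"checked\" " : string.Empty);
            // The hidden false value guarantees IsChecked is posted even when unchecked.
            sb.AppendFormat("<input type=\"hidden\" name=\"{0}.IsChecked\" value=\"false\" />", prefix);
            sb.AppendFormat("<input type=\"hidden\" name=\"{0}.Value\" value=\"{1}\" />", prefix, html.Encode(cb.Value));
            sb.AppendFormat("<input type=\"hidden\" name=\"{0}.Text\" value=\"{1}\" />", prefix, html.Encode(cb.Text));
            sb.Append(html.Encode(cb.Text));
            return MvcHtmlString.Create(sb.ToString());
        }
    }

    In the view, the loop body then collapses to <%= Html.CheckboxGroupItem("Checkboxes", i, cb) %>.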

    Read the article

  • Using Image Source with big images in WPF

    - by xyzzer
    I am working on an application that allows users to manipulate multiple images by using ItemsControl. I started running some tests and found that the app has problems displaying some big images - ie. it did not work with the high resolution (21600x10800), 20MB images from http://earthobservatory.nasa.gov/Features/BlueMarble/BlueMarble_monthlies.php, though it displays the 6200x6200, 60MB Hubble telescope image from http://zebu.uoregon.edu/hudf/hudf.jpg just fine. The original solution just specified an Image control with a Source property pointing at a file on a disk (through a binding). With the Blue Marble file - the image would just not show up. Now this could be just a bug hidden somewhere deep in the funky MVVM + XAML implementation - the visual tree displayed by Snoop goes like: Window/Border/AdornerDecorator/ContentPresenter/Grid/Canvas/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Border/Grid/ContentPresenter/UserControl/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Viewbox/ContainerVisual/UserControl/Border/ContentPresenter/Grid/Grid/ItemsControl/Border/ItemsPresenter/Canvas/ContentPresenter/Grid/Grid/ContentPresenter/Image... Now debug this! WPF can be crazy like that... Anyway, it turned out that if I create a simple WPF application - the images load just fine. I tried finding out the root cause, but I don't want to spend weeks on it. I figured the right thing to do might be to use a converter to scale the images down - this is what I have done: ImagePath = @"F:\Astronomical\world.200402.3x21600x10800.jpg"; TargetWidth = 2800; TargetHeight = 1866; and <Image> <Image.Source> <MultiBinding Converter="{StaticResource imageResizingConverter}"> <MultiBinding.Bindings> <Binding Path="ImagePath"/> <Binding RelativeSource="{RelativeSource Self}" /> <Binding Path="TargetWidth"/> <Binding Path="TargetHeight"/> </MultiBinding.Bindings> </MultiBinding> </Image.Source> </Image> and public class ImageResizingConverter : MarkupExtension, IMultiValueConverter { public Image TargetImage { get; set; } public string SourcePath { get; set; } public int DecodeWidth { get; set; } public int DecodeHeight { get; set; } public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture) { this.SourcePath = values[0].ToString(); this.TargetImage = (Image)values[1]; this.DecodeWidth = (int)values[2]; this.DecodeHeight = (int)values[3]; return DecodeImage(); } private BitmapImage DecodeImage() { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.DecodePixelWidth = (int)DecodeWidth; bi.DecodePixelHeight = (int)DecodeHeight; bi.UriSource = new Uri(SourcePath); bi.EndInit(); return bi; } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture) { throw new Exception("The method or operation is not implemented."); } public override object ProvideValue(IServiceProvider serviceProvider) { return this; } } Now this works fine, except for one "little" problem. When you just specify a file path in Image.Source - the application actually uses less memory and works faster than if you use BitmapImage.DecodePixelWidth. Plus with Image.Source if you have multiple Image controls that point to the same image - they only use as much memory as if only one image was loaded. With the BitmapImage.DecodePixelWidth solution - each additional Image control uses more memory and each of them uses more than when just specifying Image.Source. 
Perhaps WPF somehow caches these images in compressed form while if you specify the decoded dimensions - it feels like you get an uncompressed image in memory, plus it takes 6 times the time (perhaps without it the scaling is done on the GPU?), plus it feels like the original high resolution image also gets loaded and takes up space. If I just scale the image down, save it to a temporary file and then use Image.Source to point at the file - it will probably work, but it will be pretty slow and it will require handling cleanup of the temporary file. If I could detect an image that does not get loaded properly - maybe I could only scale it down if I need to, but Image.ImageFailed never gets triggered. Maybe it has something to do with the video memory and this app just using more of it with the deep visual tree, opacity masks etc. Actual question: How can I load big images as quickly as Image.Source option does it, without using more memory for additional copies and additional memory for the scaled down image if I only need them at a certain resolution lower than original? Also, I don't want to keep them in memory if no Image control is using them anymore.
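    On the caching point: one pattern that may help, sketched below, is to decode once with BitmapCacheOption.OnLoad, freeze the result, and hand the same frozen bitmap to every Image control; the cache dictionary and helper name here are illustrative, not part of any WPF API. Frozen bitmaps are immutable, so WPF can share them across controls, and holding them via WeakReference lets the decoded copy be collected once no Image uses it anymore.

    using System;
    using System.Collections.Generic;
    using System.Windows.Media.Imaging;

    static class ScaledImageCache
    {
        static readonly Dictionary<string, WeakReference> cache = new Dictionary<string, WeakReference>();

        public static BitmapImage Get(string path, int decodeWidth)
        {
            string key = path + "|" + decodeWidth;
            WeakReference wr;
            if (cache.TryGetValue(key, out wr) && wr.IsAlive)
                return (BitmapImage)wr.Target;

            var bi = new BitmapImage();
            bi.BeginInit();
            bi.CacheOption = BitmapCacheOption.OnLoad;  // decode now and release the file
            bi.DecodePixelWidth = decodeWidth;          // width only, so aspect ratio is preserved
            bi.UriSource = new Uri(path);
            bi.EndInit();
            bi.Freeze();                                // immutable, shareable across controls and threads

            cache[key] = new WeakReference(bi);
            return bi;
        }
    }

    Each Image control bound to the same path and width then receives the same decoded instance instead of one copy per control.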

    Read the article

  • How to populate a drop down list in Spring MVC

    - by GigaPr
    Hi, I would like to populate a drop-down list on a JSP page. My page looks like: <form:form method="POST" action="addRss.htm" commandName="addNewRss" cssClass="addUserForm"> <div class="floatL"> <div class="padding5"> <div class="fieldContainer"> <strong>Title:</strong>&nbsp; </div> <form:errors path="title" cssClass="error"/> <form:input path="title" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Description:</strong>&nbsp; </div> <form:errors path="description" cssClass="error"/> <form:input path="description" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Language:</strong>&nbsp; </div> <form:errors path="language" cssClass="error"/> <form:select path="language" cssClass="textArea" /> </div> </div> <div class="floatR"> <div class="padding5"> <div class="fieldContainer"> <strong>Link:</strong>&nbsp; </div> <form:errors path="link" cssClass="error"/> <form:input path="link" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Url:</strong>&nbsp; </div> <form:errors path="url" cssClass="error"/> <form:input path="url" cssClass="textArea" /> </div> <div class="padding5"> <div class="fieldContainer"> <strong>Url</strong>&nbsp; </div> <form:errors path="url" cssClass="error"/> <form:input path="url" cssClass="textArea" /> </div> </div> <input type="submit" class="floatR" value="Add New Rss"> </form:form>

    and my controller:

    public class AddRssController extends BaseController
    {
        private static final String[] LANGUAGES = { "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "DC", "FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC", "SD", "TN", "TX", "UT", "VA", "VT", "WA", "WV", "WI", "WY" };

        public AddRssController()
        {
            setCommandClass(RSS.class);
            setCommandName("addNewRss");
        }

        @Override
        protected Object formBackingObject(HttpServletRequest request) throws Exception
        {
            RSS rantForm = (RSS) super.formBackingObject(request);
            // rantForm.setVehicle(new Vehicle());
            return rantForm;
        }

        @Override
        protected Map referenceData(HttpServletRequest request) throws Exception
        {
            Map referenceData = new HashMap();
            referenceData.put("language", LANGUAGES);
            return referenceData;
        }

        @Override
        protected ModelAndView onSubmit(Object command, BindException bindException) throws Exception
        {
            RSS rss = (RSS) command;
            rssServiceImplementation.add(rss);
            return new ModelAndView(getSuccessView());
        }
    }

    and my BaseController:

    public class BaseController extends SimpleFormController implements Controller
    {
        public UserServiceImplementation userServiceImplementation;

        public UserServiceImplementation getUserServiceImplementation() { return userServiceImplementation; }

        public void setUserServiceImplementation(UserServiceImplementation userServiceImplementation) { this.userServiceImplementation = userServiceImplementation; }

        public RssServiceImplementation rssServiceImplementation;

        public RssServiceImplementation getRssServiceImplementation() { return rssServiceImplementation; }

        public void setRssServiceImplementation(RssServiceImplementation rssServiceImplementation) { this.rssServiceImplementation = rssServiceImplementation; }
    }

    But it doesn't work. Any suggestions?

    Read the article

  • Custom Paging for GridView in an UpdatePanel not firing PageIndexChanging event

    - by JeffCren
    I have a GridView that uses custom paging inside an UpdatePanel (so that the paging and sorting of the gridview don't cause postback). The sorting works fine, but the paging doesn't. The PageIndexChanging event is never called. This is the aspx code: <asp:UpdatePanel runat="server" ID="upSearchResults" ChildrenAsTriggers="true" UpdateMode="Always"> <ContentTemplate> <asp:GridView ID="gvSearchResults" runat="server" AllowSorting="true" AutoGenerateColumns="false" AllowPaging="true" PageSize="10" OnDataBound="gvSearchResults_DataBound" OnRowDataBound ="gvSearchResults_RowDataBound" OnSorting="gvSearchResults_Sorting" OnPageIndexChanging="gvSearchResults_PageIndexChanging" Width="100%" EnableSortingAndPagingCallbacks="false"> <Columns> <asp:TemplateField HeaderText="Select" HeaderStyle-HorizontalAlign="Center"> <ItemTemplate> <asp:HyperLink ID="lnkAdd" runat="server">Add</asp:HyperLink> <asp:HiddenField ID="hfPersonId" runat="server" Value='<%# Eval("Id") %>'/> </ItemTemplate> </asp:TemplateField> <asp:BoundField HeaderText="First Name" DataField="FirstName" HeaderStyle-HorizontalAlign="Center" ItemStyle-HorizontalAlign="Center" SortExpression="FirstName" /> <asp:BoundField HeaderText="Last Name" DataField="LastName" HeaderStyle-HorizontalAlign="Center" ItemStyle-HorizontalAlign="Center" SortExpression="LastName" /> <asp:TemplateField HeaderText="Phone Number" HeaderStyle-HorizontalAlign="Center" ItemStyle-HorizontalAlign="Center" > <ItemTemplate> <asp:Label ID="lblPhone" runat="server" Text="" /> </ItemTemplate> </asp:TemplateField> </Columns> <PagerTemplate> <table width="100%" class="pager"> <tr> <td> </td> </tr> </table> </PagerTemplate> </asp:GridView> <div class="btnContainer"> <div class="btn btn-height_small btn-style_dominant"> <asp:LinkButton ID="lbtNewRecord" runat="server" OnClick="lbtNewRecord_Click"><span>Create New Record</span></asp:LinkButton> </div> <div class="btn btn-height_small btn-style_subtle"> <a onclick="openParticipantModal();"><span>Cancel</span></a> </div> </div> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="gvSearchResults" EventName="PageIndexChanging" /> <asp:AsyncPostBackTrigger ControlID="gvSearchResults" EventName="Sorting" /> </Triggers> </asp:UpdatePanel> In the code behind I have a SetPaging method that is called on the GridView OnDataBound event: private void SetPaging(GridView gv) { GridViewRow row = gv.BottomPagerRow; var place = row.Cells[0]; var first = new LinkButton(); first.CommandName = "Page"; first.CommandArgument = "First"; first.Text = "First"; first.ToolTip = "First Page"; if (place != null) place.Controls.Add(first); var lbl = new Label(); lbl.Text = " "; if (place != null) place.Controls.Add(lbl); var prev = new LinkButton(); prev.CommandName = "Page"; prev.CommandArgument = "Prev"; prev.Text = "Prev"; prev.ToolTip = "Previous Page"; if (place != null) place.Controls.Add(prev); var lbl2 = new Label(); lbl2.Text = " "; if (place != null) place.Controls.Add(lbl2); for (int i = 1; i <= gv.PageCount; i++) { var btn = new LinkButton(); btn.CommandName = "Page"; btn.CommandArgument = i.ToString(); if (i == gv.PageIndex + 1) { btn.BackColor = Color.Gray; } btn.Text = i.ToString(); btn.ToolTip = "Page " + i.ToString(); if (place != null) place.Controls.Add(btn); var lbl3 = new Label(); lbl3.Text = " "; if (place != null) place.Controls.Add(lbl3); } var next = new LinkButton(); next.CommandName = "Page"; next.CommandArgument = "Next"; next.Text = "Next"; next.ToolTip = "Next Page"; if (place != null) 
place.Controls.Add(next); var lbl4 = new Label(); lbl4.Text = " "; if (place != null) place.Controls.Add(lbl4); var last = new LinkButton(); last.CommandName = "Page"; last.CommandArgument = "Last"; last.Text = "Last"; last.ToolTip = "Last Page"; if (place != null) place.Controls.Add(last); var lbl5 = new Label(); lbl5.Text = " "; if (place != null) place.Controls.Add(lbl5); } The paging works if I don't use custom paging, but I really need to use the custom paging. I can't figure out why the PageIndexChanging event isn't fired when I'm using the custom paging. Thanks, Jeff
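    A possible explanation, offered as a guess from the code shown: the pager LinkButtons are created in the DataBound handler, so after a postback they no longer exist in the control tree when the page processes the click, and the "Page" command never bubbles up to raise PageIndexChanging. A sketch of the usual workaround is to rebind on every request so the pager is rebuilt before events fire; BindGrid and GetSearchResults are hypothetical helpers standing in for whatever data access the page already does:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Rebind on every request, not just the first one, so the dynamically
        // created pager buttons exist again when the postback event is raised.
        BindGrid();
    }

    private void BindGrid()
    {
        gvSearchResults.DataSource = GetSearchResults();  // hypothetical data access call
        gvSearchResults.DataBind();                       // DataBound runs SetPaging and recreates the pager
    }

    protected void gvSearchResults_PageIndexChanging(object sender, GridViewPageEventArgs e)
    {
        gvSearchResults.PageIndex = e.NewPageIndex;
        BindGrid();
    }

    Alternatively, declaring the pager links directly in the PagerTemplate markup avoids the recreation problem entirely.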

    Read the article

  • Not catching all mouse events with wxWidgets

    - by Stef
    Hi, I am trying to catch mouse movements for a MouseOver function in an app created with Code::Blocks using the wxSmith plugin. I have stumbled upon a puzzling problem. EVT_MOUSEWHEEL calling the function in the EventTable works well, but all other macros have no result at all. And the mousewheel is not really what I want (I just used it to test...) Here is the basic problem code (mostly generated by the fantastic wxSmith plugin) MouseMain.h #include <wx/frame.h> #include <wx/statusbr.h> //*) class MouseFrame: public wxFrame { public: MouseFrame(wxWindow* parent,wxWindowID id = -1); virtual ~MouseFrame(); private: //(*Handlers(MouseFrame) void OnQuit(wxCommandEvent& event); void OnAbout(wxCommandEvent& event); void OnButton1Click(wxCommandEvent& event); void MouseOver(wxMouseEvent& event); //*) //(*Identifiers(MouseFrame) static const long ID_BUTTON1; static const long ID_STATICBITMAP1; static const long ID_PANEL1; static const long idMenuQuit; static const long idMenuAbout; static const long ID_STATUSBAR1; //*) //(*Declarations(MouseFrame) wxButton* Button1; wxStaticBitmap* StaticBitmap1; wxPanel* Panel1; wxStatusBar* StatusBar1; //*) DECLARE_EVENT_TABLE() }; #endif // MOUSEMAIN_H ...and MouseMain.cpp #include "wx_pch.h" #include "MouseMain.h" #include <wx/msgdlg.h> //(*InternalHeaders(MouseFrame) #include <wx/bitmap.h> #include <wx/intl.h> #include <wx/image.h> #include <wx/string.h> //*) //helper functions enum wxbuildinfoformat { short_f, long_f }; wxString wxbuildinfo(wxbuildinfoformat format) { wxString wxbuild(wxVERSION_STRING); if (format == long_f ) { #if defined(__WXMSW__) wxbuild << _T("-Windows"); #elif defined(__UNIX__) wxbuild << _T("-Linux"); #endif #if wxUSE_UNICODE wxbuild << _T("-Unicode build"); #else wxbuild << _T("-ANSI build"); #endif // wxUSE_UNICODE } return wxbuild; } //(*IdInit(MouseFrame) const long MouseFrame::ID_BUTTON1 = wxNewId(); const long MouseFrame::ID_STATICBITMAP1 = wxNewId(); const long MouseFrame::ID_PANEL1 = wxNewId(); const long MouseFrame::idMenuQuit = wxNewId(); const long MouseFrame::idMenuAbout = wxNewId(); const long MouseFrame::ID_STATUSBAR1 = wxNewId(); //*) BEGIN_EVENT_TABLE(MouseFrame,wxFrame) EVT_RIGHT_DCLICK(MouseFrame::MouseOver) EVT_MOUSEWHEEL(MouseFrame::MouseOver) EVT_MOTION(MouseFrame::MouseOver) EVT_RIGHT_DOWN(MouseFrame::MouseOver) //(*EventTable(MouseFrame) //*) END_EVENT_TABLE() MouseFrame::MouseFrame(wxWindow* parent,wxWindowID id) { //(*Initialize(MouseFrame) wxMenuItem* MenuItem2; wxMenuItem* MenuItem1; wxMenu* Menu1; wxMenuBar* MenuBar1; wxFlexGridSizer* FlexGridSizer1; wxMenu* Menu2; Create(parent, id, wxEmptyString, wxDefaultPosition, wxDefaultSize, wxDEFAULT_FRAME_STYLE, _T("id")); Panel1 = new wxPanel(this, ID_PANEL1, wxPoint(144,392), wxDefaultSize, wxTAB_TRAVERSAL, _T("ID_PANEL1")); FlexGridSizer1 = new wxFlexGridSizer(0, 3, 0, 0); Button1 = new wxButton(Panel1, ID_BUTTON1, _("TheButton"), wxDefaultPosition, wxDefaultSize, 0, wxDefaultValidator, _T("ID_BUTTON1")); FlexGridSizer1->Add(Button1, 1, wxALL|wxALIGN_CENTER_HORIZONTAL|wxALIGN_CENTER_VERTICAL, 5); StaticBitmap1 = new wxStaticBitmap(Panel1, ID_STATICBITMAP1, wxNullBitmap, wxDefaultPosition, wxSize(159,189), wxSUNKEN_BORDER, _T("ID_STATICBITMAP1")); FlexGridSizer1->Add(StaticBitmap1, 1, wxALL|wxALIGN_CENTER_HORIZONTAL|wxALIGN_CENTER_VERTICAL, 5); Panel1->SetSizer(FlexGridSizer1); FlexGridSizer1->Fit(Panel1); FlexGridSizer1->SetSizeHints(Panel1); MenuBar1 = new wxMenuBar(); Menu1 = new wxMenu(); MenuItem1 = new wxMenuItem(Menu1, idMenuQuit,
_("Quit\tAlt-F4"), _("Quit the application"), wxITEM_NORMAL); Menu1->Append(MenuItem1); MenuBar1->Append(Menu1, _("&File")); Menu2 = new wxMenu(); MenuItem2 = new wxMenuItem(Menu2, idMenuAbout, _("About\tF1"), _("Show info about this application"), wxITEM_NORMAL); Menu2->Append(MenuItem2); MenuBar1->Append(Menu2, _("Help")); SetMenuBar(MenuBar1); StatusBar1 = new wxStatusBar(this, ID_STATUSBAR1, 0, _T("ID_STATUSBAR1")); int __wxStatusBarWidths_1[1] = { -1 }; int __wxStatusBarStyles_1[1] = { wxSB_NORMAL }; StatusBar1->SetFieldsCount(1,__wxStatusBarWidths_1); StatusBar1->SetStatusStyles(1,__wxStatusBarStyles_1); SetStatusBar(StatusBar1); Connect(ID_BUTTON1,wxEVT_COMMAND_BUTTON_CLICKED,(wxObjectEventFunction)&MouseFrame::OnButton1Click); Connect(idMenuQuit,wxEVT_COMMAND_MENU_SELECTED,(wxObjectEventFunction)&MouseFrame::OnQuit); Connect(idMenuAbout,wxEVT_COMMAND_MENU_SELECTED,(wxObjectEventFunction)&MouseFrame::OnAbout); //*) } MouseFrame::~MouseFrame() { //(*Destroy(MouseFrame) //*) } void MouseFrame::OnQuit(wxCommandEvent& event) { Close(); } void MouseFrame::OnAbout(wxCommandEvent& event) { wxString msg = wxbuildinfo(long_f); wxMessageBox(msg, _("Welcome to...")); } void MouseFrame::OnButton1Click(wxCommandEvent& event) { } void MouseFrame::MouseOver(wxMouseEvent& event){ wxMessageBox(_("MouseOver event!")); } So my big question: Why are EVT_MOTION, EVT_RIGHT_DOWN or EVT_RIGHT_DCLICK not calling MouseFrame::MouseOver(wxMouseEvent& event) in the way EVT_MOUSEWHEEL does?

    Read the article

  • lxc containers hang after upgrade to 13.10

    - by doug123
    I have 3 lxc containers. They were all working fine on 12.10 and I upgraded the containers with do-release-upgrade on the containers to 13.04 and 13.10 and that worked great. Then I upgraded the host to 13.04 and then 13.10 and now the 3 containers hang with this: >lxc-start -n as1 -l DEBUG -o $(tty) lxc-start 1383145786.513 INFO lxc_start_ui - using rcfile /var/lib/lxc/as1/config lxc-start 1383145786.513 WARN lxc_log - lxc_log_init called with log already initialized lxc-start 1383145786.513 INFO lxc_apparmor - aa_enabled set to 1 lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/13' (7/8) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/14' (9/10) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/15' (11/12) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/17' (13/14) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/18' (15/16) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/19' (17/18) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/20' (19/20) lxc-start 1383145786.514 INFO lxc_conf - tty's configured lxc-start 1383145786.514 DEBUG lxc_start - sigchild handler set lxc-start 1383145786.514 DEBUG lxc_console - opening /dev/tty for console peer lxc-start 1383145786.514 DEBUG lxc_console - using '/dev/tty' as console lxc-start 1383145786.514 DEBUG lxc_console - 6242 got SIGWINCH fd 25 lxc-start 1383145786.514 DEBUG lxc_console - set winsz dstfd:22 cols:177 rows:53 lxc-start 1383145786.514 INFO lxc_start - 'as1' is initialized lxc-start 1383145786.522 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp lxc-start 1383145786.524 DEBUG lxc_conf - mac address of host interface 'vethB4L35W' changed to private fe:7c:96:a0:ae:29 lxc-start 1383145786.525 DEBUG lxc_conf - instanciated veth 'vethB4L35W/vethVC61K2', index is '26' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.529 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.555 DEBUG lxc_conf - move 'eth0' to '6249' lxc-start 1383145786.555 INFO lxc_conf - 'as1' hostname has been setup lxc-start 1383145786.575 DEBUG lxc_conf - 'eth0' has been setup lxc-start 1383145786.575 INFO lxc_conf - network has been setup lxc-start 1383145786.575 INFO lxc_conf - looking at .44 42 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /. lxc-start 1383145786.575 INFO lxc_conf - looking at .52 44 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.575 INFO lxc_conf - looking at .61 52 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.575 INFO lxc_conf - looking at .68 44 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run. lxc-start 1383145786.575 INFO lxc_conf - looking at .69 68 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/lock. 
lxc-start 1383145786.575 INFO lxc_conf - looking at .72 68 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/shm. lxc-start 1383145786.575 INFO lxc_conf - looking at .73 68 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.575 INFO lxc_conf - looking at .76 44 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.575 INFO lxc_conf - looking at .77 76 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.575 INFO lxc_conf - looking at .78 77 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.575 INFO lxc_conf - looking at .79 77 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.575 INFO lxc_conf - looking at .80 77 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.575 INFO lxc_conf - looking at .81 77 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.575 INFO lxc_conf - looking at .82 77 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.575 INFO lxc_conf - looking at .83 77 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.575 INFO lxc_conf - looking at .84 77 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.575 INFO lxc_conf - looking at .85 77 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.575 INFO lxc_conf - looking at .94 77 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.575 INFO lxc_conf - looking at .95 77 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.575 INFO lxc_conf - looking at .96 76 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.575 INFO lxc_conf - looking at .98 76 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.575 INFO lxc_conf - looking at .101 76 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.575 INFO lxc_conf - looking at .102 76 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . 
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.575 INFO lxc_conf - looking at .103 44 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.575 INFO lxc_conf - looking at .104 44 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /data. lxc-start 1383145786.575 INFO lxc_conf - looking at .105 44 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.575 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.576 DEBUG lxc_conf - mounted '/data/srv/lxc/as1' on '/usr/lib/x86_64-linux-gnu/lxc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//dev/pts', type 'devpts' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//proc', type 'proc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//sys', type 'sysfs' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//run', type 'tmpfs' lxc-start 1383145786.576 INFO lxc_conf - mount points have been setup lxc-start 1383145786.577 INFO lxc_conf - console has been setup lxc-start 1383145786.577 INFO lxc_conf - 8 tty(s) has been setup lxc-start 1383145786.577 INFO lxc_conf - rootfs path is ./data/srv/lxc/as1., mount is ./usr/lib/x86_64-linux-gnu/lxc. lxc-start 1383145786.577 INFO lxc_apparmor - I am 1, /proc/self points to 1 lxc-start 1383145786.577 DEBUG lxc_conf - created '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' directory lxc-start 1383145786.577 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' lxc-start 1383145786.577 DEBUG lxc_conf - pivot_root syscall to '/usr/lib/x86_64-linux-gnu/lxc' successful lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev/pts' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/lock' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/shm' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/user' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuset' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpu' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuacct' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/memory' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/devices' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/freezer' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/blkio' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/perf_event' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/hugetlb' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/systemd' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/fuse/connections' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/debug' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/security' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/pstore' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/proc' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/data' lxc-start 1383145786.577 
DEBUG lxc_conf - umounted '/lxc_putold/boot' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold' lxc-start 1383145786.577 INFO lxc_conf - created new pts instance lxc-start 1383145786.578 DEBUG lxc_conf - drop capability 'sys_boot' (22) lxc-start 1383145786.578 DEBUG lxc_conf - capabilities have been setup lxc-start 1383145786.578 NOTICE lxc_conf - 'as1' is setup. lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.578 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.578 INFO lxc_apparmor - setting up apparmor lxc-start 1383145786.578 INFO lxc_apparmor - changed apparmor profile to lxc-container-default lxc-start 1383145786.578 NOTICE lxc_start - exec'ing '/sbin/init' lxc-start 1383145786.578 INFO lxc_conf - looking at .15 20 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.578 INFO lxc_conf - looking at .16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.578 INFO lxc_conf - looking at .17 20 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.578 INFO lxc_conf - looking at .18 17 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.578 INFO lxc_conf - looking at .19 20 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /run. lxc-start 1383145786.578 INFO lxc_conf - looking at .20 1 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.578 INFO lxc_conf - now p is . /. lxc-start 1383145786.578 INFO lxc_conf - looking at .22 15 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.578 INFO lxc_conf - looking at .23 15 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.578 INFO lxc_conf - looking at .24 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.579 INFO lxc_conf - looking at .25 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.579 INFO lxc_conf - looking at .26 19 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/lock. lxc-start 1383145786.579 INFO lxc_conf - looking at .27 19 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/shm. 
lxc-start 1383145786.579 INFO lxc_conf - looking at .28 22 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.579 INFO lxc_conf - looking at .29 19 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.579 INFO lxc_conf - looking at .30 15 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.579 INFO lxc_conf - looking at .31 22 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.579 INFO lxc_conf - looking at .32 22 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.579 INFO lxc_conf - looking at .33 22 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.579 INFO lxc_conf - looking at .34 22 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.579 INFO lxc_conf - looking at .35 22 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.579 INFO lxc_conf - looking at .36 22 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.579 INFO lxc_conf - looking at .37 22 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.579 INFO lxc_conf - looking at .38 22 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.579 INFO lxc_conf - looking at .39 20 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.579 INFO lxc_conf - now p is . /data. lxc-start 1383145786.579 INFO lxc_conf - looking at .40 20 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.579 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.579 INFO lxc_conf - looking at .41 22 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.579 NOTICE lxc_start - '/sbin/init' started with pid '6249' lxc-start 1383145786.579 WARN lxc_start - invalid pid for SIGCHLD <4>init: ureadahead main process (7) terminated with status 5 <4>init: console-font main process (94) terminated with status 1 And it will just sit there like that for hours at least. The container becomes pingable but I can't ssh and if I try lxc-console -n as1 I get a blank screen. If I do lxc-stop -n as1 or ^C in the window where it has hung I get: ^CTERM environment variable not set. 
<4>init: plymouth-upstart-bridge main process (192) terminated with status 1 <4>init: hwclock-save main process (187) terminated with status 70 * Asking all remaining processes to terminate... ...done. * All processes ended within 1 seconds... ...done. * Deactivating swap... ...fail! mount: cannot mount block device /dev/md2 read-only * Will now restart But after 20 minutes it hasn't restarted. Any ideas why these containers are hanging?
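A useful next step is to capture the container's boot at the most verbose log level and watch a tty from a second terminal. A minimal sketch using standard lxc-start/lxc-console options, with an example log path:

    lxc-start -n as1 -l TRACE -o /var/log/lxc-as1-trace.log   # full trace log to a file
    lxc-console -n as1 -t 1                                   # from a second terminal, attach to tty 1

Comparing such a trace from a hanging container against one from a container that still boots can narrow down where /sbin/init stalls.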

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java sample at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html, but when I click on the button to create the activity, I get a dialog that says "Connection problem". My manifest requests the Internet permission, and I understand the recognizer passes the audio to the Google servers. Do I need to do anything else to use this? Code below.
UPDATE 2: Thanks to Steve, I have been able to install the USB driver and debug the app directly on my Droid. Here is the LogCat output from clicking on my mic button:
03-08 18:36:45.686: INFO/ActivityManager(1017): Starting activity: Intent { act=android.speech.action.RECOGNIZE_SPEECH cmp=com.google.android.voicesearch/.IntentApiActivity (has extras) }
03-08 18:36:45.686: WARN/ActivityManager(1017): Activity is launching as a new task, so cancelling activity result.
03-08 18:36:45.787: DEBUG/NetworkLocationProvider(1017): setMinTime: 120000
03-08 18:36:45.889: INFO/ActivityManager(1017): Displayed activity com.google.android.voicesearch/.IntentApiActivity: 135 ms (total 135 ms)
03-08 18:36:45.905: DEBUG/NetworkLocationProvider(1017): onCellLocationChanged [802,0,0,4192,3]
03-08 18:36:45.951: INFO/MicrophoneInputStream(1429): Starting voice recognition with audio source VOICE_RECOGNITION
03-08 18:36:45.998: DEBUG/AudioHardwareMot(990): Codec sampling rate already 16000
03-08 18:36:46.092: INFO/RecognitionService(1429): ssfe url=http://www.google.com/m/voice-search
03-08 18:36:46.092: WARN/RecognitionService(1429): required parameter 'calling_package' is missing in IntentAPI request
03-08 18:36:46.115: DEBUG/AudioHardwareMot(990): Codec sampling rate already 16000
03-08 18:36:46.131: WARN/InputManagerService(1017): Starting input on non-focused client com.android.internal.view.IInputMethodClient$Stub$Proxy@4487d240 (uid=10090 pid=3132)
03-08 18:36:46.131: WARN/IInputConnectionWrapper(3132): showStatusIcon on inactive InputConnection
03-08 18:36:46.248: WARN/MediaPlayer(1429): info/warning (1, 44)
03-08 18:36:46.334: DEBUG/dalvikvm(3206): GC freed 3682 objects / 369416 bytes in 293ms
03-08 18:36:46.358: WARN/MediaPlayer(1429): info/warning (1, 44)
03-08 18:36:46.412: WARN/MediaPlayer(1429): info/warning (1, 44)
03-08 18:36:46.444: WARN/MediaPlayer(1429): info/warning (1, 44)
03-08 18:36:46.475: WARN/MediaPlayer(1429): info/warning (1, 44)
03-08 18:36:46.506: WARN/MediaPlayer(1429): info/warning (1, 44)
03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
The line that concerns me is the warning about the missing parameter 'calling_package'.
UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search. However, now when I run from the emulator, I'm getting an Audio Problem message with Speak Again or Cancel buttons. It appears to make it back to onActivityResult(), but the resultCode is 0.
Here is the LogCat output:
03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) }
03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result.
03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}}
03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000
03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1
03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms
03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1)
03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded):
03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed
03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596)
03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method)
03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1)
03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection
I'm still not sure why I'm getting the Connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it's still not working.

/**
 * Fire an intent to start the speech recognition activity.
 */
private void startVoiceRecognitionActivity() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
    startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
}

/**
 * Handle the results from the recognition activity.
 */
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
        // Fill the list view with the strings the recognizer thought it could have heard
        ArrayList<String> matches = data.getStringArrayListExtra(
                RecognizerIntent.EXTRA_RESULTS);
        mList.setAdapter(new ArrayAdapter<String>(this,
                android.R.layout.simple_list_item_1, matches));
    }
    super.onActivityResult(requestCode, resultCode, data);
}
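The 'calling_package' warning in the LogCat output suggests one thing worth trying: passing the calling package explicitly in the recognizer intent. A minimal sketch, not a confirmed fix; RecognizerIntent.EXTRA_CALLING_PACKAGE is a standard extra, and the rest mirrors the code above:

private void startVoiceRecognitionActivity() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    // Identify the calling app; the log warned this parameter was missing.
    intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
    startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
}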

    Read the article

  • Webservice works with SoapSonar but not with Visual Studio Winform

    - by Rebol Tutorial
    I have generated a WSDL file with Visual Studio, which is here: http://reboltutorial.com/webservices/discordian.wsdl The implementation is a CGI script rather than a .NET Framework program, but that should not matter; that is the whole point of web services. I tested it successfully with SoapSonar. But under Visual Studio it fails with this code:

private void button1_Click(object sender, EventArgs e)
{
    RebolTutorial.ServiceSoapClient Discordian = new RebolTutorial.ServiceSoapClient("ServiceSoap");
    int year = int.Parse(this.year.Text);
    int month = int.Parse(this.month.Text);
    int day = int.Parse(this.day.Text);
    response.Text = Discordian.Discordian(year, month, day);
}

Any reason you can see? Thanks. Request below: <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://reboltutorial.com/"> <soap:Body> <tns:Discordian> <tns:year>2010</tns:year> <tns:month>5</tns:month> <tns:day>1</tns:day> </tns:Discordian> </soap:Body> </soap:Envelope> as well as the WSDL if needed: <?xml version="1.0" encoding="utf-8"?> <wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://reboltutorial.com/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" targetNamespace="http://reboltutorial.com/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> <wsdl:types> <s:schema elementFormDefault="qualified" targetNamespace="http://reboltutorial.com/"> <s:element name="Discordian"> <s:complexType> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="year" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="month" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="day" type="s:int" /> </s:sequence> </s:complexType> </s:element> <s:element name="DiscordianResponse"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="DiscordianResult" type="s:string" /> </s:sequence> </s:complexType> </s:element> </s:schema> </wsdl:types> <wsdl:message name="DiscordianSoapIn"> <wsdl:part name="parameters" element="tns:Discordian" /> </wsdl:message> <wsdl:message name="DiscordianSoapOut"> <wsdl:part name="parameters" element="tns:DiscordianResponse" /> </wsdl:message> <wsdl:portType name="ServiceSoap"> <wsdl:operation name="Discordian"> <wsdl:input message="tns:DiscordianSoapIn" /> <wsdl:output message="tns:DiscordianSoapOut" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="ServiceSoap" type="tns:ServiceSoap"> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="Discordian"> <soap:operation soapAction="http://reboltutorial.com/Discordian" style="document" /> <wsdl:input> <soap:body use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:binding name="ServiceSoap12" type="tns:ServiceSoap"> <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="Discordian"> <soap12:operation soapAction="http://reboltutorial.com/Discordian" style="document" /> <wsdl:input> <soap12:body use="literal" /> </wsdl:input> <wsdl:output> <soap12:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="Service">
<wsdl:port name="ServiceSoap" binding="tns:ServiceSoap"> <soap:address location="http://reboltutorial.com/cgi-bin/discordian.cgi" /> </wsdl:port> <wsdl:port name="ServiceSoap12" binding="tns:ServiceSoap12"> <soap12:address location="http://reboltutorial.com/cgi-bin/discordian.cgi" /> </wsdl:port> </wsdl:service> </wsdl:definitions>
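When a document/literal call works in SoapSonar but fails from a generated client, the SOAP fault detail usually says why (a namespace or soapAction mismatch, or a server-side error the proxy surfaces generically). A minimal sketch of surfacing the fault, using the same generated proxy as in the question:

private void button1_Click(object sender, EventArgs e)
{
    RebolTutorial.ServiceSoapClient discordian = new RebolTutorial.ServiceSoapClient("ServiceSoap");
    try
    {
        response.Text = discordian.Discordian(
            int.Parse(this.year.Text),
            int.Parse(this.month.Text),
            int.Parse(this.day.Text));
    }
    catch (System.ServiceModel.FaultException fault)
    {
        // The fault reason often pinpoints what the service rejected,
        // which the generic client-side message hides.
        MessageBox.Show(fault.Reason.ToString());
    }
}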

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights from both events.

CLSF - 17th October

XFS update (led by Jeff Liu)

XFS keeps up rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample of statistics for XFS/Ext4+JBD2/Btrfs via:

# git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

What made up those changes in XFS?
- Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
- Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
- User namespace support. Both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
- Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
- CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
- Readahead of log objects during recovery; this change can speed up log replay significantly.
- Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

Bitter arguments ensued in this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:
- Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe that to the XFS upstream code reviewers; they always perform serious code review as well as testing.
- Good performance for both large and small files; the claim that XFS does not work well for small files has been an old story for years.
- Best choice (maybe) for distributed PB-scale filesystems. For example, Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
- Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
- Scalability: any objection that XFS is best on this point? :)
- XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.

Misc: Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
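To make the v5 superblock point concrete, here is a minimal sketch of creating a CRC-enabled filesystem and mounting it with group and project quota enabled together, the combination the split quota inodes make possible (the device and mount point are examples):

# v5 superblock with self-describing (CRC32c) metadata
mkfs.xfs -m crc=1 /dev/sdb1
# group and project quota at the same time requires the v5 format
mount -o gquota,pquota /dev/sdb1 /mnt/scratch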
Ceph Introduction (led by Li Wang)

This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support to separate metadata and data storage and reduce file access time: a file access otherwise needs two round trips, fetching the metadata from the MDS and then getting the data from the OSD, and small-file access is further limited by network latency. The solution is that for small files they would like to store the data with the metadata, so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing but saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in a production environment for large-scale PB-level storage, but neither could give a positive answer; it looks like Ceph is not even fully rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

Improving Linux swap for flash storage (led by Shaohua Li)

Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap was designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

SWAPOUT:
- swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
- IO pattern: in most cases, swap IO comes in an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps a reclaimer do sequential IO, making it easier for the block layer to merge IO.
- TLB flush: if we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

SWAPIN: a page fault does iodepth=1 sync IO, but it's a bit wasteful to issue only a page-sized IO. The obvious solution is swap readahead.
However, the current in-kernel swap readahead is arbitrary (always 8 pages) and doesn't perform well for either random or sequential access workloads. Shaohua introduced a new use of madvise(MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement can also be done there).

SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster.

Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap-device lock. But some lock contention remains on very high speed SSDs because of the swapcache address_space lock.

Zproject (led by Bob Liu)

Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.

ZSWAP: implemented on top of the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.

ZRAM: acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the driver/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.

ZCACHE: handles file page compression, but it was removed from staging recently.

From industry (led by Tang Jie, LSI)

An LSI engineer introduced several new products to us. The first is a RAID 5/6 card that uses full stripe writes to improve performance. The second is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard mmap() system call. He said that it is very useful for database logging now.
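As a minimal sketch of that access model, assuming a hypothetical device node that exposes the NV-DRAM region (the path and mapping size are illustrative only):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical device node exposing the NV-DRAM region */
    int fd = open("/dev/nvdram0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* map the nonvolatile region into the process address space */
    char *log = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (log == MAP_FAILED) { perror("mmap"); return 1; }

    /* ordinary stores now land in nonvolatile memory */
    strcpy(log, "txn 42 committed");

    munmap(log, 4096);
    close(fd);
    return 0;
}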
NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize this kind of nonvolatile memory.

OCFS2 (led by Canquan Shen)

Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on a 32/64 node cluster, but they have also tried 128 nodes at the experimental stage. The major work they are doing is support for ATS (atomic test and set), which can work together with DLM. It looks like this idea was inspired by the VMware VMFS locking; see http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

CLK - 18th October 2013

Improving Linux Development with Better Tools (Andi Kleen)

This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:
- Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, false positives, and a concentrated effort may be needed to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; ... foo->do_foo(foo);
- Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
- Fuzzers/test suites. E.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
- Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
- Tools to read/understand source, e.g. grep/cscope, work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions mainly focused on stability and comparisons with other file systems.

Linux Containers (Feng Gao)

The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible namespaces for the future. The first is audit, though it is not yet clear whether it should be assigned to the user namespace or not. The other is syslog, but the question is: do we really need it?

In-memory Compression (Bob Liu)

The same as at CLSF: a nice introduction that I have already covered above.

Misc

There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

-- Jeff Liu

    Read the article

  • How do I get Spring to inject my EntityManager?

    - by Trampas Kirk
    I'm following the guide here, but when the DAO executes, the EntityManager is null. I've tried a number of fixes I found in the comments on the guide, on various forums, and here (including this), to no avail. No matter what I seem to do the EntityManager remains null. Here are the relevant files, with packages etc changed to protect the innocent. spring-context.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd" xmlns:p="http://www.springframework.org/schema/p"> <context:component-scan base-package="com.group.server"/> <context:annotation-config/> <tx:annotation-driven/> <bean id="propertyPlaceholderConfigurer" class="com.group.DecryptingPropertyPlaceholderConfigurer" p:systemPropertiesModeName="SYSTEM_PROPERTIES_MODE_OVERRIDE"> <property name="locations"> <list> <value>classpath*:spring-*.properties</value> <value>classpath*:${application.environment}.properties</value> </list> </property> </bean> <bean id="orderDao" class="com.package.service.OrderDaoImpl"/> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="MyServer"/> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/> </property> <property name="dataSource" ref="dataSource"/> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="${com.group.server.vendoradapter.showsql}"/> <property name="generateDdl" value="${com.group.server.vendoradapter.generateDdl}"/> <property name="database" value="${com.group.server.vendoradapter.database}"/> </bean> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory"/> <property name="dataSource" ref="dataSource"/> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${com.group.server.datasource.driverClassName}"/> <property name="url" value="${com.group.server.datasource.url}"/> <property name="username" value="${com.group.server.datasource.username}"/> <property name="password" value="${com.group.server.datasource.password}"/> </bean> <bean id="executorService" class="java.util.concurrent.Executors" factory-method="newCachedThreadPool"/> </beans> persistence.xml <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0"> <persistence-unit name="MyServer" transaction-type="RESOURCE_LOCAL"/> </persistence> OrderDaoImpl package com.group.service; import com.group.model.Order; import org.springframework.stereotype.Repository; import org.springframework.transaction.annotation.Transactional; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import 
javax.persistence.Query; import java.util.List; @Repository @Transactional public class OrderDaoImpl implements OrderDao { private EntityManager entityManager; @PersistenceContext public void setEntityManager(EntityManager entityManager) { this.entityManager = entityManager; } @Override public Order find(Integer id) { Order order = entityManager.find(Order.class, id); return order; } @Override public List<Order> findAll() { Query query = entityManager.createQuery("select o from Order o"); return query.getResultList(); } @Override public List<Order> findBySymbol(String symbol) { Query query = entityManager.createQuery("select o from Order o where o.symbol = :symbol"); return query.setParameter("symbol", symbol).getResultList(); } }
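If the setter-injected EntityManager stays null, one variation worth trying is field-level injection, which relies on the same PersistenceAnnotationBeanPostProcessor already declared in the context. A minimal sketch of the DAO in that style (types as in the question; behaviour otherwise unchanged):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
@Transactional
public class OrderDaoImpl implements OrderDao {

    // Field-level injection: Spring assigns a shared EntityManager proxy
    // here when the container creates this bean.
    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public Order find(Integer id) {
        return entityManager.find(Order.class, id);
    }
}

Note that injection of any kind only happens for beans the container creates; an OrderDaoImpl instantiated with new will always have a null EntityManager.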

    Read the article

  • WCF: SecurityNegotiationException when using client

    - by bradhe
    So I've been trying to set up certificate authentication for my clients and services. The eventual goal is to partition data based on the certificate a client connects with (i.e. the certificate becomes their credentials in to the greater system and their data is partitioned based on these credentials). I have been able to set it up successfully on both the client and the server side. I have created a certificate and a private key, installed them on my computer, and set up my server such that 1) it has a certificate-based service credential and 2) if a client connects without providing a certificate-based credential an exception is thrown. What I then did was create a simple client and add a certificate credential to the configuration and try to call a simple operation on the service. It looks like the client connects OK, and it looks like the certificate is accepted by the server, but I do get this: SecurityNegotiationException: "The caller was not authenticated by the service." That seems rather ambiguous to me. Note that I am using wsHttpBinding, which supposedly defaults to Windows auth for transport security...but all of these processes are being run as my user account as I'm running in my debug environment. Here is my server configuration: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="MyServiceBinding"> <security mode="Message"> <transport clientCredentialType="None"/> <message clientCredentialType="Certificate"/> </security> </binding> </wsHttpBinding> </bindings> <services> <service behaviorConfiguration="MyServiceBehavior" name="MyService"> <endpoint binding="wsHttpBinding" bindingConfiguration="MyServiceBinding" contract="IMyContract"/> <endpoint binding="mexHttpBinding" address="mex" contract="IMetadataExchange"> <identity> <dns value="localhost"/> </identity> </endpoint> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MyServiceBehavior"> <serviceMetadata httpGetEnabled="true" policyVersion="Policy15" /> <serviceDebug includeExceptionDetailInFaults="false" /> <serviceCredentials> <serviceCertificate storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" findValue="tmp123"/> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> Here is my client config -- note that I'm using the same cert for the client that I use on the service: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <!--<transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/>--> <message clientCredentialType="Certificate" negotiateServiceCredential="true" algorithmSuite="Default"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://localhost:50120/UserServices.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IMyService" behaviorConfiguration="IMyService_Behavior" 
contract="UserServices.IUserServices" name="WSHttpBinding_IMyService"> <identity> <certificate encodedValue="Some RSA stuff"/> </identity> </endpoint> </client> <behaviors> <endpointBehaviors> <behavior name="IMyService_Behavior"> <clientCredentials> <clientCertificate storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" findValue="tmp123"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> Can anyone please help provide some insight as to what might be up here? Thanks,

    Read the article

  • Three ways to upload/post/convert iMovie to YouTube

    - by user351686
    For Mac users, iMovie is probably a convenient tool for making and editing their own home movies to upload to YouTube and share with more people. However, uploading iMovie files to YouTube isn't always a smooth run; I did notice many people complaining about it. This article guides those who have been frustrated by the process through three common ways to upload iMovie files to YouTube.

YouTube and iMovie

YouTube is the most popular video sharing website, letting users upload, share and view videos. It gives anyone with an Internet connection the ability to upload video clips and share them with friends, family and the world. Users are invited to leave comments, pick favourites, send messages to each other and watch videos sorted into subjects and channels. YouTube accepts videos uploaded in most container formats, including WMV (Windows Media Video), 3GP (cell phones), AVI (Windows), MOV (Mac), MP4 (iPod/PSP), FLV (Adobe Flash) and MKV (H.264), covering video codecs such as MPEG-4, H.264 and WMV.

iMovie is a video editing application that comes with every Mac, used to edit your own home movies. It imports video footage to the Mac using either the FireWire interface on most MiniDV-format digital video cameras or the USB port, or by importing the files from a hard drive, after which users can edit the video clips, add titles, and add music. Since 1999, Apple has released eight versions of iMovie, each with its own functions and characteristics, and each handling video somewhat differently. But the most common formats handled by iMovie, as far as my research goes, are MOV, DV, HDV and MPEG-4.

Three ways to successfully upload iMovie files to YouTube

Solutions one and two suit those who are 100% certain that their iMovie files are fully compatible with YouTube. For smooth uploading, you need to get a YouTube account first.

Solution 1: Upload iMovie files to YouTube directly
Step 1: Launch iMovie and select the project you want to upload to YouTube.
Step 2: Go to the File menu, click Share, and select Export Movie.
Step 3: Specify the output file name and directory, and then set the video type and video size.

Solution 2: Post to YouTube from within iMovie
Step 1: Launch iMovie and choose the project you want to post to YouTube.
Step 2: From the Share menu, choose YouTube.
Step 3: In the pop-up YouTube window, specify the name of your YouTube account and the password, choose the category, and fill in the description and tags for the project. Tick "Make this movie more private" at the bottom of the window, if needed, to limit who can view the project. Click Next, and then click Publish. iMovie will automatically export and upload the movie to YouTube.
Step 4: Click Tell a Friend to email your friends and family about your film. You can also copy the URL from the Tell a Friend window and paste it into an email created in your favourite email application. Anyone you send the email to will be able to follow the URL directly to your movie.
Note: Videos uploaded to YouTube are limited to ten minutes in length and a file size of 2GB.

Solution 3: Upload to YouTube after conversion
If neither of the methods above works, there is still a third way to turn to.
Sometimes, your iMovie files may not be recognized by YouTube because of the iMovie version (settings and functions may vary among versions), the video itself (format differences in file extension, resolution, video size and length), or plain incompatibility (videos that are completely incompatible with YouTube). In this circumstance, the most reliable method is to convert your iMovie files to a YouTube-accepted format, and an iMovie to YouTube converter is the ideal choice for that.

An iMovie to YouTube converter is a tool designed to convert iMovie files to YouTube-workable WMV, 3GP, AVI, MOV, MP4, FLV or MKV for smooth uploading, with impressive conversion speed and output quality. It can also convert between almost all popular file formats, like AVI, WMV, MPG, MOV, VOB, DV, MP4, FLV, 3GP, RM, ASF, SWF, MP3, AAC, AC3, AIFF, AMR, WAV, WMA etc., so you can put videos on various portable devices, import them into video editing software, or play them on a vast number of video players.

An iMovie to YouTube converter can also serve as a video editing tool to meet specific requirements. For example, you can cut your video files to a certain length or split them into smaller ones, and select a resolution that suits YouTube's demands, via Clip or Settings respectively. Crop allows you to cut off unwanted black edges from your videos. Besides, you can keep a good command of the whole process and snapshot your favourite pictures from the preview window. More can be expected if you give it a try.

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
- Bugs are found faster
- It forces developers to write readable code (code that can be read without explanation or introduction!)
- Optimization methods/tricks/productive programs spread faster
- Programmers, as specialists, "evolve" faster
- It's fun

“Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” (Wikipedia)

Nowhere does the definition mention whether it's better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 – Code review before check-in
The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, whether it might introduce defects due to common coding mistakes, and whether any optimizations can be made to improve code quality. (Image 1 – code review before check in)
Pros:
- Everything that gets committed to source control is reviewed.
- Minimizes the chances of smelly code making its way into the code base.
- Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.
Cons:
- Development code freeze: since the changes aren't in source control yet, further development can only be done offline.
- The changes have not been through a CI build, so it's hard to say whether the code meets all build quality standards.
- Inconsistent, and cumbersome to track the actual code review process.
- Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

Approach 2 – Code review after check-in
The developer checks in, and random code reviews are performed on the checked-in code. (Image 2 – Code review after check in)
Pros:
- The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct developers to ensure ZERO FxCop, StyleCop and static code analysis issues before check-in; the code is cleaner and smell-free even before the code review.
- No offline development; developers can continue to develop against source control.
Cons:
- Bad code can easily make its way into the code base.
- Since the review takes place much later in the cycle, the cost of fixing issues can prove much higher.

Approach 3 – Hybrid approach
The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient. (Image 3 – Hybrid Approach)
1. Code review high-impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code you are reviewing before check-in hasn't even been through a green CI build yet.
2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified through human reviews. Configure the tooling to report back the top 10 issues every day. Mandate manual code review for individuals who keep making it onto this list of shame.
3. During merge. I would prefer eliminating some of the other code issues during the merge from the main branch to the release branch. In a Scrum project this is easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.
Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated by the build, ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

What do the experts say?

So I asked a few experts what they thought of a "code review quality gate before checking in code".

Terje Sandstrom | Microsoft ALM MVP
You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, and static code analysis. Unfortunately that takes some time, and it would be great to have it on CI builds, so instead it is scheduled to run every night. Based on this we get, among other things, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger
I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).
- The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
What do the experts say?

So I asked a few experts what they thought of a "code review quality gate before checking in code":

Terje Sandstrom | Microsoft ALM MVP
You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build - and a green CI build is the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release - but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately that takes some time - it would be great to run it on CIs, but... - so it's scheduled every night. Based on this we get, among other things, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger
I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
- The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The "post-checkin" may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play...

Robert MacLean | Microsoft ALM MVP
I do not think there is a single right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe before check-in, maybe soon after) than seniors who have shipped twenty sprints on the team.

Abhimanyu Singhal | Visual Studio ALM Ranger
It depends on the scenario. We recommend post-check-in reviews when:
1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual review at all.
2. We need to trace all changes and track history.
3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or rejected.
Pre-check-in reviews are used when:
1. There is a high risk factor associated.
2. Reviewers generally (most of the time) have immediate availability.
3. The team does not have strict tracking needs.
Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

Thomas Schissler | Visual Studio ALM Ranger
This is an interesting discussion; I'm right now discussing details of executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side I'd like to point out:
1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
2. If you do later reviews, for example reviewing PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code that has been changed by a later check-in where the dev has already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.

Jakob Leander | Sr. Director, Avanade
In my experience, manual code review:
1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
2. When a project actually does it, they often do not do it right away, so errors pile up.
3. Requires a lot of time discussing/defining the standard and for the team to learn it.
However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In recent years I have advocated the following approach to code review:
- Architects up front produce "at least one best-practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security, etc.
- The dev lead on the project continuously browses the code to validate that the best practices are used - especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUGE benefits:
- You can easily tweak it to reach the level you desire, together with the customer.
- It is easy to measure for both developers and management.
- It is 100% consistent across the code base.
- It gets validated all the time, so you never end up getting hammered by a customer review at the end.
- It is easy to tell a developer that you do not want code back unless it has zero errors, which minimizes communication.
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the issue count to grow uncontrolled. On the projects I run, I require code analysis to have been run on the code before check-in (a check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds.
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
Tip: place your custom CA rules files as part of the solution. That way they keep working when you branch, because the path to the CA file is relative in Visual Studio (a sketch of what this looks like in the project file follows below).
Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues above, it is in my opinion a MUCH better (and much cheaper) approach from a helicopter perspective.
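As a sketch of that tip - assuming a standard Visual Studio project with Code Analysis enabled; RunCodeAnalysis and CodeAnalysisRuleSet are the standard VS Code Analysis MSBuild properties, while the rule-set file name and relative path are illustrative - the project file would reference the shared rule set like this:

<!-- Sketch only: the properties are the standard VS Code Analysis
     MSBuild properties; the file name/path is illustrative. -->
<PropertyGroup>
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisRuleSet>..\Build\TeamRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>

Because the path is relative to the project, the reference keeps working when the solution is branched.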
Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire code base. Also, especially on multi-site projects, one should strive to architect in a way that lets the men manage the framework while the boys write the repetitive code - this scales very well with the need to review less, through containment and by imposing architectural restrictions that emphasise the design.

Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune your attention accordingly.

Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, which offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/), for static code analysis of the current state of the code base. You could also consider adding StyleCop to the solution.

Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but the slides are in English): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer-reviews, there's no need to enforce a before-check-in policy. Peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues are created or pile up.
- Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule - it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in vs. post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.

I would love to hear what you think!


  • NoSQL with MongoDB, NoRM and ASP.NET MVC - Part 2

    - by shiju
In my last post, I gave an introduction to MongoDB and NoRM using an ASP.NET MVC demo app. I have since updated the demo app and created a new drop at CodePlex; you can download it at http://mongomvc.codeplex.com/. Last time we discussed basic CRUD operations against a simple domain entity; in this post, let's look at a domain entity with a deeper object graph. Below are our domain entities:

public class Category
{
    [MongoIdentifier]
    public ObjectId Id { get; set; }

    [Required(ErrorMessage = "Name Required")]
    [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
    public string Name { get; set; }

    public string Description { get; set; }
    public List<Expense> Expenses { get; set; }

    public Category()
    {
        Expenses = new List<Expense>();
    }
}

public class Expense
{
    [MongoIdentifier]
    public ObjectId Id { get; set; }
    public Category Category { get; set; }
    public string Transaction { get; set; }
    public DateTime Date { get; set; }
    public double Amount { get; set; }
}

We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category.

The MongoSession class:

internal class MongoSession : IDisposable
{
    private readonly MongoQueryProvider provider;

    public MongoSession()
    {
        this.provider = new MongoQueryProvider("Expense");
    }

    public IQueryable<Category> Categories
    {
        get { return new MongoQuery<Category>(this.provider); }
    }

    public IQueryable<Expense> Expenses
    {
        get { return new MongoQuery<Expense>(this.provider); }
    }

    public MongoQueryProvider Provider
    {
        get { return this.provider; }
    }

    public void Add<T>(T item) where T : class, new()
    {
        this.provider.DB.GetCollection<T>().Insert(item);
    }

    public void Dispose()
    {
        this.provider.Server.Dispose();
    }

    public void Delete<T>(T item) where T : class, new()
    {
        this.provider.DB.GetCollection<T>().Delete(item);
    }

    public void Drop<T>()
    {
        this.provider.DB.DropCollection(typeof(T).Name);
    }

    public void Save<T>(T item) where T : class, new()
    {
        this.provider.DB.GetCollection<T>().Save(item);
    }
}

The ASP.NET MVC view model for an expense transaction:

public class ExpenseViewModel
{
    public ObjectId Id { get; set; }
    public ObjectId CategoryId { get; set; }

    [Required(ErrorMessage = "Transaction Required")]
    public string Transaction { get; set; }

    [Required(ErrorMessage = "Date Required")]
    public DateTime Date { get; set; }

    [Required(ErrorMessage = "Amount Required")]
    public double Amount { get; set; }

    public IEnumerable<SelectListItem> Category { get; set; }
}

Let's create an action method to insert or update an expense transaction:

[HttpPost]
public ActionResult Save(ExpenseViewModel expenseViewModel)
{
    try
    {
        if (!ModelState.IsValid)
        {
            using (var session = new MongoSession())
            {
                var categories = session.Categories.AsEnumerable<Category>();
                expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);
            }
            return View("Save", expenseViewModel);
        }

        var expense = new Expense();
        ModelCopier.CopyModel(expenseViewModel, expense);

        using (var session = new MongoSession())
        {
            ObjectId categoryId = expenseViewModel.CategoryId;
            var category = session.Categories
                .Where(c => c.Id == categoryId)
                .FirstOrDefault();
            expense.Category = category;
            session.Save(expense);
        }
        return RedirectToAction("Index");
    }
    catch
    {
        return View();
    }
}

Querying expenses:

using (var session = new MongoSession())
{
    var expenses = session.Expenses
        .Where(exp => exp.Date >= StartDate && exp.Date <= EndDate)
        .AsEnumerable<Expense>();
}

We are running a LINQ query expression with a date filter. Working with MongoDB through the NoRM driver is easy, and managing the object graph of domain entities is straightforward - see the small usage sketch below. Download the source code from http://mongomvc.codeplex.com
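Building on the MongoSession class above, here is a small illustrative sketch - not part of the original demo; the date range is a sample - that summarises total spend per category. Note that the AsEnumerable call (used the same way as in the post's query) pulls the expenses into memory before the grouping:

// Illustrative sketch only: totals expenses per category using the
// MongoSession and Expense types defined in the post.
var StartDate = new DateTime(2010, 1, 1);    // sample range
var EndDate = new DateTime(2010, 12, 31);

using (var session = new MongoSession())
{
    var totalsByCategory = session.Expenses
        .Where(exp => exp.Date >= StartDate && exp.Date <= EndDate)
        .AsEnumerable<Expense>()                 // materialize, then group in memory
        .Where(exp => exp.Category != null)
        .GroupBy(exp => exp.Category.Name)
        .Select(g => new { Category = g.Key, Total = g.Sum(e => e.Amount) });

    foreach (var row in totalsByCategory)
        Console.WriteLine("{0}: {1:C}", row.Category, row.Total);
}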


  • "Expected initializer before '<' token" in header file

    - by Sarah
I'm pretty new to programming and am generally confused by header files and includes. I would like help with an immediate compile problem and would appreciate general suggestions about cleaner, safer, slicker ways to write my code. I'm currently repackaging a lot of code that used to be in main() into a Simulation class. I'm getting a compile error with the header file for this class. I'm compiling with gcc version 4.2.1.

// Simulation.h
#ifndef SIMULATION_H
#define SIMULATION_H

#include <cstdlib>
#include <iostream>
#include <cmath>
#include <string>
#include <fstream>
#include <set>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/hashed_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/mem_fun.hpp>
#include <boost/multi_index/composite_key.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/tuple/tuple_comparison.hpp>
#include <boost/tuple/tuple_io.hpp>
#include "Parameters.h"
#include "Host.h"
#include "rng.h"
#include "Event.h"
#include "Rdraws.h"

typedef multi_index_container< // line 33 - first error
  boost::shared_ptr< Host >,
  indexed_by<
    hashed_unique< const_mem_fun<Host,int,&Host::getID> >, // 0 - ID index
    ordered_non_unique< tag<age>,const_mem_fun<Host,int,&Host::getAgeInY> >, // 1 - Age index
    hashed_non_unique< tag<household>,const_mem_fun<Host,int,&Host::getHousehold> >, // 2 - Household index
    ordered_non_unique< // 3 - Eligible by age & household
      tag<aeh>,
      composite_key<
        Host,
        const_mem_fun<Host,int,&Host::getAgeInY>,
        const_mem_fun<Host,bool,&Host::isEligible>,
        const_mem_fun<Host,int,&Host::getHousehold>
      >
    >,
    ordered_non_unique< // 4 - Eligible by household (all single adults)
      tag<eh>,
      composite_key<
        Host,
        const_mem_fun<Host,bool,&Host::isEligible>,
        const_mem_fun<Host,int,&Host::getHousehold>
      >
    >,
    ordered_non_unique< // 5 - Household & age
      tag<ah>,
      composite_key<
        Host,
        const_mem_fun<Host,int,&Host::getHousehold>,
        const_mem_fun<Host,int,&Host::getAgeInY>
      >
    >
  > // end indexed_by
> HostContainer;

typedef std::set<int> HHSet;

class Simulation {
 public:
  Simulation( int sid );
  ~Simulation();

  // MEMBER FUNCTION PROTOTYPES
  void runDemSim( void );
  void runEpidSim( void );
  void ageHost( int id );
  int calcPartnerAge( int a );
  void executeEvent( Event & te );
  void killHost( int id );
  void pairHost( int id );
  void partner2Hosts( int id1, int id2 );
  void fledgeHost( int id );
  void birthHost( int id );
  void calcSI( void );
  double beta_ij_h( int ai, int aj, int s );
  double beta_ij_nh( int ai, int aj, int s );

 private:
  // SIMULATION OBJECTS
  double t;
  double outputStrobe;
  int idCtr;
  int hholdCtr;
  int simID;
  RNG rgen;
  HostContainer allHosts; // shared_ptr to Hosts - line 102 - second error
  HHSet allHouseholds;
  int numInfecteds[ INIT_NUM_AGE_CATS ][ INIT_NUM_STYPES ];
  EventPQ currentEvents;

  // STREAM MANAGEMENT
  void writeOutput();
  void initOutput();
  void closeOutput();
  std::ofstream ageDistStream;
  std::ofstream ageDistTStream;
  std::ofstream hhDistStream;
  std::ofstream hhDistTStream;
  std::string ageDistFile;
  std::string ageDistTFile;
  std::string hhDistFile;
  std::string hhDistTFile;
};

#endif

I'm hoping the other files aren't so relevant to this problem. When I compile with

g++ -g -o -c a.out -I /Applications/boost_1_42_0/ Host.cpp Simulation.cpp rng.cpp main.cpp Rdraws.cpp

I get

Simulation.h:33: error: expected initializer before '<' token
Simulation.h:102: error: 'HostContainer' does not name a type

and then a bunch of other errors related to not recognizing the HostContainer.
It seems like I have all the right Boost #includes for the HostContainer to be understood. What else could be going wrong? I would appreciate immediate suggestions, troubleshooting tips, and other advice about my code. My plan is to create a "HostContainer.h" file that includes the typedef and structs that define its tags, similar to what I'm doing in "Event.h" for the EventPQ container. I'm assuming this is legal and good form.
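For what it's worth, the symptoms are consistent with a namespace problem rather than a missing include: multi_index_container and the index helpers (indexed_by, hashed_unique, const_mem_fun, tag, ...) live in the boost::multi_index namespace, and the tag types (age, household, aeh, eh, ah) must also be declared as types before the typedef uses them. A hedged sketch of the kind of preamble that would typically make the typedef compile, assuming the tags aren't already defined in one of the other headers:

// Sketch only: brings the Boost.MultiIndex names into scope and declares
// the empty structs used as index tags. If the tags already live in
// another header (e.g. Host.h), only the using-directive is needed.
using namespace boost::multi_index;

struct age {};        // tag for the age index
struct household {};  // tag for the household index
struct aeh {};        // tag for age/eligible/household
struct eh {};         // tag for eligible/household
struct ah {};         // tag for household/age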

