
  • Fractional Calculator in Java

    - by user2888881
    I am trying to create a fractional calculator in Java with inputs of mixed fractions, proper fractions, improper fractions, or integers. It should include the four basic operators as well. The program should be set up as a loop that continues until the user types "quit". I have coded the beginning loop but have no idea where to go from there. Please help, I am a beginner and would really appreciate it. Thank you. This is what I have so far:

        import java.util.*;

        public class FractionCalculator {
            private static Scanner input;

            public static void main(String[] args) {
                input = new Scanner(System.in);
                String x = "quit";
                System.out.println("Enter a fraction");
                while (true) {
                    String y = input.next();
                    if (y.equals(x)) {
                        break;
                    }
                }
            }
        }
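
    For what it's worth, here is one possible shape for the next step; a minimal sketch, not a full solution. It assumes whitespace-separated input like "1/2 + 3/4", handles only plain and improper fractions (no mixed numbers), does not reduce results, and the names (FractionSketch, parse, apply) are made up for illustration:

        import java.util.Scanner;

        public class FractionSketch {
            public static void main(String[] args) {
                Scanner input = new Scanner(System.in);
                System.out.println("Enter an expression like 1/2 + 3/4, or quit");
                while (true) {
                    String left = input.next();
                    if (left.equals("quit")) {
                        break;
                    }
                    String op = input.next();
                    String right = input.next();
                    long[] a = parse(left);
                    long[] b = parse(right);
                    long[] r = apply(a, b, op);
                    System.out.println(r[0] + "/" + r[1]);
                }
            }

            // Parses "3/4" into {3, 4}; a plain integer "5" becomes {5, 1}.
            private static long[] parse(String token) {
                int slash = token.indexOf('/');
                if (slash < 0) {
                    return new long[] { Long.parseLong(token), 1 };
                }
                long num = Long.parseLong(token.substring(0, slash));
                long den = Long.parseLong(token.substring(slash + 1));
                return new long[] { num, den };
            }

            // Applies one of the four basic operators using cross-multiplication.
            // Results are left unreduced; reducing via gcd is a natural follow-up.
            private static long[] apply(long[] a, long[] b, String op) {
                switch (op) {
                    case "+": return new long[] { a[0] * b[1] + b[0] * a[1], a[1] * b[1] };
                    case "-": return new long[] { a[0] * b[1] - b[0] * a[1], a[1] * b[1] };
                    case "*": return new long[] { a[0] * b[0], a[1] * b[1] };
                    case "/": return new long[] { a[0] * b[1], a[1] * b[0] };
                    default:  throw new IllegalArgumentException("Unknown operator: " + op);
                }
            }
        }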

  • Catching uncaught exceptions

    - by kajyr
    Hi everybody. In my workplace we maintain a lot of ecommerce websites, some coded better than others. On some of them, uncaught exceptions are sometimes thrown and shown in the alert box from the Flash Player debug version (if you have it installed). To improve the average user experience, I'd like to report all those exceptions through an in-house tool we already have. Is there a way to catch those exceptions? Maybe the Flash Player debug version exposes them to JavaScript, or in some other way.

  • Strings - Filling In Leading Zeros With A Zero

    - by headscratch
    I'm reading an array of hard-coded strings of numeric characters. All positions are filled with a character, even the leading zeros, so I can confidently parse each one using substring(start, end) to convert to numeric. Example: "0123 0456 0789". However, a string coming from a database does not fill in the leading zeros; it simply fetches '123 456 789', which is correct for an arithmetic number but not for my needs, and it makes for parsing trouble. Before writing conditionals to check for leading zeros and add them to the string if needed, is there a simple way of specifying that they be filled with a character? I'm not finding this in my Java book... I could have written the three conditionals in the time it took to post this, but this is more about 'education'... Thanks
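
    For what it's worth, Java's built-in formatting can do this without any conditionals: the %0Nd format specifier left-pads an integer with zeros to N digits. A minimal sketch, assuming the database values parse as ints (the class name ZeroPad is made up for illustration):

        public class ZeroPad {
            public static void main(String[] args) {
                // %04d left-pads each value with zeros to a fixed width of four digits.
                String fromDb = "123 456 789";
                StringBuilder padded = new StringBuilder();
                for (String part : fromDb.split(" ")) {
                    if (padded.length() > 0) {
                        padded.append(' ');
                    }
                    padded.append(String.format("%04d", Integer.parseInt(part)));
                }
                System.out.println(padded); // prints "0123 0456 0789"
            }
        }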

  • JS/PHP: Who is responsible for generating the content?

    - by Extrakun
    Here's a situation I have encountered: creating a form on the client side and using PHP to process it. Here are some considerations: the PHP script generates the form and sends it to the client side (because of internationalization issues); the client side uses JavaScript to submit the form, and the ID of the form is hard-coded inside the JavaScript since it is generated by PHP. This means that every time the PHP code is updated, the JS must change. The question here is, who should depend on whom? Should the JS generate the form instead, so that the PHP script has to know the names of the form elements? Or should it be the other way round?

  • Is it possible to *optionally* override a theme in Drupal 6?

    - by David Semeria
    I want to override the theming of only one (custom) menu. I can do this with phptemplate_menu_tree() but - of course - it overrides the rendering of all menus. I've tried returning FALSE (an obvious technique IMO) if the menu is not the specific one I want to override - but this doesn't cause the overridden theme function to be called. My only alternative (when the menu is anything other than the specific one) is to call the overridden function from within phptemplate_menu_tree() - but this seems to defeat the whole point of the override system, since the default rendering function will be hard-coded therein. I hope the explanation is clear, and any help is greatly appreciated - tks.

  • C Allocating Two Dimensional Arrays

    - by Jacob
    I am trying to allocate a 2D array of file descriptors, so I would need something like fd[0][0], fd[0][1]. I have coded so far:

        void allocateMemory(int row, int col, int ***myPipes){
            int i = 0, i2 = 0;
            myPipes = (int**)malloc(row * sizeof(int*));
            for(i = 0; i < row; i++){
                myPipes[i] = (int*)malloc(col * sizeof(int));
            }
        }

    How can I set it all to zeros? Right now I keep getting a seg fault when I try to assign a value... Thanks

  • ActiveRecord validates... custom field name.

    - by Dmitriy Likhten
    I would like to fix up some error messages my site generates. Here is the problem:

        class Brand < ActiveRecord::Base
          validates_presence_of :foo
          ...
        end

    My goal is to produce a message like "Ticket description is required" instead of "Foo is required" (or "may not be blank", or whatever). The reason this is so important: let's say the field was previously ticket_summary. That was great and the server was coded to use it, but now, due to crazy-insane business analysts, it has been determined that ticket_summary is a poor name and should be ticket_description. I don't necessarily want my db to be driven by the user-facing requirements for field names, especially since they can change frequently without functionality changes. Is there a mechanism for providing this already?

  • Add the categories selector widget to the PAGE editor with predefined categories listed?

    - by Scott B
    The following code will add the categories selector widget to the WordPress Page editor interface...

        add_action('admin_menu', 'my_post_categories_meta_box');
        function my_post_categories_meta_box() {
            add_meta_box('categorydiv', __('Categories'), 'post_categories_meta_box', 'page', 'side', 'core');
        }

    What I would like to do is to figure out how to modify the resulting category listing so that it only contains a predefined list of hard-coded categories that I define. Since I'm adding this via my custom theme, it will only appear on the page editor when my theme is active on the site. And I have some specific "handler" categories that my theme installs into the site and later uses to determine layout elements.

  • Can constructors actually return Strings?

    - by elwynn
    Hi all, I have a class called ArionFileExtractor in a .java file of the same name:

        public class ArionFileExtractor {
            public String ArionFileExtractor (String fName, String startText, String endText) {
                String afExtract = "";
                // Extract string from fName into afExtract in code I won't show here
                return afExtract;
            }
        }

    However, when I try to invoke ArionFileExtractor in another .java file, as follows:

        String afe = ArionFileExtractor("gibberish.txt", "foo", "/foo");

    NetBeans informs me that there are incompatible types and that java.lang.String is required. But I coded ArionFileExtractor to return the standard string type, which is java.lang.String. I am wondering, can my ArionFileExtractor constructor legally return a String? I very much appreciate any tips or pointers on what I'm doing wrong here.
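
    A note on what is going on here: in Java a constructor has no return type at all, so a declaration that carries one, as above, is an ordinary method that merely shares the class name and is never invoked by new. One common fix is a static method; a minimal sketch, where the method name extract is hypothetical:

        public class ArionFileExtractor {
            // A static method (not a constructor) can legally return a String.
            public static String extract(String fName, String startText, String endText) {
                String afExtract = "";
                // ... extraction logic elided, as in the original question ...
                return afExtract;
            }
        }

    The call site then becomes: String afe = ArionFileExtractor.extract("gibberish.txt", "foo", "/foo");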

  • Insert multiple attributes into an HTML tag using JavaScript and not between tags

    - by WilliamK
    From a previous question we asked about inserting a new attribute into an HTML tag, and the code below does the job nicely, but how should it be coded to add multiple attributes? For example, changing

        <body bgcolor="#DDDDDD">

    to

        <body bgcolor="#DDDDDD" topmargin="0" leftmargin="0">

    The code that works for a single attribute is:

        document.getElementsByTagName("body")[0].setAttribute("id", "something");

    How to modify this for inserting multiple attributes?

  • Can I use Perl to download files in a PHP script?

    - by jision
    I have a file-sharing website made in PHP which lets users download files. But as I am using a hosted server, there are timeouts and other limitations that come into play with a download script coded in PHP. This causes large files to get corrupted while downloading. I am looking for a solution to this. I have tried using symbolic links, but the forced download does not take place... I am thinking of using Perl to serve the downloads but don't have any clue whatsoever... Can anyone help me out with this problem?

  • jQuery problem where the returned data from an XML file seems inaccessible

    - by squeaker
    Hi all, I'm using an XML file to generate some links which I would then like to be able to click on to populate an input box:

        $(xmlResponse).find('types').each(function(){
            var id = $(this).attr('id');
            var type = $(this).find('type').text();
            $('<span title="'+type+'" class="type">'+type+'</span>').appendTo('#types');
        });

        $('span.type').click(function() {
            var title = $(this).attr('title');
            $("input[name='type']").val(title);
        });

    But for some reason clicking on the links does not populate the input box. It does work if the span is hard-coded into the page, for example:

        <span title="text to populate" class="type">test</span>

    I'm guessing that the XML is not getting loaded into the DOM in the right way (or something like that). Any ideas?

  • CodeIgniter: how do I select count when `$query->num_rows()` doesn't work for me?

    - by mOrloff
    I have a query which is returning a sum, so naturally it returns one row. I need to count the number of records in the DB which made up that sum. Here's a sample of the type of query I am talking about (MySQL):

        SELECT i.id, i.vendor_quote_id, i.product_id_requested,
               SUM(i.quantity_on_hand) AS qty,
               COUNT(i.quantity_on_hand) AS count
        FROM vendor_quote_item AS i
        JOIN vendor_quote_container AS c ON i.vendor_quote_id = c.id
        LEFT JOIN company_types ON company_types.company_id = c.company_id
        WHERE company_types.company_type = 'f'
          AND i.product_id_requested = 12345678

    I have found and am now using the select_min(), select_max(), and select_sum() functions, but my COUNT() is still hard-coded in. The main problem is that I am having to specify the table name in a tightly coupled manner with something like $this->db->select('COUNT(myDbPrefix_vendor_quote_item.quantity_on_hand) AS count'), which kills portability and makes switching environments a PIA. How can/should I get the count values I am after with CI in an uncoupled way?

  • Pure Server-Side Filtering with RadGridView and WCF RIA Services

Those of you who are familiar with WCF RIA Services know that the DomainDataSource control provides a FilterDescriptors collection that enables you to filter data returned by the query on the server. We have been using this DomainDataSource feature in our "RIA Services with DomainDataSource" online example for almost a year now. In the example, we listen for RadGridView's Filtering event in order to intercept any filtering that is performed on the client and translate it into something that the DomainDataSource will understand, in this case a System.Windows.Data.FilterDescriptor being added or removed from its FilterDescriptors collection. Think of RadGridView.FilterDescriptors as client-side filtering and of DomainDataSource.FilterDescriptors as server-side filtering. We no longer need the client-side one.

With the introduction of the Custom Filtering Controls feature, many new possibilities have opened. With these custom controls we no longer need to do any filtering on the client. I have prepared a very small project that demonstrates how to filter solely on the server by using a custom filtering control. As I have already mentioned, filtering on the server is done through the FilterDescriptors collection of the DomainDataSource control. This collection holds instances of type System.Windows.Data.FilterDescriptor. The FilterDescriptor has three important properties:

- PropertyPath: Specifies the name of the property that we want to filter on (the left operand).
- Operator: Specifies the type of comparison to use when filtering. An instance of the FilterOperator enumeration.
- Value: The value to compare with (the right operand). An instance of the Parameter class.

By adding filters, you can specify that only entities which meet the condition in the filter are loaded from the domain context. In case you are not familiar with these concepts, you might find Brad Abrams' blog interesting.

Now, our requirement is to create some kind of UI that manipulates the DomainDataSource.FilterDescriptors collection. When it comes to collections, my first choice of course would be RadGridView. If you are not familiar with the Custom Filtering Controls concept, I would strongly recommend getting acquainted with my step-by-step tutorial "Custom Filtering with RadGridView for Silverlight" and checking the online example out.

I have created a simple custom filtering control that contains a RadGridView and several buttons. This control is aware of the DomainDataSource instance, since it operates on its FilterDescriptors collection. In fact, the RadGridView inside it is bound to this collection. In order to display filters that are relevant for the current column only, I have applied a filter to the grid. This filter is a Telerik.Windows.Data.FilterDescriptor and is used to filter the little grid inside the custom control. It should not be confused with the DomainDataSource.FilterDescriptors collection that RadGridView is actually bound to. Those are the RIA filters.

Additionally, I have added several other features. For example, if you have specified a DataFormatString on your original column, the Value column inside the custom control will pick it up and format the filter values accordingly. Also, I have transferred the data type of the column that you are filtering to the Value column of the custom control. This helps the little RadGridView determine what kind of editor to show when you begin an edit, for example a date picker for DateTime columns.
Finally, I have added four buttons: two of them can be used to add or remove filters, and the other two communicate the changes you have made to the server. Here is the full source code of the DomainDataSourceFilteringControl. The XAML:

    <UserControl x:Class="PureServerSideFiltering.DomainDataSourceFilteringControl"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:telerikGrid="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.GridView"
        xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
        Width="300">
        <Border x:Name="LayoutRoot"
                BorderThickness="1"
                BorderBrush="#FF8A929E"
                Padding="5"
                Background="#FFDFE2E5">
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition Height="Auto"/>
                    <RowDefinition Height="150"/>
                    <RowDefinition Height="Auto"/>
                </Grid.RowDefinitions>
                <StackPanel Grid.Row="0" Margin="2" Orientation="Horizontal" HorizontalAlignment="Center">
                    <telerik:RadButton Name="addFilterButton" Click="OnAddFilterButtonClick"
                                       Content="Add Filter" Margin="2" Width="96"/>
                    <telerik:RadButton Name="removeFilterButton" Click="OnRemoveFilterButtonClick"
                                       Content="Remove Filter" Margin="2" Width="96"/>
                </StackPanel>
                <telerikGrid:RadGridView Name="filtersGrid"
                                         Grid.Row="1"
                                         Margin="2"
                                         ItemsSource="{Binding FilterDescriptors}"
                                         AddingNewDataItem="OnFilterGridAddingNewDataItem"
                                         ColumnWidth="*"
                                         ShowGroupPanel="False"
                                         AutoGenerateColumns="False"
                                         CanUserResizeColumns="False"
                                         CanUserReorderColumns="False"
                                         CanUserFreezeColumns="False"
                                         RowIndicatorVisibility="Collapsed"
                                         IsFilteringAllowed="False"
                                         CanUserSortColumns="False">
                    <telerikGrid:RadGridView.Columns>
                        <telerikGrid:GridViewComboBoxColumn DataMemberBinding="{Binding Operator}"
                                                            UniqueName="Operator"/>
                        <telerikGrid:GridViewDataColumn Header="Value"
                                                        DataMemberBinding="{Binding Value.Value}"
                                                        UniqueName="Value"/>
                    </telerikGrid:RadGridView.Columns>
                </telerikGrid:RadGridView>
                <StackPanel Grid.Row="2" Margin="2" Orientation="Horizontal" HorizontalAlignment="Center">
                    <telerik:RadButton Name="filterButton" Click="OnApplyFiltersButtonClick"
                                       Content="Apply Filters" Margin="2" Width="96"/>
                    <telerik:RadButton Name="clearButton" Click="OnClearFiltersButtonClick"
                                       Content="Clear Filters" Margin="2" Width="96"/>
                </StackPanel>
            </Grid>
        </Border>
    </UserControl>

And the code-behind:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Animation;
    using System.Windows.Shapes;
    using Telerik.Windows.Controls.GridView;
    using System.Windows.Data;
    using Telerik.Windows.Controls;
    using Telerik.Windows.Data;

    namespace PureServerSideFiltering
    {
        /// <summary>
        /// A custom filtering control capable of filtering purely server-side.
        /// </summary>
        public partial class DomainDataSourceFilteringControl : UserControl, IFilteringControl
        {
            // The main player here.
            DomainDataSource domainDataSource;

            // This is the name of the property that this column displays.
            private string dataMemberName;

            // This is the type of the property that this column displays.
            private Type dataMemberType;

            /// <summary>
            /// Identifies the <see cref="IsActive"/> dependency property.
            /// </summary>
            /// <remarks>
            /// The state of the filtering funnel (i.e. full or empty) is bound to this property.
            /// </remarks>
            public static readonly DependencyProperty IsActiveProperty =
                DependencyProperty.Register(
                    "IsActive",
                    typeof(bool),
                    typeof(DomainDataSourceFilteringControl),
                    new PropertyMetadata(false));

            /// <summary>
            /// Gets or sets a value indicating whether the filtering is active.
            /// </summary>
            /// <remarks>
            /// Set this to true if you want to light up the filtering funnel.
            /// </remarks>
            public bool IsActive
            {
                get { return (bool)GetValue(IsActiveProperty); }
                set { SetValue(IsActiveProperty, value); }
            }

            /// <summary>
            /// Gets or sets the domain data source.
            /// We need this in order to work on its FilterDescriptors collection.
            /// </summary>
            /// <value>The domain data source.</value>
            public DomainDataSource DomainDataSource
            {
                get { return this.domainDataSource; }
                set { this.domainDataSource = value; }
            }

            public System.Windows.Data.FilterDescriptorCollection FilterDescriptors
            {
                get { return this.DomainDataSource.FilterDescriptors; }
            }

            public DomainDataSourceFilteringControl()
            {
                InitializeComponent();
            }

            public void Prepare(GridViewBoundColumnBase column)
            {
                this.LayoutRoot.DataContext = this;

                if (this.DomainDataSource == null)
                {
                    // Sorry, but we need a DomainDataSource. Can't do anything without it.
                    return;
                }

                // This is the name of the property that this column displays.
                this.dataMemberName = column.GetDataMemberName();

                // This is the type of the property that this column displays.
                // We need this in order to see which FilterOperators to feed to the combo-box column.
                this.dataMemberType = column.DataType;

                // We will use our magic Type extension method to see which operators are applicable for
                // this data type. You can go to the extension method body and see what it does.
                ((GridViewComboBoxColumn)this.filtersGrid.Columns["Operator"]).ItemsSource
                    = this.dataMemberType.ApplicableFilterOperators();

                // This is very nice as well. We will tell the Value column its data type. In this way
                // RadGridView will pick up the best editor according to the data type. For example,
                // if the data type of the value is DateTime, you will be editing it with a DatePicker.
                // Nice!
                ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataType = this.dataMemberType;

                // Yet another nice feature. We will transfer the original DataFormatString (if any) to
                // the Value column. In this way if you have specified a DataFormatString for the original
                // column, you will see all filter values formatted accordingly.
                ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataFormatString = column.DataFormatString;

                // This is important. Since our little filtersGrid will be bound to the entire collection
                // of this.domainDataSource.FilterDescriptors, we need to set a Telerik filter on the
                // grid so that it will display FilterDescriptors which are relevant to this column ONLY!
                Telerik.Windows.Data.FilterDescriptor columnFilter = new Telerik.Windows.Data.FilterDescriptor("PropertyPath"
                    , Telerik.Windows.Data.FilterOperator.IsEqualTo
                    , this.dataMemberName);
                this.filtersGrid.FilterDescriptors.Add(columnFilter);

                // We want to listen for this in order to activate and de-activate the UI funnel.
                this.filtersGrid.Items.CollectionChanged += this.OnFilterGridItemsCollectionChanged;
            }

            /// <summary>
            /// Since the DomainDataSource is a little bit picky about adding uninitialized FilterDescriptors
            /// to its collection, we will prepare each new instance with some default values and then
            /// the user can change them later. Go to the event handler to see how we do this.
            /// </summary>
            void OnFilterGridAddingNewDataItem(object sender, GridViewAddingNewEventArgs e)
            {
                // We need to initialize the new instance with some values and let the user go on from here.
                System.Windows.Data.FilterDescriptor newFilter = new System.Windows.Data.FilterDescriptor();

                // This is a must. It should know what member it is filtering on.
                newFilter.PropertyPath = this.dataMemberName;

                // Initialize it with one of the allowed operators. See the
                // TypeExtensions.ApplicableFilterOperators method for more info.
                newFilter.Operator = this.dataMemberType.ApplicableFilterOperators().First();

                if (this.dataMemberType == typeof(DateTime))
                {
                    newFilter.Value.Value = DateTime.Now;
                }
                else if (this.dataMemberType == typeof(string))
                {
                    newFilter.Value.Value = "<enter text>";
                }
                else if (this.dataMemberType.IsValueType)
                {
                    // We need something non-null for all value types.
                    newFilter.Value.Value = Activator.CreateInstance(this.dataMemberType);
                }

                // Let the user edit the new filter any way he/she likes.
                e.NewObject = newFilter;
            }

            void OnFilterGridItemsCollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
            {
                // We are active only if we have any filters defined. In this case the filtering funnel will light up.
                this.IsActive = this.filtersGrid.Items.Count > 0;
            }

            private void OnApplyFiltersButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // Comment this if you want the popup to stay open after the button is clicked.
                this.ClosePopup();

                // Since this.domainDataSource.AutoLoad is false, this will take into
                // account all filtering changes that the user has made since the last
                // Load() and pull the new data to the client.
                this.DomainDataSource.Load();
            }

            private void OnClearFiltersButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // We want to remove ONLY those filters from the DomainDataSource
                // that this control is responsible for.
                this.DomainDataSource.FilterDescriptors
                    .Where(fd => fd.PropertyPath == this.dataMemberName) // Only "our" filters.
                    .ToList()
                    .ForEach(fd => this.DomainDataSource.FilterDescriptors.Remove(fd)); // Bye-bye!

                // Comment this if you want the popup to stay open after the button is clicked.
                this.ClosePopup();

                // After we did our housekeeping, get the new data to the client.
                this.DomainDataSource.Load();
            }

            private void OnAddFilterButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // Let the user enter his/her requirements for a new filter.
                this.filtersGrid.BeginInsert();
                this.filtersGrid.UpdateLayout();
            }

            private void OnRemoveFilterButtonClick(object sender, RoutedEventArgs e)
            {
                if (this.DomainDataSource.IsLoadingData)
                {
                    return;
                }

                // Find the currently selected filter and destroy it.
                System.Windows.Data.FilterDescriptor filterToRemove = this.filtersGrid.SelectedItem as System.Windows.Data.FilterDescriptor;
                if (filterToRemove != null
                    && this.DomainDataSource.FilterDescriptors.Contains(filterToRemove))
                {
                    this.DomainDataSource.FilterDescriptors.Remove(filterToRemove);
                }
            }

            private void ClosePopup()
            {
                System.Windows.Controls.Primitives.Popup popup = this.ParentOfType<System.Windows.Controls.Primitives.Popup>();
                if (popup != null)
                {
                    popup.IsOpen = false;
                }
            }
        }
    }

Finally, we need to tell RadGridView's columns to use this custom control instead of the default one. Here is how to do it:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Animation;
    using System.Windows.Shapes;
    using System.Windows.Data;
    using Telerik.Windows.Data;
    using Telerik.Windows.Controls;
    using Telerik.Windows.Controls.GridView;

    namespace PureServerSideFiltering
    {
        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();
                this.grid.AutoGeneratingColumn += this.OnGridAutoGeneratingColumn;

                // Uncomment this if you want the DomainDataSource to start pre-filtered.
                // You will notice how our custom filtering controls will correctly read this information,
                // populate their UI with the respective filters and light up the funnel to indicate that
                // filtering is active. Go ahead and try it.
                this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("Title", System.Windows.Data.FilterOperator.Contains, "Assistant"));
                this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsGreaterThan, new DateTime(1998, 12, 31)));
                this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsLessThanOrEqualTo, new DateTime(1999, 12, 31)));

                this.employeesDataSource.Load();
            }

            /// <summary>
            /// First of all, we need to replace the default filtering control
            /// of each column with our custom filtering control DomainDataSourceFilteringControl.
            /// </summary>
            private void OnGridAutoGeneratingColumn(object sender, GridViewAutoGeneratingColumnEventArgs e)
            {
                GridViewBoundColumnBase dataColumn = e.Column as GridViewBoundColumnBase;
                if (dataColumn != null)
                {
                    // We do not like ugly dates.
                    if (dataColumn.DataType == typeof(DateTime))
                    {
                        dataColumn.DataFormatString = "{0:d}"; // Short date pattern.

                        // Notice how this format will later be transferred to the Value column
                        // of the grid that we have inside the DomainDataSourceFilteringControl.
                    }

                    // Replace the default filtering control with ours.
                    dataColumn.FilteringControl = new DomainDataSourceFilteringControl()
                    {
                        // Let the control know about the DDS; after all, it will work directly on it.
                        DomainDataSource = this.employeesDataSource
                    };

                    // Finally, light up the filtering funnel through the IsActive dependency property
                    // in case there are some filters on the DDS that match our column member.
                    string dataMemberName = dataColumn.GetDataMemberName();
                    dataColumn.FilteringControl.IsActive =
                        this.employeesDataSource.FilterDescriptors
                        .Where(fd => fd.PropertyPath == dataMemberName)
                        .Count() > 0;
                }
            }
        }
    }

The best part is that we are not only writing filters for the DomainDataSource, we can also read and load them. If the DomainDataSource has some pre-existing filters (like the ones I created in the code above), our control will read them and populate its UI accordingly. Even the filtering funnel will light up! Remember, the funnel is controlled by the IsActive property of our control. While this is just a basic implementation, the source code is absolutely yours and you can take it from here and extend it to match your specific business requirements.

Below the main grid there is another debug grid. With its help you can monitor what filter descriptors are added to and removed from the domain data source.

Download Source Code. (You will need the AdventureWorks sample database installed on the default SQLExpress instance in order to run it.) Enjoy!

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

- Storage: Import/Export Hard Disk Drives to your Storage Accounts
- HDInsight: General Availability of our Hadoop Service in the cloud
- Virtual Machines: New VM Gallery, ACL support for VIPs
- Web Sites: WebSocket and Remote Debugging Support
- Notification Hubs: Segmented customer push notification support with tag expressions
- TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
- Developer Analytics: New Relic support for Web Sites + Mobile Services
- Service Bus: Support for partitioned queues and topics
- Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost or stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab, make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" command at the bottom of it. This will launch a wizard that easily walks you through the steps required.

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and ready to use for production scenarios.

HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and cross-platform command line tools.

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight; the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

- Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
- Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can quickly filter to see the official Oracle-supplied images.
- MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to exclude types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
- Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
- Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network, you could alternatively limit which public IPs can access your workloads.

Here are the default behaviors for ACLs in Windows Azure:

- By default (i.e. no rules specified), all traffic is permitted.
- When using only Permit rules, all other traffic is denied.
- When using only Deny rules, all other traffic is permitted.
- When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or the REST API, be sure to also configure your guest VM firewall appropriately.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and it is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it.

You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site and toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this, Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
The debugger will then stop the web site's execution when it hits any breakpoints that you have set within your web application's project inside Visual Studio. For example, below I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Settings tab of the Web Publish dialog.

When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting; the release configuration is memory-optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs, a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:

- Offers based on multiple preferences, e.g. send a game-day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message, e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content, e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

    var notification = new WindowsNotification(messagePayload);
    hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: to "all my group except me": group:id && !user:id
- Events: a touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 ...
- Hours: send notifications at specific times, e.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: send a reminder to people still using your first version for Android: version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), a build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, the app is automatically deployed to Windows Azure with zero manual intervention or work required.

The steps below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git, and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

The cool thing about this is that I don't have to buy or rent my own build server; Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy.

To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"). When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier.

Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and, if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today's Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this, and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier, so you can enable it even without paying anything).

Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services, including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version", which does not cost anything and can be used forever.

Once we've signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it; make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We'll simply re-publish the web site to Windows Azure and New Relic will now automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. We can now see insights into how our app is performing, without having written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information – including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (see the code sketch below): Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
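Returning to the Service Bus partitioning feature above: the sketch below is not from the original post – it is a minimal illustration of what the Enable Partitioning checkbox maps to in the Microsoft.ServiceBus client library. The queue name and connection string are placeholders.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class CreatePartitionedQueue
{
    static void Main()
    {
        // Placeholder: paste the connection string from the portal's "Connection Information" dialog
        string connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;...";

        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // EnablePartitioning is the API equivalent of the portal checkbox; it spreads
        // the queue across multiple message brokers and message stores
        var description = new QueueDescription("ordersqueue") { EnablePartitioning = true };

        if (!namespaceManager.QueueExists(description.Path))
            namespaceManager.CreateQueue(description);

        Console.WriteLine("Partitioned queue created: " + description.Path);
    }
}

Sending and receiving against a partitioned queue works just as it does for a regular queue – the partitioning is transparent to QueueClient.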
Summary

Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.

What is WCF

As Juwal puts it in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, and a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing – including service hosting, instance management, reliability, transaction management, security etc – such that it greatly increases productivity.

Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, a data access layer, and unit tests to verify end to end communication – nothing big here – and we will dig into other features of WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide functionality to execute various pieces of business logic on the server, and clients provide interaction to the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code.

Solution

Services:
- Profile: For creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
- Checking Account: To get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
- Savings Account: To get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
- ServiceHost: To host the services, i.e. running the services at a particular address, binding and contract where clients can connect
- Client: Helps the end user use the services, like creating an account and transferring amounts between the accounts
- BankDAL: Data access layer to work with the database

BankDAL

It's a no-brainer to use an ORM here, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create the tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: the Customer entity, which contains the customer details, and CheckingAccount and SavingsAccount, which hold the respective account balances.
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1–1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

using System;

namespace BankDAL.Model
{
    public class Customer
    {
        public int Id { get; set; }
        public string FullName { get; set; }
        public string Address { get; set; }
        public DateTime DateOfBirth { get; set; }
    }
}

In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

using System.Data.Entity;

namespace BankDAL.Model
{
    public class BankDbContext: DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }
}

Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with xml files), as in the CustomerConfiguration class. By defining constraints in a separate class we can maintain clean POCOs without corrupting entity classes with database-specific information.

using System;
using System.Data.Entity.ModelConfiguration;

namespace BankDAL.Model
{
    public class CustomerConfiguration: EntityTypeConfiguration<Customer>
    {
        public CustomerConfiguration()
        {
            Initialize();
        }

        private void Initialize()
        {
            //Setting the Primary Key
            this.HasKey(e => e.Id);

            //Setting required fields
            this.HasRequired(e => e.FullName);
            this.HasRequired(e => e.Address);
            //Todo: Can't create required constraint as DateOfBirth is not reference type, research it
            //this.HasRequired(e => e.DateOfBirth);
        }
    }
}

Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention EF looks for a connection string with the key BankDbContext when working with the context.

We are going to define a helper class to work with the Customer entity, with methods for querying, adding new entities etc; such classes are known as repository classes, i.e., CustomerRepository.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CustomerRepository
    {
        private readonly IDbSet<Customer> _customers;

        public CustomerRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null)
                throw new ArgumentNullException();
            _customers = bankDbContext.Customers;
        }

        public IQueryable<Customer> Query()
        {
            return _customers;
        }

        public void Add(Customer customer)
        {
            _customers.Add(customer);
        }
    }
}

From the above code you can observe that the Query method returns customers as IQueryable, i.e. customers are retrieved only when actually used, i.e. iterated. Returning IQueryable also allows executing filtering and joining statements from the business logic using lambda expressions, without cluttering the data access layer with tens of methods.
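Picking up the connection-string convention noted above: the original article doesn't show the config entry, so here is a minimal sketch for the host's app.config. The name must match the context class for EF's convention to pick it up; the Data Source and Initial Catalog values are placeholders to adjust.

<connectionStrings>
  <!-- name must equal the DbContext class name (BankDbContext) -->
  <add name="BankDbContext"
       connectionString="Data Source=.;Initial Catalog=Bank;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>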
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other.

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    public class CheckingAccountRepository
    {
        private readonly IDbSet<CheckingAccount> _checkingAccounts;

        public CheckingAccountRepository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null)
                throw new ArgumentNullException();
            _checkingAccounts = bankDbContext.CheckingAccounts;
        }

        public IQueryable<CheckingAccount> Query()
        {
            return _checkingAccounts;
        }

        public void Add(CheckingAccount account)
        {
            _checkingAccounts.Add(account);
        }

        public IQueryable<CheckingAccount> GetAccount(int customerId)
        {
            return (from act in _checkingAccounts
                    where act.CustomerId == customerId
                    select act);
        }
    }
}

The repository classes look very similar to each other for the Query and Add methods; with the help of C# generics and by implementing the repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in the subsequent articles along with WCF Unity lifetime managers by Drew.

Contracts

It is very easy to follow a contract-first approach with WCF: define the interface and append the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.

using System;
using System.ServiceModel;
using BankDAL.Model;

namespace ProfileContract
{
    [ServiceContract]
    public interface IProfile
    {
        [OperationContract]
        Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);

        [OperationContract]
        Customer GetCustomer(int id);
    }
}

The ICheckingAccount contract exposes functionality for working with a checking account, i.e., getting the balance, and depositing or withdrawing an amount. The ISavingsAccount contract looks the same as the checking account one.

using System.ServiceModel;

namespace CheckingAccountContract
{
    [ServiceContract]
    public interface ICheckingAccount
    {
        [OperationContract]
        decimal? GetCheckingAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}

Services

Having covered the data access layer and the contracts, here comes the core of the business logic, i.e. the services.
ProfileService implements the IProfile contract for creating a customer and getting customer details using the CustomerRepository.
using System;
using System.Linq;
using System.ServiceModel;
using BankDAL;
using BankDAL.Model;
using BankDAL.Repositories;
using ProfileContract;

namespace ProfileService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Profile: IProfile
    {
        public Customer CreateAccount(string customerName, string address, DateTime dateOfBirth)
        {
            Customer cust = new Customer
            {
                FullName = customerName,
                Address = address,
                DateOfBirth = dateOfBirth
            };

            using (var bankDbContext = new BankDbContext())
            {
                new CustomerRepository(bankDbContext).Add(cust);
                bankDbContext.SaveChanges();
            }
            return cust;
        }

        public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth)
        {
            return CreateAccount(customerName, address, dateOfBirth);
        }

        public Customer GetCustomer(int id)
        {
            return new CustomerRepository(new BankDbContext()).Query()
                .Where(i => i.Id == id).FirstOrDefault();
        }
    }
}

From the above code you will observe that we are calling bankDbContext's SaveChanges method, and that there is no save method specific to the Customer entity, because EF manages all changes centrally at the context level; all pending changes are submitted in a batch, a pattern known as Unit of Work. Similarly, the Checking service implements the ICheckingAccount contract using the CheckingAccountRepository; notice that we are throwing an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService.

using System;
using System.Linq;
using System.ServiceModel;
using BankDAL.Model;
using BankDAL.Repositories;
using CheckingAccountContract;

namespace CheckingAccountService
{
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class Checking: ICheckingAccount
    {
        public decimal? GetCheckingAccountBalance(int customerId)
        {
            using (var bankDbContext = new BankDbContext())
            {
                CheckingAccount account = (new CheckingAccountRepository(bankDbContext)
                    .GetAccount(customerId)).FirstOrDefault();

                if (account != null)
                    return account.Balance;

                return null;
            }
        }

        public void DepositAmount(int customerId, decimal amount)
        {
            using (var bankDbContext = new BankDbContext())
            {
                var checkingAccountRepository = new CheckingAccountRepository(bankDbContext);
                CheckingAccount account = (checkingAccountRepository.GetAccount(customerId))
                    .FirstOrDefault();

                if (account == null)
                {
                    account = new CheckingAccount() { CustomerId = customerId };
                    checkingAccountRepository.Add(account);
                }

                account.Balance = account.Balance + amount;
                if (account.Balance < 0)
                    throw new ApplicationException("Overdraft not accepted");

                bankDbContext.SaveChanges();
            }
        }

        public void WithdrawAmount(int customerId, decimal amount)
        {
            DepositAmount(customerId, -1 * amount);
        }
    }
}

BankServiceHost

The host acts as the glue binding contracts to their services, exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services, and for each of them you need to define a name (the class that implements the service, with fully qualified namespace) and an endpoint known as ABC, i.e. address, binding and contract.
We are using netTcpBinding and have defined the base addresses for each of the contracts.

<system.serviceModel>
  <services>
    <service name="ProfileService.Profile">
      <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Profile"/>
        </baseAddresses>
      </host>
    </service>
    <service name="CheckingAccountService.Checking">
      <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Checking"/>
        </baseAddresses>
      </host>
    </service>
    <service name="SavingsAccountService.Savings">
      <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
      <host>
        <baseAddresses>
          <add baseAddress="net.tcp://localhost:1000/Savings"/>
        </baseAddresses>
      </host>
    </service>
  </services>
</system.serviceModel>

We have to open the services by creating service hosts, which will handle the incoming requests from clients.

using System;

namespace ServiceHost
{
    class Program
    {
        static void Main(string[] args)
        {
            CreateHosts();
            Console.ReadLine();
        }

        private static void CreateHosts()
        {
            CreateHost(typeof(ProfileService.Profile), "Profile Service");
            CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
            CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
        }

        private static void CreateHost(Type type, string hostDescription)
        {
            System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
            host.Open();

            if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                && host.ChannelDispatchers[0].Listener != null)
                Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
            else
                Console.WriteLine("Failed to start:" + hostDescription);
        }
    }
}

BankClient

The client has no knowledge about the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking on the project references and choosing ‘Add Service Reference’, then entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoints, provided the server and client happen to be built using the .NET Framework. One of the pros of the manual approach is that you don’t have to work against messy code-generated files.
<system.serviceModel>
  <client>
    <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding" contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>

The client uses a façade to connect to the services.

using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}

With that in place, let's get our unit tests going.

using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}

We are creating a new instance of the channel for every operation; we will look into instance management, and how creating a new channel instance affects it, in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by an $80 transfer into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from Savings to Checking, resulting in an overdraft exception on Savings with $30 already deposited to Checking. As we are not running both requests in a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
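Not from the original article – as a preview of the transaction handling promised for later posts, a minimal sketch of wrapping both proxy calls in one ambient transaction is shown below. Note this only works once transaction flow is enabled end to end (e.g. [TransactionFlow(TransactionFlowOption.Allowed)] on the contract operations, transactionFlow="true" on the netTcpBinding, and [OperationBehavior(TransactionScopeRequired = true)] on the service methods); treat it as illustrative.

using System.Transactions;

// Wrap both calls in one ambient transaction so an overdraft on the
// withdraw rolls back the already-performed deposit.
using (var scope = new TransactionScope())
{
    ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
    ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
    scope.Complete(); // only reached when both calls succeed
}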

    Read the article

  • Installing Lubuntu 14.04.1 fails, upowerd appears to hang

    - by Rantanplan
    On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Thanks for some hints! On Lubuntu 12.04: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise upowerd appears to hang: Aug 25 10:53:28 lubuntu kernel: [ 367.920272] INFO: task upowerd:3002 blocked for more than 120 seconds. Aug 25 10:53:28 lubuntu kernel: [ 367.920288] Tainted: G S C 3.13.0-32-generic #57-Ubuntu Aug 25 10:53:28 lubuntu kernel: [ 367.920294] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 25 10:53:28 lubuntu kernel: [ 367.920300] upowerd D e21f9da0 0 3002 1 0x00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920314] e21f9dfc 00000086 f5ef7094 e21f9da0 c1050272 c1a8d540 c1920a00 00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920333] c1a8d540 c1920a00 d9e44da0 f5ef6540 c1129061 00000002 000001c1 0001c37b Aug 25 10:53:28 lubuntu kernel: [ 367.920351] 00000000 00000002 00000000 e2276240 00000000 00000040 c12b0ec5 c19975a8 Aug 25 10:53:28 lubuntu kernel: [ 367.920368] Call Trace: Aug 25 10:53:28 lubuntu kernel: [ 367.920389] [<c1050272>] ? kmap_atomic_prot+0x42/0x100 Aug 25 10:53:28 lubuntu kernel: [ 367.920404] [<c1129061>] ? get_page_from_freelist+0x2a1/0x600 Aug 25 10:53:28 lubuntu kernel: [ 367.920417] [<c12b0ec5>] ? process_measurement+0x65/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920432] [<c1654c73>] schedule_preempt_disabled+0x23/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920443] [<c16565bd>] __mutex_lock_slowpath+0x10d/0x171 Aug 25 10:53:28 lubuntu kernel: [ 367.920454] [<c1655aec>] mutex_lock+0x1c/0x28 Aug 25 10:53:28 lubuntu kernel: [ 367.920478] [<f857223a>] acpi_smbus_transaction+0x48/0x210 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920489] [<c11858e1>] ? do_last+0x1b1/0xf60 Aug 25 10:53:28 lubuntu kernel: [ 367.920504] [<f857242f>] acpi_smbus_read+0x2d/0x33 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920520] [<f881e0f1>] acpi_battery_get_state+0x74/0x8b [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920535] [<f881e8a9>] acpi_sbs_battery_get_property+0x2a/0x233 [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920549] [<c14fa61f>] power_supply_show_property+0x3f/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920561] [<c114664f>] ? handle_mm_fault+0x64f/0x8d0 Aug 25 10:53:28 lubuntu kernel: [ 367.920573] [<c14fa5e0>] ? power_supply_store_property+0x60/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920586] [<c1407d20>] ? dev_uevent_name+0x30/0x30 Aug 25 10:53:28 lubuntu kernel: [ 367.920597] [<c1407d38>] dev_attr_show+0x18/0x40 Aug 25 10:53:28 lubuntu kernel: [ 367.920608] [<c11dad15>] sysfs_seq_show+0xe5/0x1c0 Aug 25 10:53:28 lubuntu kernel: [ 367.920621] [<c119846e>] seq_read+0xce/0x370 Aug 25 10:53:28 lubuntu kernel: [ 367.920633] [<c11983a0>] ? 
seq_hlist_next_percpu+0x90/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920644] [<c1179238>] vfs_read+0x78/0x140 Aug 25 10:53:28 lubuntu kernel: [ 367.920654] [<c11799a9>] SyS_read+0x49/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920667] [<c165efcd>] sysenter_do_call+0x12/0x28 /var/log/installer/debug shows upower related error: Ubiquity 2.18.8 Gtk-Message: Failed to load module "overlay-scrollbar" Gtk-Message: Failed to load module "overlay-scrollbar" ERROR:dbus.proxies:Introspect error on :1.23:/org/freedesktop/UPower: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. Exception in GTK frontend (invoking crash handler): Traceback (most recent call last): File "/usr/lib/ubiquity/bin/ubiquity", line 636, in <module> main(oem_config) File "/usr/lib/ubiquity/bin/ubiquity", line 622, in main install(query=options.query) File "/usr/lib/ubiquity/bin/ubiquity", line 260, in install wizard = ui.Wizard(distro) File "/usr/lib/ubiquity/ubiquity/frontend/gtk_ui.py", line 290, in __init__ mod.ui = mod.ui_class(mod.controller) File "/usr/lib/ubiquity/plugins/ubi-prepare.py", line 93, in __init__ upower.setup_power_watch(self.prepare_power_source) File "/usr/lib/ubiquity/ubiquity/upower.py", line 21, in setup_power_watch power_state_changed() File "/usr/lib/ubiquity/ubiquity/upower.py", line 18, in power_state_changed not misc.get_prop(upower, UPOWER_PATH, 'OnBattery')) File "/usr/lib/ubiquity/ubiquity/misc.py", line 809, in get_prop return obj.Get(iface, prop, dbus_interface=dbus.PROPERTIES_IFACE) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__ return self._proxy_method(*args, **keywords) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__ **keywords) File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking message, timeout)

    Read the article

  • SharePoint 2010 – Central Admin tooling to create host header site collections

    - by eJugnoo
Just like SharePoint 2007, you can create host-header based site collections in SharePoint 2010 as well. It means that you do not necessarily need to create a site-collection under a managed path like /sites/; you can create multiple root-level site collections on the same web-application/port by using host-header site collections. All you need to do is point your domain or sub-domain to your web-application and create a matching site-collection. But, just like in 2007, it is something that you do by using STSADM, and it is not available in the Central Admin UI in 2010 either. Though you can now also use PowerShell to create one:

C:\PS>$w = Get-SPWebApplication http://sitename

C:\PS>New-SPSite http://www.contoso.com -OwnerAlias "DOMAIN\jdoe" -HostHeaderWebApplication $w -Title "Contoso" -Template "STS#0"

This example creates a host header site collection. Because the template is provided, the root Web of this site collection will be created.

I’ve been playing with WCM in SharePoint 2010 more and more, and for that I preferred creating hosts file entries for the desired domains and creating site-collections with those headers – in my dev environment. I used PowerShell initially, but then got interested in building my own UI on Central Admin instead.

Developed with Visual Studio 2010

So I used the new Visual Studio 2010 tooling to create an empty SharePoint 2010 project. I added an application page (there is no option to add an _Admin page item in VS 2010 RC) that got created in the Layouts “mapped” folder. I then created a new Admin mapped folder for the 14-“hive”, and moved my new page there instead. Yes, I didn’t change the base class for the page; it’s just that it runs under _admin, but it is indeed a LayoutsPageBase inherited page.
To introduce an action link in the Central Admin console, I created the following element:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction
    Id="CreateSiteByHeader"
    Location="Microsoft.SharePoint.Administration.Applications"
    Title="Create site collections by host header"
    GroupId="SiteCollections"
    Sequence="15"
    RequiredAdmin="Delegated"
    Description="Create a new top-level web site, by host header" >
    <UrlAction Url="/_admin/OfficeToolbox/CreateSiteByHeader.aspx" />
  </CustomAction>
</Elements>

I used Reflector to understand any special code behind createpage.aspx, and created a new page for our purpose – CreateSiteByHeader.aspx. From there I quickly created a similar code-behind, without all the fancy Farm Config Wizard handling, and dealt with alternate implementations of sealed classes! The goal was to create a professional looking, OOB-type experience. I also added Regex validation to ensure the user types a valid domain name as the header value. Below is the result…

Release @ Codeplex

I’ve released the WSP on OfficeToolbox @ Codeplex, and you can download it from here. Hope you find it useful… -- Sharad

    Read the article

  • Is a text file with names/pixel locations something a graphic artist can/should produce? [on hold]

    - by edA-qa mort-ora-y
I have an artist working on 2D graphics for a game UI. He has no problem producing a screenshot showing all the bits, but we're having some trouble exporting this all into an easy-to-use format. For example, take the game HUD, which is a bunch of elements laid out around the screen. He exports the individual graphics for each one, but how should he communicate the positioning of each of them? My desire is to have a yaml file (or some other simple markup file) that contains the name of each asset and the pixel position of that element. For example:

fire_icon:
  pos: 20, 30
fire_bar:
  pos: 30, 80

Is producing such files a common task of a graphic artist? Is it reasonable to request them to produce such files as part of their graphic work?
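Not part of the original question – a minimal sketch of how such a file could be consumed, assuming PyYAML and a hypothetical file name hud_layout.yaml:

import yaml  # PyYAML

with open("hud_layout.yaml") as f:
    layout = yaml.safe_load(f)

# "pos: 20, 30" parses as the string "20, 30", so split it into ints
x, y = (int(v) for v in layout["fire_icon"]["pos"].split(","))
print(x, y)  # 20 30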

    Read the article

  • Running a program in background using command-line [duplicate]

    - by user291957
This question already has an answer here: Running programs in the background from terminal 4 answers

How do I run a program in the background of a shell, with the ability to close the shell while leaving the program running, and without it disturbing the window I am working in? Let's say my UI is having problems, or for some reason I need to boot up a program from the terminal window. The program should not disturb the window I am working in, but it should be opened from the command line, and I should be able to get to it using the normal ALT+TAB shortcut. The command line should also exit after running the command. I tried "gedit file-name & exit": this works, but the gedit window opens in the foreground (say I am working in an application like Mozilla; after running the command, the gedit window comes to the front and I have to flip back to Mozilla again – the command should just open the gedit file without shifting focus away from the Mozilla window).
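Not from the original question – a common approach is to detach the program from the terminal with nohup and disown, as sketched below. Note that whether the new window steals focus is decided by the window manager, not the shell, so that part may still need a window-manager setting.

# launch detached: survives the terminal closing, output discarded
nohup gedit file-name >/dev/null 2>&1 &
disown          # remove the job from the shell's job table
exit            # close the terminal; gedit keeps running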

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
Marko Parkkola This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is an excellent developer who is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has written an excellent piece here, and comments are welcome.

Enumerations in Relational Database

This is a very basic subject in relational databases, but it is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is “the right way” and one which is definitely the wrong way.

The concept

Let’s say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells the person’s marital status, and let’s name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn’t have enumerations.

The wrong way

This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do a simple SELECT query, and you don’t have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose).

CREATE TABLE [dbo].[Person]
(
    [Firstname] NVARCHAR(100),
    [Lastname] NVARCHAR(100),
    [Birthday] datetime,
    [MaritalStatus] NVARCHAR(10)
)

You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn’t fit into 10 characters? You’ll have to come and alter the table. There are other problems also, but I’ll leave those for the reader to think about.

The correct way

Here’s how I’ve done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration, and I will add one row to it too. I’ll write all the indexes and constraints here too.

CREATE TABLE [CodeNamespace]
(
    [Id] INT IDENTITY(1, 1),
    [Name] NVARCHAR(100) NOT NULL,
    CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
    CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
)
GO
INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
GO

Then I create a table that holds the actual values, and which references the namespace table in order to group the values under different namespaces. I’ll add a couple of rows here too.

CREATE TABLE [CodeValue]
(
    [CodeNamespaceId] INT NOT NULL,
    [Value] INT NOT NULL,
    [Description] NVARCHAR(100) NOT NULL,
    [OrderBy] INT,
    CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
    CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
)
GO
-- 1 is the 'MaritalStatus' namespace
INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
GO
Now there are four columns in the CodeValue table:

- CodeNamespaceId tells which namespace the values belong to.
- Value tells the enumeration value which is used in the Person table (I’ll show how this is done below).
- Description tells what the value means. You can use this column, for example, in the UI’s combo box.
- OrderBy tells if the values need to be ordered in some way when displayed in the UI.

And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used.

CREATE TABLE [dbo].[Person]
(
    [Firstname] NVARCHAR(100),
    [Lastname] NVARCHAR(100),
    [Birthday] datetime,
    [MaritalStatus] INT
)
GO
INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
GO

Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I check the entered value in the data access layer also (a sketch of a database-enforced alternative follows at the end of this post). I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bound so you can even index it if you like.

CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
AS
    SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
    FROM [dbo].[Person] p
    JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
    JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
GO

-- Select from View
SELECT * FROM [dbo].[Person_v]
GO

This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
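Not part of the original article – one database-enforced way to close the gap mentioned above is a composite foreign key against CodeValue's primary key, using a pinned namespace column. The constraint and column names here are illustrative:

-- Pin the namespace (1 = 'MaritalStatus') in a column so a composite FK can
-- reference CodeValue's primary key ([CodeNamespaceId], [Value])
ALTER TABLE [dbo].[Person] ADD [MaritalStatusNamespaceId] INT NOT NULL
    CONSTRAINT [DF_Person_MaritalStatusNamespaceId] DEFAULT (1)
GO
ALTER TABLE [dbo].[Person] ADD CONSTRAINT [FK_Person_MaritalStatus]
    FOREIGN KEY ([MaritalStatusNamespaceId], [MaritalStatus])
    REFERENCES [CodeValue] ([CodeNamespaceId], [Value])
GO

A CHECK constraint (e.g. CHECK ([MaritalStatus] BETWEEN 1 AND 4)) is a lighter alternative, at the cost of updating it whenever new enumeration values are added.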

    Read the article

  • Which is Better? The Start Screen in Windows 8 or the Old Start Menu? [Analysis]

    - by Asian Angel
There has been quite a bit of controversy surrounding Microsoft’s emphasis on the new Metro UI Start Screen in Windows 8, but when it comes down to it which is really better? The Start Screen in Windows 8 or the old Start Menu? Tech blog 7 Tutorials has done a quick analysis to see which one actually works better (and faster) when launching applications and doing searches. Images courtesy of 7 Tutorials. You can view the results and a comparison table by visiting the blog post linked below. Windows 8 Analysis: Is the Start Screen an Improvement vs. the Start Menu? [7 Tutorials]

    Read the article

  • Week in Geek: Facebook Valentine’s Day Scams Edition

    - by Asian Angel
This week we learned how to get started with the Linux command-line text editor Nano, “speed up Start Menu searching, halt auto-rotating Android screens, & set up Dropbox-powered torrenting”, change the default application for Android tasks, find great gift recommendations for Valentine’s Day using the How-To Geek Valentine’s Day gift guide, had fun decorating our desktops with TRON and TRON Legacy theme items, and more.

    Read the article

  • Best way of learning Python + GUI when coming from .NET

    - by Oscar Mederos
I've been developing applications in C# / VB.NET for about 3-4 years (.NET Framework v2.0, 3.5, 4). I have also developed some command-line applications or scripts in C and Python under Linux. Sometimes I need to develop my applications in other languages, like Python, but the problem is that lots of those applications require a GUI. Maybe not a too complex one, but they do require some windows with buttons, text boxes, list boxes,... What books/tips/tutorials do you suggest so I can start working with that language and be able to deploy my deliverables not only on .NET? Note: Learning Python is not the big deal here, because I already know the basics of it. I just want to focus on the GUI. Maybe this question should be on UI instead of here? If so, please, migrate it :)
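Not part of the original question – for a sense of scale, the kind of simple window described needs only a few lines with Tkinter, which ships in Python's standard library:

import tkinter as tk  # Python 3 standard library

root = tk.Tk()
root.title("Sample form")

tk.Label(root, text="Name:").pack()          # a simple caption
entry = tk.Entry(root)                       # a text box
entry.pack()
tk.Button(root, text="OK", command=root.destroy).pack()  # closes the window

root.mainloop()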

    Read the article

  • Daisies and Rye Swaying in the Summer Wind Wallpaper

    - by Asian Angel
Flowers [DesktopNexus]

    Read the article

< Previous Page | 289 290 291 292 293 294 295 296 297 298 299 300  | Next Page >