Search Results

Search found 89214 results on 3569 pages for 'code statistics'.


  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking, source control and so on, we will continue to use Cruise Control for continuous integration, although I'm updating this to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we come across.

    Problem
    After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block while refreshing the source code:

    System.IO.IOException: The directory is not empty.
       at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
       at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
       at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
       at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
       at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
       at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    Project: Bupa.BPI.Documents
    Date of build: 2011-01-28 14:54:21
    Running time: 00:00:05
    Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11

    Solution
    The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason, when the source control block does a full refresh of the code it does not get rid of this folder, then complains that this is a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this has something to do with the file adapter polling the directory. Note, however, that we have not had this problem with other source control blocks in the past. To work around this I have added a prebuild task to the ccnet.config file to delete the folder before the source control block is executed. See below for an example:

    <prebuild>
      <exec>
        <executable>cmd.exe</executable>
        <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
      </exec>
    </prebuild>

    Read the article

  • Launching a URL from an OOB Silverlight Application

    So I'm working on this code browser, mostly to help me with my fading memory. In the app I want to be able to launch a URL, so I do my normal thing and put in a line of code that looks like this: System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(ThisURI), "_blank"); I was aghast when I realized that this didn't work; for that matter it didn't even blow up... grr... But with a bit of research I found that the HyperlinkButton worked, so I ended up making a little class like this: public class MyHyperLink...
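
    Since the post's MyHyperLink class is cut off above, here is a rough sketch of the idea as I read it (my own reconstruction, not the author's code): wrap a HyperlinkButton, whose NavigateUri still navigates out-of-browser, and let the user click it.

        // Assumes: using System; using System.Windows.Controls;
        // A guess at a minimal MyHyperLink: HtmlPage.Window.Navigate does nothing in an
        // out-of-browser app, but a HyperlinkButton with an absolute NavigateUri still works.
        public class MyHyperLink : HyperlinkButton
        {
            public MyHyperLink(string url, string text)
            {
                NavigateUri = new Uri(url, UriKind.Absolute);
                TargetName = "_blank";   // open in a new browser window/tab
                Content = text;
            }
        }

        // Usage: add it to the visual tree and let the user click it, e.g.
        // LayoutRoot.Children.Add(new MyHyperLink(ThisURI, "Open link"));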

    Read the article

  • 5 Useful Wordpress Plugins For Google Adsense

    - by Jyoti
    Google AdSense has become the most popular online contextual advertising program, and proper custom integration with WordPress can help to increase AdSense earnings. In this post we describe 5 useful WordPress plugins for Google AdSense. A few weeks ago we did a "10 Wordpress Plugins For Google Adsense". WordPress allows bloggers to easily integrate Google AdSense using plugins. Adsense Integrator: The Adsense Integrator plugin supports lots of programs other than AdSense, such as AdBrite, AffiliateBOT, SHAREASALE, LinkShare, ClickBank, Oxado, Adpinion, AdGridWork, Adroll, Commission Junction, CrispAds, ShoppingAds and Yahoo!PN, so it can be used when you want AdSense as well as other alternatives. The rest of the plugin's features are the same as the others: you paste your AdSense code into an options field and it gets inserted into blog posts. All In One Adsense And YPN: This is one of the most powerful AdSense plugins for WordPress. Just like the other plugins, you can use it to insert your ads into posts, but it has some really good features such as randomness, which shows the ad at a random location in your blog and reduces ad blindness for viewers. You can also stop ads being shown on some pages using tags. Adsense Now: Apart from the previous plugins, you can also give Adsense Now a try. I haven't used it (I have only used the first two), so it's difficult to comment on it. It looks to be a lightweight plugin which inserts AdSense ads between posts and in the post body. Adsense Manager: Adsense Manager is one of the most popular plugins used to manage AdSense in WordPress blogs. In fact, its newer version not only supports AdSense, it also supports various other programs such as AdBrite, Commission Junction, YPN etc., which makes it a very powerful ad management plugin. You can inject AdSense code anywhere in your blog posts as well as in different regions of your blog. Easy Adsense: Easy Adsense is one of the newer WordPress AdSense plugins and is therefore more feature rich. You can have different code for different themes using this plugin. It also supports link units. To learn about all its features, check out the plugin page.

    Read the article

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and a look and feel familiar to those that have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis. A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address giving even more end-to-end information about a specific Tuxedo request. As well the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes monitoring the SALT gateway servers to provide detailed performance metrics about those servers. Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed on both the monitored node via environment variable or command, as well as from the console. Now all alert definitions and policy definitions are only made using the console. For alerts this means that regardless of where the alert is evaluated it is defined in one and only one place. Thus the plug-in alert mechanism of previous releases can now be managed using the TSAM console, making SLA alert definition much easier and cleaner. Finally there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application such as regions. Look for a future blog entry with more details on this as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

    Read the article

  • Ranking Part III

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Ranking Part III. In the previous blogs "Ranking an Introduction" and "Ranking Part II", you have already praised me in "Rank the Author" and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the vote total. Using these two, we can compute the average rating. It's a small step, but its purpose is to show that we do not need a detailed history in order to compute the average; a running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself. We also keep a list of actual votes, but its purpose is to prevent double votes. If a person has already voted, his user id is already on the list and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a SharePoint server piece of code. I will show you how to do this part in a subsequent blog. Again, go to the page and look at the code. The gist of it is here. avg, votes, and stars are global variables that I defined before.

        function sendRate(sel) {
            // I hate long lines so I created pieces of the message in their own vars
            var s1 = "Your Rating Was: ";
            var s2 = ".. ";
            var s3 = "\nVotes = ";
            var s4 = "\nTotal Stars = ";
            var s5 = "\nAverage = ";
            var s;
            s = parseInt(sel.id.replace("_", '')); // Get the selected star number
            votes = parseInt(votes) + 1;
            stars = parseInt(stars) + s;
            avg = parseFloat(stars) / parseFloat(votes);
            alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
        }

    Click on the link to play and examine "Ranking with Stats". That's all folks!

    Read the article

  • Why not Green Threads?

    - by redjamjar
    Whilst I know questions on this have been covered already (e.g. http://stackoverflow.com/questions/5713142/green-threads-vs-non-green-threads), I don't feel like I've got a satisfactory answer. The question is: why don't JVMs support green threads anymore? It says this on the code-style Java FAQ: A green thread refers to a mode of operation for the Java Virtual Machine (JVM) in which all code is executed in a single operating system thread. And this over on java.sun.com: The downside is that using green threads means system threads on Linux are not taken advantage of and so the Java virtual machine is not scalable when additional CPUs are added. It seems to me that the JVM could have a pool of system processes equal to the number of cores, and then run green threads on top of that. This could offer some big advantages when you have a very large number of threads that block often (mostly because current JVMs cap the number of threads). Thoughts?

    Read the article

  • Auto Mocking using JustMock

    - by mehfuzh
    Auto mocking containers are designed to reduce the friction of keeping unit test beds in sync with the code being tested as systems are updated and evolve over time. That is the one-sentence definition of auto mocking, and of course it is more or less formal. Put more informally, an auto mocking container is nothing but a tool to keep your tests synced so that you don't have to go back and change tests every time you add a new dependency to your SUT, or System Under Test. In Q3 2012 JustMock ships with a built-in auto mocking container. This lets developers keep all the existing fun they are having with JustMock, plus they can now mock objects with dependencies in a more elegant way and without needing to do the homework of managing the graph. If you are not familiar with auto mocking then I won't go ahead and educate you here, but rather ask you to do so from the content that is already available out there from the community, as this is way beyond the scope of this post. Moving forward, getting started with JustMock auto mocking is pretty simple. First, I have to reference Telerik.JustMock.Container.DLL from the installation folder along with Telerik.JustMock.DLL (of course), which it uses internally, and next I will write my tests with the mocking container. It's that simple! In this post I will first mock the target with dependencies using the current method, and then do the same with the auto mocking container. In short, the sample is all about a report builder that will go through all the existing reports, send email and log any exception in that process. This is roughly what my report builder class looks like: The Reporter class depends on the following interfaces: IReportBuilder: used to create and get the available reports. IReportSender: used to send the reports. ILogger: used to log any exception. Now, if I just write the test without using an auto mocking container it might end up something like this: Now, it looks fine. However, the only issue is that I am creating a mock for each dependency, which is sort of grunt work, and if you have an ever-changing list of dependencies then it becomes really hard to keep the tests in sync. The typical example is your ASP.NET MVC controller, where the number of service dependencies grows along with the project. The same test, if written with the auto mocking container, would look like this: Here are a few things to observe: I didn't create a mock for each dependency. There is no extra step creating the Reporter class and sending in the dependencies. Since ILogger is not required for the purpose of this test, I can be completely ignorant of it. How cool is that? Auto mocking in JustMock is just released and we also want to extend it even further using the profiler, so that it can resolve not just interfaces but concrete classes as well. But that of course starts the debate of code smell vs. working with legacy code. Feel free to send in your expert opinion in that regard using one of Telerik's official channels. Hope that helps.
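
    The original post shows the two test listings as screenshots, so here is a rough sketch of both approaches as described. The Reporter/IReportBuilder/IReportSender/ILogger members are hypothetical, and the container calls are my own approximation of the Telerik.JustMock and Telerik.JustMock.AutoMock APIs, not the author's exact code.

        // Without the container: one mock per dependency, wired up by hand.
        var builder = Mock.Create<IReportBuilder>();
        var sender = Mock.Create<IReportSender>();
        var logger = Mock.Create<ILogger>();
        Mock.Arrange(() => builder.GetReports()).Returns(new[] { new Report() });
        new Reporter(builder, sender, logger).SendReports();
        Mock.Assert(() => sender.Send(Arg.IsAny<Report>()), Occurs.AtLeastOnce());

        // With the auto mocking container: the dependency graph is managed for us,
        // and ILogger never needs to be mentioned at all.
        var container = new MockingContainer<Reporter>();
        container.Arrange<IReportBuilder>(b => b.GetReports()).Returns(new[] { new Report() });
        container.Instance.SendReports();
        container.Assert<IReportSender>(s => s.Send(Arg.IsAny<Report>()));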

    Read the article

  • Silverlight Cream for June 06, 2010 -- #876

    - by Dave Campbell
    In this Issue: Brian Genisio, Michael Washington, Fons Sonnemans, Don Burnett, Xianzhong Zhu, Mike Snow, Jesse Liberty, Victor Gaudioso, David Kelley(-2-), and Matias Bonaventura. Shoutout: Anoop has a good post up: MEF or Managed Extensibility Framework and Lazy – Being Lazy with MEF, Custom Export Attributes etc Jesse Liberty's got a good post up if you are just Getting Started With Silverlight: A Path Through The Learning Material John Papa reports Updates and New Home for Sticky Plugin Tim Heuer announced Silverlight 4 Theme refresh including RIA Services templates From SilverlightCream.com: Adventures in MVVM – ViewModel Location and Creation Brian Genisio has a post up about ViewModels and how he attaches them to his views. Some discussion of MVVMLight, and other external links plus the code for the project. Simplified MEF: Dynamically Loading a Silverlight .xap Michael Washington has a good tutorial up on MEF, Silverlight, and ViewModel. In Michael's words: The goal here is to give you a quick easy win. You will be able to understand this one. You will come away with something you can use, and you will be able to tell your fellow colleagues, "MEF? yeah I'm using that, good stuff Touch Gesture Triggers for Windows Phone 7 projects in Blend 4.0 Fons Sonnemans has a post up about touch gestures for WP7 -- he's got 3 of them implemented using triggers, plus an external link to another, and the source. What the Heck is “MEF” for, and what Silverlight designers need to know about it? Don Burnett is also talking MEF... he does a good job of introducing MEF if you're not acquainted yet, plus some external information. Write Your Custom Effect Components in Silverlight 3 Xianzhong Zhu has a post up walking you through creating your own Custom Effect for Blend and Silverlight 3 ... lots of external links and the source project. Silverlight Tip of the Day #28 – Text Trimming Mike Snow's Tip #28 is about Text Trimming... what it does, and how it differs from WPF Windows Phone 7: Lists, Page Animation and oData Jesse Liberty called this a mini-tutorial, but it's not so mini... great tutorial on WP7, data, lists, and page transitions... oh, and the data is OData too... New Silverlight Video Tutorial: How to Do Hit Detection Victor Gaudioso's latest video tutorial is up and he's demonstrating how to do Silverlight HitTesting via code from Andy Beaulieu Dependency Properties Made Easy Need a quick pick-up on Dependency Properties? David Kelley has a short post about them on his blog. Isolated Storage Made Easy David Kelley also has a quick post up about Isolated Storage ... going to keep an eye out for more of these quick "Made Easy" posts from David. Prism 4.0 First Drop – MVVM Matias Bonaventura has a post up about the recent Prism 4.0 drop and highlights a bunch of the features/enhancements in this... some code snippets and a link out to the CodePlex drop. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Tip: Replacing Html.Encode Calls With New Html Encoding Syntax

    Like the well disciplined secure developer that you are, when you built your ASP.NET MVC 1.0 application, you remembered to call Html.Encode every time you output a value that came from user input. Didn't you? Well, in ASP.NET MVC 2 running on ASP.NET 4, those calls can be replaced with the new HTML encoding syntax (aka code nugget). I've written a three part series on the topic: Html Encoding Code Blocks With ASP.NET 4; Html Encoding Nuggets With ASP.NET MVC 2; Using AntiXss as the default...
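
    For a quick before-and-after (my own minimal example, not taken from the series itself): the classic <%= %> block needs an explicit Html.Encode call, while the ASP.NET 4 <%: %> code nugget HTML-encodes its output automatically.

        <%-- ASP.NET MVC 1.0 style: encode by hand --%>
        <%= Html.Encode(Model.UserName) %>

        <%-- ASP.NET 4 code nugget: output is HTML-encoded for you --%>
        <%: Model.UserName %>

    Values that are already encoded can be wrapped in an HtmlString so the nugget knows not to encode them a second time.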

    Read the article

  • International Radio Operators Alphabet in F# &amp; Silverlight &ndash; Part 2

    - by MarkPearl
    So the brunt of my very complex F# code has been done. Now it's just putting the Silverlight stuff in. The first thing I did was add a new project to my solution. I gave it a name and VS2010 did the rest of the magic in creating the .Web project etc. In this instance, because I want to take the MVVM approach and make use of commanding, I have decided to make the frontend a Silverlight4 project. I now need to move my F# code into a proper Silverlight Library. Warning – when you create the Silverlight Library, VS2010 will ask you whether you want it to be based on Silverlight3 or Silverlight4. I originally went for Silverlight4, only to discover when I tried to compile my solution that I was given an error… Error 12 F# runtime for Silverlight version v4.0 is not installed. Please go to http://go.microsoft.com/fwlink/?LinkId=177463 to download and install matching.. After asking around I discovered that the Silverlight4 F# runtime is not available yet. No problem – the suggestion was to change the F# Silverlight Library to a Silverlight3 project. However, even though I changed it to Silverlight3 in the properties of the project file, VS2010 did not like it and kept reverting it to a Silverlight4 project. After a few minutes of scratching my head I simply deleted the Silverlight4 F# Library project, created a new F# Silverlight Library project in Silverlight3, and VS2010 was happy. Now that the project structure is set up, the rest is fairly simple. You need to add the Silverlight Library as a reference to the C# Silverlight Front End. Then set up your views; since I was following the MVVM pattern I made a Views & ViewModel folder and set up the relevant View and ViewModels. The MainPageViewModel file looks as follows: using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Collections.ObjectModel; namespace IROAFrontEnd.ViewModels { public class MainPageViewModel : ViewModelBase { private string _iroaString; private string _inputCharacters; public string InputCharacters { get { return _inputCharacters; } set { if (_inputCharacters != value) { _inputCharacters = value; OnPropertyChanged("InputCharacters"); } } } public string IROAString { get { return _iroaString; } set { if (_iroaString != value) { _iroaString = value; OnPropertyChanged("IROAString"); } } } public ICommand MySpecialCommand { get { return new MyCommand(this); } } public class MyCommand : ICommand { readonly MainPageViewModel _myViewModel; public MyCommand(MainPageViewModel myViewModel) { _myViewModel = myViewModel; } public event EventHandler CanExecuteChanged; public bool CanExecute(object parameter) { return true; } public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } } } } One of the features I like in Silverlight4 is the new commanding. You will notice that I have put the code under the command's Execute method to call into my F# module. At the moment this could be cleaned up even more, but it will suffice for now.
public void Execute(object parameter) { var result = ModuleMain.ConvertCharsToStrings(_myViewModel.InputCharacters); var newString = ""; foreach (var Item in result) { newString += Item + " "; } _myViewModel.IROAString = newString.Trim(); } I then needed to set the view up. If we have a look at the MainPageView.xaml the xaml code will look like the following…. Nothing to fancy, but battleship grey for now… take careful note of the binding of the command in the button to MySpecialCommand which was created in the ViewModel. <UserControl x:Class="IROAFrontEnd.Views.MainPageView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <TextBox Grid.Row="0" Text="{Binding InputCharacters, Mode=TwoWay}"/> <Button Grid.Row="1" Command="{Binding MySpecialCommand}"> <TextBlock Text="Generate"/> </Button> <TextBlock Grid.Row="2" Text="{Binding IROAString}"/> </Grid> </UserControl> Finally in the App.xaml.cs file we need to set the View and link it to the ViewModel. private void Application_Startup(object sender, StartupEventArgs e) { var myView = new MainPageView(); var myViewModel = new MainPageViewModel(); myView.DataContext = myViewModel; this.RootVisual = myView; }   Once this is done – hey presto – it worked. I typed in some “Test Input” and clicked the generate button and the correct Radio Operators Alphabet was generated. And that’s the end of my first very basic F# Silverlight application.

    Read the article

  • Adam Bien Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Adam Bien, a self-employed enterprise Java consultant, an author of five star-rated books, a presenter, a Java Champion, a NetBeans Dream Team member, a JCP member, a JCP Expert Group Member of several Java EE groups, and holder of several other titles, is one of the most vocal advocates of the Java EE platform. His code-driven workshops using Java EE 6, NetBeans, and GlassFish have won accolades at several developers' conferences all around the world. Adam has been using GlassFish for all his projects for many years. One of the reasons he uses GlassFish is his high confidence that Java EE compliance bugs will be fixed faster. He finds GlassFish a very capable application server for faster development and continuous deployment. His own media properties are running on GlassFish with an Apache front-end. Good documentation, accessible source code, and REST/Web/CLI administration and monitoring facilities are some other reasons to pick GlassFish. He presented at the recently concluded GlassFish community event at JavaOne 2012. You can watch the video (with transcript) below showing him in full action:

    Read the article

  • KISS and Tell - MVVM and the ViewModelLocator

    - by Bobby Diaz
    A popular topic that comes up when talking about MVVM is the use of a ViewModelLocator and the many different ways one can be implemented. Rather than getting into the pros and cons of when or why you should use it, I decided I would just post my version of a simple ViewModelLocator and let those who like it use it, and those who don't, well you know… :) First, a disclaimer: I have not used this code in a production application, it is just something I was tossing around while reading others' posts on the subject. The sample consists of three files: 1. MainView.xaml, 2. MainViewModel.cs, 3. ViewModelLocator.cs. I have a codepaste of the ViewModelLocator.cs file if you are interested but don't feel like re-typing the 50 lines of code! Enjoy! Additional Resources: Simple ViewModel Locator for MVVM: The Patients Have Left the Asylum - by John Papa; ViewModel binding with the Managed Extensibility Framework - by Jeremy Likness; MVVM Light Toolkit - by Laurent Bugnion
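
    The listings themselves are shown as images in the original post, so here is a minimal sketch of the kind of locator it describes; the class and property names are my own guesses, not Bobby Diaz's exact 50-line version.

        // Assumed shape only: expose view models as properties so a view can bind to them.
        // Typically the locator is declared as a resource in App.xaml:
        //   <local:ViewModelLocator x:Key="Locator" />
        // and MainView.xaml binds its DataContext to it:
        //   DataContext="{Binding MainViewModel, Source={StaticResource Locator}}"
        public class ViewModelLocator
        {
            private static MainViewModel _mainViewModel;

            // Lazily create and cache the view model so design time and run time
            // both get an instance without extra wiring in the view's code-behind.
            public MainViewModel MainViewModel
            {
                get { return _mainViewModel ?? (_mainViewModel = new MainViewModel()); }
            }
        }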

    Read the article

  • Microsoft Press Deal of the Day - 9/Jul/2012 - Windows® Communication Foundation 4 Step by Step

    - by TATWORTH
    Today's Deal of the Day from Microsoft Press at http://shop.oreilly.com/product/0790145302403.do?code=MSDEAL is Windows® Communication Foundation 4 Step by Step: "Teach yourself the essentials of Windows Communication Foundation (WCF) 4 -- one step at a time. With this practical, learn-by-doing tutorial, you get the clear guidance and hands-on examples you need to begin creating Web services for robust Windows-based business applications. Discover how to: Build and host SOAP and REST services; Maintain service contracts and data contracts; Control configuration and communications programmatically; Implement message encryption, authentication, and authorization; Manage identity with Windows CardSpace; Begin working with Windows Workflow Foundation to create scalable and durable business services; Implement service discovery and message routing; Optimize performance with service throttling, encoding, and streaming; Integrate WCF services with ASP.NET clients and enterprise services components." Note the comment: Use code MSDEAL.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes that you currently have on your SQL Server instance. This function takes four input parameters, which are (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let's have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of the columns, I won't even attempt to display all of them here. My query below will give you a subset of the columns returned from this function. SELECT database_id, object_id, index_id, partition_number, leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count, nonleaf_insert_count, nonleaf_delete_count, nonleaf_update_count, range_scan_count, forwarded_fetch_count, row_lock_count, row_lock_wait_count, page_lock_count, page_lock_wait_count, Index_lock_promotion_attempt_count, index_lock_promotion_count, page_compression_attempt_count, page_compression_success_count FROM sys.dm_db_index_operational_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL) The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULLs as I did, then you will see results for every index on your system. I specified a database_id, so my result set only shows those records pertaining to my AdventureWorks2012 database. The next columns in the result set provide you with information on how many inserts, deletes, or updates have taken place on your leaf and nonleaf index levels. The nonleaf levels refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted and marked as "ghosted" and are waiting on the background ghost cleanup process to physically remove them. The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned from a forwarding row pointer. The row_lock_count and row_lock_wait_count represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock, respectively. The page_lock_count and page_lock_wait_count represent the number of page locks that have been requested for an index and the number of times SQL has had to wait on a page lock, respectively. The index_lock_promotion_attempt_count represents the number of times the database engine has attempted to promote a lock to the index level. The index_lock_promotion_count column displays how many times that index lock promotion was successful. Lastly, the page_compression_attempt_count and page_compression_success_count represent how many times a page was attempted to be compressed and how many times the attempt was successful. As you can see there is a ton of information returned from this DMV. The DMV we reviewed yesterday (sys.dm_db_index_usage_stats) provided you with good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis on the operational stats of your indexes, this is not only a good place to start, but more than likely the best place.
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Are technical books easy to read on the Kindle (or other small screens) [closed]

    - by Peter Recore
    Possible Duplicate: eBook editions of programming books. I am considering getting a Kindle or other e-reader (Kindle is my top choice because of the e-ink vs. LCD factor). I have been able to try reading fiction on a Kindle, and it seemed pretty nice, even with the small screen. However, most books I buy are actually technical books, which tend to have figures, code samples, and other odd things. How well do the various e-readers handle books like this? In particular, does the Kindle render code samples or figures in an easy-to-read way?

    Read the article

  • Executable Resumes

    - by Liam McLennan
    Over the past twelve months I have been thinking a lot about executable specifications. Long considered the holy grail of agile software development, executable specifications mean expressing a program's functionality in a way that is both readable by the customer and computer-verifiable in an automatic, repeatable way. With the current generation of BDD and ATDD tools, executable specifications seem finally within the reach of a significant percentage of the development community. Lately, and partly as a result of my craftsmanship tour, I have decided that soon I am going to have to get a job (gasp!). As Dave Hoover describes in Apprenticeship Patterns, "you … have mentors and kindred spirits that you meet with periodically, [but] when it comes to developing software, you work alone." The time may have come where the only way for me to feel satisfied and enriched by my work is to seek out a work environment where I can work with people smarter and more knowledgeable than myself. Having been on both sides of the interview desk many times, I know how difficult and unreliable the process can be. Therefore, I am proposing the idea of executable resumes. As a journeyman programmer looking for a fruitful work environment, I plan to write an application that demonstrates my understanding of the state of the art. Potential employers can download, view and execute my executable resume and judge whether my aesthetic sensibility matches their own. The concept of the executable resume is based upon the following assertion: a line of code answers a thousand interview questions. Asking people about their experiences and skills is not a direct way of assessing their value to your organisation. Often it simply assesses their ability to mislead an interviewer. An executable resume demonstrates: the highest quality code that the person is able to produce; that the person is sufficiently motivated to produce something of value in their own time; that the person loves their craft. The idea of publishing a program to demonstrate a developer's skills comes from Rob Conery, who suggested that each developer should build their own blog engine since it is the public representation of their level of mastery. Rob said: Luke had to build his own lightsaber – geeks should have to build their own blogs. And that should be their resume. In honour of Rob's inspiration I plan to build a blog engine as my executable resume. While it is true that the world does not need another blog engine, it is as good a project as any, it is a well understood domain, and I have not found an existing blog engine that I like. Executable resumes fit well with the software craftsmanship metaphor. It is not difficult to imagine that under the guild system master craftsmen may have accepted journeymen based on the quality of the work they had produced in the past. We now understand that when it comes to the functionality of an application, code is the final arbiter. Why not apply the same rule to hiring?

    Read the article

  • Silverlight Cream for March 13, 2011 -- #1059

    - by Dave Campbell
    In this Issue: András Velvárt, WIndowsPhoneGeek(-2-), Jesse Liberty(-2-), Victor Gaudioso, Kunal Chowdhury, Jeremy Likness, Michael Crump, and Dhananjay Kumar. Above the Fold: Silverlight: "Application Library Caching in Silverlight 4" Kunal Chowdhury WP7: "Handling WP7 orientation changes via Visual States" András Velvárt Shoutouts: Joe McBride gave a MEF Head User Group presentation and has posted How to Become a MEF Head – Slides & Code From SilverlightCream.com: Handling WP7 orientation changes via Visual States András Velvárt has an Expression Blend/WP7 post up discussing WP7 orientation changes and handling them via Visual States ... see an example from his SurfCube app, and a behavior to handle the control... with source. WP7 PerformanceProgressBar in depth WIndowsPhoneGeek has a post up discussing the WP7 Performance bar from the Windows Phone Toolkit. This is an update on the Toolkit based on the Feb 2011 release. Great explanation of the PerformanceProgressBar, external links, and sample code. Getting data out of WP7 WMAppManifest is easy with Coding4Fun PhoneHelper Next WindowsPhoneGeek has a post up about the PhoneHelper in the Coding4Fun TOolkit, and using it to get data out of the WMAppManifest easily. Good discussion, Links, and code as always Silverlight Unit Test For Phone In Jesse Liberty's "Windows Phone From Scratch" number 41, he's discussing Unit Testing for WP7... he gives some good external links and some good examples. Yet Another Podcast #27–Paul Betts Jesse Liberty's next post is his "Yet Another Podcast" number 27, and an interview with Paul Betts, the creator of Reactive UI... check out the podcast and also the good links listed. New Silverlight Video Tutorial: How to use the Fluid Move Behavior Victor Gaudioso has a new video tutorial up on using the Fluid Move Behavior... making a selected item animate from a ListBox to a Master Details Grid. Application Library Caching in Silverlight 4 Kunal Chowdhury takes a break from SilverlightZone long enough to write a post about Application Library Caching... for example on-demand loading of a 3rd-party XAP. Jounce Part 13: Navigation Parameters Jeremy Likness has his 13th post of a series in understanding his Jounce MVVM framework up. This episode surrounds a new release and what it contains, the primary focus being navigation parameters... that is you can raise a navigation event with a payload. Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1 Michael Crump digs into the performance wizard for Silverlight that we get with VS2010 SP1. He shows how to get and read a profile... great intro to a new tool. Binding XML File to Data Grid in Silverlight Dhananjay Kumar demonstrates reading an XML file using LINQ to XML and binding the result to a Silverlight DataGrid Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Visual Programming paradigms

    - by Rego
    As "visual" OSs such as Android, iOS and the promised Windows 8 become more popular, it does not seem to me that we programmers have gained new ways to code using these new technologies, due to a possible lack of new visual programming language paradigms. I've seen several discussions about incompatibilities between the current coding development environment and the new OS approaches of Windows 8, Android and other tablet OSs. I mean, today if we have a new tablet, an external keyboard is almost a requirement for coding (since it seems to me it's very difficult to program using the touch screen), exactly because coding assistance is not conceived for "writing" thousands of lines of code. So, how advanced should these "new" visual programming language paradigms be? Which characteristics would these new paradigms require?

    Read the article

  • Qt source download application: download, submit, and comment on your C++ and Python sources with Qt

    Hello, this forum has been opened to gather all the discussions about the sources and tools offered on the Sources page of the Qt section: you can now comment on them, rate them via the stars at the top of a topic, suggest improvements, and so on, directly on the forum! You can also submit your own sources on the new Sources page: once signed in with your forum account, click the Ajouter (Add) link at the top right of the page. Don't hesitate to submit all your sources and all your useful bits of code; the application is there for that, and the forum will let you collect valuable feedback on your code!...

    Read the article

  • Silverlight Cream for January 11, 2011 -- #1024

    - by Dave Campbell
    1,000 blogposts is quite a few, but to die-hard geeks, 1000 isn't the number... 1K is the number, and today is my 1K blogpost! I've been working up to this for at least 11 months. Way back at MIX10, I approached some vendors about an idea I had. A month ago I contacted them and others, and everyone I contacted was very generous and supportive of my idea. My idea was not to run a contest, but blog as normal, and whoever ended up on my 1K post would get some swag... and I set a cut-off at 13 posts. So... blogging normally, I had some submittals, and then ran my normal process to pick up the next posts until I hit a total of 13. To provide a distribution channel for the swag, everyone on the list, please send me your snail mail (T-shirts) and email (licenses) addresses as soon as possible.   I'd like to thank the following generous sponsors for their contributions to my fun (in alphabetic order): and Rachel Hawley for contributing 4 Silverlight control sets First Floor Software and Koen Zwikstra for contributing 13 licenses for Silverlight Spy and Sara Faatz/Jason Beres for contributing 13 licenses for Silverlight Data Visualization controls and Svetla Stoycheva for contributing T-Shirts for everyone on the post and Ina Tontcheva for contributing 13 licenses for RadControls for Silverlight + RadControls for Windows Phone and Charlene Kozlan for contributing 1 combopack standard, 2 DataGrid for Silverlight, and 2 Listbox for Silverlight Standard And now finally...in this Issue: Nigel Sampson, Jeremy Likness, Dan Wahlin, Kunal Chowdhurry, Alex Knight, Wei-Meng Lee, Michael Crump, Jesse Liberty, Peter Kuhn, Michael Washington, Tau Sick, Max Paulousky, Damian Schenkelman Above the Fold: Silverlight: "Demystifying Silverlight Dependency Properties" Dan Wahlin WP7: "Using Windows Phone Gestures as Triggers" Nigel Sampson Expression Blend: "PathListBox: making data look cool" Alex Knight From SilverlightCream.com: Using Windows Phone Gestures as Triggers Nigel Sampson blogged about WP7 Gestures, the Toolkit, and using Gestures as Triggers, and actually makes it looks simple :) Jounce Part 9: Static and Dynamic Module Management Jeremy Likness has episode 9 of his explanation of his MVVM framework, Jounce, up... and a big discussion of Modules and Module Management from a Jounce perspective. Demystifying Silverlight Dependency Properties Dan Wahlin takes a page from one of his teaching opportunities, and shares his knowledge of Dependency Properties with us... beginning with what they are, defining them in code, and demonstrating their use. Customizing Silverlight ChildWindow Style using Blend Kunal Chowdhurry has a great post up about getting your Child Windows to match the look & feel of the rest of youra app... plus a bunch of Blend goodness thrown in. PathListBox: making data look cool File this post by Alex Knight in the 'holy crap' file along with the others in this series! ... just check out that cool Ticker Style Path ListBox at the top of the blog... too cool! Web Access in Windows Phone 7 Apps Wei-Meng Lee has the 3rd part of his series on WP7 development up and in this one is discussing Web Access... I mean *discussing* it... tons of detail, code, and explanation... great post. Prevent your Silverlight XAP file from caching in your browser. Michael Crump helps relieve stress on Silverlight developers everywhere by exploring how to avoid caching of your XAP in the browser... 
(WPFS) MVVM Light Toolkit: Soup To Nuts Part I Jesse Liberty continues his Windows Phone from Scratch series with a new segment exploring Laurent Bugnion's MVVMLight Toolkit beginning with acquiring and installing the toolkit, then proceeds to discuss linking the View and ViewModel, the ViewModel Locator, and page navigation. Silverlight: Making a DateTimePicker Peter Kuhn attacks a problem that crops up on the forums a lot -- a DateTimePicker control for Silverlight... following the "It's so simple to build one yourself" advice, he did so, and provides the code for all of us! Windows Phone 7 Animated Button Press Michael Washington took exception to button presses that gave no visual feedback and produced a behavior that does just that. Using TweetSharp in a Windows Phone 7 app Tau Sick demonstrates using TweetSharp to put a twitter feed into a WP7 app, as he did in "Hangover Helper"... all the instructions from getting Tweeetshaprt to the code necessary. Bindable Application Bar Extensions for Windows Phone 7 Max Paulousky has a post discussing some real extensions to the ApplicationBar for WP7.. he begins with a bindable application bar by Nicolas Humann that I've missed, probably because his blog is in French... and extends it to allow using DelegateCommand. How to: Load Prism modules packaged in a separate XAP file in an OOB application Damian Schenkelman posts about Prism, AppModules in separate XAPs and running OOB... if you've tried this, you know it's a hassle.. Damian has the solution. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How do you structure computer science University notes?

    - by Sai Perchard
    I am completing a year of postgraduate study in CS next semester. I am finishing a law degree this year, and I will use this to briefly explain what I mean when I refer to the 'structure' of University notes. My preferred structure for authoring law notes: Word Two columns 0.5cm margins (top, right, bottom, middle, left) Body text (10pt, regular), 3 levels of headings (14/12/10pt, bold), 3 levels of bulleted lists Color A background for cases Color B background for legislation I find that it's crucial to have a good structure from the outset. My key advice to a law student would be to ensure styles allows cases and legislation to be easily identified from supporting text, and not to include too much detail regarding the facts of cases. More than 3 levels of headings is too deep. More than 3 levels of a bulleted list is too deep. In terms of CS, I am interested in similar advice; for example, any strategies that have been successfully employed regarding structure, and general advice regarding note taking. Has latex proved better than Word? Code would presumably need to be stylistically differentiated, and use a monospaced font - perhaps code could be written in TextMate so that it could be copied to retain syntax highlighting? (Are notes even that useful in a CS degree? I am tempted to simply use a textbook. They are crucial in law.) I understand that different people may employ varying techniques and that people will have personal preferences, however I am interested in what these different techniques are. Update Thank you for the responses so far. To clarify, I am not suggesting that the approach should be comparable to that I employ for law. I could have been clearer. The consensus so far seems to be - just learn it. Structure of notes/notes themselves are not generally relevant. This is what I was alluding to when I said I was just tempted to use a textbook. Re the comment that said textbooks are generally useless - I strongly disagree. Sure, perhaps the recommended textbook is useless. But if I'm going to learn a programming language, I will (1) identify what I believe to be the best textbook, and (2) read it. I was unsure if the combination of theory with code meant that lecture notes may be a more efficient way to study for an exam. I imagine that would depend on the subject. A subject specifically on a programming language, reading a textbook and coding would be my preferred approach. But I was unsure if, given a subject containing substantive theory that may not be covered in a single textbook, people may have preferences regarding note taking and structure.

    Read the article

  • openGL migration from SFML to glut, vertices arrays or display lists are not displayed

    - by user3714670
    Due to using quad buffered stereo 3D (which i have not included yet), i need to migrate my openGL program from a SFML window to a glut window. With SFML my vertices and display list were properly displayed, now with glut my window is blank white (or another color depending on the way i clear it). Here is the code to initialise the window : int type; int stereoMode = 0; if ( stereoMode == 0 ) type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH; else type = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO; glutInitDisplayMode(type); int argc = 0; char *argv = ""; glewExperimental = GL_TRUE; glutInit(&argc, &argv); bool fullscreen = false; glutInitWindowSize(width,height); int win = glutCreateWindow(title.c_str()); glutSetWindow(win); assert(win != 0); if ( fullscreen ) { glutFullScreen(); width = glutGet(GLUT_SCREEN_WIDTH); height = glutGet(GLUT_SCREEN_HEIGHT); } GLenum err = glewInit(); if (GLEW_OK != err) { fprintf(stderr, "Error: %s\n", glewGetErrorString(err)); } glutDisplayFunc(loop_function); This is the only code i had to change for now, but here is the code i used with sfml and displayed my objects in the loop, if i change the value of glClearColor, the window's background does change color : glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glClearColor(255.0f, 255.0f, 255.0f, 0.0f); glLoadIdentity(); sf::Time elapsed_time = clock.getElapsedTime(); clock.restart(); camera->animate(elapsed_time.asMilliseconds()); camera->look(); for (auto i = objects->cbegin(); i != objects->cend(); ++i) (*i)->draw(camera); glutSwapBuffers(); Is there any other changes i should have done switching to glut ? that would be great if someone could enlighten me on the subject. In addition to that, i found out that adding too many objects (that were well handled before with SFML), openGL gives error 1285: out of memory. Maybe this is related. EDIT : Here is the code i use to draw each object, maybe it is the problem : GLuint LightID = glGetUniformLocation(this->shaderProgram, "LightPosition_worldspace"); if(LightID ==-1) cout << "LightID not found ..." << endl; GLuint MaterialAmbientID = glGetUniformLocation(this->shaderProgram, "MaterialAmbient"); if(LightID ==-1) cout << "LightID not found ..." << endl; GLuint MaterialSpecularID = glGetUniformLocation(this->shaderProgram, "MaterialSpecular"); if(LightID ==-1) cout << "LightID not found ..." << endl; glm::vec3 lightPos = glm::vec3(0,150,150); glUniform3f(LightID, lightPos.x, lightPos.y, lightPos.z); glUniform3f(MaterialAmbientID, MaterialAmbient.x, MaterialAmbient.y, MaterialAmbient.z); glUniform3f(MaterialSpecularID, MaterialSpecular.x, MaterialSpecular.y, MaterialSpecular.z); // Get a handle for our "myTextureSampler" uniform GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler"); if(!TextureID) cout << "TextureID not found ..." << endl; glActiveTexture(GL_TEXTURE0); sf::Texture::bind(texture); glUniform1i(TextureID, 0); // 2nd attribute buffer : UVs GLuint vertexUVID = glGetAttribLocation(shaderProgram, "color"); if(vertexUVID==-1) cout << "vertexUVID not found ..." << endl; glEnableVertexAttribArray(vertexUVID); glBindBuffer(GL_ARRAY_BUFFER, color_array_buffer); glVertexAttribPointer(vertexUVID, 2, GL_FLOAT, GL_FALSE, 0, 0); GLuint vertexNormal_modelspaceID = glGetAttribLocation(shaderProgram, "normal"); if(!vertexNormal_modelspaceID) cout << "vertexNormal_modelspaceID not found ..." 
<< endl; glEnableVertexAttribArray(vertexNormal_modelspaceID); glBindBuffer(GL_ARRAY_BUFFER, normal_array_buffer); glVertexAttribPointer(vertexNormal_modelspaceID, 3, GL_FLOAT, GL_FALSE, 0, 0 ); GLint posAttrib; posAttrib = glGetAttribLocation(shaderProgram, "position"); if(!posAttrib) cout << "posAttrib not found ..." << endl; glEnableVertexAttribArray(posAttrib); glBindBuffer(GL_ARRAY_BUFFER, position_array_buffer); glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elements_array_buffer); glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0); GLuint error; while ((error = glGetError()) != GL_NO_ERROR) { cerr << "OpenGL error: " << error << endl; } disableShaders();

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
After getting the list of blogs we iterate through the collection and start new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index <blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice first signs of the power of TPL. We made only minor changes to our code to parallelize blog feeds aggregating. On my machine this modification gives some performance boost – time is now 17.57 seconds. Data parallelism There is one more way how to parallelize activities. Previous section introduced task or operation based parallelism, this section introduces data based parallelism. By MSDN page Data Parallelism (Task Parallel Library) data parallelism refers to scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – imported feed entries. As checking for feed entry existence and inserting it if it is missing from database doesn’t affect other entries the imported feed entries collection is ideal candidate for parallelization. 
        internal class FeedClient
        {
            private readonly INewsService _newsService;
            private const int FeedItemContentMaxLength = 255;

            public FeedClient()
            {
                ObjectFactory.Initialize(container =>
                {
                    container.PullConfigurationFromAppConfig = true;
                });

                _newsService = ObjectFactory.GetInstance<INewsService>();
            }

            public void Execute()
            {
                var blogs = _newsService.ListPublishedBlogs();

                var tasks = new Task[blogs.Count];

                for (var index = 0; index < blogs.Count; index++)
                {
                    tasks[index] = new Task(ImportFeed, blogs[index]);
                    tasks[index].Start();
                }

                Task.WaitAll(tasks);
            }

            private void ImportFeed(object blogObject)
            {
                if (blogObject == null)
                    return;
                var blog = (BlogDto)blogObject;
                if (string.IsNullOrEmpty(blog.RssUrl))
                    return;

                var uri = new Uri(blog.RssUrl);
                var feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

                if (feedFormat == SyndicationContentFormat.Rss)
                    ImportRssFeed(blog);
                if (feedFormat == SyndicationContentFormat.Atom)
                    ImportAtomFeed(blog);
            }

            private void ImportRssFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = RssFeed.Create(uri);

                feed.Channel.Items.AsParallel().ForAll(a =>
                {
                    SaveRssFeedItem(a, blog.Id, blog.CreatedById);
                });
            }

            private void ImportAtomFeed(BlogDto blog)
            {
                var uri = new Uri(blog.RssUrl);
                var feed = AtomFeed.Create(uri);

                feed.Entries.AsParallel().ForAll(a =>
                {
                    SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
                });
            }
        }

    Again we made only a small change, and as a result we parallelized the checking and saving of the feed items. This change was data centric: we applied the same operation to all elements in a collection. On my machine I got better performance again – the time is now 11.22 seconds.

    Results

    Let's recap our measurement results (numbers are given in seconds): 25.46 for the serial version, 17.57 with task parallelism, and 11.22 with task and data parallelism combined. As we can see, with task parallelism the feed aggregation takes roughly 30% less time than in the original case. When data parallelism is added on top of task parallelism, the aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ

    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code that can safely be moved to parallel processing, and even then you have to measure the effects to find out whether the parallel code really performs better. If you are not careful, the troubles you will face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 runs). Parallel programming is something that is hard to ignore, because effective programs have to be able to use the multiple cores of modern processors. Using TPL you can also set the degree of parallelism so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and carry on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support it.
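    To make the degree-of-parallelism remark concrete, here is a minimal sketch of the two knobs involved – ParallelOptions.MaxDegreeOfParallelism for the Parallel class and WithDegreeOfParallelism for PLINQ queries. The values and the surrounding context are placeholders (think of it as living inside a method like ImportRssFeed above), not settings from the article:

        // Leave one core free for the host system; Math.Max guards the single-core case.
        var degree = Math.Max(1, Environment.ProcessorCount - 1);

        // Parallel class: pass a ParallelOptions instance.
        var options = new ParallelOptions { MaxDegreeOfParallelism = degree };
        Parallel.ForEach(feed.Channel.Items, options, item =>
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        });

        // PLINQ equivalent of the pattern used in the listing above.
        feed.Channel.Items
            .AsParallel()
            .WithDegreeOfParallelism(degree)
            .ForAll(item => SaveRssFeedItem(item, blog.Id, blog.CreatedById));

    These knobs are available in TPL today; the language-level support mentioned above is a separate, upcoming piece.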
    Currently you can download Visual Studio Async to get some idea of what is coming.

    Conclusion

    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it a lot easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. With two steps we parallelized the feed importing and the entry inserting, gaining a 2.3 times improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming can raise the performance of your application significantly.

    Read the article

  • What is meant by "Repeat Business"?

    - by vinoth
    Repeat business obviously happens because the company has a great product or a great service. In the software industry, though, do companies deliberately make the code base complex enough that the maintenance work comes back to them? I have heard of cases where companies say "yeah, this code base has minor errors, let's ship it anyway and let the customers come back with another change request for these". They would then sometimes charge the customer for that change. This question is specific to the software services industry. Do these things happen in the real world? I am trying to understand the business process.

    Read the article

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code, using PHPUnit 3.7:

        //set up the result object that I want returned from the call to the parse method
        $parserResult = new ParserResult();
        $parserResult->setSegment('some string');

        //set up the stub Parser object
        $stubParser = $this->getMock('Parser');
        $stubParser->expects($this->any())
                   ->method('parse')
                   ->will($this->returnValue($parserResult));

        //inject the stub into my client class
        $fileHeaderParser = new FileWriter($stubParser);
        $output = $fileHeaderParser->writeStringToFile();

    Inside my writeStringToFile() method I'm using $parserResult like this:

        writeStringToFile() {
            //Some code...
            $parserResult = $parser->parse();
            $segment = $parserResult->getSegment(); //that's why I set the segment in the test
        }

    Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to do all this?

    Read the article
