Search Results

Search found 6952 results on 279 pages for 'gui designer'.


  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. Whatever makes software valuable therefore needs to become a trait of the software. We as software developers thus need to understand requirements and then find a way to implement them. Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she is certain to want something - but then is not happy when it's delivered. Nevertheless requirements exist, and developers will only be paid if they deliver value. So we had better focus on doing that.

    Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySql? What's the customer's requirement behind that? Etc. I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives?

    Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling, or by relying on "established practices". That's ok in general and most of the time - but still... I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs, and with them it's easier to check whether a decision falls within their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think: any requirement pertains either to functionality, to non-functional aspects, or to sustainability. For short I call those aspects:

    Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers; an auction website should enable you to set up an auction anytime, or to find auctions to bid for.

    Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sine of a value much faster than you could in your head; an auction website should accept bids from millions of users.

    Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) certainly is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer), you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" in the face of usage is desirable. With software you want the contrary: software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted and improved over an unforeseeable number of years so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code, Cyclomatic Complexity or Afferent Coupling. They may give a hint... but they are far, far from precise. That's because of the nature of changeability; it's different from performance or scalability. Also, a customer cannot tell upfront "how much" evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely: a calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability... that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to watch the development of the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories: customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather they fall into different categories. We as developers need to recognize these categories when confronted with a requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means that the software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, means that the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • Building a Windows Phone 7 Twitter Application using Silverlight

    - by ScottGu
    On Monday I had the opportunity to present the MIX 2010 Day 1 Keynote in Las Vegas (you can watch a video of it here). In the keynote I announced the release of the Silverlight 4 Release Candidate (we'll ship the final release of it next month) and the VS 2010 RC tools for Silverlight 4. I also had the chance to talk for the first time about how Silverlight and XNA can now be used to build Windows Phone 7 applications. During my talk I did two quick Windows Phone 7 coding demos using Silverlight - a quick "Hello World" application and a "Twitter" data-snacking application. Both applications were easy to build and only took a few minutes to create on stage. Below are the steps you can follow yourself to build them on your own machines as well. [Note: In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Building a "Hello World" Windows Phone 7 Application

    First make sure you've installed the Windows Phone Developer Tools CTP - this includes the Visual Studio 2010 Express for Windows Phone development tool (which will be free forever and is the only thing you need to develop and build Windows Phone 7 applications) as well as an add-on to the VS 2010 RC that enables phone development within the full VS 2010. After you've downloaded and installed the Windows Phone Developer Tools CTP, launch the Visual Studio 2010 Express for Windows Phone that it installs, or launch the VS 2010 RC (if you already have it installed), and then choose "File" -> "New Project". Here you'll find the usual list of project template types along with a new category: "Silverlight for Windows Phone". The first CTP offers two application project templates. The first is the "Windows Phone Application" template - this is what we'll use for this example. The second is the "Windows Phone List Application" template - which provides the basic layout for a master-details phone application.

    After creating a new project, you'll get a view of the design surface and markup. Notice that the design surface shows the phone UI, letting you easily see how your application will look while you develop. For those familiar with Visual Studio, you'll also find the familiar Toolbox, Solution Explorer and Properties pane. For our HelloWorld application, we'll start out by adding a TextBox and a Button from the Toolbox. Notice that you get the same design experience as you do for Silverlight on the web or desktop. You can easily resize, position and align your controls on the design surface. Changing properties is easy with the Properties pane. We'll change the name of the TextBox that we added to username and change the page title text to "Hello world."

    We'll then write some code by double-clicking on the button, creating an event handler in the code-behind file (MainPage.xaml.cs). We'll start out by changing the title text of the application. The project template included this title as a TextBlock with the name textBlockListTitle (note that the current name incorrectly includes the word "list"; that will be fixed for the final release). As we write code against it we get IntelliSense showing the members available. Below we'll set the Text property of the title TextBlock to "Hello " + the Text property of the TextBox username; a minimal sketch of the resulting handler follows.
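    The handler itself appeared as a screenshot in the original post, so the code below is a sketch of what it plausibly looked like. The handler name button1_Click is an assumption (the designer generates a name from the button's name), while textBlockListTitle and username are the control names used above.

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        // Set the page title to "Hello " plus whatever was typed
        // into the username TextBox.
        textBlockListTitle.Text = "Hello " + username.Text;
    }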
    We now have all the code necessary for a Hello World application. We have two choices when it comes to deploying and running the application: we can either deploy to an actual device itself or use the built-in phone emulator. Because the phone emulator is actually the phone operating system running in a virtual machine, we'll get the same experience developing in the emulator as on the device. For this sample, we'll just press F5 to start the application with debugging using the emulator. Once the phone operating system loads, the emulator will run the new "Hello world" application exactly as it would on the device.

    Notice that we can change several settings of the emulator experience with the emulator toolbar - a floating toolbar on the top right. This includes the ability to re-size/zoom the emulator and two rotate buttons. Zoom lets us zoom into even the smallest detail of the application. The orientation buttons allow us to easily see what the application looks like in landscape mode (orientation change support is built into the default template). Note that the emulator can be reused across F5 debug sessions - that means we don't have to start the emulator for every deployment. We've added a dialog that will help keep you from accidentally shutting down the emulator if you want to reuse it. Launching an application on an already running emulator should only take ~3 seconds to deploy and run.

    Within our Hello World application we'll click the "username" textbox to give it focus. This will cause the software input panel (SIP) to open up automatically. We can either tap out a message or - since we are using the emulator - just type in text. Note that the emulator works with Windows 7 multi-touch so, if you have a touchscreen, you can see how interaction will feel on a device just by pressing the screen. We'll enter "MIX 10" in the textbox and then click the button - this will cause the title to update to be "Hello MIX 10". We provide the same Visual Studio experience when developing for the phone as for other .NET applications. This means that we can set a breakpoint within the button event handler, press the button again and have it break within the debugger.

    Building a "Twitter" Windows Phone 7 Application using Silverlight

    Rather than just stop with "Hello World", let's keep going and evolve it into a basic Twitter client application. We'll return to the design surface and add a ListBox, using the snaplines within the designer to fit it to the device screen and make the best use of phone screen real estate. We'll also rename the Button "Lookup". We'll then return to the Button event handler in MainPage.xaml.cs, remove the original "Hello World" line of code, and take advantage of the WebClient networking class to asynchronously download a Twitter feed. This takes three lines of code in total: (1) declaring and creating the WebClient, (2) attaching an event handler and then (3) calling the asynchronous DownloadStringAsync method. In the DownloadStringAsync call, we'll pass a Twitter Uri plus a query string which pulls the text from the "username" TextBox. This feed will pull down the respective user's most recent posts in an XML format. When the call completes, the DownloadStringCompleted event is fired and our generated event handler twitter_DownloadStringCompleted will be called. The result returned from the Twitter call will come back in an XML-based format. To parse this we'll use LINQ to XML, which lets us create simple queries for accessing data in an XML feed; a sketch of the networking and parsing code is shown below.
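    Pieced together from the description above and below, the download-and-parse code plausibly looked like the following sketch. The handler name button1_Click, the feed URL, and the XML element names (status, user, text, screen_name, profile_image_url) are assumptions based on the old Twitter XML API (long since retired), not code from the original demo; it also assumes using directives for System.Net, System.Linq and System.Xml.Linq.

    public class TwitterItem
    {
        public string UserName { get; set; }
        public string Message { get; set; }
        public string ImageSource { get; set; }
    }

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        WebClient twitter = new WebClient();                                  // (1) create the WebClient
        twitter.DownloadStringCompleted += twitter_DownloadStringCompleted;   // (2) attach the handler
        twitter.DownloadStringAsync(new Uri(                                  // (3) start the async download
            "http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=" + username.Text));
    }

    private void twitter_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
    {
        if (e.Error != null) return;    // ignore failed downloads in this sketch

        XElement tweets = XElement.Parse(e.Result);

        // Pull the profile image, message text and user name out of each
        // status element and project them into TwitterItem objects for binding.
        listBox1.ItemsSource = from tweet in tweets.Descendants("status")
                               select new TwitterItem
                               {
                                   ImageSource = tweet.Element("user").Element("profile_image_url").Value,
                                   Message = tweet.Element("text").Value,
                                   UserName = tweet.Element("user").Element("screen_name").Value
                               };
    }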
    To use this library, we'll first need to add a reference to the assembly (right-click on the References folder in the Solution Explorer and choose "Add Reference"). We'll then add a "using System.Xml.Linq" namespace reference at the top of the MainPage.xaml.cs code-behind file. We'll then add a simple helper class called TwitterItem to our project. TwitterItem has three string members - UserName, Message and ImageSource. We'll then implement the twitter_DownloadStringCompleted event handler and use LINQ to XML to parse the returned XML string from Twitter. What the query is doing is pulling out the three key pieces of information for each Twitter post from the username we passed as the query string: the ImageSource for their profile image, the Message of their tweet, and their UserName. For each tweet in the XML, we create a new TwitterItem in the IEnumerable<TwitterItem> returned by the LINQ query. We then assign the generated TwitterItem sequence to the ListBox's ItemsSource property.

    We'll then do one more step to complete the application. In the MainPage.xaml file, we'll add an ItemTemplate to the ListBox. For the demo, I used a simple template that uses databinding to show the user's profile image, their tweet and their username:

    <ListBox Height="521" HorizontalAlignment="Left" Margin="0,131,0,0" Name="listBox1" VerticalAlignment="Top" Width="476">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <StackPanel Orientation="Horizontal" Height="132">
                    <Image Source="{Binding ImageSource}" Height="73" Width="73" VerticalAlignment="Top" Margin="0,10,8,0"/>
                    <StackPanel Width="370">
                        <TextBlock Text="{Binding UserName}" Foreground="#FFC8AB14" FontSize="28" />
                        <TextBlock Text="{Binding Message}" TextWrapping="Wrap" FontSize="24" />
                    </StackPanel>
                </StackPanel>
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>

    Now, pressing F5 again, we are able to reuse the emulator and re-run the application. Once the application has launched, we can type in a Twitter username and press the button to see the results. Try my Twitter user name (scottgu) and you'll get back a list of TwitterItems in the ListBox. Try using the mouse (or, if you have a touchscreen device, your finger) to scroll the items in the ListBox - you should find that they move very fast within the emulator. This is because the emulator is hardware accelerated - and so gives you the same fast performance that you get on the actual phone hardware.

    Summary

    Silverlight and the VS 2010 Tools for Windows Phone (and the corresponding Expression Blend Tools for Windows Phone) make building Windows Phone applications both really easy and fun. At MIX this week a number of great partners (including Netflix, FourSquare, Seesmic, Shazaam, Major League Soccer, Graphic.ly, Associated Press, Jackson Fish and more) showed off some killer application prototypes they've built over the last few weeks. You can watch my full day 1 keynote to see them in action. I think they start to show some of the promise and potential of using Silverlight with Windows Phone 7. I'll be doing more blog posts in the weeks and months ahead that cover this in more detail.

    Hope this helps,

    Scott

    Read the article

  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes it even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD. To show this process, we created four files on the desktop of a Windows XP machine, and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB flash drive that we created last week. Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal.

    To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in:

    sudo fdisk -l

    and press Enter. What you're looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is "/dev/sda1". This may be slightly different for you, but it will still begin with /dev/. Note this device name. If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by the size. If you look at the second line of text in the screenshot above, it reads "Disk /dev/sda: 136.4 GB, ..." This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different sizes, then this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives.

    Now that you know the name Ubuntu has assigned to your hard drive, we'll scan it to see what files we can uncover. In the terminal window, type:

    sudo ntfsundelete <HD name>

    and hit Enter. In our case, the command is:

    sudo ntfsundelete /dev/sda1

    The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of that file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files - so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover - two JPGs and an MPG.

    Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering "sudo apt-get install ntfsprogs" in a terminal window.

    To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter:

    sudo ntfsundelete <HD name> -u -m *.jpg

    which is, in our case:

    sudo ntfsundelete /dev/sda1 -u -m *.jpg

    The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder. Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back on the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself - the sky's the limit!

    We have one more file to undelete - our MPG. Note the first column on the far left. It contains a number, the file's inode. Think of this as the file's unique identifier. Note this number.
    To undelete a file by its inode, enter the following in the terminal:

    sudo ntfsundelete <HD name> -u -i <inode>

    In our case, this is:

    sudo ntfsundelete /dev/sda1 -u -i 14159

    This recovers the file, along with an identifier that we don't really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can't use these files yet. That's because the ntfsundelete program saves the files as the "root" user, not the "ubuntu" user. We can verify this by typing the following in our terminal window:

    ls -l

    We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window:

    sudo chown ubuntu <Files>

    If the current folder had other files in it, you might not want to change their owner to ubuntu. However, in our case we only have these three files in this folder, so we will use the * wildcard to change the owner of all three files:

    sudo chown ubuntu *

    The files now look normal, and we can do whatever we want with them. Hopefully you won't need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn't have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete's manual page for more detailed usage information.

    Read the article

  • Entity Framework v1 … Brief Synopsis and Tips – Part 2

    - by Rohit Gupta
    Using Entity Framework with ASMX Web Services and WCF Web Services:

    If you use an ASMX web service to expose Entity objects from Entity Framework, the ASMX web service does not include object graphs; one workaround is to use the Facade pattern or to use a WCF service. The other important aspect of using ASMX web services along with Entity Framework is that the ASMX client is not aware of the existence of EF v1, since the client solely deals with C# objects (not EntityObjects or an ObjectContext). Because the client is not aware of the ObjectContext, the client cannot participate in change tracking, since the client only receives the current values, and not the original values, when the service sends the Entity objects to the client. Thus there are two drawbacks to using Entity Framework with an ASMX web service:

    1. Object state is not maintained. To overcome this limitation we need to insert/update a single entity at a time and retrieve the original values for the entity being updated on the server/service end before calling SaveChanges.

    2. ASMX does not maintain object graphs, i.e. Customer.Reservations or Customer.Reservations.Trip relationships are not maintained. Thus you need to send these relationships separately from service to client.

    A WCF web service overcomes the object-graph limitation of ASMX web services, but we need to ensure that we populate all the non-null scalar properties of all the objects in the object graph before calling Update. A WCF web service still cannot overcome the second limitation: tracking changes to entities at the client end. Also note that the "Customer" class on the client is very different from the "Customer" class in the Entity Framework model entities. They are incompatible with each other, hence we cannot cast one to the other. However, the .NET Framework translates the client "Customer" entity to the EF v1 model "Customer" entity once the entity is serialized back on the ASMX server end. If you need change tracking enabled on the client, then you need to use WCF Data Services, which is available with VS 2010.

    ======================================================================================================

    In WCF, when adding an object that has relationships, the framework assumes that every object in the object graph needs to be added to the store. For example, in a Customer.Reservations.Trip object graph, when a Customer entity is added to the store, EF v1 assumes that it needs to add a Reservations collection and also Trips for each Reservation. Thus if we need to use existing Trips for reservations, then we need to ensure that we null out the Trip object reference from Reservations and set the TripReference to the EntityKey of the desired Trip instead.

    ======================================================================================================

    Understanding Relationships and Associations in EF v1

    The golden rule of EF is that it does not load entities/relationships unless you ask it to explicitly do so. However, there is one exception to this rule. This exception happens when you attach/detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, then the ObjectContext removes the ObjectStateEntry for this entity and all the relationship objects associated with this entity. For example, in a Customer.Order.OrderDetails graph, if the Customer entity is detached from the ObjectContext, then you cannot traverse to the Order and OrderDetails entities (which still exist in the ObjectContext) from the Customer entity (which no longer exists in the ObjectContext); a minimal sketch follows.
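    Here is a small sketch of that detach behavior against the EF v1 ObjectContext API. The context name MyEntities and the Customer/Orders model are illustrative assumptions, not types from the post:

    using (MyEntities ctx = new MyEntities())
    {
        Customer customer = ctx.Customers.First();
        customer.Orders.Load();   // bring the related orders into the context

        ctx.Detach(customer);     // removes customer's ObjectStateEntry and
                                  // all of its relationship entries

        // The Order entities are still tracked by the context, but they can
        // no longer be reached by traversing customer.Orders on the
        // detached Customer.
    }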
    Conversely, if you JOIN an entity that is not in the ObjectContext with an entity that is in the ObjectContext, then the first entity will automatically be added to the ObjectContext, since relationships for the two entities need to exist in the ObjectContext.

    =========================================================

    You cannot attach an EntityCollection to an entity through its navigation property; e.g. you cannot code myContact.Addresses = myAddressEntityCollection.

    ==========================================================

    Cascade deletes in the EDM: the designer does not support specifying cascade deletes for an entity. To enable cascade deletes on an entity in the EDM, use the Association definition in the CSDL for the entity. For example, SalesOrderDetail (SOD) has a foreign key relationship with SalesOrderHeader (SalesOrderHeader 1 : SalesOrderDetail *). If you specify a cascade delete on the SalesOrderHeader entity, then calling DeleteObject on a SalesOrderHeader (SOH) entity will send delete commands for the SOH record and all the SOD records that reference the SOH record.

    ==========================================================

    As a good design practice, if you use cascade deletes, ensure that the cascade delete facet is used both in the EDM and in the database, even though it is not absolutely mandatory to have cascade deletes in both places (as you can see, just the cascade delete spec on the SOH entity in the EDM will ensure that the SOH record and all related SOD records are deleted from the database, even if you don't have cascade deletes configured in the database on the SOD table).

    ==============================================================

    Maintaining relationships in code

    When setting a navigation property of an entity (for example, setting the Contact navigation property of an Address entity), the following rules apply:

    If both objects are detached, no relationship object will be created. You are simply setting a property the CLR way.

    If both objects are attached, a relationship object will be created.

    If only one of the objects is attached, the other will become attached and a relationship object will be created. If that detached object is new, when it is attached to the context its EntityState will be Added.

    One important rule to remember regarding synchronizing the EntityReference.Value and EntityReference.EntityKey properties: when attaching an entity which has an EntityReference (e.g. an Address entity with a ContactReference), the Value property takes precedence, and if the Value and EntityKey are out of sync, the EntityKey will be updated to match the Value.

    ======================================================

    If you call the .Load() method on a detached entity, the .Load() operation will throw an exception. There is one exception to this rule: if you load entities using MergeOption.NoTracking, you will be able to call .Load() on such entities, since these entities are accessible by the ObjectContext. So the bottom line is that we need the ObjectContext in order to call .Load() for deferred loading of an EntityReference or EntityCollection; the sketch below illustrates both cases.
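    A small illustration of those .Load() rules, reusing the post's Contact/Addresses example (EF v1); ctx is assumed to be an open ObjectContext:

    Contact contact = ctx.Contacts.First();   // attached to ctx
    if (!contact.Addresses.IsLoaded)
    {
        contact.Addresses.Load();             // fine: the context can perform
                                              // the deferred load
    }

    Contact detached = new Contact();
    // detached.Addresses.Load();             // throws: no ObjectContext is
                                              // available to load from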
    Another rule to remember is that you cannot call .Load() on entities that are in the EntityState.Added state, since the ObjectContext uses the EntityKey of the primary (parent) entity when loading the related (child) entity, not the EntityKey of the child (even if the EntityKey of the child is present before calling .Load()).

    ======================================================

    You can use ObjectContext.Add() to add an entity to the ObjectContext and set the EntityState of the new entity to EntityState.Added; here no relationships are added/updated. You can also use the EntityCollection.Add() method to add an entity to another entity's related EntityCollection. For example, a contact has an Addresses EntityCollection, so use contact.Addresses.Add(newAddress) to add a new address to that collection. Note that if the entity does not already exist in the ObjectContext, then calling contact.Addresses.Add(newAddress) will cause a new Address entity to be added to the ObjectContext with EntityState.Added, and it will also add a RelationshipEntry (a relationship object) with EntityState.Added which connects the contact with the new address. Note that if the entity already exists in the ObjectContext (being part of the theOtherContact.Addresses collection), then calling contact.Addresses.Add(existingAddress) will add two RelationshipEntry objects to the ObjectStateEntry collection, one with EntityState.Deleted and the other with EntityState.Added. This implies that the existingAddress entity is removed from the theOtherContact.Addresses collection and added to the contact.Addresses collection, effectively reassigning the address entity from theOtherContact to contact. This is called moving an existing entity to a new object graph.

    ======================================================

    You usually use the ObjectContext.Attach() and EntityCollection.Attach() methods when you need to reconstruct the object graph after deserializing the objects as received from an ASMX web service client. Attach is used to connect entities that already exist in the ObjectContext. When EntityCollection.Attach() is called, the EntityState of the RelationshipEntry (the relationship object) remains EntityState.Unchanged, whereas when the EntityCollection.Add() method is called, the EntityState of the relationship object changes to EntityState.Added or EntityState.Deleted as the situation demands.

    =========================================================

    LINQ to Entities tips:

    SelectMany does an inner join by default. For example:

    from c in Contact from a in c.Addresses select c

    This will do an inner join between the Contacts and Addresses tables and return only those contacts that have an address.

    ========================================================

    Group joins do a LEFT join by default. For example:

    from a in Address join c in Contact on a.Contact.ContactID equals c.ContactID into g where a.CountryRegion == "US" select g;

    This query will do a left join on the Contact table and return contacts that have an address in the "US" region. The following query:

    from c in Contact join a in Address.Where(a1 => a1.CountryRegion == "US") on c.ContactID equals a.Contact.ContactID into addresses select new {c, addresses}

    will do a left join on the Address table and return all contacts.
    Of these contacts, only those that have an address in the "US" region will have their Addresses EntityCollection populated; the other contacts will have 0 addresses in the collection (even if addresses for those contacts exist in the database but are in a different region).

    ========================================================

    LINQ to Entities does not support DefaultIfEmpty(); instead use the .Include("Address") query builder method to do a left join, or use group joins if you need more control, such as filtering on the Address EntityCollection of the Contact entity.

    ===================================================================

    Use CreateSourceQuery() on an EntityReference or EntityCollection if you need to add filters during deferred loading of entities (deferred loading in EF v1 happens when you call the Load() method on the EntityReference or EntityCollection). For example:

    var cust = context.Contacts.OfType<Customer>().First();
    var sq = cust.Reservations.CreateSourceQuery().Where(r => r.ReservationDate > new DateTime(2008,1,1));
    cust.Reservations.Attach(sq);

    This populates only those reservations made after Jan 1, 2008. This is the only way (in EF v1) to attach a range of entities to an EntityCollection using the Attach() method.

    ==================================================================

    If you need to get the foreign key value for an entity, e.g. to get the ContactID value from an Address entity, use this:

    address.ContactReference.EntityKey.EntityKeyValues.Where(k => k.Key == "ContactID")

    Read the article

  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you and you are breaking all search engine links to your content (that others will try to follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration, which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves.

    1. The IIS7 Rewrite Module and web.config

    There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the rewrite module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then I took the generated web.config file and uploaded it to my live site. You can view my web.config here.

    2. Monthly Archives

    Observe the difference between the way the two blog engines generate this type of URL:

    Blogger: /Blog/2004_07_01_mothblog_archive.html
    dasBlog: /Blog/default,month,2004-07.aspx

    In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect".

    3. Categories

    Observe the difference between the way the two blog engines generate this type of URL:

    Blogger: /Blog/labels/Personal.html
    dasBlog: /Blog/CategoryView,category,Personal.aspx

    In my web.config file the rule that deals with this is the one named "category_redirect".

    4. Posts

    Observe the difference between the way the two blog engines generate this type of URL:

    Blogger: /Blog/2004/07/hello-world.html
    dasBlog: /Blog/Hello-World.aspx

    In my web.config file the rule that deals with this is the one named "post_redirect". Note: the decision was made to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info then it would have to include the day part, which Blogger did not generate. This makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher.
    5. Unhandled by a generic rule

    Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section, where a post titled "Hello World" generates a URL with the words separated by a hyphen. Sometimes that is not the case, for example:

    /Blog/2006/05/medc-wrap-up.html -> /Blog/MEDC-Wrapup.aspx
    /Blog/2005/01/best-of-moth-2004.html -> /Blog/Best-Of-The-Moth-2004.aspx
    /Blog/2004/11/more-windows-mobile-2005-details.html -> /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx

    In short, Blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserves hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects". Note: the tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy to your web.config rewriteMap.

    6. C# code doing the same as the web.config

    I wrote some naive code that does the same thing as the web.config: given a string, it will return a new string converted according to the 3 rules above. It does not take into account the 4th case where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account).

    static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
    static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
    static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

    static string Redirect(string oldUrl)
    {
        GroupCollection g;
        if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
            return string.Concat(g[1].Value, ".aspx");
        if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
            return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
        if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
            return string.Concat("default,month,", g[1].Value, "-", g[2].Value, ".aspx");
        return string.Empty;
    }

    static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
    {
        if (pattern.Length == 0)
        {
            g = null;
            return false;
        }
        g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
        return (g.Count == groupCount);
    }
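    As a quick, hypothetical spot check (not from the original post), the three rules map the example URLs from the earlier sections like this, assuming the helper behaves as described:

    Console.WriteLine(Redirect("/Blog/2004/07/hello-world.html"));
    // hello-world.aspx
    Console.WriteLine(Redirect("/Blog/labels/Personal.html"));
    // CategoryView,category,Personal.aspx
    Console.WriteLine(Redirect("/Blog/2004_07_01_mothblog_archive.html"));
    // default,month,2004-07.aspx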
    Comments about this post welcome at the original blog.

    Read the article

  • CodePlex Daily Summary for Wednesday, May 12, 2010

    New Projects

    AMP (Adamo Media Player): AMP is a custom media player that specializes in large media collections! Change the way you manage your media!
    BoogiePartners: BoogiePartners provides a loose collection of utilities or extensions for Boogie and SpecSharp developers.
    C# Developer Utility Library: Collection of helpful code functions including zero-config rolling file logging, parameter validation, reflection & object construction, I18N count...
    Commerce Server 2009 Orders using Pipelines in a Console Application: Using Commerce Server 2009 outside of a web application, this project shows how to create dummy Orders using Pipelines in a NON WEB CONTEXT, an examp...
    CRM Queue Manager C#: CRM 4 Queue Manager written in C#. Based on the original VB.Net CRM Queue Manager with some changes, additions and feature changes. Converts emai...
    DbBuilder: This is a tool intended for creation of MS SQL databases from scratch, incremental updates, management of redeployable objects, etc. using scripted ...
    DbNetData: A collection of cross-vendor database interface classes for .NET written in C# providing a consistent and simplified way of accessing SQL Server, Or...
    Deploy Workflow Manager: Kick off a workflow associated with a SharePoint List on all list items at once. SharePoint Designer Workflows will also be included. This projec...
    DigiLini - digital on screen ruler: Handy desktop tool for everyone who ever does something graphical on his computer. There is a sizable horizontal and vertical ruler. You can pos...
    Dynamic Grid Data Type for Umbraco: The Dynamic Grid Data Type for Umbraco is a custom ASCX/C# control that was created to store tabular data as an Umbraco "Data Type". There's an abi...
    FlashPlus: An extension for Google Chrome and a bookmarklet for Internet Explorer and Firefox, this project makes media content on browsers more usable. This...
    Flat Database: Simple project making it really quick and easy to implement a simple flat-file database for an application.
    Flood Watch Spam Control: Flood Watch is a C#-based application meant to help prevent site blacklistings. This application monitors outbound SMTP streams; if a stream excee...
    Gamebox: gamebox
    Kill Bill: Kill Bill covers the areas of customers, suppliers, products, sales and administration divided into modules which together form the system for the ...
    KND Decoder: The KND Decoder converts the annoying numeric values in the Java files of the Knuddels applet into readable strings. The new decoding method was...
    Moonbeam: D3D11 Framework
    Site Defender: Add-on for blogs or anywhere else there could be people spamming your website.
    sMODfix: -
    Stringzilla: String formatting classes designed for .NET 4 to enable named string formatting and conditional string formatting options based upon bound data c...
    The CQRS Kitchen: The CQRS Kitchen is an example application built with Silverlight 4 that demonstrates how to implement a CQRS / Event Sourcing application with the...
    XELF Framework for XNA / .NET: A project containing C# source code developed by XELF as a library and framework for XNA Game Studio and the .NET Framework. Currently, the XELF.Framework for Windows XNA G...
    xxxxxxxxx: xxxxxxx

    New Releases

    ASP.NET MVC 2 - CommonLibrary.CMS: CommonLibrary CMS - Alpha: A simple yet powerful CMS system in ASP.NET MVC 2 using C# 3.5. ActiveRecord-based support for Blogs, Widgets, Pages, Parts, Ev...
    ASP.NET MVC Extensions: v1.0 RTM: v1.0 RTM
    AWS SimpleDB Browser: Release 2: Miscellaneous fixes from the Alpha release. Built to work with the released version of the .NET Framework 4.0.
    BIDS Helper: 1.4.3.0: This release addresses the following issues: for some people the BIDS Helper extensions to the Project Properties page for the SSIS Deploy plugin w...
    Bojinx: Bojinx Core V4.5.13: Fixes / enhancements: Sequencer now accepts Commands directly without using events; Command queue now accepts both events and commands; EventBu...
    CF-Soft: TestCases_DROP1: TestCases_DROP1
    CodePlex Runtime Intelligence Integration: PreEmptive.Attributes distributable: Contains a signed redistributable version of the PreEmptive.Attributes.dll library that non-.NET 4 applications can use to support in-code instrumenta...
    Convection Game Engine (Basic Edition): Convection Basic (44772): Compiled version of Convection Basic changeset 44772.
    CRM Queue Manager C#: Initial release: Initial release of the CRM Queue Manager in C#. Release includes both the installer as well as the latest source code in zipped format. Running...
    DbNetData: DbNetData.1.0: Initial release
    DigiLini - digital on screen ruler: DigiLini 1.0: First stable version.
    Facturator - Create invoices easy and fast: Facturator.zip: current stable version
    Flat Database: Initial Release: This is the working initial release of FlatDB
    KharaPOS: MSDN Magazine Sample: This is the release that supports the MSDN Magazine article "Enterprise Patterns with WCF RIA Services". Some of the project was affected by the upg...
    Let's Up: 1.1 (Build 100511): Add short and long break feature.
    Mesopotamia Experiment: Mesopotamia 1.2.88: Primarily bug fixes: fixed a bug in synapse mutation whereby new ones were of one side only, e.g. source or target; fixed a bug in screen ...
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.123: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.123: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's new: this version makes...
    Over Store: OverStore 1.18.0.0: Version number is increased. New callback events added: Persisting and Persisted, which are raised on actually writing object data into the stora...
    Reusable Library: V1.0.7: A collection of reusable abstractions for enterprise application developers
    Reusable Library Demo: V1.0.5: A demonstration of reusable abstractions for enterprise application developers
    Runtime Intelligence Endpoint Starter Kit: Runtime Intelligence Endpoint Starter Kit 20100511: Added crossdomain policy files so Silverlight applications can send usage data from anywhere.
    Scorm2CC: SCORM2CC Release 0.12.0.58711 net-2.0 Alpha: SCORM2CC Release 0.12.0.58711 net-2.0 Alpha
    Shake - C# Make: Shake v0.1.14: Updated API to match the current documentation.
    SharePoint LogViewer: SharePoint Log Viewer 2.6: This is a maintenance release. It has several bug fixes.
    Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.4: As 1.3 with the following changes: addition of a 'Site Data' web part; Site Directory can now be a site collection root or sub-site; scan job now ...
    sMODfix: sMODfix v1.0: Basic version
    smtp4dev: smtp4dev 2.0: smtp4dev 2.0 is powered by a completely re-written server component and now offers SSL/TLS and AUTH support.
    SocialScapes: SocialScapes Sidebar 1.0: The SocialScapes Sidebar is the first release of the new SocialScapes suite. There will be more modules to come along with a complete data aggrega...
    SSIS Multiple Hash: Multiple Hash V1.2: This is version 1.2 of the Multiple Hash SSIS component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pack...
    Surfium: Beta build: Somehow tested
    Terminals: Terminals 1.9a - RDP6 Support: The major change in this release is that Terminals has been upgraded to require RDP Client 6 to be installed for creating RDP connections. Backward...
    TFS Compare: Release 3.0: This release provides a new feature - the ability to navigate back and forth between document differences. Also, this release provides support for...
    VCC: Latest build, v2.1.30511.0: Automatic drop of latest build
    youtubeFisher: youtubeFisher v2.0: What's new: new method of capturing YouTube parameters; HD 720p video support; full-HD 1080p video support; added a Cancel option to stop file downl...

    Most Popular Projects

    WBFS Manager; Rawr; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects

    patterns & practices – Enterprise Library; Mirror Testing System; Rawr; The Information Literacy Education Learning Environment (ILE); Caliburn: An Application Framework for WPF and Silverlight; white; BlogEngine.NET; PHPExcel; jQuery Library for SharePoint Web Services; TweetSharp

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such it's clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S

    To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea is that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :)

    Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nutshell, the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values.

    [BrowserTheory]
    [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")]
    public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider)
    {
        //Arrange
        _browser = seleniumProvider.GetBrowser();
        _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

        //Act
        _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");

        //Assert
        var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);
        Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
        Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
        Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
    }

    These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to executing these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However, for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up.
    Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately, the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices:

    new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com");

    This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page.

    Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit, on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this:

    [Test]
    public void Purchase1UserLicenceNoSupport()
    {
        //Arrange
        ISelenium browser = BrowserHelpers.GetBrowser();
        var db = DbHelpers.GetWebsiteDBDataContext();
        browser.Start();
        browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

        //Act
        browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");
        var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);

        //Assert
        Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
        Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
        Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
    }

    This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed, the //Arrange phase now has to handle setting up the ISelenium object (as the attribute that previously did this has gone away), and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity, each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test.
With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in AssemblyInfo.cs using the following:

[assembly: DegreeOfParallelism(3)]
[assembly: Parallelizable(TestScope.All)]

The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes to 55 minutes - an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that inefficiencies in the hub code, my test environment or the test runner code (or, most likely, some combination of all three) contribute to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RCs with the hub, and up the DegreeOfParallelism.

*This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.
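Incidentally, since [Parallelizable] can also be applied at the fixture or test level, you don't have to go all-in at the assembly level. A sketch of the fixture-level option mentioned above (the fixture name is made up, and I'd double-check the default TestScope against the MbUnit docs):

[TestFixture]
[Parallelizable] // only this fixture's tests become eligible for concurrent execution
public class ShoppingCartPriceTests
{
    [Test]
    public void Purchase1UserLicenceNoSupport()
    {
        // ... exactly as shown above ...
    }
}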

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). Ok, so all I need to do is reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias, which prefixes all types inside it (a minimal sketch of this aliasing trick appears at the end of this post). Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions, and everything worked fine - except that this was ugly. After playing around a little to make it simpler, I found that I did not need to reference both NUnit.Framework assemblies at all. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed what to do by attributes, in a declarative way, there is really no need to tie the runners to a specific version. At its core, NUnit has this little method hidden away to find matching TestFixtures and Tests:

public bool CanBuildFrom(Type type)
{
    if (!(!type.IsAbstract || type.IsSealed))
    {
        return false;
    }
    return Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
           Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true) ||
           Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true) ||
           Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true);
}

That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely - not by using the concrete type, but simply by checking for the caught exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all the consequences. Type equality is lost between versions, so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correctly versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to shield your business logic from the concrete versions. This is of course more work, but as NUnit shows, it can be easy. Simplicity is therefore not just a nice thing to have, but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Anything beyond a simple interaction model will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

Versioning Models

1. Bug Fixing and New Isolated Features

When you only need to fix bugs, there is no need to break anything. This is especially true when you have a big API surface.
Microsoft did this with the .NET Framework 3.0, which left the CLR as is but delivered new assemblies for the WPF, WCF and Windows Workflow Foundation features. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly - each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0/3.5 did not have such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new, unrelated features. If you have a big API surface you should strive hard to do the same, or you will break your customers' code with every release.

2. New Breaking Features

There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways, which caused subtly different behavior although the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when it is installed, but will keep using the "old" one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation: you will have 2 GCs, 2 JIT compilers and 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint, because everything is loaded and running twice.

3. New Non Breaking Features

It really depends where you break things. NUnit has evolved, and many different Assert, Expect... methods have been added. These changes are all localized in the NUnit.Framework assembly, which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable, it is possible to write test executors which can run tests written for NUnit 10, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

Versioning software is hard, and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
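As a footnote, the "reference the same assembly twice with an alias" trick mentioned at the start of this post looks roughly like the sketch below. The alias name and assembly file names are made up for illustration; the extern alias mechanism itself is standard C#:

// Compile with one of the two references aliased, e.g.:
//   csc /r:nunit25=nunit.framework-2.5.3.dll /r:nunit.framework-2.4.8.dll Tests.cs

extern alias nunit25; // types from the aliased assembly live under the nunit25:: root

[NUnit.Framework.TestFixture]           // from the default reference
[nunit25::NUnit.Framework.TestFixture]  // from the aliased 2.5.3 assembly
public class MyTests
{
    [NUnit.Framework.Test]
    [nunit25::NUnit.Framework.Test]
    public void WorksWithEitherRunner()
    {
    }
}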

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features - strong typing, query compilation, deferred execution, a declarative paradigm, etc. - which are very productive. Of course, these cannot come for free, and one of the prices is performance.

O/R mapping overhead

Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving data first:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
        product.UnitPrice = unitPrice;
        database.SubmitChanges(); // UPDATE...
    }
}

Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (SqlConnection connection = new SqlConnection(
        "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
    using (SqlCommand command = new SqlCommand(
        @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
        connection))
    {
        command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
        command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
        connection.Open();
        command.Transaction = connection.BeginTransaction();
        command.ExecuteNonQuery(); // UPDATE...
        command.Transaction.Commit();
    }
}

The above imperative code spells out the "how to do it" details, with better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the declarative code above should be replaced by:

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.ExecuteCommand(
            "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
            unitPrice, id);
    }
}

Or just create a stored procedure:

CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
(
    @ProductID INT,
    @UnitPrice MONEY
)
AS
BEGIN
    BEGIN TRANSACTION
    UPDATE [dbo].[Products]
    SET [UnitPrice] = @UnitPrice
    WHERE [ProductID] = @ProductID
    COMMIT TRANSACTION
END

and map it as a method of NorthwindDataContext (explained in this post):

private static void UpdateProductUnitPrice(int id, decimal unitPrice)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.UpdateProductUnitPrice(id, unitPrice);
    }
}

As is the normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity, according to the case at hand. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

Data retrieving overhead

Having covered the O/R mapping specific issue, now look into the LINQ to SQL specific issues - for example, performance of the data retrieving process. The previous post has explained that the SQL translation and execution is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
It consists of about 15 steps to translate a C# expression tree into a SQL statement, which can be categorized as follows:

- Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
- Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
- Flatten: figure out the hierarchy of the query;
- Rewrite: for SQL Server 2000, if needed;
- Reduce: remove the unnecessary information from the tree;
- Format: generate the SQL statement string;
- Parameterize: figure out the parameters - for example, a reference to a local variable should become a parameter in the SQL;
- Materialize: execute the reader and convert the results back into typed objects.

So for each data retrieval, even one that looks simple:

private static Product[] RetrieveProducts(int productId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        return database.Products.Where(product => product.ProductID == productId)
                                .ToArray();
    }
}

LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

Compiled query

When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

internal static class CompiledQueries
{
    private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
        CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
            database.Products.Where(product => product.ProductID == productId).ToArray());

    internal static Product[] RetrieveProducts(
        this NorthwindDataContext database, int productId)
    {
        return _retrieveProducts(database, productId);
    }
}

The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios.

Static SQL / stored procedures without translating

Another way to avoid the translation overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles:

- LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
- LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
- LINQ to SQL (Part 8: Executing Custom SQL Expressions)

Data changing overhead

Looking into the data updating process, it also needs a lot of work:

- Begins a transaction
- Processes the changes (ChangeProcessor):
  - walks through the objects to identify the changes;
  - determines the order of the changes;
  - executes the changes (LINQ queries may be needed to execute the changes - like the first example in this article, where an object needs to be retrieved before being changed, the whole data retrieving process above is gone through again);
  - if there is user customization, it is executed - for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer.

It is important to keep this overhead in mind.
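By the way, a cheap way to watch all of this work happening is to attach a logger to the DataContext. Log is a standard property of System.Data.Linq.DataContext, so nothing in this sketch is invented except the method name:

private static void DumpTranslatedSql(int productId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        // Every translated SQL statement is written to the console as it executes.
        database.Log = Console.Out;
        Product[] products = database.Products
            .Where(product => product.ProductID == productId)
            .ToArray();
    }
}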
Bulk deleting / updating

Another thing to be aware of is bulk deleting:

private static void DeleteProducts(int categoryId)
{
    using (NorthwindDataContext database = new NorthwindDataContext())
    {
        database.Products.DeleteAllOnSubmit(
            database.Products.Where(product => product.CategoryID == categoryId));
        database.SubmitChanges();
    }
}

The expected SQL would be something like:

BEGIN TRANSACTION
exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
COMMIT TRANSACTION

However, as mentioned before, the actual SQL retrieves the entities, and then deletes them one by one:

-- Retrieves the entities to be deleted:
exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [dbo].[Products] AS [t0]
WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

-- Deletes the retrieved entities one by one:
BEGIN TRANSACTION
exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
-- ...
COMMIT TRANSACTION

And the same goes for bulk updating. This is really not efficient, and something to be aware of. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement in an INNER JOIN:

exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
INNER JOIN (
    SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
    FROM [dbo].[Products] AS [t0]
    WHERE [t0].[CategoryID] = @p0) AS [j1]
ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
N'@p0 int',@p0=9

Query plan overhead

The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug): LINQ to SQL internally uses ADO.NET, but it does not set SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So consider two queries with the same SQL but different argument lengths:

using (NorthwindDataContext database = new NorthwindDataContext())
{
    database.Products.Where(product => product.ProductName == "A")
                     .Select(product => product.ProductID).ToArray();

    // The same SQL and argument type, different argument length:
    database.Products.Where(product => product.ProductName == "AA")
                     .Select(product => product.ProductID).ToArray();
}

Pay attention to the argument length in the translated SQL:

exec sp_executesql N'SELECT [t0].[ProductID]
FROM [dbo].[Products] AS [t0]
WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'

exec sp_executesql N'SELECT [t0].[ProductID]
FROM [dbo].[Products] AS [t0]
WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

Here is the overhead: the first query's plan cache entry is not reused by the second one:

SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
FROM sys.syscacheobjects
INNER JOIN sys.dm_exec_cached_plans
ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

internal static class SqlTypeSystem
{
    private abstract class ProviderBase : TypeSystemProvider
    {
        protected int? GetLargestDeclarableSize(SqlType declaredType)
        {
            SqlDbType sqlDbType = declaredType.SqlDbType;
            if (sqlDbType <= SqlDbType.Image)
            {
                switch (sqlDbType)
                {
                    case SqlDbType.Binary:
                    case SqlDbType.Image:
                        return 8000;
                }
                return null;
            }
            if (sqlDbType == SqlDbType.NVarChar)
            {
                return 4000; // Max length for NVARCHAR.
            }
            if (sqlDbType != SqlDbType.VarChar)
            {
                return null;
            }
            return 8000;
        }
    }
}

In the above example, the translated SQL becomes:

exec sp_executesql N'SELECT [t0].[ProductID]
FROM [dbo].[Products] AS [t0]
WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'

exec sp_executesql N'SELECT [t0].[ProductID]
FROM [dbo].[Products] AS [t0]
WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

So they reuse the same query plan cache entry: now the [usecounts] column is 2.
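For comparison, this is what plan-friendly parameterization looks like when done by hand in ADO.NET: explicitly declaring the parameter at the column's maximum size, which is what the .NET 4.0 fix above does internally. A minimal sketch; the connection string is illustrative:

private static int[] RetrieveProductIds(string productName)
{
    using (SqlConnection connection = new SqlConnection(
        "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
    using (SqlCommand command = new SqlCommand(
        "SELECT [ProductID] FROM [dbo].[Products] WHERE [ProductName] = @ProductName",
        connection))
    {
        // Fixing the declared size at NVARCHAR's maximum means 'A' and 'AA'
        // produce identical parameterized SQL and share one cached plan.
        command.Parameters.Add("@ProductName", SqlDbType.NVarChar, 4000).Value = productName;
        connection.Open();
        List<int> ids = new List<int>();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                ids.Add(reader.GetInt32(0));
            }
        }
        return ids.ToArray();
    }
}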

    Read the article

  • CodePlex Daily Summary for Friday, June 29, 2012

    CodePlex Daily Summary for Friday, June 29, 2012Popular ReleasesSupporting Guidance and Whitepapers: v1 - Supporting Media: Welcome to the Release Candidate (RC) release of the ALM Rangers Readiness supporting edia As this is a RC release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these RC artifacts in a production environment. Quality-Bar Details All critical bugs have been resolved Known Issues / Bugs Practical Ruck training workshop not yet includedDesigning Windows 8 Applications with C# and XAML: Chapters 1 - 7 Release Preview: Source code for all examples from Chapters 1 - 7 for the Release PreviewDataBooster - Extension to ADO.NET Data Provider: DataBooster Library for Oracle + SQL Server Beta2: This is a derivative library of dbParallel project http://dbparallel.codeplex.com. All above binaries releases require .NET Framework 4.0 or later. SQL Server support is always build-in (can't be unplugged). The first download (DLL) also requires ODP.NET to connect Oracle; The second download (DLL) also requires DataDirect(3.5) to connect Oracle; The third download (DLL) doesn't support Oracle. Please download the source code if the provider need to be replaced by others. For example ODP.NE...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.57: Fix for issue #18284: evaluating literal expressions in the pattern c1 * (x / c2) where c1/c2 is an integer value (as opposed to c2/c1 being the integer) caused the expression to be destroyed.Visual Studio ALM Quick Reference Guidance: v2 - Visual Studio 2010 (Japanese): Rex Tang (?? ??) http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/12/08/introducing-the-visual-studio-alm-rangers-rex-tang.aspx, Takaho Yamaguchi (?? ??), Masashi Fujiwara (?? ??), localized and reviewed the Quick Reference Guidance for the Japanese communities, based on http://vsarquickguide.codeplex.com/releases/view/52402. The Japanese guidance is available in AllGuides and Everything packages. The AllGuides package contains guidances in PDF file format, while the Everything packag...Visual Studio Team Foundation Server Branching and Merging Guide: v1 - Visual Studio 2010 (Japanese): Rex Tang (?? ??) http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/12/08/introducing-the-visual-studio-alm-rangers-rex-tang.aspx, Takaho Yamaguchi (?? ??), Hirokazu Higashino (?? ??), localized and reviewed the Branching Guidance for the Japanese communities, based on http://vsarbranchingguide.codeplex.com/releases/view/38849. The Japanese guidance is available in AllGuides and Everything packages. The AllGuides package contains guidances in PDF file format, while the Everything packag...SQL Server FineBuild: Version 3.1.0: Top SQL Server FineBuild Version 3.1.0This is the stable version of FineBuild for SQL Server 2012, 2008 R2, 2008 and 2005 Documentation FineBuild Wiki containing details of the FineBuild process Known Issues Limitations with this release FineBuild V3.1.0 Release Contents List of changes included in this release Please DonateFineBuild is free, but please donate what you think FineBuild is worth as everything goes to charity. 
Tearfund is one of the UK's leading relief and de...EasySL: RapidSL V2: Rewrite RapidSL UI Framework, Using Silverlight 5.0 EF4.1 Code First Ria Service SP2 + Lastest Silverlight Toolkit.NETDeob0: NETDeob 0.1.1: http://i.imgur.com/54C78.pngSOLID by example: All examples: All solid examplesSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1726.406): Use of new version of connection controls for a full support of OSDP authentication mechanism for CRM Online.Umbraco CMS: Umbraco CMS 5.2: Development on Umbraco v5 discontinued After much discussion and consultation with leaders from the Umbraco community it was decided that work on the v5 branch would be discontinued with efforts being refocused on the stable and feature rich v4 branch. For full details as to why this decision was made please watch the CodeGarden 12 Keynote. What about all that hard work?!?? We are not binning everything and it does not mean that all work done on 5 is lost! we are taking all of the best and m...CodeGenerate: CodeGenerate Alpha: The Project can auto generate C# code. Include BLL Layer、Domain Layer、IDAL Layer、DAL Layer. Support SqlServer And Oracle This is a alpha program,but which can run and generate code. Generate database table info into MS WordXDA ROM HUB: XDA ROM HUB v0.9: Kernel listing added -- Thanks to iONEx Added scripts installer button. Added "Nandroid On The Go" -- Perform a Nandroid backup without a PC! Added official Android app!ExtAspNet: ExtAspNet v3.1.8.2: +2012-06-24 v3.1.8 +????Grid???????(???????ExpandUnusedSpace????????)(??)。 -????MinColumnWidth(??????)。 -????AutoExpandColumn,???????????????(ColumnID)(?????ForceFitFirstTime??ForceFitAllTime,??????)。 -????AutoExpandColumnMax?AutoExpandColumnMin。 -????ForceFitFirstTime,????????????,??????????(????????????)。 -????ForceFitAllTime,????????????,??????????(??????????????????)。 -????VerticalScrollWidth,????????(??????????,0?????????????)。 -????grid/grid_forcefit.aspx。 -???????????En...AJAX Control Toolkit: June 2012 Release: AJAX Control Toolkit Release Notes - June 2012 Release Version 60623June 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found here: 11121. - Pages using ...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.5: Version: 2.5.0.5 (Milestone 5): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Add IsInDesignMode property to the WafConfiguration class. WAF: Introduce the IModuleController interface. WAF: Add ...Windows 8 Metro RSS Reader: Metro RSS Reader.v7: Updated for Windows 8 Release Preview Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesConfuser: Confuser 1.9: Change log: * Stable output (i.e. 
given the same seed & input assemblies, you'll get the same output assemblies) + Generate debug symbols, now it is possible to debug the output under a debugger! (Of course without enabling anti debug) + Generating obfuscation database, most of the obfuscation data in stored in it. + Two tools utilizing the obfuscation database (Database viewer & Stack trace decoder) * Change the protection scheme -----Please read Bug Report before you report a bug-----...XDesigner.Development: First release: First releaseNew ProjectsArcGIS Server Rest Catalog: jquery widget that display all MapServer service of ESRI ArcGIS Server in accordion control. Require jQuery and JsRender. BugsBox.Lib: BugsBoxlibCodeStudy: The project includes all my code written for .NET studyCommunity Tfs Team Tools: Community TFS Team Tools is a community project based on the example code from ALM Rangers - Quick Response Sample Command line utility to manage TFS TeamsDiveBaseManager: DiveBaseManagerEclipse App: Aplicacion para android para el envio de coordenaads a un servidorJQMdotNET: JQMdotNet is an early attempt to make a series of MVC HTML helpers to quickly render JQuery Mobile pages.MathBuilderFramework: Math Builder FX is a framework for math problems. The idea of this framework is to create real exercises of mathematics throught base classes.MDS MODELING WORKBOOK: MDS Modeling Workbook is a modeling tool and a solution accelerator for Microsoft Master Data Services. Minesweeper: a clean old minesweeperMultithreaded Port Scanner Utility: Mulithreaded Port Scanner Utility is a very simple port scanner that take advantage of the new System.Threading.Task namespace in .Net 4.0Nonocast.Data: Nonocast.Data is a free, open source developer focused object persistence for small and medium software.On{X} Scripts: This project is to share scripts I create or modify from available free/opensource scripts (off-course with due mentions & links to original developer).PCS MAP: ladPowerShell Module for Mayhem: A module for Mayhem with a reaction which executes a PowerShell script.PunkBuster™ Screenshot Viewer: PunkBuster™ Screenshot Viewer shows screenshots of all games protected with PunkBuster™.Random reminder: Objective: To create a simple Windows application that can schedule and automatically reschedule a random timer that falls within a defined interval.Remembrall: Some useful stuff (at least for me) ! Javascript plugins jQuery plugins C# utilities ASP.NET MVC helper extensionsrepotfs: Repositório de exemplosSharepoint Designer: Sharepoint ExplororSPFluid: Simple modularity and data access framework.SPManager: Helper classSQL Server : xp_fixeddrives2: C++ Extended Stored Procedure Get volumes InfoSWORD Extensions for Sharepoint: Push documents in and out of Sharepoint using the SWORD protocol. ***incomplete***Test projects: My testing projecttestdd06282012git01: xzc testdd06282012git1: sadtestdd06282012tfs01: xzctestddhg06282012: xctestddtfs062820121: ,lm;.testhg06282012dd1: zxTlrk: NATotal_Mayhem: Add-on for Mayhem ( http://mayhem.codeplex.com/ ) that includes networking events, power reactions, and more.WallSwitch: An application to cycle your desktop wallpaper.??????: ????????

    Read the article

  • CodePlex Daily Summary for Monday, June 10, 2013

    CodePlex Daily Summary for Monday, June 10, 2013Popular ReleasesNexusCamera: NexusCamera: Nexus Camera is a control for Windows Phone 7 & 8, which can be used as a menu on the Camera. The idea in making this control when we use a camera nexus. Thanks for Nexus. Need Windows Phone Toolkit https://phone.codeplex.com/ View Sample Camera http://tctechcrunch2011.files.wordpress.com/2012/11/nexus4-camera.jpgVR Player: VR Player 0.3 ALPHA: New plugin system with individual folders TrackIR support Maya and 3ds max formats support Dual screen support Mono layouts (left and right) Cylinder height parameter Barel effect factor parameter Razer hydra filter parameter VRPN bug fixes UI improvements Performances improvements Stabilization and logging with Log4Net New default values base on users feedback CTRL key to open menuZXMAK2: Version 2.7.5.4: - add hayes modem device (thanks to Eltaron) - add host joystick selection - fix joystick bits (swapped in previous version)SimCityPak: SimCityPak 0.1.0.8: SimCityPak 0.1.0.8 New features: Import BMP color palettes for vehicles Import RASTER file (uncompressed 8.8.8.8 DDS files) View different channels of RASTER files or preview of all layers combined Find text in javascripts TGA viewer Ground textures added to lot editor Many additional identified instances and propertiesWsus Package Publisher: Release v1.2.1306.09: Add more verifications on certificate validation. WPP will not let user to try publishing an update until the certificate is valid. Add certificate expiration date on the 'About' form. Filter Approbation to avoid a user to try to approve an update for uninstallation when the update do not support uninstallation. Add the server and console version on the 'About' form. WPP will not let user to publish an update until the server and console are not at the same level. WPP do not let user ...AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release Version 7.0607June 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...Rawr: Rawr 5.2.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: changes NEW: Added Support for "ImageJumbo.com" links FIXED: Ripping of Threads with multiple pagesXomega Framework: Xomega.Framework 1.4: Adding support for Visual Studio 2012 and .Net framework 4.5. Minor bug fixes and enhancements.sb0t v.5: sb0t 5.14: Stability fix in script engine. Avatar.exists property fixed in scripting. cb0t custom font protocol re-added and updated to support new Ares.ASP.NET MVC Forum: MVCForum v1.3.5: This is a bug release version, with a couple of small usability features and UI changes. 
All the small amount of bugs reported in v1.3 have been fixed, no upgrade needed just overwrite the files and everything should just work.Json.NET: Json.NET 5.0 Release 6: New feature - Added serialized/deserialized JSON to verbose tracing New feature - Added support for using type name handling with ISerializable content Fix - Fixed not using default serializer settings with primitive values and JToken.ToObject Fix - Fixed error writing BigIntegers with JsonWriter.WriteToken Fix - Fixed serializing and deserializing flag enums with EnumMember attribute Fix - Fixed error deserializing interfaces with a valid type converter Fix - Fixed error deser...Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.3 for VS2012: V2.3 - Release Date 6/5/2013 Items addressed in this 2.3 release Fixed bad namespace for BusinessController in one of the C# templates. Updated documentation in all templates. Setting up your DotNetNuke Module Development Environment Installing Christoc's DotNetNuke Module Development Templates Customizing the latest DotNetNuke Module Development Project TemplatesPulse: Pulse 0.6.7.0: A number of small bug fixes to stabilize the previous Beta. Sorry about the never ending "New Version" bug!QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0 including Source Code qar File Example QlikView application Tested With: Browser Firefox 20 (x64) Google Chrome 27 (x64) Internet Explorer 9 QlikView QlikView Desktop 11 - SR2 (x64) QlikView Desktop 11.2 - SR1 (x64) QlikView Ajax Client 11.2 - SR2 (based on x64)BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior, HTTP packet format has been changed. Check Version History for more information about this release.SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes below: Upgrade SuperSocket to 1.5.3 which is much more stable Added handshake request validating api (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)) Fixed a bug that the m_Filters in the SubCommandBase can be null if the command's method LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) is not invoked Fixed the compatibility issue on Origin getting in the different version protocols Marked ISub...BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (1) ?????????????????????????????????($Remote.ini Tmp.ini) (2) ThreadBaseTest?? (3) ????POP3??????SMTP???????????????? (4) Web???????、?????????URL??????????????? (5) Ftp???????、LIST?????????????? (6) ?????????????????????Media Companion: Media Companion MC3.569b: New* Movies - Autoscrape/Batch Rescrape extra fanart and or extra thumbs. * Movies - Alternative editor can add manually actors. * TV - Batch Rescraper, AutoScrape extrafanart, if option enabled. Fixed* Movies - Slow performance switching to movie tab by adding option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General. * Movies - Fixed only actors with images were scraped and added to nfo * Movies - Fixed filter reset if selected tab was above Home Movies. * Updated Medi...Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums with great new features for users and developers: SQL Azure support Admin UI for Forum Categories Avoid html validation for certain roles Improve profile picture moderation and support Warn, suspend, and ban users Web administration of site settings Extensions support Visit the Roadmap for more details. 
Webdeploy package sha1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b New ProjectsASP.NET MVC 4 and RequireJS: ASP.NET MVC 4 application with Areas and RequireJSBaseX - Base converter and calculator: Dealing with numbers of any base in .NET.C# Exercises: C# ExercisesClassfinder: ClassfinderCreative OS ALPHA: This is a OS!!!!CSS Exercises: CSS ExercisesCustom Workflow Action: Project showing how to create and use Custom Workflow Action for SharePoint Designer 2013.Devshed Tools: Provides easy to use and compile-time-support solution for various type of projects on the .NET framework. Currently Devshed.Web is in development.Envar Editor: Edit environment variables easily on windowsExcel To Sql: A simplified tool for importing Excel data into SQL.HTML Exercises: HTML ExercisesKnockout.js with ASP.NET MVC: This project implements a system which maps .NET ViewModels to javascript ViewModels for use with knockout.js, using Razor markup syntax.LogoBids: LOGO??????,ORM??OpenAccess ORMManagistics: Management Logistics Application (including: Warehouse, Sale, Purchase, ...)Matrix Switch Preset Utility: A small utility for managing the inputs and outputs from a matrix switch via RS-232. Developed in WPF (VB9) and running on the .Net3.5SP1 framework.MvcSystemsCommander: An ASP.NET C# MVC4 webapp to help systems administrators consolidate common systems administration tasksNewspaperAgent: My small projectOutlook Recovery Software - Efficiently Repair Damaged PST File: This project tells you the easiest way to recover PST file of Outlook. Complete information has been given here to help users.Pattern: Testprocedure: a new procedural programming framework based on .net, by using lambda expression, it can handle async io friendly and provide a full lock-free solutionSharePoint 2013 custom field samples: SharePoint 2013 custom field samples is a research project aims to provide samples for developing custom fields in SharePoint 2013.SharePoint 2013 List Forms: This small framework allows you to manage custom list forms using rendering templates and controls stored in a SharePoint library.The Coconut Cranium Decision Engine: The Coconut Cranium Decision Engine is a boolean decision engine using the most mind-bendingly worse way of working.TxtToSeq: Command line utility to convert Commodore SEQ files to TXT files and vice-versa.ultgw: ult gw

    Read the article

  • Can I delete libc-bin?

    - by Balazs Szikszay
    Question is simple, I need to know because I cant upgrade/install anything, because it always says I have to uninstall/delete it to continue. It also says dont do it, if I dont know what I am doing. EDIT: szikszay@szikszay-Latitude-E5530-non-vPro:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed Depends: libqtgui4:i386 but it is not installed Depends: libqt4-dbus:i386 but it is not installed Depends: libqt4-network:i386 but it is not installed Depends: libqt4-opengl:i386 but it is not installed Depends: libqt4-qt3support:i386 but it is not installed Depends: libqt4-script:i386 but it is not installed Depends: libqt4-scripttools:i386 but it is not installed Depends: libqt4-sql:i386 but it is not installed Depends: libqt4-svg:i386 but it is not installed Depends: libqt4-test:i386 but it is not installed Depends: libqt4-xml:i386 but it is not installed Depends: libqt4-xmlpatterns:i386 but it is not installed Depends: libcups2:i386 but it is not installed Depends: libcupsimage2:i386 but it is not installed Depends: libcurl3:i386 but it is not installed Depends: libnss3:i386 but it is not installed Depends: libnspr4:i386 but it is not installed Depends: libssl1.0.0:i386 but it is not installed Recommends: libgl1-mesa-glx:i386 but it is not installed Recommends: libgl1-mesa-dri:i386 but it is not installed lib32ffi6 : Depends: libc6-i386 (= 2.4) but it is not installed lib32gcc1 : Depends: libc6-i386 (= 2.5) but it is not installed lib32nss-mdns : Depends: libc6-i386 (= 2.4) but it is not installed lib32stdc++6 : Depends: libc6-i386 (= 2.4) but it is not installed lib32z1 : Depends: libc6-i386 (= 2.4) but it is not installed libacl1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libattr1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libaudio2:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libavahi-client3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed Depends: libdbus-1-3:i386 (= 1.1.1) but it is not installed libavahi-common3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libcomerr2:i386 : Depends: libc6:i386 (= 2.12) but it is not installed libdb5.1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libdrm-intel1:i386 : Depends: libc6:i386 (= 2.3.4) but it is not installed libdrm-nouveau1a:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libdrm-radeon1:i386 : Depends: libc6:i386 (= 2.3.4) but it is not installed libdrm2:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libffi6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libfontconfig1:i386 : Depends: libc6:i386 (= 2.7) but it is not installed Depends: libexpat1:i386 (= 1.95.8) but it is not installed Depends: libfreetype6:i386 (= 2.2.1) but it is not installed libgcc1:i386 : Depends: libc6:i386 (= 2.2.4) but it is not installed libgcrypt11:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libgdbm3:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libglib2.0-0:i386 : Depends: libc6:i386 (= 2.9) but it is not installed libgpg-error0:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libice6:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libidn11:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libjpeg62:i386 : Depends: libc6:i386 (= 2.7) but 
it is not installed libkeyutils1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed liblcms1:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libllvm2.9:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libmng1:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libpciaccess0:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libpcre3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed librtmp0:i386 : Depends: libc6:i386 (= 2.7) but it is not installed Depends: libgnutls26:i386 (= 2.9.11-0) but it is not installed libsasl2-2:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libsasl2-modules:i386 : Depends: libc6:i386 (= 2.4) but it is not installed Depends: libssl1.0.0:i386 (= 1.0.0) but it is not installed libselinux1:i386 : Depends: libc6:i386 (= 2.8) but it is not installed libsm6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libsqlite3-0:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libstdc++6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libuuid1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libx11-6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxau6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxcb1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxdamage1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxdmcp6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxext6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxfixes3:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxrender1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxss1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxt6:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libxxf86vm1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed zlib1g:i386 : Depends: libc6:i386 (= 2.4) but it is not installed E: Unmet dependencies. Try using -f. szikszay@szikszay-Latitude-E5530-non-vPro:~$ sudo apt-get upgrade -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... 
Done The following packages will be REMOVED libc-bin The following NEW packages will be installed libc-bin:i386 libc6:i386 libc6-i386 libcups2:i386 libcupsimage2:i386 libcurl3:i386 libdbus-1-3:i386 libexpat1:i386 libfreetype6:i386 libgl1-mesa-dri:i386 libgl1-mesa-glx:i386 libglapi-mesa:i386 libgnutls26:i386 libgssapi-krb5-2:i386 libk5crypto3:i386 libkrb5-3:i386 libkrb5support0:i386 libldap-2.4-2:i386 libnspr4:i386 libnss3:i386 libpng12-0:i386 libqt4-dbus:i386 libqt4-declarative:i386 libqt4-designer:i386 libqt4-network:i386 libqt4-opengl:i386 libqt4-qt3support:i386 libqt4-script:i386 libqt4-scripttools:i386 libqt4-sql:i386 libqt4-svg:i386 libqt4-test:i386 libqt4-xml:i386 libqt4-xmlpatterns:i386 libqtcore4:i386 libqtgui4:i386 libssl1.0.0:i386 libtasn1-3:i386 libtiff4:i386 libxi6:i386 The following packages have been kept back: ginn libgrip0 linux-headers-generic linux-image-generic unity unity-common xserver-xorg-input-evdev xserver-xorg-input-synaptics The following packages will be upgraded: accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner apparmor appmenu-qt apport apport-gtk apt apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common at-spi2-core bamfdaemon banshee banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez bluez-alsa bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common brltty bzip2 ca-certificates-java checkbox checkbox-gtk colord command-not-found command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default cups cups-bsd cups-client cups-common cups-ppdc dbus dbus-x11 deja-dup desktop-file-utils dnsutils dpkg ecryptfs-utils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox firefox-globalmenu firefox-gnome-support firefox-locale-en firefox-locale-hu gbrainy gcalctool gconf2 gconf2-common gedit gedit-common ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0 gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-keyring gnome-mahjongg gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku gnome-system-log gnome-system-monitor gnome-utils-common gnomine gnupg gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-gconf gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter gzip hpijs hplip hplip-cups hplip-data icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx ifupdown im-switch indicator-datetime indicator-session indicator-sound initramfs-tools initramfs-tools-bin initscripts insserv isc-dhcp-client isc-dhcp-common iso-codes jockey-common jockey-gtk language-pack-en language-pack-en-base language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-hu language-pack-gnome-hu-base language-pack-hu language-pack-hu-base language-selector-common language-selector-gnome libaccountsservice0 libapt-inst1.3 libapt-pkg4.11 libarchive1 libasound2-plugins libatk-adaptor libatspi2.0-0 libbamf0 libbamf3-0 libbind9-60 
libbluetooth3 libbrasero-media3-1 libbrlapi0.5 libbz2-1.0 libc-dev-bin libc6 libc6-dev libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-pulse libcanberra0 libcolord1 libcups2 libcupscgi1 libcupsdriver1 libcupsimage2 libcupsmime1 libcupsppdc1 libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12 libecal1.2-10 libecryptfs0 libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3 libexif12 libexpat1 libfreetype6 libgail-3-0 libgail-3-common libgck-1-0 libgconf2-4 libgcr-3-1 libgdata-common libgdata13 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libglu1-mesa libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgnutls26 libgoa-1.0-0 libgs9 libgs9-common libgssapi-krb5-2 libgtk-3-0 libgtk-3-bin libgtk-3-common libgtksourceview-3.0-0 libgtksourceview-3.0-common libgudev-1.0-0 libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libicu44 libimobiledevice2 libisc62 libisccc60 libisccfg62 libjasper1 libjs-jquery libk5crypto3 libkrb5-3 libkrb5support0 libldap-2.4-2 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0 libmono-cairo4.0-cil libmono-corlib4.0-cil libmono-csharp4.0-cil libmono-i18n-west4.0-cil libmono-i18n4.0-cil libmono-posix4.0-cil libmono-security4.0-cil libmono-sharpzip4.84-cil libmono-system-configuration4.0-cil libmono-system-core4.0-cil libmono-system-drawing4.0-cil libmono-system-security4.0-cil libmono-system-xml4.0-cil libmono-system4.0-cil libmono-zeroconf1.0-cil libmysqlclient16 libnautilus-extension1 libncurses5 libncursesw5 libnm-glib-vpn1 libnm-glib4 libnm-gtk-common libnm-gtk0 libnm-util2 libnotify0.4-cil libnspr4 libnss3 libnss3-1d libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libpam-modules libpam-modules-bin libpam-runtime libpam0g libperl5.12 libpng12-0 libpoppler-glib6 libpoppler13 libproxy0 libpulse-mainloop-glib0 libpulse0 libpurple-bin libpurple0 libpython2.7 libqt4-dbus libqt4-declarative libqt4-network libqt4-opengl libqt4-script libqt4-sql libqt4-sql-mysql libqt4-svg libqt4-xml libqt4-xmlpatterns libqtcore4 libqtgui4 libreoffice-base-core libreoffice-calc libreoffice-common libreoffice-core libreoffice-draw libreoffice-emailmerge libreoffice-gnome libreoffice-gtk libreoffice-help-en-gb libreoffice-help-en-us libreoffice-help-hu libreoffice-impress libreoffice-l10n-common libreoffice-l10n-en-gb libreoffice-l10n-en-za libreoffice-l10n-hu libreoffice-math libreoffice-style-human libreoffice-writer libsane-hpaio libsmbclient libsnmp-base libsnmp15 libssl1.0.0 libsyncdaemon-1.0-1 libt1-5 libtasn1-3 libtiff4 libtinfo5 libtotem0 libubuntuone-1.0-1 libubuntuone1.0-cil libudev0 libunity-core-4.0-4 libunity6 libusbmuxd1 libv4l-0 libvorbis0a libvorbisenc2 libvorbisfile3 libwbclient0 libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxi6 libxml2 libxslt1.1 lightdm linux-firmware linux-libc-dev mawk metacity metacity-common mobile-broadband-provider-info modemmanager mono-4.0-gac mono-gac mono-runtime mousetweaks multiarch-support mysql-common nautilus nautilus-data nautilus-sendto-empathy ncurses-base ncurses-bin network-manager network-manager-gnome nux-tools onboard openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl perl perl-base perl-modules poppler-utils pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf pulseaudio-module-x11 pulseaudio-utils python-apport 
python-aptdaemon python-aptdaemon-gtk python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-crypto python-cups python-cupshelpers python-egenix-mxdatetime python-egenix-mxtools python-gobject python-gobject-cairo python-httplib2 python-keyring python-launchpadlib python-libproxy python-libxml2 python-pam python-papyon python-pkg-resources python-problem-report python-pyatspi2 python-software-properties python-ubuntuone-client python-ubuntuone-storageprotocol python-uno python2.7 python2.7-minimal qdbus samba-common samba-common-bin seahorse shotwell simple-scan smbclient sni-qt software-center software-properties-common software-properties-gtk sudo system-config-printer-common system-config-printer-gnome system-config-printer-udev sysv-rc sysvinit-utils telepathy-indicator telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us thunderbird-locale-hu tomboy totem totem-common totem-mozilla totem-plugins transmission-common transmission-gtk ttf-opensymbol tzdata tzdata-java ubuntu-desktop ubuntu-docs ubuntu-minimal ubuntu-sso-client ubuntu-standard ubuntuone-client ubuntuone-client-gnome ubuntuone-couch udev unity-lens-applications unity-services uno-libs3 update-manager update-manager-core update-notifier update-notifier-common upstart ure usbmuxd vim-common vim-tiny vinagre vino whois x11-common xdiagnose xorg xserver-common xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel xserver-xorg-video-openchrome xserver-xorg-video-qxl xul-ext-ubufox WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! libc-bin 498 upgraded, 40 newly installed, 1 to remove and 8 not upgraded. 69 not fully installed or removed. Need to get 439 MB of archives. After this operation, 135 MB of additional disk space will be used. You are about to do something potentially harmful To continue type in the phrase ‘Yes, do as I say!’ ?] I tried to upgrade but it gives me an error, when i try to upgrade-f it says i should delete libc-bin. Thanks for the answers btw. EDIT2: it also says this: The package system is broken If you are using third party repositories then disable them, since they are a common source of problems. Now run the following command in a terminal: apt-get install -f

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup. Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort.

    First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough.

    (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise. A minimal example of such a connection string appears at the end of this post.)

    To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps:

    Step 1: Determine if this server is hosting the primary replica.

    This is a TSQL step using this script:

    declare @agName sysname = 'AGTest'
    set nocount on
    declare @primaryReplica sysname

    select @primaryReplica = agState.primary_replica
    from sys.dm_hadr_availability_group_states agState
        join sys.availability_groups ag on agState.group_id = ag.group_id
        where ag.name = @agName

    if not exists(
        select *
        from sys.dm_hadr_availability_group_states agState
        join sys.availability_groups ag on agState.group_id = ag.group_id
        where @@servername = agState.primary_replica
            and ag.name = @agName)
    begin
        raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s', 17, 1, @agName, @@servername, @primaryReplica)
    end

    This script determines if the primary replica value of the AG is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @agName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

    Step 2: Update the CNAME entry in DNS with this server's name.

    I used a PowerShell script to accomplish this:

    $cname = "SPSQL.contoso.com"
    $query = "Select * from MicrosoftDNS_CNAMEType"
    $dns1 = "dc01.contoso.com"
    $dns2 = "dc02.contoso.com"
    if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
    {
        $dnsServer = $dns1
    }
    elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
    {
        $dnsServer = $dns2
    }
    else
    {
        $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
        Throw $msg
    }
    $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
    $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
    $currentServer = $record.RecordData
    if ($currentServer -eq $thisServer)
    {
        $cname + " CNAME is up to date: " + $currentServer
    }
    else
    {
        $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
        $record.RecordData = $thisServer
        $record.put()
    }

    This script does a few things:
    - finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    - makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    - gets the FQDN of this server (GetHostEntry)
    - checks if the CNAME record is correct and updates it if necessary

    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account.

    When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential... Then, under SQL Agent --> Proxies, right click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.

    I created this two-step job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute. When the server hosting the primary replica runs the job, both steps succeed; on the secondary server, step 1 fails and the job quits reporting success, leaving DNS untouched. When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent Job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).
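    As a footnote to the point above about connection strings: here is a minimal sketch (in C#, with hypothetical database and alias names; "SPSQL" stands in for whatever CNAME you create) of a client connecting through the alias rather than a physical server name, so that a failover only requires the alias to be repointed:

    using System;
    using System.Data.SqlClient;

    class PseudoListenerClient
    {
        static void Main()
        {
            // "SPSQL" is the DNS CNAME alias, not a physical server name.
            // After a failover the alias is repointed; this code never changes.
            const string connectionString =
                "Data Source=SPSQL;Initial Catalog=SharePoint_Config;Integrated Security=SSPI";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected via alias to: " + connection.DataSource);
            }
        }
    }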

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
    It’s the sad truth of my life that even though I have been fascinated by airplanes and flight in general since my childhood days, my body doesn’t like flying. Even the ridiculously short flights inside Germany take their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, factor in a heavy dose of jet lag (especially on my way eastwards), and you get an idea why, after coming back home, I had this question on my mind: Was it really worth it to attend MIX10? This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague Jens Schaller to the MIX10 conference. (A note to my German readers: Comma Soft is still looking for Silverlight developers and/or UI designers for its Bonn office – applications with full details please to [email protected]) To keep things short: My answer is yes. Before I go into detail, let me ask the heretical question of whether tech conferences in general still make sense. There was a time where actually being at a tech conference gave you a head start in regard to learning about new technologies. Nowadays this is no longer true, where every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you can even download the video-taped sessions shortly after. So: Does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you’ll neither get exclusive information nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows:

    Focus: Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference.

    Inspiration: With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have to put work into reviewing the information later, but it’s a better starting point. And all the impressions collected at a (good) conference combined lead to a higher motivation – be it by the buzz (“this is gonna be sooo cool!”) or by the fear of falling behind (“man, we’ll have to work on this, or else…”).

    People: At a conference it’s pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course, nobody will tell you about their secret formula) and what they are thinking about specific technologies.

    So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the "soft" topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come.
    Actually, I would have liked to catch a few technical sessions, but the agenda wasn’t exactly in favor of people interested in both Silverlight and UI/UX/design topics. That’s one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences I’ve visited in the past, I’d say that MIX10 ranks somewhere near the top spot. Here’s an overview of the workshops/sessions I attended (I’ll leave out the keynotes):

    Day 0 (Workshops on Sunday)

    Design Fundamentals for Developers: Robby Ingebretsen is the man! Great workshop in three parts with the perfect mix of examples, well-structured definition of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company, so he not only has a strong interest in design (and the skillz!) but also the technical background.

    Design Tools and Techniques: Originally announced to be held by Arturo Toledo, the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting, as it showed how much thought went into the Windows Phone UI (and there’s always a lot to learn when people talk about their thought process).

    Day 1 (Monday)

    Designing Rich Experiences for Data-Centric Applications: I wonder whether there was ever a test run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minute session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.

    Great User Experiences: Seamlessly Blending Technology & Design: I switched to this session from the one above, but I guess I missed the interesting part – what I did catch looked like a “look at the cool stuff we did” without being helpful. Or maybe I was just in a bad mood after the other session.

    The Art, Technology and Science of Reading: This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit of the helpful advice one could have hoped for.

    10 Ways to Attack a Design Problem and Come Out Winning: Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous “wow” from Jens and me. If you watch only one session video, this should be it. Microsoft has to bring Robby back next year!

    Day 2 (Tuesday)

    Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences: Very interesting session by Jason Brush, a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!

    Designing Bing: Heart and Science: How hard can it be to design the UI for a search engine? An input field and a list of results, that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations to finally get it right (up to the choice of a specific blue for the links).
    And yes, I want an eye-tracking device to play around with!

    The Elephant in the Room: When Nishant Kothary presented a long list of what his session was not about, I said to myself (not having the description text present): “Am I in the wrong talk? Should I leave?”. Boy, was I wrong. A great talk about human factors in the process of designing stuff.

    An Hour with Bill Buxton: Having seen Bill Buxton’s presentation in the keynote, I just had to see this man again – even though I didn’t know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn’t provide a wealth of immediately useful information. Nevertheless, Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session somewhere in the evening and not in parallel to at least two other interesting talks.

    Day 3 (Wednesday)

    Design the Ordinary, Like the Fixie: This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used a fancy presentation software other than PowerPoint that sometimes got in the way of showing things.

    Developing Natural User Interfaces: Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint, a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what “multi-touch support” on various platforms (WPF, Silverlight, Windows Phone) actually means.

    Treat Your Content Right: The talk by Tiffani Jones Brown wasn’t even on my planned schedule, but somehow I ended up in that session – and it was great. Even for people who don’t necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put texts with more than a single word into your UI.

    Creating Effective Info Viz in Microsoft Silverlight: The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to info graphics and interactive visualizations. Then the live coding began, and while the result was interesting, too much time was spent on wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources.

    Conclusion

    It could be felt all across MIX10: UIs will take a huge leap forward; in fact, there are already enough examples of that. People who both have the technical know-how and at least a basic understanding of design (“literacy”, as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox show that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and put buttons on a form. There are extremely exciting times before us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • Networking in VirtualBox

    - by Fat Bloke
    Networking in VirtualBox is extremely powerful, but can also be a bit daunting, so here's a quick overview of the different ways you can set up networking in VirtualBox, with a few pointers as to which configurations should be used and when. VirtualBox allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest vm (although only 4 are exposed in the GUI), and for each of these NICs you can configure:

    - Which virtualized NIC-type is exposed to the Guest. Examples include: Intel PRO/1000 MT Server (82545EM), AMD PCNet FAST III (Am79C973, the default) or a Paravirtualized network adapter (virtio-net).
    - How the NIC operates with respect to your Host's physical networking. The main modes are: Network Address Translation (NAT), Bridged networking, Internal networking, Host-only networking, and NAT with Port-forwarding.

    The choice of NIC-type comes down to whether the guest has drivers for that NIC. VirtualBox suggests a NIC based on the guest OS-type that you specify during creation of the vm, and you rarely need to modify this. But the choice of networking mode depends on how you want to use your vm (client or server) and whether you want other machines on your network to see it. So let's look at each mode in a bit more detail...

    Network Address Translation (NAT)

    This is the default mode for new vm's and works great in most situations when the Guest is a "client" type of vm (i.e. most network connections are outbound). Here's how it works: When the guest OS boots, it typically uses DHCP to get an IP address. VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections. In this mode, every vm is assigned the same IP address (10.0.2.15) because each vm thinks it is on its own isolated network. And when they send their traffic via the gateway (10.0.2.2), VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host). This means that the Guest will work even as the Host moves from network to network (e.g. laptop moving between locations), and from wireless to wired connections too. However, how does another computer initiate a connection into a Guest? E.g. connecting to a web server running in the Guest. This is not (normally) possible using NAT mode, as there is no route into the Guest OS. So for vm's running servers we need a different networking mode...

    Bridged Networking

    Bridged Networking is used when you want your vm to be a full network citizen, i.e. to be an equal to your host machine on the network. In this mode, a virtual NIC is "bridged" to a physical NIC on your host. The effect of this is that each VM has access to the physical network in the same way as your host. It can access any service on the network such as external DHCP services, name lookup services, and routing information, just as the host does. The downside of this mode is that if you run many vm's you can quickly run out of IP addresses, or your network administrator gets fed up with you asking for statically assigned IP addresses. Secondly, if your host has multiple physical NICs (e.g. Wireless and Wired) you must reconfigure the bridge when your host jumps networks. Hmm, so what if you want to run servers in vm's but don't want to involve your network administrator? Maybe one of the next 2 modes is for you...
    Internal Networking

    When you configure one or more vm's to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to vm's on that virtual network. The internal network (in this example "intnet") is a totally isolated network and so is very "quiet". This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with vm's that provide their own services to the internal network (e.g. Active Directory, DHCP, etc). Note that not even the Host is a member of the internal network, but this mode allows vm's to function even when the Host is not connected to a network (e.g. on a plane). Note that in this mode, VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the vm's needs to provide a DHCP/name service. Multiple internal networks are possible, and you can configure vm's to have multiple NICs that sit across internal and other network modes and thereby provide routes if needed. But all this sounds tricky. What if you want an Internal Network that the host participates on, with VirtualBox providing IP addresses to the Guests? Ah, then for this you might want to consider Host-only Networking...

    Host-only Networking

    Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case "vboxnet0". All vm's sitting on this "vboxnet0" network will see each other, and additionally the host can see these vm's too. However, other external machines cannot see Guests on this network, hence the name "Host-only". This looks very similar to Internal Networking, but the host is now on "vboxnet0" and can provide DHCP services. To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog.

    Port-Forwarding with NAT Networking

    Now you may think that we've provided enough modes here to handle every eventuality, but here's just one more... What if you cart around a mobile demo or dev environment on, say, a laptop and you have one or more vm's that you need other machines to connect into? And you are continually hopping onto different (customer?) networks. In this scenario:

    - NAT won't work, because external machines need to connect in.
    - Bridged is possibly an option, but does your customer want you eating IP addresses, and can your software cope with changing networks?
    - Internal is no good, as we need the vm(s) to be visible on the network.
    - Host-only has the same problem: we want external machines to connect in to the vm's.

    Enter Port-forwarding to save the day! Configure your vm's to use NAT networking, add Port Forwarding rules, and external machines connect to "host":"port number" and connections are forwarded by VirtualBox to the guest:port number specified. For example, if your vm runs a web server on port 80, you could set up a rule which reads: "any connections on port 8080 on the Host will be forwarded onto this vm's port 80". This provides a mobile demo system which won't need re-configuring every time you open your laptop lid. (A sketch of creating such a rule from the command line follows at the end of this post.)

    Summary

    VirtualBox has a very powerful set of options allowing you to set up almost any configuration your heart desires. For more information, check out the VirtualBox User Manual on Virtual Networking. -FB
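    As a footnote to the port-forwarding example above: besides the GUI, NAT rules can also be created with the VBoxManage command line tool. A minimal sketch (the VM name "WebServerVM" and rule name "guestweb" are made-up placeholders; the VM should be powered off when modifying it this way):

    VBoxManage modifyvm "WebServerVM" --natpf1 "guestweb,tcp,,8080,,80"

    This maps TCP port 8080 on the Host to port 80 in the Guest on the VM's first NAT adapter; the empty fields (host IP and guest IP) mean "listen on all host interfaces" and "forward to the guest's NAT-assigned address" respectively.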

    Read the article

  • TFS 2010 RC Agile Process template update: New Task progress report

    Maybe my next post will just be about why I am so excited and impressed with the out-of-the-box templates. But, for this first blog with my new focus, I thought I would just walk through the process I went through to create a task progress report (to enhance the out-of-the-box Agile template). So, I started with the MSF for Agile Development 5.0 RC template. After reviewing the template, I came away pretty excited about many of the new reports. I am especially excited about the Reporting Services reports. The big advantage I see here is that these query the warehouse directly instead of the Analysis Services cube, which means that they are much closer to real-time, which I find very important for reports like burndown and task status. One report that I focused on right away was the User Story Progress Report. This report is very useful, but a lot of our internal managers really prefer to manage at the task level and either don't have stories in TFS or would like to view this type of report for tasks in addition to the User Stories. So, what did I do?

    Step 1: Download the Agile template

    In VS 2010 RC, open Process Template Manager from Team -> Team Project Collection Settings. Download the MSF for Agile Development template to your local file system. A process template is a folder of XML files. There is a ProcessTemplate.xml in the root and then a bunch of directories for things like Work Item Definitions and Queries, Reports, Shared Documents and Source Control Settings.

    Step 2: Copy the folder

    My plan here is to make a new template with all of my modifications. You can also just enhance the MSF template. However, I think it is cleaner, when you start making modifications, to make your own template. So, copy the folder and name it with your new template name.

    Step 3: Change the template name

    Open ProcessTemplate.xml and change the <name> of the template. (A sketch of what this might look like appears at the end of this post.)

    Step 4: Copy the rdl of the report you want to use as a starting point

    In my case, I copied Stories Progress.rdl and named the file Task Progress Breakdown.rdl. I reviewed the requirements for the new report with some of the users here and came up with this plan:
    - Should show tasks and be expandable to show subtasks.
    - Should add Assigned To and Estimated Finish Date as 2 extra columns.

    Step 5: Walk through the existing report to understand how it works

    The main thing that I do here is try to get the SQL to run in SQL Management Studio, so I can walk through the process of building up the data for the report. After analyzing this particular report I found a couple of very useful things. One, this report is already built to display subtasks if I just flip the IncludeTasks flag to 1. So, if you are using stories and have tasks assigned to each story, this might give you everything you want. For my purposes, I did make that change to the Stories Progress report, as I find it to be a more useful report when you can see the tasks that comprise each story. But I still wanted a task-only version with the additional fields.

    Step 6: Update the report definition

    I tend to work on rdl in Visual Studio directly as XML. Especially when I am just altering an existing report, I find it easier than trying to deal with the BI Studio designer. For my report I made the following changes:
    - Updated fields: removed Stack Rank and replaced it with Priority, since we don't use Stack Rank; added FinishDate and AssignedTo.
    - Changed the root deliverable SQL to pull @tasks instead of @deliverablecategory and added a join to CurrentWorkItemView for FinishDate and AssignedTo:

    SELECT cwi.[System_Id] AS ID
    FROM [CurrentWorkItemView] cwi
    WHERE cwi.[System_WorkItemType] IN (@Task)
        AND cwi.[ProjectNodeGUID] = @ProjectGuid

    SELECT lh.SourceWorkItemID AS ID
    FROM FactWorkItemLinkHistory lh
        INNER JOIN [CurrentWorkItemView] cwi ON lh.TargetWorkItemID = cwi.[System_Id]
    WHERE lh.WorkItemLinkTypeSK = @ParentWorkItemLinkTypeSK
        AND lh.RemovedDate = CONVERT(DATETIME, '9999', 126)
        AND lh.TeamProjectCollectionSK = @TeamProjectCollectionSK
        AND cwi.[System_WorkItemType] NOT IN (@DeliverableCategory)

    - Added AssignedTo and FinishDate columns to the @Rollups table.
    - Added two columns to the table used for column headers:

    <Tablix Name="ProgressTable">
      <TablixBody>
        <TablixColumns>
          <TablixColumn><Width>2.7625in</Width></TablixColumn>
          <TablixColumn><Width>0.5125in</Width></TablixColumn>
          <TablixColumn><Width>3.4625in</Width></TablixColumn>
          <TablixColumn><Width>0.7625in</Width></TablixColumn>
          <TablixColumn><Width>1.25in</Width></TablixColumn>
          <TablixColumn><Width>1.25in</Width></TablixColumn>
        </TablixColumns>

    - Added cells for the two new headers.
    - Added cells to the data table to include the two new values (Assigned To & Finish Date).
    - Changed a bunch of widths so that the report displays landscape and has room for the two additional columns.
    - Set the value of the IncludeTasks parameter to 1:

    <ReportParameter Name="IncludeTasks">
      <DataType>Integer</DataType>
      <DefaultValue>
        <Values>
          <Value>=1</Value>
        </Values>
      </DefaultValue>
      <Prompt>IncludeTasks</Prompt>
      <Hidden>true</Hidden>
    </ReportParameter>

    - Changed a few descriptions of how the report should be used.

    I have attached the final rdl.

    Step 7: Update ReportTasks.xml

    The last step before the template is ready for use is to update the ReportTasks.xml file in the Reports folder. This file defines the reports that are available in the template:

    <report name="Task Progress Breakdown" filename="Reports\Task Progress Breakdown.rdl" folder="Project Management" cacheExpiration="30">
      <parameters>
        <parameter name="ExplicitProject" value="" />
      </parameters>
      <datasources>
        <reference name="/Tfs2010ReportDS" dsname="TfsReportDS" />
      </datasources>
    </report>

    Step 8: Upload the template

    Open the Process Template Manager just like in Step 1, and upload the new template. That's it. One other note: if you want to add this report to an existing team project, you will have to go into Report Manager (the Reporting Services portal) and upload the rdl to that project's directory.
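    For Step 3 above, a hypothetical sketch of the edit (element names are from memory of the MSF template layout and should be treated as approximate, not authoritative):

    <?xml version="1.0" encoding="utf-8"?>
    <ProcessTemplate>
      <metadata>
        <!-- This name is what shows up in the Process Template Manager -->
        <name>MSF for Agile Development 5.0 - Task Progress Edition</name>
        <description>Agile template extended with a Task Progress Breakdown report.</description>
      </metadata>
      <!-- groups, plugins, etc. continue unchanged from the original template -->
    </ProcessTemplate>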

    Read the article

  • Understanding C# async / await (2) Awaitable / Awaiter Pattern

    - by Dixin
    What is awaitable

    Part 1 shows that any Task is awaitable. Actually there are other awaitable types. Here is an example:

    Task<int> task = new Task<int>(() => 0);
    task.Start(); // Start the task, otherwise the await below will never complete.
    int result = await task.ConfigureAwait(false); // Returns a ConfiguredTaskAwaitable<TResult>.

    The returned ConfiguredTaskAwaitable<TResult> struct is awaitable. And it is not a Task at all:

    public struct ConfiguredTaskAwaitable<TResult>
    {
        private readonly ConfiguredTaskAwaiter m_configuredTaskAwaiter;

        internal ConfiguredTaskAwaitable(Task<TResult> task, bool continueOnCapturedContext)
        {
            this.m_configuredTaskAwaiter = new ConfiguredTaskAwaiter(task, continueOnCapturedContext);
        }

        public ConfiguredTaskAwaiter GetAwaiter()
        {
            return this.m_configuredTaskAwaiter;
        }
    }

    It has one GetAwaiter() method. Actually, in part 1 we have seen that Task has a GetAwaiter() method too:

    public class Task
    {
        public TaskAwaiter GetAwaiter()
        {
            return new TaskAwaiter(this);
        }
    }

    public class Task<TResult> : Task
    {
        public new TaskAwaiter<TResult> GetAwaiter()
        {
            return new TaskAwaiter<TResult>(this);
        }
    }

    Task.Yield() is another example:

    await Task.Yield(); // Returns a YieldAwaitable.

    The returned YieldAwaitable is not a Task either:

    public struct YieldAwaitable
    {
        public YieldAwaiter GetAwaiter()
        {
            return default(YieldAwaiter);
        }
    }

    Again, it just has one GetAwaiter() method. In this article, we will look at what is awaitable.

    The awaitable / awaiter pattern

    By observing the different awaitable / awaiter types, we can tell that an object is awaitable if:
    - it has a GetAwaiter() method (instance method or extension method);
    - its GetAwaiter() method returns an awaiter.

    An object is an awaiter if:
    - it implements the INotifyCompletion or ICriticalNotifyCompletion interface;
    - it has an IsCompleted property, which has a getter and returns a Boolean;
    - it has a GetResult() method, which returns void or a result.

    This awaitable / awaiter pattern is very similar to the iterable / iterator pattern. Here are the interface definitions of iterable / iterator:

    public interface IEnumerable
    {
        IEnumerator GetEnumerator();
    }

    public interface IEnumerator
    {
        object Current { get; }
        bool MoveNext();
        void Reset();
    }

    public interface IEnumerable<out T> : IEnumerable
    {
        IEnumerator<T> GetEnumerator();
    }

    public interface IEnumerator<out T> : IDisposable, IEnumerator
    {
        T Current { get; }
    }

    In case you are not familiar with the out keyword, please find the explanation in Understanding C# Covariance And Contravariance (2) Interfaces.

    The “missing” IAwaitable / IAwaiter interfaces

    Similar to the IEnumerable and IEnumerator interfaces, awaitable / awaiter can be visualized by IAwaitable / IAwaiter interfaces too. This is the non-generic version:

    public interface IAwaitable
    {
        IAwaiter GetAwaiter();
    }

    public interface IAwaiter : INotifyCompletion // or ICriticalNotifyCompletion
    {
        // INotifyCompletion has one method:
        //     void OnCompleted(Action continuation);
        // ICriticalNotifyCompletion implements INotifyCompletion,
        // and also has this method:
        //     void UnsafeOnCompleted(Action continuation);

        bool IsCompleted { get; }
        void GetResult();
    }

    Please notice GetResult() returns void here. Task.GetAwaiter() / TaskAwaiter.GetResult() is such a case. And this is the generic version:

    public interface IAwaitable<out TResult>
    {
        IAwaiter<TResult> GetAwaiter();
    }

    public interface IAwaiter<out TResult> : INotifyCompletion // or ICriticalNotifyCompletion
    {
        bool IsCompleted { get; }
        TResult GetResult();
    }

    Here the only difference is that GetResult() returns a result. Task<TResult>.GetAwaiter() / TaskAwaiter<TResult>.GetResult() is of this case.
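    Before moving on, it may help to see why exactly these members are required. Below is a rough sketch, not the author's code, of what the compiler conceptually expands "int result = await task;" into (heavily simplified; the real generated code is a resumable state machine, and the name AwaitByHand is made up for illustration):

    using System;
    using System.Runtime.CompilerServices;
    using System.Threading.Tasks;

    static class AwaitExpansionDemo
    {
        // Conceptual expansion of "int result = await task; continuation(result);".
        static void AwaitByHand(Task<int> task, Action<int> continuation)
        {
            TaskAwaiter<int> awaiter = task.GetAwaiter();
            if (awaiter.IsCompleted)
            {
                // Fast path: the task already finished, continue synchronously.
                continuation(awaiter.GetResult());
            }
            else
            {
                // Slow path: schedule the rest of the method as a callback.
                awaiter.OnCompleted(() => continuation(awaiter.GetResult()));
            }
        }

        static void Main()
        {
            AwaitByHand(Task.Run(() => 42), result => Console.WriteLine(result));
            Console.ReadLine(); // Keep the process alive for the callback.
        }
    }

    GetAwaiter(), IsCompleted, OnCompleted() and GetResult() are exactly the members the pattern above demands, which is why any type providing them can be awaited.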
    Please notice that .NET does not define these IAwaitable / IAwaiter interfaces at all. I guess the reason is that an IAwaitable interface would constrain GetAwaiter() to be an instance method, while C# actually supports both GetAwaiter() instance methods and GetAwaiter() extension methods. Here I use these interfaces only to better visualize what is awaitable / awaiter. Now, if we look at the above ConfiguredTaskAwaitable / ConfiguredTaskAwaiter, YieldAwaitable / YieldAwaiter, and Task / TaskAwaiter pairs again, they all “implicitly” implement these “missing” IAwaitable / IAwaiter interfaces. In the next part, we will see how to implement an awaitable / awaiter.

    Await any function / action

    In C#, await cannot be used with a lambda. This code:

    int result = await (() => 0);

    will cause a compiler error:

    Cannot await 'lambda expression'

    This is easy to understand, because this lambda expression (() => 0) may be a function or an expression tree. Obviously we mean a function here, and we can tell the compiler in this way:

    int result = await new Func<int>(() => 0);

    It causes a different error:

    Cannot await 'System.Func<int>'

    OK, now the compiler is complaining about the type instead of the syntax. With an understanding of the awaitable / awaiter pattern, the Func<TResult> type can easily be made awaitable.

    GetAwaiter() instance method, using IAwaitable / IAwaiter interfaces

    First, similar to the above ConfiguredTaskAwaitable<TResult>, a FuncAwaitable<TResult> can be implemented to wrap Func<TResult>:

    internal struct FuncAwaitable<TResult> : IAwaitable<TResult>
    {
        private readonly Func<TResult> function;

        public FuncAwaitable(Func<TResult> function)
        {
            this.function = function;
        }

        public IAwaiter<TResult> GetAwaiter()
        {
            return new FuncAwaiter<TResult>(this.function);
        }
    }

    The FuncAwaitable<TResult> wrapper is used to implement IAwaitable<TResult>, so it has one instance method, GetAwaiter(), which returns an IAwaiter<TResult>, which wraps that Func<TResult> too. FuncAwaiter<TResult> is used to implement IAwaiter<TResult>:

    public struct FuncAwaiter<TResult> : IAwaiter<TResult>
    {
        private readonly Task<TResult> task;

        public FuncAwaiter(Func<TResult> function)
        {
            this.task = new Task<TResult>(function);
            this.task.Start();
        }

        bool IAwaiter<TResult>.IsCompleted
        {
            get { return this.task.IsCompleted; }
        }

        TResult IAwaiter<TResult>.GetResult()
        {
            return this.task.Result;
        }

        void INotifyCompletion.OnCompleted(Action continuation)
        {
            new Task(continuation).Start();
        }
    }

    Now a function can be awaited in this way:

    int result = await new FuncAwaitable<int>(() => 0);

    GetAwaiter() extension method

    As IAwaitable shows, all that an awaitable needs is just a GetAwaiter() method. In the above code, FuncAwaitable<TResult> is created as a wrapper of Func<TResult> and implements IAwaitable<TResult>, so that there is a GetAwaiter() instance method. If a GetAwaiter() extension method can be defined for Func<TResult>, then FuncAwaitable<TResult> is no longer needed:

    public static class FuncExtensions
    {
        public static IAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function)
        {
            return new FuncAwaiter<TResult>(function);
        }
    }

    So a Func<TResult> function can be directly awaited:

    int result = await new Func<int>(() => 0);

    Using the existing awaitable / awaiter - Task / TaskAwaiter

    Remember the most frequently used awaitable / awaiter pair - Task / TaskAwaiter.
    With Task / TaskAwaiter, FuncAwaitable / FuncAwaiter are no longer needed:

    public static class FuncExtensions
    {
        public static TaskAwaiter<TResult> GetAwaiter<TResult>(this Func<TResult> function)
        {
            Task<TResult> task = new Task<TResult>(function);
            task.Start();
            return task.GetAwaiter(); // Returns a TaskAwaiter<TResult>.
        }
    }

    Similarly, with this extension method:

    public static class ActionExtensions
    {
        public static TaskAwaiter GetAwaiter(this Action action)
        {
            Task task = new Task(action);
            task.Start();
            return task.GetAwaiter(); // Returns a TaskAwaiter.
        }
    }

    an action can be awaited as well:

    await new Action(() => { });

    Now any function / action can be awaited:

    await new Action(() => HelperMethods.IO());
    // or:
    await new Action(HelperMethods.IO);

    If the function / action has parameter(s), a closure can be used:

    int arg0 = 0;
    int arg1 = 1;
    await new Action(() => HelperMethods.IO(arg0, arg1));

    Using Task.Run()

    The above code is used to demonstrate how awaitable / awaiter can be implemented. Because it is such a common scenario to await a function / action, .NET provides a built-in API for it: Task.Run():

    public class Task2
    {
        public static Task Run(Action action)
        {
            // The implementation is similar to:
            Task task = new Task(action);
            task.Start();
            return task;
        }

        public static Task<TResult> Run<TResult>(Func<TResult> function)
        {
            // The implementation is similar to:
            Task<TResult> task = new Task<TResult>(function);
            task.Start();
            return task;
        }
    }

    In reality, this is how we await a function:

    int result = await Task.Run(() => HelperMethods.IO(arg0, arg1));

    and await an action:

    await Task.Run(() => HelperMethods.IO());

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    Always use some form of version control for all aspects of software development.
    - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
    - Reverting / reapplying changes is absolutely critical for efficient development.
    - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

    Always tag a commit (changeset) with comments.
    - This is a quick way to describe to someone else (or your future self) what the changeset entails.
    - Be brief but courteous. One or two sentences about the task, not the actual changes.
    - Use precommit hooks or set up the central repository to reject changes without comments.

    Link changesets to documentation.
    - If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
    - It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget.

    Ability to work offline is required, including commits and history.
    - Yes, this requires a DVCS locally, but doesn’t require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

    Never lock resources (files) in a central repository… Rude!
    - We have merge tools for a reason. Merging sucked a long time ago; it doesn’t anymore… stop locking files!
    - This is unproductive, rude and annoying to other team members.

    Always review everything in your commit.
    - Never ever commit a set of files without reviewing the changes in each.
    - Never add a file without asking yourself, deep down inside, does this belong?
    - If you leave to make changes during a review, start the review over when you come back. Never assume you didn’t touch a file; double check.
    - This is another reason why you want to avoid large, infrequent commits.

    Requirements for tools:
    - Quickly show pending changes for the entire repository.
    - Default action for a resource with pending changes is a diff.
    - Pluggable diff & merge tool.
    - Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

    The central repository is not your own personal dump yard. Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times.
    - If you turn on Visual Studio’s "commit on closing studio" option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

    Commit (integrate) to the central repository / branch frequently.
    - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.

    Never commit commented-out code.
    - If it isn’t needed anymore, delete it! If you aren’t sure if it might be useful in the future, delete it! This is why we have history.
    - If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it.

    Don’t commit build artifacts, user preferences and temporary files.
    - Build artifacts do not belong in VCS; everything in them is present in the code. (i.e.: bin\*, obj\*, *.dll, *.exe)
    - User preferences are your settings; stop overriding my preference files! (i.e.: *.suo and *.user files)
    - Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository! (A minimal sketch of such an ignore file appears at the end of this post.)

    Be polite when merging unresolved conflicts.
    - Count to 10, cuss, grab a stress ball and realize it’s not a big deal. Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them.
    - Following the other rules, especially committing frequently, will reduce the likelihood of this.
    - Suck it up, we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It’s really not as scary as you think. I personally prefer KDiff3, as its merging capabilities rock.
    - Don’t blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

    Become intimate with your version control system and the tools you use with it.
    - Avoid trial and error as much as possible; sit down and test the tool out, read some tutorials etc. Create test repositories and walk through common scenarios.
    - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both TortoiseHg and the hg CLI to get the job done efficiently.

    Always tag releases.
    - Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
    - Create release branches to patch bugs and then merge the changes back to other development branch(es).

    If using feature branches, strive for periodic integrations.
    - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
    - Feature branches are best when they are mutually exclusive of active development in other branches.

    Use and abuse local commits, at least one per task in a story.
    - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

    Never commit a broken build or failing tests to the central repository.
    - It’s ok for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).
    - If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved.
    - Exceptions: when maintaining code bases that require shotgun surgery; in this case, it’s a design smell :)

    Don’t version sensitive information.
    - Especially usernames / passwords.

    There is one area where I haven’t found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes
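    As referenced in the tip about ignore files above, here is a minimal sketch of what an .hgignore for a typical .NET project might contain, built from the patterns this post already mentions (adjust for your own build outputs):

    syntax: glob
    # Build artifacts - everything in them is reproducible from the code
    bin/*
    obj/*
    *.dll
    *.exe
    # Per-user IDE preferences - never share these
    *.suo
    *.user

    Committing the .hgignore file itself as the first changeset of a new repository means every clone starts with the same rules.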

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of these time measurement methods, each with its potential pitfalls:

    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Software Developer's Manual, Vol. 3B, section 16.11.1 says: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: Good, but it has only a resolution of 16ms, which can be not enough if you want more accuracy.

    And one of these reporting methods, again with pitfalls:

    - Console.WriteLine: Ok if not called too often.
    - Debug.Print: Are you really measuring performance with Debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in some good output listener like a trace file. But be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so that you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure both the impulse and the location of a particle in a quantum system at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is the memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate values, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if each method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! - Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization, DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

    using ProtoBuf;
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Reflection;
    using System.Runtime.Serialization;

    [DataContract, Serializable]
    class Data
    {
        [DataMember(Order = 1)] public int IntValue { get; set; }
        [DataMember(Order = 2)] public string StringValue { get; set; }
        [DataMember(Order = 3)] public bool IsActivated { get; set; }
        [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
    }

    class Program
    {
        static MemoryStream _Stream = new MemoryStream();

        static MemoryStream Stream
        {
            get
            {
                _Stream.Position = 0;
                _Stream.SetLength(0);
                return _Stream;
            }
        }

        static void Main(string[] args)
        {
            DataContractSerializer ser = new DataContractSerializer(typeof(Data));
            Data data = new Data
            {
                IntValue = 100,
                IsActivated = true,
                StringValue = "Hi this is a small string value to check if serialization does work as expected"
            };

            var sw = Stopwatch.StartNew();
            int Runs = 1000 * 1000;
            for (int i = 0; i < Runs; i++)
            {
                //ser.WriteObject(Stream, data);
                Serializer.Serialize<Data>(Stream, data);
            }
            sw.Stop();
            Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
            Console.ReadLine();
        }
    }

    The results are indeed promising:

    Serializer      Time in ms    N objects
    protobuf-net           807    1,000,000
    DataContract         4,402    1,000,000

    Nearly a factor of 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), and the performance worsened by nearly a factor of two. How is that possible? We had measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized, it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

    Serializer      Time in ms    N objects
    protobuf-net            85            1
    DataContract            24            1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000-40,000 serialized objects. (A small sketch of keeping this one-time cost out of a measured loop follows at the end of this post.) As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking deeper at the profiling data, I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short at this discipline. A relatively unknown (totally undeservedly so) profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When you encounter, after much analysis, something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly ok.

    When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very variable. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore.

    When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and you allocate fewer objects, the GC times will shift to some other location. If you are unlucky, it will make your faster working code slower, because you now see GCs at times where none were before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock compared to optimizing for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all.

    Next time: Commercial profiler shootout.
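    As promised above, a minimal sketch of keeping protobuf-net's one-time code-gen cost out of a measured loop. It reuses the Data type, data instance, Runs count and Stream property from the sample program earlier in this post (those names belong to the sample, not to a general API), and simply performs one warm-up serialization before starting the Stopwatch:

    // Warm-up: trigger protobuf-net's one-time code-gen for Data
    // so it is not attributed to the measured loop.
    Serializer.Serialize<Data>(Stream, data);

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < Runs; i++)
    {
        Serializer.Serialize<Data>(Stream, data);
    }
    sw.Stop();
    Console.WriteLine("Did take {0:N0}ms for {1:N0} objects (code-gen excluded)",
        sw.Elapsed.TotalMilliseconds, Runs);

    Whether you want the code-gen included depends on the question you are asking: include it when you care about cold startup, exclude it when you care about steady-state throughput.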

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction

    A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don’t think either is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let’s go through some of their differences.

    Rapid Application Development

    Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code-behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages – called views in MVC terminology – the problem is that they can be built using different technologies, some of which, at the moment (MVC 4), do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist of HTML editing, and until that day comes, you have to live with source editing.

    Development Model

    Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just think of the GridView control. The focus is on the page; that’s where it all starts, and you can place everything in the same code-behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests; you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods; you just don’t think of them as event handlers. In MVC you need to think more in HTTP terms, so action methods such as POST and GET are relevant to you, and you may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications.
    You normally rely much more on JavaScript APIs – they are even included in the Visual Studio template – because much less is done for you. Reuse It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time, thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML and CSS. MVC, on the other hand, does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc., fit? The closest thing to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn’t know where it is being called. You also have partial views, which you can reuse in the same project, but there is no inheritance either. This, in my view, is a weakness of MVC. Architecture Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in either of these models, and some extensibility points apply to both because, of course, both stand upon ASP.NET. With Web Forms, if you’re like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything together. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page’s ClientScript object, as well as generating HTML and CSS code. The master page and page model with code-behind classes offers a number of “hooks” by which you can change the normal way of things; for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page’s title. Also, with Web Forms, you typically have URLs in the form “/SomePath/SomePage.aspx?SomeParameter=SomeValue”, which isn’t really SEO friendly, not to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers in separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few. 
    The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation, and the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn’t really a general-purpose development framework, but instead a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can consider either a blessing or a curse; in the latter case, you probably shouldn’t be using MVC at all. Also, MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly. Testing In a well-defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and there might even be a need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms is difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current being the first that comes to mind. MVC, on the other hand, makes testing controllers a breeze, so much so that it even includes a template option for generating boilerplate unit test classes right from the start. A well-designed – from the unit test point of view – controller will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn’t mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don’t have syntax errors beforehand. 
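    As an illustration of the testability claim, here is a minimal sketch of a controller that receives everything it needs as constructor and action parameters. The IProductService abstraction and all names are invented for this example, not taken from the article; a real project would use its own services and a proper test framework.

    using System.Web.Mvc;

    // Hypothetical abstraction over data access, so the controller
    // knows nothing about the web or the database.
    public interface IProductService
    {
        string GetProductName(int id);
    }

    public class ProductViewModel
    {
        public string Name { get; set; }
    }

    public class ProductController : Controller
    {
        private readonly IProductService _service;

        public ProductController(IProductService service)
        {
            _service = service;
        }

        // Everything the action needs arrives as a parameter.
        public ActionResult Details(int id)
        {
            return View(new ProductViewModel { Name = _service.GetProductName(id) });
        }
    }

    // In a unit test, a hand-rolled fake replaces the real service:
    public class FakeProductService : IProductService
    {
        public string GetProductName(int id) { return "Widget " + id; }
    }

    // Test body (attribute syntax depends on your test framework):
    //   var controller = new ProductController(new FakeProductService());
    //   var result = (ViewResult)controller.Details(42);
    //   var model = (ProductViewModel)result.ViewData.Model;
    //   Assert.AreEqual("Widget 42", model.Name);

    No HttpContext mocking is needed, which is exactly the point being made above.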
    Myths Some popular but unfounded myths around MVC include: You cannot use controls in MVC: not true; actually, you can, at least with the Web Forms (ASPX) view engine – the declaration and usage are exactly the same as with Web Forms; You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits attribute of the Page directive, and with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <pages> element; MVC shields you from doing “bad things” on your views: well, you can place any code in a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code; The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model; Unit tests come with no cost: unit tests generally don’t cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself; Everything is testable: views aren’t, without accessing the site; MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever – MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened at a time when Microsoft had become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example. Conclusion Well, this is how I see it. Some folks may think that I am being too harsh on MVC, probably because I don’t like it, but that’s not true: like I said, I do like MVC and I am starting my new projects with it. I just don’t want to go along with those who say that MVC is much superior to Web Forms; in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think about this!

    Read the article

  • CodePlex Daily Summary for Friday, October 04, 2013

    CodePlex Daily Summary for Friday, October 04, 2013Popular ReleasesMoreTerra (Terraria World Viewer): Version 1.11: Release Notes Release 1.11 =========== =Bug Fixes= =========== Now works with Terraria 1.2 wld files. =============== =Known Issues= =============== Not all tiles and items are accounted for. Missing tiles just show up as pink. This is actively being worked on but wanted to get a build out that works with 1.2VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Support for "ImageTeam.org linksStyleMVVM: 3.1.4: This release has virtually no code change but adds multiple new Item templates for the Windows Phone 8 platformSystem Center Orchestrator Community Project: Orchestrator Visio and Word Generator 1.5: This tool lets you export Orchestrator runbooks as a Visio diagram, and you can also generate an optional Word file as well. Components exported as of v1.5 are : - Title of the runbook - Activities and their names/thumbnails/description (description is displayed as a callout in the Visio diagram, attached to the shape of the activity) - Links and their names/colors - Looping and their interval Thumbnails and activities are grouped in the Visio diagram, for easy manipulation of the diagra...State of Decay Save Manager: Version 1.0.4: Add version at bottom of formDNN® Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06 Changes to 6.0.7•Fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext C...SimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.RDFSharp - Start playing with RDF!: RDFSharp-0.6.6: GENERAL (NEW) Introduction of INT64 hashing engine (codenamed "Greta"); QUERY (FIX) Incorrect query evaluation due to faulty detection of optional patterns (v0.6.5 regression); (FIX) Missing update of PatternGroupID information after adding patterns and filters to a pattern group; (FIX) Ensure Context information of a pattern is not null before trying to collect it as variable; (MISC) Changed semantics of Context information of a pattern: if not provided, it will be ignored; (MISC...Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.C# Intellisense for Notepad++: Release v1.0.7.2: - smart indentation - document formatting To avoid the DLLs getting locked by OS use MSI file for the installation.BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). 
In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1002September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important UpdateThis release has been updated to fix two issues: Upda...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.4 binaries: Improved performance of GroupByAcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...New ProjectsAnalysis Services Activity Viewer 2012: This project was created from a blog post I wrote back in February 2013 http://redphoenix.me/2013/08/22/upgrade-activity-viewer-2008-to-sql-server-2012/ARYSTA SYSTEM: Notebook System A/R Sales * Sales Order * Delivery * Sales Invoice * Sales Return * A/R Credit Memo * A/R Debit Memo A/P Purchasing * Purchase Order BEWELL SYSTEM: This system is design for Bewell-C Incorporated. 
Project Manager: Ben Penafiel System Engineer: Jhay Camba Report Designer: William Damasco Caching IOC: Caching IOC Container This will cache any Interface return results making it really easy to introduce caching to your solution.Grupo Anclita: Trabajo Final Laboratorio 4Halcyonic Skin by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/halcyonic/ ProPro: project about other projectsS3Unlock: Unlocks S3 agents that are stuck installing.sdfsdlfsdlkj01: dsfdffsdSerendipity - Responsive Skin for DNN: This is an HTML Template by Elemis, converted for use in DNN. Elemis URL: http://elemisfreebies.com/premium-themes/ Free for personal use and ed. purposes only.SmartSystemMenu: Smart system menu for you.SP 2013 Custom MultiTenant Adminstration: This Project creates a custom SP 2013 Tenant Admin site template covering limitations of existing tenant admin site.Telephasic Skin by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/telephasic/TelerikedIn: Social web app trying to look and feel like the famous LinkedInTerra 2: Generador de personajes para el juego de rol Terra2tsydev01: TextLineUpdateVds2465 Parser: This Project is about implementing a Parser for the Vds2465 protocol. It includes parsing and generating Vds2465 telegram bytes.Your Appliances: Empresa De ElectrodomesticosZeroFour by HTML5-UP - for DNN: This skin was converted for use in DNN by Michael Doxsey. Original HTML template designed and built by HTML5-UP: http://html5up.net/zerofour.

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stack Overflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better, in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard-coding sample data, so now I am moving on to the GUI and preparing my MVP. Section A I have seen MVP using the view as the entry point: that is, in the view's constructor it creates the presenter, which in turn creates the model, wiring up events as needed. I have also seen the presenter as the entry point, where a view, model and presenter are created; this presenter is then given a view and model object in its constructor to wire up the events. As in 2, but the model is not passed to the presenter. Instead the model is a static class where methods are called and responses returned directly. Section B In terms of keeping the view and model in sync I have seen: Whenever a value in the view is changed, e.g. the TextChanged event in .NET/C#, this fires a DataChangedEvent which is passed through into the model, to keep it in sync at all times. And where the model changes, i.e. a background event it listens to, the view is updated via the same idea of raising a DataChangedEvent. When a user wants to commit changes, a SaveEvent fires, passing through into the model to make the save. In this case the model mimics the view's data and processes actions. Similar to B1, however the view does not sync with the model all the time. Instead, when the user wants to commit changes, SaveEvent is fired and the presenter grabs the latest details and passes them into the model. In this case the model does not know about the view's data until it is required to act upon it, in which case it is passed all the needed details. Section C Displaying of business objects in the view, i.e. an object (MyClass), not primitive data (int, double): The view has property fields for all the data that it will display, typed as domain/business objects. For example, view.Animals exposes an IEnumerable<IAnimal> property, even though the view processes these into nodes in a TreeView. Then for the selected animal it would expose SelectedAnimal as an IAnimal property. The view has no knowledge of domain objects; it exposes properties for primitive/framework (.NET/Java) object types only. In this instance the presenter will pass the domain object to an adapter object; the adapter will then translate a given business object into the controls visible on the view. In this instance the adapter must have access to the actual controls on the view, not just any view, so it becomes more tightly coupled. Section D Multiple views used to create a single control, i.e. you have a complex view with a simple model like saving objects of different types. You could have a menu system at the side; with each click on an item the appropriate controls are shown. You create one huge view that contains all of the individual controls, which are exposed via the view's interface. You have several views: one view for the menu and a blank panel. This view creates the other views required but does not display them (Visible = false); this view also implements the interface for each view it contains (i.e. child views) so it can expose them to one presenter. The blank panel is filled with other views (Controls.Add(myview) and myview.Visible = true). 
    The events raised in these "child" views are handled by the parent view, which in turn passes the event to the presenter, and vice versa for supplying events back down to child elements. Each view, be it the main parent or a smaller child view, is wired into its own presenter and model. You can literally just drop a view control into an existing form and it will have the functionality ready; it just needs wiring into a presenter behind the scenes. Section E Should everything have an interface? How the MVP is done in the above examples will affect this answer, as they might not be cross-compatible. Everything has an interface: the View, Presenter and Model. Each of these then obviously has a concrete implementation, even if you only have one concrete view, model and presenter. The View and Model have an interface. This allows the views and models to differ. The presenter creates/is given view and model objects and it just serves to pass messages between them. Only the View has an interface. The Model has static methods and is not created, thus no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static, the Model has no link to the presenter. Personal thoughts Of all the different variations I have presented (most of which I have probably used in some form), I am sure there are more. I prefer A3, as it keeps business logic reusable outside of just MVP; B2 for less data duplication and fewer events being fired; C1 for not adding in another class – sure, it puts a small amount of non-unit-testable logic into a view (how a domain object is visualised), but this could be code reviewed, or simply viewed in the application. If the logic was complex I would agree to an adapter class, but not in all cases. For section D, I feel D1 creates a view that is too big, at least for a menu example. I have used D2 and D3 before. The problem with D2 is that you end up having to write lots of code to route events to and from the presenter to the correct child view, and it's not drag/drop compatible; each new control needs more wiring in to support the single presenter. D3 is my preferred choice, but it adds in yet more classes, as presenters and models, to deal with the view, even if the view happens to be very simple or has no need to be reused. I think a mixture of D2 and D3 is best, based on circumstances. As to section E, I think everything having an interface could be overkill. I already do it for domain/business objects and often see no advantage in the "design" by doing so, but it does help in mocking objects in tests. Personally I would see E2 as a classic solution, although I have seen E3 used in two projects I have worked on previously. Question Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work, which has variations, and I remember when I first started doing MVC: I understood the concept, but could not originally work out where the entry point is – everything has its own function, but what controls and creates the original set of MVC objects?
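    For readers trying to picture the variants being asked about, here is a minimal sketch of A2/B2/E2 in C# – presenter as entry point, view and model behind interfaces, a save event flowing from view to model. All names are invented for illustration, not taken from the question.

    using System;

    // E2: the view and model sit behind interfaces so they can be mocked.
    public interface ICustomerView
    {
        string CustomerName { get; }
        event EventHandler SaveRequested;   // B2: raised when the user commits changes
    }

    public interface ICustomerModel
    {
        void Save(string customerName);
    }

    // A2: the presenter is the entry point; it is given the view and
    // model in its constructor and wires the events between them.
    public class CustomerPresenter
    {
        private readonly ICustomerView _view;
        private readonly ICustomerModel _model;

        public CustomerPresenter(ICustomerView view, ICustomerModel model)
        {
            _view = view;
            _model = model;
            // B2: the presenter grabs the latest details only when saving.
            _view.SaveRequested += (s, e) => _model.Save(_view.CustomerName);
        }
    }

    A WinForms Form would implement ICustomerView (CustomerName reading a TextBox, SaveRequested raised from a button click), and the composition root would simply construct new CustomerPresenter(form, new CustomerModel()) before showing the form.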

    Read the article

  • Does HTML 5 “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for the development of line-of-business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before drilling down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to sell you on the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked below, and on that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note: although the instructions for each game tell you to press the A key to start, press the Z key instead.) Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, I can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible? The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order. Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script. As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.  
    They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternatively, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, and limiting valid input on a field to dates, email addresses or a list of values. There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly. Offline caching, local storage and a client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with JavaScript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this? From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think. Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet. YouTube, on the other hand, already has an experimental HTML 5-based version of its site. TechCrunch has reported that Netflix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices. And the New York Times’ Web site now embeds some video clips without resorting to Flash. They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this? I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design, whose primary focus has been to help marketing directors get traction online. The firm’s client roster includes the likes of Time, Inc., Scholastic and PBS. Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector. When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all! In particular, not for media and advertising customers! These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack. For whatever reason, each firm seems to see the other’s toolset as the more viable choice.  
    But both agree that the developer tool story around HTML 5 is deficient. Pinto explains: “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.” Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash. If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript. It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees. And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development. In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others. Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate. In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time. And Google made many announcements of their own, including the open-sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world. And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world. If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers. Can this development in the digital arena save the titans of the print world? I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.

    Read the article

  • CodePlex Daily Summary for Tuesday, September 18, 2012

    CodePlex Daily Summary for Tuesday, September 18, 2012Popular ReleasesfastJSON: v2.0.5: 2.0.5 - fixed number parsing for invariant format - added a test for German locale number testing (,. problems)????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60 Additional sample rates available: 8, 11.025, 12 and 16 kHz Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will keep running and continue pro...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release fixed this issues: Links not worked on FF, Chrome and Safari Modified packaging with own manifest file for install and source package. Moved the user Image on the Login to the left side. Moved h2 font-size to 24px. Note : This release Comes w/o source package about we still work an a solution. Who Needs the Visual Studio source files please go to source and download it from there. Known 16 CSS issues that related to the skin.css. All others are DNN default o...Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.1.0: This release of sheetengine introduces major drawing optimizations. A background canvas is created with the full drawn scenery onto which only the changed parts are redrawn. For example a moving object will cause only its bounding box to be redrawn instead of the full scene. This background canvas is copied to the main canvas in each iteration. For this reason the size of the bounding box of every object needs to be defined and also the width and height of the background canvas. The example...VFPX: Desktop Alerts 1.0.2: This update for the Desktop Alerts contains changes to behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit) 1. Added support for %originalfilepath% to get the source file full path. Used for custom commands only. 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata 4. 
Added support for eMail non blocking UI testCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.2: *Added html mail format which shows hierarchical exception report for better understanding.PDF Viewer Web part: PDF Viewer Web Part: PDF Viewer Web PartMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. update for Issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...MPC-BE: Media Player Classic BE 1.0.1.0 build 1122: MPC-BE is a free and open source audio and video player for Windows. MPC-BE is based on the original "Media Player Classic" project (Gabest) and "Media Player Classic Home Cinema" project (Casimir666), contains additional features and bug fixes. Supported Operating Systems: Windows XP SP2, Vista, 7 32bit/64bit System Requirements: An SSE capable CPU The latest DirectX 9.0c runtime (June 2010). Install it regardless of the operating system, they all need it. Web installer: http://www.micro...Preactor Object Model: Visual Studio Template .NET 3.5: Visual Studio Template with all the necessary files to get started with POM. You will still need to Get the Preactor.ObjectModel and Preactor.ObjectModleExtensions libraries from Nuget though. You will also need to sign with assembly with a strong name key.Lakana - WPF Framework: Lakana V2: Lakana V2 contains : - Lakana WPF Forms (with sample project) - Lakana WPF Navigation (with sample project)myCollections: Version 2.3.0.0: New in this version : Added TheGamesDB.net API for games and nds Added Fast search options Added order by Artist/Album for music Fixed several provider Performance improvement New Splash Screen BugFixingMicrosoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...F# 3.0 Sample Pack: FSharp 3.0 Sample Pack for Visual Studio 2012 RTM: F# 3.0 Sample Pack for Visual Studio 2012 RTMANPR MX: ANPR_MX Release 1: ANPR MX Release 1 Features: Correctly detects plate area for the average North American plate. (It won't work for the "European" plate size.) Provides potential values for the recognized plate. Allows images 800x600 and below (.jpg / .png). The example requires the VC 10 runtime & .NET 4 Framework to be already installed. The Source code project was made on Visual Studio 2010.Cocktail: Cocktail v1.0.1: PrerequisitesVisual Studio 2010 with SP1 (any edition but Express) Optional: Silverlight 4 or 5 Note: Install Silverlight 4 Tools and then the Silverlight 4 Toolkit. 
Likewise for Silverlight 5 Tools and the Silverlight 5 Toolkit DevForce Express 6.1.8.1 Included in the Cocktail download, DevForce Express requires registration) Important: Install DevForce after all other components. Download contentsDebug and release assemblies API documentation Source code License.txt Re...weber: weber v0.1: first release, creates a basic browser shell and allows user to navigate to web sites.New Projects.NET Code Editor & Compiler Component: .Net compiler component with integrated advanced text box, VisualStudio like highlightning, ability to intercept and show StandardOutput strings.NET Plugin Manager: Provides agnostic functionality for tiered plugin loading, unloading, and plugin collection management.Amazon Control Panel v2: Amazon Control Panel is a application that lets you control you Amazon Seller Central account using the Amazon MWS (Merchant Web Service) API.AutoSPSourceBuilder: AutoSPSourceBuilder: a utility for automatically building a SharePoint 2010 or 2013 install source including service packs, language packs & cumulative updates.CAOS: RBAC acess controllChat Forum: An Internet  forum,  or message  board,  is  an online discussion  site conversations  in  the  form  of  posted  messages.CRM 2011 - Many-To-Many Relationship Entity View: This Silverlight Web Resource for CRM 2011 will allow user to see N:N relationship entity data from single place.dardasim: dardasim gil and lior Tel Cabir DolphinsDBAManage: ???????ERP??,????!DimDate Generator: A SSIS project for generation a data dimansion table and data.DNL: eine grße .net bibliothek für entwicklerDouban FM for Metro: A music radio client for http://douban.fm running on Windows 8 / WinRTExtended WPF Control: Extended WPF Control for research and learning.FizzBuzzDaveC: Implements the classic FizzBuzz programmine exercise.HamStart: Nothing for now...Infopath XSN Modifier: A tool for editing the dataconnections of Infopath.KH Picture Resizer: Picture Resizer ermöglicht es Bilder per Drag and Dop zu verkleinern. Das Program wurde in C# geschrieben und nutzt Windows Forms.Korean String Extension for .NET: ?? ??? ??? ????? ???? string??? ??? ????? Extension library for "string" class that enhances "Hangul Jamo system" features Lucky Loot - Tattoo Shop Management Application: Lucky Loot - Tattoo Shop Management Application Por: Eric Gabriel Rodrigues Castoldi Objetivo: Trabalho de Conclusão de Curso de Sistemas de InformaçãoMagnOS - C# Cosmos Operating System: MagnOS is an Open Source operating system, made to learn how to make operating systems with Cosmos.Móa mày: Project m?iOData Samples: A collection of samples demonstrating solutions and functionality in WCF Data Services, ODataLib and EdmLib.Online Image Editor: Online Photo CanvasOptimuss Administración: La mejor aplicación de Gestión y Control EscolarOptimuss Obelix: La mejor aplicación de Gestión y Control EscolarPersonal Website: My personal websitePlanisoft: Proyecto de Planilla para clinica los fresnosPROYECTODT: ..................................................................................................................PtLibrary: PtLibrary stands for Peter Thönell's Delphi library. PtSettings and PtSettingsGUI make the management and use of settings extremely easy and powerful.RTS WebServer: A small lightweight, modern and fast webserver (template). 
with in the feature the newest technologies like SPDY and websocketsStandards: Standards is an Intranet application (using Windows authentication) designed to document and manage company standards. It is written in C#/MVC 4.Truttle OS: This is an OS I made with CosmosWfp System: zdgdsfgdsfgzpo: projekt na zaliczenie zpo???: ???

    Read the article

  • VSNewFile: A Visual Studio Addin to More Easily Add New Items to a Project

    - by InfinitiesLoop
    My first Visual Studio add-in! Creating add-ins is pretty simple, once you get used to the CommandBar model it uses, which is apparently a general Office suite extensibility mechanism. Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet in a little VS extensibility. But I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. I figured I can’t be the only one who has felt this way, so I decided to publish the add-in and host it on GitHub (VSNewFile on GitHub), hoping to spur contributions. Adding Files the Built-in Way Here’s the problem I wanted to solve. You’re working on a project, and it’s time to add a new file to the project. Whatever it is – a class, script, HTML page, ASPX page, or what-have-you – you go through a menu or keyboard shortcut to get to the “Add New Item” dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it). This brings up a dialog that contains, well, every conceivable type of item you might want to add. It’s all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008, and adds a search box. It also loads noticeably faster. To me, this dialog is just getting in my way. If I want to add a JavaScript file to my project, I don’t want to have to hunt for the script template item in this dialog. Yes, it is categorized, and yes, it now has a search box. But still, all this UI to swim through when all I need is a new file in the project. I will name it. I will provide the content. I don’t even need a ‘template’. VS kind of realizes this. In the add menu in a class library project, for example, there is an “Add Class…” choice. But all this really does is select that project item in the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different from hitting F2 on an existing item? It isn’t. Adding Files the Hack Way What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then “CTRL-A, DEL” the content. In a few short keystrokes I’ve got my new file. Even if the original file wasn’t the right type, it doesn’t matter – I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn’t have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name “Copy of xyz”, causing it to be moved into the ‘C’ section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with ‘C’ *evil grin*). Using ‘Export Template’ To be completely fair I should at least mention this feature. I’m not even sure if this is new in VS 2010 or not (I think so). But it allows you to export a project item or items, including potential project references required by them. Then it becomes a new item in the available ‘installed templates’. No doubt this is useful to help bootstrap new projects. But that still requires you to go through the ‘New Item’ dialog. 
    Adding Files with VSNewFile So hopefully I have sufficiently defined the problem and got a few of you to think, “Yeah, me too!”… What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into ‘rename’ mode immediately. The default items available are shown here. But you can customize them. You can also customize the content of each template. To do so, you create a directory in your documents folder, ‘VSNewFile Templates’. In there, you drop the templates you want to use, but you name them in a particular way. For example, here’s a template that will add a new item named “Add TITLE”. It will add a project item named “SOMEFILE.foo” (or ‘SOMEFILE1.foo’ if that exists, etc). The format of the file name is: <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION> Where: <ORDER> is a number that lets you determine the order of the items in the menu (relative to each other). <KEY> is a case-sensitive identifier different for each template item. More on that later. <BASE FILENAME> is the default name of the file, which doesn’t matter that much, since they will be renaming it anyway. <ICON ID> is a number that dictates the icon used for the menu item. There are a huge number of built-in choices. More on that later. <TITLE> is the string that will appear in the menu. And the contents of the file are the default content for the item (the ‘template’). The content of the file can contain anything you want, of course. But it also supports two tokens: %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample: testing Namespace = %NAMESPACE% Filename = %FILENAME% I kind of went back and forth on this. I could have made it so there’d be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. I like the simplicity of this better. It makes it easy to customize, since you can literally just throw these files around, copy them from someone else, etc., without worrying about merging data into a central description file, in whatever format. Here’s our new item showing up: Practical Use One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery? :) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc., and then instantly, I can add jQuery to any project I have without even thinking about “where is jQuery? Can I copy it from that other project?” Using the KEY There are two reasons for the ‘key’ portion of the item name. First, it allows you to turn off the built-in, default templates, which are: FILE = Add File (generic, empty file) VB = Add VB Class CS = Add C# Class (includes some basic usings) HTML = Add HTML page (includes basic structure, doctype, etc) JS = Add Script (includes an immediately-invoking function closure) To turn one off, just include a file with the name “_<KEY>”. For example, to turn off all the items except our custom one, you do this: The other reason for the key is that a new Visual Studio command is created for each one. This makes it possible to bind a keyboard shortcut to one of them. 
    So you could, for example, have a keyboard combination that adds a new web page to your website, or a new CS class to your class library, etc. Here is our sample item showing up in the keyboard bindings option. Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings will remain attached to any item with a particular key, thanks to the add-in taking care not to lose keyboard bindings even though the commands are completely recreated each time. The Icon Face ID Visual Studio uses a Microsoft Office-style add-in mechanism, I gather. There is a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I’m no designer. I just wanted to find appropriate-ish icons for the built-in templates, and allow you to choose from an existing built-in icon for your own. Unfortunately, there isn’t a lot out there on the interwebs that helps you figure out what the built-in icons are. There’s an MSDN article that describes at length a way to create a program that lists all the icons. But I don’t want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way, and used a novel hack to get the icons to show up in an Outlook toolbar. He then painstakingly took screenshots of them, one group at a time. It isn’t complete though – there are tens of thousands of icons. But it’s good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community, know. Icon Face ID Reference Installing the Add-in It will work with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio “.addin” file. For example, the path to mine is: C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins Conclusion So that’s it! I hope you find it as useful as I have. It’s on GitHub, so if you’re into this kind of thing, please do fork it and improve it! Reference: VSNewFile on GitHub VSNewFile release on GitHub Icon Face ID Reference
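    To make the naming convention concrete, here is a rough sketch of how such a template filename could be decomposed. This is my own illustrative parser, not the add-in's actual code, and the example filename at the bottom (including the icon ID) is invented.

    using System;
    using System.IO;

    // Decomposes <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION>
    class TemplateName
    {
        public int Order;
        public string Key;
        public string BaseFileName;
        public int IconFaceId;
        public string Title;
        public string Extension;

        public static TemplateName Parse(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            // Limit the split to 5 parts so a title containing underscores stays intact.
            string[] parts = name.Split(new[] { '_' }, 5);
            if (parts.Length != 5)
                throw new FormatException("Expected five underscore-separated fields: " + name);

            return new TemplateName
            {
                Order = int.Parse(parts[0]),
                Key = parts[1],
                BaseFileName = parts[2],
                IconFaceId = int.Parse(parts[3]),
                Title = parts[4],
                Extension = Path.GetExtension(path)
            };
        }
    }

    // Hypothetical example: "10_JQ_jQuery-1.4.2_268_Add jQuery.js" would parse to
    // Order=10, Key="JQ", BaseFileName="jQuery-1.4.2", IconFaceId=268, Title="Add jQuery".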

    Read the article
