Search Results

Search found 17192 results on 688 pages for 'geeks with blogs'.


  • Using Umbraco’s Dropdown Datatype

    - by MightyZot
    In Umbraco, you can think of document types as models and data types as the property types for those models. For example, you may create a document type called “Prices” to represent a page that displays a list of prices, and then a document type called “Price Item” to represent the price list items. A property called “Price” could then represent the price of an item. When you create the Price property, you specify the data type, which in this case might be “Number”, indicating that this particular field accepts only numerical values. Similarly, you could create a drop down list property called “Category”, allowing you to categorize the items in your price list. To add items to the drop down list, you modify the data type definition in the Developer module of the Umbraco administrative utility. Instead of modifying the drop down data type itself, you should first make a copy by right-clicking on the Data Types node and choosing Create. Give your new data type a name in the Create dialog and click the Create button. In the Edit datatype dialog, change “Render control” to “Dropdown list”. To add your list items, enter each value into the textbox next to “Add prevalue” and click Save or press the Enter key. Now that you have a new drop down list data type created, along with its assigned list items, you can use it in your document type definitions just like you would the other data types.

    Read the article

  • Windows Azure Evolution – Deploy Web Sites (WAWS Part 3)

    - by Shaun
    This is the sixth post in my Windows Azure Evolution series. After talking a bit about the new caching preview feature in the previous one, let’s get back to Windows Azure Web Sites (WAWS).

    Git and GitHub Integration – In the third post I gave an overview of WAWS and demonstrated how to create a WordPress blog through the built-in application gallery, and in the fourth post I covered how to use the TFS service preview to deploy an ASP.NET MVC application to the web site through the TFS integration. WAWS also offers Git integration. I’m not going to describe the Git and GitHub integration in much detail since there is plenty of information on the internet you can refer to. To enable Git, just go to the web site item in the developer portal and click “Set up Git publishing”. After you specify the username and password, the Windows Azure platform will establish the Git integration and provide some basic guidance. As you can see, you can download the Git binaries, commit the files and then push them to the remote repository. As for GitHub, since it’s built on top of Git it works as well; Maarten Balliauw has a wonderful post about how to integrate GitHub with a Windows Azure Web Site, which you can find here.

    WebMatrix 2 RC – WebMatrix is a lightweight web application development tool provided by Microsoft. It uses WebDeploy or FTP to deploy the web application to the server, and WebMatrix 2.0 RC adds the ability to work with Windows Azure. First of all we need to download the latest WebMatrix 2 through the Web Platform Installer 4.0: just open the WebPI and search for “WebMatrix”, or go to its home page and download its web installer. Once we have WebMatrix 2, we need to download the publish file for our WAWS. Go to the developer portal, open the web site we want to deploy, and download the publish file from the link on the right-hand side. This file contains the information needed to publish the web site through WebDeploy and FTP, and it can be used in WebMatrix, Visual Studio, etc. Once we have the publish file we can open WebMatrix and click Open Site, Remote Site. This brings up a dialog where we can enter the remote site information. Since we already have our publish file, we can click “Import publish settings” and select the publish file, and the site information will be populated automatically. Click OK and WebMatrix will connect to the remote site, the WAWS we deployed earlier, and retrieve the folder and file information. We can open files in WebMatrix and modify them, but since WebMatrix is a lightweight web application tool, we cannot update the backend C# code, so in this case we will modify the frontend home page only. After we save our modification, WebMatrix compares the local and remote files and uploads only the modified files to Windows Azure using the connection information in the publish file. Since it uploads only the files that changed, this minimizes the bandwidth and deployment duration. A few seconds later we can go back to the website and see that the modification has been applied.

    Visual Studio and WebDeploy – The publish file we downloaded can be used not only in WebMatrix but also in Visual Studio. As we know, in Visual Studio we can publish a web application by clicking the “Publish” item on the project context menu in Solution Explorer, and we can specify WebDeploy, FTP or File System as the publish target. Now we can use the WAWS publish file to let Visual Studio publish the web application to WAWS. Let’s create a new ASP.NET MVC Web Application in Visual Studio 2010 and then click “Publish” in Solution Explorer. With the Windows Azure SDK 1.7 installed, the web application publish dialog is updated so that we can import the publish information from the publish file. Select WebDeploy as the publish method; we can select FTP as well, which is also supported by Windows Azure, and the FTP information is in the same publish file. In the last step the publish wizard can list the files that will be uploaded to the remote site before actually publishing, which gives us a chance to review and amend them. As with WebMatrix, Visual Studio compares the local files with those on WAWS and determines which have changed and need to be published. Finally, Visual Studio publishes the web application to Windows Azure through the WebDeploy protocol. Once it finishes we can browse our website.

    FTP Deployment – The publish file we downloaded contains the connection information for our web site via both WebDeploy and FTP. When using WebMatrix and Visual Studio we can select either one. WebDeploy is very easy to use from WebMatrix and Visual Studio, with the file compare feature, but FTP gives more flexibility: we can use any FTP client to upload files to Windows Azure, regardless of which client and OS we are using. Open the publish file in any text editor and the connection information is easy to find: the publish file is actually an XML file with the WebDeploy and FTP information in plain-text attributes. Once we have the FTP URL, username and password, we can connect to the site and upload and download files. For example, I opened FileZilla and connected to my WAWS through FTP, downloaded the files I was interested in, modified them on my local disk, and then uploaded them back to Windows Azure through FileZilla. Then I could see the new page.

    Summary – In this simple and quick post I introduced various approaches to deploying a web application to a Windows Azure Web Site. It supports the TFS integration I mentioned previously, and it supports Git and GitHub, WebDeploy and FTP as well. Hope this helps, Shaun. All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
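    Going back to the Git publishing workflow described above, the whole cycle boils down to a handful of commands. This is a generic sketch (the remote name and repository URL below are placeholders; the real URL is shown in the portal once Git publishing is set up):

        git init
        git add .
        git commit -m "first deployment"
        git remote add azure <repository-url-from-the-portal>
        git push azure master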

    Read the article

  • Cloud-Burst 2012 – Windows Azure Developer Conference in Sweden

    - by Alan Smith
    The Sweden Windows Azure Group (SWAG) will be running “Cloud-Burst 2012”, a two-day Windows Azure conference hosted at the Microsoft offices in Akalla, near Stockholm, on the 27th and 28th September, with an Azure Hands-on Labs Day at AddSkills on the 29th September. The event is free to attend, and will feature presentations on the latest Azure technologies from Microsoft MVPs and evangelists. The following presentations will be delivered on the Thursday (27th) and Friday (28th):
    · Connecting Devices to Windows Azure - Windows Azure Technical Evangelist Brady Gaster
    · Grid Computing with 256 Windows Azure Worker Roles - Connected System Developer MVP Alan Smith
    · ‘Warts and all’. The truth about Windows Azure development - BizTalk MVP Charles Young
    · Using Azure to Integrate Applications - BizTalk MVP Charles Young
    · Riding the Windows Azure Service Bus: Cross-‘Anything’ Messaging - Windows Azure MVP & Regional Director Christian Weyer
    · Windows Azure, Identity & Access - and you - Developer Security MVP Dominick Baier
    · Brewing Beer with Windows Azure - Windows Azure MVP Maarten Balliauw
    · Architectural patterns for the cloud - Windows Azure MVP Maarten Balliauw
    · Windows Azure Web Sites and the Power of Continuous Delivery - Windows Azure MVP Magnus Mårtensson
    · Advanced SQL Azure - Analyze and Optimize Performance - Windows Azure MVP Nuno Godinho
    · Architect your SQL Azure Databases - Windows Azure MVP Nuno Godinho
    There will be a chance to get your hands on the latest Azure bits and an Azure trial account at the Hands-on Labs Day on Saturday (29th), with Brady Gaster, Magnus Mårtensson and Alan Smith there to provide guidance, and some informal and entertaining presentations. Attendance at the conference and Hands-on Labs Day is free, but please only register if you can make it (and cancel if you cannot). Cloud-Burst 2012 event details and registration are here: http://www.azureug.se/CloudBurst2012/ Registration for Sweden Windows Azure Group Stockholm is here: swagmembership.eventbrite.com The event has been made possible by kind contributions from our sponsors, Knowit, AddSkills and Microsoft Sweden.

    Read the article

  • Paper Gold Rush

    - by Chris G. Williams
    The last few days at the shop have been reminiscent of a marathon of Pawn Stars. Quite a few people have come in wanting to trade for store credit. Most of them have left disappointed. We did pick up a few things here and there (which hopefully I can sell). The problem, in a nutshell, is that people get it in their head that a (YuGiOh) card is worth X amount because they looked it up 2-3 years ago, or someone told them it was valuable... then they play it in their deck for a year without sleeves, and cram it in a binder covered in duct tape. By the time they bring the cards in to me, new sets have come out which often de-value the tournament usefulness of the card from $20 to *maybe* 50 cents, in mint condition. Which means I can offer them about 10-15 cents... only they are almost never in mint condition, which means I usually offer them nothing at all. Most of the time, you can watch their smile fade as I start going through their cards. It's kinda sad, really, since I know they think they've spent the last two years walking around with the keys to their own personal gold mine. I don't really enjoy seeing that look on a child's face. I like kids and I remember those moments when perception and reality crashed headlong into each other. It was seldom pretty. So, when I'm talking to a child, I try to take it easy on them and give them some suggestions on how to better preserve their cards. Sometimes though, it's an adult. Depending on the situation, my response to them varies pretty broadly. Most of the time though, I still feel pretty bad when it doesn't go their way.

    Read the article

  • Silverlight Cream for November 16, 2011 -- #1167

    - by Dave Campbell
    In this Issue: Michael Crump, Andrea Boschin, Michael Sync, WindowsPhoneGeek(-2-), Erno de Weerd, Jesse Liberty, Derik Whittaker, Antoni Dol, Walter Ferrari, and Jeff Blankenburg(-2-). Above the Fold: Silverlight: "10 Laps around Silverlight 5 (Part 6 of 10)" Michael Crump WP7: "31 Days of Mango | Day #2: Device Status" Jeff Blankenburg Metro/WinRT/W8: "Lighting up your C# Metro apps by being a Share Target" Derik Whittaker Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up SilverlightShow has announced a webinar you probably don't want to miss: Webinar – Introduction to XAML Development on Windows 8 Check out the top 5 from last week at SilverlightShow: SilverlightShow for November 07 - 13, 2011 From SilverlightCream.com: 10 Laps around Silverlight 5 (Part 6 of 10) Michael Crump covers a lot of territory in this Part 6 of his Silverlight 5 Beta series at SilverlightShow: P/Invoke, Multiple Windows, and Full Trust Windows Phone 7.5 - Manipulating camera stream Andrea Boschin has Part 4 of his Mango series up at SilverlightShow. He's discussing accessing the raw stream from the camera and saving it to a file. Blend 4 + VS 2011 (Preview) = Problem? Michael Sync reports a problem with Blend 4 and the VS2011 preview... followed up by a set of scripts that were posted on Connect to make the problem go away (at least for Michael) Windows Phone Toolkit MultiselectList in depth | Part1: key concepts and API WindowsPhoneGeek begins a series on the MultiselectList in the Phone Toolkit... if you've seen his tutorials, you know they're great... this one is no exception.. lots of code, info and notes getting you on-board with the features Getting Started with Windows Phone Alarms WindowsPhoneGeek next takes a sidestep from his new series and has this post on Alarms in WP7 apps .. one of the type of scheduled actions in WP7.1 ... good write-up, pictures and code Using AppHarbor, Bitbucket and Mercurial with ASP.NET and Silverlight – Part 3 Membership and Role Provider in SQL Server Erno de Weerd's part 3 of his series is up... adding Role and Membership to his application... check it out in this 17-step tutorial Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond Jesse Liberty has another of his Yet Another Podcasts up and he's talking with Jon Galloway and Shawn Wildermuth... hear what *that* trio has to say about post //BUILD, and all things XAML Lighting up your C# Metro apps by being a Share Target Derik Whittaker continues to work with Metro... evidenced by this post on wiring your app up to be a Share Target .. allowing your app to consume data from other apps Photoshop in METRO style 2: Filters Antoni Dol follows up his Photoshop in Metro post with this one on filters... he's got some great screenshots... was hoping to see a link to the code... maybe I missed it! Silverlight and Sharepoint working together: a Silverlight menu for Sharepoint - Part 1 Walter Ferrari has part 1 of a series up at SilverlightShow talking about Sharepoint and Silverlight, and using Silverlight Navigation in place of what Sharepoint offers up. 31 Days of Mango | Day #2: Device Status Jeff Blankenburg is motoring along on his 31 Days of Mango. This is his Day 2 post and all about DeviceStatus, or just about everything you would like to know about your user's phone 31 Days of Mango | Day #3: Alarms and Reminders Day 3 of Jeff Blankenburg's series is about Alarms and Reminders... 
a way to alert your user that something needs to be done... you can create, edit, and delete them as needed Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SyncToBlog #11 Stuff and more stuff

    - by Eric Nelson
    Just getting more stuff “down on paper” that grabbed my attention over the last couple of weeks. http://www.koodibook.com/ is live. This is a rich desktop application built in WPF by some ex-colleagues and current friends :-) Check it out if “photo books” are your thing or you like sweet WPF UX. Study rates Microsoft .NET Framework top, Ruby on Rails 2nd from bottom. I know a bit about both of these frameworks. Both are sweet for different reasons. .NET top. Ok – I liked that. But Ruby on Rails 2nd from bottom just blows away the credibility of the survey results for me. Stylecop is going Open Source. Sweet. ”…will be taking code submissions from the open source community” VMforce for running Java in the cloud. Hmmmmm… Windows Azure Guidance Code and Docs available on patterns and practices. Download both zip files – one is just the code and the other is 7 chapters of the guide to migration. UK Architect Insight Conference post event presentations are here, including a full day track of cloud stuff. http://uxkit.cloudapp.net/ This appears to be a well-kept secret but the Silverlight Demo Kit is on-line in Windows Azure. You already knew! Ok – just me then :-) 3 day Silverlight Masterclass training in the UK from people I trust and like :-) http://silverlightmasterclass.net/ (£995) SQL Server Driver for PHP 2.0 CTP adds PHP's PDO style data access for SQL Server/SQL Azure A Domain Oriented N-Layered .NET 4.0 App Sample from Microsoft Spain. Not looked at it yet – but it was recommended to me (tx Torkil Pedersen) You might also want to check out my delicious stream – a blur of azure, ruby and gaming right now http://delicious.com/ericnel :-)

    Read the article

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council. on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company.  Details on the events are here and here. We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product, will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed. One of the keys to this is using Excel as a data consumer rather than data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated, and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress). The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you’ve just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while, that can help: Web Queries.  The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages…for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table.  To use the feature In Excel 2007 or 2010, click the Data Tab or the ribbon and click the “From Web” button towards the left; in older versions use the corresponding option in  the menu or  toolbars.  Next, paste a URL into the resulting dialog box and tap Enter or click the Go button.  A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you’d like to import.  Here’s an example: Now just click the table, click the Import button, and the Import Data dialog appears.  You can simply click OK to bring in your data or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select.  Now your data’s in the spreadsheet and ready to worked with: Your data may be vulnerable to modification, but if you’ve set up the data refresh, any accidental or malicious changes will be corrected in time anyway. The thing about this feature is that it’s most useful not for public Web pages, but for pages behind the firewall.  
In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that’s “published” from an application.  Users just need a URL.  They don’t need to know server and database names and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security.  If that’s not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data. The only requirement is that the data must be provided in an HTML table, with the first row providing the column names.  From an ASP.NET project, it couldn’t be easier: a simple bound GridView control is totally compatible.  Use a data source control with it, and you don’t even have to write any code.  Users can link to pages that are part of an application’s UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature.  And none of this is Microsoft- or .NET-specific.  You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query.  Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards. This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an “Astoria” WCF Data Service.  And while it’s cool that PowerPivot and Excel 2010 can import such OData feeds, it’s good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that “older versions” of Excel support this feature, I’ll be more specific.  To discover what the first version of Excel was to support Web queries, go to http://bit.ly/OldSchoolXL.
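    As a rough sketch of the server side of this approach (the handler name, connection string and query below are illustrative assumptions, not anything from the article), an ASP.NET handler only has to emit a plain HTML table whose first row holds the column names:

        using System;
        using System.Data.SqlClient;
        using System.Web;

        // Hypothetical handler (e.g. registered as PriceList.ashx) that renders a result set
        // as a plain HTML table so Excel's Web Query feature can consume it.
        public class PriceListHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/html";
                context.Response.Write("<html><body><table border='1'>");
                // The first row supplies the column names, as the Web Query feature expects.
                context.Response.Write("<tr><td>Product</td><td>Price</td></tr>");

                // Connection string and query are placeholders for illustration only.
                using (var con = new SqlConnection("Data Source=.;Initial Catalog=Sales;Integrated Security=True"))
                using (var cmd = new SqlCommand("SELECT ProductName, Price FROM Products", con))
                {
                    con.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            context.Response.Write(string.Format(
                                "<tr><td>{0}</td><td>{1}</td></tr>",
                                HttpUtility.HtmlEncode(reader[0].ToString()),
                                HttpUtility.HtmlEncode(reader[1].ToString())));
                        }
                    }
                }

                context.Response.Write("</table></body></html>");
            }

            public bool IsReusable { get { return false; } }
        }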

    Read the article

  • Get Current QuarterEnd for a given FYE Date

    - by Rohit Gupta
    Here is the code to get the Current Quarter End for a given FYE date:

        public static DateTime ThisQuarterEnd(this DateTime date, DateTime fyeDate)
        {
            IEnumerable<DateTime> candidates =
                QuartersInYear(date.Year, fyeDate.Month).Union(QuartersInYear(date.Year + 1, fyeDate.Month));
            return candidates.Where(d => d.Subtract(date).Days >= 0).First();
        }

        public static IEnumerable<DateTime> QuartersInYear(int year, int q4Month)
        {
            int q1Month = 3, q2Month = 6, q3Month = 9;
            int q1year = year, q2year = year, q3year = year;
            int q1Day = 31, q2Day = 31, q3Day = 31, q4Day = 31;

            q3Month = q4Month - 3;
            if (q3Month <= 0)
            {
                q3Month = q3Month + 12;
                q3year = year - 1;
            }
            q2Month = q4Month - 6;
            if (q2Month <= 0)
            {
                q2Month = q2Month + 12;
                q2year = year - 1;
            }
            q1Month = q4Month - 9;
            if (q1Month <= 0)
            {
                q1Month = q1Month + 12;
                q1year = year - 1;
            }

            q1Day = new DateTime(q1year, q1Month, 1).AddMonths(1).AddDays(-1).Day;
            q2Day = new DateTime(q2year, q2Month, 1).AddMonths(1).AddDays(-1).Day;
            q3Day = new DateTime(q3year, q3Month, 1).AddMonths(1).AddDays(-1).Day;
            q4Day = new DateTime(year, q4Month, 1).AddMonths(1).AddDays(-1).Day;

            return new List<DateTime>() {
                new DateTime(q1year, q1Month, q1Day),
                new DateTime(q2year, q2Month, q2Day),
                new DateTime(q3year, q3Month, q3Day),
                new DateTime(year, q4Month, q4Day),
            };
        }

    The code to get the NextQuarterEnd is simple: just change the Where clause to read d.Subtract(date).Days > 0 instead of d.Subtract(date).Days >= 0.

        public static DateTime NextQuarterEnd(this DateTime date, DateTime fyeDate)
        {
            IEnumerable<DateTime> candidates =
                QuartersInYear(date.Year, fyeDate.Month).Union(QuartersInYear(date.Year + 1, fyeDate.Month));
            return candidates.Where(d => d.Subtract(date).Days > 0).First();
        }

    Also, if you need to get the quarter label for a given date and a particular FYE date, the following is the code to use:

        public static string GetQuarterLabel(this DateTime date, DateTime fyeDate)
        {
            int q1Month = fyeDate.Month - 9, q2Month = fyeDate.Month - 6, q3Month = fyeDate.Month - 3;

            int year = date.Year, q1Year = date.Year, q2Year = date.Year, q3Year = date.Year;

            if (q1Month <= 0)
            {
                q1Month += 12;
                q1Year = year + 1;
            }
            if (q2Month <= 0)
            {
                q2Month += 12;
                q2Year = year + 1;
            }
            if (q3Month <= 0)
            {
                q3Month += 12;
                q3Year = year + 1;
            }

            string qtr = "";
            if (date.Month == q1Month)
            {
                qtr = "Qtr1";
                year = q1Year;
            }
            else if (date.Month == q2Month)
            {
                qtr = "Qtr2";
                year = q2Year;
            }
            else if (date.Month == q3Month)
            {
                qtr = "Qtr3";
                year = q3Year;
            }
            else if (date.Month == fyeDate.Month)
            {
                qtr = "Qtr4";
                year = date.Year;
            }

            return string.Format("{0} - {1}", qtr, year.ToString());
        }
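    A quick usage sketch of the extension methods above (the dates are made-up examples, and the code assumes the usual using directives for System, System.Collections.Generic and System.Linq):

        // Fiscal year ending 30 June; "today" is 15 May 2010.
        DateTime fye = new DateTime(2010, 6, 30);
        DateTime today = new DateTime(2010, 5, 15);

        DateTime thisQtrEnd = today.ThisQuarterEnd(fye);   // 2010-06-30
        DateTime nextQtrEnd = today.NextQuarterEnd(fye);   // also 2010-06-30; the two differ only when the date falls exactly on a quarter end
        string label = thisQtrEnd.GetQuarterLabel(fye);    // "Qtr4 - 2010"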

    Read the article

  • xcopy file, suppress “Does xxx specify a file name…” message

    - by MarkPearl
    Today we had an interesting problem with file copying. We wanted to use xcopy to copy a file from one location to another and rename the copied file, but do this while impersonating another user. Getting the impersonation to work was fairly simple; the challenge was then getting xcopy to work. The problem was that xcopy kept prompting us with something similar to the following… Does file.xxx specify a file name or directory name on the target (F = file, D = directory)? At which point we needed to press ‘F’. This seems to be a fairly common challenge with xcopy, as illustrated by the following Stack Overflow link… One of the solutions was to do the following…

        echo f | xcopy /f /y srcfile destfile

    This is fine if you are running from the command prompt, but if you are triggering this from C#, how could we daisy-chain the commands? The solution was fairly simple; we eventually ended up with the following method:

        public void Copy(string initialFile, string targetFile)
        {
            string xcopyExe = @"C:\windows\system32\xcopy.exe";
            string cmdExe = @"C:\windows\system32\cmd.exe";

            ProcessStartInfo p = new ProcessStartInfo();
            p.FileName = cmdExe;
            p.Arguments = string.Format(@"/c echo f | {2} {0} {1} /Y", initialFile, targetFile, xcopyExe);
            Process.Start(p);
        }

    Instead of calling xcopy directly, we called cmd.exe and wrapped the commands we wanted to chain into its arguments, passing xcopy as part of the command line.
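    For illustration, calling the method above with a couple of made-up paths produces a command line equivalent to the manual workaround:

        Copy(@"C:\Temp\report.pdf", @"D:\Archive\report_2012.pdf");

        // The ProcessStartInfo effectively runs:
        //   cmd.exe /c echo f | C:\windows\system32\xcopy.exe C:\Temp\report.pdf D:\Archive\report_2012.pdf /Y
        // so the 'f' answer is piped into xcopy's file-or-directory prompt automatically.

    One thing to watch: paths containing spaces would need to be wrapped in quotes when building the argument string, which the method as written does not do.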

    Read the article

  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things: The IDataReader is not passed out of the Data Access Layer which prevents abstraction leak and resource leak potentials. You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward iterate once. If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items. This could give us an example like: 1: // a sample data access object class to do standard CRUD operations. 2: public class ProductDao 3: { 4: private DbProviderFactory _factory = SqlClientFactory.Instance 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: // must create the connection 10: using (var con = _factory.CreateConnection()) 11: { 12: con.ConnectionString = _productsConnectionString; 13: con.Open(); 14:  15: // create the command 16: using (var cmd = _factory.CreateCommand()) 17: { 18: cmd.Connection = con; 19: cmd.CommandText = _getAllProductsStoredProc; 20: cmd.CommandType = CommandType.StoredProcedure; 21:  22: // get a reader and pass back all results 23: using (var reader = cmd.ExecuteReader()) 24: { 25: while(reader.Read()) 26: { 27: yield return new Product 28: { 29: Name = reader["product_name"].ToString(), 30: ... 31: }; 32: } 33: } 34: } 35: } 36: } 37: } The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider specific counterparts directly (SqlCommand, OracleCommand, etc). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library that simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL due to the fact I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to allow them to mock their database for unit testing by providing a MockCommand, MockConnection, etc that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested. Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force them to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e. before the iterator reports there are no more items. 
So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived: 1: // A disposable sequence of int -- yes this is completely contrived... 2: internal class Sequencer : IDisposable 3: { 4: private int _i = 0; 5: private readonly object _mutex = new object(); 6:  7: // Constructs an int sequence. 8: public Sequencer(int start) 9: { 10: _i = start; 11: } 12:  13: // Gets the next integer 14: public int GetNext() 15: { 16: lock (_mutex) 17: { 18: return _i++; 19: } 20: } 21:  22: // Dispose the sequence of integers. 23: public void Dispose() 24: { 25: // force output immediately (flush the buffer) 26: Console.WriteLine("Disposed with last sequence number of {0}!", _i); 27: Console.Out.Flush(); 28: } 29: } And then I created a generator (infinite-loop iterator) that did the using block for auto-Disposal: 1: // simply defines an extension method off of an int to start a sequence 2: public static class SequencerExtensions 3: { 4: // generates an infinite sequence starting at the specified number 5: public static IEnumerable<int> GetSequence(this int starter) 6: { 7: // note the using here, will call Dispose() when block terminated. 8: using (var seq = new Sequencer(starter)) 9: { 10: // infinite loop on this generator, means must be bounded by caller! 11: while(true) 12: { 13: yield return seq.GetNext(); 14: } 15: } 16: } 17: } This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and hence, Dispose be called? Well, let's see: 1: // The test program class 2: public class IteratorTest 3: { 4: // The main test method. 5: public static void Main() 6: { 7: Console.WriteLine("Going to consume 10 of infinite items"); 8: Console.Out.Flush(); 9:  10: foreach(var i in 0.GetSequence()) 11: { 12: // could use TakeWhile, but wanted to output right at break... 13: if(i >= 10) 14: { 15: Console.WriteLine("Breaking now!"); 16: Console.Out.Flush(); 17: break; 18: } 19:  20: Console.WriteLine(i); 21: Console.Out.Flush(); 22: } 23:  24: Console.WriteLine("Done with loop."); 25: Console.Out.Flush(); 26: } 27: } So, what do we see? Do we see the "Disposed" message from our dispose, or did the Dispose get skipped because from an "eyeball" perspective we should be locked in that infinite generator loop? Here's the results: 1: Going to consume 10 of infinite items 2: 0 3: 1 4: 2 5: 3 6: 4 7: 5 8: 6 9: 7 10: 8 11: 9 12: Breaking now! 13: Disposed with last sequence number of 11! 14: Done with loop. Yes indeed, when we break the loop, the state machine that C# generates for yield iterate exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder and that led to this test. If you've never seen iterators before (I wrote a previous entry here) the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code, that every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But, I've been bitten by assumptions before, so it's good to test and see. 
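    The mechanism behind this is the way the C# compiler expands foreach: when the enumerator implements IDisposable, the loop is wrapped in a try/finally that disposes the enumerator, and disposing a yield-based enumerator runs any pending finally (and therefore using) blocks inside the iterator. Roughly sketched (simplified; the real compiler-generated code differs in detail):

        // Approximately what the compiler generates for: foreach (var i in 0.GetSequence()) { ... }
        IEnumerator<int> e = 0.GetSequence().GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                int i = e.Current;
                // ...loop body; 'break' jumps out of the while...
            }
        }
        finally
        {
            // Runs even on break, and triggers the using block inside the iterator.
            if (e != null) e.Dispose();
        }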
Yes, maybe you knew it would or figured it would, but isn't it nice to know? And as those campy 80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle...". Technorati Tags: C#,.NET

    Read the article

  • TechEd 2012: Day 3 – Morning TFS

    - by Tim Murphy
    My morning sessions for day three were dominated by Team Foundation Server.  This has been a hot topic for our clients lately, so it really struck a chord. The speaker for the first session was from Boeing.  It was nice to hear how a company mixes both agile and waterfall project management.   The approaches that he presented were very pragmatic.  For their needs, reporting is the crucial part of their decision to use TFS.  This was interesting since this is probably the last aspect that most shops would think about. The challenge of getting users to adopt TFS was brought up by the audience.  As with the other discussion points he took a very level-headed stance.  The approach he was prescribing was to eat the elephant a bite at a time instead of all at once.  If you try to convert your entire shop at once the culture shock will most likely kill the effort. Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS.  If you don’t implement a tool and the processes around it that comply with the standards bodies that govern your business you are in for a world of hurt. Ultimately the reason they chose TFS was because it was the first tool that incorporated all the ALM features that they needed, with reduced licensing cost compared to all the different tools they would need to buy to complete the same tasks.  They got to this point by doing an industry evaluation.  Although TFS came out on top, he said that it still has a big gap in the Java area.  Of course in this market there are vendors helping to close that gap. The second session was on how continuous feedback in agile is a new focus in VS2012.  The problems they intended to address included cycle time, average time to repair, and root cause analysis. The speakers fired features at us as if they were firing a machine gun.  I will just say that I am looking forward to digging into the product after seeing this presentation.  Beyond that I will simply list some of the key features that caught my attention:
    - Ability to link documents into tasks as artifacts
    - Web access portal
    - PowerPoint storyboards
    - Exploratory testing
    - Request feedback (allows users to record notes, screen shots and video/audio)
    See you after the second half. del.icio.us Tags: TechEd,TechEd 2012,TFS,Team Foundation Server

    Read the article

  • LINQ to Twitter v2.0.8 Released

    - by Joe Mayo
    Today, I released LINQ to Twitter v2.0.8. Besides normal maintenance, this release includes the Twitter Geo API and the Suggested Users API. LINQ to Twitter is hosted on CodePlex.com: http://linqtotwitter.codeplex.com/ In addition to new functionality, I've made much progress toward LINQ to Twitter documentation; primarily in the Making API Calls area: http://linqtotwitter.codeplex.com/wikipage?title=Making%20API%20Calls&referringTitle=Documentation There's also a discussion forum where you can ask and view questions: http://linqtotwitter.codeplex.com/Thread/List.aspx As always, constructive feedback is welcome. Joe
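    For anyone who hasn’t seen the library, queries are plain LINQ over a TwitterContext. The sketch below is only a from-memory illustration of the general shape; entity and enum names may differ slightly in v2.0.8, so treat it as an assumption rather than documentation:

        // Assumes an initialized and authorized TwitterContext named twitterCtx.
        var tweets =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.Home
             select tweet)
            .ToList();

        foreach (var tweet in tweets)
        {
            Console.WriteLine("{0}: {1}", tweet.User.Name, tweet.Text);
        }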

    Read the article

  • ASP.NET developers turning to Visual WebGui for rich management system

    - by Webgui
    When The Center for Organ Recovery & Education (CORE) decided they needed a web application to allow easy access to their expenses management system, they initially went with ASP.NET Web Forms combined with CSS. The outcome, however, was not satisfying, as it appeared bland and lacked richness. So, in order to enrich the UI and give the web application some glitz, Visual WebGui was selected. Visual WebGui provided the needed richness, and the familiar Windows look and feel also made the transition very easy for the desktop users. The richer GUI of Visual WebGui compared to ASP.NET raised some initial concerns about performance, but performance turned out to be a surprising advantage, as the website maintained good response times. Working with Visual WebGui required a paradigm shift in the development process, as some of the usual methods of coding with ASP.NET did not apply. However, the transition was fairly easy due to the simplicity and intuitiveness of Visual WebGui as well as the good support and documentation. “The shift into a different development paradigm was eased by the Visual WebGui web forums, which are very active thanks to a large, involved community. There are also several videos and web pages dedicated to answering the most commonly asked questions and pitfalls,” said Dave Bhatia, Systems Engineer, who added, “A couple of issues such as deploying on IIS7 seemed to be show stoppers at first; however, the solution was readily available in a white paper on the Gizmox website.” The full story is found on the Visual WebGui website: http://www.visualwebgui.com/Gizmox/Resources/CaseStudies/tabid/358/articleType/ArticleView/articleId/964/The-Center-for-Organ-Recovery-Education-gets-a-web-based-expenses-management-system.aspx

    Read the article

  • Going Paperless

    - by Jesse
    One year ago I came to work for a company where the entire development team is 100% “remote”; we’re spread over 3 time zones and each of us works from home. This seems to be an increasingly popular way for people to work and there are many articles and blog posts out there enumerating the advantages and disadvantages of working this way. I had read a lot about telecommuting before accepting this job and felt as if I had a pretty decent idea of what I was getting into, but I’ve encountered a few things over the past year that I did not expect. Among the most surprising by-products of working from home for me has been a dramatic reduction in the amount of paper that I use on a weekly basis. Hoarding In The Workplace Prior to my current telecommute job I worked in what most would consider pretty traditional office environments. I sat in cubicles furnished with an enormous plastic(ish) modular desks, had a mediocre (at best) PC workstation, and had ready access to a seemingly endless supply of legal pads, pens, staplers and paper clips. The ready access to paper, countless conference room meetings, and abundance of available surface area on my desk and in drawers created a perfect storm for wasting paper. I brought a pad of paper with me to every meeting I ever attended, scrawled some brief notes, and then tore that sheet off to keep next to my keyboard to follow up on any needed action items. Once my immediate need for the notes was fulfilled, that sheet would get shuffled off into a corner of my desk or filed away in a drawer “just in case”. I would guess that for all of the notes that I ever filed away, I might have actually had to dig up and refer to 2% of them (and that’s probably being very generous). That said, on those rare occasions that I did have to dig something up from old notes, it was usually pretty important and I ended up being very glad that I saved them. It was only when I would leave a job or move desks that I would finally gather all those notes together and take them to shredding bin to be disposed of. When I left my last job the amount of paper I had accumulated over my three years there was absurd, and I knew coworkers who had substance-abuse caliber paper wasting addictions that made my bad habit look like nail-biting in comparison. A Product Of My Environment I always hated using all of this paper, but simply couldn’t bring myself to stop. It would look bad if I showed up to an important conference room meeting without a pad of paper. What if someone said something profound! Plus, everyone else always brought paper with them. If you saw someone walking down the hallway with a pad of paper in hand you knew they must be on their way to a conference room meeting. Some people even had fancy looking portfolio notebook sheaths that gave their legal pads all the prestige of a briefcase. No one ever worried about running out of fresh paper because there was an endless supply, and there certainly was no shortage of places to store and file used paper. In short, the traditional office was setup for using tons and tons of paper; it’s baked into the culture there. For that reason, it didn’t take long for me to kick the paper habit once I started working from home. In my home office, desk and drawer space are at a premium. I don’t have the budget (or the tolerance) for huge modular office furniture in my spare bedroom. I also no longer have access to a bottomless pit of office supplies stock piled in cabinets and closets. 
If I want to use some paper, I have to go out and buy it. Finally (and most importantly), all of the meetings that I have to attend these days are “virtual”. We use instant messaging, VOIP, video conferencing, and e-mail to communicate with each other. All I need to take notes during a meeting is my computer, which I happen to be sitting right in front of all day. I don’t have any hard numbers for this, but my gut feeling is that I actually take a lot more notes now than I ever did when I worked in an office. The big difference is I don’t have to use any paper to do so. This makes it far easier to keep important information safe and organized. The Right Tool For The Job When I first started working from home I tried to find a single application that would fill the gap left by the pen and paper that I always had at my desk when I worked in an office. Well, there are no silver bullets and I’ve evolved my approach over time to try and find the best tool for the job at hand. Here’s a quick summary of how I take notes and keep everything organized. Notepad++ – This is the first application I turn to when I feel like there’s some bit of information that I need to write down and save. I use Launchy, so opening Notepad++ and creating a new file only takes a few keystrokes. If I find that the information I’m trying to get down requires a more sophisticated application I escalate as needed. The Desktop – By default, I save every file or other bit of information to the desktop. Anyone who has ever had to fix their parents computer before knows that this is a dangerous game (any file my mother has ever worked on is saved directly to the desktop and rarely moves anywhere else). I agree that storing things on the desktop isn’t a great long term approach to keeping organized, which is why I treat my desktop a bit like my e-mail inbox. I strive to keep both empty (or as close to empty as I possibly can). If something is on my desktop, it means that it’s something relevant to a task or project that I’m currently working on. About once a week I take things that I’m not longer working on and put them into my ‘Notes’ folder. The ‘Notes’ Folder – As I work on a task, I tend to accumulate multiple files associated with that task. For example, I might have a bit of SQL that I’m working on to gather data for a new report, a quick C# method that I came up with but am not yet ready to commit to source control, a bulleted list of to-do items in a .txt file, etc. If the desktop starts to get too cluttered, I create a new sub-folder in my ‘Notes’ folder. Each sub-folder’s name is the current date followed by a brief description of the task or project. Then all files related to that task or project go into that sub folder. By using the date as the first part of the folder name, these folders are automatically sorted in reverse chronological order. This means that things I worked on recently will generally be near the top of the list. Using the built-in Windows search functionality I now have a pretty quick and easy way to try and find something that I worked on a week ago or six months ago. Dropbox – Dropbox is a free service that lets you store up to 2GB of files “in the cloud” and have those files synced to all of the different computers that you use. My ‘Notes’ folder lives in Dropbox, meaning that it’s contents are constantly backed up and are always available to me regardless of which computer I’m using. 
    They also have a pretty decent iPhone application that lets you browse and view all of the files that you have stored there. The free 2GB edition is probably enough for just storing notes, but I also pay $99/year for the 50GB storage upgrade and keep all of my music, e-books, pictures, and documents in Dropbox. It’s a fantastic service and I highly recommend it. Evernote – I use Evernote mostly to organize information that I access on a fairly regular basis. For example, my Evernote account has a running grocery shopping list, recipes that my wife and I use a lot, and contact information for people I contact infrequently enough that I don’t want to keep them in my phone. I know some people that keep nearly everything in Evernote, but there’s something about it that I find a bit clunky, so I tend to use it sparingly. Google Tasks – One of my biggest paper wasting habits was keeping a running task-list next to my computer at work. Every morning I would sit down, look at my task list, cross off what was done and add new tasks that I thought of during my morning commute. This usually resulted in having to re-copy the task list onto a fresh sheet of paper when I was done. I still keep a running task list at my desk, but I’ve started using Google Tasks instead. This is a dead-simple web-based application for quickly adding, deleting, and organizing tasks in a simple checklist style. You can quickly move tasks up and down on the list (which I use for prioritizing), and even create sub-tasks for breaking down larger tasks into smaller pieces. Balsamiq Mockups – This is a simple and lightweight tool for creating drawings of user interfaces. It’s great for sketching out a new feature, brainstorming the layout of an interface, or even drawing up a quick sequence diagram. I’m terrible at drawing, so Balsamiq Mockups not only lets me create sketches that other people can actually understand, but it’s also handy because you can upload a sketch to a common location for other team members to access. I can honestly say that using these tools (and having limited resources at home) has led me to cut my paper usage down to virtually none. If I ever were to return to a traditional office workplace (hopefully never!) I’d try to employ as many of these applications and techniques as I could to keep paper usage low. I feel far less cluttered and far better organized now.
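    The date-prefixed ‘Notes’ sub-folder convention described above is easy to automate. As a tiny sketch (the Dropbox path and the yyyy-MM-dd format are my assumptions, not from the post; assumes using System and System.IO):

        // Creates a "yyyy-MM-dd description" folder under the Dropbox Notes folder,
        // so the folders sort by date automatically in Explorer.
        static string CreateNotesFolder(string description)
        {
            string notesRoot = @"C:\Users\jesse\Dropbox\Notes";   // hypothetical path
            string folderName = string.Format("{0:yyyy-MM-dd} {1}", DateTime.Today, description);
            string fullPath = Path.Combine(notesRoot, folderName);
            Directory.CreateDirectory(fullPath);                  // no-op if it already exists
            return fullPath;
        }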

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full kernel source, it is good enough for tracing and even tweaking the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So, first things first, where is the kernel source? It's in %_WINCEROOT%\private\winceos\COREOS\nk\. The next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither applies to our case. So what should I do? Let's dig deeper into the coreos\nk folder; there are a couple of subfolders, CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:
    1. Back up the source code; I suggest backing up the whole %_WINCEROOT%\private\winceos\COREOS\nk\
    2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modification you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\
    4. build -c in %_WINCEROOT%\private\winceos\COREOS\nk\kernel
    If everything went well so far, you should get new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb files in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\. Basically, you have just rebuilt your kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image, or just "set WINCEREL=1" then "sysgen -p common nk nkprof" and "makeimg" if you can't wait the extra minutes for "blddemo clean -q". That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates SOURCES files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, including, of course, the kernel. So I am going to introduce a second way to build your own kernel, by using the SYSGEN_CAPTURE tool. Again, the steps:
    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Run "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also run "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c.
    Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths to your SOURCES file. Then "set WINCEREL=1", "build -c" in your kernel directory and "makeimg", voila! Here is an example of the macros you need to add for x86:

        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE

        # Machine independent defines
        CDEFINES=$(CDEFINES) -DDBGSUPPORT

        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc

        !IFDEF DP_SETTINGS
        CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
        !ENDIF

        ASM_SAFESEH=1
        CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE
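    To make the second approach a little more concrete, the platform kernel directory ends up holding a SOURCES file roughly along the lines of the sketch below. This is only an illustrative guess; the actual macro names and values come from the SYSGEN_CAPTURE output and sources.inc, and entries like TARGETNAME=nkmain are placeholders, not prescribed values:

        TARGETNAME=nkmain
        TARGETTYPE=LIBRARY
        RELEASETYPE=PLATFORM

        # Only the files you copied locally; everything else still comes from the shipped libraries.
        SOURCES=vm.c

        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE -DDBGSUPPORT
        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc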

    Read the article

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall mounted monitor whose whole purpose in life is to display perfmon stats from our various servers. And on a fairly regular basis, I have folks walk by asking what the lines mean. After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?' This of course would not do. So I talked to my friends and our network admin about an option to show something more eye-catching and visual, with which we could catch at a glance a feel for what was up with our site. He initially pointed me to a video showing GLTail and Chipmunk done in Ruby. Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks. Here's a link to a video of the current prototype: http://www.youtube.com/watch?v=jM_PWZbtH2I
    Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text. (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx). As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right. I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated. For example, if I get a 500, a huge red bubble pops out. Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients. Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages. The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving. The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).
    The net result is that anyone will be able to look up at the wall monitor and instantly get a quick feel for how things are going on the floor. Website slow? You can get a feel for both volume and utilized modules with one glance. Website crashing? Look for a wall of giant red bubbles. No activity at all? Maybe the site is down. Now couple this with utilization within a farm, cross-referenced with a second app showing the same kind of data from your SQL database...
    As for the app itself, it's a Windows XNA project with the code in C#. The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics) which is just pure goodness. The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap). My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module. For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.
    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob
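    The post doesn't include source, but the tail-reading and classification approach it describes is easy to sketch. Below is a minimal, hedged C# outline; the log path, subsystem paths and regex patterns are made-up placeholders for illustration, not taken from the original app:

    // Sketch only: poll an active IIS log from the last read position and
    // classify each new line, roughly the way the bubbles above are classified.
    using System;
    using System.IO;
    using System.Text.RegularExpressions;
    using System.Threading;

    class LogTailSketch
    {
        static void Main()
        {
            string path = @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex100101.log"; // hypothetical path
            long position = 0;

            while (true)
            {
                // FileShare.ReadWrite lets IIS keep writing while we read.
                using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(stream))
                {
                    stream.Seek(position, SeekOrigin.Begin);
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        Classify(line);
                    }
                    position = stream.Position; // remember how far we got
                }
                Thread.Sleep(1000); // poll once a second
            }
        }

        static void Classify(string line)
        {
            if (Regex.IsMatch(line, @"\s500\s"))
                Emit("huge red bubble");                 // server error
            else if (Regex.IsMatch(line, @"\.(js|css|gif|png|jpg)\s"))
                Emit("small grey bubble");               // static content
            else if (line.Contains("/claims/"))
                Emit("green bubble");                    // hypothetical subsystem path
            else if (line.Contains("/scheduling/"))
                Emit("blue bubble");                     // hypothetical subsystem path
            else
                Emit("yellow bubble");                   // everything else
        }

        static void Emit(string bubble)
        {
            Console.WriteLine(bubble); // the real app would spawn a Farseer body here
        }
    }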

    Read the article

  • Single Instance of Child Forms in MDI Applications

    - by Akshay Deep Lamba
    In an MDI application we can have multiple forms and work with multiple MDI children at a time, but while developing applications we often don't pay attention to the finer details of memory management. Take this as an example: when we develop an MDI application, we have multiple child forms inside one parent form. On the MDI parent form we would like to have a menu strip and tab strip which in turn call other forms that build the other parts of the application. This also makes our application look pretty and eye-catching (not much, actually). Now, on the first go, when a user clicks a menu item or a button on a tab strip the application initializes a new instance of a form and shows it to the user inside the MDI parent; if the user clicks the same button again, the application creates another new instance of the form and presents it to the user, which results in unnecessary memory usage. Therefore, if you want your application to avoid creating duplicate instances of a form, use the method below: it walks the list of the parent's child forms and compares their types; if a child of the same type as the form we are trying to initialize is found, that form is activated (brought to the front); otherwise a new instance is initialized and shown to the user inside the MDI parent window.
    The method we are using:
    private bool CheckForDuplicateForm(Form newForm)
    {
        bool bValue = false;
        foreach (Form frm in this.MdiChildren)
        {
            if (frm.GetType() == newForm.GetType())
            {
                frm.Activate();
                bValue = true;
            }
        }
        return bValue;
    }
    Usage: First we initialize the form using the new keyword:
    ReportForm Reportfrm = new ReportForm();
    We can now check whether another instance is already present in the MDI parent. Here we use the method above to check for the form and store the result in a bool variable, since the method returns a bool:
    bool frmPresent = CheckForDuplicateForm(Reportfrm);
    Once the check is done, we show the form or not depending on the value returned:
    if (!frmPresent)
    {
        Reportfrm.MdiParent = this;
        Reportfrm.Show();
    }
    In the end, this is the code you will have in your menu item or tab strip click handler:
    ReportForm Reportfrm = new ReportForm();
    bool frmPresent = CheckForDuplicateForm(Reportfrm);
    if (!frmPresent)
    {
        Reportfrm.MdiParent = this;
        Reportfrm.Show();
    }
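    If you have many different child form types, the same check can be wrapped up once as a generic helper on the MDI parent. This is a hedged sketch; the helper name is illustrative and not from the original post:

    // Sketch: show at most one instance of any child form type T.
    private void ShowSingleInstance<T>() where T : Form, new()
    {
        foreach (Form child in this.MdiChildren)
        {
            if (child is T)
            {
                child.Activate(); // already open, just bring it to the front
                return;
            }
        }

        T form = new T();
        form.MdiParent = this;
        form.Show();
    }

    // Usage from a menu item or tab strip click:
    // ShowSingleInstance<ReportForm>();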

    Read the article

  • How Microsoft Lost the API War - by Joel Spolsky

    - by TechTwaddle
    Came across another gem of an article by Joel Spolsky. It's a pretty old article, written in June of 2004, but it has a lot of tidbits and I really enjoyed reading it, so much so, in fact, that I read it twice! So hit the link below and give it a read if you haven't already: How Microsoft Lost the API War - Joel Spolsky excerpt, "I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it."

    Read the article

  • Entity Framework 4, WCF & Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can't seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. With Entity Framework 1 you didn't have lazy loading, so this problem didn't surface. Assume I have a Person entity and an Address entity where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand by using either the Include or Load method:
    var people = context.People.Include("Addresses");
    or
    people.Addresses.Load();
    Lazy loading kicks in the first time the Person.Addresses collection is accessed:
    var people = context.People.ToList();
    // only person data is currently in memory
    foreach (var person in people)
    {
        // EF determines that no Address data has been loaded and lazy loads
        int count = person.Addresses.Count();
    }
    Lazy loading has the useful (and sometimes not so useful) feature of fetching data when requested. It can make your life easier or it can make it a big pain. So what does this have to do with WCF? One word: serialization. When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using. Well, if I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then the Address is serialized, its Person property is accessed and also serialized, and then the Addresses collection is accessed again. The second time through lazy loading doesn't kick in, but you can see the infinite loop caused by this process. This is a problem with any serialization, but I personally ran into it using WCF. The fix is to simply turn off lazy loading. This can be done on each call by using the context options:
    context.ContextOptions.LazyLoadingEnabled = false;
    Turning lazy loading off will now allow your classes to be serialized properly. Note, this is if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework will create proxy classes by default that allow things like lazy loading to work with POCO. This basically creates a proxy object, a full Entity Framework object, that sits between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn't cut it. You have to turn off proxy creation to ensure that your classes will serialize properly:
    context.ContextOptions.ProxyCreationEnabled = false;
    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should) then you can turn either lazy loading or proxy creation on and off as needed.
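    Putting the two settings together, a WCF service operation that returns entities might look like the hedged sketch below. PeopleService, PeopleEntities and the eager Include are illustrative names, not from the original post:

    using System.Collections.Generic;
    using System.Linq;

    public class PeopleService // hypothetical WCF service implementation
    {
        public IList<Person> GetPeopleWithAddresses()
        {
            using (var context = new PeopleEntities()) // hypothetical ObjectContext
            {
                context.ContextOptions.LazyLoadingEnabled = false;   // no serialization-triggered loads
                context.ContextOptions.ProxyCreationEnabled = false; // required when using POCO entities

                // Lazy loading is off, so eagerly load what the client needs.
                return context.People.Include("Addresses").ToList();
            }
        }
    }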

    Read the article

  • 10 Best Programming Podcast 2010 Edition

    - by mbcrump
    This list is in no particular order. Just the 10 best programming podcasts that I have found so far.
    Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood]
    Hanselminutes - Hanselminutes is a weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET or Windows issues and workarounds. [This podcast has recently started covering random topics like diabetes, plane travel and geek relationship tips. I am not sure if Scott is trying to move to a more mainstream audience or not]
    Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all]
    Deep Fried Bytes - Deep Fried Bytes is an audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better]
    Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high quality content]
    Connected Show - Connected Show Podcast! A podcast covering new Microsoft technology for the developer community. The show is hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are two of my favorite podcasts – Dmitry is a great host and I would recommend this to all]
    Polymorphic Podcast - Object oriented development, architecture and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great, and it could only be better if he recorded it more often]
    ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET of course, as well as iPhone dev]
    Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails website framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting]
    Software Engineering Radio - Software Engineering Radio is a podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days, a new episode is published that covers all topics in software engineering. Episodes are either tutorials on a specific topic, or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content; we do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast – I would recommend any programmer add this to his/her drive home]
    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • What is "Open" anyway?

    - by EmbeddedInsider
    This term is often used with many meanings. For example, some people consider Flash 'open' and 'multi-platform'. But Flash is a product of Adobe Systems: locked down, copy protected and distribution restricted. And versions for anything other than standard PC home use may carry licence fees. Check it out: 3.1 Adobe Runtime Restrictions. You will not use any Adobe Runtime on any non-PC device or with any embedded or device version of any operating system. For the avoidance of doubt, and by example only, you may not use an Adobe Runtime on any (a) mobile device, set top box (STB), handheld, phone, web pad, tablet and Tablet PC (other than with Windows XP Tablet PC Edition and its successors), game console, TV, DVD player, media center (other than with Windows XP Media Center Edition and its successors), electronic billboard or other digital signage, Internet appliance or other Internet-connected device, PDA, medical device, ATM, telematic device, gaming machine, home automation system, kiosk, remote control device, or any other consumer electronics device, (b) operator-based mobile, cable, satellite, or television system or (c) other closed system device. For information on licensing Adobe Runtimes for use on such systems please visit http://www.adobe.com/go/licensing. You will notice that, for its embedded operating systems, Microsoft buys and includes a fully paid license for Adobe. Do you get this with Linux? Unix? QNX? So, what is 'open'? Lawrence Ricci www.EmbeddedInsider.com

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig. So… let's get down and dirty and start setting up the Test Rig.
    What components of Azure will I be using for building the Test Rig in the cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.
    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to the overview of Windows Azure Connect, and to a very useful video explaining everything you wanted to know about Windows Azure Connect.
    Azure Compute: Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles'. Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. There is a very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:
    Virtual Machine Size | CPU Cores | Memory | Cost Per Hour
    Extra Small | Shared | 768 MB | $0.04
    Small | 1 | 1.75 GB | $0.12
    Medium | 2 | 3.50 GB | $0.24
    Large | 4 | 7.00 GB | $0.48
    Extra Large | 8 | 14.00 GB | $0.96
    You might want to review the Windows Azure Pricing FAQ.
    Let's Get Started building the Test Rig… Configuration:
    Machine | Role | Comments
    VM – 1 | Domain Controller for Playpit.com | On Premise
    VM – 2 | TFS, Test Controller | On Premise
    VM – 3 | Test Agent | Cloud
    In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.
    I. Let's start building VM – 3: The Test Agent
    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required (a rough sketch of the generated class follows below). You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
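    For reference, the worker role class that the cloud template generates looks roughly like the sketch below. This is reconstructed from the SDK template rather than copied from the post, so treat the details as approximate; the point is simply that nothing needs to change here for the Test Agent scenario:

    // Approximate shape of the template-generated WorkerRole.cs; no edits needed.
    using System.Net;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // The role just needs to stay alive; the Test Agent service does
            // the real work once it has been installed and configured.
            while (true)
            {
                Thread.Sleep(10000);
            }
        }

        public override bool OnStart()
        {
            // Default connection limit for outbound HTTP calls.
            ServicePointManager.DefaultConnectionLimit = 12;
            return base.OnStart();
        }
    }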
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
    We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on premise machines. Let's have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller, then log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the Virtual Network overview; if not, you will be asked to complete the subscription first. Click on Install Local Endpoints at the top left of the panel and you get a URL with a token id appended to it; remember the token I showed you earlier - in theory the token you get here should match the token you added to the Test Agent config file. Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray. Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology.
    NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)
    Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection.
    III. Importing the Certificate in to the Windows Azure Management Portal
    Before publishing, you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate; make sure the thumbprint of the certificate matches the one you inserted in the config files.
    IV. Publish the Windows Azure Worker Role aka Test Agent
    Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
    From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role. Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc and enter the machine name in the pop up) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just enter the password unencrypted)', fix it, or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.
    So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is successfully running. If yes, give yourself a pat on the back, you are 80% mission accomplished!
    Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles, click on Test Rig and click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the test controller and the test agents, and between the test agents themselves. Click Save.
    Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.
    V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent to the Controller
    Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the iso and navigate to where you have extracted it. In my case, I extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.
    Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block some things otherwise). In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
    But for the blog post, we are using the most powerful user so that no policies or restrictions block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permission issue or a firewall blocking communication.
    And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured.
    VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
    I have various blog posts on Performance Testing with Visual Studio Ultimate; you can follow the links and videos below.
    Blog Posts:
    - Part 1 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 2 – Performance Testing using Visual Studio 2010 Ultimate
    - Part 3 – Performance Testing using Visual Studio 2010 Ultimate
    Videos:
    - Test Tools Configuration & Settings in Visual Studio
    - Why & How to Record Web Performance Tests in Visual Studio Ultimate
    - Goal Driven Load Testing using Visual Studio Ultimate
    Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.
    Review and What's next?
    A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!
    Advantages:
    - Utilizing the power of Azure compute to run a heavy virtual user load.
    - Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
    - Most important, testing network latency (network latency and speed of connection are two different things – usually network latency is very hard to test): by placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.
    Next Steps: The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • BizTalk: mapping with Xslt

    - by Leonid Ganeline
    BizTalk Map Editor (Mapper) is a good editor, especially in the latest BizTalk 2010 version. But sometimes it still cannot do a task easily. That is when it is time for Xslt code, and time to remember that maps are executed by the Xslt engine. Right-click the Mapper grid (the field between the source and target schemas) and choose Properties / Custom XSLT Path. Enter there the name of the file with your Xslt code. Only this code will be executed; forget the picture in the Mapper, all those links and functoids. Let's see a real-life example. There are two source Addresses. One is on the top level and the second is inside the Member_Address record with MaxOccurs=*. The target address is placed inside the Locator record with MaxOccurs=*. The requirement is to map all source addresses to the one target address structure, so the source Xml document carries both address shapes and the result Xml should contain every one of them inside repeating Locator records. Try to do this mapping with the Mapper and you will spend a good amount of time, and the resulting map will be tricky. If we use Xslt code, the mapping is simple and unambiguous (a sketch follows below). Simple, elegant.
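    The original post illustrates the source, result and Xslt with screenshots that are not reproduced here, so the following is only a rough sketch of what such a custom Xslt could look like. The element names (Members, Address, Member_Address, Locators, Locator) follow the description above but are assumptions, not the actual schemas:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Sketch only: copy the top-level Address and every Member_Address/Address
         into repeating Locator records in the target document. -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/Members">
        <Locators>
          <!-- one Locator per source address, wherever it appears -->
          <xsl:for-each select="Address | Member_Address/Address">
            <Locator>
              <Address>
                <xsl:value-of select="." />
              </Address>
            </Locator>
          </xsl:for-each>
        </Locators>
      </xsl:template>
    </xsl:stylesheet>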

    Read the article

  • VirtualBox

    - by DesigningCode
    I wanted to play around with something in a VM the other day. I was curious what was available for free, if anything, for Windows. I quickly came across VirtualBox (http://www.virtualbox.org/). Downloaded, installed. No problem! Works really nicely. It was commercial software (by Sun, now Oracle) that turned open source. In terms of a license, it says: In summary, the VirtualBox PUEL allows you to use VirtualBox free of charge for personal use or, alternatively, for product evaluation. An interesting feature it has is built-in RDP, which is useful if you have a guest OS that doesn't support RDP. Speaking of RDP… which I will get to in my next blog post… I learnt something REALLY useful the other day.

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool CSS design touches implemented on your ASP.NET data bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound to ASP.NET data controls? If your answers to the above questions are 'yes', then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.
    Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the 'Pareto Principle' or 80-20 rule: it aims to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production. The project template was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on CodePlex.
    Release Highlights
    User End Functional Specification: The user end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records as described below.
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records
    Architectural Overview
    - Simple 3 layer architecture (presentation, business logic and data access layer)
    - ASP.NET web form based user interface
    - Built-in code generators for logical layers, implemented in the Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for Cross Cutting Concerns (such as validation, logging, exception management)
    (A hedged sketch of what these layers can look like appears after the Getting Started Guide below.)
    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - Sql Server 2005 (Express Edition) or higher
    Technology Utilized
    Programming Languages/Scripts: Browser side: JavaScript; Web server side: C#; Code Generation Template: T-4 Template
    Frameworks: .NET Framework 4.0; JavaScript Framework: jQuery 1.5.1; CSS Framework: 960 grid system
    .NET Framework Components: .NET Entity Framework, .NET Optional/Named Parameters (new in .NET 4.0), .NET Tuple (new in .NET 4.0), .NET Extension Method, .NET Lambda Expressions, .NET Anonymous Type, .NET Query Expressions, .NET Automatically Implemented Properties, .NET LINQ, .NET Partial Classes and Methods, .NET Generic Type, .NET Nullable Type, ASP.NET Meta Description and Keyword Support (new in .NET 4.0), ASP.NET Routing (new in .NET 4.0), ASP.NET Grid View (CSS support for sorting - new in .NET 4.0), ASP.NET Repeater, ASP.NET Form View, ASP.NET Login View, ASP.NET Site Map Path, ASP.NET Skin, ASP.NET Theme, ASP.NET Master Page, ASP.NET Object Data Source, ASP.NET Role Based Security
    Getting Started Guide
    To see Employee Info Starter Kit in action is pretty easy! Download the latest version, extract the file, and from the extracted folder click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010. Hit Ctrl+F5!
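    To give a rough feel for the layered shape described in the Architectural Overview above, here is a hedged C# sketch of what a business entity, data-mapper and domain-model pair for the Employee table might look like. The class and method names are illustrative only; the starter kit generates its own versions of these layers via T4:

    // Illustrative sketch, not the starter kit's actual generated code.
    using System.Collections.Generic;
    using System.Linq;

    // Business entity (in the kit this is an Entity Framework entity).
    public class Employee
    {
        public int EmployeeId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // Data Mapper style data access layer: maps entities to and from the store.
    public class EmployeeDataMapper
    {
        public List<Employee> GetAll()
        {
            using (var context = new EmployeeEntities()) // hypothetical ObjectContext
                return context.Employees.ToList();
        }

        public void Add(Employee employee)
        {
            using (var context = new EmployeeEntities())
            {
                context.Employees.AddObject(employee); // EF4 ObjectContext API
                context.SaveChanges();
            }
        }
    }

    // Domain Model style business logic layer sitting on top of the mapper.
    public class EmployeeFacade
    {
        private readonly EmployeeDataMapper _mapper = new EmployeeDataMapper();

        public List<Employee> GetAllEmployees()
        {
            return _mapper.GetAll();
        }

        public void CreateEmployee(Employee employee)
        {
            // Cross-cutting concerns (validation, logging) would plug in here.
            _mapper.Add(employee);
        }
    }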
    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >