Search Results

Search found 17192 results on 688 pages for 'geeks with blogs'.


  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. I’ve often heard similar phrases around Scrum that clue me in to someone who doesn’t understand Scrum.  The phrases go something like this: “We don’t do Agile because the idea of letting people just do whatever they want is wrong.  We believe in a more structured approach.” (i.e. Work is Prison, and I’m the Warden!) “I love Agile.  Agile lets us do whatever we want!” (Cowboy Agile?) “We’re Agile, but we use a process that I’ve created.” (Cowboy Agile?) All of those phrases have one thing in common: the assumption that Agile, and I mean Scrum, lets you do whatever you want.  This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.

    Scrum and Waterfall Compared
    Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order.

    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on.
    Scrum: The team members define the tasks and estimates of the stories for the current iteration.  Any team member may work on any task in the iteration.

    Waterfall: There are usually only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed.  Little work is ever done to improve estimates, and poor estimators can hide behind high estimates.
    Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under.  Repeated misses can get the entire team fired.

    Waterfall: Partially completed work is normal.
    Scrum: Partially completed work doesn’t count.

    Waterfall: Nobody knows the task you’re working on.
    Scrum: Everyone knows what you’re working on, whether or not you’re making progress, and how much longer you think it’s going to take, in hours.

    Waterfall: Little requirement to show working code.  Prototypes are OK.
    Scrum: Working code must be shown each iteration.  No smoke and mirrors allowed.

    Waterfall: Testing is done in lengthy cycles at the end of development.  Developers aren’t held accountable.
    Scrum: Testing is part of the team.  If the testers don’t accept the story as complete, the team can’t count it.  Complete means that the story’s functionality works as designed.  The team can’t have any open defects on the story.

    Waterfall: Velocity is rarely truly measured and is difficult to evaluate.
    Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.

    Waterfall: A business analyst writes requirements.  Designers mock up screens.  Developers hide behind “I did it just like the spec doc told me to and made the screen exactly like the picture.”
    Scrum: Developers are expected to collaborate in real time.  If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional.  Designers and business analysts are part of the team and must do their work in iterations slightly ahead of the developers.

    Waterfall: Upper management is often surprised.  “You told me things were going well two months ago!”
    Scrum: Management receives updates at the end of every iteration showing them exactly what the team did and how that compares to what is remaining in the backlog.  Managers know every iteration what their money is buying.

    Waterfall: Status meetings are rare or don’t occur.  Email is a primary form of communication.
    Scrum: Teams coordinate every single day with each other and use other high-bandwidth communication channels to make sure they’re making progress.  Email is used only as a last resort.  Instead, team members stand up, walk to each other, and talk, face to face.  If that’s not possible, they pick up the phone.

    Waterfall: If someone asks what happened, it’s at the end of a lengthy development cycle measured in months, and nobody really knows why it happened.
    Scrum: Someone asks what happened every iteration.  The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.

    That’s probably enough for now.  As you can see, a lot is required of Scrum teams! One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility.  This is a very good thing, since small groups usually come up with better and more insightful work than single individuals.  This shift also results in better velocity.  Team members can take vacations and the rest of the team simply picks up the slack.  With Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member and, as a result, Scrum teams outperform non-Scrum teams working 60-hour weeks.

    Recommended Reading
    Everyone considering Scrum should read Mike Cohn’s excellent book, User Stories Applied.
    Technorati Tags: Agile,Scrum,Waterfall

    Read the article

  • Virtualising Windows 8 on OS X with VMware Fusion above the 8GB limit

    - by mbrit
    tl;dr - don't. VMware Fusion has an 8GB limit, so on my 16GB MacBook Pro (which is an old one with a 16GB memory upgrade), I wanted to use more than 8GB. You can fudge it by editing the .vmx file and changing the memsize value to whatever you want. However, it turns out that if you do this the performance of the whole machine turns into a mess. It beachballs all over the place, both in VMware and without. Constant hanging of VMware, fan running hot the whole time, etc. This is just an anecdotal view, but having spent a week and a half with it unusable and thinking I was going to have to go back to Win7, turning it back down to using exactly 8GB seems to be working much more stably.

    Read the article

  • SQL Server: How do I generate the table schema and populate it with inserts in a script?

    - by Paula DiTallo
    Originally posted on: http://geekswithblogs.net/AskPaula/archive/2014/05/20/156469.aspx
    In SSMS, there's a Generate Scripts utility (read: only available under version 2008 and up). Here are the steps you would need to take to make use of the utility:
    1. Right-click on the database you're interested in and go to Tasks -> Generate Scripts.
    2. Select the tables and/or any other objects you'd like in order to get them into the script.
    3. Navigate to Set scripting options and click on Advanced.
    4. Under the General category, navigate to Type of data to script.
    5. Select the Schema and Data option to get the insert statements generated, then click OK.
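    If you need to do the same thing without the SSMS UI, the schema-plus-data script can also be produced programmatically with SQL Server Management Objects (SMO). The snippet below is a minimal sketch of mine, not part of the original post; it assumes the SMO assemblies are referenced, and the server, database and table names are placeholders.

    using System;
    using Microsoft.SqlServer.Management.Smo;

    class GenerateInsertScript
    {
        static void Main()
        {
            // Placeholder names - replace with your own server, database and table.
            var server = new Server(@".\SQLEXPRESS");
            var database = server.Databases["MyDatabase"];
            var table = database.Tables["MyTable"];

            var scripter = new Scripter(server);
            scripter.Options.ScriptSchema = true;  // CREATE TABLE statement
            scripter.Options.ScriptData = true;    // INSERT statements for the rows

            // EnumScript yields the generated T-SQL one statement at a time.
            foreach (string line in scripter.EnumScript(new SqlSmoObject[] { table }))
            {
                Console.WriteLine(line);
            }
        }
    }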

    Read the article

  • Useful utilities - LINQPAD

    - by TATWORTH
    Recently I came across LINQPad at http://www.linqpad.net/, a free utility by Joseph Albahari. This is an excellent tool for developing and testing LINQ queries before you incorporate them into your C# programs. If you get stuck, as I did at one point recently, there is the MSDN LINQ forum at http://forums.microsoft.com/MSDN/ShowForum.aspx?siteid=1&ForumID=123 where you can ask for help.
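    To give a flavour of the kind of thing LINQPad is handy for, here is a small, self-contained C# query of the sort you might prototype there before pasting it into a real project. This example is mine rather than the author's; the data and names are made up purely for illustration.

    using System;
    using System.Linq;

    class LinqScratchPad
    {
        static void Main()
        {
            // Sample data standing in for whatever collection you are really querying.
            var orders = new[]
            {
                new { Customer = "Adams", Total = 120.50m },
                new { Customer = "Baker", Total = 75.00m },
                new { Customer = "Adams", Total = 42.25m },
            };

            // Group by customer and sum the totals - easy to tweak and re-run in LINQPad.
            var summary = orders
                .GroupBy(o => o.Customer)
                .Select(g => new { Customer = g.Key, Spent = g.Sum(o => o.Total) })
                .OrderByDescending(x => x.Spent);

            foreach (var row in summary)
            {
                Console.WriteLine("{0}: {1:C}", row.Customer, row.Spent);
            }
        }
    }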

    Read the article

  • Deploying an SSL Application to Windows Azure &ndash; The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up.  When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part…  Little did I know that my fun was just beginning…

    The Problem
    I’ll be honest, I had never really secured a site before with SSL.  This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure.  I understood a bit about SSL already, the mechanisms in how it works – the secure handshake, CAs, chains, etc…  What I didn’t realize was the importance of the CSR in the whole process.  Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request.  When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store.
    Since, at the time the certificate had been ordered for our site, the selection to use the online interface to generate the CSR was chosen, the certificate came back to us in 5 separate files:
    - A root certificate (*.crt file)
    - An intermediate certificate (*.crt file)
    - Another intermediate certificate (*.crt file)
    - The SSL certificate for our site (*.crt file)
    - The private key for our certificate (*.key file)
    Well, in case you don’t know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password.  Also, in the case of our SSL certificate, you need to include the private key with the file.  As you can see, we didn’t have a PFX file to upload. If you don’t get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple – to one that borders on a circle of hell… Probably between the fifth and seventh somewhere…

    The Solution
    The solution is to take the files that make up the certificate's chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure.  I cannot take the credit for this information, as I simply researched a while before finding out how to do this.
    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command:

    openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}.
    Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.  You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!

    Conclusion
    When I first looked around for a solution to this problem, there were not many places online that had the information that I was looking for.  While what I ended up having to do may seem obvious, it isn’t for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!
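    Before uploading the PFX to Azure, it can save a round trip to check locally that the file really does contain the private key. The following is a small sketch of my own (not from the original post) using the standard .NET X509Certificate2 class; the file name and password are placeholders for your own export values.

    using System;
    using System.Security.Cryptography.X509Certificates;

    class CheckPfx
    {
        static void Main()
        {
            // Placeholder path and password - use the values from your own export.
            var cert = new X509Certificate2("outcert.pfx", "your-export-password",
                                            X509KeyStorageFlags.Exportable);

            Console.WriteLine("Subject:       " + cert.Subject);
            Console.WriteLine("Thumbprint:    " + cert.Thumbprint);
            Console.WriteLine("Expires:       " + cert.NotAfter);
            // This must print True, or the certificate will not be usable for SSL in Azure.
            Console.WriteLine("HasPrivateKey: " + cert.HasPrivateKey);
        }
    }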

    Read the article

  • Building the Bootsector of BIOSLOADER

    - by Kate Moss' Open Space
    Windows CE has been a 32-bit OS since day one, so it makes sense that the tools shipped with PB – compiler, linker, assembler and so on – target 32-bit systems. But occasionally, if you are developing an x86-based system and especially working on some boot code, such as the boot sector of BIOSLOADER, that will be a problem. Normally PB provides the prebuilt boot sector image, but if you ever need to rebuild it, what should you do? You may say that as it's x86, perhaps you can use VS or the Windows SDK to build it. But unfortunately, today's desktop Windows tool chains are also 32- or even 64-bit only; you need to find something older. VC++ 6.0? But how can you find one? This website, http://thestarman.pcministry.com/asm/masm.htm, collects some useful resources. Basically, you need two things: the 16-bit MASM and a 16-bit linker. To make it even easier for you:
    - Download http://download.microsoft.com/download/vb60ent/Update/6/W9X2KXP/EN-US/vcpp5.exe for the assembler (MASM).
    - Download http://download.microsoft.com/download/vc15/Update/1/WIN98/EN-US/Lnk563.exe for the linker.
    Then just extract the archives; what you need is ml.exe, ml.err and link.exe.

    Read the article

  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you have a need to create multiple custom functoids you may also choose to create a Custom Functoid Library - a single project containing many custom functoids.  As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside.  But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library?
    First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file.  One of the items this resource file contains is the ID of the custom functoid.  Each custom functoid needs a unique ID that is over 6000.  When creating a Custom Functoid Library I would first suggest that you delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids.  In this way you can easily see which custom functoid IDs are assigned to which custom functoid and which ID is next in the sequence of availability:

    namespace MyCompany.BizTalk.Functoids.TestFunctoids
    {
        class _FunctoidIDs
        {
            public const int TestFunctoid = 6001;
        }
    }

    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file.
    From:

        int functoidID;
        // This has to be a number greater than 6000
        functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
        this.ID = functoidID;

    To:

        this.ID = _FunctoidIDs.TestFunctoid;

    To create a new custom functoid you can copy the existing custom functoid, renaming the resultant class file as appropriate.  Once it is renamed you will need to change the class name, ResourceName reference and base function name in the class code to those of your new custom functoid.  You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match this.
    Assuming that you need some different functionality from your new custom functoid you will need to check or amend the following in your functoid class file:
    - Min and Max connections
    - Functoid Category
    - Input and Output connection types
    - The parameters and functionality of the Execute function
    To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file:
    - Name
    - Description
    - Tooltip
    - Exception
    - Icon
    You can change the string values by double clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image.  Once you have saved this you are then ready to import it into the functoid resource file.  In Visual Studio change the resource view to images, right click the icon and choose import from file.
    You have now completed your new custom functoid and created a Custom Functoid Library.  You can test your new library of functoids by building the project, copying the resultant DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then resetting the toolbox in Visual Studio.
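    For context, a complete functoid class in such a library typically derives from BaseFunctoid and wires up its ID, resources and connection types in its constructor. The outline below is my own illustrative sketch rather than the author's code; the resource names, category, connection settings and Execute logic are assumptions you would replace with your own.

    using System.Reflection;
    using Microsoft.BizTalk.BaseFunctoids;

    namespace MyCompany.BizTalk.Functoids.TestFunctoids
    {
        public class TestFunctoid : BaseFunctoid
        {
            public TestFunctoid() : base()
            {
                // ID comes from the shared constants class described above.
                this.ID = _FunctoidIDs.TestFunctoid;

                // Resource names are assumptions - they must match entries in your .resx file.
                SetupResourceAssembly(GetType().Namespace + ".TestFunctoidResource", Assembly.GetExecutingAssembly());
                SetName("IDS_TESTFUNCTOID_NAME");
                SetTooltip("IDS_TESTFUNCTOID_TOOLTIP");
                SetDescription("IDS_TESTFUNCTOID_DESCRIPTION");
                SetBitmap("IDS_TESTFUNCTOID_BITMAP");

                // Min/max connections, category and connection types - adjust per functoid.
                this.SetMinParams(1);
                this.SetMaxParams(1);
                this.Category = FunctoidCategory.String;
                this.AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;

                // Tells the mapper which method to call at map execution time.
                SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "Execute");
            }

            // The actual work done by the functoid.
            public string Execute(string value)
            {
                return (value ?? string.Empty).Trim().ToUpper();
            }
        }
    }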

    Read the article

  • Get Started with .Net and Apache Cassandra

    - by Sazzad Hossain
    Just came across an easy and nice-to-read article explaining how to get started with a NoSQL database system. These non-relational databases are getting increasingly popular for tackling distribution and large data set problems. Cassandra's ColumnFamily data model offers the convenience of column indexes with the performance of log-structured updates, strong support for materialized views, and powerful built-in caching. The article is nicely written by Kellabyte and shows, step by step, how to get going with the programming on a .NET platform. Read more here.

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default filter attribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? This gives you direct programmatic control over handling your 500 errors in the same way that ASP.NET and CustomErrors give you configurable control of handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your Exceptions; it's pretty sweet!

    Global Filters
    The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute.

    public class MvcApplication : System.Web.HttpApplication
    {
        public static void RegisterGlobalFilters(GlobalFilterCollection filters)
        {
            filters.Add(new HandleErrorAttribute());
        }
    }

    HandleErrorAttributes
    The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using filter attributes for our AcceptVerbs and RequiresAuthorization, now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify what exception that attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself.

    public class MvcApplication : System.Web.HttpApplication
    {
        public static void RegisterGlobalFilters(GlobalFilterCollection filters)
        {
            filters.Add(new HandleErrorAttribute
            {
                ExceptionType = typeof(DbException),
                // DbError.cshtml is a view in the Shared folder.
                View = "DbError",
                Order = 2
            });
            filters.Add(new HandleErrorAttribute());
        }
    }

    Error Views
    All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error cannot have a specified model! This is because they already have a model, and that is System.Web.Mvc.HandleErrorInfo.

    @model System.Web.Mvc.HandleErrorInfo
    @{
        ViewBag.Title = "DbError";
    }
    <h2>A Database Error Has Occurred</h2>
    @if (Model != null)
    {
        <p>@Model.Exception.GetType().Name<br />
        thrown in @Model.ControllerName @Model.ActionName</p>
    }

    Errors Outside of the MVC Pipeline
    The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET. You turn on custom errors, specify error codes and paths to error pages, etc. It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles. Also, these will happen whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline.

    <system.web>
      <customErrors mode="On" defaultRedirect="~/error">
        <error statusCode="404" redirect="~/error/notfound"></error>
      </customErrors>
    </system.web>

    Sample Controllers

    public class ExampleController : Controller
    {
        public ActionResult Exception()
        {
            throw new ArgumentNullException();
        }
        public ActionResult Db()
        {
            // Inherits from DbException
            throw new MyDbException();
        }
    }

    public class ErrorController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
        public ActionResult NotFound()
        {
            return View();
        }
    }

    Putting It All Together
    If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out:
    1. A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view.
    2. A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view.
    3. Go to a non-existent page. You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP status code 404.
    But don't take my word for it, download the sample project and try it yourself.

    Three Important Lessons Learned
    For the most part this is all pretty straightforward, but there are a few gotchas that you should remember to watch out for:
    1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action; it will just blow up and cause a second exception after your first exception!
    2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's OK! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, thus there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it. But then when you render that error view from your action, you can only use the HandleErrorInfo model or ViewData dictionary to populate your page.
    3) The HandleErrorAttribute obeys whether CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with custom errors.

    In Summary
    The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes.

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    Simple and safe data connections.

    This guide is for someone wanting to use the latest ODP.NET quickly without reading the official documentation. This guide will get you up and running in about 15 minutes. I have reviewed my referral link to my simple Setting up ODP.net with Win7 x64 and noticed most people were searching for one of the following terms:
    - "how to use odp.net with vs"
    - "setup connection odp.net"
    - "query db using odp and vs"
    While my article provided links and a sample tnsnames.ora file, it really didn't tell you how to use it. I'm hoping that this brief tutorial will help. So before we get started, you will need the following:
    - Download the following: www.oracle.com/technology/software/tech/dotnet/utilsoft.html from Oracle and install it. It is the first one on the page.
    - Visual Studio 2008 or 2010.
    It should be noted that the System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore as it has been deprecated. The latest version, which is what we are using, is Oracle.DataAccess.Client.
    First things first, add a reference to Oracle.DataAccess.Client after you install ODP.NET.
    Copy and paste the following C# code into your project and replace the relevant info, including the query string, and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

    using System;
    using System.Data;
    using Oracle.DataAccess.Client;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    // Setup DataSource
                    string oradb = "Data Source=(DESCRIPTION ="
                                 + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                 + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME))) ;"
                                 + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                    // Open connection to Oracle - this could be moved outside the try.
                    OracleConnection conn = new OracleConnection(oradb);
                    conn.Open();

                    // Create cmd and use parameters to prevent SQL injection attacks.
                    OracleCommand cmd = new OracleCommand();
                    cmd.Connection = conn;
                    cmd.CommandText = "select username from table where username = :username";

                    string username = "USERNAME_TO_FIND"; // replace with the value you are querying for
                    OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                    p1.Value = username;
                    cmd.Parameters.Add(p1);

                    cmd.CommandType = CommandType.Text;

                    OracleDataReader dr = cmd.ExecuteReader();
                    dr.Read();

                    // Contains the value of the data row
                    Console.WriteLine(dr["username"].ToString());

                    // Disposes of objects.
                    dr.Dispose();
                    cmd.Dispose();
                    conn.Dispose();
                }
                catch (OracleException ex) // Catches only Oracle errors
                {
                    switch (ex.Number)
                    {
                        case 1:
                            Console.WriteLine("Error attempting to insert duplicate data.");
                            break;
                        case 12545:
                            Console.WriteLine("The database is unavailable.");
                            break;
                        default:
                            Console.WriteLine(ex.Message.ToString());
                            break;
                    }
                }
                catch (Exception ex) // Catches any error not previously caught
                {
                    Console.WriteLine("Unidentified Error: " + ex.Message.ToString());
                }
            }
        }
    }

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble then drop me a line and I will be happy to assist. As of this writing, Oracle has announced the latest beta release of ODP.NET 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while as it's beta, and I wouldn't want any production code using any beta software.

    Read the article

  • Pagination In blogengine.net 2.0

    - by anirudha
    BlogEngine.NET 2.0 is a great platform for making blogging easier. However, when you update the blog in BlogEngine.NET 2.0 you will find that the pagination does not look right: it shows the previous post instead of the next post, and the next post instead of the previous post. Here is a solution. You need to fix the pagination module; go to App_Code/Controls/postPager.cs and replace the code there, or change it using the file I have put here: download pagination module. Once you replace it, BlogEngine.NET 2.0 pagination works well. Search terms related to this issue: pagination not working in BlogEngine.NET 2.0, pagination bug in BlogEngine.NET 2.0, make pagination work in BlogEngine.NET 2.0.

    Read the article

  • Running ASP.Net MVC3 Alongside ASP.Net WebForms in the Same Project

    - by Sam Abraham
    I previously blogged on running ASP.Net MVC in an ASP.Net WebForms project. My reference at the time was a freely-available PDF document by Scott Guthrie which covered the setup process in good detail.   As I am preparing references to share with our audience at my upcoming talk at the Deerfield Beach Coders Café on March 1st (http://www.fladotnet.com/Reg.aspx?EventID=514), I found a nice blog post by Scott Hanselman on running both ASP.Net 4.0 WebForms along with ASP.Net MVC 3.0 in the same project. You can access this article here.   Moreover, Scott later followed-up with a blog showing how to leverage NuGet to automate the setup of ASP.Net MVC3 in an existing ASP.Net WebForms project.   One frequent question that usually comes up when discussing this side-by-side setup is the loss of the convenient Visual Studio Solution Explorer context menu which enable us to easily create controllers and views with a few mouse clicks.   A good suggestion brought up in the comments section of Scott’s article presented a good work-around to this problem: Manually add the MVC Visual Studio Project Type GUID in your .sln solution file ({E53F8FEA-EAE0-44A6-8774-FFD645390401}) which then brings back the MVC-specific context menu functionality in solution explorer of the hybrid project. (Thank James Raden!)

    Read the article

  • Getting Internal Name of a Share Point List Fields

    - by Gino Abraham
    Over the last 2 weeks I was developing a tool to migrate a Lotus Notes database to SharePoint. The mapping between the Lotus Notes schema and the SharePoint list schema was done manually in an XML file for our tool. To map the columns we wanted the internal names of each field. There are quite a few ways to achieve this; I have explained a few below.
    If you want internal names for one or two columns you can do so by navigating to the list settings and clicking on the column name. Once you are in the column's details, you can check the query string of the page. The last item in the query string would be the field's internal name; replacing all "%5f" with '_' will give you the field's internal name.
    In my case there were more than 80 columns, so I used PowerShell to get the list of columns with details. Open Windows PowerShell and paste the following script after modifying the URL and list name:

    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
    $site = new-object Microsoft.SharePoint.SPSite("http://yoursitecollectionurl")
    $web = $site.OpenWeb()
    $list = $web.Lists["your list name"]
    $list.Fields | Format-Table Title, InternalName, TypeAsString

    I also found a tool on CodePlex which can generate a wrapper class for a list. The wrapper class will give you the GUID and internal name for all fields in the list. You can download the tool from http://imtech.codeplex.com/ - just enter the URL in the text box and hit open. All the site content will be listed on the left-hand side; expand the list, right-click and select generate wrapper class.
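    If you would rather do the same thing from C#, for example inside the migration tool itself, the server object model exposes the same properties. This is a small sketch of mine, not from the original post; the site URL and list name are placeholders.

    using System;
    using Microsoft.SharePoint;

    class ListFieldNames
    {
        static void Main()
        {
            // Placeholder URL and list name - substitute your own.
            using (SPSite site = new SPSite("http://yoursitecollectionurl"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["your list name"];
                foreach (SPField field in list.Fields)
                {
                    // Title is the display name; InternalName is what a schema mapping needs.
                    Console.WriteLine("{0} -> {1} ({2})", field.Title, field.InternalName, field.TypeAsString);
                }
            }
        }
    }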

    Read the article

  • Azure Florida Association

    - by Dave Noderer
    Herve Roggero, SQL Azure MVP, has created a virtual community to focus on Azure. Here is the outline from Herve:
    User Group Name: Azure Florida Association
    Purpose: Start a virtual Florida user group that targets the Azure platform
    Venues: Most meetings will be virtual; however I plan to host a few physical events across Florida if possible from time to time; physical events may be a few hours long with potentially more than one speaker
    Possible Topics: The topics will touch Azure generally speaking, but can cover a wide array of concerns such as integration, data migration, hosting, security, scalability, mobile device integration, successful ventures/lessons learned, cross-cloud integration patterns, testing in the cloud, deployment management, reporting…
    Target Members: Architects, Developers, IT Managers
    Membership: Membership will be free; virtual events will be free; physical events may involve a minimal cover charge
    Speakers: If you are interested in speaking or if you have topic ideas, please let me know
    Frequency: Initially these meetings will be held every other month
    The first meeting will be held on January 25, 2012 at 4PM EST. Vikas Sahni, SQL Azure MVP, will be presenting on Demystifying SQL Azure. Vikas will introduce SQL Azure, value proposition, usage scenarios, concepts and architecture, what is there and what is not, including tips and tricks. The actual meeting link will be available in January but please join the LinkedIn group now to be kept informed of this and future events: http://www.linkedin.com/groups?gid=4177626.

    Read the article

  • Avatar (spoiler alert!)

    - by Dave Yasko
    This past weekend we finally saw “Avatar,” or as I like to call it “Dances with Smurfs.”  It was rather light on the story, heavy on the message, and incredibly well done.  The eye for detail is what blew me away, especially the visual distortion (presumably due to density) when the two atmospheres mixed.  The only thing I thought they might have missed was why so many (presumably) mammals had 6 appendages (4 arms + 2 legs) and breathed through passages near their clavicles, but the Na’vi (sp?) didn’t have/do either.  Also, James Cameron just loves to telegraph upcoming events: Riding the big red bird thing has only happened 5 times before – Sully is on the job.  The tree is going to download dying Ripley to her avatar body – Sully is going to do that too.  I’ve seen worse foreshadowing, but not in a long time. I give it 4 Papa Smurfs out of 5.

    Read the article

  • Computer Bugs - Etymology and Entomology

    - by PointsToShare
    Whatever bugs you
    My wife and I used to take some of our summer vacation in a cabin on the shore of Lake Atsion in NJ. It is a delightful place in the Wharton forest with brown yet fresh water, where we would canoe, swim and enjoy true rest. Alas, in the last few years, yellow flies also discovered the area’s pastoral delights and came in hordes to bug us. So much so that we had to give up. As a computer programmer I abhor bugs. The bugs that bug me – except the pesky yellow flies – are program bugs, a specific variety of computer bugs. You can find an excellent take on the etymology of the word “bug” in this delightful monograph: http://www.jamesshuggins.com/h/tek1/first_computer_bug.htm
    In my youth, I worked on Burroughs computers. Unlike their IBM brethren, the Burroughs used a 96-column card. The cards were much smaller than the 80-column IBM cards. We wrote our programs on coding sheets and then a key-punch operator transcribed them into punched cards. These were fed into a card reader and compiled. The compiler would notify us of compiler errors or bugs, but it was not always easy to get the meaning of the message. My friend Mark Wildt, also a Burroughs veteran, gave me an old punched card from one of his programs. Obviously a bug!! Here It Is!! That’s All Folks!

    Read the article

  • Management Reporter Installation – Lessons Learned

    - by Ryan McBee
    After successfully completing several installations of Management Reporter this year, I wanted to share a few lessons learned that should help you. First, you will want to make sure that you install Management Reporter under a domain account as opposed to a local system or network service account. Management Reporter gives you the option to install under these accounts, but it is a best practice to use a domain account. Upon installation of Management Reporter, you will want to make sure that Directory Browsing is enabled within the IIS server of your site or you will have problems when you go to use Management Reporter. By default, it will be disabled in Server 2008 R2 and you will need to make the setting change under the Actions pane. Lastly, you will want to make sure that SQL Server is running under a domain account. I have had multiple situations where reports have been stuck in the Queued status rather than the Processing status of Management Reporter. After reviewing resolution 5 of KB 2298248, it was determined that running SQL Server under a domain account is the way to go.

    Read the article

  • I finished my #TechEd 2010, may I have another??

    - by T
    It has been another fantastic year for TechEd North America.  I always love my time here.  First, I have to give a huge thank you to INETA for giving me the opportunity to work the INETA booth and BOFs (birds of a feather).  I cannot even begin to list how many fantastic leaders in the .NET space and developers from all over I have met through INETA at this event.  It has been truly amazing and great fun!!
    New Orleans has been awesome.  The night life is hoppin’.  In addition to enjoying a few (too many??) of the local hurricanes in New Orleans, I have hung out with some of the coolest people: Deepesh Mohnani, David Poll, Viresh, Alan Stephens, Shawn Wildermuth, Greg Leonardo, Doug Seven, Chris Willams, David Carley and some of our South Central heroes Jeffery Palermo, Todd Anglin, Shawn Weisfeld, Randy Walker, The Midnight DBAs, Zeeshan Hirani, Dennis Bottjer, just to name a few.
    A big thanks to Microsoft and everyone that has helped to put TechEd together.  I have loved hanging out with people from the Silverlight and Expression teams and have learned a ton.  I am ramped up and ready to take all that knowledge back to my co-workers and my community. I cannot wait to see you all again next year in Atlanta!!!
    Here are video links to some of my fav sessions:
    - Using MVVM Design Pattern with VS 2010 XAML Designer – Rockford Lhotka
    - Effective RIA: Tips and Tricks for Building Effective Rich Internet Applications – Deepesh Mohani
    - Taking Microsoft Silverlight 4 Applications Beyond the Browser – David Poll
    - Jump into Silverlight! and Become Immediately Effective – Tim Huckaby
    - Prototyping Rich Microsoft Silverlight 4 Applications with MS Expression Blend + SketchFlow – David Carley
    - Tales from the Trenches: Building a Real-World Microsoft Silverlight Line-of-Business Application – Dan Wahlin

    Read the article

  • TechEd Europe early bird saving &ndash; register by 5th July

    - by Eric Nelson
    Another event advert alert :-) But this one comes with a cautious warning. I spoke at TechEd Europe last year. I found TechEd to be a huge, extremely well run conference filled with great speakers and passionate attendees in a top notch venue and fascinating city. As an “IT Pro” I think it is the premiere conference for Microsoft technologies in Europe. However, IMHO and those of others I trust, I didn’t think it hit the mark for developers in 2009. There was a fairly obvious reason – the PDC was scheduled to take place only a couple of weeks later which meant the “powder was being kept dry” and (IMHO) some of the best speakers on developer technologies were elsewhere. But I’m reasonably certain that this won’t be repeated this year (Err… Have I missed an announcement about “no pdc in 2010”?) Enjoy: Register for Tech·Ed Europe by 5 July and Save €500 Tech·Ed Europe returns to Berlin this November 8 – 12, for a full week of deep technical education, hands-on-learning and opportunities to connect with Microsoft and Community experts one-on-one.  Register by 5 July and receive your conference pass for only €1,395 – a €500 savings. Arrive Early and Get a Jumpstart on Technical Sessions Choose from 8 pre-conference seminars led by Microsoft and industry experts, and selected to give you a jumpstart on technical learning.  Additional fees apply.  Conference attendees receive a €100 discount.   Join the Tech·Ed Europe Email List for Event Updates Get the latest event news before the event, and find out more about what’s happening onsite.  Join the Tech·Ed Europe email list today!

    Read the article

  • Time Passes

    - by Robert May
    It’s been half a year since my last post.  My how time flies.  My New Year’s resolution is to post more frequently.  After a short stint at a local company, which shall remain nameless, I’m back at Veracity.  Overall, Veracity Solutions is one of the best companies I’ve worked for, and I’m relieved to be back. So, this year, I’m going to do the following on my blog:
    - Finish the Agile posts I started (IN MAY!!!).
    - Blog about some code for a logging helper to make debug logging easier.
    - Blog some ReSharper snippets to help with logging.
    - Blog a Unity container helper to allow you to specify dependency mappings with attributes on interfaces.
    If I can accomplish all of those, I’ll have done well this year, and since I’ve put this out to the public, I’m accountable for it, right? Technorati Tags: Agile,logging

    Read the article

  • A tale of two dev accounts

    - by TechTwaddle
    Note: I am currently in the process of relocating my blog from http://www.geekswithblogs.net/techtwaddle to my new address at http://www.techtwaddle.net I suggest you point your feed readers to the new address as I slowly transition to my new shared-hosted, ad-free wordpress blog :)   You probably remember my rant from a while back about my windows mobile developer account having problems with the new AppHub, well, there have been few developments and I thought I should share it with you. First up, the issue isn’t fixed yet. I still cannot login to AppHub using my windows mobile 6.x developer account and can’t view details of my Minesweeper app. Who knows how many copies its sold. I had numerous exchanges with Microsoft’s support team on the AppHub forums and via email as well (support ticket), but somehow we never managed to get to the root of it. In fact, the support team itself grew so tired of the problem that they suggested I create a new dev account. I grew impatient, and it was really frustrating to have an app ready for submission but not being able to do anything with it. Eventually, the frustration had to show somewhere, and it was on this forum thread Prabhu Kumar in reply to Nick Nick, I feel for you and totally understand the frustration. Since day one I have been getting the XBOX profile linking error, We encountered an issue connecting your App Hub account with your Xbox Live Profile. Please visit Xbox.com and update your contact information. After you have updated your contact information, please return to the App Hub (https://users.create.msdn.com/Register) to continue. I have an app published on the Windows Mobile 6.x marketplace since Aug, now I can't view the details of this app. I completed work on my WP7 application 1.5 months ago and the first version is ready for submission to marketplace, only if I can login. You can imagine how frustrating all this can be, the issue has taken far too long to be fixed, this has drained all my motivation. I have exchanged numerous mails with Microsoft support team on this issue, and from the looks of it they really are trying their best, unfortunately, their best is not good enough for some of us. During the first week of December I was told that there would be an update happening to AppHub around mid of December. I was hoping that the issue would be fixed but it wasn't. After the update the only change I notice is that the xbox.com link on the error page now takes me to the correct link. Previously, this link used to take me to the 404 page you mentioned above. Out of desperation, I am now considering creating another developer account on AppHub with a new live id, even this I am not 100% sure will work. I asked the support team when the next update to AppHub was planned and got this reply, "We do not have  release date to announce for the next App Hub update at this time. In regards to the login issue you are experiencing at this point the only solution would be to create a new account with a different live ID but make sure to go to xbox.com before hand to get all the information in order on that side." I know it's an extra $99, and not that I can't afford it but it doesn't feel right and I shouldn't have to be doing it in the first place. I have lost all hope of this issue being resolved. 
I went ahead and created a new dev account, the id verification was in progress when Shaun Taulbee of Microsoft, who has been really helpful in the forums, replied saying, If you find it necessary to pay again to create a new account due to a Microsoft problem, send in a support request asking for a refund and we'll review it (and likely approve it given the circumstances). The thought of refund made me happy, but I had my doubts. So once my second account was verified by Geotrust I applied for a refund through the developer dashboard, by creating a support ticket. Couple of days later I got an email from Microsoft saying that the refund had been approved! yay! Few days and the refund showed up on my bill, Well, thank you Microsoft, it means a lot. I am glad it’s over now. The new account works flawlessly. I would still like to get my first account working again and look at my app numbers for Win Mo 6.x, and probably transfer the credits to the new account somehow, but I’ll save it for another day. If you’ve had similar problems with the AppHub, and had to create a new account to submit your app, I suggest you contact the support team and get your dollars refunded!

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx
    Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that need real-time communication and push notifications from the server side I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this even though they are based on SignalR 1. But when I tried to implement the authentication for my SignalR I struggled for 2 days and finally I got a solution by myself. This might not be the best one but it actually solved all my problems.

    In many articles it's said that you don't need to worry about the authentication of SignalR since it uses the web application's authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal your web application's authentication module resolved, and check if the principal exists and is authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty, and I need to authenticate and pass the principal by myself.

    Firstly I need to create a class derived from "AuthorizeAttribute", which takes responsibility for authenticating when a SignalR connection is established and when any method is invoked.

    public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
    {
        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
        }

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
        }
    }

    The method "AuthorizeHubConnection" will be invoked when any SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, try to decrypt it and recover the login user's claims.

    public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
    {
        var dataProtectionProvider = new DpapiDataProtectionProvider();
        var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
        // authenticate by using bearer token in query string
        var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
        var ticket = secureDataFormat.Unprotect(token);
        if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
        {
            // set the authenticated user principal into environment so that it can be used in the future
            request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
            return true;
        }
        else
        {
            return false;
        }
    }

    In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket with an identity and it's authenticated, this means it's a valid token. Then I pass the user principal into the request's environment property, which can be used in the near future.
    Since my website was built in AngularJS the SignalR client is in pure JavaScript, and the SignalR JavaScript client does not support setting customized HTTP headers, so I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but a restriction of WebSocket. For security reasons WebSocket doesn't allow the client to set customized HTTP headers from the browser.
    Next, I need to implement the authentication logic in the method "AuthorizeHubMethodInvocation", which will be invoked when any SignalR method is invoked.

    public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
    {
        var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
        // check the authenticated user principal from environment
        var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
        var principal = environment["server.User"] as ClaimsPrincipal;
        if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
        {
            // create a new HubCallerContext instance with the principal generated from token
            // and replace the current context so that in hubs we can retrieve current user identity
            hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
            return true;
        }
        else
        {
            return false;
        }
    }

    Since I had passed the user principal into the request environment in the previous method, I can simply check if it exists and is valid. If so, all I need is to pass the principal into the context so that the SignalR hub can use it. Since the "User" property is read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with the principal assigned, and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User" as below.

    public class DefaultHub : Hub
    {
        public object Initialize(string host, string service, JObject payload)
        {
            var connectionId = Context.ConnectionId;
            ... ...
            var domain = string.Empty;
            var identity = Context.User.Identity as ClaimsIdentity;
            if (identity != null)
            {
                var claim = identity.FindFirst("Domain");
                if (claim != null)
                {
                    domain = claim.Value;
                }
            }
            ... ...
        }
    }

    Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

    app.Map("/signalr", map =>
    {
        // Setup the CORS middleware to run before SignalR.
        // By default this will allow all origins. You can
        // configure the set of origins and/or http verbs by
        // providing a cors options with a different policy.
        map.UseCors(CorsOptions.AllowAll);
        var hubConfiguration = new HubConfiguration
        {
            // You can enable JSONP by uncommenting line below.
            // JSONP requests are insecure but some older browsers (and some
            // versions of IE) require JSONP to work cross domain
            // EnableJSONP = true
            EnableJavaScriptProxies = false
        };
        // Require authentication for all hubs
        var authorizer = new QueryStringBearerAuthorizeAttribute();
        var module = new AuthorizeModule(authorizer, authorizer);
        GlobalHost.HubPipeline.AddModule(module);
        // Run the SignalR pipeline. We're not using MapSignalR
        // since this branch already runs under the "/signalr" path.
        map.RunSignalR(hubConfiguration);
    });

    On the client side I should pass the Bearer token through the query string before I start the connection, as below.

    self.connection = $.hubConnection(signalrEndpoint);
    self.proxy = self.connection.createHubProxy(hubName);
    self.proxy.on(notifyEventName, function (event, payload) {
        options.handler(event, payload);
    });
    // add the authentication token to query string
    // we cannot use http headers since web socket protocol doesn't support
    self.connection.qs = { Bearer: AuthService.getToken() };
    // connection to hub
    self.connection.start();

    Hope this helps,
    Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Informed TDD &ndash; Kata &ldquo;To Roman Numerals&rdquo;

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
    These cases are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I’ll spare you that here: the domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What’s more difficult, though, might be to find an efficient way to convert to them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it is mixed in with the implementation. But I don’t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD: they are simplifying their true software development process because they want to show a streamlined TDD process, and I doubt this is helping anybody. Before you code, you had better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem; if that’s limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here’s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}, and the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1], and the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic: each “digit” can be taken at face value, no multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that roman “digits” are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. Everything looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Before, it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here’s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process: (1) find all the factors, (2) translate the factors found, (3) compile the roman representation. Translation is just a look-up – the sketch below shows one possible shape of the look-up table.
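    As an illustration of that look-up, here is how the factor/digit map from the list above might be expressed in C#. The values and digits come from the text; the field name, the ordering and the tuple representation are my own:

        // Roman "digits" ordered from highest to lowest value. Treating IV, IX, XL,
        // XC, CD and CM as digits of their own means no subtraction logic is needed.
        private static readonly (int Value, string Digit)[] Factors =
        {
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"),  (90, "XC"),  (50, "L"),  (40, "XL"),
            (10, "X"),   (9, "IX"),   (5, "V"),   (4, "IV"),
            (1, "I")
        };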
    Finding the factors, though, needs some calculation: (1) find the highest remaining factor that fits into the value, (2) remember it and subtract it from the value, (3) repeat with the remaining value and the remaining factors. Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I can finally do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking that the map of factors and digits is comprehensive. That’s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice (e.g. 2000=[M,M]) is processed correctly. Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It’s red because there’s no implementation, not even of the API. That’s in conformance with “TDD lore”, I’d say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution (a rough sketch of them follows below). My hypothesis is that those three functions in conjunction produce correct results on the API level; I just have to implement them correctly. That’s what I’m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that’s a white-box test. But as you’ll see, it won’t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible – just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It’s a pattern: if you need to do something repeatedly, separate the tests for doing it once from the tests for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it into two steps here is just for “educational purposes”. How small your implementation steps are is a matter of your programming competency: some “see” the final code right away before their mind’s eye – others need to work their way towards it. Having the two tests is what I find more important. Now for the next low-hanging fruit: compilation. It’s even simpler than translation. A single test is enough, I guess.
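    Since these implementation steps appear only as screenshots in the original post, here is a rough C# sketch of what the faked top level and the two simpler helpers might look like. The names Translate(), Compile() and Find_factors() come from the article; the exact signatures (working with int and string sequences) and the LINQ-based bodies are my assumptions, and the sketch relies on the Factors array shown earlier:

        // Inside ArabicRomanConversions, next to the Factors array from the sketch above.
        // Requires: using System.Collections.Generic; using System.Linq;

        // Top level, "faked" by delegating to three functions that are not all implemented yet.
        public static string ToRoman(int arabic)
        {
            var factors = Find_factors(arabic);   // e.g. 1954 -> [1000, 900, 50, 4]
            var digits = Translate(factors);      // e.g. [1000, 900, 50, 4] -> ["M", "CM", "L", "IV"]
            return Compile(digits);               // e.g. ["M", "CM", "L", "IV"] -> "MCMLIV"
        }

        // Translation is just a look-up of each factor in the Factors map.
        private static IEnumerable<string> Translate(IEnumerable<int> factors)
        {
            return factors.Select(f => Factors.First(x => x.Value == f).Digit);
        }

        // Compilation simply concatenates the roman "digits".
        private static string Compile(IEnumerable<string> digits)
        {
            return string.Concat(digits);
        }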
    Normally I would not even have bothered to write that compilation test, because the implementation is so simple; I don’t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again I’m faking the implementation first: I focus on just the first test case, no looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with the details of how to actually accomplish the feat. That’s left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor matches, or some later one does. The implementation seems easy, and both test cases are green. (Of course this only works on the premise that there is always a matching factor – which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level is also satisfied. Great, I can move on. Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don’t care. I did the most obvious thing. And I find this loop very simple – even simpler than the recursion I had briefly considered during the problem-solving phase. And by the way: the acceptance tests went green as well. Mission accomplished – at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, though, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren’t finished yet, but this way I saved myself a lot of refactoring tedium. At the end some work is still required, but maybe in a different way than you would expect; that’s why I rather call it “cleanup”. First I remove duplication: there are two places where the factors are defined, in Translate() and in Find_factors(), so I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value, and they are brittle. Only acceptance tests need to remain. However, I carry over the single-“digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This, then, leaves me with my final test class and the final production code (sketched below). Test coverage as reported by NCrunch is 100%.
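    The final production code itself appears only as a screenshot in the original post. Based on the description above, the remaining piece – Find_factors() – might look roughly like this; the loop over the class-constant map is my reconstruction, not the author’s verbatim code:

        // Walk the ordered Factors map: take the highest factor that still fits,
        // subtract it, and repeat until the value is exhausted.
        private static IEnumerable<int> Find_factors(int arabic)
        {
            var factors = new List<int>();
            foreach (var f in Factors)
            {
                while (arabic >= f.Value)   // the same factor may occur repeatedly, e.g. 2000 = [M,M]
                {
                    factors.Add(f.Value);
                    arabic -= f.Value;
                }
            }
            return factors;
        }

    Together with the Factors constant and the ToRoman(), Translate() and Compile() functions from the earlier sketches, that would be the whole production code behind the single public API function.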
    Reflection

    Is this the smallest possible code base for this kata? Surely not. You’ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if it is left unrefactored, as is often the case. That’s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I could also have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree. Where things became fuzzy because the design did not go into more detail, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: (1) encapsulation of the implementation details was not compromised – private methods could naturally stay private; I did not need to make them internal or public just to be able to test them. (2) I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won’t skip them because of that. And there are no shortcuts during implementation because of that.

    Read the article

  • App Stores – In All Things, It’s Quality Over Quantity

    - by D'Arcy Lussier
    Everybody has an opinion about Windows 8. People love it, people hate it, people are meh about it, people are apparently buying it from Microsoft stores in NYC as if it were water before a natural disaster… if there’s one thing Microsoft product launches do well, it’s the ability to bring out strong emotional responses. Over at eweek.com, Don Reisinger wrote about 5 good and bad things about Windows 8. Yes, another opinion piece on Windows 8. I figured that since this one had both good and bad it might be worth reading. I then came across #10 on his list, and figured “What the hell… might as well post a bit of a rant on Windows 8 myself!” Here’s #10:

    10. Bad: Too few apps
    Unfortunately, Microsoft wasn’t able to get too many developers to start producing applications for its Windows 8 Store. Microsoft hasn’t yet released official numbers, but some have said that the marketplace has less than 8,000 programs. Considering Apple’s App Store has 100 times that, it’s about time Microsoft starts leaning on developers to get more programs into its store.

    Believe me, Microsoft *has* been leaning on developers to get apps into the store. I’ve been asked at least five or six times, by five or six different friends at Microsoft, whether I was going to write a Windows 8 app. I think Microsoft felt they had to try to address the number of apps available in their marketplace, since some people (like Don) would draw comparisons to the number of apps in the Apple marketplace. I feel for Microsoft in this, since the number of apps in a marketplace is an empty stat.

    Quality over Quantity

    I have an iPad that my family (wife, 10-year-old daughter, 3-year-old daughter) uses. We all have our own apps installed on it. In addition, my wife has an iPhone 4S that she also installs apps on. As someone whose kids often ask whether they can buy or download an app, I can tell you that the vast majority of that vast iOS catalogue is crap! Do you realize how many “free” games are out there that aren’t really free, because you have to purchase in-game content to make the game actually playable? And how about searching – with such a vast array of apps and so many craptastic ones, trying to find something is incredibly difficult and can be frustrating. I would rather Microsoft have 8,000 high-quality apps in its store at launch than 800,000 that are mostly junk.

    Too Few Apps?!

    And seriously, 8,000 is not a small number. How many iOS apps have I actually bought between the iPad and iPhone? I’ll be generous and say 30… heck, let’s round it up to 40. It’s not like I have 10,000 apps installed on my iPad, nor will that ever happen! So if people have, at the *launch* of a new platform ecosystem, EIGHT THOUSAND apps to choose from, I don’t see that as a fail at all! It should be noted that most of the most common apps (Netflix, Skype, etc.) are available for Windows 8 at launch – I guess I’ll have to wait a few weeks for My Pony Ranch and all its clones to start showing up; pity.

    Let’s Check Back in a Year

    So look, let’s check back in a year’s time and see what the app store looks like. My hope is that Microsoft doesn’t continue to push quantity over quality. Even knowing the optics that the number of apps in the store carries, and the pressure to catch the Apple and Android marketplaces, I hope Microsoft avoids the scenario where a good percentage of the apps in the Windows Store are utter rubbish and finding the gems becomes cumbersome.
    But if that happens, we can thank guys like Don, who raised the false issue of app count at launch, for it.

    Read the article

  • Getting codebaseHQ SVN ChangeLog data in your application

    - by saifkhan
    I deploy apps via ClickOnce. After each deployment we have to review the changes made and send out an email to the users listing them. What I decided to do now is use CodebaseHQ’s API to access a project’s SVN repository and display the commit notes, so users who download new updates can check what was changed or updated in an app. This saves a heck of a lot of time, especially when your apps are in beta and you are making several changes daily based on feedback. You can read up on their API here. Here is a sample of how to access the Repositories API from a Windows app:

    Public Sub GetLog()
        If String.IsNullOrEmpty(_url) Then Exit Sub

        Dim answer As String = String.Empty
        Dim myReq As HttpWebRequest = WebRequest.Create(_url)
        With myReq
            ' CodebaseHQ uses HTTP Basic authentication; replace username:password with your API credentials.
            .Headers.Add("Authorization", String.Format("Basic {0}", Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"))))
            .ContentType = "application/xml"
            .Accept = "application/xml"
            .Method = "POST"
        End With

        Try
            Using response As HttpWebResponse = myReq.GetResponse()
                Using sr As New System.IO.StreamReader(response.GetResponseStream())
                    answer = sr.ReadToEnd()

                    ' Parse the XML response and project each <commit> element into an anonymous type.
                    Dim doc As XDocument = XDocument.Parse(answer)
                    Dim commits = From commit In doc.Descendants("commit") _
                                  Select Message = commit.Element("message").Value, _
                                         AuthorName = commit.Element("author-name").Value, _
                                         AuthoredDate = DateTime.Parse(commit.Element("authored-at").Value).Date

                    ' Bind the commit list to the grid.
                    grdLogData.BeginUpdate()
                    grdLogData.DataSource = commits.ToList()
                    grdLogData.EndUpdate()
                End Using
            End Using
        Catch ex As Exception
            MsgBox(ex.Message)
        End Try
    End Sub

    Read the article
