Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • WPF and Composite Application Library – Missing The Point

    - by David Totzke
    I have a headache and it’s not even 9 AM yet.  Well, ok, it’s nearly ten here now in GMT-5 but it’s before nine somewhere still. Sometimes people will miss the point of something so utterly and completely that one is left wondering how such a person can even dress themselves. Writing an application using WPF and the Composite Application Library (Prism) means that one must learn the various programming idioms common to these frameworks.  The Windows Forms event-driven model simply will not suffice.  You need to come to grips with the idea of a very loosely coupled application.  Concepts that must be absorbed and internalized include Data Binding, Control and Data Templates, Commands, Dependency Injection, and Inversion of Control, as well as the Supervising Controller, Presentation Model and Model-View-ViewModel patterns. It is as simple as that.  Not to embrace these concepts is to invite pain.  It is to invite noodles; and not the holy kind. Someone actually said to me that “just because it’s not WPF, doesn’t mean it’s wrong.”  And he’s right.  Unless, of course, you are writing a WPF application and especially if you are using the Composite Application Library. In simple terms then: YOU’RE DOING IT WRONG!

    Dave
    Just because I can…

    Read the article

  • SharePoint Saturday Michigan Is Coming Up!

    - by Brian Jackett
    Next Saturday March 13th Ann Arbor, MI will be hosting SharePoint Saturday Michigan (SPSMI).  For those unfamiliar, SharePoint Saturday is a community driven event where various regional and national speakers gather to present at a FREE conference on all topics related to SharePoint.  This will be my third SharePoint Saturday and the second one I’ve had the honor of presenting at.  My presentation is titled “Real World Deployment of SharePoint 2007 Solutions” (click here for the SpeakerRate link.)

    After taking a look at the speaker and session list I can tell you with great excitement that this event is packed with great speakers and topics.  Register here and come on out to SharePoint Saturday Michigan on March 13th.  If you’re attending feel free to track me down and say hi.  See you there.

        -Frog Out

    Read the article

  • BizTalk: Dynamic SMTP Port: Unknown Error Description

    - by Leonid Ganeline
    Today I investigated one strange error working with a dynamic SMTP port.

    Event Type: Error
    Event Source: BizTalk Server 2006
    Event Category: BizTalk Server 2006
    Event ID: 5754
    Date: ********
    Time: ********AM
    User: N/A
    Computer: ********
    Description: A message sent to adapter "SMTP" on send port "*********" with URI "mailto:********.com" is suspended. Error details: Unknown Error Description
    MessageId: {********}
    InstanceID: {********}

    My code was pretty simple and the source of the error was hidden somewhere inside it.

    msg_MyMessage(SMTP.CC) = var_CC;
    msg_MyMessage(SMTP.From) = var_From;
    msg_MyMessage(SMTP.Subject) = var_Subject;
    msg_MyMessage(SMTP.EmailBodyText) = var_Message;    // #1
    msg_MyMessage(SMTP.SMTPHost) = "localhost";
    msg_MyMessage(SMTP.SMTPAuthenticate) = 0;

    When I added line #2, this frustrating error disappeared.

    msg_MyMessage(SMTP.EmailBodyTextCharset) = "UTF-8"; // #2

    Conclusion: If we use the SMTP.EmailBodyText property, we must also set the SMTP.EmailBodyTextCharset property. To me it looks like a bug in BizTalk. [Maybe it is "by design", but in that case give us a useful error text!!!] And don't ask me how much time I've spent on this investigation.

    Read the article

  • Less is more: The making of a 37 signals style pizza

    - by Liam McLennan
    For years now we have been hearing from 37signals that the way to bake a great web app is to build less – well, the same is true of pizza. Our western hedonism has led us to pursue ever cheesier and more stuffed crusts at the expense of the simple flavours. All we are left with is a fatty, salty heart attack in waiting. The Italians know that the secret to great taste is simplicity. With that in mind I decided to base my pizza masterpiece on these simple flavours:

    tomato
    sopressa (spicy aged salami)
    mozzarella
    garlic
    basil

    Of course the first thing one needs when making pizza is a base.  A freshly made base is extremely important but unfortunately I was too lazy. Next up is the tomato sauce. My wife made the sauce by reducing some tomatoes and adding herbs and sugar. We had selected some fine ingredients to make our topping: sopressa salami, fresh basil and the best mozzarella we could find.

    It is, according to Google, important to bake pizza at a high temperature, so I set the oven to 250C (480F). Here are the before and after shots:

    Meanwhile, the dog did nothing.

    Read the article

  • Selectively Exposing Functionality in .NET

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software. Consider the exposure of "business objects" to the user interface. Some common situations occur:

    - Accessing a given element requires a compound set of calls that do not "make sense" to the user interface.
    - More information than absolutely required is exposed to the user interface.

    It would be much cleaner if a custom interface was provided that exposed exactly (and only) the information that is required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI on this approach can be very difficult to ascertain, and as a result it is often ignored completely. There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:

        interface ISomeInterface
        {
            void SampleMethod1(string param);
            string SampleMethod2(string param);
        }

        class SomeImplementation
        {
            public Action<string> SampleMethod1 { get; }
            public Func<string, string> SampleMethod2 { get; }
        }

    The capabilities this simple change enables are significant (and remember it does not change the syntax at the call site):

    - The delegates can be initialized to directly call the proper method of any target class.
    - The delegates can be dynamically updated based on the current state.
    - The "interface" can NOT be cast to the concrete class (which often exposes more functionality).

    By limiting the interface to the exact functionality required, the reduced surface area will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this (and many other) techniques which have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published.

    1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992
    2) For those who read my previous post on performance, it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
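    To make the idea concrete, here is a minimal sketch of the first capability above (my own illustration, not from the original post; the Customer* type and member names are invented), showing how such delegate properties might be wired to a target class at construction time:

        using System;

        // Hypothetical consumer-facing facade: exposes only what the UI needs.
        public class CustomerFacade
        {
            public Action<string> UpdateName { get; private set; }
            public Func<string, string> GetGreeting { get; private set; }

            public CustomerFacade(CustomerService target)
            {
                // Each delegate is pointed directly at the appropriate member
                // of the target class; the facade exposes nothing else.
                UpdateName = target.RenameCustomer;
                GetGreeting = target.BuildGreeting;
            }
        }

        public class CustomerService
        {
            private string name = "unknown";
            public void RenameCustomer(string newName) { name = newName; }
            public string BuildGreeting(string salutation) { return salutation + ", " + name; }
        }

    A caller simply invokes facade.UpdateName("Jones") or facade.GetGreeting("Hello"), exactly as it would call methods declared on an interface, but it cannot cast the facade back to CustomerService to reach the rest of its functionality.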

    Read the article

  • Philly GiveCamp 2010

    - by wulfers
    Spent the weekend helping out several non-profits doing what we like to do best... designing, developing and making people very happy with their new websites, systems, applications and features.  From what I saw at this GiveCamp, about 75 percent of the non-profits needed updated or new websites supporting CMS features and the ability for staff members to update the content on their websites. Some cool apps were designed and developed: a centralized system for distribution of daily schedules and task assistance to autistic clients was a showstopper, with an awesome central management interface and iPhone (and, in the future, Windows Phone) support for tactile, auditory and visual cues. SharePoint was upgraded and forms were automated for a volunteer fire company that desperately needed some automation to help the firemen do their primary job. And there were many cool sites for non-profits that had either an outdated or nonexistent web presence.

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates.  The power of writing very reusable generic classes brought the art of programming to a brand new level.  Unfortunately, when .NET 1.0 came about, it didn’t have a template equivalent.  With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins.  There is a handy clause, however, that helps you navigate these waters to make your generics more powerful.

    The Problem – C# Assumes Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods/fields/operations invoked are valid until you declare a realization of the type.  Let me illustrate with a C++ example:

        // compiles fine, C++ makes no assumptions as to T
        template <typename T>
        class ReverseComparer
        {
        public:
            int Compare(const T& lhs, const T& rhs)
            {
                return rhs.CompareTo(lhs);
            }
        };

    Notice that we are invoking a method CompareTo() off of template type T.  Because we don’t know at this point what type T is, C++ makes no assumptions and there are no errors.  C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#:

        // this will NOT compile! C# assumes lowest common denominator.
        public class ReverseComparer<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    So why does C# give us a compiler error even when we don’t yet know what type T is?  This is because C# took a different path in how it implemented generics.  Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn’t say T is an object).  That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available on the lowest common denominator type: object.  Now, while object has the broadest applicability, it also has the fewest specifics.  So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type With the Where Clause

    So how do we get around this in C#?  The answer is to constrain the generic type placeholder with the where clause.  Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support.

    You might think that narrowing the scope of a generic means a weaker generic.  In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types.  In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

    Constraining Generic Type to Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or inherit from a certain base class.
    For example, you can’t call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

        public class ReverseComparer<T>
            where T : IComparable<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    Now that we’ve constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well.  This means that the call to CompareTo() is now legal.  Also, if you constrain your type, you will get a compiler error if you attempt to use a type that doesn’t meet the constraint.  This is much better than the syntax error you would get within the C++ template code itself when you used a type not supported by a C++ template.

    Constraining Generic Type to Only Reference Types

    Sometimes, you want to assign an instance of a generic type to null, but you can’t do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless.  Well, we can fix this by specifying the class constraint in the where clause.  By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

        public static class ObjectExtensions
        {
            public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
                where TOut : class
                where TIn : class
            {
                return (value != null) ? accessor(value) : null;
            }
        }

    In the example above, we want to be able to access a property off of a reference, and if that reference is null, pass the null on down the line.  To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there’s not a direct constraint for those).

    Constraining Generic Type to Only Value Types

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type.  To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.).  Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or null if the instance is null:

        public static T? ConvertToNullable<T>(IConvertible value)
            where T : struct
        {
            T? result = null;

            if (value != null)
            {
                result = (T)Convert.ChangeType(value, typeof(T));
            }

            return result;
        }

    Because T is constrained to be a value type, we can use T? (System.Nullable<T>) where we could not if T were a reference type.

    Constraining Generic Type to Require Default Constructor

    You can also constrain a type to require the existence of a default constructor.  Because by default C# doesn’t know what constructors a generic type placeholder does or does not have available, it can’t typically allow you to call one.  That said, if you give it the new() constraint, the type used to realize the generic type must have a default (no-argument) constructor.  Let’s assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.
    Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

        // Given a set of Action<TFrom,TTo> mappings, will map TFrom to TTo.
        // (Requires System, System.Collections and System.Collections.Generic.)
        public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
            where TTo : class, new()
        {
            // The list of translations from TFrom to TTo
            public List<Action<TFrom, TTo>> Translations { get; private set; }

            // Construct with an empty translation set.
            public Adapter()
            {
                // did this instead of auto-properties to allow simple use of initializers
                Translations = new List<Action<TFrom, TTo>>();
            }

            // Add a translator to the collection, useful for initializer list
            public void Add(Action<TFrom, TTo> translation)
            {
                Translations.Add(translation);
            }

            // Add a translator that first checks a predicate to determine if the translation
            // should be performed, then translates if the predicate returns true
            public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
            {
                Translations.Add((from, to) =>
                {
                    if (conditional(from))
                    {
                        translation(from, to);
                    }
                });
            }

            // Translates an object forward from a TFrom object to a TTo object.
            public TTo Adapt(TFrom sourceObject)
            {
                var resultObject = new TTo();

                // Process each translation
                Translations.ForEach(t => t(sourceObject, resultObject));

                return resultObject;
            }

            // Returns an enumerator that iterates through the collection.
            public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
            {
                return Translations.GetEnumerator();
            }

            // Returns an enumerator that iterates through a collection.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Notice, however, that you can’t specify any other constructor; you can only specify that the type has a default (no-argument) constructor.

    Summary

    The where clause is an excellent tool that gives your .NET generics even more power to perform tasks beyond the base "object level" behavior.  There are a few things you cannot (currently) specify with constraints, though:

    - Cannot specify that the generic type must be an enum.
    - Cannot specify that the generic type must have a certain property or method without specifying a base class or interface – that is, you can’t say that the generic must have a Start() method.
    - Cannot specify that the generic type allows arithmetic operations.
    - Cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a generic definition with different, opposing constraints.  For example, you can’t define an Adapter<T> where T : struct and an Adapter<T> where T : class.  Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user friendly and more powerful!

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
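    As a quick usage sketch (my own example, not from the original post; the Person/PersonDto types are invented, and it assumes the Adapter<TFrom, TTo> class above is in scope), the collection-initializer-friendly Add overloads and the new() constraint come together like this:

        using System;
        using System.Collections.Generic;

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        public class PersonDto
        {
            public string DisplayName { get; set; }
            public bool IsAdult { get; set; }
        }

        public static class AdapterDemo
        {
            public static void Main()
            {
                // The collection initializer calls the Add() overloads defined on Adapter.
                var adapter = new Adapter<Person, PersonDto>
                {
                    (p, dto) => dto.DisplayName = p.Name,
                    { p => p.Age >= 18, (p, dto) => dto.IsAdult = true }
                };

                // Adapt() can call new TTo() only because of the new() constraint.
                PersonDto result = adapter.Adapt(new Person { Name = "Ada", Age = 36 });
                Console.WriteLine(result.DisplayName + " adult: " + result.IsAdult);
            }
        }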

    Read the article

  • Goals for 2010 Retrospective

    - by Brian Jackett
    As we approach the end of 2010 I’d like to take a few minutes to reflect back on this past year and revisit the goals that I set for myself at the beginning of the year (click here to see those goals).  I feel it is important to track your goals not only to see if you accomplished them but also to see what new directions in life you pursued.  Once we enter into 2011 I’ll follow up with a new post on goals for the new year.

    Professional

    Blog – This year I intended to write at least 2 posts a month.  Looking back I far surpassed that goal by writing 47 posts (this one being my 48th).  As with many things in life, quantity does not mean quality.  A good example is a number of my posts announcing upcoming speaking engagements and providing links to presentation slides and scripts.  That aside, I like to at least keep content relatively fresh on this blog, which I was able to accomplish.  At the same time I’ve gotten much more comfortable in my blogging style and it has become much easier to write.

    Speaking – I didn’t define a clear goal for speaking engagements, but had a rough idea of wanting to speak at 2-3 events.  Once again I far exceeded that number by speaking at 10 separate events and delivering 12+ presentations.  I’m very thankful for all of the opportunities that I was given and all of the wonderful people I have met as a result.

    Volunteering – This year I intended to help out with the COSPUG (now Buckeye SPUG) steering committee and Stir Trek conference.  I fulfilled both goals as well as taking on lead organizer duties for the first ever SharePoint Saturday Columbus.  Each of these events and groups turned out to be successful and I was glad to be a part of them all.  I look forward to continuing to volunteer with each next year in some capacity.

    Android Development – My goal for getting into Android development was a late addition, but one I didn’t necessarily fulfill.  I spent a couple nights downloading the tools, configuring my environment, and going through some “simple” tutorials.  I say “simple” because in my opinion the tutorials were not laid out very well, took a long time to get running properly, and confused me more than helped.  After about a week I was frustrated with the process and didn’t think it was a good use of my time.  On a side note, I’ve dabbled in Windows Phone 7 development over the past few months and have been very excited by how easy and intuitive it was to get started and develop some proof of concepts.

    Personal

    Getting in Shape – I had intended to play on recreational sports leagues and work out on a semi-regular basis.  For the most part I fulfilled this goal by playing on various softball and volleyball leagues as well as using the gym.  At the same time I had some major setbacks.  In the spring I badly sprained my ankle and got hit in the knee with a softball which kept me inactive for almost 2 months.  More recently I broke my knuckle (click here to read about it) which I am still recovering from.

    Volunteering – On the volunteering front I kept my commitments at my parish’s high school youth group.  As for other volunteering opportunities I got involved with a great organization called Columbus Gives Back (website).  I’ve volunteered with them a few times and really enjoy their goal to provide opportunities to people with busy schedules.  They offer a variety of events, typically after work hours and spread out around Columbus, with no set commitments on the time you need to put in.  If you have the time or motivation I highly recommend them.
    House/Condo – I had been thinking of buying a house or condo this past summer, but decided to extend my apartment lease for another year instead.  I have begun the search for a place in the past few weeks and am excited to begin the process of owning a home.

    Conclusion

    This year I was able to set and achieve many of my goals.  For next year I’ll try to put more specific numbers to all of my goals.  If any of you readers set goals for 2011 feel free to send me a link as I’d love to see what you are aiming to accomplish.  Have a great end of 2010 and best wishes for the start of 2011!

        -Frog Out

    Read the article

  • Installing all the bits to demo Entity Framework 4 on the Visual Studio 2010 Release Candidate

    - by Eric Nelson
    Next week (17th March 2010) I am presenting on EF4 at www.devweek.com in London (and Azure on the 18th). Today I wanted to get all the latest bits on my demo machine and also check if there are any cool new resources I can point people at. Whilst most of the new improvements in Entity Framework come with the Visual Studio 2010 RC (and the RTM), there are a couple of separate items you need to install if you want to explore all the features.

    To demo EF4 you need:

    Visual Studio 2010 RC
    Download and install the Visual Studio 2010 Release Candidate. In my case I went with the Ultimate Edition but it will work fine on Premium and Professional.

    POCO Templates
    See the team blog post for a detailed explanation. Use the Extension Manager inside Visual Studio 2010 and install the updated POCO templates for either C# or VB (or both if you are so inclined!).

    Code First
    Next you will also need to install Code First (formerly called Code Only). This is part of the Entity Framework Feature CTP 3. See the team blog post for a detailed explanation. Download the CTP from Microsoft downloads and run the setup. This will give you a new dll for Code First.

    Optionally (but I recommend it) install LINQPad for the RC
    Download LINQPad Beta for .NET 4.0

    Related Links
    101 EF4 Resources

    Read the article

  • Live examples of the Windows Azure Platform running Java and Ruby on Rails

    - by Eric Nelson
    At QCon in March we had a booth focused on interoperability, out of which came the idea to create an application implemented in both Java and Ruby on Rails, running on top of the Windows Azure Platform. Nothing fancy, just an application to capture attendees’ views on Microsoft and interoperability. It was implemented by Active Web Solutions, long time fans of Azure. Worth a quick squint :-) Check out the related links below for info to get you up and running.

    Check out:
    The Java version http://ukinterop.cloudapp.net/
    The Ruby on Rails version http://rubyukinterop.cloudapp.net/ (ran out of time to finish this one)

    Related Links:
    Tomcat Solution Accelerator http://code.msdn.microsoft.com/winazuretomcat
    AzureRunMe http://azurerunme.codeplex.com/
    UK Azure Online Community – join today.
    UK Windows Azure Site
    Start working with Windows Azure

    Read the article

  • C#/.NET Little Wonders: Comparer<T>.Default

    - by James Michael Hare
    I’ve been working with a wonderful team on a major release where I work, which has had the side-effect of occupying most of my spare time preparing, testing, and monitoring.  However, I do have this Little Wonder tidbit to offer today.

    Introduction

    The IComparable<T> interface is great for implementing a natural order for a data type, and the related IComparer<T> interface is how you define a comparison externally.  It’s a very simple interface with a single method:

        public interface IComparer<in T>
        {
            // Compare two instances of same type.
            int Compare(T x, T y);
        }

    So what do we expect for the integer return value?  It’s a pseudo-relative measure of the ordering of x and y, which returns an integer value in much the same way C++ returns an integer result from the strcmp() C-style string comparison function:

    - If x == y, returns 0.
    - If x > y, returns > 0 (often +1, but not guaranteed).
    - If x < y, returns < 0 (often -1, but not guaranteed).

    Notice that the comparison operator used to evaluate against zero should be the same comparison operator you’d use as the comparison operator between x and y.  That is, if you want to see if x > y you’d see if the result > 0.

    The Problem: Comparing With null Can Be Messy

    This gets tricky though when you have null arguments.  According to the MSDN, a null value should be considered equal to a null value, and a null value should be less than a non-null value.  So taking this into account we’d expect this instead:

    - If x == y (or both null), return 0.
    - If x > y (or y only is null), return > 0.
    - If x < y (or x only is null), return < 0.

    But here’s the problem – if x is null, what happens when we attempt to call CompareTo() off of x?

        // what happens if x is null?
        x.CompareTo(y);

    It’s pretty obvious we’ll get a NullReferenceException here.  Now, we could guard against this before calling CompareTo():

        int result;

        // first check to see if lhs is null.
        if (x == null)
        {
            // if lhs null, check rhs to decide on return value.
            if (y == null)
            {
                result = 0;
            }
            else
            {
                result = -1;
            }
        }
        else
        {
            // CompareTo() should handle a null y correctly and return > 0 if so.
            result = x.CompareTo(y);
        }

    Of course, we could shorten this with the ternary operator (?:), but even then it’s ugly, repetitive code:

        int result = (x == null)
            ? ((y == null) ? 0 : -1)
            : x.CompareTo(y);

    Fortunately, the null issues can be cleaned up by drafting in an external comparer.

    The Solution: Comparer<T>.Default

    You can always develop your own instance of IComparer<T> for the job of comparing two items of the same type.  The nice thing about an IComparer is that it is independent of the things you are comparing, so this makes it great for comparing in an alternative order to the natural order of items, or when one or both of the items may be null.

        public class NullableIntComparer : IComparer<int?>
        {
            public int Compare(int? x, int? y)
            {
                return (x == null)
                    ? ((y == null) ? 0 : -1)
                    : x.Value.CompareTo(y);
            }
        }

    Now, if you want a custom sort -- especially on large-grained objects with different possible sort fields -- this is the best option you have.  But if you just want to take advantage of the natural ordering of the type, there is an easier way.
    If the type you want to compare already implements IComparable<T>, or if the type is System.Nullable<T> where T implements IComparable, there is a class in the System.Collections.Generic namespace called Comparer<T> which exposes a property called Default that will create a singleton that represents the default comparer for items of that type.  For example:

        // compares integers
        var intComparer = Comparer<int>.Default;

        // compares DateTime values
        var dateTimeComparer = Comparer<DateTime>.Default;

        // compares nullable doubles using the null rules!
        var nullableDoubleComparer = Comparer<double?>.Default;

    This helps you avoid having to remember the messy null logic and makes it easy to compare objects where you don’t know if one or more of the values is null.  This works especially well when creating, say, an IComparer<T> implementation for a large-grained class that may or may not contain a value.  For example, let’s say you want to create a sorting comparer for a stock open price, but if the market the stock is trading in hasn’t opened yet, the open price will be null.  We could handle this (assuming a reasonable Quote definition) like:

        public class Quote
        {
            // the opening price of the symbol quoted
            public double? Open { get; set; }

            // ticker symbol
            public string Symbol { get; set; }

            // etc.
        }

        public class OpenPriceQuoteComparer : IComparer<Quote>
        {
            // Compares two quotes by opening price
            public int Compare(Quote x, Quote y)
            {
                return Comparer<double?>.Default.Compare(x.Open, y.Open);
            }
        }

    Summary

    Defining a custom comparer is often needed for non-natural ordering or for defining alternative orderings, but when you just want to compare two items that are IComparable<T> and account for null behavior, you can use the Comparer<T>.Default comparer generator and you’ll never have to worry about correct null value sorting again.

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,IComparable,Comparer
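    As a quick usage sketch (my own example, not from the original post; it assumes the Quote and OpenPriceQuoteComparer types above are in scope), the comparer plugs straight into List<T>.Sort(), with null opening prices ordered first:

        using System;
        using System.Collections.Generic;

        public static class QuoteSortDemo
        {
            public static void Main()
            {
                var quotes = new List<Quote>
                {
                    new Quote { Symbol = "AAA", Open = 10.5 },
                    new Quote { Symbol = "BBB", Open = null },  // market not open yet
                    new Quote { Symbol = "CCC", Open = 9.25 }
                };

                // Uses Comparer<double?>.Default under the hood, so nulls sort first.
                quotes.Sort(new OpenPriceQuoteComparer());

                foreach (var q in quotes)
                {
                    Console.WriteLine(q.Symbol + ": " + (q.Open.HasValue ? q.Open.ToString() : "n/a"));
                }
                // Output order: BBB (null), CCC (9.25), AAA (10.5)
            }
        }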

    Read the article

  • RIF PRD: Presentation syntax issues

    - by Charles Young
    Over Christmas I got to play a bit with the W3C RIF PRD and came across a few issues which I thought I would record for posterity. Specifically, I was working on a grammar for the presentation syntax using a GLR grammar parser tool (I was using the current CTP of ‘M’ (MGrammar) and Intellipad – I do so hope the MS guys don’t kill off M and Intellipad now they have dropped the other parts of SQL Server Modelling). I realise that the presentation syntax is non-normative and that any issues with it do not therefore compromise the standard. However, presentation syntax is useful in its own right, and it would be great to iron out any issues in a future revision of the standard. The main issues are actually not to do with the grammar at all, but rather with the ‘running example’ in the RIF PRD recommendation. I started with the code provided in Example 9.1. There are several discrepancies when compared with the EBNF rules documented in the standard. Broadly the problems can be categorised as follows:

    ·      Parenthesis mismatch – the wrong number of parentheses are used in various places. For example, in GoldRule, the RHS of the rule (the ‘Then’) is nested in the LHS (the ‘If’). In NewCustomerAndWidgetRule, the RHS is orphaned from the LHS. Together with additional incorrect parentheses, this leads to the orphanage of UnknownStatusRule from the entire Document.

    ·      Invalid use of parentheses in ‘Forall’ constructs. Parentheses should not be used to enclose formulae. Removal of the invalid parentheses gave me a feeling of inconsistency when comparing formulae in Forall to formulae in If. The use of parentheses is not actually inconsistent in these two contexts, but in an If construct it ‘feels’ as if you are enclosing formulae in parentheses in a LISP-like fashion. In reality, the parenthesis is simply being used to group subordinate syntax elements. The fact that an If construct can contain only a single formula as an immediate child adds to this feeling of inconsistency.

    ·      Invalid representation of compact URIs (CURIEs) in the context of Frame productions. In several places the URIs are not qualified with a namespace prefix (‘ex1:’). This conflicts with the definition of CURIEs in the RIF Datatypes and Built-Ins 1.0 document. Here are the productions:

    CURIE          ::= PNAME_LN
                     | PNAME_NS
    PNAME_LN       ::= PNAME_NS PN_LOCAL
    PNAME_NS       ::= PN_PREFIX? ':'
    PN_LOCAL       ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* PN_CHARS)?
    PN_CHARS       ::= PN_CHARS_U
                     | '-' | [0-9] | #x00B7
                     | [#x0300-#x036F] | [#x203F-#x2040]
    PN_CHARS_U     ::= PN_CHARS_BASE
                     | '_'
    PN_CHARS_BASE  ::= [A-Z] | [a-z] | [#x00C0-#x00D6] | [#x00D8-#x00F6]
                     | [#x00F8-#x02FF] | [#x0370-#x037D] | [#x037F-#x1FFF]
                     | [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF]
                     | [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD]
                     | [#x10000-#xEFFFF]
    PN_PREFIX      ::= PN_CHARS_BASE ((PN_CHARS|'.')* PN_CHARS)?

    The more I look at CURIEs, the more my head hurts! The RIF specification allows prefixes and colons without local names, which surprised me. However, the CURIE Syntax 1.0 working group note specifically states that this form is supported…and then promptly provides a syntactic definition that seems to preclude it! However, on (much) deeper inspection, it appears that ‘ex1:’ (for example) is allowed, but would really represent a ‘fragment’ of the ‘reference’, rather than a prefix! Ouch!
    This is so completely ambiguous that it surely calls into question the whole CURIE specification. In any case, RIF does not allow local names without a prefix.

    ·      Missing ‘External’ specifiers for built-in functions and predicates. The EBNF specification enforces this for terms within frames, but does not appear to enforce (what I believe is) the correct use of External on built-in predicates. In any case, the running example only specifies ‘External’ once, on the predicate in UnknownStatusRule. External() is required in several other places.

    ·      The List used on the LHS of UnknownStatusRule is comma-delimited. This is not supported by the EBNF definition. Similarly, the argument list of pred:list-contains is illegally comma-delimited.

    ·      Unnecessary use of conjunction around a single formula in DiscountRule. This is strictly legal in the EBNF, but redundant.

    All the above issues concern the presentation syntax used in the running example. There are a few minor issues with the grammar itself. Note that Michael Kiefer stated in his paper “Rule Interchange Format: The Framework” that: “The presentation syntax of RIF … is an abstract syntax and, as such, it omits certain details that might be important for unambiguous parsing.”

    ·      The grammar cannot differentiate unambiguously between strategies and priorities on groups. A processor is forced to resolve this by detecting the use of IRIs and integers. This could easily be fixed in the grammar.

    ·      The grammar cannot unambiguously parse the ‘->’ operator in frames. Specifically, ‘-’ characters are allowed in PN_LOCAL names and hence a parser cannot determine if ‘status->’ is (‘status’ ‘->’) or (‘status-’ ‘>’). One way to fix this is to amend the PN_LOCAL production as follows:

    PN_LOCAL ::= ( PN_CHARS_U | [0-9] ) ((PN_CHARS|'.')* ((PN_CHARS)-('-')))?

    However, unilaterally changing the definition of this production, which is defined in the SPARQL Query Language for RDF specification, makes me uncomfortable.

    ·      I assume that the presentation syntax is case-sensitive. I couldn’t find this stated anywhere in the documentation, but function/predicate names do appear to be documented as being case-sensitive.

    ·      The EBNF does not specify whitespace handling. A couple of productions (RULE and ACTION_BLOCK) are crafted to enforce the use of whitespace. This is not necessary. It seems inconsistent with the rest of the specification and can cause parsing issues. In addition, the Const production exhibits whitespace issues. The intention may have been to disallow the use of whitespace around ‘^^’, but any direct implementation of the EBNF will probably allow whitespace between ‘^^’ and the SYMSPACE.

    Of course, I am being a little nit-picking about all this. On the whole, the EBNF translated very smoothly and directly to ‘M’ (MGrammar) and proved to be fairly complete. I have encountered far worse issues when translating other EBNF specifications into usable grammars. I can’t imagine there would be any difficulty in implementing the same grammar in Antlr, COCO/R, gppg, XText, Bison, etc.

    A general observation, which repeats a point made above, is that the use of parentheses in the presentation syntax can feel inconsistent and unintuitive. It isn’t actually inconsistent, but I think the presentation syntax could be improved by adopting braces, rather than parentheses, to delimit subordinate syntax elements in a similar way to so many programming languages.
    The familiarity of braces would communicate the structure of the syntax more clearly to people like me. If braces were adopted, parentheses could be retained around ‘var (frame | ‘new()’)’ constructs in action blocks. This use of parentheses feels very LISP-like, and I think that this is my issue. It’s as if the presentation syntax represents the deformed love-child of LISP and C. In some places (specifically, action blocks), parenthesis is used in a LISP-like fashion. In other places it is used like braces in C. I find this quite confusing. Here is a corrected version of the running example (Example 9.1) in compliant presentation syntax:

    Document(
      Prefix( ex1 <http://example.com/2009/prd2> )
      (* ex1:CheckoutRuleset *)
      Group rif:forwardChaining (
        (* ex1:GoldRule *)
        Group 10 (
          Forall ?customer such that And(?customer # ex1:Customer
                                         ?customer[ex1:status->"Silver"])
            (Forall ?shoppingCart such that ?customer[ex1:shoppingCart->?shoppingCart]
               (If Exists ?value (And(?shoppingCart[ex1:value->?value]
                                      External(pred:numeric-greater-than-or-equal(?value 2000))))
                Then Do(Modify(?customer[ex1:status->"Gold"])))))
        (* ex1:DiscountRule *)
        Group (
          Forall ?customer such that ?customer # ex1:Customer
            (If Or( ?customer[ex1:status->"Silver"]
                    ?customer[ex1:status->"Gold"])
             Then Do ((?s ?customer[ex1:shoppingCart-> ?s])
                      (?v ?s[ex1:value->?v])
                      Modify(?s [ex1:value->External(func:numeric-multiply (?v 0.95))]))))
        (* ex1:NewCustomerAndWidgetRule *)
        Group (
          Forall ?customer such that And(?customer # ex1:Customer
                                         ?customer[ex1:status->"New"] )
            (If Exists ?shoppingCart ?item
                       (And(?customer[ex1:shoppingCart->?shoppingCart]
                            ?shoppingCart[ex1:containsItem->?item]
                            ?item # ex1:Widget ) )
             Then Do( (?s ?customer[ex1:shoppingCart->?s])
                      (?val ?s[ex1:value->?val])
                      (?voucher ?customer[ex1:voucher->?voucher])
                      Retract(?customer[ex1:voucher->?voucher])
                      Retract(?voucher)
                      Modify(?s[ex1:value->External(func:numeric-multiply(?val 0.90))]))))
        (* ex1:UnknownStatusRule *)
        Group (
          Forall ?customer such that ?customer # ex1:Customer
            (If Not(Exists ?status
                           (And(?customer[ex1:status->?status]
                                External(pred:list-contains(List("New" "Bronze" "Silver" "Gold") ?status)) )))
             Then Do( Execute(act:print(External(func:concat("New customer: " ?customer))))
                      Assert(?customer[ex1:status->"New"]))))
      )
    )

    I hope that helps someone out there :-)

    Read the article

  • How do I run (execute) a .bin file in Linux (Ubuntu)?

    - by Paula DiTallo
    If you are on a desktop version of Ubuntu, you can right-click on the file icon, click the permissions tab and click on "allow execution". If you are on a server copy without the desktop bells and whistles (or you would rather work with a command line in a terminal window), then do the following:

    sudo chmod +x myProgram.bin

    After you enter your password and get the prompt back, type:

    ./myProgram.bin

    Read the article

  • Github Project Wiki: separate page filename from page title

    - by Marko Apfel
    Starting Point

    In an earlier version of the Github project wiki there was a separation between the page filename and the page title. So we built up our wiki in such a manner that some well-defined prefixes in the filename describe the overall context of the particular page.

    Sample: on the pages themselves we used a title tag at the top of the page to get the title in the rendered HTML page – here “= Tabelle: Anhänge bzw. Attachments” for the page with the filename “data+table+Attachment”. We see: there is a file “data+table+Attachment” and a title tag “= Tabelle: Anhänge bzw. Attachments”, as well as a rendering with the title “Tabelle: Anhänge bzw. Attachments”. This was fine.

    Problem

    Now the Github wiki uses the title of the page as the filename, and vice versa. This ends up in a cluttered file system and also in suppressed titles on the pages themselves. So for a page with the title tag “= Organisation: IT-Infrastruktur”, the tag is no longer rendered. Instead the filename “organisation+IT Infrastructure” is chosen as the title for the page. That sucks.

    Solution

    I reported this to Github again and hope for a fix.

    Read the article

  • Connect Team Foundation Service/TFS 2012 with Visual Studio 2010 & Visual Studio 2008

    - by Vishal
    Hello,

    Microsoft finally released the Team Foundation Service in late October 2012 after its long time in the preview phase. I was already using the TFS Preview, which was free, but I was happy to see Microsoft releasing the Team Foundation Service also FREE for up to 5 users. Isn't that great news? I know there are a bunch of other free source control repositories (Github, Bitbucket, SVN etc.) out there but I somehow like TFS better. Also the other good thing about the final release was that I didn't have to do any kind of migration of my code from preview to final release version. Just changed the TFS connection URL and it worked like a charm.

    Anyways, if you are a startup with a small team and need some awesome source control along with all the good Project Management, Continuous Integration (Build, Test, Deploy), Team Collaboration, Agile/Scrum planning etc. features, then Team Foundation Service is your answer. Microsoft has not yet released their pricing for more than 5 users and will be releasing it sometime in early 2013. What if as of now you have a team of more than 5 users and you want to use Team Foundation Service? The good news is you can use it for FREE, but when they release the final pricing, you will have to transition to the paid plan.

    Lot of story, getting to the point: connecting to Team Foundation Service with Visual Studio 2012 is straightforward and works out of the box, but it won't for previous versions of Visual Studio. You will have to upgrade to the latest service pack first and then install the forward compatibility pack (1st: Service Packs & 2nd: Forward Compatibility packs).

    For Visual Studio 2010:
    Visual Studio 2010 Service Pack 1.
    Visual Studio 2010 forward compatibility for TFS 2012 and Team Foundation Service.

    For Visual Studio 2008:
    Visual Studio 2008 Service Pack 1.
    Visual Studio 2008 forward compatibility for TFS 2012 & Team Foundation Service.
    Restart your system.

    Visual Studio 2008 will not work if you only put https://xxx.visualstudio.com. You will have to include your collection name too.

    By the way, it doesn't matter if you are an Apple application developer or an Android app developer, you can still use Team Foundation Service as your source control. Below are a few links to connect to Team Foundation Service with other IDEs:

    Connect Eclipse to Team Foundation Service.
    Connect XCode to Team Foundation Service.

    Happy coding.

    Vishal Mody

    Read the article

  • Building apps that work Together

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/07/03/building-apps-that-work-together.aspx

    Writing apps that stand alone will only get you so far.  If your app can allow the user to leverage other applications and share data, you can have a real winner on your hands.

    Jake Sabulsky started off by explaining that you should be concentrating on the core functionality of your app and letting the framework take care of the features that users require these days.  This is implemented by leveraging contracts.  When Windows 8 was released it included the File, Share and Pickers contracts.  With the release of Windows 8.1 they have added the Contacts and Calendar contracts.

    There have been a number of improvements to the original contracts.  The File URI contract will now automatically detect the size at which a new window should be opened and will also allow you to programmatically influence new window size.  The Share contract has been enhanced by allowing apps to always share screenshots and links to the app in the store.

    To my thinking the contracts are one of the most powerful features of Windows 8.  Take the time to view this session and learn how to leverage them.

    Technorati Tags: BUILD 2013,Windows 8,Live tiles

    Read the article

  • Dallas First Regionals 2012 – For Inspiration and Recognition of Science and Technology

    - by T
    Wow!  That is all I have to say after the last 3 days. Three full fun-filled days in a world that fed the geek, sparked the competitor, inspired the humanitarian, encouraged the inventor, and continuously warmed my heart.  As part of the Dallas First Regionals, I was awed by incredible students who teach as much as they learn, inventive and truly caring mentors that make mentoring look easy, and completely passionate and dedicated volunteers that bring meaning to giving all that you have and making events fun and safe for all.  If you have any interest in innovation, robotics, or highly motivated students, I can’t recommend anything any higher than visiting a First Robotics event.

    This is my third year with First and I was honored enough to serve the Dallas First Regionals as both a Web Site evaluator and as the East Field Volunteer Coordinator.  This was also the first year my daughter volunteered with me.  My daughter and I both recognize how different the First program is from other team events.  The difference with First is that everyone is a first class citizen.  It is a difference we can feel through experiencing and observing interactions between executives, respected engineers, students, and event staff.  Even with a fierce competition, you still find a lot of cooperation and we never witnessed any belittling between teams or individuals.

    First Robotics coined the term “Gracious Professionalism”.  It's a way of doing things that encourages high-quality work, emphasizes the value of others, and respects individuals and the community.1  I was introduced to this term as the Volunteer Coordinator when I was preparing the volunteer instructional speech.  Through the next few days, I discovered it is truly core to everything that First does.  One of the ways First accomplishes Gracious Professionalism is by utilizing another term they came up with, which is “CoopertitionTM”. At FIRST, CoopertitionTM is displaying unqualified kindness and respect in the face of fierce competition.1  One of the things I never liked about sports was the need to “destroy” the other team.  First has really found a way to integrate CoopertitionTM into the rules, so teams can be competitive and part of that competition rewards cooperation.

    Oh, and did I mention it has ROBOTS!!!  This year it had basketball-playing, Kinect-connected, remote-controlled robots!  There are no words for how exciting these games are.  You HAVE to check out this year's game, Rebound Rumble, on YouTube, or you can view live action on Dallas First Video (as of this posting, the recordings haven't been posted yet but should be there soon).  There are also some images below.

    Whatever it is, these students get it and exemplify it, and these mentors ooze with it.  I am glad that First supports these events so we can all learn and be inspired by these exceptional students and mentors.  I know that no matter how much I give, it will never compare to what I gain volunteering at First.  Even if you don’t have time to volunteer, you owe it to yourself to go check out one of these events.  See what the future holds, be inspired, be encouraged, take some knowledge, leave some knowledge and most of all, have FUN.  That is what First is all about and thanks to First, I get it.

    First Dallas Regionals 2012

    1.  USFirst http://www.usfirst.org/aboutus/gracious-professionalism

    Read the article

  • Silverlight Cream for November 27, 2011 -- #1176

    - by Dave Campbell
    In this Issue: Matt Eland, Parag Joshi, Jerrel Blankenship, and Joost van Schaik.

    Above the Fold: WP7: "Safe event detachment base class for Windows Phone 7 behaviors" by Joost van Schaik

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    31 Days of Mango | Day #22: App Connect
    Matt Eland takes the reins of Jeff's blog for Day 22 and is talking about App Connect... App Connect allows apps to be listed on Quick Cards relative to an app's subject matter, and Quick Cards are items that appear in searches to let users find out more info... check out the blog post if you're not familiar with this.

    31 Days of Mango | Day #21: Sockets
    Jeff's Day 21 is written by Parag Joshi, and is on sockets... building a WP7 app for posting restaurant orders to a Silverlight OOB app running on a host machine... a good-sized tutorial and discussion, plus a project to download and play with.

    31 Days of Mango | Day #20: Creating Ringtones
    Jerrel Blankenship has Day 20 for Jeff Blankenburg's 31 Days of Mango and is discussing ringtones... how to create and save a custom ringtone for your user.

    Safe event detachment base class for Windows Phone 7 behaviors
    Joost van Schaik revisits his Safe Event Detachment pattern for WP7 and built a base class to take care of the initialization involved, to be kind to us, the developers... code included.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Windows Azure: Server and Cloud Division

    - by kaleidoscope
    On 8th Dec 2009 Microsoft announced the formation of a new organization within the Server & Tools Business that combines the Windows Server & Solutions group and the Windows Azure group, into a single organization called the Server & Cloud Division (SCD). SCD will deliver solutions that help our customers realize even greater benefits from Microsoft’s investments in on-premises and cloud technologies.  And the new division will help strengthen an already solid and extensive partner ecosystem. Together, Windows Server, Windows Azure, SQL Server, SQL Azure, Visual Studio and System Center help customers extend existing investments to include a future that will combine both on-premises and cloud solutions, and SCD is now a key player in that effort. http://blogs.technet.com/windowsserver/archive/2009/12/08/windows-server-and-windows-azure-come-together-in-a-new-stb-organization-the-server-cloud-division.aspx   Tinu, O

    Read the article

  • Force Blank TextBox with ASP.Net MVC Html.TextBox

    - by Doug Lampe
    I recently ran into a problem with the following scenario: I have parent/child data with a one-to-many relationship from the parent to the child. I want to be able to update parent and existing child data AND add a new child record, all in a single post. I don't want to create a model just to store the new values.

    One of the things I LOVE about MVC is how flexible it is in dealing with posted data.  If you have data that isn't in your model, you can simply use the non-strongly-typed HTML helper extensions and pass the data into your actions as parameters or use the FormCollection.  I thought this would give me the solution I was looking for.  I simply used Html.TextBox("NewChildKey") and Html.TextBox("NewChildValue") and added parameters to my action to take the new values.  So here is what my action looked like:

    [HttpPost]
    public ActionResult EditParent(int? id, string newChildKey, string newChildValue, FormCollection forms)
    {
        Model model = ModelDataHelper.GetModel(id ?? 0);
        if (model != null)
        {
            if (TryUpdateModel(model))
            {
                if (ModelState.IsValid)
                {
                    model = ModelDataHelper.UpdateModel(model);
                }
                string[] keys = forms.GetValues("ChildKey");
                string[] values = forms.GetValues("ChildValue");
                ModelDataHelper.UpdateChildData(id ?? 0, keys, values);
                ModelDataHelper.AddChildData(id ?? 0, newChildKey, newChildValue);
                model = ModelDataHelper.GetModel(id ?? 0);
            }
            return View(model);
        }
        return new EmptyResult();
    }

    The only problem with this is that MVC is TOO smart.  Even though I am not using a model to store the new child values, MVC still passes the values back to the text boxes via the model state.  The fix for this is simple but not necessarily obvious: simply remove the data from the model state before returning the view:

    ModelState.Remove("NewChildKey");
    ModelState.Remove("NewChildValue");

    Two lines of code to save a lot of headaches.
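    For completeness, here is a minimal sketch of where the fix lands (my own assembly of the pieces above; the exact placement is my assumption, since the post shows the two lines separately, but any point before the view is returned works):

    // at the end of the success path in EditParent, before rendering the view:
    ModelState.Remove("NewChildKey");
    ModelState.Remove("NewChildValue");
    return View(model);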

    Read the article

  • Merry Christmas and Happy New Year everyone, and bye bye, we meet again as soon as possible

    - by anirudha
    A few days ago I planned to enjoy these cold days with my family, so I decided to go there and enjoy Christmas, even though I am Hindu, no problem. For those days I am stopping everything related to my daily technical life routine. Because I will not see you on Christmas and New Year: Merry Christmas and Happy New Year to all [friends, relatives, and all guys who know me, or even not]. Bye bye, take care, I will come again as soon as possible.

    Read the article

  • TDD - Red-Light-Green-Light: A critical view

    - by Renso
    Subject: The concept of red-light-green-light for TDD/BDD style testing has been around since the dawn of time (well, almost). Having written thousands of tests using this approach, I find myself questioning the validity of the principle.

    The issue: False positive, or a valid test strategy that can be trusted?

    A critical view: I agree that the red-green-light concept has some validity, but who has ever written 2000 tests for a system that goes through a ton of changes due to the organic nature of the application and does not have to change, delete or restructure their existing tests? If your answer to the latter question is "Yes, I had a situation(s) where I had to refactor my code and it caused me to have to rewrite/change/delete my existing tests", read on, else press CTRL+ALT+Del :-)

    Once a test has been written, failed (red light), and then you complete your code and get the green light for that test, the test for that functionality is now in green-light mode. It can never return to red light again as long as the test exists, even if the test itself is not changed and only the code it tests is changed to fail the test. Why, you ask? Because the reason for the initial red light when you created the test is not guaranteed to be the same reason it is now failing after a code change has been made.

    Furthermore, when the same test is changed to compile correctly in the case of a compile-breaking code change, the green light once again has been invalidated. Why? Because there is no guarantee that the test code fix is in the same green-light state as it was when it first ran successfully. To make matters worse, if you fix a compile-breaking test without going through the red-light-green-light test process, your test fix is essentially useless and very dangerous, as it now provides you with a false positive at best. Thinking your code has passed all tests and that it works correctly is far worse than not having any tests at all, well at least for that part of the system that the test code represents.

    What to do? My recommendation is to delete the tests affected, and re-create them from scratch. I have to agree: hard to do and justify if it has a significant impact on project deadlines. What do you think?
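    To make the false-positive argument concrete, here is a minimal hypothetical sketch (my own example, written with NUnit; the PriceCalculator type and the bug are invented): a test that stays green against broken code because it was never re-run through a red light first.

        using NUnit.Framework;

        public static class PriceCalculator
        {
            // A careless refactor changed the meaning but kept everything compiling:
            // this now returns the discount amount instead of the discounted price.
            public static decimal DiscountedPrice(decimal price, decimal rate)
            {
                return price * rate; // bug: should be price - (price * rate)
            }
        }

        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void DiscountedPrice_ZeroPrice_ReturnsZero()
            {
                // This assertion passes against BOTH the correct and the buggy
                // implementation (0 * 0.1 == 0 and 0 - 0 * 0.1 == 0), so its
                // green light proves nothing: a false positive. Under the
                // red-light-green-light discipline the test would have to be
                // re-created and seen to fail against broken code before the
                // green result could be trusted.
                Assert.AreEqual(0m, PriceCalculator.DiscountedPrice(0m, 0.1m));
            }
        }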

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking mostly about SharePoint Foundation 2010 being used. But truly the point here is not to discuss whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place, and information will reside there. Now, the point of this post is to raise awareness of the options available to companies that have implemented it and maybe are a bit "iffy" on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states: "You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended for use only with single-server deployments." Even in the event of single-server deployments you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you use Windows Server Backup to do a full server backup. When the restore operation is needed, you will be able to select specifically the section that holds the SharePoint technologies backup. The restore process: [screenshot in the original post] Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm.   Credits: Sean McDonough for passing along the information available on TechNet.
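For reference, here is a hedged sketch of the two steps from an elevated command prompt on the SharePoint server. The 14-hive path and the E: backup target are assumptions; substitute your own values:

rem Register the SharePoint VSS writer so Windows Server Backup can see it
rem (the path below is the usual SharePoint 2010 hive location, an assumption).
cd /d "%CommonProgramFiles%\Microsoft Shared\Web Server Extensions\14\BIN"
stsadm.exe -o registerwsswriter

rem Take the full server backup; E: as the target volume is an assumption.
wbadmin start backup -backupTarget:E: -allCritical -vssFull -quiet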

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
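As an illustration only, and not the project's actual code, here is a minimal C# sketch of the third-draft Resync service: the record shape and the three methods described above. The in-memory list, the retry limit, and every name in it are assumptions made for the sketch:

using System;
using System.Collections.Generic;
using System.Linq;

public enum SyncOperation { Create, Update, Delete }

// One row of the Resync storage, as listed above.
public class ResyncRecord
{
    public string IdA { get; set; }
    public string IdB { get; set; }
    public string EntityType { get; set; }
    public SyncOperation Operation { get; set; }
    public bool IsSyncStarted { get; set; }
    public bool IsSyncFinished { get; set; }
    public int ResubmitCount { get; set; }
}

public class ResyncService
{
    private readonly List<ResyncRecord> store = new List<ResyncRecord>();
    private const int MaxResubmits = 3; // assumption: give up after three tries

    // Step 2: id.A was sent toward system B; record the sync as started.
    public void Save(string idA, string entityType, SyncOperation operation)
    {
        store.Add(new ResyncRecord
        {
            IdA = idA,
            EntityType = entityType,
            Operation = operation,
            IsSyncStarted = true
        });
    }

    // Step 4: system B answered with the id.A+id.B pair; mark the sync finished.
    // Tolerates duplicates: a second identical answer finds nothing to update.
    public void Save(string idA, string idB, string entityType, SyncOperation operation)
    {
        ResyncRecord record = store.FirstOrDefault(
            r => r.IdA == idA && r.EntityType == entityType && !r.IsSyncFinished);
        if (record != null)
        {
            record.IdB = idB;
            record.IsSyncFinished = true;
        }
    }

    // Periodic sweep: return the unfinished records whose id.A should be
    // resubmitted, and give up on any record after MaxResubmits attempts.
    public List<ResyncRecord> Resync()
    {
        List<ResyncRecord> unfinished = store
            .Where(r => !r.IsSyncFinished && r.ResubmitCount < MaxResubmits)
            .ToList();
        unfinished.ForEach(r => r.ResubmitCount++);
        return unfinished;
    }
}

In the fourth draft this bookkeeping moves into the Resync orchestration itself, with the id.A message as the Start and the id.A+id.B message as the Finish, so the separate database disappears.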

    Read the article

  • ASP.NET MVC Application In Action - I (DailyJournal)

    - by Rajesh Pillai
    It has been long due; I was planning to write an article on creating a useful ASP.NET MVC application. I have code-named it "DailyJournal". It is a simple application which allows the creation of multiple activities and the assignment of tasks to these activities. It is kind of "yet another task/todo application". The credentials you can use with the attached demo application are shown below.

User Name : admin
Password : admin123

Framework/Libraries Used
ASP.NET MVC
jQuery + jQuery UI (for ajax and UI)
ELMAH for error logging

Warning Ahead
This is just a rough draft, so I am putting down some of the known limitations. Some points of warning before we move further with this application: this is just an early prototype, and as such many of the design principles have been ignored. But I will try to cover that in the next update once I get my head around this.

The application in its current state supports the following features:
Create users
Assign activities to users
Assign tasks to activities
Assign a status to a task

User creation/authentication is done by the default Membership provider. Most of the activities are highly visual, i.e. you can drag-drop tasks to different areas, do in-place editing of task details, and so on.

The following are the current issues with the design, which I promise to refactor in the second version:
No validations
Fat controller
XSS/CSS vulnerable
No service model/abstraction yet; for the demo, LINQ to SQL is implemented
No separation of layers
UI design, et al.

This is just an extract. For source code and more information proceed to http://www.codeproject.com/KB/aspnet/mvcinaction.aspx Hope you like this!

    Read the article
