Daily Archives

Articles indexed Monday December 6 2010


  • SQL Monitor and "The Cloud"

    - by Richard Mitchell
    So, how can we demo this thing? In the beginning there was a product, and it was a good product for the testers had decreed it so, and nobody argues with a tester. But then comes the inevitable question of how can somebody test it out without risk. Red Gate prides itself on the tools being easy for people to trial before they buy, and no cut down trial for you sir, oh no, for you sir only the best will do - a fully functional trial - suits you sir. The problem The problem comes when you get a...(read more)

    Read the article

  • Alan Turing Needs Your Help

    - by Chris Massey
    Well, sort of. Clearly, you are using a computer. If you are on this site, you are probably quite familiar with computers as artifacts of our modern society. Hopefully, you are also familiar with the fact that Alan Turing, logician and mathematician extraordinaire, was instrumental in laying down the foundations of modern computer science, and did a little work to help turn the tide of WWII in the Allies' favor. Hold that thought. A phenomenal collection of Turing's papers (including his first ever...(read more)

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you’ve done a number of SQL Code-reviews, you’ll know those signs in the code that all might not be well. These ‘Code Smells’ are coding styles that don’t directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book ‘Refactoring: Improving the Design of Existing Code’ (ISBN 978-0201485677). Although there are generic code-smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in Code Smells in C#.

    I’ve always been tempted by the idea of automating a preliminary code-review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic ‘Lint’ did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding Code Smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering if there is a general opinion amongst database developers about what the most important SQL Smells are.

    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I’ll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to ‘sign them off’ as being appropriate in particular circumstances. Also, whole classes of ‘code smells’ may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a ‘code smell’ if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a CodeSmell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist.

    Plamen Ratchev’s wonderful article Ten Common SQL Programming Mistakes lists some of these ‘code smells’ along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn’t entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported. If anything requires some sort of general agreement, the definition of code smells is one. I’m therefore going to make this blog ‘dynamic’, in that, if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a twitter) I’ll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I’ll do my best to oblige. In other words, I’ll try to keep this blog up to date. The name against each 'smell' is the name of the person who twittered me, commented on it, or who has written about the 'smell'; it does not imply that they were the first ever to think of the smell!

    - Use of deprecated syntax such as *= (Dave Howard)
    - Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
    - Contrived interfaces
    - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    - Datatype mis-matches in predicates that rely on implicit conversion (Plamen Ratchev)
    - Using correlated subqueries instead of a join (Dave Levy/Plamen Ratchev)
    - The use of hints in queries, especially NOLOCK (Dave Howard/Mike Reigler)
    - Few or no comments
    - Use of functions in a WHERE clause (Anil Das)
    - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    - Excessive ‘overloading’ of routines
    - The use of Exec xp_cmdShell (Merrill Aldrich)
    - Excessive use of brackets (Dave Levy)
    - Lack of the use of a semicolon to terminate statements
    - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    - Duplicated code, or strikingly similar code
    - Misuse of SELECT * (Plamen Ratchev)
    - Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    - Overuse of CLR routines when not necessary (Sam Stange)
    - Same column name in different tables with different datatypes (Ian Stirk)
    - Use of ‘broken’ functions such as ISNUMERIC without additional checks
    - Excessive use of the WHILE loop (Merrill Aldrich)
    - INSERT ... EXEC (Merrill Aldrich)
    - The use of stored procedures where a view is sufficient (Merrill Aldrich)
    - Not using two-part object names (Merrill Aldrich)
    - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    - Full outer joins even when they are not needed (Plamen Ratchev)
    - Huge stored procedures (hundreds/thousands of lines)
    - Stored procedures that can produce different columns, or order of columns, in their results, depending on the inputs
    - Code that is never used
    - Complex and nested conditionals
    - WHILE (not done) loops without an error exit
    - Variable name same as the datatype
    - Vague identifiers
    - Storing complex data or lists in a character map, bitmap or XML field
    - User procedures with sp_ prefix (Aaron Bertrand)
    - Views that reference views that reference views that reference views (Aaron Bertrand)
    - Inappropriate use of sql_variant (Neil Hambly)
    - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield-Atlantis UK)
    - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield-Atlantis UK)
    - Code that allows SQL Injection (Mladen Prajdic)
    - Tables without clustered indexes (Matt Whitfield-Atlantis UK)
    - Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
    - Multiple stored procedures with nearly identical implementation (Nick Harrison)
    - Excessive column aliasing, which may point to a problem or could be a mapping implementation (Nick Harrison)
    - Joining "too many" tables in a query (Nick Harrison)
    - Stored procedures returning more than one record set (Nick Harrison)
    - A NOT LIKE condition (Nick Harrison)
    - Excessive "OR" conditions (Nick Harrison)
    - sp_OACreate or anything related to it (Bill Fellows)
    - Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    - Aliases that go a, b, c, d, e... (Dave Levy/Diane McNurlan)
    - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    - ORDER BY 3,2 (Dave Levy)
    - Multi-statement table functions which are then filtered: 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne)
    - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)

    Read the article

  • A Bit Cloudy

    - by Chris Massey
    "Systems Administrators, I come in peace. You have nothing to fear from me" - Office 365 Microsoft Business Productivity Online Suite recently absorbed a few other services and has been rebranded as Office 365, which is currently in private Beta and NDA-d up to the eyeballs. As Microsoft's (slightly delayed) answer to Google Apps Premier Edition, it shows a lot of promise; MS has technical expertise, market penetration, and financial capital all going for it. On the other hand, Google...(read more)

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly weird one. The ECMA specification simply states that it: constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type. There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj: newobj instance void IncrementableClass::.ctor() and for value types, you need to use initobj: .locals init ( valuetype IncrementableStruct s1 ) ldloca 0 initobj IncrementableStruct But, for a generic method, we need a consistent method that would work equally well for reference or value types. Activator.CreateInstance<T>. To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error: newobj instance void !!0::.ctor() [IL]: Error: Unable to resolve token. This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter): .method private static !!0 CreateInstance<.ctor T>() { call !!0 [mscorlib]System.Activator::CreateInstance<!!0>() ret } Going further: compiler enhancements. Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above: private static T CreateInstance() where T : new() { return new T(); } what you actually get is this (edited slightly for space & clarity): .method private static !!T CreateInstance<.ctor T>() { .locals init ( [0] !!T CS$0$0000, [1] !!T CS$0$0001 ) DetectValueType: ldloca.s 0 initobj !!T ldloc.0 box !!T brfalse.s CreateInstance CreateValueType: ldloca.s 1 initobj !!T ldloc.1 ret CreateInstance: call !!0 [mscorlib]System.Activator::CreateInstance<T>() ret } What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, let's dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory; using initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work? 
    First, the stack transition for value types: ldloca.s 0 // &[!!T(uninitialized)] initobj !!T // ldloc.0 // !!T box !!T // O[!!T] brfalse.s // branch not taken When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null. ldloca.s 0 // &[!!T(null)] initobj !!T // ldloc.0 // null box !!T // null brfalse.s // branch taken Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types. Next... That concludes the initial premise of the Subterranean IL series; to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.
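
    A minimal C# sketch of the same idea, reusing the IncrementableClass/IncrementableStruct types from the post: the new() constraint is the C#-level spelling of the .ctor constraint, and the hand-written default(T) test below only illustrates the value-type fast path that the compiler-generated IL performs with initobj/box/brfalse; it is not the compiler's literal output.

    using System;

    public class IncrementableClass { public int Value; }
    public struct IncrementableStruct { public int Value; }

    static class ConstructorConstraintSketch
    {
        // C# equivalent of the IL CreateInstance<.ctor T> method shown above.
        public static T CreateInstance<T>() where T : new()
        {
            // Illustrative fast path: a non-null default value means T is a
            // (non-nullable) value type, so zero-initialisation is enough.
            if (default(T) != null)
                return default(T);                 // plays the role of initobj !!T

            // Reference types fall through to the dynamic constructor lookup.
            return Activator.CreateInstance<T>();
        }

        static void Main()
        {
            IncrementableClass c = CreateInstance<IncrementableClass>();   // via Activator
            IncrementableStruct s = CreateInstance<IncrementableStruct>(); // via default(T)
            Console.WriteLine("{0} {1}", c.Value, s.Value);
        }
    }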

    Read the article

  • A weekend with the Samsung Galaxy Tab

    - by Richard Mitchell
    This weekend I took one of the Samsung Galaxy Tabs we have lying around the office here home to see how I got on with it as I've been thinking of buying one. Initial impressions The look and feel of the Tab is quite nice. It's a lot smaller than an iPad but that is no bad thing as I imagine they are targeted at different markets. The Tab fits into my inside coat pocket nicely and doesn't feel like it's weighing me down too much. Connecting up the Tab to the network at work was fine, typing in...(read more)

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the Servers in my charge. It ran a configurable script for each service that you needed to monitor that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT Screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it.  It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully they said; particularly flesh-tints). If you are instantly alerted when things start to go wrong, then there was a good chance you could fix it before being alerted to the problem by the users of the system.  There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA Brain. I like to get the early warning, to get the right information to help to diagnose a problem: No auto-fix, but just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.  Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor.  It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.  Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out.  Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created. 
In the meantime, I believe that the best way of helping  DBAs  is to make the monitoring process as simple and effective as possible,  and provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.

    Read the article

  • Databases and Beer

    - by Johnm
    It is a bit of a no-brainer: Include the word "beer" in a subject line of an e-mail or blog post title and you can be certain that it will be read. While there are times this practice might be a ploy to increase readership, it is not the case for this blog post. There is inspiration that can be drawn from other industries which we, as database professionals, can apply in our own. In this post I will highlight one of my favorite participants of the brewing industry. The Boston Beer Company started in the 1970s in Boston, Massachusetts. Others may be more familiar with this company through their Samuel Adams Boston Lager and other various seasonal beers. I am continually inspired by their commitment to mastery of the brewing process, which they evangelize frequently in their commercials. They also are continually in pursuit of pushing the boundaries of beer as we know it while working within traditional constraints. A recent example of this is their collaboration with Weihenstephan Brewery of Munich, Germany to produce the soon to be released Infinium beer. This beer, while brewed as an ale, is touted as something closer to Champagne - all while complying with the Reinheitsgebot. The Reinheitsgebot is also known as the "German Beer Purity Law", which originated in 1516. This law states that beer is to consist of water, barley, hops and yeast. That's it. Quite a limiting constraint indeed. And yet, The Boston Beer Company pushed forward. Much like the process of brewing, the discipline of database design and architecture is one that is continually in process and driven by the pursuit of mastery. While we do not have purity laws to constrain us, we have many other types: best practices, company policies, government regulations, security and budgets. Through our fellow comrades, we discuss the challenges and constraints in which we operate. We boil down the principles and theories that define our profession. We reassemble these into something that is complementary to the business needs that we must fulfill. As a result, it is not uncommon to see something amazingly innovative in a small business that is pushing the boundaries of their database well beyond its intended state. It is equally common to see innovation in the use of the more advanced database features found in large businesses. The tag line for The Boston Beer Company is "Take Pride In Your Beer." I would like to offer an alternative and say "Take Pride In Your Database." So, as you pour your next Boston Lager into a frosted glass, consider those who spend their lives mastering the craft of brewing and strive to interject their spirit into everything that you do as a database professional. Cheers!

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single dimension, zero-indexed) have their own commands and IL syntax. One of their stranger properties is they have a kind of built-in covariance long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap on array covariance. SZ-array covariance To demonstrate, I'll tweak the classes I introduced in my previous posts: public class IncrementableClass { public int Value; public virtual void Increment(int incrementBy) { Value += incrementBy; } } public class IncrementableClassx2 : IncrementableClass { public override void Increment(int incrementBy) { base.Increment(incrementBy); base.Increment(incrementBy); } } In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever a IncrementableClass[] or object[] is required. When an SZ-array could be used in this fashion, a run-time type check is performed when you try to insert an object into the array to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time: IncrementableClass[] array = new IncrementableClassx2[1]; array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException These checks are enforced by the various stelem* and ldelem* il instructions in such a way as to ensure you can't insert a IncrementableClass into a IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction. ldelema This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, the instruction's type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and maintained array type-safety within managed code. However, along came generics, and with it the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt? .method public static void CallIncrementArrayValue() { // IncrementableClassx2[] arr = new IncrementableClassx2[1] ldc.i4.1 newarr IncrementableClassx2 // arr[0] = new IncrementableClassx2(); dup newobj instance void IncrementableClassx2::.ctor() ldc.i4.0 stelem.ref // IncrementArrayValue<IncrementableClass>(arr, 0) // here, we're treating an IncrementableClassx2[] as IncrementableClass[] dup ldc.i4.0 call void IncrementArrayValue<class IncrementableClass>(!!0[],int32) // ... ret } .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 ldelema !!T ldc.i4.1 constrained. 
!!T callvirt instance void IIncrementable::Increment(int32) ret } And the result: Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array. at IncrementArrayValue[T](T[] arr, Int32 index) at CallIncrementArrayValue() Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], hence the ldelema instruction is failing as it's expecting an IncrementableClass[]. On features and feature conflicts What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this will only fail at runtime, whereas compile-time safety is what generics were designed for! The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so it lets us take the address of array elements using a covariant type to the actual run-time type of the array: .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 readonly. ldelema !!T ldc.i4.1 constrained. !!T callvirt instance void IIncrementable::Increment(int32) ret } But what about type safety? In return for ignoring the type check, the resulting controlled mutability pointer can only be used in the following situations: As the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions As the pointer parameter to ldobj or ldind* As the source parameter to cpobj In other words, the only operations allowed are those that read from the pointer; stind* and similar that alter the pointer itself are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.
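
    For reference, here is the same covariance trap written in plain C# rather than IL, using the post's IncrementableClass types; this is only a sketch of the behaviour the stelem check enforces, not new information about the readonly. prefix.

    using System;

    public class IncrementableClass
    {
        public int Value;
        public virtual void Increment(int incrementBy) { Value += incrementBy; }
    }

    public class IncrementableClassx2 : IncrementableClass
    {
        public override void Increment(int incrementBy)
        {
            base.Increment(incrementBy);
            base.Increment(incrementBy);
        }
    }

    static class ArrayCovarianceSketch
    {
        static void Main()
        {
            // SZ-array covariance: an IncrementableClassx2[] used as an IncrementableClass[].
            IncrementableClass[] array = new IncrementableClassx2[1];

            try
            {
                array[0] = new IncrementableClass(); // compiles, but the run-time store check rejects it
            }
            catch (ArrayTypeMismatchException)
            {
                Console.WriteLine("Can't store an IncrementableClass in an IncrementableClassx2[]");
            }

            array[0] = new IncrementableClassx2();   // storing the derived type is fine
            array[0].Increment(1);
            Console.WriteLine(array[0].Value);       // prints 2
        }
    }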

    Read the article

  • ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code

    - by Bart Read
    I don't know about you but it took me about 5 seconds to get royally fed up of typing the boilerplate code necessary for creating WPF (and Silverlight) dependency properties and, if you want them, their associated property change routed events. Being a ReSharper user, I wondered if there was any live template for doing this. It turns out there's nothing built in, but there are many examples of templates for creating dependency properties out there on the web, such as this excellent one from Roy...(read more)
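
    As a rough idea of the boilerplate being talked about, here is a hand-written sketch of a WPF dependency property plus a property-change routed event; the TemperatureGauge control and its member names are hypothetical examples for illustration, not the template from the article.

    using System.Windows;

    public class TemperatureGauge : FrameworkElement   // hypothetical control, for illustration only
    {
        // The dependency property registration and CLR wrapper.
        public static readonly DependencyProperty TemperatureProperty =
            DependencyProperty.Register(
                "Temperature",
                typeof(double),
                typeof(TemperatureGauge),
                new FrameworkPropertyMetadata(0.0, OnTemperatureChanged));

        public double Temperature
        {
            get { return (double)GetValue(TemperatureProperty); }
            set { SetValue(TemperatureProperty, value); }
        }

        // Optional property-change routed event, also part of the usual boilerplate.
        public static readonly RoutedEvent TemperatureChangedEvent =
            EventManager.RegisterRoutedEvent(
                "TemperatureChanged",
                RoutingStrategy.Bubble,
                typeof(RoutedPropertyChangedEventHandler<double>),
                typeof(TemperatureGauge));

        public event RoutedPropertyChangedEventHandler<double> TemperatureChanged
        {
            add { AddHandler(TemperatureChangedEvent, value); }
            remove { RemoveHandler(TemperatureChangedEvent, value); }
        }

        // Static change callback raises the routed event with old and new values.
        private static void OnTemperatureChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var gauge = (TemperatureGauge)d;
            gauge.RaiseEvent(new RoutedPropertyChangedEventArgs<double>(
                (double)e.OldValue, (double)e.NewValue, TemperatureChangedEvent));
        }
    }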

    Read the article

  • Are you at Super Computing 10?

    - by Daniel Moth
    Like last year, I was going to attend SC this year, but other events are unfortunately keeping me here in Seattle next week. If you are going to be in New Orleans, have fun and be sure not to miss out on the following two opportunities. MPI Debugging UX Study Throughout the week, my team is conducting 90-minute studies on debugging MPI applications within Visual Studio. In exchange for your feedback (under NDA) you will receive a Microsoft Gratuity (and the knowledge that you are impacting the development of Visual Studio). If you are interested, sign up at the Microsoft Information Desk in the Exhibitor Hall during exhibit hours. Outside of exhibit hours, send email to [email protected]. If you took part in the GPGPU study, this is very similar except it is for MPI. Microsoft High Performance Computing Summit On Monday 15th, the Microsoft annual user group meeting takes place. Shuttle transportation and lunch is provided. For full details of this event and to register, please visit the official event page. Comments about this post welcome at the original blog.

    Read the article

  • Dryad and DryadLINQ from MSR

    - by Daniel Moth
    Microsoft Research (MSR) researches technologies and incubates projects, which many times result in technology that looks like a ready-to-use product (but it is important to understand that these are not the same as products built by the various… actual product teams here at Microsoft). A very popular MSR project has been DryadLINQ, which itself builds on Dryad. To learn more, follow the project pages I just linked to; I also recommend this 1-hour channel 9 video. If you only have 3 minutes, watch this great elevator pitch instead. You can also stay tuned on the official blog, which includes a post that refers to internal adoption, e.g. by Bing, a quick DryadLINQ code example, and some history on how DryadLINQ generalizes the MapReduce pattern and makes it accessible to regular programmers (see this post and that post). Essentially, the DryadLINQ framework (building on the Dryad runtime) allows developers to re-use their LINQ skills for creating/generating programs that process large multi-gigabyte/terabyte datasets across 100s-1000s of machines. One way to think about it is that just as Parallel LINQ allows LINQ developers to seamlessly use multiple cores from a single process on a single machine, DryadLINQ allows LINQ developers to seamlessly use multiple machines for their data parallel algorithms. In the former scenario the motivation was speed of execution, in the latter it is speed of execution AND processing large datasets that simply don't fit on a single machine. Whenever I hear about execution of parallel code on multiple machines on the Microsoft platform, I immediately think of Windows HPC Server. Indeed Dryad and DryadLINQ were made available for Windows HPC Server and I encourage you to watch the PDC session on this topic: Data-Intensive Computing on Windows HPC Server with the DryadLINQ Framework. Watch this space… Comments about this post welcome at the original blog.
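
    The DryadLINQ API itself isn't shown in the post, but the single-machine Parallel LINQ analogue mentioned above looks like this minimal sketch (a toy workload of my own, not from the project pages); DryadLINQ applies the same declarative LINQ style across a cluster instead of across cores.

    using System;
    using System.Linq;

    static class PlinqAnalogue
    {
        static void Main()
        {
            var numbers = Enumerable.Range(1, 10000000);

            // AsParallel fans the rest of the query out across the available cores.
            long sumOfSquaresOfEvens = numbers
                .AsParallel()
                .Where(n => n % 2 == 0)
                .Select(n => (long)n * n)
                .Sum();

            Console.WriteLine(sumOfSquaresOfEvens);
        }
    }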

    Read the article

  • Visual Studio Async CTP

    - by Daniel Moth
    While most of the buzz at the recent PDC here at Microsoft's headquarters has been about Windows Azure and Windows Phone, there is a truly noteworthy technology that as a .NET developer (of any kind of application) you should pay attention to, even in its early technology preview stage: Visual Studio Async CTP. I could provide many more direct links, but you do not need them: just visit the home page of this technology to download whitepapers, watch videos on how this technology integrates with C# and with VB (through the new async and await language keywords), as well as videos on how the technology works under the covers (based largely on the Task Parallel Library). More importantly, download the actual bits (they install on top of your Visual Studio 2010), which include many samples. Get ready for a revolution in Asynchronous Programming with C# and Visual Basic. Comments about this post welcome at the original blog.
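
    As a taste of the language feature, here is a minimal async/await sketch in present-day C#; the CTP shipped slightly different helper types (e.g. TaskEx) than later framework versions, so treat this as an illustration of the keywords rather than of the CTP's exact API surface.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class AsyncSketch
    {
        // 'async' marks the method; 'await' hands control back to the caller
        // while the download is in flight instead of blocking the thread.
        static async Task<int> GetPageLengthAsync(string url)
        {
            using (var client = new HttpClient())
            {
                string html = await client.GetStringAsync(url);
                return html.Length;
            }
        }

        static async Task Main()
        {
            int length = await GetPageLengthAsync("http://example.com/");
            Console.WriteLine("Downloaded {0} characters", length);
        }
    }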

    Read the article

  • PPL and TPL sessions on channel9

    - by Daniel Moth
    Back in June there was an internal conference in Redmond ("Engineering Forum") aimed at Microsoft engineers, and delivered by Microsoft engineers. I was asked to put together a track on Multi-Core development, so I picked 6 parallelism experts and we created 6 awesome sessions (we won the top spot in the Top 10 :-)). Two of the speakers kept the content fairly external-friendly, so we received permission to publish their recordings publicly. Enjoy (best to download the High Quality WMV): Don McCrady - Parallelism in C++ Using the Concurrency Runtime Stephen Toub - Implementing Parallel Patterns using .NET 4 To get notified on future videos on parallelism (or to browse the archive) stay tuned on this channel9 parallel computing feed. Comments about this post welcome at the original blog.
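
    For anyone who hasn't seen the .NET 4 Task Parallel Library that Stephen's session covers, here is about the smallest possible data-parallel sketch (an assumed toy workload, nothing taken from the talks themselves).

    using System;
    using System.Threading.Tasks;

    static class TplSketch
    {
        static void Main()
        {
            const int n = 1000;
            double[] results = new double[n];

            // Parallel.For distributes the iterations across the available cores.
            Parallel.For(0, n, i =>
            {
                results[i] = Math.Sqrt(i) * Math.Sin(i);
            });

            Console.WriteLine("First: {0}, Last: {1}", results[0], results[n - 1]);
        }
    }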

    Read the article

  • Heading to GTC 2010

    - by Daniel Moth
    Next week the GPU Technology Conference (GTC) 2010 takes place in San Jose, CA and I am lucky enough to be attending the entire week. It has been an extremely long time (in fact, I can't remember the last time) since I was registered as an attendee at a conference (full pass/access) without being a speaker *and* without having any booth duty! Having said that, we (our team at Microsoft) will be running GPU debugging UX studies throughout the entire week (similar to what I had previously advertised). If you are attending GTC 2010 and you are interested, look for the related flyer in your conference bag. The conference is an excellent opportunity to connect in-person with various individuals that I have only met virtually. From an educational perspective there is a very long and interesting session list, with multiple concurrent slots, making it very hard to choose between them, but I have managed to create my (packed) schedule. I am most looking forward to sessions on the programming languages and tools, both from Microsoft and MS partners. For full conference details, visit the GTC 2010 official page. Comments about this post welcome at the original blog.

    Read the article

  • Windows Phone 7 developer resources

    - by Daniel Moth
    Developers of Windows Mobile 6.x (and indeed Windows CE) applications still use the rich .NET Compact Framework 3.5 with Visual Studio 2008 for development. That is still a great platform and the Mobile Development Handbook is still a useful resource (if I may say so myself :-). The release of Windows Phone 7 changes the programming paradigm. The programming model has NETCF in its guts, but the developer uses the Silverlight or XNA APIs (and they can call from one into the other). I thought I'd gather here (for your reference and mine) the top 10 resources for getting started:
    1. Windows Phone Developer Home - get the official word and latest announcements.
    2. Windows Phone Developer Tools RTW - download the free developer tools (on my machine the installation took 30 minutes, over my existing vanilla Visual Studio 2010 install).
    3. Windows Phone 7 Jump Start video training - watch the 12 sessions by Wigley/Miles.
    4. Windows Phone 7 Developer Training Kit - work through the labs.
    5. Windows Phone RSS tag - channel9 has tons more WP7 videos, stay tuned.
    6. Windows Phone 7 in 7 Minutes - watch 20 7-minute videos.
    7. Programming Windows Phone 7 - read 11 free chapters from Petzold's eBook.
    8. The Windows Phone Developer Blog - subscribe to the official blog.
    9. Getting Started with Windows Phone Development - explore all links from the MSDN Library root page.
    10. Silverlight for Windows Phone – another root MSDN library page.
    If after all that you get your hands dirty and still can't find the answer, ask questions at the WP7 development MSDN Forum. On a personal note, I was pleased to see that the Parallel Stacks debugger window works fine with the WP7 project ;-) Comments about this post welcome at the original blog.

    Read the article

  • IE9 Beta

    - by Daniel Moth
    I've been using Internet Explorer 8 since the early pre-release bits, but I never tried IE9 until today – the day the Beta is available. I downloaded it from here: http://www.beautyoftheweb.com/ The download took longer than I expected, but I was doing other stuff, so no bother. After it came down, it asked me to reboot my computer. Really hate when apps do that, but I did it anyway. The first time I launched it, it prompted me with a list of add-ons I should disable, including the start-up time that I could save for each one. It even let me configure the prompt so, for example, it won't prompt me again unless an add-on contributes to more than 1 second of the startup time. Cool. First thing I noticed is that the search bar had gone and, as you'd expect, you have to search from the address box. I totally despise this feature. The first thing I've been doing with all versions of IE is to turn off the automatic searching from the address bar and now I have no way of searching if I do that. Ridiculous. The second thing I notice is that the tabs are next to the address bar and cannot be moved to go below it. One word for that decision: appalling (and, no, I didn't accidentally drop an 'e' and add an 'l' in the previous word). The third thing I notice to the right is the favorites button (star icon) and when I click on it, it brings up the favorites explorer under it on the right; then I pin the explorer and it jumps to the left(!). Why they moved the entry point to this feature to the right instead of leaving it on the left is beyond me (other than wanting to retrain me on what I've been used to for all this time), but the fact that pinning it makes it jump sides is… an "astonishing" design decision. As I browse I notice a little annoying pop up in the bottom left every time I hover over a link; there is no status bar. I correctly guessed to right click at the top and turn on the status bar (which also got rid of the popup thereafter) and while I am at it, I bring back my favorites bar which was hidden by default (and am pleased to see that all my favorites are still there). The next thing I notice, I like: IE9 is fast. No joke, I visit sites and they seem to be loading visibly much faster – try it! Beyond the speed, I am interested to find out what else is new. I searched and found a few good links: What's new in Internet Explorer 9; Internet Explorer 9 Features (check out the links under "Clean"); and Top Features. If you are a developer, check out IE's msdn home for many articles, e.g. this section on Canvas and SVG. Either way: wherever you are, get IE9 Beta now and judge for yourself. If you don't like it, you can always uninstall (which auto-restores the previous version). Comments about this post welcome at the original blog.

    Read the article

  • X technology is dead

    - by Daniel Moth
    Every so often, technology pundits (i.e. people not involved in the game, but who like commenting about it) throw out big controversial statements (typically to increase their readership), with a common one being that "Technology/platform X is dead". My former colleague (who I guess is now my distant colleague) uses the same trick with his blog post: "iPhone 4 is dead". But, his motivation is to set the record straight (and I believe him) by sharing his opinion on recent commentary around Silverlight, WPF etc. I enjoyed his post and the comments, so I hope you do too :-) Comments about this post welcome at the original blog.

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why

    Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response.
    Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar.
    Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, and you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.

    B. DON'T: Leave unread email lurking around

    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this.
    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.
    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes).
    Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See later section for how.

    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to

    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):
    - "personal" – Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules.
    - "External" and "ViaBlog" – The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.
    - "invites" – I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread.
    - "Inbox" – The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.
    - "ToMe++" – Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).
    - "CC" – Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this).
    - "@ XYZ" – Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.
    - "Z Mass" and subfolders under it per distribution list (DL) – Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.
    - "Admin" folder, which resides under the "Z Mass" folder – Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.
    - "BCC" folder, which resides under "Z Mass" – Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).
    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

    D. DO: Use Outlook Search folders in combination with categories

    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:
    - ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
    - Action – no email is required, but I need to do something.
    - ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
    - WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
    - SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.
    For all these 'next steps' use Outlook categories (right click on the email and assign category, or use a shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action.

    E. DO: Get into a Routine

    Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:
    - Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
    - Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
    - Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

    F. Other resources

    Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • DirectCompute Lectures

    - by Daniel Moth
    Previously I shared resources to get you started with DirectCompute, for taking advantage of GPGPUs in a way that doesn't tie you to a hardware vendor (e.g. nvidia, amd). I just stumbled upon and had to share a lecture series on channel9 on DirectCompute! Here are direct links to the episodes that are up there now:
    1. DirectCompute Expert Roundtable Discussion
    2. DirectCompute Lecture Series 101 - Introduction to DirectCompute
    3. DirectCompute Lecture Series 110 - Memory Patterns
    4. DirectCompute Lecture Series 120 - Basics of DirectCompute Application Development
    5. DirectCompute Lecture Series 210 - GPU Optimizations and Performance
    6. DirectCompute Lecture Series 230 - GPU Accelerated Physics
    7. DirectCompute Lecture Series 250 - Integration with the Graphics Pipeline
    Having watched these I recommend them all, but if you only want to watch a few, I suggest #2, #3, #4 and #5. Also, you should download the "WMV (High)" so you can see the code clearly and be able to Ctrl+Shift+G for fast playback… TIP: To subscribe to channel9 GPU content, use this RSS feed. Comments about this post welcome at the original blog.

    Read the article

  • Are you a GPGPU developer? Participate in our UX study

    - by Daniel Moth
    You know that I work on the parallel debugger in Visual Studio and I've talked about GPGPU before and I have also mentioned UX. Below is a request from my UX colleagues that pulls all of it together. If you write and debug parallel code that uses GPUs for non-graphical, computationally intensive operations, keep reading. The Microsoft Visual Studio Parallel Computing team is seeking developers for a 90-minute research study. The study will take place via LiveMeeting or at a usability lab in Redmond, depending on your preference. We will walk you through an example of debugging GPGPU code in Visual Studio with you giving us step-by-step feedback. ("Is this what you would expect?", "Are we showing you the things that would help you?", "How would you improve this?") The walkthrough utilizes a “paper” version of our current design. After the walkthrough, we would then show you some additional design ideas and seek your input on various design tradeoffs. Are you interested or know someone who might be a good fit? Let us know at this address: [email protected]. Those who participate (and those who referred them) will receive a gratuity item from a list of current Microsoft products. Comments about this post welcome at the original blog.

    Read the article

  • User eXperience

    - by Daniel Moth
    The last few months I have been spending a lot of time designing (and help design) the developer experience for the areas I contribute to (in future versions of Visual Studio). As a technical person who defines feature sets, it is easy to get engulfed in the pure technical side of things and ignore the details that ultimately make users "love" using the product to achieve their goal, instead of just "having to use" it. Engaging in UX design helps me escape that trap. In case you are also interested in the UX side of development, I thought I'd share an interesting site I came across: UX myths. In particular, I recommend reading myths 9, 10, 12, 13, 14, 15 and 21. Let me know if there are other UX resources you recommend… Comments about this post welcome at the original blog.

    Read the article

  • Join our team at Microsoft

    - by Daniel Moth
    If you are looking for a SDE or SDET job at Microsoft, keep on reading. Back in January I posted a Dev Lead opening on our team, which was quickly filled internally (by Maria Blees). Our team is part of the recently announced Microsoft Technical Computing group. Specifically, we are working on new debugger functionality, integrated with Visual Studio (we are starting work on the next version), aimed to address HPC and GPGPU scenarios (and continuing the Parallel Debugging scenarios we started addressing with VS2010). We now have many more openings on our debugger team. We posted three of those on the careers website: Software Development Engineer Software Development Engineer II Software Development Engineer in Test II (don't let the word "Test" fool you: An SDET on our team is no different than a developer in any way, including the skills required) Please do read the contents of the links above. Specifically, note that for both positions you need to be as proficient in writing C++ code as you are with managed code (WPF experience is a plus). If you think you have what it takes, you wish to join a quality and schedule driven project, and want to contribute features to a product that has global impact, then send me your resume and I'll pass it on to the hiring managers. Comments about this post welcome at the original blog.

    Read the article

  • Hopping/Tumbling Windows Could Introduce Latency.

    This is a pre-article to one I am going to be writing on adjusting an event’s time and duration to satisfy business process requirements, but it is one that I think is really useful when understanding the way that Hopping/Tumbling windows work within StreamInsight. A Tumbling window is just a special shortcut version of a Hopping window where the width of the window is equal to the size of the hop. Here is the simplest and often used definition for a Hopping Window. You can find them all here:
    public static CepWindowStream<CepWindow<TPayload>> HoppingWindow<TPayload>(     this CepStream<TPayload> source,     TimeSpan windowSize,     TimeSpan hopSize,     WindowInputPolicy inputPolicy,     HoppingWindowOutputPolicy outputPolicy )
    And here is the definition for a Tumbling Window:
    public static CepWindowStream<CepWindow<TPayload>> TumblingWindow<TPayload>(     this CepStream<TPayload> source,     TimeSpan windowSize,     WindowInputPolicy inputPolicy,     HoppingWindowOutputPolicy outputPolicy )
    These methods allow you to group events into windows of a temporal size. It is a really useful and simple feature in StreamInsight. One of the downsides though is that the windows cannot be flushed until an event in a following window occurs. This means that you will potentially never see some events or see them with a delay. Let me explain. Remember that a stream is a potentially unbounded sequence of events. Events in StreamInsight are given a StartTime. It is this StartTime that is used to calculate into which temporal window an event falls. It is best practice to assign a timestamp from the source system and not one from the system clock on the processing server. StreamInsight cannot know when a window is over. It cannot tell whether you have received all events in the window or whether some events have been delayed, which means that StreamInsight cannot flush the stream for you. Imagine you have events with the following timestamps:
    - 12:10:10 PM
    - 12:10:20 PM
    - 12:10:35 PM
    - 12:10:45 PM
    - 11:59:59 PM
    And imagine that you have defined a 1 minute Tumbling Window over this stream using the following syntax:
    var HoppingStream = from shift in inputStream.TumblingWindow(TimeSpan.FromMinutes(1),HoppingWindowOutputPolicy.ClipToWindowEnd) select new WindowCountPayload { CountInWindow = (Int32)shift.Count() };
    The events between 12:10:10 PM and 12:10:45 PM will not be seen until the event at 11:59:59 PM arrives. This could be a real problem if you need to react to windows promptly. This can always be worked around by using a different design pattern, but a lot of the examples I see assume there is a constant, very frequent stream of events, resulting in windows always being flushed. Further examples of using windowing in StreamInsight can be found here.
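
    To make the latency point concrete, here is a sketch of the post's one-minute tumbling window query wrapped in a method; the Reading and WindowCountPayload types are illustrative, and how the input CepStream is created (input adapter, enumerable conversion, etc.) is deliberately left out.

    using System;
    using Microsoft.ComplexEventProcessing;
    using Microsoft.ComplexEventProcessing.Linq;

    public class Reading
    {
        public DateTime Timestamp { get; set; }
        public double Value { get; set; }
    }

    public class WindowCountPayload
    {
        public int CountInWindow { get; set; }
    }

    static class WindowLatencySketch
    {
        // Assumes inputStream already carries events with the StartTimes listed
        // above (12:10:10 PM ... 11:59:59 PM); its creation is not shown here.
        static CepStream<WindowCountPayload> CountPerMinute(CepStream<Reading> inputStream)
        {
            var counts = from win in inputStream.TumblingWindow(
                             TimeSpan.FromMinutes(1),
                             HoppingWindowOutputPolicy.ClipToWindowEnd)
                         select new WindowCountPayload { CountInWindow = (int)win.Count() };

            // The 12:10-12:11 window (four events) cannot be emitted until a later
            // event (here, the one at 11:59:59 PM) proves the window is complete.
            return counts;
        }
    }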

    Read the article

  • Linqpad and StreamInsight

    Slightly before the announcement of StreamInsight being available for Linqpad I downloaded it from here.  I had seen Roman Schindlauer demonstrate it at TechEd and it looked a really good tool to do some StreamInsight dev. You will need .Net 4.0 and StreamInsight installed. Here’s what you need to do after downloading and installing Linqpad:
    - Add a new connection.
    - Install and enable the StreamInsight driver (choose to view more drivers).
    - Choose StreamInsight.
    - Select the driver after install.
    - I have chosen the Default Context.
    And after all that I can finally get to writing my query. This is a very simple query where I turn a collection (IEnumerable) into a PointStream. After doing that I create 30 minute windows over the stream before outputting the count of events in each of those windows to the result window. I have played with Linqpad only a little but I think it is going to be a really good tool to get ideas developed quickly. I have also enabled Autocompletion (paid £25) and I recommend it.
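
    For reference, the query described above looks roughly like this; the ToPointStream conversion and AdvanceTimeSettings shown are an assumption about the StreamInsight/LINQPad driver API of the time (check the exact overloads in your install), while the windowing part reuses the same operators as the previous post.

    using System;
    using System.Collections.Generic;
    using Microsoft.ComplexEventProcessing;
    using Microsoft.ComplexEventProcessing.Linq;

    public class Reading
    {
        public DateTime Timestamp { get; set; }
        public double Value { get; set; }
    }

    static class LinqpadStyleSketch
    {
        static void Run(Application application, IEnumerable<Reading> readings)
        {
            // ASSUMPTION: this ToPointStream overload is illustrative; the exact
            // conversion helper depends on the StreamInsight version and driver.
            var inputStream = readings.ToPointStream(
                application,
                r => PointEvent.CreateInsert(r.Timestamp, r),
                AdvanceTimeSettings.IncreasingStartTime);

            // Thirty-minute tumbling windows, counting the events in each window.
            var counts = from win in inputStream.TumblingWindow(
                             TimeSpan.FromMinutes(30),
                             HoppingWindowOutputPolicy.ClipToWindowEnd)
                         select new { CountInWindow = (int)win.Count() };

            // In LINQPad the result of the query is simply dumped to the results window.
        }
    }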

    Read the article
