Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • JustMock and Moles – A short overview for TDD alpha geeks

    - by RoyOsherove
    People have been lurking near my house, asking me to write something about Moles and JustMock, so I’ll try to be as objective as possible, taking into account the fact that I work at Typemock. If I were NOT working at Typemock I’d write:

    JustMock
    JustMock tries to be Typemock at so many levels it’s not even funny. Technically they work the same, and the API almost looks like it’s a search-and-replace job based on the Isolator API (awesome compliment!), but JustMock still has too many growing pains and bugs to be usable. Also, JustMock is missing a lot of the legacy abilities such as non-public faking, faking all types, and various other things that are really needed in real legacy code. The biggest thing (in terms of isolation integration) is that it does not integrate with other profilers (such as coverage tools, NCover, etc.). When JustMock comes out of beta, I feel that it should cost about half of what Isolator costs, as it currently provides about half the abilities.

    Moles
    Moles is an add-on to Pex and was originally intended to work only within the Pex environment. It started as a research project, and now it’s a power tool for VS (so it’s a separate install) and its own little stubbing framework. It’s not really an isolation framework in the classic sense, because it does not provide any kind of built-in API to verify object interactions. You have to use manual flags all on your own to do that. It generates two types of classes per assembly: manual stubs (just like you’d hand-code them) and Mole classes. Each Mole class is a special API to change and break the behavior of the corresponding type, so MDateTime is how you change behavior for DateTime. In that sense the API is all over the place, and it can become highly unreadable and unmaintainable over time in your tests.

    Also, the Moles API isn’t really designed to deal with real legacy code. It only deals with public types and methods; anything internal or private is ignored and you can’t change its behavior. You also can’t control static constructors. That takes about 95% of legacy scenarios out of the picture if that’s what you’re trying to use it for. Personally, I found it hard to get used to the idea of two parallel APIs for different abilities, and when to choose which – and I know this stuff. I would expect more usability from the API to make it more widely used, but I don’t think Moles is planning to go that route. Publishing it as an isolation framework is really an afterthought for a tool that was designed with a specific task in mind, and generic isolation isn’t it. Its only hope is DEQ – a simple code example that shows a simple isolation API built on the generic Moles engine. Moles can and should be used for very simple cases of detouring functionality, such as simple static methods, or interfaces and virtual functions (like RhinoMocks and Moq do).

    Oh, wait. Ah, good thing I work at Typemock. I won’t write all that. I’ll just write: JustMock and Moles are great tools that enlarge the market space for isolation-related technologies, and they prove that the idea of productivity and unit testing can go hand in hand and get people hooked. I look forward to competing with them in this growing market.
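    To make the Moles detouring style concrete, here is a minimal sketch of what faking DateTime.Now looks like with the generated MDateTime class. This is a hedged illustration, not code from the post: it assumes a test project already configured to generate moles for mscorlib, and the Invoice class with its ComputeDueDate method is a made-up stand-in for code under test.

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Moles; // generated moles for mscorlib (assumes the project's Moles setup)

        [TestClass]
        public class InvoiceTests
        {
            [TestMethod]
            [HostType("Moles")] // the Moles detour engine only runs under its host
            public void DueDate_IsThirtyDaysFromToday()
            {
                // Detour the static DateTime.Now getter for the duration of this test.
                MDateTime.NowGet = () => new DateTime(2010, 1, 1);

                DateTime due = Invoice.ComputeDueDate(); // hypothetical code under test

                Assert.AreEqual(new DateTime(2010, 1, 31), due);
            }
        }

    Compare that with the Isolator/JustMock style, where a single API handles both stubbing and verification; with Moles, any interaction checking has to be done with your own flags, as the post notes.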

    Read the article

  • Things to install on a new machine – revisited

    - by RoyOsherove
    As I prepare to get a new dev machine at work, here are the things I am going to install on it before writing the first line of code on that machine:

    Control Freak Tools:
    - Everything Search Engine – a free and amazingly fast search engine for files all over your machine (just file names, not inside files). This is so fast I use it almost as a replacement for my start menu, but it’s also great for finding those files that get hidden and tucked away in dark places on my system. Ever had a situation where you needed to see exactly how many copies of X.dll were hiding on your machine and where? This tool is perfect for that.
    - Google Chrome. It’s just fast. Very fast. And Firefox has become the IE of alternative browsers in terms of speed and memory. Don’t even get me started on IE.
    - TweetDeck – get a complete view of what’s up on Twitter.
    - Total Commander – still my favorite file manager, over five years now.
    - KatMouse – will scroll any window you’re hovering over, even if it’s not the active window, when you use the scroll wheel on it.
    - PowerIso or Daemon Tools – for loading up ISO images of discs.
    - LogMeIn Ignition – quick access to your LogMeIn computers.
    - For online backup: JungleDisk or BackBlaze.
    - KeePass – save important passwords.
    - MS Security Essentials – free antivirus that’s quiet and doesn’t make a mess of your system.

    For home:
    - uTorrent – a torrent client that can read RSS feeds (like the ones from ezrss.it).
    - Camtasia Studio and SnagIt – for recording and capturing the screen, and then adding cool effects on top.
    - Foxit PDF Reader – much faster than Adobe Reader.
    - Toddler Keys – for when your baby wants to play with your keyboard.
    - Live Writer – for writing blog posts.

    For Lenovo ThinkPads:
    - Lenovo System Update – if you have a “custom” system instead of the one that came built in, this will keep all your Lenovo drivers up to date.

    Also:
    - FileZilla – for FTP stuff.
    - All the utils from Sysinternals (or try the live links), especially AutoRuns for deciding what’s really going to load at startup, and procmon to see what’s really going on with processes in your system.

    Developer stuff:
    - Reflector. Pure magic. Time saver. See the source code of any compiled assembly.
    - Resharper. Great for productivity and navigation across your source code.
    - FinalBuilder – a commercial build automation tool. Love it. Much better than any XML-based time hog out there.
    - TeamCity – a great visual and friendly server to manage continuous integration. Powerful features.
    - Test Lint – a free add-in for VS 2010 I helped create, which checks your unit tests for possible problems and gives you hints about them.
    - TestDriven.NET – a great test runner for VS 2008 and 2010 with some powerful features.
    - VisualSVN – a commercial tool if you use Subversion. A very reliable add-in for VS 2008 and 2010.
    - Beyond Compare – a powerful file and directory comparison tool. I love the fact that you can right-click in Windows Explorer on any file and select “Select left side to compare”, then right-click on another file and select “Compare with left side”. Great usability thinking!
    - PostSharp 2.0 – for adding system-wide concepts into your code (tracing, exception management). Goes great hand in hand with...
    - SmartInspect – a powerful framework and viewer for tracing in your application. Lots of hidden features.
    - Crypto Obfuscator – a relatively new obfuscation tool for .NET that seems to do the job very well.
    - Crypto Licensing – from the same company – finally a licensing solution that seems to really fit what I needed. And it works.
    - Fiddler 2 – great for debugging and tracing HTTP traffic to and from your app.
    - Debugging Tools for Windows and DebugDiag – great for debugging scenarios.
    - Regulator and Regulazy – for testing and generating regular expressions.
    - Notepad 2 – for quick editing and viewing with syntax highlighting.

    Still wanting more? I think this should keep you busy for a while.

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.”

    Topic 1: Working with Bad Managers
    Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control.
    Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us.
    Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter.

    Topic 2: Working with Remote Teams
    Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot overemphasize how much. And this one line captures how I feel and even communicates the idea clearly!
    Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it.
    Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird!

    Topic 3: Working with Your Nemesis
    Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with a co-worker like that.
    Mystery Author B – “Unhealthy conflict is going to lead to leaving three-week-old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy!
    Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read!

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Create Music Playlists in Windows 7 Media Center

    - by DigitalGeekery
    One of the new features in Windows 7 Media Center is the ability to easily create music playlists without using Media Player. Today we’ll take a closer look at how to create them directly in Media Center.

    Create Manual Playlists
    Open Windows Media Center and select the Music Library. From within the Music Library, choose Playlists from the top menu. Then select Create Playlist. Give your new playlist a name, and select Next. Choose Music Library and select Next. Select Songs from the top menu, choose the songs for your playlist from your library, and select Next when finished. You can also click Select All to add all songs to your playlist, or Clear All to remove them. Note: you can also sort by artist, album, genre, etc. from the top menu.

    Now you can review and edit your playlist. Click the up and down pointers to move songs up and down in the playlist, or “X” to remove them. You can also go back and add additional songs by selecting Add More. Click Create when you are finished.

    Auto Playlists
    Windows Media Center also allows you to create six different auto playlists. These are dynamic playlists based on pre-defined criteria. The auto playlists include All Music, Music added in the last month, Music auto rated at 5 stars, Music played in the last month, Music played the most, and Music rated 4 or 5 stars. These auto playlists will change dynamically as your library and listening habits change. Your new music playlists can be found under Playlists in the Music Library. Select Play Playlist to start the music. Now kick back and enjoy the music from your playlist.

    Conclusion
    While earlier versions of WMC allowed you to create playlists, you had to do it through Windows Media Player. This is a nice new feature for music lovers who use WMC and prefer to do everything with a remote. Do you already have playlists that you’ve created in Windows Media Player? Windows Media Center can play those too. If your playlists are in the default Music folder, Media Center will detect them automatically and add them to your Music Library. Plus, any playlists you create in Media Center are also available for Media Player. For more on creating playlists in Media Player, check out our previous articles on how to create a custom playlist in Windows Media Player 12, and how to create auto playlists in WMP 12.

    Read the article

  • Christian Radio Locator iPhone app

    - by Tim Hibbard
    For the last three months or so I’ve been working on an iPhone (and iPad) app in my spare time. It all started when I took the kids to Minneapolis and had a hard time finding radio stations to listen to on the trip. I looked in the App Store for an app that would use my GPS to show me Christian radio stations nearby, but there wasn’t one. So I decided to build my own.

    Using public information from the FCC and a few other sources, I built a database in Google Docs that contains the frequency for all Christian radio stations, where the tower is located, and how far the tower can reach. I also included any streaming audio information and other contact information like Facebook or Twitter that I could find. Google spreadsheets publish in JSON format (yes, really) and Xcode can automatically deserialize JSON into a properly formatted entity. This is one area where Xcode is far superior to C#. In just a few lines of code, I can have a list of in-memory, strongly typed objects from a web-based JSON feed. To accomplish the same thing natively in .NET would be much more work and wouldn’t feel nearly as clean when all was said and done.

    The snazzy icon shown above was built by my very talented wife. She hasn’t yet provided any feedback on the app’s user interface, which is why it is so plain and boring. I used a navigation view controller and an EGO pull-to-refresh table view to construct the main window. Pulling down to refresh initiates a GPS lookup, which queries the database for radio stations in range (yes, you can pass parameters to Google spreadsheets and get a subset back in JSON). Pulling up on the table extends the range of the search and includes stations that may not be close enough to get clear audio. This feature is not that intuitive, and the next version contains an update to that functionality.

    Tapping a cell will show a detail view that displays additional information about the station. The user can click to view the station on a map, click to listen to an online stream (if available), or click to see the station’s Facebook or Twitter pages. Swiping back and forth on the table changes the information that is displayed on the right-hand side of the table cell. It scrolls through the city where the tower is located, how far the phone is from the tower, the range of the tower, and, in the next version, a signal strength indicator. This was pretty easy to implement once I figured out how to assign the gesture recognizer delegate. Tapping and holding on a cell will jump the user to the map view screen, which is pretty cool but very hard for even a power user to discover. To tackle the issue of discoverability, the next version has a series of instructions displayed at the bottom of the screen to show the user the various shortcuts. Once the user has performed the swipes and long holds, the instructions disappear.

    I’ve learned a lot developing this app. Spending over a decade exclusively in .NET made the learning curve a bit steep, but once I learned the structure and syntax of Objective-C, I came to appreciate the power and simplicity of it. Here are a few screenshots. I would really appreciate any feedback and especially iTunes reviews. Technically it is open source and a smart googler could probably find it. I just haven’t promoted it as open source.

    Cross-posted from timhibbard.com
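    For comparison, here is a rough sketch of what the “more work” .NET-side equivalent might have looked like at the time, using the built-in DataContractJsonSerializer. This is purely illustrative: the Station type, its field names, and the feed URL are made-up placeholders, and a real Google Spreadsheets JSON feed nests its rows more deeply than a flat list.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;

        [DataContract]
        public class Station
        {
            [DataMember(Name = "frequency")] public string Frequency { get; set; }
            [DataMember(Name = "latitude")]  public double Latitude { get; set; }
            [DataMember(Name = "longitude")] public double Longitude { get; set; }
            [DataMember(Name = "range")]     public double RangeMiles { get; set; }
        }

        public static class StationFeed
        {
            // Downloads the JSON feed and deserializes it into strongly typed objects.
            public static List<Station> Load(string feedUrl)
            {
                using (var client = new WebClient())
                using (var stream = new MemoryStream(client.DownloadData(feedUrl)))
                {
                    var serializer = new DataContractJsonSerializer(typeof(List<Station>));
                    return (List<Station>)serializer.ReadObject(stream);
                }
            }
        }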

    Read the article

  • Beware when using .NET's named pipes in a windows forms application

    - by FransBouma
    Yesterday a user of our .NET ORM Profiler tool reported that he couldn’t get the snapshot-recording-from-code feature working in a Windows Forms application. Snapshot recording in code means you start recording profile data from within the profiled application, and after you’re done you save the snapshot as a file which you can open in the profiler UI. When using a console application it worked, but when a Windows Forms application was used, the snapshot was always empty: nothing was recorded. Obviously, I wondered why that was, and debugged a little. Here’s an example piece of code to record the snapshot. This piece of code works OK in a console application, but results in an empty snapshot in a Windows Forms application:

        var snapshot = new Snapshot();
        snapshot.Record();
        using(var ctx = new ORMProfilerTestDataContext())
        {
            var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
        }
        InterceptorCore.Flush();
        snapshot.Stop();
        string error = string.Empty;
        if(!snapshot.IsEmpty)
        {
            snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
        }
        if(!string.IsNullOrEmpty(error))
        {
            Console.WriteLine("Save error: {0}", error);
        }

    (The Console.WriteLine doesn’t do anything in a Windows Forms application, but you get the idea.)

    ORM Profiler uses named pipes: the interceptor (referenced and initialized in your application, the application to profile) sends data over the named pipe to a listener, which, when receiving a piece of data, begins reading it asynchronously, and when it is properly read, signals observers that new data has arrived so they can store it in a repository. In this case, the snapshot will be the observer and will store the data in its own repository.

    The reason the above code doesn’t work in Windows Forms is that Windows Forms is a wrapper around Win32 and its WM_* message based system. Named pipes in .NET are wrappers around Windows named pipes which also work with WM_* messages. Even though we use BeginRead() on the named pipe (which spawns a thread to read the data from the named pipe), nothing is received by the named pipe in the Windows Forms application, because it doesn’t handle the WM_* messages in its message queue till after the method is over: the message pump of a Windows Forms application is handled by the only thread of the application, so it will handle WM_* messages when the application idles.

    The fix is easy though: add Application.DoEvents(); right before snapshot.Stop(). Application.DoEvents() forces the Windows Forms application to process all WM_* messages in its message queue at that moment: all messages for the named pipe are then handled, the .NET code of the named pipe wrapper will react on that, and the whole process will complete as if nothing happened.

    It’s not that simple to just say “why didn’t you use a worker thread to create the snapshot here?”, because a thread doesn’t get its own message pump: the messages would still be posted to the window’s message pump. A hidden form would create its own message pump, so the additional thread should also create a window to get the WM_* messages of the named pipe posted to a different message pump than the one of the main window.

    This WM_* messages pain is not something you want to be confronted with when using .NET and its libraries. Unfortunately, the way they’re implemented, a lot of APIs are leaky abstractions: they bleed the characteristics of the OS objects they hide away through to the .NET code. Be aware of that fact when using them :)
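    Putting the fix together, the corrected recording code might look roughly like this inside a Windows Forms event handler. This is a sketch based on the snippet above: it assumes the same ORM Profiler types (Snapshot, InterceptorCore) and the same ORMProfilerTestDataContext, and simply adds the Application.DoEvents() call in the spot the post recommends.

        private void recordButton_Click(object sender, EventArgs e)
        {
            var snapshot = new Snapshot();
            snapshot.Record();

            using(var ctx = new ORMProfilerTestDataContext())
            {
                var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
            }

            InterceptorCore.Flush();

            // Pump the queued WM_* messages so the named-pipe reads complete
            // before the snapshot is closed.
            Application.DoEvents();

            snapshot.Stop();

            string error = string.Empty;
            if(!snapshot.IsEmpty)
            {
                snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
            }
            if(!string.IsNullOrEmpty(error))
            {
                MessageBox.Show("Save error: " + error);
            }
        }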

    Read the article

  • Collaborate10 &ndash; THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay’s THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.

    THEanalytics
    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held: two solid days of BI, warehousing and analytics organized by the BIWA SIG at IOUG. I didn’t get to see all sessions, but what struck me was the high interest in analytics. Marty Gubar’s OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel.

    THEdatabasemachine
    Got some very good questions in my “what makes Exadata fast” session, and overall the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe that these newer media point to a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids
    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at RittmanMead spent so much time preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn’t showing OWB on his iPad ;-)

    THEend
    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Windows Azure PowerShell for Node.js

    - by shiju
    The Windows Azure PowerShell for Node.js is a command-line tool that allows Node developers to build and deploy Node.js apps in Windows Azure using Windows PowerShell cmdlets. Using Windows Azure PowerShell for Node.js, you can develop, test, deploy and manage Node-based hosted services in Windows Azure. To get PowerShell for Node.js, click All Programs, then Windows Azure SDK Node.js, and run Windows Azure PowerShell for Node.js as Administrator. The following are a few PowerShell cmdlets that let you work with Node.js apps in Windows Azure.

    Create a new hosted service: New-AzureService <HostedServiceName>
    The cmdlet below creates a Windows Azure hosted service named NodeOnAzure in the folder C:\nodejs, and also creates the ServiceConfiguration.Cloud.cscfg, ServiceConfiguration.Local.cscfg, ServiceDefinition.csdef and deploymentSettings.json files for the hosted service.
    PS C:\nodejs> New-AzureService NodeOnAzure
    The picture below shows the files after creating the hosted service.

    Create a web role: Add-AzureNodeWebRole <RoleName>
    The following cmdlet creates a web role named MyNodeApp along with a web.config file.
    PS C:\nodejs\NodeOnAzure> Add-AzureNodeWebRole MyNodeApp
    The picture below shows the files after creating the web role app.

    Install a Node module: npm install <NodeModule>
    The following command installs the Express Node module into your web role app.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> npm install Express

    Run Windows Azure apps locally in the emulator: Start-AzureEmulator -launch
    The following cmdlet creates a local package and runs the Windows Azure app locally in the emulator.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Start-AzureEmulator -launch

    Stop the Windows Azure emulator: Stop-AzureEmulator
    The following cmdlet stops your Windows Azure app in the emulator.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureEmulator

    Download Windows Azure publishing settings: Get-AzurePublishSettings
    The following cmdlet redirects you to the Windows Azure portal, where you can download the Windows Azure publish settings file.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Get-AzurePublishSettings

    Import Windows Azure publishing settings: Import-AzurePublishSettings <Location of .publishSettings file>
    The following cmdlet imports the publish settings file from the location c:\nodejs.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Import-AzurePublishSettings c:\nodejs\shijuvar.publishSettings

    Publish apps to Windows Azure: Publish-AzureService –name <Name> –location <Location of data centre>
    The following cmdlet publishes the app to Windows Azure with the name “NodeOnAzure” in the Southeast Asia location. Please keep in mind that the service name should be unique.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Publish-AzureService –name NodeOnAzure –location "Southeast Asia" –launch

    Stop a Windows Azure service: Stop-AzureService
    The following cmdlet stops the service which you have deployed previously.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Stop-AzureService

    Remove a Windows Azure service: Remove-AzureService
    The following cmdlet removes your service from Windows Azure.
    PS C:\nodejs\NodeOnAzure\MyNodeApp> Remove-AzureService

    Quick summary of the PowerShell cmdlets:
    - Create a new hosted service: New-AzureService <HostedServiceName>
    - Create a web role: Add-AzureNodeWebRole <RoleName>
    - Install a Node module: npm install <NodeModule>
    - Run Windows Azure apps locally in the emulator: Start-AzureEmulator -launch
    - Stop the Windows Azure emulator: Stop-AzureEmulator
    - Download Windows Azure publishing settings: Get-AzurePublishSettings
    - Import Windows Azure publishing settings: Import-AzurePublishSettings <Location of .publishSettings file>
    - Publish apps to Windows Azure: Publish-AzureService –name <Name> –location <Location of data centre>
    - Stop a Windows Azure service: Stop-AzureService
    - Remove a Windows Azure service: Remove-AzureService

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Philips’ line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this is a collation error, one that good comedians can get mileage from.

    Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation.

    I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding.

    In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days, from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users having permission to approve items and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun...

    Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
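    To make the “Rob” versus “rob” point concrete in code, here is a small sketch of how the same comparison flips depending on the comparer you pick – roughly what a case-sensitive versus case-insensitive collation does inside the database. This is plain .NET string comparison used as an analogy, not SQL Server collation itself; the collation names in the comments are just examples of the two families.

        using System;

        class CollationDemo
        {
            static void Main()
            {
                const string a = "Rob";
                const string b = "rob";

                // Case-sensitive comparison (think of a _CS_AS collation): not equal.
                Console.WriteLine(string.Equals(a, b, StringComparison.Ordinal));           // False

                // Case-insensitive comparison (think of a _CI_AS collation): equal.
                Console.WriteLine(string.Equals(a, b, StringComparison.OrdinalIgnoreCase)); // True
            }
        }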

    Read the article

  • FluentPath: a fluent wrapper around System.IO

    - by Bertrand Le Roy
    .NET is now more than eight years old, and some of its APIs got old with more grace than others. System.IO in particular has always been a little awkward. It’s mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo). In these APIs, paths are plain strings.

    Since .NET v1, lots of good things happened to C#: lambda expressions, extension methods, optional parameters to name just a few. Outside of .NET, other interesting things happened as well. For example, you might have heard about this JavaScript library that had some success introducing a fluent API to handle the hierarchical structure of the HTML DOM. You know? jQuery.

    Knowing all that, every time I need to use the stuff in System.IO, I cringe. So I thought I’d just build a more modern wrapper around it. I used a fluent API based on an essentially immutable Path type and an enumeration of such path objects. To achieve the fluent style, a healthy dose of lambda expressions is being used to act on the objects. Without further ado, here’s an example of what you can do with the new API. In that example, I’m using a Media Center extension that wants all video files to be in their own folder. For that, I need a small tool that creates directories for each video file and moves the files in there. Here’s the code for it:

        Path.Get(args[0])
            .Select(p => p.Extension == ".avi" ||
                         p.Extension == ".m4v" ||
                         p.Extension == ".wmv" ||
                         p.Extension == ".mp4" ||
                         p.Extension == ".dvr-ms" ||
                         p.Extension == ".mpg" ||
                         p.Extension == ".mkv")
            .CreateDirectory(p => p.Parent
                                   .Combine(p.FileNameWithoutExtension))
            .Previous()
            .Move(p => p.Parent
                        .Combine(p.FileNameWithoutExtension)
                        .Combine(p.FileName));

    This code creates a Path object pointing at the path pointed to by the first command line argument of my executable. It then selects all video files. After that, it creates directories that have the same names as each of the files, but without their extension. The result of that operation is the set of created directories. We can now get back to the previous set using the Previous method, and finally we can move each of the files in the set to the corresponding freshly created directory, whose name is the combination of the parent directory and the filename without extension.

    The new fluent path library covers a fair part of what’s in System.IO in a single, convenient API. Check it out, I hope you’ll enjoy it. Suggestions are more than welcome. For example, should I make this its own project on CodePlex or is this informal style just OK? Anything missing that you’d like to see? Is there a specific example you’d like to see expressed with the new API? Bugs? The code can be downloaded from here (this is under a new BSD license): http://weblogs.asp.net/blogs/bleroy/Samples/FluentPath.zip
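    For contrast, here is a rough sketch of the same video-folder chore written against plain System.IO – my own illustration of the style the post is reacting to, not code shipped with FluentPath:

        using System.IO;
        using System.Linq;

        class MoveVideosIntoFolders
        {
            static void Main(string[] args)
            {
                var extensions = new[] { ".avi", ".m4v", ".wmv", ".mp4", ".dvr-ms", ".mpg", ".mkv" };

                foreach (var file in Directory.EnumerateFiles(args[0])
                                              .Where(f => extensions.Contains(Path.GetExtension(f))))
                {
                    // Build <parent>\<file name without extension> and move the file into it.
                    var parent = Path.GetDirectoryName(file);
                    var folder = Path.Combine(parent, Path.GetFileNameWithoutExtension(file));

                    Directory.CreateDirectory(folder);
                    File.Move(file, Path.Combine(folder, Path.GetFileName(file)));
                }
            }
        }

    The fluent version reads as a single pipeline over a set of paths, while the System.IO version has to juggle strings and helper calls by hand, which is exactly the awkwardness the FluentPath wrapper is trying to hide.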

    Read the article

  • jQuery Context Menu Plugin and Capturing Right-Click

    - by Ben Griswold
    I was thrilled to find Cory LaViska’s jQuery Context Menu Plugin a few months ago. In very little time, I was able to integrate the context menu with the jQuery Treeview. I quickly had a really pretty user interface which took full advantage of limited real estate. And guess what – as promised, the plugin worked in Chrome, Safari 3, IE 6/7/8, Firefox 2/3 and Opera 9.5. Everything was perfect and I shipped to the Integration Environment.

    One thing kept bugging me though – right-clicks aren’t the standard in a web environment. Sure, when one hovers over the treeview node, the mouse changes from an arrow to a pointer, but without help text most users will certainly left-click rather than right. As I was already doubting the design decision, we did some Mac testing. The context menu worked in Firefox but not Safari. Damn. That’s when I started digging into the Madness of Javascript Mouse Events. Don’t tell, but it’s complicated. About as close as one can get to capturing the right-click mouse event on all major browsers on Windows and Mac is this:

        if (event.which == null)
            /* IE case */
            button = (event.button < 2) ? "LEFT" : ((event.button == 4) ? "MIDDLE" : "RIGHT");
        else
            /* All others */
            button = (event.which < 2) ? "LEFT" : ((event.which == 2) ? "MIDDLE" : "RIGHT");

    Yikes. The context menu code was simply checking if event.button == 2. No problem. Cory offers a jQuery Right Click Plugin which I’m sure works for Windows but probably not the Mac either. (Please note I haven’t verified this.)

    Anyway, I decided to address my UI design concern and the Safari Mac issue in one swoop: I decided to make the context menu respond to any mouse click event. This didn’t take much – especially after seeing how Bill Beckelman updated the library to recognize the left click. First, I added an AnyClick option to the library defaults:

        // Any click may trigger the dropdown and that's okay
        // See Javascript Madness: Mouse Events – http://unixpapa.com/js/mouse.html
        if (o.anyClick == undefined) o.anyClick = false;

    And then I trigger the context menu dropdown based on the following conditional:

        if (evt.button == 2 || o.anyClick) {

    Nothing tricky about that, right? Finally, I updated my menu setup to include the AnyClick value, if true:

        $('.member').contextMenu({ menu: 'memberContextMenu', anyClick: true },
            function (action, el, pos) {
                …

    Now the context menu works in “all” environments whether you left-, right- or even middle-click. Download jQuery Context Menu Plugin for Any Click.

    *Opera 9.5 has an option to allow scripts to detect right-clicks, but it is disabled by default. Furthermore, Opera still doesn’t allow JavaScript to disable the browser’s default context menu, which causes a usability conflict.

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published:
    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    Developers are “spoilt” persons who expect easy debugging experiences for every technique they work with, so they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself.

    Remote debugging prerequisites
    The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share.

    Debugging symbols prerequisites
    To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info.

    Steps
    In my setup I have a development machine and a build server. To set up remote debugging, I performed the following steps:
    1. Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger.
    2. Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share.
    3. On the build server, go to the shared “Remote Debugger” folder.
    4. Start msvsmon.exe, which is located in the folder that represents the platform of the build server. This will open a small WinForms application.
    5. Go back to your development machine and open the BuildProcess solution.
    6. Start the Attach to Process command (Ctrl+Alt+P).
    7. Type the name of the build server in the Qualifier. In my case the user account that started msvsmon is a different user than the user on my development machine. In that case you have to type the qualifier in the format that is shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>.
    8. Since the build service is running with other credentials, check the option “Show processes from all users”. Now the Attach to Process dialog shows the TFSBuildServiceHost process.
    9. Set the breakpoint in the activity you want to debug and kick off a build.

    Be aware that when you attach to the TFSBuildServiceHost you debug every single build that is run by this windows service, so make sure you don’t debug a build server that is in production!

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
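    If you need something to set a breakpoint in while trying this out, a minimal custom activity looks roughly like the sketch below. The attribute and base class follow the standard TFS 2010 build activity pattern; the activity itself (LogMessageActivity) is just a made-up example, not code from this series.

        using System.Activities;
        using Microsoft.TeamFoundation.Build.Client;
        using Microsoft.TeamFoundation.Build.Workflow.Activities;

        [BuildActivity(HostEnvironmentOption.All)]
        public sealed class LogMessageActivity : CodeActivity
        {
            // Value supplied from the build process template.
            public InArgument<string> Message { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                string message = context.GetValue(this.Message);

                // Put a breakpoint on the next line, then attach to TFSBuildServiceHost
                // as described in the steps above.
                context.TrackBuildMessage(message, BuildMessageImportance.High);
            }
        }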

    Read the article

  • What's new in ASP.Net 4.5 and VS 2012 - part 2

    - by nikolaosk
    This is the second post in a series of posts titled "What's new in ASP.Net 4.5 and VS 2012". You can have a look at the first post in this series here. Please find all my posts regarding VS 2012 here. In this post I will be looking into the various new features available in ASP.Net 4.5 and VS 2012 – in particular the enhancements in the HTML editor, CSS editor and JavaScript editor. In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine in Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system.

    1) Launch VS 2012 and create a new Web Forms application by going to File -> New Web Site -> ASP.Net Web Forms Site.

    2) Choose an appropriate name for your web site.

    3) I would like to point out the new enhancements in the CSS editor in VS 2012. In the Solution Explorer, go to the Content folder and open Site.css. When I try to change the background-color property of the html element, I get a brand new handy color picker. Have a look at the picture below. Please note that the color picker shows all the colors that have been used in this website. You can expand the color picker by clicking on the arrows. Opacity is also supported. Have a look at the picture below.

    4) There are also mobile styles in Site.css. These are based on media queries. Please have a look at another post of mine on CSS3 media queries, and have a look at the picture below. In this case, when the width of the screen is less than 850px there will be a new layout dictated by these new rules. CSS snippets are also supported. Have a look at the picture below: I am writing a new CSS rule for an image element. I write the property transform and hit Tab, and then I have cross-browser CSS handling all of the major vendors. Then I simply add the value rotate and it is applied to all the cross-browser options. Have a look at the picture below. I am sure you realise how productive you can become with all these CSS snippets.

    5) Now let's have a look at the new HTML editor enhancements in VS 2012. You can drag and drop a GridView web server control from the Toolbox into the Site.master file. You will see a smart tag (previously only available in Design View) that you can expand to add fields and format the web server control. Have a look at the picture below.

    6) Code snippets are also available. I type <video and then press Tab twice; by doing that I have the rest of the HTML5 markup completed. Have a look at the picture below.

    7) There is new support for the input tag, including all the HTML5 types and all the new accessibility features. Have a look at the picture below.

    8) Another interesting feature is the new IntelliSense capabilities. When I change the DocType to 4.01 and then type the <audio> and <video> HTML5 tags, IntelliSense does not recognise them and adds squiggly lines. Have a look at the picture below. All these features support ASP.Net Web Forms, ASP.Net MVC applications and Web Pages.

    9) Finally, I would like to show you the enhanced support that we have for JavaScript in VS 2012. We have full IntelliSense support and code snippet support. I create a sample JavaScript file. I type if and press Tab. I type while and press Tab. I type for and press Tab. In all three cases code snippet support kicks in and completes the code. Have a look at the picture below. We also have full IntelliSense support. Have a look at the picture below. I create a simple function and then type XML-like comments for the input parameters. Have a look at the picture below. Then when I call this function, IntelliSense has picked up the XML comments and shows the variables' data types. Have a look at the picture below.

    Hope it helps!!!

    Read the article

  • Friday Fun: The Search For Wondla

    - by Asian Angel
    The best day of the week is finally here again, so it is time to have some fun while waiting to go home for the weekend. The game we have for you today takes you far into humanity’s future, where you journey with Eva Nine in her quest to find other humans.

    Note: Today’s game comes with a double bonus! First, there is a sequel game that you can move on to once you have completed the first one. Second, there are three wallpapers available in multiple sizes for those who enjoy the characters and artwork presented in the game (see below).

    The Search For Wondla
    The object of the game is to find the differences between two similar looking images based on artwork from The Search For Wondla by Tony DiTerlizzi. Are you ready to join Eva Nine in her quest to find other humans in the future? Note: There is a version available for those who would like to play The Search For Wondla on their iPads!

    The first game has 28 levels of difference-finding goodness for you to work through. Each level will list the minimum number of differences that you need to find to progress to the next level. If you need a hint along the way just click on the Shake or Reveal options at the bottom of the game play window. Get a level completed quickly enough and you get bonus points! There will also be differences in the images for individual levels each time you play the game, so have fun! Note: The second game has 12 levels to complete.

    To give you a good feel for the game we have covered the first six levels here and provided seven clues for each level (you are only required to find a minimum of five):
    - Eva Nine viewing the holographic outdoor projections in the main hub of her living quarters…
    - Eva Nine in a grumpy mood as Muthr visits her at bedtime…
    - Eva Nine in her secret hideaway visiting old “childhood friends” as she contemplates her recent survival test failure.
    - Eva Nine viewing the entire set of floor plans for the underground sanctuary where she was born and has been growing up.
    - Eva Nine’s escape to the surface as the underground sanctuary is attacked by the bounty hunter creature Besteel.
    - Eva Nine on the surface for the first time in her young life.

    Will she be successful in her quest? There is only one way to find out!
    Play The Search For Wondla Part 1
    Play The Search For Wondla Part 2

    Bonus Content
    If you have enjoyed this game you can learn more about the book and download the three wallpapers shown here by visiting the link below! Note: The wallpapers come in the following sizes: 1024*768, 1280*800, 1280*1024, 1440*900, iPhone, iPhone4, and iPad (click on the Extras link at the bottom of the page).
    Visit the Search For Wondla Homepage

    Do you enjoy playing difference finding games? Then you will definitely want to have a look at another wonderful game that we have covered here: Friday Fun: Isis

    Read the article

  • ASP.NET MVC for the php/asp noob

    - by dotjosh
    I was talking to a friend today, who’s foremost a PHP developer, about his thoughts on Umbraco, and he said "Well they're apparently working feverishly on the new version of Umbraco, which will be MVC... which I still don't know what that means, but I know you like it." I ended up giving him a ground-up explanation of ASP.NET MVC, so I'm posting this so he can link it to his friends, and for anyone else who finds it useful. The whole goal was to be as simple as possible, not to be focused on proper syntax.

    Model-View-Controller (or MVC) is just a pattern that is used for handling UI interaction with your backend. In a typical web app, you can imagine the *M*odel as your database model, the *V*iew as your HTML page, and the *C*ontroller as the class in between. MVC handles your web request differently than your typical php/asp app. In your php/asp app, your url maps directly to a php/asp file that contains html mixed with database access code and redirects. In an MVC app, your url route is mapped to a method on a class (the controller). The body of this method can do some database access and THEN decide which *V*iew (html/aspx page) should be displayed, putting the controller in charge and not the view... a clear separation of concerns that provides better reusability and generally promotes cleaner code.

    Mysite.com, a quick example:
    Let's say you hit the following url in your application: http://www.mysite.com/Product/ShowItem?Id=4

    To avoid tedious configuration, MVC uses a lot of conventions by default. For instance, the above url would automatically make MVC search for a .NET class with the name "Product" and a method named "ShowItem", based on the pattern of the url. So if you name things properly, your method would automatically be called when you entered the above url. Additionally, it would automatically map/hydrate the "int id" parameter that was in your querystring, matched by name.

    Product.cs
        public class Product : Controller
        {
            public ViewResult ShowItem(int id)
            {
                return View();
            }
        }

    From this point you can write the code in the body of this method to do some database access and then pass a "bag" of data (also known as the ViewData) to your chosen *V*iew (html page) to use for display. The view (html) ONLY needs to be worried about displaying the flattened data that it's been given in the best way it can; this allows the view to be reused throughout your application as *just* a view, and not be coupled to HOW the data for that view gets loaded.

    Product.cs
        public class Product : Controller
        {
            public ViewResult ShowItem(int id)
            {
                var database = new Database();
                var item = database.GetItem(id);
                ViewData["TheItem"] = item;
                return View();
            }
        }

    Again by convention, since the method name is "ShowItem", it'll search for a view named "ShowItem.aspx" by default, and pass the ViewData bag to it to use.

    ShowItem.aspx
        <html>
            <body>
                <%
                    var item = (Item)ViewData["TheItem"];
                %>
                <h1><%= item.FullProductName %></h1>
            </body>
        </html>

    BUT WAIT! WHY DOES MICROSOFT HAVE TO DO THINGS SO DIFFERENTLY!?
    They aren't... here are some other frameworks you may have heard of that use the same pattern in their own way:
    - Ruby on Rails
    - Grails
    - Spring MVC
    - Struts
    - Django
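    As a follow-up to the ViewData "bag" approach above, here is a hedged sketch of the other common option in ASP.NET MVC: passing the item to the view as a strongly typed model, so the view needs no cast. The Item and Database classes are the same made-up placeholders used in the post.

    Product.cs
        public class Product : Controller
        {
            public ViewResult ShowItem(int id)
            {
                var database = new Database();
                Item item = database.GetItem(id);

                // Hand the item to the view as its model instead of stuffing it into ViewData.
                return View(item);
            }
        }

    ShowItem.aspx (with its page directive inheriting from ViewPage<Item>)
        <html>
            <body>
                <%-- Model is already typed as Item, so no cast is needed. --%>
                <h1><%= Model.FullProductName %></h1>
            </body>
        </html>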

    Read the article

  • Silverlight Cream for March 28, 2010 -- #823

    - by Dave Campbell
    In this Issue: Michael Washington, Andy Beaulieu, Bill Reiss, jocelyn, Shawn Wildermuth, Cameron Albert, Shawn Oster, Alex Yakhnin, ondrejsv, Giorgetti Alessandro, Jeff Handley, SilverLaw, deepm, and Kyle McClellan.

    Shoutouts:
    - If I've listed this before, it's worth another... Introduction to Prototyping with SketchFlow (twelve video series) and on the same page is Creating a Beehive Game with Behaviors in Blend 3 (ten video series)
    - Shawn Oster announced his Slides + Code + Video from ‘An Introduction to Developing Applications for Microsoft Silverlight’ from MIX10
    - Tim Heuer announced earlier this week: Silverlight Client for Facebook updated for Silverlight 4 RC
    - Nikhil Kothari announced the availability of his MIX10 Talk - Slides and Code
    - András Velvárt backed up his great MIX09 effort with MIX10.Zoomery.com... everything in one DZ effort... thanks András!
    - Andy Beaulieu posted his material for his Code Camp 13 session in Waltham: Windows Phone: Silverlight for Casual Games

    From SilverlightCream.com:
    - Silverlight MVVM - The Revolution Has Begun: Michael Washington did an awesome tutorial on MVVM and Silverlight, creating a simple Silverlight File Manager. The post has a link to the tutorial at CodeProject... great tutorial.
    - Windows Phone 7 + Silverlight Performance: Andy Beaulieu has a post up we should all bookmark... getting a handle on the graphics performance of our apps on WP7. Great examples, and external links.
    - Space Rocks game step 6: Keyboard handling: Bill Reiss has a post up about keyboard input for the WP7 game he's building... this is Episode 6... you're working along with him, right?
    - Panoramic Navigation on Windows Phone 7 with No Code!: jocelyn at InnovativeSingapore (I found this by way of Shawn's post) has a Panoramic Navigation template out there for WP7 for all of us to grab... great post about it too.
    - My First WP7 Application: Shawn Wildermuth has been playing with WP7 development and has his XBOX Game library app up on the emulator... all with source, of course.
    - Silverlight and Windows Phone 7 Game: Cameron Albert built a web-based game called 'Shape Attack' and also did it for WP7 to compare the performance... check it out for yourself, but hey, it's game source for the phone... cool :)
    - Changing the Onscreen Keyboard layout in Silverlight for Windows Phone using InputScope: Shawn Oster has a cool post on changing the keyboard on WP7 to go along with what you're expecting the user to type... how cool is that??
    - Deep Zoom on WP7: Check out the quick work Alex Yakhnin made of putting DeepZoom on WP7... all source included.
    - How to: Create a sketchy Silverlight GroupBox in Blend/SketchFlow: ondrejsv has the xaml up to take Tim Greenfield's GroupBox control and insert it into SketchFlow.
    - Silverlight / Castle Windsor – implementing a simple logging framework: Giorgetti Alessandro posted about CastleWindsor for Silverlight, and a logging system inherited from LevelFilteredLogger in the absence of Log4Net.
    - DomainDataSource in a ViewModel: Jeff Handley responds to a common forum post about using DomainDataSource in a ViewModel. Read his comments on AutoLoad and ElementName Bindings.
    - Digital Jugendstil TextEffect (Art Nouveau) - Silverlight 3: SilverLaw has a cool TagCloud demo and a UserControl he calls Art Nouveau up at the Expression Gallery... not for a business app, I don't think :)
    - Configuring your DomainService for a Windows Phone 7 application: deepm discusses RIA Services for WP7 and how to enable a WP7 app to communicate with a DomainService.
    - Writing a Custom Filter or Parameter for DomainDataSource: Kyle McClellan, by way of Jeff Handley's blog, discusses how to leverage the custom parameter types you defined in the previous version of RIA Services.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • “I could use a little help here” or “I can do it myself, thank you” for Cloud Projects

    - by BuckWoody
    Windows Azure allows you to write code in languages within the .NET stack, and you can also use Java, C++, PHP, NodeJS and others. Code is code - other than keeping things stateless, using a Web or Worker Role in Azure is not all that different from working with an on-premises system. However, working in a scalable, component-based, stateless architecture that can use federated security is not all that common for many developers. Some are used to owning the server, scaling up, and stateful paradigms that have a single security domain. Making the transition whilst trying to create a new software application, or even port a previous one, can be daunting. Sure, we have absolutely tons of free training, kits, videos, online books and more to learn on your own, but some things like architecture can be pivotal as you move along.

    So the question is: should you just strike out on your own for a Cloud project, or get Microsoft Consulting Services or another partner to work with you on your first one? I use a few decision points to help guide the projects I assist in. Note: I'm a huge fan of having help that ends up giving you training and leaves you in charge. If you do engage with someone to help you, make sure you keep this clear and take more and more ownership yourself as the project progresses.

    How much time do you have? Usually the first thing I ask is about the timeline for the project. It doesn't matter how skilled you are: if you have a short window to get things done, it's better to get help - especially if this is your first cloud project. Having someone that knows the platform well can save you amazing amounts of time. If you have longer, then start with the training in the link above and, once you feel confident, jump in.

    How complex is the project? If there are a lot of moving parts, it's best to engage a partner. The reason is that certain interactions - particularly things like Service Bus or Data Integration - can be quite different from what you may have encountered before.

    How many people do you have? I have a "pizza rule" about projects I've used in my career: if it takes over two pizzas to feed everyone on the project, it's too big and will fail. That being said, one developer and a one-week deadline does not a good project make, usually. It's best to have at least one architect (or someone in that role) guiding the project along, and at least two developers working on a cloud project. That's a generalization of course, since I've seen great software on Azure with one developer writing code all by herself, but for more complex projects, more (to a point) is better. The nice thing about bringing on a partner is that you don't have to hire them full time - they help you and then they go away.

    How critical is the project? There's no shame in using some help. If the platform is new, if the project is large and complex, and if it is critical to the business, you should engage a partner. That's regardless of Cloud or anything else - get some help. You don't want to hit your company's bottom line in a negative way, but you have to innovate and get them a competitive advantage. Do your research, make sure the partner is qualified to help you, and get it done.

    Don't let these questions scare you off. There are lots of projects you can implement on Windows and SQL Azure with nothing other than the Software Development Kit (SDK) that you get for free with Windows Azure. And assistance comes in many forms - sometimes just phone support, a friend you can ask, Microsoft Consulting Services, or any of our great partners. You can get help on just the architecture piece, or have them show you how to write the code. They'll get involved as little or as much as you like.

    Read the article

  • From the Tips Box: Pre-installation Prep Work Makes Service Pack Upgrades Smoother

    - by Jason Fitzpatrick
    Last month Microsoft rolled out Windows 7 Service Pack 1 and, like many SP releases, quite a few people are hanging back to see what happens. If you want to update but still err on the side of caution, reader Ron Troy offers a step-by-step guide. Ron's cautious approach does an excellent job minimizing the number of issues that could crop up in a Service Pack upgrade by doing a thorough job updating your driver sets and clearing out old junk before you roll out the update. Read on to see how he does it:

    Just wanted to pass on a suggestion for people worried about installing Service Packs. I came up with a 'method' a couple years back that seems to work well.

    1. Run Windows / Microsoft Update to get all updates EXCEPT the Service Pack.
    2. Use Secunia PSI to find any other updates you need.
    3. Use CCleaner or the Windows disk cleanup tools to get rid of all the old garbage out there. Make sure that you include old system updates.
    4. Obviously, back up anything you really care about. An image backup can be real nice to have if things go wrong.
    5. Download the correct SP version from Microsoft.com; do not use Windows / Microsoft Update to get it. Make sure you have the 64-bit version if that's what you have installed on your PC.
    6. Make sure that EVERYTHING that affects the OS is up to date. That includes all sorts of drivers, starting with video and audio. And if you have an Intel chipset, use the Intel Driver Utility to update those drivers. It's very quick and easy. For the video and audio drivers, some can be updated by Intel, some by utilities on the vendor web sites, and some you just have to figure out yourself. But don't be lazy here; old drivers and Windows Service Packs are a poor mix.
    7. If you have 3rd party software, check to see if there are any updates for it. They might not say that they are for the Service Pack, but you cut your risk of things not working if you do this.
    8. Shut off the antivirus software (especially if it's 3rd party).
    9. Reboot, hitting F8 to get the Safe Mode menu. Choose Safe Mode with Networking.
    10. Log into the Administrator account to ensure that you have the right to install the SP.
    11. Run the SP. It won't be very fancy this way. Maybe 45 minutes later it will reboot and then finish configuring itself, finally letting you log in.

    Total installation time on most of my PCs was about 1 hour, but that followed hours of preparation on each.

    On a separate note, I recently got on the Nvidia web site and their utility told me I had a new driver available for my GeForce 8600M GS. This laptop had come with Vista and now has Win 7 SP1. I had a big surprise from this driver update; the Windows Experience score on the graphics side went way up. Kudos to Nvidia for doing a driver update that actually helps day-to-day usage. And unlike ATI's updates (which I need for my AGP-based system), this update was fairly quick and very easy. Also, Nvidia drivers have never, as I can recall, given me BSODs, many of which I've gotten from ATI (TDR errors).

    Read the article

  • Five Years of Bonn-to-Code.Net – That Calls for a Celebration!

    - by WeigeltRo
    When I founded the .NET user group "Bonn-to-Code.Net" on January 1, 2006 (my colleague Jens Schaller came up with the brilliant name, inspired by the motto of my blog), I had no idea how quickly everything would develop. After a little advertising through various channels, the first meeting took place as early as February 14, 2006, and a few days later Bonn-to-Code.Net was officially accepted into the circle of INETA user groups. That is now a little over five years ago, and it will be celebrated properly on March 22, 2011 at 19:00 (doors open at 18:30) as part of our March meeting. The evening offers talks on "Flow Design and its Implementation with Event Based Components" as well as "WCF Services done differently" (more detailed information on the talks is available here). Afterwards there will be a big raffle, with books as well as high-profile software prizes to be won. In addition to licenses for JetBrains ReSharper and the Telerik Ultimate Collection, this time (with the kind support of Microsoft Deutschland) one copy each of Windows 7 Ultimate and Office 2010 Professional Plus awaits its lucky winner. And those who don't arrive too late can grab one of many small goodies without needing any raffle luck at all. Registration is not required; directions can be found on the Bonn-to-Code.Net website. I am particularly pleased that for this date we have, among others, a speaker on board who was already present at the founding meeting: Stefan Lieser. Now known, for example, through the Clean Code Developer initiative, Stefan is just one example of a whole series of speakers at the various developer conferences who gained their first experience at Bonn-to-Code.Net, among other places.

    …and what has happened in those five years? Quite a bit! A Community Launch Event in 2007, two Microsoft TechTalks (2007, 2008), guest speakers from all over Germany and from abroad (JP Boodhoo, Harry Pierson). But nothing has shaped these five years as much as the collaboration with "the neighbors from Cologne". At the time Bonn-to-Code.Net was founded, there was no .NET user group in the entire Cologne/Bonn area. So it was not unusual that the first person to respond to my blog post of January 4, 2006 came from Cologne: Albert Weinert. Shortly after the Bonn group, the .NET User Group Köln was finally founded, initiated by Angelika Wöpking and Stefan Lange. Stefan, in turn, had already attended Bonn meetings before the Cologne founding meeting at the end of April; all in all, a lot of personnel overlap between Cologne and Bonn. When Albert and Stefan eventually took over the leadership of the Cologne group after a somewhat bumpy start, it was clear that Cologne and Bonn would work closely together in many respects, be it by coordinating topics and dates or by promoting each other's meetings. The next step came with the involvement of the Cologne and Bonn groups in organizing the "AfterLaunch" in April 2008. The great success of this event was the incentive to open a new chapter in the collaboration. At the beginning of 2009 the dotnet Köln/Bonn e.V. was founded to create a solid foundation for our own large events. In May 2009 the first "dotnet Cologne" followed – a complete success. And with "dotnet Cologne 2010" this conference established itself as the big .NET community event in Germany. On May 6, 2011 the "dotnet Cologne 2011" will take place; behind the scenes, preparations have already been running at full speed for months. All in all, a very exciting five years in which a lot has happened. Let's see what the next five years will bring…

    Read the article

  • SQL SERVER – SQL Server Misconceptions and Resolution – A Practical Perspective – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting there in two different sessions. On the very first day of this event, my presentation will be all about SQL Server Misconceptions and Resolution – A Practical Perspective. The dictionary tells us that a "misconception" is a view or opinion that is incorrect and is based on faulty thinking or understanding. In SQL Server, there are so many misconceptions. In fact, when I hear some of these misconceptions, I feel like fainting at that very moment! Seriously, at one time I came across a scenario where, instead of using INSERT INTO…SELECT, the developer used a CURSOR believing that the cursor is faster (duh!). Here is the link to the blog post related to this.

    Pinal and Vinod in 2009

    I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like in previous years, I decided to present something different. Here is the novelty of this year: I will be presenting this session with Vinod Kumar. Vinod Kumar and I have a great synergy when we work together. So far, we have written one SQL Server Interview Questions and Answers book and 2 video courses: (1) SQL Server Questions and Answers (2) SQL Server Performance: Indexing Basics.

    Pinal and Vinod in 2011

    When we sat together and started building an outline for this session, we had many options in mind for this tango session. However, we have decided to make this session as lively as possible while keeping it natural at the same time. We know our flow and we know our conversation highlights, but we do not know exactly what each of us is going to present. We have decided to challenge each other on stage and push each other's knowledge to the verge. We promise that the session will be entertaining, with lots of SQL Server trivia, tips and tricks. Here are the challenges that I'll take on: I will puzzle Vinod with my difficult questions, and I will present a misconception for which Vinod will have no resolution. I need your help. Will you help me stump Vinod? If yes, come and attend our session and join me to prove that together we are superior (a friendly brain clash, but we must win!). SQL Server enthusiasts and SQL Server fans are going to have a gala time at #TechEdIn, as we have a very solid lineup of speakers and extremely interesting sessions at TechEdIn. Read the complete blog post of Vinod.

    Session Details
    Title: SQL Server Misconceptions and Resolution – A Practical Perspective (Add to Calendar)
    Abstract: "Earth is flat!" – an ancient common misconception, which has been proven incorrect as we progressed into modern times. In this session we will see various prevailing database misconceptions and their resolution with the aid of demos. In this unique session the audience will be part of the conversation and resolution.
    Date and Time: March 21, 2012, 15:15 to 16:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Add to Calendar

    Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is the gift I commit to right now – the SQL Server Interview Questions and Answers book.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • ArcGIS–Getting the Legend Labels out

    - by Avner Kashtan
    Working with ESRI's ArcGIS package, especially the WPF API, can be confusing. There's the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web service calls and information, but not everything. With all that, it can be hard to find specific features between the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement. Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map. In ESRI's own map-editing tools the legend lists the fields that make up each layer's symbology; the Legend control supplied out of the box by ESRI, on the other hand, looks very pretty but is missing the option to display the names of those fields. Luckily, the WPF controls have a lot of templating/extensibility points that allow you to specify the layout of each field:

        <esri:Legend>
          <esri:Legend.MapLayerTemplate>
            <DataTemplate>
              <TextBlock Text="{Binding Layer.ID}"/>
            </DataTemplate>
          </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    but that only replicates the same built-in behavior. I could now add any additional fields I liked, but unfortunately I couldn't find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is the part where ESRI's lack of organization is noticeable, since I can see this data easily when accessing the ArcGis Server's web interface, but I had no idea how to find it as part of the built-in class. Is it a part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service? As it turns out, neither. Since these fields are used by the symbol renderer to determine which symbol to draw, they're actually a part of the layer's Renderer. Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

        public string LegendFields
        {
            get
            {
                if (this.Renderer is UniqueValueRenderer)
                {
                    return (this.Renderer as UniqueValueRenderer).Field;
                }
                else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
                {
                    var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
                    return string.Join(renderer.FieldDelimiter, renderer.Fields);
                }
                else return null;
            }
        }

    For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the comma to separate the fields with. Now it was a simple matter to get the Legend control in line – assuming that it was bound to a collection of MyFeatureLayer:

        <esri:Legend>
          <esri:Legend.MapLayerTemplate>
            <DataTemplate>
              <StackPanel>
                <TextBlock Text="{Binding Layer.ID}"/>
                <TextBlock Text="{Binding Layer.LegendFields}" Margin="10,0,0,0" FontStyle="Italic"/>
              </StackPanel>
            </DataTemplate>
          </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    and get the look I wanted – the list of fields below the layer name, indented.
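    If a layer uses range-based symbology instead, the same pattern should extend naturally. The following is only a sketch under an assumption: it presumes that ClassBreaksRenderer in the ESRI.ArcGIS.Client API exposes a Field property analogous to the one on UniqueValueRenderer, so verify that against the documentation for your version before relying on it.

        // Sketch: the LegendFields property extended to cover ClassBreaksRenderer
        // (range-based symbology). ClassBreaksRenderer.Field is assumed here, not
        // confirmed against a specific API version.
        public string LegendFields
        {
            get
            {
                if (this.Renderer is UniqueValueRenderer)
                    return (this.Renderer as UniqueValueRenderer).Field;

                if (this.Renderer is UniqueValueMultipleFieldsRenderer)
                {
                    var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
                    return string.Join(renderer.FieldDelimiter, renderer.Fields);
                }

                if (this.Renderer is ClassBreaksRenderer)
                    return (this.Renderer as ClassBreaksRenderer).Field;

                return null;
            }
        }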

    Read the article

  • Google TV Gets Bad Reception. Can Media Center Pull in the Signal?

    - by andrewbrust
    The news hit Monday morning that Google has decided to delay the release of its Google TV platform, and has asked its OEMs to delay any products that embed the software.  Coming just about two weeks prior to the 2011 Consumer Electronics Show (CES), Google’s timing is about the worst imaginable.  CES is where the platform should have had its coming out party, especially given all the anticipation that has built up since its initial announcement came 7 months ago. At last year’s CES, it seemed every consumer electronics company had fashioned its own software stack for Internet-based video programming and applications/widgets on its TVs, optical disc players and set top boxes.  In one case, I even saw two platforms on a single TV set (one provided by Yahoo and the other one native to the TV set). The whole point of Google TV was to solve this problem and offer a standard, embeddable platform.  But that won’t be happening, at least not for a while.  Google seems unable to get it together, and more proprietary approaches, like Apple TV, don’t seem to be setting the world of TV-Internet convergence on fire, either. It seems to me, that when it comes to building a “TV operating system,” Windows Media Center is still the best of a bad bunch.  But it won’t stay so for much longer without some changes.  Will Redmond pick up the ball that Google has fumbled?  I’m skeptical, but hopeful.  Regardless, here are some steps that could help Microsoft make the most of Google’s faux pas: Introduce a new Media Center version that uses XBox 360, rather than Windows 7 (or 8), as the platform.  TV platforms should be appliance-like, not PC-like.  Combine that notion with the runaway sales numbers for Xbox 360 Kinect, and the mass appeal it has delivered for Xbox, and the switch form Windows makes even more sense. As I have pointed out before, Microsoft’s Xbox implementation of its Mediaroom platform (announced and demoed at last year’s CES) gets Redmond 80% of the way toward this goal.  Nothing stops Microsoft from going the other 20%, other than its own apathy, which I hope has dissipated. Reverse the decision to remove Drive Extender technology from Windows Home Server (WHS), and create deep integration between WHS and Media Center.  I have suggested this previously as well, but the recent announcement that Drive Extender would be dropped from WHS 2.0 creates the need for me to a) join the chorus of people urging Microsoft to reconsider and b) reiterate the importance of Media Center-WHS integration in the context of a Google compete scenario. Enable Windows Phone 7 (WP7) as a Media Center client.  This would tighten the integration loop already established between WP7, Xbox and Zune.  But it would also counter Echostar/DISH Network/Sling Media, strike a blow against Google/Android (and even Apple/iOS) and could be the final strike against TiVO. Bring the WP7 user interface to Media Center and Kinect-enable it.  This would further the integration discussed above and would be appropriate recognition of WP7’s Metro UI having been built on the heritage of the original Media Center itself.  And being able to run your DVR even if you can’t find the remote (or can’t see its buttons in the dark) could be a nifty gimmick. Microsoft can do this but its consumer-oriented organization, responsible for Xbox, Zune and WP7, has to take the reins here, or none of this will likely work.  There’s a significant chance that won’t happen, but I won’t let that stop me from hoping that it does and insisting that it must.  
Honestly, this fight is Microsoft’s to lose.

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    "Is Code Review Important and Effective?" There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster.
    - It forces developers to write readable code (code that can be read without explanation or introduction!).
    - Optimization methods/tricks/productive programs spread faster.
    - Programmers as specialists "evolve" faster.
    - It's fun.

    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." - Wikipedia

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code Review before Check In: The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    Image 1 – code review before check in

    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.

    Cons:
    - Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! It is cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code Review after Check In: The developer checks in, and random code reviews are performed on the checked-in code.

    Image 2 – Code review after check in

    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis warnings before check-in, and the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid Approach: The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

    Image 3 – Hybrid Approach

    1. Code review high-impact check-ins.
    It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it onto this list of shame more often.

    3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry picking the merges is a possibility and the size of code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say? So I asked a few experts what they thought of a "code review quality gate before checking in code".

    Terje Sandstrom | Microsoft ALM MVP: You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin's. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI's – but…., so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger: I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics). - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article

  • Getting a 'base' Domain from a Domain

    - by Rick Strahl
    Here's a simple one: How do you reliably get the base domain from a full domain name or URI? Specifically, I've run into this scenario in a few recent applications when creating the Forms Auth cookie in my ASP.NET applications, where I explicitly need to force the domain name to the common base domain. So, www.west-wind.com, store.west-wind.com, west-wind.com, dev.west-wind.com should all return west-wind.com. Here's the code where I need to use this type of logic for issuing an AuthTicket explicitly:

        private void IssueAuthTicket(UserState userState, bool rememberMe)
        {
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId,
                                                   DateTime.Now, DateTime.Now.AddDays(10),
                                                   rememberMe, userState.ToString());

            string ticketString = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
            cookie.HttpOnly = true;
            if (rememberMe)
                cookie.Expires = DateTime.Now.AddDays(10);

            // write out a domain cookie
            cookie.Domain = Request.Url.GetBaseDomain();
            HttpContext.Response.Cookies.Add(cookie);
        }

    Unfortunately there's no Uri.GetBaseDomain() method, as I was surprised to find out. So I ended up creating one:

        public static class NetworkUtils
        {
            /// <summary>
            /// Retrieves a base domain name from a full domain name.
            /// For example: www.west-wind.com produces west-wind.com
            /// </summary>
            /// <param name="domainName">Dns Domain name as a string</param>
            /// <returns></returns>
            public static string GetBaseDomain(string domainName)
            {
                var tokens = domainName.Split('.');

                // only split 3 segments like www.west-wind.com
                if (tokens == null || tokens.Length != 3)
                    return domainName;

                var tok = new List<string>(tokens);
                var remove = tokens.Length - 2;
                tok.RemoveRange(0, remove);

                return tok[0] + "." + tok[1];
            }

            /// <summary>
            /// Returns the base domain from a domain name
            /// Example: http://www.west-wind.com returns west-wind.com
            /// </summary>
            /// <param name="uri"></param>
            /// <returns></returns>
            public static string GetBaseDomain(this Uri uri)
            {
                if (uri.HostNameType == UriHostNameType.Dns)
                    return GetBaseDomain(uri.DnsSafeHost);

                return uri.Host;
            }
        }

    I've had a need for this so frequently that it warranted a couple of helpers. The second helper is an extension method on the Uri class, which is what's used in the first code sample. This is the preferred way to call it, since the Uri class can differentiate between DNS names and IP addresses. If you use the first, string-based version, there's a little more guessing going on as to whether a URL is actually an IP address. There are a couple of small twists in dealing with 'domain names'. When passing only a string, there's a possibility of not actually passing a domain name but ending up with an IP address instead, so the code explicitly checks for three domain segments (can there be more than 3?). IPv4 addresses have 4 segments and IPv6 addresses have none, so they'll fall through. Then there are things like localhost or a NetBIOS machine name, which also come back on URL strings but also shouldn't be handled.
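    To make the behaviour concrete, here is a small usage sketch of the two helpers above; the domains are illustrative examples only, and the expected results follow from the code as written:

        // Illustrative usage of the GetBaseDomain helpers defined above.
        var uri = new Uri("http://store.west-wind.com/products");
        Console.WriteLine(uri.GetBaseDomain());                              // west-wind.com
        Console.WriteLine(NetworkUtils.GetBaseDomain("www.west-wind.com"));  // west-wind.com
        Console.WriteLine(NetworkUtils.GetBaseDomain("west-wind.com"));      // west-wind.com (only 2 segments, returned as-is)
        Console.WriteLine(new Uri("http://127.0.0.1/").GetBaseDomain());     // 127.0.0.1 (not a DNS host, falls through to Host)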
    Anyway, small thing but maybe somebody else will find this useful. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Networking.

    Read the article

  • Book Reviews: Art of Community and Eyetracking Web Usability

    - by ultan o'broin
    Holiday time offers a chance to catch up on some user experience and user assistance related material. So, two short book reviews (which I considered using my new Tumblr blog for - more about that another time) are coming up.

    The Art of Community by Jono Bacon

    An excellent starting point for anyone wanting to get going in the community software (FLOSS, for example) space or understand how to set up, manage, and leverage the collective intelligence of communities for whatever ends. The book is a little too long in my opinion, and of course, what Jono recommends needs to be nuanced and adapted for the enterprise applications space (hardly surprising, as there is a lot about Ubuntu, Lug Radio, and so on, given Jono's interests). Shame there wasn't more information on international, non-English community considerations too. Still, some great ideas and insight into setting up and managing communities that I will leverage (watch out for the results on this blog, later in 2011). One section, on collaborative writing, really jumped out. It reinforced the whole idea that successful community initiatives are based on instigators knowing what makes the community tick in the first place. How about this for insight into user profiles for people who write community user assistance (OK then, "doc") and what tools they might use (in this case, we're talking about Jokosher): "Most people who write documentation for open source software projects would fall into the category of power user. They are technology enthusiasts who are not interested in the super-technical avenues of programming, but want to help out. Many of these people have good writing skills and a good knowledge of using the software, so the documentation fit is natural. With Jokosher we wanted to acknowledge this profile of user. As such, instead of focussing on complex text processing tools, we encouraged our documentation contributors to use a wiki." The book is available for free here, as well as being available from the usual sources.

    Eyetracking Web Usability by Jakob Nielsen and Kara Prentice

    Another fine book by established experts. I have some field experience of eyetracking studies myself - in the user assistance for enterprise applications space - though Jakob and Kara concentrate on websites for their research here. I would caution against assuming that findings about websites transfer easily to the applications space, especially enterprise applications, as the book claims. However, Jakob and Kara do make the case very well that understanding design goals (for example, productivity improvement in the case of applications) and the context of the software use is critical. Executing a study using eyetracking technology requires that you know what you want to test, can set up realistic tasks for testing by representative testers, and then analyze the results. Be precise, as lots of data will be generated (I think the authors underplay the effort involved in analyzing the data too). What I found disappointing was the lack of emphasis on eyetracking as only part of the usability solution. It's really for fine-tuning designs in my opinion, and should be used after other design reviews. I also wasn't that crazy about the level of disengagement between the qualitative and quantitative sides of this kind of testing that the book indicated. I think it is useful to have testers verbalize their thoughts and for test engineers to prompt, intervene, or guide as necessary.
More on cultural or international aspects to usability testing might have been included too (websites are available to everyone). To conclude, I enjoyed the book, took on board some key takeaways about methodologies and found the recommendations sensible and easy to follow (for example about Forms layouts). Applying enterprise applications requirements such as those relating to user profiles, design goals, and overall context of use in conjunction with what's in this book would be the way to go here. It also made me think of how interesting it would be to compare eyetracking findings between website and enterprise applications usage.

    Read the article

< Previous Page | 907 908 909 910 911 912 913 914 915 916 917 918  | Next Page >