Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 1229/1338

  • ASP.NET MVC 3 Hosting :: ASP.NET MVC 3 First Look

    - by mbridge
    MVC 3 View Enhancements MVC 3 introduces two improvements to the MVC view engine: - Ability to select the view engine to use. MVC 3 allows you to select from any of your installed view engines from Visual Studio by selecting Add > View (including the newly introduced ASP.NET “Razor” engine): - Support for the new ASP.NET “Razor” syntax. The newly previewed Razor syntax is a concise lightweight syntax. MVC 3 Control Enhancements - Global Filters: ASP.NET MVC 3 allows you to specify a filter which applies globally to all Controllers within an app by adding it to the GlobalFilters collection. The RegisterGlobalFilters() method is now included in the default Global.asax class template and so provides a convenient place to do this since it will then be called by the Application_Start() method: void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new HandleLoggingAttribute()); filters.Add(new HandleErrorAttribute()); } void Application_Start() { RegisterGlobalFilters(GlobalFilters.Filters); } - Dynamic ViewModel Property: MVC 3 augments the ViewData API with a new “ViewModel” property on Controller which is of type “dynamic” – and therefore enables you to use the new dynamic language support in C# and VB to pass ViewData items using a cleaner syntax than the current dictionary API. public ActionResult Index() { ViewModel.Message = "Hello World"; return View(); } - New ActionResult Types: MVC 3 includes three new ActionResult types and helper methods: 1. HttpNotFoundResult – indicates that a resource which was requested by the current URL was not found. HttpNotFoundResult will return a 404 HTTP status code to the calling client. 2. PermanentRedirects – The HttpRedirectResult class contains a new Boolean “Permanent” property which is used to indicate that a permanent redirect should be done. Permanent redirects use an HTTP 301 status code.  The Controller class includes three new methods for performing these permanent redirects: RedirectPermanent(), RedirectToRoutePermanent(), and RedirectToActionPermanent(). All of these methods will return an instance of the HttpRedirectResult object with the Permanent property set to true. 3. HttpStatusCodeResult – used for setting an explicit response status code and its associated description. MVC 3 AJAX and JavaScript Enhancements MVC 3 ships with built-in JSON binding support which enables action methods to receive JSON-encoded data and then model-bind it to action method parameters. For example, a client-side jQuery script could define a “save” event handler which will be invoked when the save button is clicked on the client. The code in the event handler then constructs a client-side JavaScript “product” object with 3 fields with their values retrieved from HTML input elements. Finally, it uses jQuery’s .ajax() method to POST a JSON based request which contains the product to a /theStore/UpdateProduct URL on the server: $('#save').click(function () { var product = { ProdName: $('#Name').val(), Price: $('#Price').val() }; $.ajax({ url: '/theStore/UpdateProduct', type: "POST", data: JSON.stringify(product), dataType: "json", contentType: "application/json; charset=utf-8", success: function () { $('#message').html('Saved').fadeIn(); }, error: function () { $('#message').html('Error').fadeIn(); } }); return false; }); MVC will allow you to implement the /theStore/UpdateProduct URL on the server by using an action method as below. The UpdateProduct() action method will accept a strongly-typed Product object as a parameter.
MVC 3 can now automatically bind an incoming JSON post value to the .NET Product type on the server without having to write any custom binding. [HttpPost] public ActionResult UpdateProduct(Product product) { // save logic here return null; } MVC 3 Model Validation Enhancements MVC 3 builds on the MVC 2 model validation improvements by adding support for several of the new validation features within the System.ComponentModel.DataAnnotations namespace in .NET 4.0: - Support for the new DataAnnotations metadata attributes like DisplayAttribute. - Support for the improvements made to the ValidationAttribute class which now supports a new IsValid overload that provides more info on the current validation context, like what object is being validated. - Support for the new IValidatableObject interface which enables you to perform model-level validation and also provide validation error messages which are specific to the state of the overall model. MVC 3 Dependency Injection Enhancements MVC 3 includes better support for applying Dependency Injection (DI) and also integrating with Dependency Injection/IOC containers. Currently MVC 3 Preview 1 has support for DI in the following places: - Controllers (registering & injecting controller factories and injecting controllers) - Views (registering & injecting view engines, also for injecting dependencies into view pages) - Action Filters (locating and injecting filters)
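As a rough sketch of the IValidatableObject support mentioned above (the ProdName and Price properties mirror the Product object used earlier in this post; the DiscountPrice rule is invented purely for illustration and is not from the original article):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Illustrative model showing model-level validation with IValidatableObject.
public class Product : IValidatableObject
{
    [Required]
    public string ProdName { get; set; }

    public decimal Price { get; set; }
    public decimal DiscountPrice { get; set; }

    // Model-level rule that can look at the state of the whole object,
    // not just a single property.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (DiscountPrice > Price)
        {
            yield return new ValidationResult(
                "Discount price cannot exceed the regular price.",
                new[] { "DiscountPrice" });
        }
    }
}
```

In an action method, ModelState.IsValid then reflects both the attribute checks and the model-level rule.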

    Read the article

  • Formal Languages, Inductive Proofs &amp; Regular Expressions

    - by MarkPearl
    So I am slogging away at my UNISA stuff. I have just finished doing the initial non-stop read through the first 11 chapters of my COS 201 Textbook - “Introduction to Computer Theory 2nd Edition” by Daniel Cohen. It has been an interesting couple of days, with familiar concepts coming up as well as some new territory. In this posting I am going to cover the first couple of chapters of the book. Let's start with Formal Languages… What exactly is a formal language? Pretty much a no duh question for me but still a good one to ask – a formal language is a language that is defined in a precise mathematical way. Does that mean that the English language is a formal language? I would say no – and my main motivation for this is that one can have an English sentence that is correct grammatically that is also ambiguous. For example the ambiguous sentence: “I once shot an elephant in my pyjamas.” For this and possibly many other reasons that I am unaware of, English is termed a “Natural Language”. So why the importance of formal languages in computer science? Again a no duh question in my mind… If we want computers to be effective and useful tools then we need them to be able to evaluate a series of commands in some form of language such that, when interpreted by the device, no confusion will exist as to what we were requesting. Imagine the mayhem that would exist if a computer misinterpreted a command to print a document and instead decided to delete it. So what is a Formal Language made up of… For my study purposes a language is made up of a finite alphabet. For a formal language to exist there needs to be a specification on the language that will describe whether a string of characters has membership in the language or not. There are two basic ways to do this: By a “machine” that will recognize strings of the language (e.g. Finite Automata). By a rule that describes how strings of a language can be formed (e.g. Regular Expressions). When we use the phrase “string of characters”, we can also be referring to a “word”. What is an Inductive Proof? So I am not too far into my textbook and of course it starts referring to proofs and different types. I have had to go through several different approaches of proofs in the past, but I can never remember their formal names, so when I saw “inductive proof” I thought to myself – what the heck is that? Google to the rescue… An inductive proof is like a normal proof but it employs a neat trick which allows you to prove a statement about an arbitrary number n by first proving it is true when n is 1 and then assuming it is true for n=k and showing it is true for n=k+1. The idea is that if you want to show that someone can climb to the nth floor of a fire escape, you need only show that you can climb the ladder up to the fire escape (n=1) and then show that you know how to climb the stairs from any level of the fire escape (n=k) to the next level (n=k+1). Does this sound like a form of recursion? No surprise then that in the same chapter they deal with recursive definitions. An example of a recursive definition for the language EVEN would be the 3 rules below: 2 is in EVEN If x is in EVEN then so is x+2 The only elements in the set EVEN are those that can be produced by the rules above. Nothing too exciting… So if a definition for a language is done recursively, then it makes sense that the language can be proved using induction.
Regular Expressions So I am wondering to myself what use all this is – in fact, I find the biggest challenge with any university material is that it is quite hard to find the immediate practical applications of some theory in real life stuff. How great was my joy when I suddenly saw the word regular expression being introduced. I had been introduced to regular expressions on Stack Overflow where I was trying to recognize if some text measurement put in by a user was in a valid form or not. For instance, the imperial system of measurement where you have feet and inches can be represented in so many different ways. I had eventually turned to regular expressions as an easy way to check if my parser could correctly parse the text or not and convert it to a normalized measurement. So some rules about languages and regular expressions… Any finite language can be represented by at least one, if not more, regular expressions. A regular expression is almost a rule syntax for expressing how regular languages can be formed. Regular expressions are cool. For a regular expression to be valid for a language it must be able to generate all the words in the language and no other words. This is important. It doesn’t help me if my regular expression parses 100% of my measurement texts but also lets one or two invalid texts pass as well. Okay, so this posting jumps around a bit – but introduces some very basic fundamentals for the subject which will be built on in later postings… Time to go and do some practical examples now…
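To make the measurement example concrete, here is a small C# sketch (an illustrative pattern of my own, not the one from the original Stack Overflow question) showing a regular expression that accepts a deliberately small subset of feet-and-inches formats and rejects everything else:

```csharp
using System;
using System.Text.RegularExpressions;

class MeasurementCheck
{
    static void Main()
    {
        // Accepts forms like 5'10", 6 ft 2 in, 71" — a deliberately small subset.
        var pattern = new Regex(
            @"^\s*(?:(?<feet>\d+)\s*(?:'|ft)\s*)?(?:(?<inches>\d+)\s*(?:""|in))?\s*$",
            RegexOptions.IgnoreCase);

        foreach (var text in new[] { "5'10\"", "6 ft 2 in", "71\"", "banana" })
        {
            Match m = pattern.Match(text);
            bool valid = m.Success && (m.Groups["feet"].Success || m.Groups["inches"].Success);
            Console.WriteLine("{0,-10} -> {1}", text, valid ? "valid" : "invalid");
        }
    }
}
```

The point mirrors the rule above: the pattern must accept every valid form you care about and nothing else.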

    Read the article

  • PASS Summit 2010 Recap

    - by AjarnMark
    Last week I attended my eighth PASS Summit in nine years, and every year it is a fantastic event!  I was fortunate my first year to have a contact (Bill Graziano (blog | Twitter) from SQLTeam) that I was expecting to meet, and who got me started on a good track of making new contacts.  Each year I have made a few more, and renewed friendships from years past.  Many of the attendees agree that the pure networking opportunities are one of the best benefits of attending the Summit.  And there’s a lot of great technical stuff, too, some of the things that stick out for me this year include… Pre-Con Monday: PowerShell with Allen White (blog | Twitter).  This was the first time that I attended a pre-con.  For those not familiar with the concept, the regular sessions for the conference are 75-90 minutes long.  For an extra fee, you can attend a full-day session on a single topic during a pre- or post-conference training day.  I had been meaning for several months to dive in and learn PowerShell, but just never seemed to find (or make) the time for it, so when I saw this was one of the all-day sessions, and I was planning to be there on Monday anyway, I decided to go for it.  And it was well worth it!  I definitely came out of there with a good foundation to build my own PowerShell scripts, plus several sample scripts that he showed which already cover the first four or five things I was planning to do with PowerShell anyway.  This looks like the right tool for me to build an automated version of our software deployment process, which right now contains many repeated steps.  Thanks Allen! Service Broker with Denny Cherry (blog | Twitter).  I remembered reading Denny’s blog post on Using Service Broker instead of Replication, and ever since then I have been thinking about using this to populate a new reporting-focused Data Repository that we will be building in the near future.  When I saw he was doing this session, I thought it would be great to get more information and be able to ask the author questions.  When I brought this idea back to my boss, he really liked it, as we had previously been discussing doing nightly data loads, with an option to manually trigger a mid-day load if up-to-the-minute data was needed for something.  If we go the Service Broker route, we can keep the Repository current in near real-time.  Hooray! DBA Mythbusters with Paul Randal (blog | Twitter).  Even though I read every one of the posts in Paul’s blog series of the same name, I had to go see the legend in person.  It was great, and I still learned something new! How to Conduct Effective Meetings with Joe Webb (blog | Twitter).  I always like to sit in on a session that Joe does.  I met Joe several years ago when both he and Bill Graziano were on the PASS Board of Directors together, and we have kept in touch.  Joe is very well-spoken and has great experience with both SQL Server and business.  And we could certainly use some pointers at my work (probably yours, too) on making our meetings more effective and to run on-time.  Of course, now that I’m the Chapter Leader for the Professional Development virtual chapter, I also had to sit in on this ProfDev session and recruit Joe to do a presentation or two for the chapter next year. Query Optimization with David DeWitt.  Anyone who has seen Dr. 
David DeWitt present the 3rd keynote at a PASS Summit over the last three years knows what a great time it is to sit and listen to him make some really complicated and advanced topic easy to understand (although it still makes your head hurt).  It still amazes me that the simple two-table join query from pubs that he used in his example can possibly have 22 million possible physical query plans.  Ouch! Exhibit Hall:  This year I spent more serious time in the exhibit hall than any year past.  I have talked my boss into making a significant (for us) investment in monitoring tools next year, and this was a great opportunity to talk with all the big-hitters.  Readers of mine may recall that I fell in love with the SQL Sentry Power Suite several months ago and wrote a blog entry about it just from the trial version.  Well as things turned out, short-term budget priorities shifted, and we weren’t able to make the purchase then.  I have it in the budget for next year, but since I was going to the Summit, my boss wanted me to look at the other options to see if this was really the one that we wanted.  I spent a couple of hours talking with representatives from Red-Gate, Idera, Confio, and Quest about their offerings, and giving them each the same 3 scenarios that I wanted to be able to accomplish based on the questions and issues that arise in our company.  It was interesting to discover the different approaches or “world view” that each vendor takes to the subject of performance monitoring and troubleshooting.  I may write a separate article that goes into this in more depth, but the product that best aligned with our point of view, and met the current needs we have is still the SQL Sentry Power Suite.  I’m not saying that the others are bad or wrong or anything like that, just that the way they tackled the issue did not align as well with our particular needs as does SQL Sentry’s product.  And that was something I learned too, when you go shopping for these products, you really need to know what you want to get from them.  It’s best if you have a few example scenarios from work that you can use to test out how well each tool fits your particular needs. Overall, another GREAT event.  I can’t wait to get the DVDs so I can sit in on a bunch of other sessions that I couldn’t get to because I was in one of the ones above.  And I can hardly wait until next year!

    Read the article

  • jQuery Datatable in MVC &hellip; extended.

    - by Steve Clements
    There are a million plugins for jQuery and when a web forms developer like myself works in MVC making use of them is par-for-the-course!  MVC is the way now, web forms are but a memory!! Grids / tables are my focus at the moment.  I don’t want to get into writing reams of CSS and HTML, but it’s not acceptable to simply dump a table on the screen; functionality like sorting, paging, a fixed header and perhaps filtering is expected behaviour.  What isn’t always required though is the massive functionality like editing etc you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don’t need it. That is where the jQuery DataTable plugin comes in.  It doesn’t have editing “out of the box” (you can add other plugins as you require to achieve such functionality). What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up any Async actions etc.  Take a look here… http://www.datatables.net I did in the first instance start looking at the Telerik MVC grid control – I’m a fan of Telerik controls and if you are developing an in-house or open source app you get the MVC stuff for free…nice!  Their grid however is far more than I require.  Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won’t cover that here though – mainly because I don’t have a clear answer on the best way to solve it! One nice thing about the dataTable above is how easy it is to extend http://www.datatables.net/examples/plug-ins/plugin_api.html and there are some nifty examples on the site already… I however have a requirement that wasn’t on the site … I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space i.e. everything above the grid doesn’t scroll as well.  Now a CSS master may have a great solution to this … I’m not that master and so didn’t! The content above the grid can vary so any kind of fixed positioning is out. So I wrote a little extension for the DataTable, hooked that up to the document.ready event and window.resize event. Initialising my dataTable(s)… $(document).ready(function () { var dTable = $(".tdata").dataTable({ "bPaginate": false, "bLengthChange": false, "bFilter": true, "bSort": true, "bInfo": false, "bAutoWidth": true, "sScrollY": "400px" }); }); My extension to the API to give me the resizing… // ********************************************************************** // jQuery dataTable API extension to resize grid and adjust column sizes // $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) { var id = oSettings.nTable.id; var dt = $("#" + id); var top = dt.position().top; var winHeight = $(document).height(); var remain = (winHeight - top) - 83; dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;"); this.fnAdjustColumnSizing(); }; This is very much in debug mode, so pretty verbose at the moment – I’ll tidy that up later! You can see the last call is a call to an existing method; as the columns are fixed and that normally involves some CSS voodoo, a call to adjust those sizes is required. Just above is the style that the dataTable gives the grid wrapper div, I got that from some firebug action and stuck in my new height. The -83 is to give me the space at the bottom I require for a fixed footer!
Finally, I hook that up to the load and window resize events.  I’m actually using jQuery UI tabs as well, so I’ve got that in the open event of the tabs.   $(document).ready(function () { var oTable; $("#tabs").tabs({ "show": function (event, ui) { oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable(); if (oTable.length > 0) { oTable.fnSetHeightToBottom(); } } }); $(window).bind("resize", function () { oTable.fnSetHeightToBottom(); }); }); And that's all there is to it.  Testament to the wonders of jQuery and the immense community surrounding it – to which I am extremely grateful. I’ve also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website.  I do hide the out of the box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work and that input comes with it!  Tip: fnFilter is the method you want.  With column index as a param – I used data tags to simplify that one.

    Read the article

  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator. Outline of the problem: Trailing delimiters are empty values at the end of an object in an HL7 ER7 formatted message. Examples: Empty Field NTE|P| NTE|P|| Empty component ORC|1|725^ Empty Subcomponent ORC|1|||||27& Empty repeat OBR|1||||||||027~ Trailing delimiters indicate the following object exists and is empty, which is quite different from null; null is an explicit value indicated by a pair of double quotes -> "". The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow trailing delimiters. NOTE: All Schemas always allow trailing delimiters in the MSH Segment. 1. Using party identifiers. MSH3.1 – Receive/inbound processing: using this value as a party allows you to configure the system to allow inbound trailing delimiters. MSH5.1 – Send/outbound processing: using this value as a party allows you to configure the system to allow outbound trailing delimiters. Generally, if you allow inbound trailing delimiters, then unless you are willing to programmatically remove all trailing delimiters, you also need to configure the send side to allow trailing delimiters. Add the appropriate parties to the BizTalk Parties list from these two fields in your message stream. Open the BizTalk HL7 Configuration tool and for each party check the "Allow trailing delimiters (separators)" check box on the Validation tab. Disadvantage – each MSH3.1 and MSH5.1 value must be represented in the parties list and configured. Advantage – granular control over system behavior for each inbound/outbound system. 2. Using instance properties of a pipeline used in a send port or receive location. Open the BizTalk Server Administration console and locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline. Open the properties. To the right of the selected pipeline, locate the […] ellipsis button. In the property list, locate the "TrailingDelimiterAllowed" property and set it to True. Advantage – all messages through a particular Send Port or Receive Location will allow trailing delimiters. Disadvantage – must configure each Send Port or Receive Location; no granular control over which remote parties will send or receive messages with trailing delimiters. 3. Using a custom pipeline that uses a pre-configured BTA HL7 pipeline component. Use Visual Studio to construct a custom receive and send pipeline using the appropriate assembler or disassembler. Set the component property "TrailingDelimitersAllowed" to True. Compile and deploy the custom pipeline. Use the custom pipeline instead of the standard pipeline for all HL7 message processing. Advantage – all messages using the custom pipeline will automatically allow trailing delimiters. Disadvantage – requires custom coding and development to create and deploy the custom pipeline; no granular control over which remote parties will send or receive messages with trailing delimiters. What does a trailing delimiter do to the XML Schema? Allowing trailing delimiters does not have the impact often expected in the actual XML Schema. The Schema reproduces the message with no data loss. Thus, the message when represented in XML must contain the extra fields in order to reproduce the outbound message, and a trailing delimiter results in an empty XML field. Trailing delimiters are not stripped from the inbound message.
Example: <PID_21>44172</PID_21><PID_21>9257</PID_21> -> the original maximum number of repeats; <PID_21></PID_21> -> the empty repeated field. Allowing trailing delimiters does not remove the trailing delimiters from the message; it simply suppresses the check that would cause the message to fail parsing when trailing delimiters are present. When can you not fix the problem by enabling trailing delimiters? Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If you have more objects in the message than are contained at that location, then enabling trailing delimiters will not resolve the problem; the schema must be extended to accommodate the empty message content. Examples: Extra Field NTE|P|||| - only 4 fields in the NTE Segment; the 4th field exists, but is empty. Extra component PID|1|1523|47^^^^^^^ - only 5 components in a CX data type; the 5th component exists, but is empty. Extra subcomponent ORC|1|||||27&& - only 2 subcomponents in a CQ data type; the 3rd subcomponent is empty, but exists. Extra Repeat PID|1||||||||||||||||||||4419~5217~ - only 2 repeats allowed for the field "Mother's identifier"; the repeat is empty, but exists. In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type. Field: Add a field of ST to the end of the segment with a suitable name in the segments_nnn.xsd. Component: Create a new custom CX data type (i.e. CX_XtraComp) in the datatypes_nnn.xsd and add a new component to the custom CX data type. Update the field in the segments_nnn.xsd file to use the custom data type instead of the standard data type. Subcomponent: Create a new custom CQ data type that accepts an additional TS value at the end of the data type. Create a custom TQ data type that uses the new custom CQ data type as the first subcomponent. Modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard CQ data type. Repeat: Modify the field definition for PID.21 in the segments_nnn.xsd to allow more repeats in the field.
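For readers who want to see the delimiters themselves, here is a small BizTalk-independent C# illustration (plain string handling only, not the accelerator's API) of what trailing field delimiters look like in an ER7 segment and what stripping them would mean:

```csharp
using System;

class TrailingDelimiterDemo
{
    static void Main()
    {
        // NTE segment with trailing field delimiters: the last fields exist but are empty.
        string segment = "NTE|P||";

        // Splitting on the field delimiter keeps the empty trailing entries.
        string[] fields = segment.Split('|');
        for (int i = 1; i < fields.Length; i++)
        {
            Console.WriteLine("NTE-{0}: {1}", i,
                fields[i].Length == 0 ? "<empty - trailing delimiter>" : fields[i]);
        }

        // This is what "allow trailing delimiters" deliberately does NOT do:
        // the empty objects are preserved in the XML, not stripped.
        Console.WriteLine("Stripped form would be: " + segment.TrimEnd('|'));
    }
}
```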

    Read the article

  • CodePlex Daily Summary for Saturday, February 12, 2011

    CodePlex Daily Summary for Saturday, February 12, 2011Popular ReleasesEnhSim: EnhSim 2.4.0: 2.4.0This release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 - Upd...Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example that has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. You can refer to the User's Guide for complete documentation, and use the unit tests as guide...PDF Rider: PDF Rider 0.5.1: Changes from the previous version * Use dynamic layout to better fit text in other languages * Includes French and Spanish localizations Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructionsChoose one of the following methods: 1. Download and run the "pdfRider0.5.1-setup.exe" (reccomended) 2. Down...Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ...RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further Documentation on the MVVM part can be found on the Blog, http://www.SilverlightBlog.Net and in the downloadable source ( mvvm/doc/ ). Please post all issues and suggestions in the issue tracker.SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: *Support for SharePoint 2010 *E-Learning Actions can be localised *Two New Document Library Edit Options *Automatically add the Assignment List Web Part to the Web Part Gallery *Various Bug Fixes for the Drop Box There are 2 downloads for this release SLK-1.5-2010.zip for SharePoint 2010 SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007)GMare: GMare Alpha 02: Alpha version 2. With fixes detailed in the issue tracker.Facebook C# SDK: 5.0.3 (BETA): This is fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. 
For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application Facebook C# SDK v5 Beta Internals Facebook C# SDK V5.0.0 (BETA) Released We have spend time trying ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file...Finestra Virtual Desktops: 1.1: This release adds a few more performance and graphical enhancements to 1.0. Switching desktops is now about as fast as you can blink. Desktop switching optimizations New welcome wizard for Vista/7 Fixed a few minor bugs Added a few more options to the options dialog (including ability to disable the taskbar switching)WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via MercurialyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Video capturing improved Supports YouTube's new layout (january 2011) Internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details. Webdeploy package sha1 checksum: 28785b7248052465ea0738a7775e8e8744d84c27fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroAutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. 
To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.New Projects.net Statistics and Probability: z scores, ectAdvanced Lookup: Yet another custom lookup field. Advanced Lookup uses SharePoint 2010 dialog framework and supports Ajax autocomplete. Pop up dialog page could be any custom web part page containing AdvancedLookupDialogWebPart web part which should be connected to any other web parts on the pageBanico ERP: A Silverlight ERP (Enterprise Resource Planning) application.Behavior in Visual Studio 2010 WPF and Silverlight Designer- Support Tool: This tool supports to add the Behavior, Trigger / Action to the Visual Studio 2010 WPF and Silverlight designer.Branch Navigator: This component can be used for navigating to the nearest branch or station. It can be applicable for company’s websites which already have several distributed branches. It is a completely separated module which can be easily removed from or added to the already existing websites.Consejo Guild Site MVC: This is a project for a website for our WoW guild.Cronus: An application that helps keep track of your time. Setup multiple tasks as part of different projects. Includes some basic reporting (summation) functionality.Custom SharePoint List Item Attachments versions: Recently, I am working on a custom requirement to have maintaining own file versions for SPListItem Attachments with one of my engagements. This forced me to have this code published for community to share IP. DashBoardApp: AppDigibiz Advanced Media Picker: The Digibiz Advanced Media Picker (DAMP) can be used to replace the normal media picker in Umbraco because it has a lot of extra features.DnsShell: DnsShell is a Microsoft DNS administration / management module written for PowerShell 2.0. DnsShell is developed in C#.Dragger - Sokoban clone written in C#: Dragger is a sokoban clone written in WinForms C# in 2008 by CrackSoft. Now its source is availableFingering: ??????Full Thrust Logic: This project is aimed at encapsulating the “Full Thrust” (http://www.groundzerogames.net/) starship miniatures rules. 
This C# business logic library will enable game developers to create games based on these rules at an accelerated pace.jQuery Camera Driver: A jQuery and URL based camera driverLoggingMagic: MSBuild task for adding some logging to your application. Inject calls to Log.Trace at the beginning of each method. Integrates with nlog, log4net or your custom static logger class within your assemblyNBug: NBug is a .NET library created to automate the bug reporting process. It automatically creates and sends: * Bug reports, * Crash reports with minidump, * Error/exception reports with stack trace + ext. info. It can also be set up as a user feedback system (i.e. feature requests).NJHSpotifyEngine: NJHspotifyEngine is a c# wrapper around the Spotify Search API.PragmaSQL: T-SQL script editor with syntax highlighting and lots of other features. Princeton SharePoint User Group CodeShare: Web Parts, script, master pages, and styles used in the creation of the Princeton SharePoint User Group site, located at http://www.princetonsug.com.Reg Explore - Registry editor written in C#: RegExplore is a registry editor written by CrackSoft and released in 2008 It is now made open sourceRegEx TestBed - A regular expression testing tool written in WinForms C#: RegEx TestBed is a regular expression testing tool written in WinForms C# released in 2007 It is now made open source.Soluzione di Single Signon per BPOS: La soluzione di Comedata è in grado di interagire con Active Directory per intercettare le modifiche alla password degli utenti nel dominio locale inserendo la stessa informazione nel sistema remoto Microsoft BpoS (Microsoft Business Productivity Online Standard Suite).syobon: based on opensyobon: http://sf.net/projects/opensyobonTietaaCal: TietaaCal is an opensource agenda/scheduler for Silverlight/MoonlightWCFReactiveX: WCFReactiveX is a .NET framework that provides an unified functional process to communicating with WCF clients built around IObserverable<T> and IObserver<T>

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor and when I look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connection so I could do database operations. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing where we can use ALTER syntax to take the database into single user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written Stored Procedures in industry. Some write code after the outermost BEGIN…END and some write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pane and Query Pane – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query Optimization is a complex game and it has its own rules. From the example in the article we have discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server Views are created – I am very confident that in one or two places you will notice the scenario wherein, in a View, the ORDER BY clause is used with TOP 100 PERCENT. SQL Server 2008 VIEW with ORDER BY clause does not throw an error; moreover, it does not acknowledge the presence of it either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – How to create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in this article are not old. They are still valid and many of the functions are still working as mentioned in the article. I believe this one article will put you on the track to use Azure!
Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that will return the size for each index that is on any particular table. We need a query that will return an additional column in the above listed query and it should contain the size of the index. This article presents two of the best solutions from the puzzle. 2010 Well, this week in 2010 was the week of puzzles as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky but for sure bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will for sure help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it - you enjoy fixing it and start to appreciate the breaking changes. Well, this was exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is a question for you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve the directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Social Business Forum Milano: Day 2

    - by me
    @YourService. The business world has flipped and small business can capitalize, by Frank Eliason (twitter: @FrankEliason) Technology and social media tools have made it easier than ever for companies to communicate with consumers. They can listen and join in on conversations, solve problems, get instant feedback about their products and services, and more. So why, then, are most companies not doing this? Instead, it seems as if customer service is at an all time low, and that the few companies who are choosing to focus on their customers are experiencing a great competitive advantage. At Your Service explains the importance of refocusing your business on your customers and your employees, and just how to do it. Explains how to create a culture of empowered employees who understand the value of a great customer experience. Advises on the need to communicate that experience to their customers and potential customers. Frank Eliason, recognized by BusinessWeek as the 'most famous customer service manager in the US, possibly in the world,' has built a reputation for helping large businesses improve the way they connect with customers and enhance their relationships. Quotes from the Audience: Bertrand Duperrin @bduperrin: social service is not about shutting up the loudest cutsomers ! #sbf12 @frankeliason Paolo Pelloni @paolopelloni Gautam Ghosh @GautamGhosh: RT @cecildijoux: #sbf12 @frankeliason you need to change things and fix the approach it's not about social media it's about driving change Peter H. Reiser @peterreiser: #sbf12 Company Experience = Product Experience + Customer Interactions + Employee Experience @yourservice Engage or lose! Socialize, mobilize, conversify: engage your employees to improve business performance, by Christian Finn (twitter: @cfinn) First Christian was presenting the flying monkey. Then he outlined the four principles to fix the Intranet: 1. Socialize the Intranet 2. Get Thee to a Single Repository 3. Mobilize the Intranet 4. Conversationalize Your Processes Quotes from the Audience: Oscar Berg @oscarberg: Engaged employees think their work bring out the best of their ideas @cfinn #sbf12 http://pic.twitter.com/68eddp48 John Stepper @johnstepper: I like @cfinn's "conversify your processes" A nice related concept to "narrating your work", part of working out loud. http://johnstepper.com/2012/05/26/working-out-loud-your-personal-content-strategy/ Oscar Berg @oscarberg: Organizations are talent markets - socializing your intranet makes this market function better @cfinn #sbf12 For profit, productivity, and personal benefit: creating a collaborative culture at Deutsche Bank, by John Stepper (twitter: @johnstepper) Driving adoption of collaboration + social media platforms at Deutsche Bank. John shared some great best practices on how to deploy an enterprise-wide community model in a large company. He started with the most important question: What is the commercial value of adding social? Then he talked about the success of Communities of Practice deployment and outlined some key use cases including the relevant measures to prove the ROI of the investment. Examples: Community of practice -> measure: systematic collection of value stories; Self-service website -> measure: based on representative models; Optimizing asset inventory -> measure: actual counts. This use case was particularly interesting. It is a crowd-sourced infrastructure spending/saving model. Users can cancel IT services they don't need (for example, Software xx).
5% of the saving goes to social responsibility projects. Then John outlined some best practices on how to address the WIIFM (What's In It For Me) question of the individual users: - change from hierarchy to graph - working out loud = observable work + narrating your work - add social skills to career objectives - example: building a purposeful social network course/training as part of the job development curriculum. And last but not least, John gave some important tips on how to get senior management buy-in by establishing management-sponsored division-level collaboration boards which define clear use cases and measures. These divisional use cases are then implemented using a common social platform. Thanks John - I learned a lot from your presentation! Quotes from the Audience: Ana Silva @AnaDataGirl: #sbf12 what's in it for individuals at Deutsche Bank? Shapping their reputations in a big org says @johnstepper #e20 Ana Silva @AnaDataGirl: Any reason why not? MT @magatorlibero #sbf12 is Deutsche B. experience on applying social inside company applicable to Italian people? Oscar Berg @oscarberg: Your career is not a ladder, it is a network that opens up opportunities - @johnstepper #sbf12 Oscar Berg @oscarberg: @johnstepper: Institutionalizing collaboration is next - collaboration woven into the fabric of daily work #sbf12 Ana Silva @AnaDataGirl: #sbf12 @johnstepper talking about how Deutsche Bank is using #socbiz to build purposeful CoP & save money

    Read the article

  • Some New .NET Toys (Repost)

    - by Kevin Grossnicklaus
    Last week I was fortunate enough to spend time in Redmond on Microsoft’s campus for the 2011 Microsoft MVP Summit. It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams up at Microsoft. The weather wasn’t exactly sunny but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at SafeCo Field). While much of what we saw is covered under NDA, there are a ton of great things in the pipeline from Microsoft and many things that are already available (or just became so) that I wasn’t necessarily aware of. The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today. Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy! Visual Studio 2010 SP1 Microsoft has issued the RTM release of Visual Studio 2010 SP1. You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as: Silverlight 4 included in the box (as opposed to a separate install), Silverlight 4 profiling, WCF RIA Services SP1, IntelliTrace for 64-bit and SharePoint, and ASP.NET support for IIS Express and SQL CE. Want a description of all that’s new beyond the above biased list (which arguably only contains items I think are important)? Check out this KB article. Portable Library Tools CTP Without much fanfare Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (i.e. Silverlight, WPF, Win7 Phone, XBox). With this Add-In installed you can add a new project of type “Portable Library” and specify which platforms you wish to target. Once that is done, any code added to this library will be limited to use only features which are common to all selected frameworks. Other projects can now reference this portable library and be provided assemblies custom built to their environment. This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight. You can find out more about this CTP and how it works on this great blog post. Visual Studio Async CTP Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model. Due to the focus on async programming in all types of platforms (and it being the ONLY option in Silverlight and Win7 phone) a move towards a simpler and more understandable model is always a good thing. This CTP (called Visual Studio Async CTP) can be downloaded here. You can read more about this CTP on this blog post. MSDN Code Samples Gallery Microsoft has also launched a new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/. This site allows you to easily search for small samples of code related to a particular technology or platform. If a sample of code you are looking for is not found, you can request one via the site and other developers can see your request and provide a sample to the site to suit your needs. You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others. Samples are packaged into the VS .vsix format and include any necessary references/dependencies.
By using .vsix as the deployment mechanism, as samples are installed from the site they are kept in your Visual Studio 2010 Samples Gallery for your future reference. If you get a chance, check out the site and see how it is done. Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to suit a need. I’ll definitely be looking there in the future as I need something or want to share something. MSDN Search Capabilities Another item I learned recently and was not aware of (that might seem trivial to some) is the power of the MSDN site’s search capabilities. Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the “Search MSDN with Bing” box at the very top of the page you will see some very interesting results. First, the search actually doesn’t just search the MSDN Library; it searches: MSDN Library, All Microsoft Blogs, CodePlex, StackOverflow, Downloads, MSDN Magazine, and the Support Knowledgebase (I’m not sure it even ends there but the above are all I know of). Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where the result came from. For example, if a keyword search returned results from CodePlex, each row in the search results screen would include a large amount of information specific to CodePlex. Looking at the results immediately tells you everything from the page views to the CodePlex ratings. All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.
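To give a flavour of the programming model the Visual Studio Async CTP mentioned above enables, here is a minimal C# sketch (an illustration only; it assumes a framework where WebClient.DownloadStringTaskAsync is available, via the CTP's extension methods or .NET 4.5, and the URL is just a placeholder):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class AsyncSketch
{
    // Reads top to bottom like synchronous code, but the calling thread
    // is not blocked while the download is in flight.
    static async Task<int> GetPageLengthAsync(string url)
    {
        using (var client = new WebClient())
        {
            string html = await client.DownloadStringTaskAsync(url);
            return html.Length;
        }
    }

    static void Main()
    {
        // Blocking on .Result is fine for a console sketch; UI code would await instead.
        int length = GetPageLengthAsync("http://msdn.microsoft.com").Result;
        Console.WriteLine("Downloaded {0} characters.", length);
    }
}
```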

    Read the article

  • My History with Agile

    - by Robert May
    I’m going to write my history with Agile here.  That way, in future posts, I can refer back to it, instead of typing it out in the post that contains information you may actually want to read.  Note that I’m actually a pretty senior developer, and do lots of technical interviews.  I’m an Agile fan because of the difference it makes in peoples lives and the improvement in quality it brings, and I’ll sacrifice my technological advance to help teams. Management History I started management pretty early in my career, starting with the first job that I ever had.  I actually do NOT have a CS or similar degree.  I have a Bachelor’s of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented.  I didn’t understand the true value of teams, and I’m ashamed to admit, I actually installed a fingerprint scanner as a time clock in this job.  I shudder to think of the impact that I had on the team spirit.  I didn’t even trust them enough to fill out their time cards correctly.  How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, didn’t understand the concept of team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn’t a good thing. I was told that I wasn’t the type that would be a good manager by people whom I respected a lot.  They said it because I was a computer geek, since they don’t understand good management either, but in retrospect, they were right about me then.  I was too green. After my first job, I went on to other jobs and with the exception of one job, I’ve managed people at them all.  The rest of the management story is important for understanding agile, so I’ll save it for my next post. Technical History I’ve been in software development for many, many years.  I technically started programming on a commodore 64 in basic.  I didn’t know that I was programming, but I was sure having fun.  That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect Macro programming and other things that taught me the basics. My first “real” job was with a telephone company, and that’s where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like turbo pascal, vb3-vb5, and semi-real tools like RPG and VisualRPG.  I wrote my first web page in 1994, and built my first data driven web page in 1995 using perlDB.  You really can do anything with Perl.  At this time, I also started a Linux based internet service provider that is still in operation today.  One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well.  Smart guy.  I also built my first ASP applications connecting to Sql Server 6.5, setup Exchange 5.5 for the company, and many other system administration stuff.  I’m a programmer by choice, mostly because I don’t really like PC support. From there, I went on to a large state agency.  I got to see and maintain true waterfall projects.  5 years of maintaining the 200 VB COM+ (MTS, actually) dlls that were used to calculate a single number is a long time.  That was all Microsoft DNS technologies.  SQL Server and VB6 were the tools of choice, although .net started to be a factor near the end of employment.  
I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 that was a shim until MSXML 3.0 came out.  Prior to 3.0, XSDs weren’t supported, and I didn’t want to write DTDs. Ironically, jobs after this were more generic.  I pretty much settled in on the .NET Framework and revisions of it.  Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don’t think that I was as passionate about development and technologies.  I was more into the management of development.  I like people.

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself I have been honored to participate as a judge. It is not an easy job because there is a voluminous amount of nominees from all over the world. Each judge has to read through every word of the nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I can not tell you how many as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it were yesterday. fuzzy thought cloud here. It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years of experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com, and a young upstart to the SQL Community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: The Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, the need to share information becomes necessary. Sharing large amounts of data using snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations are relying heavily on FTP and/or web services to exchange data. Corporations apply a wide range of technologies and techniques based on available resources and data transfer needs. Sometimes, it involves simple home-grown applications. Other times, large investments are made on products like BizTalk, TIBCO etc. Complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data; whereas others may rely on fairly complex processes with heavy interaction with internal and external systems in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over a period of time. Consequently, people and business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity. One of the most important factors in any business is transparency of data irrespective of technology preferences and the complexity of business processes. Not knowing the state of data could become very costly to the business. Being involved in messaging systems for some time now, I have heard the same type of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around-time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or it could take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it's extremely important to be aware of the state of the data.   Architects of messaging systems can take several steps to aid with data transparency. Some form of data handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility into the data. What do I mean by handshaking and reconciliation? Some might consider these to be a single concept; however, I like to consider them in two unique categories. Handshaking serves as message receipts or acknowledgment. When one transmits messages to another, the receiver must acknowledge each message by sending immediate responses for each transaction. Whenever we use web services, handshaking is often achieved utilizing the request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred. I have mentioned earlier that it could take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, processing time for each data element inside the batch could also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways. 
Partner organizations can share and compare ad hoc reports to achieve reconciliation. On the other hand, partners can agree on some type of systematic reconciliation messages. Systems within the responsible parties can trigger messages to partners as soon as the data process completes.   The next step in data transparency is extensive data tracking. Some products such as BizTalk and TIBCO provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow, including message transactions, handshaking, reconciliation, system errors and more. If these types of data are captured, they can be presented to business users in any form or fashion. When business users are empowered with such information, the reliance on developers and support teams decreases dramatically.   In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change. However, when people have easier access to the various states of data, it allows them to make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.
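
    To make the handshaking and reconciliation ideas concrete, here is a minimal C# sketch of what the receiving side's contracts might look like. The type and member names (PartnerMessage, MessageReceipt, IPartnerInbox) are hypothetical and not from the article; the point is simply that every submission gets an immediate acknowledgment (the handshake), while the processing outcome is reported later (the reconciliation).

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class PartnerMessage
        {
            [DataMember] public Guid MessageId { get; set; }
            [DataMember] public string Payload { get; set; }
        }

        [DataContract]
        public class MessageReceipt
        {
            [DataMember] public Guid MessageId { get; set; }
            [DataMember] public DateTime ReceivedUtc { get; set; }
            [DataMember] public bool Accepted { get; set; }
        }

        [ServiceContract]
        public interface IPartnerInbox
        {
            // Handshake: the receiver acknowledges every message immediately (request/reply).
            [OperationContract]
            MessageReceipt Submit(PartnerMessage message);

            // Reconciliation: a later, systematic notification of the processing outcome.
            [OperationContract(IsOneWay = true)]
            void ReportProcessingResult(Guid messageId, bool succeeded, string detail);
        }

    Capturing both the receipts and the reconciliation notifications in a tracking store is then what provides business users the visibility described above.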

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list:
    - Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
    - Brooks’ Law: Adding manpower to a late software project makes it later.
    - Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.
    - Law of Demeter: Each unit should only talk to its friends; don't talk to strangers.
    - Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.)
    - Finagle's Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment. O'Toole's Corollary: The perversity of the Universe tends towards a maximum.
    - Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”)
    - Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
    - Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet.
    - Jackson’s Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.
    - Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program.
    - Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
    - Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do.
    - Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O.
    - Occam's Razor: The simplest explanation is usually the correct one.
    - Parkinson’s Law: Work expands so as to fill the time available for its completion.
    - Quentin Tarantino’s Pie Principle: “…you want to go home have a drink and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?)
    - Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.
    - Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X.
    - Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.
    - Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper, there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out the actual rules.
    - Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply.
    - Wheeler’s Law: All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection. Wheeler also said “Compatibility means deliberately repeating other people's mistakes.”
    - The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back.
    - Yourdon’s Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave.
    - Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-coordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it, however I’m obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to some other interesting annual occurrence – stock take. Despite a bit more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete.  Then also I remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple right? We’ll with a store which could potentially carry over tens of thousands of different items… we’ll let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelve filled with items he or she had to sort then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count correspond to their findings. It was a simple yet hugely responsible orchestration of people and thankfully there was one fundamental and golden rule I could always abide by to ensure things run smoothly – No-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now years later this rule or guideline still holds true as we develop software (as far removed as software development from counting stock may be). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or team) making sure the logic is sound. 
The practice, however, has its obvious drawbacks. Firstly and mainly, it is resource intensive and, from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps to audit your work. It lends a fresh set of eyes to any project you’re working on and, best of all, it is cost effective and fast. Check them out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we’ll dive deeper into how it works and how it can help you.   Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.
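
    To put the "counting your own stock" problem into code, here is a small hypothetical C# example (the Pricing class and the business rule are invented for illustration, and the test uses MSTest). The developer misreads the rule "orders of 10 items or more get a discount", and the unit test, written by the same developer with the same misunderstanding, happily passes:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class Pricing
        {
            // Intended rule: 10 items or more get a 10% discount.
            public static decimal Total(int quantity, decimal unitPrice)
            {
                var total = quantity * unitPrice;
                if (quantity > 10) // Bug: '>' instead of '>=', so exactly 10 items get no discount.
                {
                    total *= 0.9m;
                }
                return total;
            }
        }

        [TestClass]
        public class PricingTests
        {
            [TestMethod]
            public void ExactlyTenItemsPayFullPrice()
            {
                // Same author, same misunderstanding: the test encodes the flawed boundary,
                // so it passes and the defect ships.
                Assert.AreEqual(100m, Pricing.Total(10, 10m));
            }
        }

    A second pair of eyes, whether a reviewer or an auto-generated test suite like the one PEX produces, is far more likely to probe the quantity == 10 boundary than the original author is.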

    Read the article

  • Simultaneously calling multiple methods on a WCF service from silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app, as the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I’d share, maybe it can help the next person with this problem. The App On start, the app would do a number of calls to different methods on a WCF service, this to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This was resulting in quite a long loading time for the whole UI, which was set up so it wouldn’t let the user interact with anything, until all the service calls had finished. First I broke out the longer running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently. The Problem However this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn’t activate until the long running call returned. So now, I did what I should have done to start with, I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long running service method was placed, all subsequent call were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped… I knew of the issues where Silverlight is restricted by the browsers networking features in regards to number of simultaneous connections etc. However that just didn’t seem to be the issue here, you can clearly see in Fiddler that there’s numerous calls, but they’re just not returning. I thought of the problem maybe being in the WCF service, but the calls were really not that complicated and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario, I hit the search engines. I did a whole bunch of searching on things like “multiple simultaneous WCF calls from Silverlight” and “Calling long running WCF services from Silverlight” etc. etc. This however, pretty much got me nowhere, I found a whole heap of resources on how to do WCF calls from Silverlight but most of them were very basic and of no use what so ever. The fog is clearing It wasn’t until I came across the term “ WCF blocking calls” and started incorporating that in my searches I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum “Long-running WCF call blocking subsequent calls” which discussed the exact problem I was facing and the best part, one of the guys there had the solution! The short answer is in the forum post and the guys answering, has also done a more extensive blog post about it called “Silverlight, WCF, and ASP.Net Configuration Gotchas” which covers it very well.  So come on what’s the solution?! I heard you ask, unless you’ve already gone to the links and looked it up ;) The Solution Well, it turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF, basically if you’re doing multiple calls to a single WCF web-service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long running calls will block subsequent ones. 
So why is ASP.NET session state affecting us when we’re working in Silverlight? Well, as mentioned earlier, by default Silverlight uses the browser’s networking stack when doing service calls, hence to the WCF service, the call looks like it might as well be coming from a normal ASP.NET page. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack. The Client HTTP Stack to the rescue By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight’s own networking stack, rather than that of the browser, we get around the ASP.NET - WCF session state issue. The above code specifies that all calls to addresses starting with "http://" should go through the client stack; this can actually be set more granularly, and you can specify it to be used only for certain domains etc. (see the sketch at the end of this post). Summary The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola
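
    As a footnote to the granularity point above, here is a minimal sketch of what the registration might look like in App.xaml.cs (MainPage is the default template's root page). The first call opts every http:// request into the client stack, exactly as in the snippet quoted in the post; the commented-out alternative shows the more granular, per-prefix form, and the service address used there is hypothetical:

        using System.Net;
        using System.Net.Browser;
        using System.Windows;

        public partial class App : Application
        {
            // Startup handler wired up by the default Silverlight template (App.xaml.cs).
            private void Application_Startup(object sender, StartupEventArgs e)
            {
                // Route all http:// traffic through Silverlight's client HTTP stack,
                // bypassing the browser stack (and the ASP.NET session state blocking issue).
                WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

                // Alternatively, opt in only the calls made to a specific service host
                // (hypothetical address) and leave everything else on the browser stack:
                // WebRequest.RegisterPrefix("http://services.example.com/", WebRequestCreator.ClientHttp);

                this.RootVisual = new MainPage();
            }
        }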

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Abr/10)

    - by Claudia Costa
    It is with enthusiasm that we announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. With the goal of helping partners develop competencies, Oracle University and Oracle Alliances&Channel designed this boot camp, compacting the content and thereby reducing costs. Price per participant (3 days) - 1,250 EUR + VAT. Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.
    Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.
    Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will:
    • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology
    • Provide strategic direction about Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
    • Expose a broad set of Oracle's ECM technologies.
    Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).
    Topics Covered
    • Introduction to Oracle UCM: UCM Overview; UCM Architecture Overview
    • Content Server and Document Management basics: Installation and Administration Skills (User and Security Admin; Configuration (metadata, DCLs, profiles, rules, etc.); Workflow Admin; System Properties and Component Manager; Managing Subscriptions); Contributing Content (Browser form; WebDAV folder; Desktop Integration); Searching
    • Web Content Management: Site Studio
    • Universal Records Management
    • Information Rights Management (IRM)
    • Image & Process Management (IPM)
    • Oracle Document Capture
    • Oracle eMail Archive Service
    Labs
    • Content Server Installation
    • Use and Administration of Content Server
    • Introduction to Site Studio
    • Use and Administration of Records Manager
    Demo: The R&D Group and the New Patent. Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management
    Case Study
    Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and eventually public Web site.
    Use Case 2: Help Acme & Co with archiving; its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.
    Agenda:
    Day 1
    • ECM Overview & Content Server
    • ECM Overview
    • ECM Architecture and Installation
    • UCM and Digital Asset Management DEMO
    • Lab 1 - Content Server Installation
    • Lab 2 - Use and Administration of Content Server
    Day 2
    • Web Content Management
    • Lab 2 - Use and Administration of Content Server (continued)
    • Introduction to Web Content Management
    • Lab 3 - Site Studio
    Day 3
    • URM/IRM/IPM
    • Introduction to Universal Records Management
    • Lab 4 - URM
    • Introduction to Information Rights Management
    • Information Rights Management DEMO
    • Introduction to Image and Process Management
    • Image and Process Management Demo
    • Oracle Document Capture
    • Oracle eMail Archive
    Material needed for the boot camp: This boot camp requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements: Hardware • RAM: 2 GB minimum (1 GB RAM is not enough) • HDD: 15 GB free HDD space
    Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.
    ---------------------------------------------------------------------
    For more information/registration, please contact: Mónica Pires, 21 423 51 44. Schedule and location: 9:30 - 12:30 and 14:00 - 17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • Geocaching - World wide treasure hunt

    I'm not quite sure how I came across this topic but actually I find it absolutely interesting, challenging and most of all great fun for family and friends. The interesting part is for sure that you can follow other people's treasures and their preferred locations where a cache might be hidden. Of course, it won't be easy to find a cache after all. Sometimes there are even 'mystery caches' which have either riddles, further instructions or little brain games for you in order to find the actual cache - that's the challenge. And last but not least, those caches are hidden outdoors. A great experience to explore nature either on your own, with your family (especially with children), or as a treasure hunting pack with a couple of friends. What is geocaching? It's a high-tech outdoor treasure hunting game that's a great way to explore the world with friends, family or on your own. Participants use GPS-enabled devices to locate hidden containers called geocaches. There are over one million geocaches hidden around the world today, waiting for you to find them. Visit Geocaching.com to search for geocaches near you. (Source: referral email of geocaching.com) Check out the Geocaching 101 for further details and information. They also provide a video channel on YouTube. Which equipment do I need? Any GPS-enabled device is sufficient to go on the hunt. I'm going to start our geocaching experience equipped with my Samsung Galaxy Tab. Additionally, I installed a geocaching.com client called c:geo that will hopefully assist me soon. Combined with a map app like Google Maps and a nice compass app you should be fully equipped and ready to go. I guess that even a car navigation system is perfect for that task. Later on, with more experience and demand for technology (or precision), it might be interesting to opt in for a pure GPS device, like a Garmin or any other brand on the market. What is a geocache and what does it contain? In its simplest form, a cache always contains a logbook or logsheet for you to log your find. Larger caches may contain a logbook and any number of items. These items turn the adventure into a true treasure hunt. You never know what the cache owner or visitors to the cache may have left for you to enjoy. Remember, if you take something, leave something of equal or greater value in return. It is recommended that items in a cache be individually packaged in a clear, zipped plastic bag to protect them from the elements. Finding your first geocache Well, first you have to have the interest to pick up the challenge. Then you have to check out the geocache directory on geocaching.com. They have recommendations for beginner's caches but you are free to choose any. Actually, we have a Mystery Cache very close to our base, and I guess that we are going for that one on our first trip. Anyway, there is a very informative guide on the website which should answer all your questions about starting your new outdoor adventure. For sure, it's going to be rewarding. Team up with friends and family Especially as a beginner there might be misunderstandings in handling the GPS coordinates, the compass, or the map, and even finding the container at the documented position isn't easy in the first place. Luckily, there are logbook reports online from other hunters, and most of the time there are even 'spoiler' images available. But also bear in mind that a geocache might have been removed or lost due to careless people or whatever other reasons. 
Don't be disappointed in case you can't find anything... there might be nothing there anymore. A general recommendation in this case would be to replace the missing container with a new one, and give feedback to the original owner about the state of that particular location. After all, it's about fun and active participation in a world-wide community. Geocaches in Mauritius? Yes, there are currently about 45 geocaches spread all over the island, and even a single one in Rodrigues - that's gonna be a tough one. Hopefully, we will get increasing numbers, as Geocaching.com allows, or better, even encourages you to hide new containers at your locations of choice. I think this is going to be real fun for us during the upcoming weeks and months. Especially when we are travelling to other countries and transfer so-called trackable items between geocaches. On my first impression, Geocaching.com seems to be very mature, open and community-oriented. There are literally hundreds of thousands of geocache 'hunters' all over the world. And usually finding a container remote from your home is very rewarding. I'll keep you updated on these matters during the months to come...

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most of the systems with more than one CPU. Books Online: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem. CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). Due to some reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client’s side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait. Note that not all the CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query. Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server workload type. OLTP: On a pure OLTP system, where the transactions are smaller and queries are not long but usually very quick, set the “Maximum Degree of Parallelism” to 1 (one). This way it makes sure that the query never goes for parallelism and does not incur more engine overhead. EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the “Maximum Degree of Parallelism” to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in their performance due to multiple processors. EXEC sys.sp_configure N'max degree of parallelism', N'0' GO RECONFIGURE WITH OVERRIDE GO Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the “Maximum Degree of Parallelism” to 2, which means the query still uses parallelism but only on 2 CPUs. However, I keep the “Cost Threshold for Parallelism” very high. This way, not all the queries will qualify for parallelism but only the query with higher cost will go for parallelism. I have found this to work best for a system that has OLTP queries and also where the reporting server is set up. Here, I am setting ‘Cost Threshold for Parallelism’ to a value of 25 (which is just for illustration); you can choose any value, and you can find it out by experimenting with the system only. In the following script, I am setting the ‘Max Degree of Parallelism’ to 2, which indicates that the query that will have a higher cost (here, more than 25) will qualify for parallel query to run on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself. EXEC sys.sp_configure N'cost threshold for parallelism', N'25' GO EXEC sys.sp_configure N'max degree of parallelism', N'2' GO RECONFIGURE WITH OVERRIDE GO Read all the posts in the Wait Types and Queues series. Additionally, a must-read is the comment of Jonathan Kehayias. 
Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest you all read the online book for further clarification. All the discussion of Wait Stats over here is generic and it varies from system to system. It is recommended that you test this on the development server before implementing it on the production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • $.fadeTo/fadeOut() operations on Table Rows in IE fail

    - by Rick Strahl
    Here’s a a small problem that one of customers ran into a few days ago: He was playing around with some of the sample code I’ve put out for one of my simple jQuery demos which deals with providing a simple pulse behavior plug-in: $.fn.pulse = function(time) { if (!time) time = 2000; // *** this == jQuery object that contains selections $(this).fadeTo(time, 0.20, function() { $(this).fadeTo(time, 1); }); return this; } it’s a very simplistic plug-in and it works fine for simple pulse animations. However he ran into a problem where it didn’t work when working with tables – specifically pulsing a table row in Internet Explorer. Works fine in FireFox and Chrome, but IE not so much. It also works just fine in IE as long as you don’t try it on tables or table rows specifically. Applying against something like this (an ASP.NET GridView): var sel = $("#gdEntries>tbody>tr") .not(":first-child") // no header .not(":last-child") // no footer .filter(":even") .addClass("gridalternate"); // *** Demonstrate simple plugin sel.pulse(2000); fails in IE. No pulsing happens in any version of IE. After some additional experimentation with single rows and various ways of selecting each and still failing, I’ve come to the conclusion that the various fade operations in jQuery simply won’t work correctly in IE (any version). So even something as ‘elemental’ as this: var el = $("#gdEntries>tbody>tr").get(0);$(el).fadeOut(2000); is not working correctly. The item will stick around for 2 seconds and then magically disappear. Likewise: sel.hide().fadeIn(5000); also doesn’t fade in although the items become immediately visible in IE. Go figure that behavior out. Thanks to a tweet from red_square and a link he provided here is a grid that explains what works and doesn’t in IE (and most last gen browsers) regarding opacity: http://www.quirksmode.org/js/opacity.html It appears from this link that table and row elements can’t be made opaque, but td elements can. This means for the row selections I can force each of the td elements to be selected and then pulse all of those. Once you have the rows it’s easy to explicitly select all the columns in those rows with .find(“td”). Aha the following actually works: var sel = $("#gdEntries>tbody>tr") .not(":first-child") // no header .not(":last-child") // no footer .filter(":even") .addClass("gridalternate"); // *** Demonstrate simple plugin sel.find("td").pulse(2000); A little unintuitive that, but it works. Stay away from <table> and <tr> Fades The moral of the story is – stay away from TR, TH and TABLE fades and opacity. If you have to do it on tables use the columns instead and if necessary use .find(“td”) on your row(s) selector to grab all the columns. I’ve been surprised by this uhm relevation, since I use fadeOut in almost every one of my applications for deletion of items and row deletions from grids are not uncommon especially in older apps. But it turns out that fadeOut actually works in terms of behavior: It removes the item when the timeout’s done and because the fade is relatively short lived and I don’t extensively test IE code any more I just never noticed that the fade wasn’t happening. Note – this behavior or rather lack thereof appears to be specific to table table,tr,th elements. I see no problems with other elements like <div> and <li> items. Chalk this one up to another of IE’s shortcomings. Incidentally I’m not the only one who has failed to address this in my simplistic plug-in: The jquery-ui pulsate effect also fails on the table rows in the same way. 
sel.effect("pulsate", { times: 3 }, 2000); and it also works with the same workaround. If you’re already using jquery-ui definitely use this version of the plugin which provides a few more options… Bottom line: be careful with table based fade operations and remember that if you do need to fade – fade on columns. © Rick Strahl, West Wind Technologies, 2005-2010

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle service bus can be used as for pass through casses. Some use cases require propagating the http-response code back to the caller. http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example we will try to accomplish in this tutorial.We will try to demonstrate this feature using Oracle Service Bus (11.1.1.3.0. We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1, httpcomponents-core-4.0.1 for writing the client to demonstrate.First we create a simple JSP which will always set response code to 304.The JSP snippet will look like <%@ page language="java"     contentType="text/xml;     charset=UTF-8"        pageEncoding="UTF-8" %><%      System.out.println("Servlet setting Responsecode=304");    response.setStatus(304);    response.flushBuffer();%>We will now deploy this JSP on weblogic server with URI=http://localhost:7021/reponsecode/For this JSP we will create a simple Any XML BS We will also create proxy service as shown below Once the proxy is created we configure pipeline for the proxy to use route node, which invokes the BS(JSPCaller) created in the first place. So now we will create a error handler for route node and will add a stage. When a HTTP BS sends a request, the JSP sends the response back. If the response code is not 200, then the http BS will consider that as error and the above configured error handler is invoked. We will print $outbound to show the response code sent by the JSP. The next actions. To test this I had create a simple clientimport org.apache.http.Header;import org.apache.http.HttpEntity;import org.apache.http.HttpHost;import org.apache.http.HttpResponse;import org.apache.http.HttpVersion;import org.apache.http.client.methods.HttpGet;import org.apache.http.conn.ClientConnectionManager;import org.apache.http.conn.scheme.PlainSocketFactory;import org.apache.http.conn.scheme.Scheme;import org.apache.http.conn.scheme.SchemeRegistry;import org.apache.http.impl.client.DefaultHttpClient;import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;import org.apache.http.params.BasicHttpParams;import org.apache.http.params.HttpParams;import org.apache.http.params.HttpProtocolParams;import org.apache.http.util.EntityUtils;/** * @author MNEELAPU * */public class TestProxy304{    public static void main(String arg[]) throws Exception{     HttpHost target = new HttpHost("localhost", 7021, "http");     // general setup     SchemeRegistry supportedSchemes = new SchemeRegistry();     // Register the "http" protocol scheme, it is required     // by the default operator to look up socket factories.     
supportedSchemes.register(new Scheme("http",              PlainSocketFactory.getSocketFactory(), 7021));     // prepare parameters     HttpParams params = new BasicHttpParams();     HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);     HttpProtocolParams.setContentCharset(params, "UTF-8");     HttpProtocolParams.setUseExpectContinue(params, true);     ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params,              supportedSchemes);     DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);     HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");     System.out.println("executing request to " + target);     HttpResponse rsp = httpclient.execute(target, req);     HttpEntity entity = rsp.getEntity();     System.out.println("----------------------------------------");     System.out.println(rsp.getStatusLine());     Header[] headers = rsp.getAllHeaders();     for (int i = 0; i < headers.length; i++) {         System.out.println(headers[i]);     }     System.out.println("----------------------------------------");     if (entity != null) {         System.out.println(EntityUtils.toString(entity));     }     // When HttpClient instance is no longer needed,      // shut down the connection manager to ensure     // immediate deallocation of all system resources     httpclient.getConnectionManager().shutdown();     }}On compiling and executing this we see the below output in STDOUT which clearly indicates the response code was propagated from Business Service to Proxy serviceexecuting request to http://localhost:7021----------------------------------------HTTP/1.1 304 Not ModifiedDate: Tue, 08 Jun 2010 16:13:42 GMTContent-Type: text/xml; charset=UTF-8X-Powered-By: Servlet/2.5 JSP/2.1----------------------------------------  

    Read the article

  • Simple Excel Export with EPPlus

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/10/30/simple-excel-export-with-epplus.aspxAnyone I’ve ever met who works with an application that sits in front of a lot of data loves it when they can get that data exported to an Excel file for them to mess around with offline. As both developer and end user of a little website project that I’ve been working on, I found myself wanting to be able to get a bunch of the data that the application was collecting into an Excel file. The great thing about being both an end user and a developer on a project is that you can build the features that you really want! While putting this feature together I came across the fantastic EPPlus library. This library is certainly very well known and popular, but I was so impressed with it that I thought it was worth a quick blog post. This library is extremely powerful; it lets you create and manipulate Excel 2007/2010 spreadsheets in .NET code with a high degree of flexibility. My only gripe with the project is that they are not touting how insanely easy it is to build a basic Excel workbook from a simple data source. If I were running this project the approach I’m about to demonstrate in this post would be front and center on the landing page for the project because it shows how easy it really is to get started and serves as a good way to ease yourself in to some of the more advanced features. The website in question uses RavenDB, which means that we’re dealing with POCOs to model the data throughout all layers of the application. I love working like this so when it came time to figure out how to export some of this data to an Excel spreadsheet I wanted to find a way to take an IEnumerable<T> and just have it dumped to Excel with each item in the collection being modeled as a single row in the Excel worksheet. Consider the following class: public class Employee { public int Id { get; set; } public string Name { get; set; } public decimal HourlyRate { get; set; } public DateTime HireDate { get; set; } } Now let’s say we have a collection of these represented as an IEnumerable<Employee> and we want to be able to output it to an Excel file for offline querying/manipulation. As it turns out, this is dead simple to do with EPPlus. Have a look: public void ExportToExcel(IEnumerable<Employee> employees, FileInfo targetFile) { using (var excelFile = new ExcelPackage(targetFile)) { var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1"); worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true); excelFile.Save(); } } That’s it. Let’s break down what’s going on here: Create a ExcelPackage to model the workbook (Excel file). Note that the ‘targetFile’ value here is a FileInfo object representing the location on disk where I want the file to be saved. Create a worksheet within the workbook. Get a reference to the top-leftmost cell (addressed as A1) and invoke the ‘LoadFromCollection’ method, passing it our collection of Employee objects. Behind the scenes this is reflecting over the properties of the type provided and pulling out any public members to become columns in the resulting Excel output. The ‘PrintHeaders’ parameter tells EPPlus to grab the name of the property and put it in the first row. Save the Excel file All of the heavy lifting here is being done by the ‘LoadFromCollection’ method, and that’s a good thing. Now, this was really easy to do, but it has some limitations. Using this approach you get a very plain, un-styled Excel worksheet. 
The column widths are all set to the default. The number format for all cells is ‘General’ (which proves particularly interesting if you have a DateTime property in your data source). I’m a “no frills” guy, so I wasn’t bothered at all by trading off simplicity for style and formatting. That said, EPPlus has tons of samples that you can download that illustrate how to apply styles and formatting to cells and a ton of other advanced features that are way beyond the scope of this post.
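
    As an example of working around the 'General' number format limitation mentioned above, here is a sketch of the same export method with two extra lines. It assumes EPPlus's Numberformat and AutoFitColumns APIs, and the column index assumes HireDate lands in the fourth column, as it would with the Employee class shown earlier:

        using System.Collections.Generic;
        using System.IO;
        using OfficeOpenXml; // EPPlus

        public void ExportToExcel(IEnumerable<Employee> employees, FileInfo targetFile)
        {
            using (var excelFile = new ExcelPackage(targetFile))
            {
                var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
                worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true);

                // Show HireDate as a date rather than the raw 'General' value.
                worksheet.Column(4).Style.Numberformat.Format = "yyyy-mm-dd";

                // Size every column to fit its contents instead of the default width.
                worksheet.Cells[worksheet.Dimension.Address].AutoFitColumns();

                excelFile.Save();
            }
        }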

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent by the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes to token to the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: It is expected to add [ValidateAntiForgeryToken] to each controller, but actually I have to add it for each POST actions, which is a little crazy; After anti-forgery validation is turned on for server side, AJAX POST requests will consistently fail. Specify validation on controller (not on each action) Problem For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become always invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If user sends a HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... 
} Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class of ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all HTTP POST actions. Submit token via AJAX Problem For AJAX scenarios, when request is sent by JavaScript instead of form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution The token must be printed to browser then submitted back to server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin:(function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() instead of $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard coded and stupid. If you have more elegant solution, please do tell me.

    Read the article

  • JavaScript Class Patterns

    - by Liam McLennan
    To write object-oriented programs we need objects, and likely lots of them. JavaScript makes it easy to create objects: var liam = { name: "Liam", age: Number.MAX_VALUE }; But JavaScript does not provide an easy way to create similar objects. Most object-oriented languages include the idea of a class, which is a template for creating objects of the same type. From one class many similar objects can be instantiated. Many patterns have been proposed to address the absence of a class concept in JavaScript. This post will compare and contrast the most significant of them.
    Simple Constructor Functions
    Classes may be missing, but JavaScript does support special constructor functions. By prefixing a call to a constructor function with the ‘new’ keyword we can tell the JavaScript runtime that we want the function to behave like a constructor and instantiate a new object containing the members defined by that function. Within a constructor function the ‘this’ keyword references the new object being created – so a basic constructor function might be: function Person(name, age) { this.name = name; this.age = age; this.toString = function() { return this.name + " is " + age + " years old."; }; } var john = new Person("John Galt", 50); console.log(john.toString()); Note that by convention the name of a constructor function is always written in Pascal Case (the first letter of each word is capitalised). This is to distinguish constructor functions from other functions. It is important that constructor functions are called with the ‘new’ keyword and that non-constructor functions are not. There are two problems with the constructor function pattern shown above: it makes inheritance difficult, and the toString() function is redefined for each new object created by the Person constructor. The latter is sub-optimal because the function should be shared between all of the instances of the Person type.
    Constructor Functions with a Prototype
    JavaScript functions have a special property called prototype. When an object is created by calling a JavaScript constructor, all of the properties of the constructor’s prototype become available to the new object. In this way many Person objects can be created that all access the same prototype. An improved version of the above example can be written: function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { toString: function() { return this.name + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); In this version a single instance of the toString() function is shared between all Person objects.
    Private Members
    The short version is: there aren’t any. If a variable is defined with the var keyword within the constructor function, then its scope is that function. Other functions defined within the constructor function will be able to access the private variable, but anything defined outside the constructor (such as functions on the prototype property) won’t have access to it. Any properties assigned inside the constructor (on this) are automatically public. Some people work around this by prefixing such properties with an underscore and, by convention, not calling them from outside:
function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { _getName: function() { return this.name; }, toString: function() { return this._getName() + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); Note that the _getName() function is only private by convention – it is in fact a public function.
    Functional Object Construction
    Because of the weirdness involved in using constructor functions, some JavaScript developers prefer to eschew them completely. They theorize that it is better to work with JavaScript’s functional nature than to try to force it to behave like a traditional class-oriented language. When using the functional approach, objects are created by returning them from a factory function. An excellent side effect of this pattern is that variables defined within the factory function are accessible to the new object (due to closure) but are inaccessible from anywhere else. The Person example implemented using the functional object construction pattern is: var personFactory = function(name, age) { var privateVar = 7; return { toString: function() { return name + " is " + age * privateVar / privateVar + " years old."; } }; }; var john2 = personFactory("John Lennon", 40); console.log(john2.toString()); Note that the ‘new’ keyword is not used for this pattern, and that the toString() function has access to the name, age and privateVar variables because of closure. This pattern can be extended to provide inheritance and, unlike the constructor function pattern, it supports private variables. However, when working with JavaScript code bases you will find that the constructor function pattern is more common – probably because it is a better approximation of mainstream class-oriented languages like C# and Java.
    Inheritance
    Both of the above patterns can support inheritance, but for now, favour composition over inheritance.
    Summary
    When JavaScript code grows beyond simple browser automation, object orientation can provide a powerful paradigm for controlling complexity. Both of the patterns presented in this article work – the choice is a matter of style. Only one question still remains: who is John Galt?
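    As a rough sketch of the inheritance that both patterns allude to but defer, prototype chaining with the constructor function pattern might look like the following. The Student type, its fields, and the use of Object.create() (ES5, or a shim) are assumptions for illustration, not part of the original article:
    // Person as defined in the "Constructor Functions with a Prototype" section above.
    function Person(name, age) {
        this.name = name;
        this.age = age;
    }
    Person.prototype = {
        toString: function() { return this.name + " is " + this.age + " years old."; }
    };

    // Hypothetical Student subtype, chaining prototypes to reuse Person's behaviour.
    function Student(name, age, school) {
        Person.call(this, name, age);   // run the parent constructor on the new object
        this.school = school;
    }
    Student.prototype = Object.create(Person.prototype);  // requires ES5 or an Object.create shim
    Student.prototype.constructor = Student;
    Student.prototype.toString = function() {
        return Person.prototype.toString.call(this) + " (" + this.school + ")";
    };

    var jane = new Student("Jane", 20, "State University");
    console.log(jane.toString()); // "Jane is 20 years old. (State University)"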

    Read the article

  • How-to call server side Java from JavaScript

    - by frank.nimphius
    The af:serverListener tag in Oracle ADF Faces allows JavaScript to call into server-side Java. The example shown below uses an af:clientListener tag to invoke client-side JavaScript in response to a key stroke in an Input Text field. The script then calls a defined af:serverListener by the name given in its type attribute. The server listener can be defined anywhere on the page, though from a code readability perspective it is a good idea to put it close to where it is invoked. <af:inputText id="it1" label="...">   <af:clientListener method="handleKeyUp" type="keyUp"/>   <af:serverListener type="MyCustomServerEvent"                      method="#{mybean.handleServerEvent}"/> </af:inputText> The JavaScript function below reads the event source from the event object that gets passed into the called JavaScript function. The call to the server-side Java method, which is defined on a managed bean, is issued by a JavaScript call to AdfCustomEvent. The arguments passed to the custom event are the event source, the name of the server listener, a message payload of key:value pairs, and true/false indicating whether or not to make the call immediate in the request lifecycle. <af:resource type="javascript">     function handleKeyUp (evt) {    var inputTextComponent = evt.getSource();       AdfCustomEvent.queue(inputTextComponent,                         "MyCustomServerEvent",                         {fvalue: inputTextComponent.getSubmittedValue()},                         false);    evt.cancel();}   </af:resource> The server-side managed bean method uses a single-argument signature with the argument type being ClientEvent. The client event provides information about the event source object - as provided in the call to AdfCustomEvent - as well as the payload keys and values. The payload is accessible from a call to getParameters, which returns a HashMap from which the values can be read by their key identifiers.  public void handleServerEvent(ClientEvent ce){    String message = (String) ce.getParameters().get("fvalue");   ...  
} Find the tag library at: http://download.oracle.com/docs/cd/E15523_01/apirefs.1111/e12419/tagdoc/af_serverListener.html

    Read the article
