Search Results

Search found 7217 results on 289 pages for 'nice'.


  • WebLogic Partner Community Newsletter June 2013

    - by JuergenKress
    Dear WebLogic Partner Community member, this month's newsletter contains tons of Java material following the availability of Java EE 7, including many articles in the May/June Java Magazine and the Duke Award nomination! Thanks for all your efforts to become certified and Specialized. All experts who achieved the WebLogic Server 12c Certified Implementation Specialist or ADF 11g Certified Implementation Specialist certification can download a logo for their blog or business card at the Competence Center. All companies that achieved a WebLogic and ADF Specialization can request a nice plaque for their office. Summer time is training time - make sure you attend our Fusion Middleware Summer Camps or one of our upcoming Exalogic Implementation Workshops by PTS in Spain, Turkey and the Middle East. For those who cannot make it we offer plenty of online courses, like the new ADF Academy, or the webcast The value of Engineered Systems for SAP. At our WebLogic Community Workspace (WebLogic Community membership required) you can find the Engineered Systems Strategy presentation. Thanks to the community for all the WebLogic best practice papers and tools, like Automated provision of Oracle WebLogic Server Platform, Frank's Coherence videos on YouTube, Create and deploy applications on the Oracle Java Cloud, WebLogic Application redeployment using shared libraries - without downtime, and Oracle Traffic Director. The first test deployments of WebLogic on Oracle Database Appliance are on the way; thanks to all who are sharing their experience: Sizing & Configuration and Traffic Director, plus Virtualised Oracle Database Appliance Proof of Concept - #1 Planning from Simon Haslam. It would be great if you could also share your experience via Twitter @wlscommunity! In the ADF section of the newsletter you will find Tuning Application Module Pools and Connection Pools, List View - Cool Looking ADF PS6 Component for Collections, Insider Essentials, User Interface, Skinning, and an Oracle Forms to ADF Modernization reference. Great summer time! Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsJune2013 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Can I prevent an IDENTIFY PACKET DEVICE command to a specific device at boot?

    - by Brian Spisak
    This is related to a previous installation question that is now resolved. I'm opening a new question because I still need to get my DVD drive working. Problem: failed boot when my ASUS DRW-24B1/ST DVD drive is attached to my ASMedia ASM1061. Symptom: ata8.00: exception Emask 0x52 Sact 0x0 SErr 0xffffffff action 0xe frozen ata8: SError: { blah blah } ata8.00: failed command: IDENTIFY PACKET DEVICE ata8.00: cmd blah blah res blah blah (ATA bus error) ata8.00: status: { DRDY } ata8: hard resetting link Background: the ASM1061 is a PCIe to SATA bridge providing 2 x 6Gb/s ports and is supposed to be fully compliant with the SATA specs. I just discovered in the fine print of my ASUS P8Z77-V Pro motherboard that "These SATA ports are for data hard drives only. ATAPI devices are not supported." However, I have already installed Windows 7 using this drive and I can run the Ubuntu 12.04 installer from it as well. The only time I have a problem is during Ubuntu boot when it tries an IDENTIFY PACKET DEVICE, which seems to be an ATAPI command. I can't simply switch this device to another SATA port because they are already allocated to other devices. (My chipset's 2 x 6Gb/s ports are connected to my boot SSD and a fast HDD while the 4 x 3Gb/s ports are running a RAID 5 array.) If this can't be fixed or worked around, I suppose I'll have to go buy a SATA add-in card. Blech. Thoughts: if indeed this is a device-specific issue (i.e. it doesn't support ATAPI discovery) then I can't expect - is it udev? - to work with it. But it seems that Windows and even the Ubuntu installer work just fine. So why does udev have a problem? At the end of the day, it would be nice to have the DVD working under Ubuntu, but I can live without it. But, as this is a dual-boot machine, I can't physically disconnect it because I want it to work with Windows. (And physically disconnecting it every time I want to boot Ubuntu is NOT an option. ;-) Questions: Should this be considered a bug? My feeling is that if it works with other OSes, it should probably work with Ubuntu as well. How can I work around this problem? I have limited knowledge of Linux internals, but it seems I should be able to somehow tell udev (or whatever is doing the discovery) to ignore that device. Is there a way?

    Read the article

  • Friday Fun: Play Your Favorite 8-Bit NES Games Online

    - by Mysticgeek
    We finally made it to another Friday and once again we bring you some NES fun to waste the rest of the day before the weekend. Today we take a look at a site that contains a lot of classic NES games you can play online. vNES VirtualNES.com contains hundreds of vintage NES games you can play online. If you're old enough to remember, when the NES came out it breathed life back into home console gaming. Here we will take a look at a few of the games they offer that will certainly bring back memories. Super Mario Bros 3 is a personal favorite from the 8-bit era. Play Super Mario Bros 3 Excite Bike was one of the coolest dirt bike racing games at the time, as it even allowed you to create your own tracks. Play ExciteBike Of course The Legend of Zelda was one of the first fantasy games many an hour has been spent on. Play The Legend of Zelda We'd be remiss if we didn't bring up Pac-Man, since the game recently celebrated its 30th anniversary. Play Pac-Man If you don't like the default keyboard controls you can change them on the Options page. Join their forum and more… this site will definitely bring you back to the good old 8-bit NES days. The site contains hundreds of different games for you to get your old-school NES fix. If you're sick of waiting for the whistle to blow, this site will bring you back to the good old days when you had nothing to do but mash buttons all day. Play NES Games at virtualnes.com

    Read the article

  • Stop YouTube Videos from Automatically Playing in Chrome

    - by The Geek
    If you've actually used the internet before, you've probably come across a page with an auto-playing YouTube clip, and chances are good it was a rather annoying one. Here's how to stop them from starting automatically in Chrome. We've already told you how to stop them from automatically playing if you're a Firefox user (best answer: use Flashblock!), but now it's time for Chrome users to get their turn. Use the Stop Autoplay for YouTube Extension The great thing about this extension is that it stops the video from playing, but it allows it to continue buffering, so when you do feel like playing the video, it'll already be downloaded - really useful for people with slower internet connections. There's no UI or anything fancy; just head to the extension page and click the Install button. If you want to get rid of it later, use the Tools -> Extensions menu (or you can type chrome://extensions/ into your address bar), and then click the Uninstall link for that add-on. Download Stop Autoplay for YouTube [Google Chrome Extensions] Using FlashBlock for Chrome If you really wanted to, you could just disable Flash across the board using FlashBlock for Chrome. Once you've installed the extension, you won't see any Flash elements anywhere, and you'll have to move your mouse over them and click to enable them each time. When I installed the extension the first time, I noticed that YouTube was already in the allow list. I'm not sure if that's the default setting or not, but you can use the icon in the address bar, or the Options from the Extensions panel, to get to the settings page, and from there you can remove anything from the White List that you wouldn't want. Another nice feature of FlashBlock is that it can also block Silverlight. (Alternatively, you could simply uninstall or remove unnecessary Chrome plug-ins.) Download FlashBlock for Chrome

    Read the article

  • Using Movemail with Thunderbird on Ubuntu

    - by rxt
    I'm trying to read local mail with Thunderbird on Ubuntu (with both 12.04 and 13.04). I've followed the instructions found here: How can I access system mail in /var/mail/ via Thunderbird? I can read mail on the system using alpine or vim, so I know the mailbox is not empty. When I click the get-mail button, nothing happens. I see no Inbox (or any folder structure) for the specific account. I've set the rights for /var/mail to 1777. Settings: server name: localhost, username: john. How can I get this working? Okay, considering the extra bounty, I would like to get this working like normal mail. The accepted answer from Qasim resulted in a much more usable situation than before - opening mail in Thunderbird with layout. I still face three problems though. 1. When new mail is received in the mailbox, Thunderbird won't see it until after I restart Thunderbird. 2. When Thunderbird is restarted, all mail is reset to unread and deletions are undone. This is probably because Thunderbird reads the mail from the /var/mail/www-data file but doesn't update this file, so after restarting it simply reads this file again, with the new mail and all the old mail. 3. This is probably a postfix issue: mail is sent out to existing mail addresses, but cannot be delivered because the receiving mail server cannot be reached. This results in "Undelivered mail returned to sender". Only one mail server can be reached: localhost. Because this is a test system, I don't want real customers to receive mail; I've blocked mail ports in UFW to be sure. When opening the returned mail, I can scroll down and then I see the original mail with proper layout. So I can read the mail, see if the proper images are included, and for me that's workable. Having to restart Thunderbird to read new mail - I know when new mail arrives, so I know when to restart. Having old mail restored after a restart - not a big problem either; I can delete the mail file if it gets too much. I know how it works, but it would be nice if it worked like normal mail.

    Read the article

  • Canonicalization issue regarding academic URL vs. blog URL

    - by user5395
    I'm sorry if what I am about to write is long-winded. I only wish to be clear. I am an academic in the scientific community. I maintain a web site for my research, teaching, and other professional activities. Until recently, the content for this site was hosted in a directory on my university department's own server. The address is of the typical form (universityname).edu/~(myusername) I decided that I wanted to use WordPress in order to host and manage my page. So I set up a WordPress.com blog and then replaced the index.html file in (universityname).edu/~(myusername) with a new one consisting of a single frame, containing the WordPress.com blog. Now when a user visits (universityname).edu/~(myusername), he or she sees the blog instead. This has been pretty nice because, even when the user clicks on links between pages or posts in the blog, the only thing showing up in the address bar of the browser is www.(universityname).edu/~(myusername), because the blog is constrained to a frame. However, the effect of this change on the search side of things has not been so kind to me. Before, when someone searched for my name in Google, the first result was always (universityname).edu/~(myusername). This is the most desirable outcome, for professional reasons. (Having my academic URL come up first suggests that I am an accredited professional, and not just some crank with a blog!) But now, Google seems to have canonicalized my web presence under the blog's WordPress.com address. It has completely forgotten about my academic URL and considers the WordPress.com address to be the best address representing me on the web. Unfortunately, WordPress.com doesn't support the canonical tag, so I can't tell the blog to advertise itself as my academic URL in the header. (It doesn't seem to help at all that I have used the WordPress.com dashboard to turn on no-indexing of the blog.) One obvious solution would be to use the departmental server to host my content again, and use a local installation of the WordPress platform. For reasons beyond my control, the platform will not be deployed on the departmental server at this time. Another solution would be to use shared hosting with WordPress.org support, because the WordPress.org platform does support the canonical tag (albeit via a plug-in). But this seems to usually require purchasing a domain name and other fees, and there is no guarantee that Google will listen to the canonical tag (it might use whatever domain name I end up with instead). Is there a way I can more cleverly integrate the WordPress.com blog into a page hosted on my department's server? Is there some PHP code I can write to retrieve the blog's contents in a way that Google won't treat as a link / "perceive" the blog? Please note: I am a PHP novice at best. I just feel there should be a simpler solution to all this, within the constraints of what I have described above. Thanks!

    Read the article

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server: many Tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that TomEE support has recently been added to the NetBeans IDE development builds. Note: the TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:
      - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
      - create a Java EE 6 web project (e.g., Maven based) against this server
      - create JPA entities from a database
      - create JAX-RS classes from JPA entities
      - create JSF pages from JPA entities
      - create a new data source for TomEE and deploy it to the server
      - let the IDE figure out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package components such as the JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small
      - use "deploy on save" with TomEE, so that your development cycle is very fast
    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up the TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc. Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE. And here's the management console for configuring and fine-tuning TomEE in NetBeans IDE. When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running... and you have a really powerful combination of dev tools, all for free.
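    To make the "create JAX-RS classes from JPA entities" step a little more concrete, here is a minimal hand-written sketch of the kind of entity-plus-resource pair such a wizard is aimed at. This is only an illustration under assumptions: the Customer entity, its fields, and the "demoPU" persistence unit name are invented here and are not the wizard's actual output.

      // Customer.java - a plain JPA entity (illustrative sketch only)
      import javax.persistence.Entity;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.xml.bind.annotation.XmlRootElement;

      @Entity
      @XmlRootElement
      public class Customer {
          @Id
          @GeneratedValue
          private Long id;
          private String name;

          public Long getId() { return id; }
          public void setId(Long id) { this.id = id; }
          public String getName() { return name; }
          public void setName(String name) { this.name = name; }
      }

      // CustomerResource.java - a JAX-RS resource exposing the entity (illustrative sketch only)
      import java.util.List;
      import javax.ejb.Stateless;
      import javax.persistence.EntityManager;
      import javax.persistence.PersistenceContext;
      import javax.ws.rs.Consumes;
      import javax.ws.rs.GET;
      import javax.ws.rs.POST;
      import javax.ws.rs.Path;
      import javax.ws.rs.PathParam;
      import javax.ws.rs.Produces;
      import javax.ws.rs.core.MediaType;

      @Stateless
      @Path("customers")
      public class CustomerResource {

          // "demoPU" is an assumed persistence unit name, not taken from this article
          @PersistenceContext(unitName = "demoPU")
          private EntityManager em;

          @GET
          @Produces(MediaType.APPLICATION_XML)
          public List<Customer> findAll() {
              return em.createQuery("SELECT c FROM Customer c", Customer.class).getResultList();
          }

          @GET
          @Path("{id}")
          @Produces(MediaType.APPLICATION_XML)
          public Customer find(@PathParam("id") Long id) {
              return em.find(Customer.class, id);
          }

          @POST
          @Consumes(MediaType.APPLICATION_XML)
          public void create(Customer customer) {
              em.persist(customer);
          }
      }

    Because TomEE already ships the persistence provider and the JAX-RS runtime, classes like these are essentially all the WAR needs to contain, which is exactly the packaging optimization described above.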

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments, then I came across this site, so my apologies to anyone who has seen this already.) I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white. This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: How do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this? It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

    Read the article

  • Installing MOSS 2007 on Windows 2008 R2

    - by Manesh Karunakaran
    When you try to install MOSS 2007 on Windows 2008 R2, if you are using installation media older than SP2, you will get the following error, saying that "This program is blocked due to compatibility issues". All is not lost though: all you need to do is slipstream the SP2 updates into the MOSS 2007 setup. Here's a nice how-to on how to do that: http://blogs.technet.com/seanearp/archive/2009/05/20/slipstreaming-sp2-into-sharepoint-server-2007.aspx Once you slipstream the SP2 updates, you will be able to continue with the installation without the above error. HTH. You may have already read on blogs about the April Cumulative Update for separate components in SharePoint. Now, the server packages (also known as "Uber" packages) of the April Cumulative Update for Microsoft Office SharePoint Server 2007 and Windows SharePoint Services 3.0 are ready for download. Download Information Windows SharePoint Services 3.0 April cumulative update package http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968850 Office SharePoint Server 2007 April cumulative update package http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968851 Detail Description Description of the Windows SharePoint Services 3.0 April cumulative update package http://support.microsoft.com/kb/968850 Description of the Office SharePoint Server 2007 April cumulative update package http://support.microsoft.com/kb/968851 Installation Recommendation for a fresh SharePoint Server To keep all files in a SharePoint installation up to date, the following sequence is recommended: 1. Service Pack 2 for Windows SharePoint Services 3.0, 2. Service Pack 2 for Office SharePoint Server 2007, 3. April Cumulative Update package for Windows SharePoint Services 3.0, 4. April Cumulative Update package for Office SharePoint Server 2007. Please note: starting from the April Cumulative Update, the packages will no longer install on a farm without a service pack installed. You must have installed either Service Pack 1 (SP1) or SP2 prior to the installation of the cumulative updates. After applying the preceding updates, run the SharePoint Products and Technologies Configuration Wizard or "psconfig -cmd upgrade -inplace b2b -wait" on the command line. This needs to be done on every server in the farm with SharePoint installed. The version of content databases should be 12.0.6504.5000 after successfully applying these updates. For more in-depth guidance for the update process, we recommend that customers refer to the following articles. These articles provide a correct way to deploy updates, identify known issues (and resolutions), and provide information about creating slipstream builds. Deploy software updates for Windows SharePoint Services 3.0 http://technet.microsoft.com/en-us/library/cc288269.aspx Deploy software updates for Office SharePoint Server 2007 http://technet.microsoft.com/en-us/library/cc263467.aspx Create an installation source that includes software updates (Windows SharePoint Services 3.0) http://technet.microsoft.com/en-us/library/cc287882.aspx Create an installation source that includes software updates (Office SharePoint Server 2007) http://technet.microsoft.com/en-us/library/cc261890.aspx

    Read the article

  • Microsoft Virtual Academy

    Carpe Diem It's been a while since I last wrote an article for this blog but, alas, I was (and still am) very busy with customer work. Which is actually good. So, what is this article going to tell you? Well, in general, just what I already tweeted: that life is a constant process of learning - especially as a software craftsman. Due to an upcoming new customer project in ASP.NET I had to seize the opportunity to get my head deeper into the latest available technologies, like Windows Azure and SQL Azure. I know... cloud computing and so on is not a recent development and has been available for quite a while, but I never had the chance to get myself into it until roughly two weeks ago. Microsoft Virtual Academy I can't remember exactly what guided me towards the Microsoft Virtual Academy (MVA), oh wait... Yes, it was a posting on Facebook from an old CLIP community friend. He posted a shortened URL with an #MVA tag that caught my attention. Thanks for that, Thomas Kuberek. After the usual sign-in or registration via Live ID I was a little bit surprised that Mauritius is not an available country option... A quick mail exchange with the MVA Dean followed, and yeah, apologies for the missing entry. So, currently I'm learning about Microsoft products and services, and collecting points under "Not Listed Country" until Mauritius is added. Hopefully soon, as MVA honors your effort with different knowledge ranks that are compared to other students with public profiles. I think it's a nice move to add a game and competition factor to learning. The tracks and their different modules are mainly references to publicly available material online, namely on MSDN, TechNet, Channel9, or other Microsoft-based sites. The course material therefore also varies in media and format, ranging from simple online articles over downloadable documents (.docx or .pdf) to Silverlight / Windows Media streams with download options. Self-assessment and student ranking Each module in a track can be finished by taking part in a self-assessment. Up to now, the assessments I did (and passed) were limited to 10 minutes of available time and consisted of six to seven questions on the module training material. Nothing too serious, but it gives you a glimpse of how Microsoft certification exams are structured. Conclusion Nothing really new, but nicely gathered, assembled and presented to the MVA students. At the moment, I wouldn't dare to compare the richness and quality of those courses with professional training offers like Pluralsight .NET Training, LearnDevNow, VTC, etc., but I think that MVA has potential. Give it a try, and let me know your opinions.

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis. This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted. Visual Studio 2008 by default selected all rules, and then you had to remove rules on an item-by-item basis. The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al. And within the project properties you can select one rule set or multiple rule sets, or you can define your own rule set based upon another. Selecting a single rule set is obviously the easiest option. The default rule set when you create a new project is the "Microsoft Minimum Recommended Rules". However, in my opinion the recommended rules are just too permissive. For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; or alternatively you can select multiple rule sets, which is an option in the rule set combo box. The Visual Studio documentation has comprehensive help on what is contained within the rule sets. Creating your own rule set is easy, if not obvious. You need to start a rule set from an existing rule set. To get started, select a rule set in the combo box within the Code Analysis tab of the project properties. I selected "Microsoft All Rules" for my rule set, but you may find it easier to start with the "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side. Once your rule set is selected, click the Open button. This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008. Browsing through the tree view you can select or deselect individual rules within their categories, and you can indicate that the rules are flagged as errors instead of the default, which is a warning. A nice touch to the form is that you get a help pane when you select an individual rule. That helped me considerably when I first configured my rule set. Once you have finished selecting your rules, click the Save tool button, specify a location and name, and click the Save button on the Save As dialog. Once you're back on the Code Analysis tab you'll choose the Browse option within the combo box and open the file you just created.

    Read the article

  • need some concrete examples on user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    A little heads up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum. Here are my problems: I know what a functional spec is (sample). Do all user stories and/or scenarios become part of the functional spec? I know what user stories and tasks are. Are these kinda user stories? I don't see any Business Value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager can add a financial plan into the system because... well, that's the point of it? What Business Value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature? How do I go into the finer details and system definitions: Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it into the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create backlog list/features out of this? Where are the user stories for what a blog article is and how one adds/removes it? Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories with UI mockups? Now will these user stories then branch out into tasks which will make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs? Some concrete examples/answers to my questions would be nice!!

    Read the article

  • Token based Authentication and Claims for Restful Services

    - by Your DisplayName here!
    WIF as it exists today is optimized for web applications (passive/WS-Federation) and SOAP based services (active/WS-Trust). While there is limited support for WCF WebServiceHost based services (for standard credential types like Windows and Basic), there is no ready-to-use plumbing for RESTful services that do authentication based on tokens. This is not an oversight from the WIF team, but the REST services security world is currently rapidly changing - and that's by design. There are a number of intermediate solutions, emerging protocols and token types, as well as some already deprecated ones. So it didn't make sense to bake that into the core feature set of WIF. But after all, the F in WIF stands for Foundation. So just like the WIF APIs integrate tokens and claims into other hosts, this is also (easily) possible with RESTful services. Here's how. HTTP Services and Authentication Unlike SOAP services, in the REST world there is no (over)specified security framework like WS-Security. Instead standard HTTP means are used to transmit credentials, and SSL is used to secure the transport and data in transit. In most cases the HTTP Authorization header is used to transmit the security token (this can be as simple as a username/password up to issued tokens of some sort). The Authorization header consists of the actual credential (consider this opaque from a transport perspective) as well as a scheme. The scheme is some string that gives the service a hint about what type of credential was used (e.g. Basic for basic authentication credentials). HTTP also includes a way to advertise the right credential type back to the client; for this the WWW-Authenticate response header is used. So for token based authentication, the service would simply need to read the incoming Authorization header, extract the token, and parse and validate it. After the token has been validated, you also typically want some sort of client identity representation based on the incoming token. This holds regardless of the technology the actual service was built with. In ASP.NET (MVC) you could use an HttpModule or an ActionFilter. In (today's) WCF, you would use the ServiceAuthorizationManager infrastructure. The nice thing about using WCF's native extensibility points is that you get self-hosting for free. This is where WIF comes into play. WIF has ready-to-use infrastructure built in that just needs to be plugged into the corresponding hosting environment:
      - representation of identity based on claims - a very natural way of translating a security token (and again I mean this in the widest sense - it could also be a username/password) into something our applications can work with
      - infrastructure to convert tokens into claims (called security token handlers)
      - claims transformation
      - claims-based authorization
    So much for the theory. In the next post I will show you how to implement that for WCF - including full source code and samples. (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea - it's a REST API delivering content to mobiles - and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath. Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the Call tree has loaded, you do not get the red warnings. StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around. Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: win, thank you ANTS. (What does ANTS stand for, BTW?) There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.

    Read the article

  • Using a SQL Prompt snippet with template parameters

    - by SQLDev
    As part of my product management role I regularly attend trade shows and man the Red Gate booth in the vendor exhibition hall. Amongst other things this involves giving product demos to customers. Our latest demo involves SQL Source Control and SQL Test in a continuous integration environment. In order to demonstrate quite how easy it is to set up our tools from scratch we start the demo by creating an entirely new database to link to source control, using an individual database name for each conference attendee. In SQL Server Management Studio this can be done either by selecting New Database from the Object Explorer or by executing "CREATE DATABASE DemoDB_John" in a query window. We recently extended the demo to include SQL Test. This uses an open source SQL Server unit testing framework called tSQLt (www.tsqlt.org), which has a CLR object that requires EXTERNAL_ACCESS to be set as follows: ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON. This isn't hard to do, but if you're giving demo after demo, this two-step process soon becomes tedious. This is where SQL Prompt snippets come into their own. I can create a snippet named create_demo_db for the following:

      CREATE DATABASE DemoDB_John
      GO
      USE DemoDB_John
      GO
      ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON

    Now I just have to type the first few characters of the snippet name, select the snippet from SQL Prompt's candidate list, and execute the code. Simple! The problem is that this can only work once due to the hard-coded database name. Luckily I can leverage a nice feature in SQL Server Management Studio called Template Parameters. If I modify my snippet to be:

      CREATE DATABASE <DBName,, DemoDB_>
      GO
      USE <DBName,, DemoDB_>
      GO
      ALTER DATABASE <DBName,, DemoDB_> SET TRUSTWORTHY ON

    then, once I've invoked the snippet, I can press Ctrl-Shift-M, which calls up the Specify Values for Template Parameters dialog, where I can type in my database name just once. Now you can click OK and run the query. Easy. Ideally I'd like SQL Prompt to auto-invoke the Template Parameter dialog for all snippets where it detects the angled bracket syntax, but typing in the keyboard shortcut is a small price to pay for the time savings.

    Read the article

  • NuGet JustMock

    - by mehfuzh
    As most of us already know, JustMock now has a free edition. The free edition is not a stripped-down version of the full edition's features; I would rather say it's a restriction on the types you can mock. Technically, the free version runs on a proxy, while the full version runs on a proxy + profiler. The full version switches to the profiler when you are mocking final methods, sealed classes, or anything else that cannot be done using inheritance. For example, in the full version you can mock non-public methods; in the free version you can still do it, but the method has to be virtual for protected members, or it must be done through the InternalsVisibleTo attribute for internal virtual methods (if you have access to the source and can apply the attribute). Now, you can get a copy of the free edition from the product page. Install it and off you go. But it is also exposed via NuGet. For those of you not familiar with NuGet (that would be odd), NuGet is the centralized package manager from Microsoft that cuts out the manual workflow of including libraries in your project. I think NuGet will in future limit the scope of ".vsi" packages and installers because of its ease (except in some cases). It's similar to Ruby gems. In Ruby, you can install virtually any library with "gem install <target_library>" and you are good to go. It will check the dependencies and install them, or else prompt you with the steps you need to do. Now, sticking to the post: to get started you first need to install the NuGet package manager. Once you have completed that step, pressing "Ctrl + W, Ctrl + Z" will bring up a console like the one below: Once you are here, you just have to type "install-package justmock". Next, it should print the confirmation when the installation is complete: Moving to the Visual Studio Solution Explorer, you will now see: Finally, NuGet is still in its early days and the steps that are shown here may not remain the same in coming releases, but feel free to enjoy what is out there right now. Regarding the JustMock free edition, there is a nice post by Phil Japikse at Introducing JustMock Free Edition. I think it's worth checking out if you haven't already. Have fun and happy holidays!

    Read the article

  • The JDEdwards EnterpriseOne PreSales University

    - by Julien Haye
    Istanbul, NOV 5-9. Wednesday, NOV 7 - It is raining outside and I am sitting in my hotel room (#106) in Istanbul creating my first blog entry. Today this blog was enabled and I am excited to have the ability to share my (first) thoughts with the EMEA JDE Partner Community. I am here in Istanbul because we are currently running the JDEdwards PreSales University event series. This PreSales University is an established event series which we are now delivering for the fifth time, and for the first time in the ECEMEA region. Delegates value the openness and competence of the Product Strategy and Product Development teams from Denver and India. Together with the regional Oracle PreSales team we had very valuable discussions around product features and functions and about the business value of the newly delivered applications and tools. Additionally the event provides endless opportunities to exchange ideas with other JD Edwards Partners and the Oracle PreSales team. With its focus on sharing and learning, best practice, user experience and transforming technologies, delegates will leave this event with an abundance of new ideas and best practices to try in their coming projects and existing customer implementations. A day out of the office gives delegates a chance to gain a new perspective on their business processes. Everybody sees better ways of working just by being immersed in an environment where the focus is on using products more effectively. Apps Track: highly concentrated participants in Istanbul listening to Jeff Erickson presenting the news about OneView Reporting. Jeff: We believe "The things you said". The event is organized into two tracks, one for Apps and one for Tech. Everybody was able to learn new features and functions and how to position these products. The focus was on the new Apps release 9.1 and Tools release 9.1.2 and their value propositions. For all topics, hands-on exercises were given to the participants. Even very experienced senior consultants learned a lot from this event. In total we have 55 people registered and we still have some more content to deliver. By the way: Istanbul is a nice place to be. I already booked my next trip to this beautiful city. In two weeks we deliver the JD Edwards EECIS Executive Forum, again in Istanbul. Once again a tough agenda. I will let you know if I get the chance to take a walk outside and see a bit more of this beautiful city. At least I expect to have a different room number. Many greetings, Hartmut Wiese, Oracle Alliances & Channels EMEA

    Read the article

  • Investing in Servers by Intel

    - by Koushal Deshpande
    Originally posted on: http://geekswithblogs.net/BizTalkAndOtherTechs/archive/2013/10/31/investing-in-servers-by-intel.aspx A nice article reference from Intel; refer here. It refers to the cloud as well. Choose correctly what you need.
    1. Do determine the right server for your company. There is no use getting a server that has redundant services but still adds to the costs.
    2. Do get servers that can be upgraded. A server with limited memory and storage may not be able to keep up with your business growth. The basic memory and storage options might not be sufficient. Consider at least 8GB of RAM and 1 terabyte of hard disk space.
    3. Do check the server has at least one Gigabit Ethernet port. This allows high speed transferring of files and increases productivity for your employees. USB and Firewire ports may not be enough as their transfer speed is too low and will affect the productivity of your company.
    4. Do verify that the server comes with documentation. Documentation allows you to make a claim when your server breaks down and is supported by a warranty.
    5. Do always look into the warranty. Get an enhanced warranty that guarantees response and repair time to avoid disruption.
    6. Do check the support options for the server from the manufacturer. Different manufacturers have different support options such as maintenance plans and software upgrades.
    7. Do get server management tools that can be used on any computer. Server management tools should be cross compatible across different operating systems to take into account future PC replacements.
    8. Do check the power usage of the servers. Get the right power supply to avoid damaging server hardware and consider the Intel® Xeon® E3 processor to help save on your electricity bills.
    9. Do check what built-in security packages are available. Ensure that your server is protected. Built-in security helps you save on getting add-on security packages.

    Read the article

  • Turn Chrome’s New Tab Page into an Ubuntu Forums Powerhouse

    - by Asian Angel
    There are a lot of start page plug-ins for Google Chrome, but if you're an Ubuntu Forums enthusiast, you might be interested in the powerful UbuntuForums.org Start Page extension. Using the Ubuntu Forums Chrome Start Page Once you've installed the extension and opened a new tab, you'll notice quite a difference from the boring default "New Tab Page". While you may look at this and wonder if what you see is all that there is to it, there's actually a lot of hidden customization and functionality. The upper left corner is where you will control the content displayed in the "Ubuntu New Tab Page". The "Buttons Toolbar" that you see at the top lets you shift between the types of content viewed, from RSS feeds to bookmarks and more. This entire set of links provides direct links to the appropriate sections in the Ubuntu Forums. As with the set of links pictured above, each of these will open in a new tab when clicked. There's even a feature to browse wallpapers from DesktopNexus, with categories on the left and pictures to the right. If you want to return to the regular layout you should use the "Start Page Link" highlighted here. Options The options will allow you to customize the colors shown in the "Ubuntu New Tab Page" to create a very nice match to your current browser and/or system theme if desired. You can also customize the fonts and "Bookmark Shortcuts", and populate the "Custom Links Section" on the main page. Conclusion If you are an Ubuntu enthusiast then this will be a very useful extension to add to your browser. The wealth of direct links and built-in functionality makes this extension worth trying. Links Download the Ubuntuforums.org Start Page extension (Google Chrome Extensions)

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties. 1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting. 2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way down to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky. 3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data. 4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary so I'm deliberately avoiding this option. What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
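    For what it's worth, the second option can stay fairly small if the dirty flags are pushed up the dependent tree at change time and the rebuild pass simply skips clean subtrees. Below is a toy sketch of that idea; every class and method name is invented purely for illustration and is not tied to any particular parser framework.

      import java.util.ArrayList;
      import java.util.List;

      // A node of the original tree: it only remembers which dependent nodes rely on it.
      class SourceNode {
          final List<DependentNode> dependants = new ArrayList<DependentNode>();

          // Called by whoever edits the original tree.
          void notifyChanged() {
              for (DependentNode d : dependants) {
                  d.markDirtyUpToRoot();
              }
          }
      }

      // A node of the dependent tree, carrying the dirty flag described above.
      class DependentNode {
          DependentNode parent;
          final List<DependentNode> children = new ArrayList<DependentNode>();
          boolean dirty;

          void markDirtyUpToRoot() {
              for (DependentNode n = this; n != null && !n.dirty; n = n.parent) {
                  n.dirty = true;   // stop as soon as an already-dirty ancestor is reached
              }
          }

          // The later pass: walk from the root, skip clean subtrees, rebuild dirty nodes.
          void rebuildDirtySubtree() {
              if (!dirty) {
                  return;           // a clean node implies a clean subtree, since dirtying marks all ancestors
              }
              // ...reconstruct this node from its source node(s) here, and record
              // whether the rebuilt node actually differs from the old one...
              dirty = false;
              for (DependentNode child : children) {
                  child.rebuildDirtySubtree();
              }
          }
      }

    The observer variant (option 1) and this dirty-flag variant mostly differ in where the bookkeeping lives; the dirty flags keep the change notification cheap and defer the real work to one batched rebuild pass.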

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his Best Practices for Going Mobile. Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing. Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but strategically approaching mobile can enable greater returns on a similar investment and on subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents. Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person? Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users. Mobile Is Broad. After establishing the audience and goal, review technology requirements to support them. Do you need a mobile Website, a native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and mobile Web. Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy. Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences. Read more from John Brunswick. We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels -- in a webcast tomorrow. We hope you'll join us!

    Read the article

  • Minimizing Dependencies For GUIs

    - by tuba09
    I've been working on a project and have been charged with designing the project's GUI front-end. I'm coding in Java and using the Swing toolkit. Usability-wise, the GUI front-end follows all of Nielsen's heuristics. Users can easily get to where they want to go through the click of a button / JComboBox. Essentially, in Swing terms, what happens is that their actions drive the creation/deletion of custom panels. The GUI is coming along fine for the most part. However, I have to admit to being utterly dismayed at the tight web of dependencies my code is being smothered in. The main problem that I've encountered, and that I haven't been able to fix as of yet, is how to keep a reference to the panels/buttons being changed. I'll give an example: say there's a button A; say there's a panel B displaying picture C; and say there's another picture D (not currently being displayed by panel B). When the user clicks A, panel B should remove picture C and display picture D. My question is: what's the best way of keeping track of panel B? Since I need a global point of access to panel B, my solution has so far been to just shoehorn it into a static variable and access it through a series of static getters and setters. And this static variable is usually stored in the reference's original class, i.e. UserPanel has a static variable that stores a reference to itself. Is there an easy, tried-and-true way of dealing with these kinds of situations? My GUI works fine, but it is not modular and/or robust at all. To add to this, the dreaded 'cyclical dependencies' issue that's shunned by so many programmers is out here in full effect. I'm fairly new to development and just want to make sure that my code will be fairly extensible and won't cause much of a headache for the next person who decides to have a go at it. I know there are loads of books out there that probably have a nice elegant solution to this, but unfortunately I just don't have the time for leisure reading right now. I need something that's quick and dirty. Thanks in advance
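    One tried-and-true alternative to the static getters is a small mediator/controller object that owns the references: button A only knows about the controller, and the controller knows about panel B and the two pictures. Below is a minimal runnable sketch of that idea; all names, icons, and layout choices are placeholders invented for the example rather than anything from the actual project.

      import java.awt.BorderLayout;
      import java.awt.event.ActionEvent;
      import java.awt.event.ActionListener;
      import javax.swing.Icon;
      import javax.swing.JButton;
      import javax.swing.JFrame;
      import javax.swing.JLabel;
      import javax.swing.SwingUtilities;
      import javax.swing.UIManager;

      public class PictureSwapDemo {

          // The mediator: it owns the reference to "panel B" that would otherwise
          // be reached through static getters and setters.
          static class PicturePanelController {
              private final JLabel pictureHolder; // stands in for panel B
              private final Icon pictureD;

              PicturePanelController(JLabel pictureHolder, Icon pictureC, Icon pictureD) {
                  this.pictureHolder = pictureHolder;
                  this.pictureD = pictureD;
                  pictureHolder.setIcon(pictureC); // panel B starts out showing picture C
              }

              void showPictureD() {
                  pictureHolder.setIcon(pictureD); // swap picture C for picture D
              }
          }

          public static void main(String[] args) {
              SwingUtilities.invokeLater(new Runnable() {
                  public void run() {
                      JLabel panelB = new JLabel("", JLabel.CENTER);
                      Icon pictureC = UIManager.getIcon("OptionPane.informationIcon");
                      Icon pictureD = UIManager.getIcon("OptionPane.warningIcon");

                      final PicturePanelController controller =
                              new PicturePanelController(panelB, pictureC, pictureD);

                      // Button A only knows the controller, not panel B itself.
                      JButton buttonA = new JButton("Show picture D");
                      buttonA.addActionListener(new ActionListener() {
                          public void actionPerformed(ActionEvent e) {
                              controller.showPictureD();
                          }
                      });

                      JFrame frame = new JFrame("Mediator sketch");
                      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                      frame.add(panelB, BorderLayout.CENTER);
                      frame.add(buttonA, BorderLayout.SOUTH);
                      frame.setSize(300, 200);
                      frame.setVisible(true);
                  }
              });
          }
      }

    The same idea scales up: instead of every panel exposing itself through a static variable, one or a few controllers are created at startup and handed to the components that need to trigger changes, which also removes most of the cyclic static references.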

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database:

    Config Table:
    ======================================
    Start_Range | End_Range | Config_id
    10          | 15        | 1
    ======================================

    Available_UserIDs
    ==========================
    ID | UserID | Used_YN |
    1  | 10     | t       |
    1  | 11     | f       |
    1  | 12     | f       |
    1  | 13     | f       |
    1  | 14     | f       |
    1  | 15     | f       |
    ==========================

    Users
    ==========================
    UserId | FName | LName |
    10     | John  | Doe   |
    ==========================

    This is used in a reservation system of sorts... which lets an administrator specify a range of numbers that will be assigned to users in the config table. Once the range has been defined, the system populates the Available_UserIDs table with all the numbers in that range and sets the Used_YN flag to false. As users sign up, they grab the next user_id number that's not in use... and reserve it. Then the system adds a record to the Users table. Once the admin has specified a range, it is possible for them to change it. For example, they can start with 10-15... and then, when that range is used up, they should be able to specify another range such as 16-99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table, to ensure that UserIDs can't be duplicated. My questions are as follows:

    1. What's the best way to prevent the admins from using a range that's already in use? I thought of the following options:
       -- Check the Users table to see if the start or end of the new range is already being used. If it is, assume that all the numbers in between are in use too, and reject the range.
       -- Let them specify whatever they want and try to populate the Available_UserIDs table. If there are duplicates, just ignore that specific error message from the database and continue on.

    2. How do I find gaps in the number ranges? For example, if they specify 10-15 and then 20-25, it'd be nice to be able to somehow suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql but it only seems to return the first available number... so in my example above, it would only return the number 16.

    I'm sure there's a simpler way to do things that I'm overlooking!
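
    As a rough illustration of the gap question (not a tested solution): if your database supports window functions, you can compare each allocated number with the next one and report every range that falls in between. The sketch below uses the table and column names from the question; the exact syntax may need adjusting for your engine.

        -- Sketch only: list the gaps between numbers already generated in Available_UserIDs.
        WITH ordered AS (
            SELECT UserID,
                   LEAD(UserID) OVER (ORDER BY UserID) AS next_userid
            FROM   Available_UserIDs
        )
        SELECT UserID + 1      AS gap_starts_at,
               next_userid - 1 AS gap_ends_at
        FROM   ordered
        WHERE  next_userid - UserID > 1
        ORDER  BY gap_starts_at;

    With ranges 10-15 and 20-25 loaded, this would return a single row, 16 to 19. A similar existence check against Available_UserIDs (WHERE UserID BETWEEN the proposed start and end) is one way to reject an overlapping range before trying to populate it.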

    Read the article

  • links for 2011-01-12

    - by Bob Rhubart
    WebCenter Spaces 11g PS2 Template Customization (Javier Ductor's Blog) "Recently, we have been involved in a WebCenter Spaces customization project. A customer sent us a prototype website in HTML, and we had to transform Spaces to set the same look and feel as in the prototype..." Javier Ductor (tags: oracle otn webcenter enterprise2.0)
    Matt Carter: Risky Business "Incorporating risk detection and mitigation capabilities into apps is becoming all the rage. There are plenty of real-life examples of cases where prevention of cyber-security threats and fraudsters might have kept governments and companies out of the news, and with more money in their accounts." (tags: oracle otn security middleware)
    John Brunswick: 5 Surprisingly Good Benefits of Corporate Blogs "Some may still propose that not all corporations are going to be able to provide the five benefits above and are more focused around shameless self promotion of products and services. If that is the case, that corporation is most likely not producing something of high value." - John Brunswick (tags: oracle otn enterprise2.0 blogging)
    InfoQ: IT And Architecture: Inside-Out Perspectives The software industry is in disarray, costs are escalating, and quality is diminishing. Promises of newer technologies and processes and methodologies in IT are still far from materializing on any significant scale. Bruce Laidlaw and Michael Poulin - each with more than 30 years of experience - compared notes on the past and present of IT and provide insights on what IT needs to make progress. (tags: ping.fm)
    SOA & Middleware: Canceling a running composite instance - example Useful tips from Niall Commiskey. (tags: soa middleware oracle)
    BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations (Oracle E-Business Suite Technology) "A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3." -- Steven Chan (tags: oracle bpel)
    Marc Kelderman: OSB: Deploy Service Level Agreement (SLA), aka Alert Rule "The big issue with these SLAs is the deployment. If you have dozens of services, with multiple operations, and you have a lot of environments it takes a while to create them...[But] I have a nice workaround." - Marc Kelderman (tags: oracle otn soa osb sla)
    @myfear: Java EE 7 - what's coming up for 2012? First hints. "Even if the actual Java EE 6 version is still not too widespread, we already have seen the first signs of the next EE 7 version written to the sky." -- Markus "myfear" Eisele (tags: oracle otn oracleace java)

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I've been looking at parsing TSQL scripts in a variety of ways for a variety of tasks. In my last entry, 'How to prevent "Select *": The elegant way', I looked at parsing SQL to report upon uses of SELECT *. The obvious question leading on from this is, "Great, what about other code smells?" Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens, but no real context. I wasn't even being told when the end of a statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly 'Microsoft.SqlServer.TransactSql.ScriptDom', which is, I believe, installed with the client development tools for SQL Server. It is much more feature rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it? Well, for an opening gambit, quite a nice little list:

    Use of NOLOCK
    Set of READ UNCOMMITTED
    Use of SELECT *
    Insert without column references
    Explicit datatype conversion on Sargs
    Cross server selects
    Non use of two-part naming convention
    Table and Query hint usage
    Changes in set options
    Use of single line comments
    Use of ordinal column positions in ORDER BY clause

    Now, let's not argue the point of whether "it depends" on some of these being smells; as an academic exercise it is quite interesting. The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip. All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases. The assembly used requires .NET 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK). The code searches for all .sql files in the folder hierarchy of the working path (you can override this by simply changing the $Folder variable) and processes each in turn for the smells. Feedback is not great at the moment; all it does is output to an XML file (Smells.xml) the offset position and a description of each smell found. Right now, I am interested in your feedback. What do you think? Is this (or should it be) more than an academic exercise? Can tooling such as this be used as some form of code quality measure? Does it work? Do you have a case listed above which is not being reported? Do you have a case that you would love to see reported? Let me know, please: mailto: [email protected]. Thanks
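
    As an editorial aside, for anyone who wants a feel for ScriptDom before downloading the zip, the rough shape of such a check is sketched below. This is not the script from the zip; the assembly path, the parser version, and the single NOLOCK check are assumptions made purely for illustration.

        # Rough sketch, not the downloadable script: load ScriptDom, parse one file,
        # and report NOLOCK hints by scanning the token stream.
        Add-Type -Path 'C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Assemblies\Microsoft.SqlServer.TransactSql.ScriptDom.dll'

        $parser   = New-Object Microsoft.SqlServer.TransactSql.ScriptDom.TSql110Parser($true)
        $reader   = New-Object System.IO.StreamReader('C:\Scripts\Example.sql')
        $errors   = $null
        $fragment = $parser.Parse($reader, [ref]$errors)
        $reader.Close()

        if ($errors -ne $null -and $errors.Count -gt 0) {
            $errors | ForEach-Object { "Parse error at line $($_.Line): $($_.Message)" }
            return
        }

        # Every token carries its type, text and position, which is enough for simple smells.
        foreach ($token in $fragment.ScriptTokenStream) {
            if ($token.TokenType -eq [Microsoft.SqlServer.TransactSql.ScriptDom.TSqlTokenType]::Identifier -and
                $token.Text -eq 'NOLOCK') {
                "Line $($token.Line), column $($token.Column): NOLOCK hint"
            }
        }

    The downloadable script covers the full list of smells above; even this minimal token-stream walk shows the kind of position information (line, column, offset) the parser hands back, which is the sort of detail that ends up in Smells.xml.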

    Read the article
