Search Results

Search found 1353 results on 55 pages for 'rob kam'.


  • What's New in 5.6 RC and more from MySQL Connect conference

    - by Rob Young
    Keeping with the tradition of great MySQL Community events, the first annual MySQL Connect conference is now in the books.  It was great to see so many familiar faces in the crowd and at the podium sharing their ideas and thoughts on the evolution of MySQL under Oracle. The headliner of the conference was Tomas' keynote announcement of the fully featured and fully enabled MySQL 5.6 Release Candidate.  This new article on the MySQL DevZone summarizes all of the great new features ready for Community adoption, links to the related MySQL Engineering blogs, and explains where and how to download all of the bits. As always, early adoption and feedback on the 5.6 RC is appreciated, and the sooner we get your feedback, the sooner we can release the "ready for production" sanctioned GA product.  Also available now, Cluster 7.3 provides support for Foreign Keys, node.js NoSQL access to underlying data, and a new Auto Installer that helps you quickly and easily get up and running with Cluster 7.2 and 7.3.  The 7.3 downloads are provided in the first 7.3 Development Milestone Release (under the "Development Releases" tab) and via the MySQL Labs. Oracle also announced key new additions to MySQL Enterprise Edition: new policy-based compliance auditing. MySQL Enterprise Audit adds policy-based audit compliance to existing MySQL applications without the need to change any code.  This new plugin is available for MySQL 5.5.28 and higher; existing MySQL Enterprise Edition customers can download the upgrade from the My Oracle Support portal, and anyone can download it for evaluation from Oracle's Software Delivery Cloud. New MySQL Enterprise High Availability additions provide even more options for ensuring MySQL applications remain available and running at their peak: Oracle Linux + DRBD, and Oracle Solaris Clustering for MySQL. All in all, the first MySQL Connect conference was a great success, and with refinements planned in response to attendee, sponsor and speaker feedback we expect it to grow and improve going forward. As always, thanks for your continued support of MySQL!
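
    To give a flavour of what Foreign Key support in Cluster 7.3 means in practice, NDB tables can now enforce referential integrity directly (a minimal sketch; the table names are mine, not from the announcement):

        CREATE TABLE parent (
            id INT PRIMARY KEY
        ) ENGINE=NDB;

        CREATE TABLE child (
            id INT PRIMARY KEY,
            parent_id INT,
            -- enforced across the data nodes as of Cluster 7.3
            FOREIGN KEY (parent_id) REFERENCES parent (id)
        ) ENGINE=NDB;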

    Read the article

  • Thank You MySQL Community! MySQL 5.6.9 Release Candidate Available Now!

    - by Rob Young
    The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download.  You can get MySQL 5.6.9 here (look under the "Development Releases" tab).  This version is the result of feedback we have gotten since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL, so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready product release in early 2013. MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services and includes across-the-board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema Instrumentation, Security, and developer-enabling NoSQL functionality.  You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article. On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, Embedded Applications and Services with MySQL 5.6.  Hurry!  Seating is limited!  As always, thanks for your continued support of MySQL!
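
    As one concrete example of the online DDL work mentioned above, MySQL 5.6 lets InnoDB build a secondary index without blocking writes (a sketch; the table and index names are mine, while the ALGORITHM and LOCK clauses are the 5.6 syntax for requesting in-place, non-locking execution):

        -- Add an index while the table stays readable and writable;
        -- the statement errors rather than silently locking if in-place isn't possible.
        ALTER TABLE orders
            ADD INDEX idx_customer (customer_id),
            ALGORITHM=INPLACE, LOCK=NONE;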

    Read the article

  • More BI Showcase Events - Greensboro, NC & Tampa, FL

    - by Rob Reynolds
    As the momentum around OBIEE 11g continues, we are providing more opportunities to get a hands-on view of the new technology via our Oracle Business Intelligence Showcases. Next week we will have Showcases in Greensboro, NC and Tampa, FL. I will be presenting at both, so please stop by and say hello while learning about the latest in Oracle BI & DW technology. Pre-registration is required. You can register for the events at the links below:
    Greensboro, NC - Tuesday, December 7, 2011
    Tampa, FL - Wednesday, December 8, 2011
    Session Agenda:
    9:00 a.m. – 10:00 a.m.  Registration and Welcome
    10:00 a.m. – 11:00 a.m.  Session Keynote: Oracle’s New Generation of Business Intelligence Solutions and Innovations
    11:00 a.m. – 12:00 noon  Session 1
      Track 1: Oracle Business Intelligence Enterprise Edition 11g: End User Experience
      Track 2: Management Reporting with Oracle Essbase
    12:00 noon – 1:00 p.m.  Networking Lunch
    1:00 p.m. – 2:00 p.m.  Session 2
      Track 1: Oracle Business Intelligence Enterprise Edition 11g for Power Users, Developers, and Administrators
      Track 2: Oracle BI Applications: The Value of Cross-Functional BI
    (Break to change rooms)
    2:00 p.m. – 3:00 p.m.  Session 3
      Track 1: Extreme Performance Data Warehousing
      Track 2: Master Data Management: The Single Source of Truth for Real Time Decisions
    3:15 p.m.  Wrap-Up and Raffle Prize

    Read the article

  • Google Fonts API JSON Data in WordPress Options-Framework-Theme

    - by Rob
    I'm developing a child theme off of the new Twenty Twelve theme using WordPress 3.4.2 and the development version of the Options Theme Framework by Devin Price. In Devin's tutorial, it shows a way to implement 15 Google Web Fonts into the Theme Options page, but not all of them (roughly 560). I know I can create a "manual list", like in the tutorial, that states each one with fallbacks, but this is time-consuming and unproductive, as Google may add to, update, change or remove some of these fonts from their list. The list I've created will ultimately store unavailable fonts the user thinks are there because of what they can see in the drop-down menu, and it won't have any new ones - making the list and some selections obsolete. On the Google Developer API Web Fonts page, it talks briefly about retrieving a "dynamic list" using JSON/JavaScript. I was wondering how I would be able to pull the Google Web Fonts API into my WordPress Theme Options page so I'm not creating my own list or having to constantly release an update to solve this issue. Could someone please walk me through what I would need to paste into my options.php, functions.php, /inc/options-framework.php file etc. or even in a new one to implement this? I've also had a look into some screencasts, plugins and tutorials on how it works, but none of them are specific enough for people just starting out. Please keep in mind I'm not the best coder... Thank you.

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So: Ubuntu 12.04, Likewise Open latest from the BeyondTrust website. Joins the domain fine. Gets proper information from lw-get-status. Can use lw-find-user-by-name to retrieve/locate users. Can use lw-enum-users to get all users. Attempting to log in with an AD user via SSH generates the following errors in the auth.log file:

      Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so
      Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname
      Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth]

    Attempting to log in via LightDM itself generates similar errors in the auth.log file:

      Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so
      Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name"
      Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
      Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so

    Attempting to log in via a console on the system itself generates slightly different errors:

      Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so
      Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
      Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info
      Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info

    I am baffled. The errors obviously are correct: the file /lib/security/pam_winbind.so does not exist. If it's a dependency, surely it should be part of the package? I've installed/reinstalled, I've used the downloaded package from the BeyondTrust website, I've used the repository; nothing seems to work, and every method of installing this application generates the same errors for me.
    UPDATE: Hrmm, I thought Likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (Likewise) and generates failures when installing if pbis-open is installed first. Uninstalled winbind, reinstalled pbis-open, same issue as above. The file pam_winbind.so does not exist in that location.

      Setting up pbis-open-legacy (7.0.1.918) ...
      Installing Packages was successful
      This computer is joined to DOMAIN.LOCAL
      New libraries and configurations have been installed for PAM and NSS.

    Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue with the previous attempt to configure domain integration manually with winbind.
    Does anyone have a working likewise-open installation, and does the /etc/nsswitch.conf include references to winbind? Or do the /etc/pam.d/common-account or /etc/pam.d/common-password files reference pam_winbind.so? I'm unsure if those entries are just legacy or set up by Likewise.
    UPDATE 2: A complete reinstall of the OS fixed it and it worked seamlessly, like it was meant to, and those 2 PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.

    Read the article

  • 24 Hours of PASS: Whine, Whine, Whine

    - by Most Valuable Yak (Rob Volk)
    24 Hours of PASS (or 24HOP) is a great program offered by PASS to provide free, online training for anyone who wants to learn more about SQL Server.  They routinely have the best SQL Server presenters available for these sessions, and attract hundreds, perhaps even a thousand attendees from around the world.  This is definitely one of the best things they've started doing in the past few years, and every session I've attended has been excellent. So why am I so grumpy about it? I'm not really; pretty much everything here is a minor annoyance that I can deal with.  However, since they're so minor, they seem to be things that can be easily corrected and would make the process much better. First off, this is my biggest gripe, the registration page: https://www323.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=lj6378f4fhf5hpdm What grinds my gears about this?  I have to scroll alllllllllllllllllllllllllllll the way to the bottom to actually register for the sessions.  This wouldn't be so bad except all the details of the session, including the presenter, are in a separate list at the top.  Both lists contain info the other does not, and scrolling between them to determine "Should I make time to listen to this?  Who is speaking at this time anyway?" is really unnecessary. My preference would be to keep the top list and add the checkboxes and schedule info in separate columns.  This is a full-width design, so there's plenty of space for this data, which is pretty small anyway.  The other huge benefit is halving the size of the page, which improves performance and lowers bandwidth usage considerably.  And if you know HTML/ASP.Net, and you view the page source, you can find PLENTY of other things that can be reduced even further.  (not just viewstate) One nice thing that PASS does is send iCal reminders to your email address so you can accept them to your calendar.  Again, they leave off the presenter in the appointment details, while still duplicating the meeting title in the body.  Sometimes I make decisions based on speaker rather than content (Natalie Portman is reading the Yellow Pages??? I'M THERE!) and having the speaker in the iCal is helpful. The next minor annoyances are the necessity of providing a company name, and the survey questions.  I know PASS needs to market themselves effectively, and they need information to do that, and since this is a free event it's really not worth complaining about, but why ask the survey question twice? (once at registration, once again when joining the LiveMeeting)  Same thing for the company name.  All of this should be tied to email address, so that's all I should need to enter when joining the LiveMeeting. The last one is also minor, but it irks me in this day and age of multiple browsers and the decline of Internet Explorer as a dominant platform.  The registration page was originally created in Visual Studio 2003, and has a lot of IE-specific crud representative of the browser situation of 2003. (IE5 references? really? and is the aforementioned viewstate big enough?)  This causes some grief with other browsers like Firefox, Chrome, and sometimes IE8 or 9.  And don't get me started on using the page on a Mac or in Safari. My main point is that PASS is an international organization, welcoming everyone at all levels of SQL Server proficiency, and in that spirit I think it would help to accommodate a wider range of browser software, especially since the registration page is extremely simple.
I recognize that this page is not hosted on the PASS website and may be maintained by some division of Microsoft, but to me that's even worse if MS can't update their own pages.  They've deprecated IE6, so they don't need to maintain support on their own websites anymore. OK, I'll shut up now. #sqlpass #24HOP

    Read the article

  • PowerShell to fetch a SQL Execution Plan

    - by Rob Farley
    With PowerShell becoming the scripting language of choice for many people, I’ve occasionally wondered about using it to analyse execution plans. After all, an execution plan is just XML, and PowerShell is just one tool which will very easily handle XML. The thing is – there’s no Get-SqlPlan cmdlet available, which has frustrated me in the past. Today I figured I’d make one. I know that I can write T-SQL to get an execution plan using SET SHOWPLAN_XML ON, but the problem is that this must be the only statement in a batch. So I used go, and a couple of newlines, and whipped up the following one-liner:

        function Get-SqlPlan([string] $query, [string] $server, [string] $db)
        {
            return ([xml] (invoke-sqlcmd -Server $server -Database $db -Query "set showplan_xml on;`ngo`n$query").Item(0))
        }

    (but please bear in mind that I have the SQL Snapins installed, which provides invoke-sqlcmd) To use this, I just do something like:

        $plan = get-sqlplan "select name from Production.Product" "." "AdventureWorks"

    And then find myself with an easy way to navigate through an execution plan! At some point I should make the function more robust, but this should be a good starter for any SQL PowerShell enthusiasts (like Aaron Nelson) out there.
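
    To see what that one-liner actually sends to the server, here is the equivalent T-SQL written out as separate batches; SET SHOWPLAN_XML ON has to sit in a batch of its own, which is exactly why the function splices a go between newlines (a minimal sketch against AdventureWorks):

        SET SHOWPLAN_XML ON;
        GO
        -- This statement is not executed; its estimated plan comes back as an XML result instead
        SELECT name FROM Production.Product;
        GO
        SET SHOWPLAN_XML OFF;
        GO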

    Read the article

  • MySQL Policy-Based Auditing Webinar Recording Now Available

    - by Rob Young
    For those who missed the live event, the recording of the "How to Add Policy-Based Auditing to your MySQL Applications" webinar is now available.  You can view it here. This presentation builds on my earlier blog post on MySQL Enterprise Audit that was announced at MySQL Connect in late September.  The web presentation expands on the introductory blog and covers:
    - The regulatory problem to be solved (internal audit, PCI, Sarbanes-Oxley, HIPAA, others)
    - MySQL Audit solutions for both Community and Enterprise users:
      - General Log - use the basic features of the MySQL server
      - MySQL 5.5 open audit API - or use your time and talent to build your own solution
      - MySQL Enterprise Audit - or use the out of the box, ready for production solution from MySQL
    - Simple, step-by-step process for installing, enabling and configuring the MySQL Enterprise Audit plugin for use with existing apps
    - New variables and options for tuning the MySQL Enterprise Audit plugin for your specific use case
    - Best practices for securing and managing audit log files and archived images
    - Roadmap for adding an integrated solution around MySQL Enterprise Audit for MySQL only and Oracle/MySQL shops
    You can learn all the technical details on MySQL Enterprise Audit in the MySQL docs and learn all about MySQL Enterprise Edition and Auditing here. As always, thanks for your support of MySQL!
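
    For the impatient, the install step the webinar walks through boils down to loading the audit_log plugin that ships with MySQL Enterprise Edition (a minimal sketch; the shared-object name shown is the Linux one, and treat the variable values as assumptions to check against the docs for your version):

        -- Load the Enterprise Audit plugin into a running server
        INSTALL PLUGIN audit_log SONAME 'audit_log.so';

        -- Confirm it is active, and list the tuning variables the webinar covers
        SELECT PLUGIN_NAME, PLUGIN_STATUS
        FROM INFORMATION_SCHEMA.PLUGINS
        WHERE PLUGIN_NAME = 'audit_log';

        SHOW GLOBAL VARIABLES LIKE 'audit_log%';

    To make it permanent and set the logging policy, the equivalent my.cnf entries look like this:

        [mysqld]
        plugin-load=audit_log.so
        audit-log=FORCE_PLUS_PERMANENT
        audit-log-policy=ALL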

    Read the article

  • Top 10 Reasons to Use MySQL and MySQL Cluster as an Embedded Database

    - by Rob Young
    If you are considering using MySQL and/or MySQL Cluster as the embedded database solution for your application, you should join us for today's webcast where we will discuss how you can cut costs, add flexibility and benefit from the new performance and scalability enhancements that are now available in MySQL 5.6 and MySQL Cluster 7.2.  We will cover the top 10 reasons that make MySQL and MySQL Cluster the best solutions for embedding in both shrink-wrapped and SaaS-provided applications, how industry leaders leverage MySQL products, and how you can get started with the latest innovations and support offerings across the MySQL product line. You can learn more and reserve your seat here. As always, thanks for your support of MySQL!

    Read the article

  • SQL Saturday #162 Cambridge

    - by Most Valuable Yak (Rob Volk)
    Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK!  Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University. On Friday, they presented 3 pre-conference sessions given by the brilliant American Cloud & DBA Guru, Buck Woody (b|t), the brilliant Danish SQL Server Internals Guru, Mark Rasmussen (b|t), and the brilliant Scottish Business Intelligence Guru and recent Outstanding PASS Volunteer, Jen Stirrup (b|t).  While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible.  But not to worry, I caught up with them during the regular sessions and at the speaker dinner.  And I got back the money they all owed me.  (Actually I owed Mark some money) The schedule was jam-packed: even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15-minute keynote given by Buck Woody, who besides giving an excellent history of SQL Server at Microsoft (and before), also explained the source of the "unknown contact" image that appears in Outlook.  Hint: it's not Buck himself. Amazingly, and against all better judgment, I even got to present at SQL Saturday 162!  I did a 5-minute Lightning Talk on Regular Expressions in SSMS.  I then did a regular 50-minute session on Constraints.  You can download the content for the regular session at that link, and for the regular expression presentation here. I had a great time and had a great audience for both of my sessions.  You would never have guessed this was the first event for the organizers: everything went very smoothly, especially for the number of attendees and the relative smallness of the space.  The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end. Overall this was one of the best SQL Saturdays I've ever attended and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow!  Well done!

    Read the article

  • Ubuntu One Bookmark sync not working.

    - by Rob
    Everything in Ubuntu One sync works great except bookmark sync. I tried the wiki answer that said to run:

      killall beam.smp beam
      rm ~/.config/desktop-couch/desktop-couchdb.ini
      dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort

    This is what my terminal came back with:

      robin@robin-MIDWAY:~$ killall beam.smp beam
      beam: no process found
      robin@robin-MIDWAY:~$ rm ~/.config/desktop-couch/desktop-couchdb.ini
      rm: cannot remove `/home/robin/.config/desktop-couch/desktop-couchdb.ini': No such file or directory
      robin@robin-MIDWAY:~$ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
      Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
      robin@robin-MIDWAY:~$

    I'm a computer "newbie" so it's possible I'm doing something wrong. Are there any tutorials out there on how to use CouchDB? I have Bindwood installed.

    Read the article

  • June 25 changes to BIS 742.15 How does it impact SSL iPhone App export compliance

    - by Rob
    This question isn't strictly development-related but I hope it's still acceptable :) On June 25, 2010 the BIS updated 742.15, and of interest to me is the new 742.15(b)(4) "Exclusions from mass market classification request, encryption registration and self-classification reporting requirements", specifically 742.15(b)(4)(ii), which states… (ii) Foreign products developed with or incorporating U.S.-origin encryption source code, components, or toolkits. Foreign products developed with or incorporating U.S. origin encryption source code, components or toolkits that are subject to the EAR, provided that the U.S. origin encryption items have previously been classified or registered and authorized by BIS and the cryptographic functionality has not been changed. Such products include foreign developed products that are designed to operate with U.S. products through a cryptographic interface. I take this to mean that my Canadian-produced product that uses HTTPS is now excluded from requiring a CCATS. What does everyone else think?

    Read the article

  • Will upgrading disrupt online backup software (Crashplan)?

    - by Rob
    Has anyone had any experience upgrading to Ubuntu 11.04, and do you know how it might affect online backup software processes? Specifically, I'm currently running Ubuntu 10.10 and using Crashplan v3. The Crashplan backup engine runs constantly in the background to monitor files for changes and back up accordingly. Will upgrading to Ubuntu 11.04 affect any of the Crashplan backup process? Has anyone done this successfully? I'd like to upgrade to the latest and greatest Ubuntu release, but not at the risk of affecting the backup, or worse, having to start from scratch again and upload the entire library!

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).  CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier.  It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it.  In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions.  If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be.  CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back":  HA-HA, I GOT YOU!  Since GO starts a new batch all variable declarations are lost. Here's a simple example I recently used at work.  I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput.  I also needed to log how long it took for the script to run and include the mirror settings for the database in question.  I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable.  My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used.  As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process.  The code is pretty small:

        declare @time binary(128) = cast(getdate() as binary(8))
        set context_info @time

        ... rest of Big Adventure code ...

        go
        use master;
        insert mirror_test(server, role, partner, db, state, safety, start, duration)
        select @@servername,
               mirroring_role_desc,
               mirroring_partner_instance,
               db_name(database_id),
               mirroring_state_desc,
               mirroring_safety_level_desc,
               cast(cast(context_info() as binary(8)) as datetime),
               datediff(s, cast(cast(context_info() as binary(8)) as datetime), getdate())
        from sys.database_mirroring
        where db_name(database_id) like 'Adv%';

    I declared @time as a binary(128) since CONTEXT_INFO is defined that way.  I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00.  To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion.  It's not the safest way perhaps, but it works on my machine. :)
    As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO.  With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take.  In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor.  (His code is in UPPERCASE as it was in the original, mine is all lowercase):

        declare @msg binary(128)
        set @msg = cast('Altering bigProduct.ProductID' as binary(128))
        set context_info @msg
        go
        ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg1 binary(128)
        set @msg1 = cast('Adding pk_bigProduct Constraint' as binary(128))
        set context_info @msg1
        go
        ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
        GO
        set context_info 0x0
        go
        declare @msg2 binary(128)
        set @msg2 = cast('Altering bigTransactionHistory.TransactionID' as binary(128))
        set context_info @msg2
        go
        ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg3 binary(128)
        set @msg3 = cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
        set context_info @msg3
        go
        ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
        GO
        set context_info 0x0
        go
        declare @msg4 binary(128)
        set @msg4 = cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
        set context_info @msg4
        go
        CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId, TransactionDate) INCLUDE(Quantity, ActualCost)
        GO
        set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index.  One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression.  And since GO starts a new batch I need to declare a variable in each one.  And of course I have to use CAST because it won't implicitly convert varchar to binary.  And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes.  And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work. So what does all this aggravation get you?  As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

        select cast(context_info as varchar(128))
        from sys.dm_exec_sessions
        where session_id = 51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages.  You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).  Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction.  All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.  If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables.  That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
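
    If you want a head start on that exercise, here's a minimal sketch of packing the time alongside the message; the 8+120 byte split and the SPID are my own assumptions, not part of the original examples:

        -- pack a timestamp (8 bytes) plus a message (up to 120 bytes) into CONTEXT_INFO
        declare @stamped binary(128) = cast(getdate() as binary(8)) + cast('Adding pk_bigProduct Constraint' as binary(120))
        set context_info @stamped

        -- from a monitoring session, recover both parts (again assuming SPID 51)
        select cast(substring(context_info, 1, 8) as datetime) as stage_started,
               cast(substring(context_info, 9, 120) as varchar(120)) as stage_message,
               datediff(s, cast(substring(context_info, 1, 8) as datetime), getdate()) as seconds_so_far
        from sys.dm_exec_sessions
        where session_id = 51;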

    Read the article

  • Postfix and tmpfs for /var/spool

    - by Rob Fisher
    My main disk is an SSD, so in order to preserve its lifetime by reducing writes I followed some advice and made /var/spool a RAM disk by adding this line to /etc/fstab:

      tmpfs /var/spool tmpfs defaults,noatime,mode=1777 0 0

    Later I configured postfix because I have a RAID array on my system and mdadm wants to send me email if the RAID array fails, which sounds like a fine idea. Email sending worked fine until I rebooted, at which point:

      postfix: fatal: open /etc/postfix-out/main.cf: No such file or directory

    The fix for this is apparently:

      mkdir /var/spool/postfix
      postfix check

    Then I found I also had to do:

      mkfifo /var/spool/postfix/public/pickup
      service postfix restart

    Now sending emails works fine... until the next reboot. So: what is the most correct way to recreate the contents of /var/spool/postfix automatically at boot time if it does not exist? I am using Ubuntu Server 12.04.

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are:
    - A student looking to work in the database or business intelligence fields
    - A database professional who is between jobs or wants a better one
    - A developer looking to step up to something new
    - On a limited budget and can’t afford professional SQL Server training
    - Able to attend training from 9 to 5 on May 17, 2013
    AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend:
    - Denny Cherry: SQL Server Security - http://sqlsecurity.eventbrite.com/
    - Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance - http://surfmulticore.eventbrite.com/
    - Stacia Misner: Languages of BI - http://languagesofbi.eventbrite.com/
    - Bill Pearson: Practical Self-Service BI with PowerPivot for Excel - http://selfservicebi.eventbrite.com/
    - Eddie Wuerch: The DBA Skills Upgrade Toolkit - http://dbatoolkit.eventbrite.com/
    If you are interested in attending these pre-cons send an email by April 30, 2013 to [email protected] and tell us:
    - Why you are a good candidate to receive this scholarship
    - Which sessions you’d like to attend, and why (list multiple sessions in order of preference)
    - What the session will teach you and how it will help you achieve your goals
    The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK!
    P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx.
    * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch operators go Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
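
    To make the hint concrete, this is the shape of the change: it goes on the end of the query inside the OLE DB Source component (a sketch against the Big Adventure table discussed above; the column list is illustrative):

        SELECT ProductID, TransactionDate, Quantity, ActualCost
        FROM bigTransactionHistory
        ORDER BY ProductID, TransactionDate
        OPTION (FAST 10000);  -- optimise for the first 10,000 rows: one SSIS buffer

    The rest of the package is unchanged; only the Query Optimizer's goal shifts, from total runtime to time-to-first-buffer.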

    Read the article

  • 24 Hours of PASS scheduling

    - by Rob Farley
    I have a new appreciation for Tom LaRock (@sqlrockstar), who is doing a tremendous job leading the organising committee for the 24 Hours of PASS event (Twitter: #24hop). We’ve just been going through the list of speakers and their preferences for time slots, and hopefully we’ve kept everyone fairly happy. All the submitted sessions (59 of them) were put up for a vote, and over a thousand of you picked your favourites. The top 28 sessions as voted were all included (24 sessions plus 4 reserves), and duplicates (when a single presenter had two sessions in the top 28) were swapped out for others. For example, both sessions submitted by Cindy Gross were in the top 28. These swaps were chosen by the committee to get a good balance of topics. Amazingly, some big names missed out, and even the top ten included some surprises. T-SQL, Indexes and Reporting featured well in the top ten, and in the end, the mix between BI, Dev and DBA ended up quite nicely too. The ten most voted-for sessions were (in order):
    1. Jennifer McCown - T-SQL Code Sins: The Worst Things We Do to Code and Why
    2. Michelle Ufford - Index Internals for Mere Mortals
    3. Audrey Hammonds - T-SQL Awesomeness: 3 Ways to Write Cool SQL
    4. Cindy Gross - SQL Server Performance Tools
    5. Jes Borland - Reporting Services 201: the Next Level
    6. Isabel de la Barra - SQL Server Performance
    7. Karen Lopez - Five Physical Database Design Blunders and How to Avoid Them
    8. Julie Smith - Cool Tricks to Pull From Your SSIS Hat
    9. Kim Tessereau - Indexes and Execution Plans
    10. Jen Stirrup - Dashboards Design and Practice using SSRS
    I think you’ll all agree this is shaping up to be an excellent event.

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley
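
    Speaking of MERGE, here's the flavour of trick I mean: one statement that updates matching rows and inserts new ones, instead of a separate UPDATE plus INSERT (a hedged sketch; dbo.Customers and @incoming are hypothetical names, not from the course material):

        -- Upsert: assumes dbo.Customers exists and @incoming is a table variable of (CustomerID, Name)
        MERGE dbo.Customers AS tgt
        USING @incoming AS src
            ON tgt.CustomerID = src.CustomerID
        WHEN MATCHED THEN
            UPDATE SET tgt.Name = src.Name
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerID, Name) VALUES (src.CustomerID, src.Name);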

    Read the article

  • MCM Lab exam this week

    - by Rob Farley
    In two days I’ll’ve finished the MCM Lab exam, 88-971. If you do an internet search for 88-971, it’ll tell you the answer is –883. Obviously. It’ll also give you a link to the actual exam page, which is useful too, once you’ve finished being distracted by the calculator instead of going to the thing you’re actually looking for. (Do people actually search the internet for the results of mathematical questions? Really?) The list of Skills Measured for this exam is quite short, but can essentially be broken down into one word “Anything”. The Preparation Materials section is even better. Classroom Training – none available. Microsoft E-Learning – none available. Microsoft Press Books – none available. Practice Tests – none available. But there are links to Readiness Videos and a page which has no resources listed, but tells you a list of people who have already qualified. Three in Australia who have MCM SQL Server 2008 so far. The list doesn’t include some of the latest batch, such as Jason Strate or Tom LaRock. I’ve used SQL Server for almost 15 years. During that time I’ve been awarded SQL Server MVP seven times, but the MVP award doesn’t actually mean all that much when considering this particular certification. I know lots of MVPs who have tried this particular exam and failed – including Jason and Tom. Right now, I have no idea whether I’ll pass or not. People tell me I’ll pass no problem, but I honestly have no idea. There’s something about that “Anything” aspect that worries me. I keep looking at the list of things in the Readiness Videos, and think to myself “I’m comfortable with Resource Governor (or whatever) – that should be fine.” Except that then I feel like I maybe don’t know all the different things that can go wrong with Resource Governor (or whatever), and I wonder what kind of situations I’ll be faced with. And then I find myself looking through the stuff that’s explained in the videos, and wondering what kinds of things I should know that I don’t, and then I get amazingly bored and frustrated (after all, I tell people that these exams aren’t supposed to be studied for – you’ve been studying for the last 15 years, right?), and I figure “What’s the worst that can happen? A fail?” I’m told that the exam provides a list of scenarios (maybe 14 of them?) and you have 5.5 hours to complete them. When I say “complete”, I mean complete – you don’t get to leave them unfinished, that’ll get you ‘nil points’ for that scenario. Apparently no-one gets to complete all of them. Now, I’m a consultant. I get called on to fix the problems that people have on their SQL boxes. Sometimes this involves fixing corruption. Sometimes it’s figuring out some performance problem. Sometimes it’s as straight forward as getting past a full transaction log; sometimes it’s as tricky as recovering a database that has lost its metadata, without backups. Most situations aren’t a problem, but I also have the confidence of being able to do internet searches to verify my maths (in case I forget it’s –883). In the exam, I’ll have maybe twenty minutes per scenario (but if I need longer, I’ll have to take longer – no point in stopping half way if it takes more than twenty minutes, unless I don’t see an end coming up), so I’ll have time constraints too. And of course, I won’t have any of my usual tools. I can’t take scripts in, I can’t take staff members. Hopefully I can use the coffee machine that will be in the room. 
I figure it’s going to feel like one of those days when I’ve gone into a client site, and found that the problems are way worse than I expected, and that the site is down, with people standing over me needing me to get things right first time... ...so it should be fine, I’ve done that before. :) If I do fail, it won’t make me any less of a consultant. It won’t make me any less able to help all of my clients (including you if you get in touch – hehe), it’ll just mean that the particular problem might’ve taken me more than the twenty minutes that the exam gave me. @rob_farley PS: Apparently the done thing is to NOT advertise that you’re sitting the exam at a particular time, only that you’re expecting to take it at some point in the future. I think it’s akin to the idea of not telling people you’re pregnant for the first few months – it’s just in case the worst happens. Personally, I’m happy to tell you all that I’m going to take this exam the day after tomorrow (which is the 19th in the US, the 20th here). If I end up failing, you can all commiserate and tell me that I’m not actually as unqualified as I feel.

    Read the article

  • Coming to a City Near You: Oracle Business Analytics Summits

    - by Rob Reynolds
    More and more organizations use analytics to identify new business opportunities, reduce costs, and optimize business processes. How? By making business information available throughout the enterprise—and making sure that it is relevant, actionable, and easy to access. Oracle invites you to join us for an information-packed event where you’ll learn about the latest trends, best practices, and innovations in business intelligence, analytic applications, and data warehousing. If you are an IT professional involved in BI strategy, program management, systems management, architecture, or deployment, this event is for you. You’ll find out about:
    - New ways of deploying and delivering business intelligence on premise, in the cloud, and on mobile devices to a diverse base of business users
    - New approaches for integrating, storing, managing, securing, and accessing your ever-growing volumes of structured and unstructured data
    - The latest strategies for dramatically increasing the ROI of your ERP and CRM deployments
    Click here to view the presentation abstracts.
    Agenda:
    9:00 a.m.  Registration
    10:00 a.m.  Keynote: Business Analytics—Be the First to Know
    11:00 a.m.  Break
    Breakout Sessions (Track 1: Technology and Architecture Strategy; Track 2: Business Insight and Analytic Delivery):
    11:15 a.m.  Track 1: Emerging Trends in Enterprise BI Platforms | Track 2: Mobile BI—More than Dashboards on a Tablet
    12:00 noon  Networking Lunch
    1:00 p.m.  Track 1: Is Your Business Intelligence Data at Risk? | Track 2: Geospatial Intelligence—Location, Location, Location!
    1:45 p.m.  Track 1: What Extreme Performance Means for Your Business | Track 2: The Role of BI in Your ERP and Performance Management Initiatives
    2:30 p.m.  Track 1: Become a BI Architect | Track 2: BI Applications: Step 1 in Your ERP Upgrade or Expansion
    3:00 p.m.  Partner Spotlight
    Registration links for each city are below:
    New York, NY - July 26
    Miami, FL - July 27
    Reston, VA - July 27
    Atlanta, GA - July 28
    Boston, MA - July 28
    Rochester, NY - Aug 2 (event link coming soon!)
    Menlo Park, CA - August 2
    Charlotte, NC - August 3
    Newport Beach, CA - August 3
    Register online at the links above or call 1.800.820.5592 ext. 9218 to reserve your place.

    Read the article

  • A view from the call center for the Nashville Flood telethon

    - by Rob Foster
    I want to break away from my usual topic of something technical and talk about what I experienced tonight while working in the call center for the Nashville Flood telethon, which was broadcast on WSMV, CNN, and The Weather Channel.  We started receiving calls about 7pm local time and to be honest, I had no idea what to expect when going into this.  I mean, I'm a pretty good talker, but this is different... We had a good script of what to say and how we were supposed to say it, as well as paper forms and pens that we used to collect information from people who wanted to donate their money to help.  I took my first few calls pretty easily and it went pretty quick and easy.  Everyone was upbeat and happy to be in the call center, as well as people happy to be donating money. Pizza, snacks, and soft drinks were flowing well.  Everyone is smiling and happy.  :) About 3 or 4 calls into my night, I got a call from a lady that had lost 2 family members in West Nashville who drowned in the floods.  She was crying when she called and I of course tried to console her.  She told me how bad her situation was, losing family members and much of her neighborhood.  After all this, she still just wanted to help other people.  She was donating all the money that she could to the telethon and I want to share a direct quote from her: "I want to donate this instead of buying flowers for my family members' funeral because people out there need help."  Please let me pause while I get myself together <again>.  That caught me so off guard (and still does). I had kids calling wanting to donate their allowance, open their piggy banks, whatever they could do.  These are kids.  Kids not much older than my boys.  Kids who should be focused on buying the next cool video game or toy or whatever, but wanted to do something.  Everyone just seemed to want to help. I took calls from as far away as British Columbia as well, and pretty much coast to coast.  How cool is that?  Yet another thing that caught me off guard.  This kind lady that called from British Columbia told me how much she loved visiting Nashville and just hated to see this happen.  I believe that she said that she will be attending the CMA Fest this year too.  I was sure to tell her not to cancel her plans!  :) It felt like every call I took (and I took A LOT, as did everyone else) was very personal and heartfelt.  I've never had the privilege to do anything like this and feel lucky to have been able to help out with answering phones and logging donations.  Nashville will bounce back very quickly, people are out there day and night helping each other, and the spirits are very high here.  I hope that one day, my kids read this blog and better understand who they are, where they come from, and what the human spirit is and can be.  I love this city, I love the people here, I love the culture, and even more than ever am proud to say that this is me.  This is us.  We are Nashville!

    Read the article

  • Improve efficiency of web building setup and processes - WordPress on Mac

    - by Rob
    Can anyone see any ways in which I can improve my speed and efficiency with the following setup? Or are there any obvious holes in my building process? This is for building WordPress websites on a Mac:
    1) I have a standard WordPress setup that I work from which includes various plugins that I tend to use across all setups - thus cutting out the step of having to download them all the time!
    2) My standard WP files are copied into a Dropbox folder - thus creating backups of the files.
    3) I then open up MAMP and set up a local version.
    4) I open up Coda and set up the FTP details so files can be uploaded to the live domain by using the publish button.
    If anyone has any advice on how to improve this process then please let me know!

    Read the article

  • Handy SQL Server Functions Series (HSSFS) Part 2.0 - Prelude to Parsing Patterns Properly

    - by Most Valuable Yak (Rob Volk)
    In Part 1 of the series I wrote about 2 lesser-known and somewhat undocumented functions. In this part, I'm going to cover some familiar string functions like Substring(), Parsename(), Patindex(), and Charindex() and delve into their strengths and weaknesses. I'm also splitting this part up into sub-parts to help focus on a particular technique and/or problem with the technique, hence the Part 2.0. Consider this a composite post, or com-post, if you will. (It may just turn out to be a pile of sh_t after all) I'll be using a contrived example, perhaps the most frustratingly useful, or usefully frustrating, function in SQL Server: @@VERSION. Contrived, because there are better ways to get the information (which I'll cover later); frustrating, because of the way Microsoft formatted the value; and useful because it does have 1 or 2 bits of information not found elsewhere. First let's take a look at the output of @@VERSION:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
            Apr 2 2010 15:53:02
            Copyright (c) Microsoft Corporation
            Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)

    There are 4 lines, with lines 2-4 indented with a tab character.  While this PRINTs nicely, if you SELECT @@VERSION in grid mode it all runs together because it ignores carriage return/line feed (CR/LF) characters.  Not fatal, but annoying. Note that @@VERSION's output will vary depending on edition and version of SQL Server, and also the OS it's installed on.  Despite the differences, the output is laid out the same way and the relevant pieces are in the same order. I'll be using the following view for Parts 2.1 onward, so we have a nice collection of @@VERSION information:

        create view version(SQLVersion, VersionString) AS (
        select 2000, 'Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 19:46:53 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
        union all
        select 2005, 'Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86) May 26 2009 14:24:20 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)'
        union all
        select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)'
        union all
        select 2005, 'Microsoft SQL Server 2005 - 9.00.3080.00 (Intel X86) Sep 6 2009 01:43:32 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)'
        union all
        select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
        union all
        select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)'
        )

    Feel free to add your own @@VERSION info if it's not already there. In Part 2.1 I'll focus on extracting the SQL Server version number (10.50.1600.1 in the first example) and the Edition (Developer), but will have a solution that works with all versions.  Stay tuned!
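
    As a teaser for those "better ways" mentioned above, SERVERPROPERTY() hands over the most useful pieces without any string surgery (my example, not from the post; these property names have existed since SQL Server 2000):

        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 10.50.1600.1
               SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- e.g. RTM, SP3
               SERVERPROPERTY('Edition')        AS Edition;        -- e.g. Developer Edition

    What @@VERSION has that SERVERPROPERTY doesn't surface as cleanly is the build date and the OS version line, which is part of why it's still worth parsing.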

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Con Scholarship Winners!

    - by Most Valuable Yak (Rob Volk)
    A few weeks ago, AtlantaMDF offered scholarships for each of our upcoming Pre-conference sessions at SQL Saturday #220. We would like to congratulate the winners!
    - David Thomas: SQL Server Security - http://sqlsecurity.eventbrite.com/
    - Vince Bible: Surfing the Multicore Wave: Processors, Parallelism, and Performance - http://surfmulticore.eventbrite.com/
    - Mostafa Maged: Languages of BI - http://languagesofbi.eventbrite.com/
    - Daphne Adams: Practical Self-Service BI with PowerPivot for Excel - http://selfservicebi.eventbrite.com/
    - Tim Lawrence: The DBA Skills Upgrade Toolkit - http://dbatoolkit.eventbrite.com/
    Thanks to everyone who applied! And once again we must thank Idera's generous sponsorship, and the time and effort made by Bobby Dimmick (w|t) and Brian Kelley (w|t) of Midlands PASS for judging all the applicants. Don't forget, there's still time to attend the Pre-Cons on May 17, 2013! Click on the EventBrite links for more details and to register!

    Read the article
