Search Results

Search found 10711 results on 429 pages for 'ghost blog'.

Page 108 of 429

  • Building the Internet of Things – with Microsoft StreamInsight and the Microsoft .Net Micro Framework

    - by Roman Schindlauer
    Fresh from the press – the March 2012 issue of MSDN Magazine features an article about the Internet of Things. It discusses in depth how you can use StreamInsight to process all the data that is continuously produced in typical Internet of Things scenarios. It also gives you an end-to-end perspective on developing Internet of Things solutions in the .NET world, ranging from the .NET Micro Framework application running on the device, through the communication between the devices and the server side, all the way to powerful cross-device streaming analytics implemented in StreamInsight LINQ. You can find an online version of the article here. Happy reading! Regards, The StreamInsight Team

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery”, which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that the machine is stored in a standard Hyper-V format – with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system, it asks for a few details, such as what you want to call it, and so on. By doing this, you can essentially create your own gallery of systems, for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.

    Read the article

  • 32-bit ODBC on Windows Server 2008 R2

    - by John Paul Cook
    Heterogeneous data access requires having the right drivers. If you have to use 32-bit ODBC drivers, you won’t find them when you start the Microsoft ODBC Administrator, because that tool is 64-bit. The 32-bit ODBC Administrator is found here: C:\Windows\SysWOW64\odbcad32.exe You might want to make a shortcut for it to make it easy to find. You’ll need to use it when making 32-bit ODBC data connections. ...(read more)

    Read the article

  • Execute a SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog that comes with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored there, they are executed using the new stored procedures created for this purpose. The script that gets executed if you run a package right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        DECLARE @execution_id bigint
        EXEC [SSISDB].[catalog].[create_execution]
            @package_name = 'my_package.dtsx',
            @execution_id = @execution_id OUTPUT,
            @folder_name = N'BI',
            @project_name = N'DWH',
            @use32bitruntime = False,
            @reference_id = NULL
        SELECT @execution_id

        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type = 50,
            @parameter_name = N'LOGGING_LEVEL',
            @parameter_value = @var0

        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type = 50,
            @parameter_name = N'DUMP_ON_ERROR',
            @parameter_value = @var1

        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure simply starts the execution of the package and returns as soon as the package has been started, thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (and thus have a synchronous execution of it)? You have to be sure that you add the SYNCHRONIZED parameter to the package execution, before the start_execution procedure:

        EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id,
            @object_type = 50,
            @parameter_name = N'SYNCHRONIZED',
            @parameter_value = 1

    And that’s it. PS: As of RC0, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you’re using an external scheduler, just keep this post in mind.
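    With a synchronous execution, control comes back only after the package has finished, so the caller can check the outcome immediately. A minimal sketch of such a check, using the SSISDB catalog.executions view (the status codes in the comment are from the SQL Server 2012 documentation as I recall it; verify on your build):

        -- Run in the same batch as start_execution, so @execution_id is still in scope.
        -- status: 4 = failed, 7 = succeeded (other codes exist, e.g. 2 = running).
        SELECT e.execution_id, e.status
        FROM [SSISDB].[catalog].[executions] AS e
        WHERE e.execution_id = @execution_id;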

    Read the article

  • We Are SQLFamily

    - by AllenMWhite
    On Monday, Tom LaRock (b | @sqlrockstar) presented his #MemeMonday topic, What #SQLFamily Means To Me. The #sqlfamily hash tag is a relatively new one, but it is amazingly appropriate. I've been working with relational databases for almost 20 years, and for most of that time I've been the lone DBA. The only one to set things up, explain how things work, fix the problems, make it go faster, etc., etc., yadda, yadda, yadda. I enjoy being 'the guy', but at the same time it gets hard. What if I'm wrong?...(read more)

    Read the article

  • An XEvent a Day (28 of 31) – Tracking Page Compression Operations

    - by Jonathan Kehayias
    The Database Compression feature in SQL Server 2008 Enterprise Edition can provide some significant reductions in storage requirements for SQL Server databases and, in the right implementations and scenarios, performance improvements as well. There isn’t really a whole lot of information about the operations of database compression documented as being available in the DMVs or SQL Trace. Paul Randal pointed out on Twitter today that sys.dm_db_index_operational_stats() provides...(read more)
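    The excerpt cuts off at the DMV Paul mentioned; as a rough sketch of what querying it looks like (the two page-compression counter columns are as I recall them from SQL Server 2008 onward, so treat the exact names as an assumption to verify):

        -- Page-compression activity per index in the current database,
        -- via sys.dm_db_index_operational_stats (SQL Server 2008 and later).
        SELECT OBJECT_NAME(ios.object_id)        AS object_name,
               ios.index_id,
               ios.partition_number,
               ios.page_compression_attempt_count,
               ios.page_compression_success_count
        FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
        WHERE ios.page_compression_attempt_count > 0
        ORDER BY ios.page_compression_attempt_count DESC;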

    Read the article

  • SSIS Training 15-19 Oct in Reston Virginia

    - by andyleonard
    Early bird registration is now open for Linchpin People’s SSIS training course From Zero To SSIS, scheduled for 15-19 Oct 2012 in Reston, Virginia! Register today – the early bird discount ends 28 Sep 2012. Training Description: From Zero to SSIS was developed by Andy Leonard to train technology professionals in the fine art of using SQL Server Integration Services (SSIS) to build data integration and Extract-Transform-Load (ETL) solutions. The training is focused around labs and emphasizes a hands-on...(read more)

    Read the article

  • SSIS Design Patterns Training in London 8-11 Sep!

    - by andyleonard
    A few seats remain for my course SQL Server Integration Services 2012 Design Patterns, to be delivered in London 8-11 Sep 2014. Register today to learn more about:
    - New features in SSIS 2012 and 2014
    - Advanced patterns for loading data warehouses
    - Error handling
    - The (new) Project Deployment Model
    - Scripting in SSIS
    - The (new) SSIS Catalog
    - Designing custom SSIS tasks
    - Executing, managing, monitoring, and administering SSIS in the enterprise
    - Business Intelligence Markup Language (Biml)
    - BimlScript
    - ETL Instrumentation
    ...(read more)

    Read the article

  • TechEd 2010 Day Two – No SQL Server in Sight

    - by BuckWoody
    Today I worked the booth at TechEd 2010, manning the new “Surface” computer, which is just the coolest object on the planet. After that I didn’t attend a single SQL Server session – instead I’ve been frequenting SharePoint, Microsoft Office, and even the High-Performance Computing sessions. The reason is that I get really high quality SQL Server presentations at PASS, SQL Saturdays, and online from Microsoft and other vendors. While there are SQL Server sessions here (after all, I’m giving one of them!) I tend to try and see things that I don’t normally get to learn about. And the cross-pollination between those technologies and mine is fantastic. I’ve even managed to go to an Entity Framework presentation for the developers. I actually have (a little) more respect for that technology – and I’ve modified my presentation to encompass more of that information. So whenever you have the chance, take a walk outside your comfort zone. Even at PASS and SQL Saturdays (and certainly online) you can investigate technologies other than the ones you know best.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies, which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers: on-premises systems are fully under your control; in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on); in PaaS the hardware and the OS are abstracted and you focus on code and data only; and in SaaS everything is abstracted – you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture moves to the primary factor in your decision. It's problem-first architecting: laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS – you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated – you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers – we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that – it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish), and the document then drives the distribution and implementation of that intent. As time goes on, and more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:
    - Sign Up for Windows Azure
    - Add Windows Azure to a RightScale Account
    - Windows Azure Virtual Machines 3-tier Deployment

    Momma Bear Level - Just the right level... ;0)
    - Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
    - Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress)

    Baby Bear Level - Marketing
    - Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
    - Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
    - SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines

    Read the article

  • Two BULK INSERT issues I worked around recently

    - by AaronBertrand
    Since I am still afraid of SSIS, and because I am dealing mostly with CSV files and table structures that are relatively simple and require only one of the three letters in the acronym "ETL," I find myself using BULK INSERT a lot. I have been meaning to switch to using CLR, since I am doing a lot of file system querying using xp_cmdshell, but I haven't had the chance to really explore it yet. I know, a lot of you are probably thinking, wow, look at all those bad habits. But for every person thinking...(read more)
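    For readers who haven't used it, a minimal BULK INSERT of a CSV file looks like this (the table name and file path are illustrative, not from Aaron's post):

        -- Load a comma-separated file into a staging table.
        BULK INSERT dbo.SalesStaging
        FROM 'C:\data\sales.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            FIRSTROW        = 2,    -- skip the header row
            TABLOCK                 -- enables minimal logging under the right conditions
        );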

    Read the article

  • An update on using Rosetta Stone: Studio now isn't very useful and is not great value as an add-on option

    - by Greg Low
    I had a surprisingly large number of responses from my previous posting about learning Chinese. An update for those considering Rosetta Stone (www.rosettastone.com) for Chinese, Spanish or any other language that they offer: I had to renew my "Studio" subscription today and it's now a much worse deal than it was. It's now $75 for 6 months for Studio sessions. Online classes used to be 45 mins. Recently they reduced them to 20 mins. Given how often people have connection issues, etc., that 20 mins can disappear very quickly. They've also reduced the number you can attend. You used to be able to have 2 scheduled at any point in time. Now they limit you to 2 "group sessions" per month during the period. (You can pay for additional private sessions.) The combination of these two changes now makes it much less useful. Two 20 min sessions per month is an almost meaningless amount of practice. They also now automatically change you to auto-renew when you subscribe. They tell you where to remove this auto-renewal, but the first 4 or 5 times that I went into that screen, no such option appeared. Later, an option did appear and I used it. Overall, things just aren't what they used to be at Rosetta Stone. It's now pretty hard to recommend the Studio option where it was a no-brainer before.

    FURTHER UPDATE: <sigh> Even after I renewed, I could not even connect to their "new" service. Although the system processed the renewal, it still tells me it's expired. My online chat person "Siva S" tells me that the problem is that I've purchased all 5 levels of the program. I can't wait till they explain to me how making an extra purchase from them stops me from logging on. Siva told me that they had "renewed" the program, said I'd have to speak to Customer Care, who aren't available, and then disconnected himself. Impressive (not). Their website is now full of issues too. It insists that my billing address is in the USA, even though it pretends to accept changes to it. Overall, it's gone from something that could be recommended (with some limitations) to now being an app to avoid. That's a pity, as I liked much of it before.

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible, so that you can change object properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration. I usually use XML configuration files, and I also force everyone that works with me to make sure that an object used in several packages has the same name in all packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of one of those objects. For example, all the packages that need access to the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of “global” objects so that we can have one configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or “local” – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec’s Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx

    Now, how can we improve this even more? I’d like to have a package that, when it’s run, automatically goes “somewhere”, searches for global or local configuration, loads it and applies it to itself. That’s the basic idea of Auto-Configuring Packages. The “somewhere” is a SQL Server table, defined as shown in the original post (see the sketch after this entry). In this table you’ll put the values that you want to be used at runtime by your package.

    The ConfigurationFilter column specifies to which package a configuration line has to be applied. A package will use that line only if the value specified in the ConfigurationFilter column is equal to its name. In the sample shown in the post, only the package named “simple-package” will use line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it’s possible to replicate the “global” and “local” configuration approach I’ve described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value that is simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS in order to put DTS Configurations into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx

    Now, how does it work? It’s very easy: you just have to call DTLoggedExec with the /AC option:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string with the following format: <database_server>;<database_name>;<table_name>. Only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package. The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the “global” and “local” configuration.

    Now, you may start to wonder why this approach is needed at all, instead of just always passing two XML DTS Configuration files to a package (one for the “local” and one for the “global” configuration), doing something like this:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn’t work if one of the two configuration files contains a value that has to be applied to an object that doesn’t exist in the loaded package. This situation will raise an error that halts package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this makes deployment and management harder, since you’ll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We’re using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
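    A sketch of the configuration table described above, reconstructed from the column names in the post (the original definition appears there only as an image, so the data types and constraints below are my assumptions):

        -- Illustrative reconstruction, not the original DDL from the post.
        -- Assumes a schema named ssiscfg exists, per the example invocation above.
        CREATE TABLE ssiscfg.[configuration]
        (
            ConfigurationFilter nvarchar(255) NOT NULL, -- package name, or $$Global for all packages
            ConfigurationValue  nvarchar(255) NULL,     -- value applied at runtime
            PackagePath         nvarchar(255) NOT NULL, -- object/property the value is applied to
            ConfiguredValueType nvarchar(20)  NOT NULL, -- data type of the value
            -- The post describes Checksum as a hash of ConfigurationFilter plus
            -- PackagePath, used to guarantee uniqueness of configuration rows.
            [Checksum] AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED,
            CONSTRAINT PK_configuration PRIMARY KEY (ConfigurationFilter, PackagePath)
        );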

    Read the article

  • Cumulative Updates available for SQL Server 2008 R2

    - by AaronBertrand
    Today the SQL Server Release Services Team has pushed out new cumulative updates for SQL Server 2008 R2:
    - Cumulative Update #14 for SQL Server 2008 R2 RTM - KB article http://support.microsoft.com/kb/2703280 - build number 10.50.1817.0 - 7 fixes - relevant for builds between 10.50.1600 and 10.50.1816
    - Cumulative Update #7 for SQL Server 2008 R2 Service Pack 1 - KB article http://support.microsoft.com/kb/2703282 - build number 10.50.2817.0 - 24 fixes - relevant for builds between 10.50.2500 and 10.50.2816
    ...(read more)
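    To tell which branch and build a server is on before picking the matching CU, a quick check against the build ranges above (SERVERPROPERTY is standard T-SQL):

        -- Returns e.g. 10.50.2500.0 (2008 R2 SP1); compare against the
        -- "relevant for builds between" ranges listed above.
        SELECT SERVERPROPERTY('ProductVersion') AS build,
               SERVERPROPERTY('ProductLevel')   AS service_pack,
               SERVERPROPERTY('Edition')        AS edition;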

    Read the article

  • List columns where collation doesn't match database collation

    - by TiborKaraszi
    Below script lists every database/table/column combination where the column collation doesn't match the database collation. I just wrote it for a migration project and thought I'd share it. I'm sure lots of things can be improved, but below worked just fine for me for a one-time execution on a number of servers.

        IF OBJECT_ID('tempdb..#res') IS NOT NULL DROP TABLE #res
        GO
        DECLARE @db sysname, @sql nvarchar(2000)
        CREATE TABLE #res (server_name sysname, db_name sysname, db_collation sysname, table_name...(read more)
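    The excerpt cuts off mid-script. For a single database, a minimal sketch of the same idea (my reconstruction, not Tibor's full script, which iterates every database and collects results into #res):

        -- Columns in the current database whose collation differs from
        -- the database collation. Run per database.
        SELECT t.name AS table_name,
               c.name AS column_name,
               c.collation_name
        FROM sys.columns AS c
        INNER JOIN sys.tables AS t ON t.object_id = c.object_id
        WHERE c.collation_name IS NOT NULL
          AND c.collation_name <> CONVERT(sysname, DATABASEPROPERTYEX(DB_NAME(), 'Collation'));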

    Read the article

  • White Paper on Parallel Processing

    - by Andrew Kelly
    I just ran across what I think is a newly published white paper on parallel processing in SQL Server 2008 R2. The date is October 2010, but this is the first time I have seen it, so I am not 100% sure how new it really is. Maybe you have seen it already, but if not I recommend taking a look. So far I haven’t had time to read it extensively, but from a cursory look it appears to be quite informative, and this is one of the areas least understood by a lot of DBAs. It was authored by Don Pinto...(read more)

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft’s fault, and not yours.

    SQL Server 2005

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): there is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: the optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization – it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right! This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).

    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course):

        CREATE INDEX nc1
        ON Production.TransactionHistory (ProductID)
        INCLUDE (TransactionDate);

    With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one):

        SELECT th.ProductID, th.TransactionID, th.TransactionDate
        FROM Production.TransactionHistory AS th WITH (INDEX(1))
        WHERE th.ProductID = 1
        AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It’s not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world) but it easily can be much worse, and there’s not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved. twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Be the surgeon

    - by Rob Farley
    It’s a phrase I use often, especially when teaching, and I wish I had realised the concept years earlier. (And of course, it fits with this month’s T-SQL Tuesday topic, hosted by Argenis Fernandez.) When I’m sick enough to go to the doctor, I see a GP. I used to typically see the same guy, but he’s moved on now. However, when he has been able to roughly identify the area of the problem, I get referred to a specialist, sometimes a surgeon. Being a surgeon requires a refined set of skills. It’s why they often don’t like to be called “Doctor”, and prefer the traditional “Mister” (the history is that the doctor used to make the diagnosis, and then hand the patient over to the person who didn’t have a doctorate, but rather was an expert cutter, typically from a background in butchering). But if you ask the surgeon about the pain you have in your leg sometimes, you’ll get told to ask your GP. It’s not that your surgeon isn’t interested – they just don’t know the answer. IT is the same now. That wasn’t something that I really understood when I got out of university. I knew there was a lot to know about IT – I’d just done an honours degree in it. But I also knew that I’d done well in just about all my subjects, and felt like I had a handle on everything. I got into developing, and still felt that having a good level of understanding about every aspect of IT was a good thing. This got me through the first six or seven years of my career. But then I started to realise that I couldn’t compete. I’d moved into management, and was spending my days running projects, rather than writing code. The kids were getting older. I’d had a bad back injury (ask anyone with chronic pain how it affects your ability to concentrate, retain information, etc). But most of all, IT was getting larger. I knew kids without lives who knew more than I did. And I felt like I could easily identify people who were better than me in whatever area I could think of. Except writing queries (this was before I discovered technical communities, and people like Paul White and Dave Ballantyne). And so I figured I’d specialise. I wish I’d done it years earlier. Now, I can tell you plenty of people who are better than me at any area you can pick. But there are also more people who might consider listing me in some of their lists too. If I’d stayed the GP, I’d be stuck in management, and finding that there were better managers than me too. If you’re reading this, SQL could well be your thing. But it might not be either. Your thing might not even be in IT. Find out, and then see if you can be a world-beater at it. But it gets even better, because you can find other people to complement the things that you’re not so good at. My company, LobsterPot Solutions, has six people in it at the moment. I’ve hand-picked those six people, along with the one who quit. The great thing about it is that I’ve been able to pick people who don’t necessarily specialise in the same way as me. I don’t write their T-SQL for them – generally they’re good enough at that themselves. But I’m on-hand if needed. Consider Roger Noble, for example. He’s doing stuff in HTML5 and jQuery that I could never dream of doing, creating an amazing HTML5 version of PivotViewer. Or Ashley Sewell, a guy who does project management far better than I do. I could go on. My team is brilliant, and I love them to bits. We’re all surgeons, and when we work together, I like to think we’re pretty good! @rob_farley

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 1)

    - by Hugo Kornelis
    So you thought that encapsulating code in user-defined functions for easy reuse is a good idea? Think again! SQL Server supports three types of user-defined functions. Only one of them qualifies as good. The other two – well, the title says it all, doesn’t it?

    The bad: scalar functions

    A scalar user-defined function (UDF) is very much like a stored procedure, except that it always returns a single value of a predefined data type – and because of that property, it isn’t invoked with an EXECUTE statement,...(read more)
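    As a concrete illustration of the shape being described (my example, not Hugo's), a scalar UDF and its inline invocation:

        -- A scalar UDF: the body reads like a stored procedure, but it returns
        -- a single int and is invoked inline in a query, not with EXECUTE.
        CREATE FUNCTION dbo.DoubleIt (@i int)
        RETURNS int
        AS
        BEGIN
            RETURN @i * 2;
        END;
        GO
        SELECT dbo.DoubleIt(21) AS answer;  -- 42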

    Read the article

  • Connect Spotlight: Rename Instance Name

    - by Lara Rubbelke
    Every now and then customers ask me how they can suggest changes or behavior changes to SQL Server. Many of us are aware of Connect, where you can add feature recommendations and vote on other people’s suggestions. There are a LOT of recommendations, and I know Microsoft values your feedback and suggestions. Sometimes these recommendations are grand, and others are small – in either case, your votes do make a difference in how Microsoft prioritizes features and changes in future releases. Recently,...(read more)

    Read the article

  • Presentations about SARGability tomorrow

    - by Rob Farley
    Tomorrow, via LiveMeeting, I’m giving two presentations about SARGability. That’s April 27th, for anyone reading this later. The first will be at the Adelaide SQL Server User Group. If you’re in Adelaide, you should definitely come along. Both meetings are being broadcast to the PASS AppDev Virtual Chapter. The second will be in the Adelaide evening at 9:30pm, and will just be me and my computer. The audience will be on the other end of a phone-line, which is very different for me. I’m very much...(read more)

    Read the article

  • So long 2010 and Hello 2011

    - by AllenMWhite
    This has been an interesting year, one in which I had some successes and some failures. It's the first year I really worked "for myself". In the past I've had some periods between jobs where I've done some contracting, but in November 2009 I struck out on my own (with a great business partner in Shawn Upchurch). This first full calendar year in this role had its ups and downs, but overall it's been a success and I'm grateful to Shawn, my clients, and especially to my wife of 30 years, Cindi. The...(read more)

    Read the article

  • Here Comes the FY11 Earmarks Database

    - by Mike C
    I'm really interested in politics (don't worry, I'm not going to start bashing politicians and hammering you with political rage). The point is, when the U.S. FY11 Omnibus Spending Bill (the bill to fund the U.S. Government for another year) was announced, it piqued my interest. I'm fascinated by "earmarks" (also affectionately known as "pork"). For those who aren't familiar with U.S. politics, "earmark" is a slang term for "Congressionally Directed Spending". It's basically the set of provisions...(read more)

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn’t? Right? Well, at least one person does, because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this, but for those that haven’t, here is a nifty little script that will do it for you, and it uses our favourite jack-of-all-tools … Powershell!! Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]), shown as a screenshot in the original post: there are two packages in there called “20110111 Chaining Expression components” & “Package”, and I want to export those two packages into a folder structure that mirrors that in [msdb]. Here is the Powershell script that will do that:

        Param($SQLInstance = "localhost")

        ##### Add all the SQL goodies (including Invoke-Sqlcmd) #####
        add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
        add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue
        cls

        $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "
        WITH cte AS
        (
            SELECT cast(foldername as varchar(max)) as folderpath, folderid
            FROM msdb..sysssispackagefolders
            WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
            UNION ALL
            SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
            FROM msdb..sysssispackagefolders f
            INNER JOIN cte c ON c.folderid = f.parentfolderid
        )
        SELECT c.folderpath, p.name,
               CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
        FROM cte c
        INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
        WHERE c.folderpath NOT LIKE 'Data Collector%'"

        Foreach ($pkg in $Packages)
        {
            $pkgName = $Pkg.name
            $folderPath = $Pkg.folderpath
            $fullfolderPath = "c:\temp\$folderPath\"
            if(!(test-path -path $fullfolderPath))
            {
                mkdir $fullfolderPath | Out-Null
            }
            $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
        }

    To run it, simply change the “localhost” parameter to the server you want to connect to, either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly). The folder structure it created for me is shown in the original post: notice how it is a mirror of the folder structure in [msdb]. Hope this is useful! @Jamiet

    UPDATE: This post prompted Chad Miller to write a post describing his Powershell add-in that utilises a SSIS API to do exporting of packages. Go take a read here: http://sev17.com/2011/02/importing-and-exporting-ssis-packages-using-powershell/

    Read the article
