Search Results

Search found 10447 results on 418 pages for 'blog blomqvist no'.


  • Security Updates Available for SQL Server 2008, 2008 R2, 2012, 2014

    - by AaronBertrand
    If you are running 2008 SP3, 2008 R2 SP2, 2012 SP1 (SP2 is not affected, RTM is no longer supported), or 2014, you'll want to check out Security Bulletin MS14-044 for details on a denial of service / privilege escalation issue that has been patched: http://technet.microsoft.com/en-us/library/security/MS14-044 For SQL Server 2012 and SQL Server 2014, I've blogged about recent builds and recommendations here: http://blogs.sqlsentry.com/team-posts/latest-builds-sql-server-2012/ http://blogs.sqlsentry.com/team-posts/latest-builds-sql-server-2014...(read more)


  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion in regard to general ETL best practices, the subject of checkpoint files as a means for package restartability came up and I stated that I was dead against using them. For anyone that may care, here is why:
    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree)
    - They don't make any allowance for loop iterations
    - They cannot store variables of type Object
    - They are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - They are ignored by event handlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios
    - They don't work properly
    I'll expand on the last bullet point. I have encountered situations where the behaviour of tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet


  • FIX adapter for StreamInsight

    - by Roman Schindlauer
    Over the last couple of months, Rapid Addition, a leading FIX and FAST solutions provider for the financial services industry, has been working closely with the StreamInsight team to enable StreamInsight Complex Event Processing queries to receive input feeds from Rapid Addition’s FIX engine and to send result events back into FIX. Earlier today, Toby Corballis from Rapid Addition blogged about these capabilities here on HedgeHogs. We are very excited to demonstrate these capabilities at the SIFMA conference in New York. The session will take place tomorrow, Tuesday, 11 am – 12 noon, at the Hilton Hotel New York, 1335 Avenue of the Americas, East Suite, 4th floor. Torsten Grabs from the StreamInsight team will join the Rapid Addition and local Microsoft teams for the session. If you are interested in attending the session, please register at http://bit.ly/c0bbLL. We are looking forward to meeting you tomorrow at SIFMA! Best regards, The StreamInsight Team


  • Two Free Training Webcasts Open for Registration

    - by KKline
    We've got two sessions that you need to sign up for right away. The upcoming webcast for Oracle-oriented folks has huge registration numbers. So get in while you still can before we hit the limit of what LiveMeeting can handle.
    Pain of the Week: SQL Server for the Oracle DBA
    - Webcast: SQL Server for the Oracle DBA
    - Date: Thursday, May 27, 2010 (Just a couple days hence!)
    - Time: 8 a.m. Pacific / 11 a.m. Eastern / 4 p.m. United Kingdom / 5 p.m. Central Europe
    - Duration: 45-60 minutes
    - Cost: FREE
    In enterprise...(read more)


  • Determining distribution of NULL values

    - by AaronBertrand
    Today on the Twitter hashtag #sqlhelp, @leenux_tux asked: "How can I figure out the percentage of fields that don't have data?" After further clarification, it turns out he is after what proportion of columns are NULL. Some folks suggested using a data profiling task in SSIS. There may be some validity to that, but I'm still a fan of sticking to T-SQL when I can, so here is how I would approach it: Create a #temp table or @table variable to store the results. Create a cursor that loops through all...(read more)
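
    As a rough sketch of the cursor-based approach described above (the excerpt is truncated, so this is a reconstruction rather than Aaron's actual code; dbo.MyTable is a placeholder):

        -- For each NULLable column of a hypothetical table, compute the share of NULLs
        DECLARE @col sysname, @sql nvarchar(max);
        CREATE TABLE #null_stats (column_name sysname, null_pct decimal(5,2));

        DECLARE col_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT name FROM sys.columns
            WHERE object_id = OBJECT_ID(N'dbo.MyTable') AND is_nullable = 1;

        OPEN col_cursor;
        FETCH NEXT FROM col_cursor INTO @col;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'INSERT #null_stats SELECT ' + QUOTENAME(@col, '''')
                     + N', 100.0 * SUM(CASE WHEN ' + QUOTENAME(@col)
                     + N' IS NULL THEN 1 ELSE 0 END) / NULLIF(COUNT(*), 0) FROM dbo.MyTable;';
            EXEC sys.sp_executesql @sql;
            FETCH NEXT FROM col_cursor INTO @col;
        END
        CLOSE col_cursor;
        DEALLOCATE col_cursor;

        SELECT column_name, null_pct FROM #null_stats ORDER BY null_pct DESC;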


  • SolidQ Journal for January (free and available now)

    - by Greg Low
    I've been travelling in Tasmania for a week or so and didn't get to post about the SolidQ Journal for January. It's our free monthly journal at: http://www.solidq.com/sqj . I promised to write a part two on controlling the security context of stored procedures but didn't get time to write it this month. I'll rectify that very soon. However, in the meantime, the rest of the team have done a great job again. Guillermo Bas has described how to access SharePoint 2010 data through Windows Phone 7. Marino...(read more)


  • SSAS DMVs: useful links

    - by Davide Mauri
    From time to time I need to extract metadata information from the Analysis Services DMVs in order to quickly get an overview of the entire situation and/or drill down to the detail level. As a memo, here are the links I use most when I need documentation on the SSAS object and data DMVs:
    - SSAS: Using DMV Queries to get Cube Metadata: http://bennyaustin.wordpress.com/2011/03/01/ssas-dmv-queries-cube-metadata/
    - SSAS DMV (Dynamic Management View): http://dwbi1.wordpress.com/2010/01/01/ssas-dmv-dynamic-management-view/
    - Use Dynamic Management Views (DMVs) to Monitor Analysis Services: http://msdn.microsoft.com/en-us/library/hh230820.aspx
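
    For illustration, two representative DMV queries of the kind those links document (run from an MDX query window in SSMS against an SSAS instance; the column choices here are just examples):

        -- List the cubes in the connected SSAS database, with last processing times
        SELECT CUBE_NAME, LAST_SCHEMA_UPDATE, LAST_DATA_UPDATE
        FROM $SYSTEM.MDSCHEMA_CUBES;

        -- List dimensions and their cardinality
        SELECT DIMENSION_NAME, DIMENSION_CARDINALITY
        FROM $SYSTEM.MDSCHEMA_DIMENSIONS;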


  • Cumulative Update #7 for SQL Server 2008 Service Pack 3 is available

    - by AaronBertrand
    Today Microsoft has released a new cumulative update for SQL Server 2008.
    Cumulative Update #7 for SQL Server 2008 Service Pack 3
    - Knowledge Base Article: KB #2738350
    - At the time of writing, there are 9 fixes listed
    - The build number is 10.00.5794
    - Relevant for @@VERSION between 10.00.5500 and 10.00.5793
    No word yet on an update for Service Pack 2. As usual, I'll post my standard disclaimer here: these updates are NOT for SQL Server 2008 R2 (where @@VERSION will report 10.50.xxxx)....(read more)
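
    A quick way to check whether a given instance falls in that build range (a generic sketch, not taken from the post):

        -- Report the build and patch level of the current instance
        SELECT @@VERSION                        AS version_string,
               SERVERPROPERTY('ProductVersion') AS product_version,  -- e.g. 10.00.57xx
               SERVERPROPERTY('ProductLevel')   AS product_level;    -- e.g. SP3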


  • 2011 PASS Board Applicants: Sri Sridharan

    - by andyleonard
    Introduction I am interviewing 2011 PASS Board Nominee Applicants. As listed on the PASS Board Elections site the applicants are: Rob Farley, Geoff Hiten, Adam Jorgensen, Denise McInerney, Sri Sridharan, Kendal Van Dyke. I'm asking everyone the same questions and blogging the responses in the order received. Sri Sridharan is next up: Interview With Sri Sridharan 1. What's your day job? I work for VHA as a Data Architect. I am responsible for 3 main goals:
    - Responsible for Data Governance initiatives in...(read more)


  • The Primary Cause of Failed IT Projects

    - by Paul Nielsen
    During my career I’ve been a part of dozens of projects. Some I was on from the start, most I came in to help bail out. Some went smooth and were a pleasure to build and maintain and some projects failed (failed being broadly defined as projects that were not completed, or were completed but were a horrid mess – very complex, impossible to maintain, refactor, and a royal pain to keep running.) While there are a number of factors that can contribute to a failed project, in my career it seems the primary...(read more)


  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do.

    When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already.

    When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do.

    In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed.

    So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.


  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn’t have an easy way to take a table backup, so I often use the bcp (Bulk Copy Program) utility to accomplish the same goal. I’ve mentioned this before, and someone told me when they tried it they couldn’t restore the table – ah, the dangers of telling people half the information! I should have mentioned that you need to have a “format file” ready if the table does not exist at the destination. In my case I already had the table; in this person’s case they did not. The format file can be used to rebuild that table structure before the data is bcp’d in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx There’s another way to back up a table, and that’s to create a Filegroup and place the table there. Then you can take a Filegroup backup to back up a single table. Of course, there are other methods of moving a single table’s data in and out, including SQL Server Integration Services and even the older Data Transformation Services, or simply by using the SQLCMD or PowerShell utilities to run a query and just save the output to a file. In fact, these days I’m using a PowerShell script to build INSERT statements from that query. That could also be modified to create the table structure (or modify one if needed) quite easily.
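
    As a rough sketch of the Filegroup approach described above (all names and paths hypothetical; not from the original post):

        -- Give the table its own filegroup so it can be backed up independently
        ALTER DATABASE MyDb ADD FILEGROUP SingleTableFG;
        ALTER DATABASE MyDb ADD FILE
            (NAME = 'SingleTableData', FILENAME = 'C:\Data\SingleTableData.ndf')
            TO FILEGROUP SingleTableFG;

        CREATE TABLE dbo.MyTable (Id int PRIMARY KEY, Payload varchar(100))
            ON SingleTableFG;

        -- Back up just that filegroup (and therefore just that table)
        BACKUP DATABASE MyDb
            FILEGROUP = 'SingleTableFG'
            TO DISK = 'C:\Backups\MyTable_FG.bak';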


  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Ten days have already passed since SQL Bits X in London. I really enjoyed it! Events like that are great not only for the content but also for meeting friends whom, due to distance, it is not possible to meet every day. Friends from PASS, SQL CAT, Microsoft, MVP and so on, all in one place, drinking beers and whisky and having fun. A perfect mixture for a great learning and sharing experience! I’ve also enjoyed a lot delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees….but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I’ve also already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I’ve left this option as a little exercise for you :) )
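
    For anyone curious about the LEAD option mentioned at the end, a minimal sketch of the idea (table and column names are hypothetical, not from the demo):

        -- Derive each snapshot row's validity interval from the next snapshot date
        SELECT  ProductKey,
                SnapshotDate AS ValidFrom,
                LEAD(SnapshotDate, 1, '99991231')
                    OVER (PARTITION BY ProductKey ORDER BY SnapshotDate) AS ValidTo,
                Quantity
        FROM    dbo.FactDailySnapshot;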


  • Quote of the day: On backups

    - by BuckWoody
    I saw this one yesterday, and it was a slam-dunk for this morning: "Those who do not archive the past are condemned to retype it." - Garfinkel and Spafford


  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.
    Why Columnstore?
    As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc., SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.
    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)
    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.
    Ad Hoc Queries
    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
    The Details
    Classic vs. columnstore before-&-after metrics are impressive:

        Scenario        Conventional Structures            Columnstore      Δ
        SSRS via SSAS   10 - 12 seconds                    1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)  1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn’t expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here’s the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren’t visible. [Charts not reproduced in this excerpt.]
    The Wins
    - Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    - Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    - PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.
    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”
    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, canned queries included, while time-to-results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
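
    For reference, the kind of index behind these numbers is a one-statement change (hypothetical table and column names; note that in SQL Server 2012 a nonclustered columnstore index makes the table read-only, which fits the nightly partition-switch refresh described above):

        -- Nonclustered columnstore index covering the ad hoc query columns
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSecurity
        ON dbo.FactSecurity (EventDate, CustomerKey, SiteKey, EventType, Amount);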


  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that the budget table and the data you compare it with have different granularities. There are two ways to handle that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it’s not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to solve the problem of not having attribute hierarchies that help you propagate the budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to “debug”, also for more complex allocation formulas. You just have to be careful in writing the DAX formula, but probably the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.


  • Recap of SQLSat #65

    - by RickHeiges
    Since the MVP Summit was this past week, I decided to head out to the West Coast a little earlier and attended SQLSat #65. I did not submit to speak at the conference, but I did help out some by introducing speakers in one of the rooms and in a few other places where I could. I started out in a session by Scott Klein about SQL Azure. BTW, Microsoft now has a 30-day offer for SQL Azure where you do not need to provide credit card info. I then sat in for a while on Alan Hirt's session on building a cluster...(read more)


  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
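
    For completeness, the follow-up Post-Deployment step might look like this (a trivial sketch reusing the [dbo].[T1]/[C1] names from the excerpt; the real value is of course application-specific):

        -- Replace the arbitrary smart-default value (0) with a meaningful one
        UPDATE [dbo].[T1]
        SET [C1] = 42          -- whatever value is appropriate for existing rows
        WHERE [C1] = 0;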


  • Highlight Word add-in for Visual Studio 2010 [SSDT]

    - by jamiet
    I’ve just been alerted by my colleague Kyle Harvie to a Visual Studio 2010 add-in that should prove very useful if you are an SSDT user. It’s simply called Highlight all occurrences of selected word and does exactly what it says on the tin: you highlight a word and it shows all other occurrences of that word in your script. There’s a limitation for .sql files (which I have reported) where the highlighting doesn’t work if the word is wrapped in square brackets, but what the heck, it’s free and it takes about ten seconds to install….install it already! @Jamiet


  • T-SQL Tuesday: Personality Clashes, Style Collisions, and Differences of Opinion

    - by andyleonard
    This post is the twenty-sixth part of a ramble-rant about the software business. The current posts in this series are: Goodwill, Negative and Positive Visions, Quests, Missions Right, Wrong, and Style Follow Me Balance, Part 1 Balance, Part 2 Definition of a Great Team The 15-Minute Meeting Metaproblems: Drama The Right Question Software is Organic, Part 1 Metaproblem: Terror I Don't Work On My Car A Turning Point Human Doings Everything Changes Getting It Right The First Time One-Time Boosts Institutionalized!...(read more)


  • OT: NCAA Pick'em Returns...

    - by RickHeiges
    Every year in March, the Men's College Basketball Championship Tourney begins. For the past few years, I've put together a "League". This year is no different. The prize... Bragging Rights - that's it - nothing else.... Follow the link below to sign up! Picks must be made by Thursday before the games begin. http://tournament.fantasysports.yahoo.com/t1/register/joinprivategroup_assign_team?GID=65521&P=sqlblog&P=sqlblog ...(read more)


  • BI Beginner: Excel 2013 Power View Maps

    - by John Paul Cook
    If you know how to use Excel, you can be productive in minutes with the new features of Excel 2013. Don’t be intimidated. Follow these simple steps and produce something snazzy! The Excel file used in this example comes from the following SQL Server query, which was run against the AdventureWorks2012 database: SELECT Purchasing.Vendor.Name, Person.Address.City, Person.StateProvince.Name AS State FROM Purchasing.Vendor INNER JOIN Person.BusinessEntityAddress ON Purchasing.Vendor...(read more)
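
    The query in the excerpt is cut off; based on the standard AdventureWorks2012 schema, the full join chain plausibly looks like the following (a reconstruction, not the post's verbatim code):

        -- Vendors with their city and state, via the business entity address tables
        SELECT  Purchasing.Vendor.Name,
                Person.Address.City,
                Person.StateProvince.Name AS State
        FROM    Purchasing.Vendor
        INNER JOIN Person.BusinessEntityAddress
            ON Purchasing.Vendor.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
        INNER JOIN Person.Address
            ON Person.BusinessEntityAddress.AddressID = Person.Address.AddressID
        INNER JOIN Person.StateProvince
            ON Person.Address.StateProvinceID = Person.StateProvince.StateProvinceID;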

