Search Results

Search found 489 results on 20 pages for 'b seven'.

Page 5/20 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Back from the OData Roadshow

    - by Fabrice Marguerie
    I'm just back from the OData Roadshow with Douglas Purdy and Jonathan Carter. Paris was the last location of seven cities around the world. If there was something you wanted to know about OData, that was the place to be! These guys gave a great tour around OData. I learned things I didn't know about OData, and I was able to give a demo of Sesame to the audience. More ideas and use cases are popping up!

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe it. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source: getting the data out of the database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere.

    You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, it's about those first 10,000 rows that populate the first buffer, ready to be passed down through the rest of the transformations on their way to the Destination.

    Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the different Weight values out of the Product table, and the plan does this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan?

    The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled. Picture it like this... the data flows from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual.

    The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this: OPTION (FAST 10000). This hint tells the Query Optimizer to choose a plan that is optimised for returning the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.
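    The query and plan screenshots from the original post aren't reproduced in this excerpt. As a rough sketch (the Production schema and the exact query text are assumptions based on the standard AdventureWorks sample database, not something shown above), the query being described would look something like this, first without and then with the hint:

        -- Sketch only: assumes AdventureWorks' Production.Product table and its Weight column.
        -- Without a hint, the optimiser typically chooses a Distinct Sort,
        -- which blocks until every row has been read.
        SELECT DISTINCT Weight
        FROM Production.Product;

        -- The same query with the hint just described: the optimiser now favours
        -- a plan that starts returning rows quickly, sized around the first
        -- 10,000 rows (one default SSIS buffer).
        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 10000);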
    First let's consider a simple example, then we'll look at a larger one. Consider our Weights. We don't have 10,000 of them, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator consumes 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn't sorted, of course. It's in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it's seen so far, but by handling the data one row at a time, it can push rows through quicker. Overall, it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator.

    Now I'm going to show you an example on a much larger data set. This query was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be sorted, to support further SSIS operations downstream. First, without the hint... and now with OPTION (FAST 10000): a very different plan, I'm sure you'll agree.

    In case you're curious, those arrows in the top plan are 780,000 rows in size. In the second, they're estimated at 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one, and with the amount of data being considered, these numbers were in minutes. Look at the second one – it's doing Nested Loops, across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn't care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting for the Debug information to start appearing. You look for the numbers on the data flow, and watch for operators going Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. Adding this hint felt like waving a magic wand across the query to make it run several times faster. That wasn't the case at all – but it felt like it to SSIS.
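    For reference, the overall pattern the post argues for is simply this hint appended to whatever query feeds the SSIS Source. A minimal sketch, with invented table and column names (the larger query from the original post isn't reproduced in this excerpt):

        -- Hypothetical source query for an SSIS OLE DB Source component.
        -- dbo.SalesTransactions and its columns are placeholders for illustration,
        -- not objects from the original post or from AdventureWorks.
        SELECT st.TransactionID,
               st.TransactionDate,
               st.Amount
        FROM dbo.SalesTransactions AS st
        ORDER BY st.TransactionDate   -- downstream SSIS transformations need sorted input
        OPTION (FAST 10000);          -- optimise for filling the first SSIS buffer quickly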

    Read the article

  • High Availability blog posts on TechNet UK

    - by Testas
    Back in April I started a blog series on SQL Server High Availability BI. These are currently hosted on TechNet UK:
    Part I: http://blogs.technet.com/b/uktechnet/archive/2012/05/03/guest-post-seven-worlds-will-collide-high-availability-bi-is-not-such-a-distant-sun.aspx
    Part II: http://blogs.technet.com/b/uktechnet/archive/2012/05/30/guest-post-providing-the-foundation-for-a-high-availability-bi-infrastructure.aspx
    Part III: http://blogs.technet.com/b/uktechnet/archive/2012/07/16/guest-post-part-3-highly-available-bi-me-myself-and-i.aspx
    The final part will be released via TechNet in a couple of weeks.
    Thanks, Chris

    Read the article

  • Happy Birthday, SQLPeople!

    - by andyleonard
    One year ago today, I began sending out batches of SQLPeople interview emails to friends in the SQL Server Community. Since then, Brian Moran ( Blog | @briancmoran ) and Matt Velic ( Blog | @mvelic | SQLPeople ) have joined the effort, we have published dozens of interviews, and there have been two events! You can join in the fun. If you haven’t already, visit the interview page and answer the seven questions. You can also join us on LinkedIn and Facebook . And you can follow us on Twitter ( @SQLPeople...(read more)

    Read the article

  • SQLAuthority News SQL Server 2008 R2 Hosted Trial

    This is a bit of old news for me, but it will be new for many of you. SQLPASS, Dell, Microsoft, and MaximumASP have come together and built a hosted environment that is free for all of us to use and experiment with. Register now to try out up to seven labs: SQL Server 2008 R2 [...]

    Read the article

  • SQL Rally Relational Database Design Pre-Con Preview

    - by drsql
    On May 9, 2012, I will be presenting a pre-con session at the SQL Rally in Dallas, TX on relational database design. The fact is, database design is a topic that demands more than a simple one-hour session to really do it right. So in my Relational Database Design Workshop, we will have seven times the amount of time of a typical session, giving us time to cover our topics in a bit more detail, look at a lot more designs/code, and even get some time to do some design as a group. Our topics will...(read more)

    Read the article

  • Guaranteed Google Indexing

    Do you believe that you can get any site listed in Google in under seven days? Are you frustrated at hearing this while you're still waiting months to get your site indexed in Google? If this sounds like you, then maybe I can help.

    Read the article

  • How Does PowerPoint Play In A Great Presentation?

    "Four score and seven years ago..." Abraham Lincoln's famous Gettysburg Address. "Ask not what your country can do for you, but what you can do for your country." John F. Kennedy's famous address to t... [Author: Anne Warfield - Computers and Internet - June 10, 2010]

    Read the article

  • Tech Mahindra Applications Consolidation Project

    - by Javier Puerta
    “With Oracle’s end-to-end hardware and software solutions, we seamlessly migrated 22 applications from the legacy platform to the new platform in just seven weeks. Thanks to Oracle, we gained an integrated view of enterprisewide data across 49 locations and increased storage capacity by 25%, enabling us to improve service delivery and support our revenue-growth target.” - Ved Prakash Nirbhya, CIO, Tech Mahindra Limited Read full story details here

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000, A Survey of Software Reuse Repositories:

    Defense Software Repository System (DSRS)

    The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects...

    ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts...

    The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ...

    [14] DSRS - Defense Technology for Adaptable, Reliable Systems. URL: http://ssed1.ims.disa.mil/srp/dsrspage.html
    [15] STARS - Software Technology for Adaptable, Reliable Systems. URL: http://www.stars.ballston.paramax.com/index.html
    [16] D. E. Perry and S. S. Popovitch, "Inquire: Predicate-based use and reuse," in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993.

    Is DSRS dead, and were there any post-mortem reports on it? Are there other, more recent US government initiatives or reports on software reuse?

    Read the article

  • Common database deployment blockers and Continuous Delivery headaches

    Deployability is now a first class concern for databases, so why isn’t it as easy as it should be? Matthew Skelton explores seven of the most common challenges which will bring your database deployments to their knees.

    Read the article

  • Microsoft Releases June Patch Targeting 34 Flaws

    Microsoft today released 10 fixes in its June security update, with three deemed "critical" and seven considered "important" to patch.

    Read the article

  • DevWeek & SQL Server DevCon 2011

    The 14th annual DevWeek conference takes place from 14-18 March 2011, at the Barbican Centre in central London, and once again incorporates two dedicated tracks on SQL Server, alongside seven concurrent tracks aimed at software developers.

    Read the article

  • DAX Statistical Functions

    Following on from his first four articles on using Data Analysis Expressions (DAX) with tabular databases, Robert Sheldon dives into some of the DAX statistical functions available, demonstrating which are the most useful and giving examples of how they work.

    Read the article

  • PowerShell the SQL Server Way

    Although Windows PowerShell has been available to IT professionals going on seven years, there are still many IT pros who are just now deciding to see what the fuss is all about. Depending on your job, you might find PowerShell an invaluable tool. Microsoft's plan is that PowerShell will be the management tool for all of its servers and platforms. For most IT pros, it's not a matter of if you'll be using PowerShell, only a matter of when.

    Read the article

  • How to Get Indexed by Google - Fast

    Do you want to get indexed in Google fast? If you are new to the internet, getting your first website listed on Google can be a real pain. In fact, so many people give up on ever getting their website listed on Google, as it can take six months or more to get listed if you submit your site directly to Google. But what if you could find out how to get listed in just seven days?

    Read the article

  • How to quickly toggle smart quotes in Word 2010?

    - by KnowItAllWannabe
    I'm working on a long technical document that contains numerous displays of computer code. In running text, I want my quotation marks to be curly, which means that Word's "smart quotes" autoformatting-as-I-type feature is one I want on. But in code displays, curly quotes are incorrect, so in these cases, I want smart-quotes-as-I-type disabled. Is there a fast way to toggle this setting? Or is there a way I can tie it to the paragraph style I'm in? (I use a distinct style for code displays.) Currently, to toggle the setting, I have to click File > Options > Proofing > AutoCorrect Options... > "Straight quotes" with "smart quotes" > OK > OK, which is seven mouse clicks. Toggling it back is another seven mouse clicks. Isn't there a faster way? A keyboard shortcut to do the toggling or a toolbar button that would toggle it with a single click would be great. Having the setting depend on the paragraph style I was in would be even better.

    Read the article
